http://twgwkbsl36sgd3bcpsnvxaznsfygcp3bt7lk4if3ly2dy5ey7xpx4fqd.onion/p/feeding-nmap-into-largelanguagemodels-for-usable-output
I've been experimenting with different tools that I can use together with local large language models (LLMs) to generate useful output. I made a Python script that connects to my local Ollama server through its API:

    from ollama import Client
    import sys
    import subprocess

    client = Client(host='http://127.0.0.1:11434')
    query = sys.argv[1]

    def analyse(query):
        print(" Processing Command ")
        response = client.chat(model='mistral', messages=[{
            'role': 'user',
            'content': '''...
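The post's own prompt text is truncated above, but the overall shape of the idea can be sketched as follows: run the scanner with `subprocess`, capture its stdout, wrap it in an instruction for the model, and only then hand it to the Ollama client. The helper names (`run_and_capture`, `build_prompt`) and the prompt wording are my own illustrations, not from the original script.

```python
# Hypothetical sketch of the nmap -> LLM pipeline; helper names and
# the prompt wording are illustrative assumptions, not from the post.
import subprocess

def run_and_capture(argv):
    """Run a command (e.g. ['nmap', '-sV', 'host']) and return its stdout."""
    result = subprocess.run(argv, capture_output=True, text=True, check=True)
    return result.stdout

def build_prompt(scan_output):
    """Wrap raw scan output in an instruction for the model."""
    return ("Summarise the following nmap scan results, listing open "
            "ports and the services likely running on them:\n\n" + scan_output)

if __name__ == "__main__":
    # This part needs a running Ollama server with the 'mistral' model pulled.
    from ollama import Client
    client = Client(host="http://127.0.0.1:11434")
    output = run_and_capture(["nmap", "-sV", "127.0.0.1"])
    reply = client.chat(model="mistral",
                        messages=[{"role": "user",
                                   "content": build_prompt(output)}])
    print(reply["message"]["content"])
```

Keeping the capture and prompt-building steps in their own functions makes them easy to test without a model or a scanner attached; the network-dependent call only happens under the `__main__` guard.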