add infermatic-text and whois plugins for AI text generation and WHOIS lookups
+44
-1
@@ -77,6 +77,23 @@ async def handle_command(room, message, bot, prefix, config):
<p>Fetches the current Bitcoin price in USD from the bitcointicker.co API. Shows the real-time BTC/USD price with proper formatting, and handles API timeouts and data-parsing errors.</p>

</details>
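A price fetch of this shape can be sketched as follows; the endpoint URL is passed in by the caller, and the `last` response field is an assumption rather than the plugin's confirmed schema:

```python
import json
import urllib.request


def format_btc_price(raw):
    """Format a raw price string/number as a chat-ready line."""
    return "BTC/USD: ${:,.2f}".format(float(raw))


def fetch_btc_price(url, timeout=10):
    """Fetch and format the current price; return an error string on failure."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            data = json.load(resp)
        # "last" as the price field is an assumption about the API response.
        return format_btc_price(data["last"])
    except (OSError, KeyError, ValueError) as exc:
        # Covers network errors/timeouts, a missing field, and bad JSON/number.
        return f"BTC price lookup failed: {exc}"
```

Catching `OSError` also covers `urllib.error.URLError` and socket timeouts, which matches the timeout handling the description mentions.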

<details><summary>🌐 <strong>!whois &lt;domain/ip&gt;</strong></summary>

<p>Performs comprehensive WHOIS lookups for domains and IP addresses. Retrieves registrar information, registration dates, name servers, and contact details from WHOIS databases.</p>

<p><strong>Usage:</strong></p>

<ul>

<li><code>!whois &lt;domain&gt;</code> - Query domain registration information</li>

<li><code>!whois &lt;ip&gt;</code> - Query IP address allocation details</li>

</ul>

<p><strong>Examples:</strong></p>

<ul>

<li><code>!whois example.com</code></li>

<li><code>!whois google.com</code></li>

<li><code>!whois 8.8.8.8</code></li>

<li><code>!whois 1.1.1.1</code></li>

</ul>

<p><strong>Output includes:</strong> Domain/IP information, registrar, WHOIS server, creation/expiration dates, name servers, and contact details.</p>

</details>
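At the protocol level, a WHOIS lookup like this is a plain-text TCP exchange on port 43 (RFC 3912). A minimal sketch, assuming the plugin talks to a WHOIS server directly rather than through a library (`whois_query` and `parse_whois_fields` are hypothetical names):

```python
import socket


def whois_query(target, server="whois.iana.org", timeout=10):
    """Send a WHOIS query per RFC 3912: the query text plus CRLF on TCP port 43."""
    with socket.create_connection((server, 43), timeout=timeout) as sock:
        sock.sendall((target + "\r\n").encode())
        chunks = []
        while True:
            data = sock.recv(4096)
            if not data:
                break
            chunks.append(data)
    return b"".join(chunks).decode(errors="replace")


def parse_whois_fields(text):
    """Collect 'Key: Value' lines; skip blanks and %/# comment lines."""
    fields = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line[0] in "%#" or ":" not in line:
            continue
        key, _, value = line.partition(":")
        if value.strip():
            fields.setdefault(key.strip(), value.strip())
    return fields
```

Real responses vary a lot between registries, so the parser keeps only the first occurrence of each key and ignores anything that is not in `Key: Value` shape.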

<details><summary>🔍 <strong>!shodan [command] [query]</strong></summary>

<p>Shodan.io integration for security reconnaissance and threat intelligence.</p>

<p><strong>Commands:</strong></p>
@@ -290,7 +307,33 @@ Search Exploit-DB for security vulnerabilities and exploits. Returns detailed in
</details>
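For the `!shodan` command above, a search against Shodan's REST API can be sketched like this (the `/shodan/host/search` endpoint and the `matches`/`ip_str`/`port`/`org` fields follow Shodan's public API; the one-line summary format is hypothetical):

```python
import json
import urllib.parse
import urllib.request

SHODAN_SEARCH = "https://api.shodan.io/shodan/host/search"


def format_match(match):
    """Render one Shodan match as a one-line summary (format is hypothetical)."""
    return "{ip}:{port} ({org})".format(
        ip=match.get("ip_str", "?"),
        port=match.get("port", "?"),
        org=match.get("org", "unknown"),
    )


def shodan_search(api_key, query, timeout=15):
    """Query Shodan's search endpoint and return formatted result lines."""
    url = SHODAN_SEARCH + "?" + urllib.parse.urlencode(
        {"key": api_key, "query": query}
    )
    with urllib.request.urlopen(url, timeout=timeout) as resp:
        data = json.load(resp)
    return [format_match(m) for m in data.get("matches", [])]
```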

<details><summary>📄 <strong>!text [prompt]</strong></summary>

<p>Generates text using the Infermatic AI API. Supports multiple models, configurable parameters, and model listing. Uses a queuing system for sequential processing.</p>

<p><strong>Usage:</strong></p>

<ul>

<li><code>!text &lt;prompt&gt;</code> - Generate text using the default model</li>

<li><code>!text --list-models</code> - List all available models from Infermatic AI</li>

<li><code>!text --use-model &lt;model_name&gt; &lt;prompt&gt;</code> - Use a specific model instead of the default</li>

<li><code>!text --temperature &lt;value&gt; &lt;prompt&gt;</code> - Set temperature (0.0-1.0, default: 0.9)</li>

<li><code>!text --max-tokens &lt;value&gt; &lt;prompt&gt;</code> - Set maximum tokens to generate (default: 2048)</li>

</ul>
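A hand-rolled parser for the flags listed above might look like this; it is a sketch with the documented defaults, and `parse_text_args` is a hypothetical name, not the plugin's actual code:

```python
def parse_text_args(args):
    """Split a !text argument list into options and the remaining prompt."""
    opts = {
        "temperature": 0.9,   # documented default
        "max_tokens": 2048,   # documented default
        "model": None,        # None means: use the configured default model
        "list_models": False,
    }
    rest = []
    i = 0
    while i < len(args):
        arg = args[i]
        if arg == "--temperature":
            opts["temperature"] = float(args[i + 1])
            i += 2
        elif arg == "--max-tokens":
            opts["max_tokens"] = int(args[i + 1])
            i += 2
        elif arg == "--use-model":
            opts["model"] = args[i + 1]
            i += 2
        elif arg == "--list-models":
            opts["list_models"] = True
            i += 1
        else:
            rest.append(arg)
            i += 1
    return opts, " ".join(rest)
```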

<p><strong>Configuration:</strong></p>

<ul>

<li>Requires the <code>INFERMATIC_API</code> environment variable set to your API key</li>

<li>Optional <code>INFERMATIC_MODEL</code> environment variable sets the default model (falls back to Sao10K-L3.1-70B-Hanami-x1)</li>

</ul>

<p><strong>Model Management:</strong></p>

<ul>

<li>Use <code>!text --list-models</code> to see all available models</li>

<li>Models support different capabilities and context lengths</li>

<li>Costs and token limits vary by model</li>

</ul>

<p><strong>Examples:</strong></p>

<ul>

<li><code>!text write a python function to calculate fibonacci</code></li>

<li><code>!text --list-models</code></li>

<li><code>!text --use-model llama-v3-8b-instruct explain quantum computing</code></li>

<li><code>!text --temperature 0.7 --max-tokens 500 write a haiku about AI</code></li>

</ul>

</details>
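The sequential queuing the description mentions maps naturally onto an `asyncio.Queue` drained by a single worker task, so generations never overlap. This is a sketch of the pattern, not the plugin's actual code; `text_worker`, `submit`, and the `generate` callback are hypothetical names:

```python
import asyncio


async def text_worker(queue, generate):
    """Process queued prompts one at a time so generations run sequentially."""
    while True:
        prompt, reply = await queue.get()
        try:
            reply.set_result(await generate(prompt))
        except Exception as exc:
            reply.set_exception(exc)
        finally:
            queue.task_done()


async def submit(queue, prompt):
    """Enqueue a prompt and wait for its generated text."""
    reply = asyncio.get_running_loop().create_future()
    await queue.put((prompt, reply))
    return await reply
```

Each `!text` invocation would call `submit()`; the single worker guarantees first-come-first-served ordering regardless of how many rooms issue commands at once.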

<details><summary>📰 <strong>!xkcd</strong></summary>