LinearBotLib
A library based on aiogram that simplifies developing Telegram bots.
Requirements
Library: aiogram
Goals
Allow the user to program bot logic in the usual, linear way, without fighting the asynchronous Telegram <-> API conversation model.
```python
class Logic(ILogic):
    async def main(self, chat: BotChat, params: str) -> None:
        chat.user().name = chat.last.from_user.full_name
        name = chat.user().name
        await logic_CALC(chat, name)
        if params:
            pstr = f'\nYou started me with parameters *"{escape_md(params)}"*, but I dont support any 😷\n\n'
        else:
            pstr = ''
        titleMsg = await chat.reply(
            f'Hi, *{name}*.\n'
            f'{pstr}'
            f'You are at examples section',
            media='data/Icon-Hi.png'
        )
        while True:
            rc = await chat.menu(
                'Choose test group to go',
                [[('➡ Menu tests...', 'menu')],
                 [('❓ Some asking', 'ask'), ('✌ Funny one :)', 'wait'), ('🍱', 'calc')],
                 [('❌ Close', 0), ('❌ Cancel', 0), ('❎ Abandon!', 0), ('➰ F* off!!', 0)],
                ],
                remove_unused=True
            )
            if not rc.known:
                break
            if rc.data == 'menu':
                await logic_MENU(chat, name)
            elif rc.data == 'ask':
                await logic_ASK(chat, name)
            elif rc.data == 'wait':
                await logic_WAIT(chat, name)
            elif rc.data == 'calc':
                await logic_CALC(chat, name)
            else:
                break
        await titleMsg.delete()
        await chat.say(f'Calm down mate!\nIts all done already.\nSee you 👋', wait_delay=1)
        await chat.say(f'...btw, if you wanna reply you can use "/start" command.', wait_delay=2)
        await chat.say(f'Just saying...')
```
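The example above awaits user interaction (`chat.menu`, `chat.reply`) in a plain linear flow instead of registering callbacks. A minimal sketch of the underlying idea, using hypothetical names rather than LinearBotLib's actual internals: a message handler resolves a pending future so that bot logic can simply `await` the next message.

```python
import asyncio

class LinearChat:
    """Sketch only (not LinearBotLib's real API): lets linear bot logic
    await the next incoming message instead of registering callbacks."""
    def __init__(self):
        self._pending = None

    async def ask(self, prompt: str) -> str:
        # A real bot would send `prompt` to Telegram here; this sketch
        # just blocks until feed() delivers the next message.
        self._pending = asyncio.get_running_loop().create_future()
        return await self._pending

    def feed(self, text: str) -> None:
        # Called by the framework's incoming-message handler
        # (in LinearBotLib's case, ultimately an aiogram handler).
        if self._pending and not self._pending.done():
            self._pending.set_result(text)

async def demo():
    chat = LinearChat()

    async def logic():
        # Linear, readable conversation logic:
        name = await chat.ask('What is your name?')
        return f'Hi, {name}!'

    task = asyncio.create_task(logic())
    await asyncio.sleep(0)   # let logic() start and block on ask()
    chat.feed('Alice')       # simulate an incoming Telegram message
    return await task

print(asyncio.run(demo()))  # → Hi, Alice!
```

The real library builds on aiogram and handles timeouts, media, and inline keyboards on top of this kind of await-the-reply pattern.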
Usage
- Rename `token.api.template` to `token.api`
- Edit `token.api` and put your API key as `key=value`
- Run `main.py`
- Connect to your bot and try all the examples for yourself
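Assuming the `key=value` format described above, a `token.api` file would look something like this (placeholder value, not a real token):

```
key=<your-telegram-bot-api-token>
```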
License
Fully free to use, modify and whatever
No responsibility tho
