Download and install Termux and Termux:Widget, then add the Termux widget to your home screen:
https://f-droid.org/en/packages/com.termux/
https://f-droid.org/en/packages/com.termux.widget/
Open Termux and install proot-distro. If the installation fails, run termux-change-repo and keep trying mirrors until it succeeds, then install Debian through proot-distro. Copy everything below and run it:
termux-setup-storage
termux-wake-lock
termux-change-repo
pkg install tur-repo
pkg update && pkg upgrade
pkg install git
git clone https://github.com/termux/proot-distro
cd proot-distro
bash install.sh
cd
rm -rf proot-distro
pkg install proot
rm -rf .shortcuts
mkdir .shortcuts
disname='ai311'
user='brian'
echo "proot-distro login $disname" > .shortcuts/ai311.sh
echo "proot-distro login $disname --user $user" > .shortcuts/ai311_brian.sh
echo "proot-distro login $disname --user $user -- bash -- ai.sh" > .shortcuts/ai311_textgen.sh
echo "proot-distro login $disname --user $user -- bash -- vscode.sh" > .shortcuts/ai311_vscode.sh
proot-distro install debian --override-alias $disname
proot-distro login $disname
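If the login succeeded, an optional quick check that you are really inside the guest (the PRETTY_NAME line should say Debian):
cat /etc/os-release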
Now you are in Debian as root. Create a user, set a password, and do some initial setup. Don't forget your password. Copy everything below and run it:
user='brian'
apt update && apt upgrade
apt install sudo locales
echo "LANG=zh_CN.UTF-8" >> /etc/locale.conf
sudo locale-gen
adduser $user
gpasswd -a $user sudo
echo "$user ALL=(ALL:ALL) ALL" >> /etc/sudoers
login $user
user='brian'
echo 'source /etc/profile.d/termux-proot.sh' >> .bashrc
sudo locale-gen zh_CN.UTF-8
sudo apt update && sudo apt upgrade
sudo apt install python3-full git curl vim wget
sudo apt install python-is-python3
sudo apt install python3-pip
wget -O- https://aka.ms/install-vscode-server/setup.sh | sudo sh
echo 'code-server serve-local' > vscode.sh
python -m venv textgen
source textgen/bin/activate
git clone https://github.com/oobabooga/text-generation-webui
cd text-generation-webui
pip install -r requirements.txt
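# Assumption: if you want to run GGML models through llama-cpp-python (the library
# mentioned at the end of this post) and the requirements did not already pull it in,
# it can be installed into the same venv:
pip install llama-cpp-python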
python download-model.py gpt2
python download-model.py arun-shankar/ChatGPT-Neo
deactivate
cd
echo "source textgen/bin/activate && cd text-generation-webui && python server.py --cpu --cpu-memory 4 --extensions api --notebook --auto-devices --model-menu --verbose" > ai.sh
bash ai.sh
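Because ai.sh enables the api extension, other apps on the phone can query the model over HTTP once the server is up. A minimal sketch from a second Termux session, assuming the blocking endpoint the extension exposed at the time (port 5000, /api/v1/generate; check the extension's documentation for your version):
curl -s http://127.0.0.1:5000/api/v1/generate \
  -H 'Content-Type: application/json' \
  -d '{"prompt": "Hello, my name is", "max_new_tokens": 50}'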
Backup and restore are important; they save a lot of time.
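Both commands run in Termux itself, not inside Debian, and the output directory has to exist before the backup; assuming the /sdcard/bak path used below:
mkdir -p /sdcard/bak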
proot-distro backup ai311 --output /sdcard/bak/ai311.tar.gz
proot-distro restore /sdcard/bak/ai311.tar.gz
Packages for machine learning; install these if something is not working:
sudo apt install python3-opencv
sudo apt install python3-torch python3-torchaudio python3-torchtext python3-torchvision
sudo apt install python3-keras-applications python3-keras-preprocessing
sudo apt install python3-psutil python3-pandas python3-numpy python3-fastapi python3-quart python3-pydantic python3-numba python3-ipykernel python3-ipyparallel python3-ipython
sudo apt install python3-pyopencl python3-sentencepiece python3-av python3-scipy python3-sklearn
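These apt packages go into the system Python, but the textgen venv created earlier was made without access to system packages, so it will not see them. A minimal sketch, assuming you want the existing venv to fall back to the apt-installed modules (re-running venv with --system-site-packages over the same directory keeps its contents and only flips that setting):
python -m venv --system-site-packages textgen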
For more information, visit the sites below:
https://github.com/oobabooga/text-generation-webui
https://huggingface.co/
Using the Termux API in Debian: install the Termux:API app before you use this.
https://f-droid.org/en/packages/com.termux.api/
The installation method described above already includes the configuration needed to call the Termux API. The code below opens a link with webbrowser when run in Pydroid 3, and with the Termux API's termux-open when run in Termux:
from fastapi import FastAPI

# The snippet needs an ASGI app named `app`; a minimal FastAPI app stands in here
# (python3-fastapi is in the package list above).
app = FastAPI()

if __name__ == "__main__":
    import os
    import platform
    import webbrowser

    host = "localhost"
    port = 8000
    uri = f"http://{host}:{port}/"
    # Pydroid 3 is detected by its bundled Python version (3.9.7 at the time);
    # anything else is assumed to be Termux, where termux-open opens the link.
    pv = platform.python_version()
    if pv.startswith("3.9.7"):
        webbrowser.open(uri)
    else:
        os.system(f"termux-open {uri}")
    import uvicorn
    uvicorn.run(app, host=host, port=port)
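As noted above, this setup already lets Debian reach the Termux API; the termux-proot.sh line added to .bashrc is what makes the termux-* commands visible inside the guest. Assuming the Termux:API app from F-Droid is installed, a quick way to test the bridge:
# In Termux itself (outside Debian), install the API command-line tools once:
pkg install termux-api
# Then, from a Debian shell, this should pop up a toast notification:
termux-toast "hello from Debian"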
While rewriting an old post, I discovered something new that also works in the Debian environment I use. It's really impressive: I was able to run Falcon-7B, StableLM-7B, Vicuna-13B, and Wizard-13B successfully on my Android phone, and the speed was acceptable. These models are already powerful enough; running even 1B models on a phone used to be very slow, but it is now within an acceptable range, and using 30B models or larger on a PC should be easy. With models this capable, many interesting ideas can be realized right away. 😁 Don't just copy and paste large sections; you must build it yourself in order to truly understand. This time I also started to dig into it and wrote a simple command-line interface myself as a foundation for testing and improvement. https://youtu.be/eH_QmosS_Wo
Experience using the wizard-vicuna-13b-superhot-8k model on my Android phone: I also successfully loaded the vicuna-33B model in the Windows Subsystem for Linux using llama-cpp-python, but without GPU acceleration it is very slow, 10 to 20 times slower than on an Android phone.