Problem installing python package from a wheel #bitbake #python
David Babich
Hi, I'm trying to install a prebuilt PyTorch wheel from https://forums.developer.nvidia.com/t/pytorch-for-jetson-version-1-10-now-available/72048 with this recipe:

```
DESCRIPTION = "NVIDIA's Python Torch"
HOMEPAGE = "https://nvidia.com"
LICENSE = "BSD-3-Clause"
LIC_FILES_CHKSUM = "file://../LICENSE;md5=91a5dfdaccf53b27488cb3a639e986d5"

inherit setuptools3

SRC_URI = "\
    file://torch-1.10.0-cp36-cp36m-linux_aarch64.whl \
    file://LICENSE \
"

COMPATIBLE_MACHINE = "jetson-tx2-devkit-tx2i"
PACKAGE_ARCH = "${MACHINE_ARCH}"

S = "${WORKDIR}/${PN}-${PV}"

do_configure() {
    :
}

do_compile() {
    :
}

do_install() {
    pip3 install ${WORKDIR}/torch-1.10.0-cp36-cp36m-linux_aarch64.whl
}

DEPENDS = "python3-pip-native"
```

During the build I get:

| WARNING: The directory '/home/ddbabich/.cache/pip' or its parent directory is not owned or is not writable by the current user. The cache has been disabled. Check the permissions and owner of that directory. If executing pip with sudo, you should use sudo's -H flag.
| ERROR: torch-1.10.0-cp36-cp36m-linux_aarch64.whl is not a supported wheel on this platform.

It seems like I'm missing something with the host vs. the target settings? But I really don't have any ideas. Any help is appreciated.

Thanks
-David
David Babich
I made it a little further by adding --no-cache-dir to the pip3 install command. That got rid of the warning about not being able to access .cache/pip. However, I still have the error: | ERROR: torch-1.10.0-cp36-cp36m-linux_ |
On Mon, Nov 22, 2021 at 2:54 PM David Babich <ddbabich@...> wrote:
Installing third-party wheels is not something we are likely to ever support in Yocto Project/OpenEmbedded recipes. Are you trying to install using pip3 on target?

Note that many factors make it tricky for Python wheels with binary content (C or Rust extensions). The python3 version must match, as must the libraries the wheel requires. The wheel you listed was built for Python 3.6 (cp36) and ARMv8 (aarch64). The error is what you would see if you were trying to install an aarch64 wheel on an x86-64 target, but other reasons could lead to that error. We don't know what version of glibc, gcc, etc. was used and whether those are going to be compatible.

Also, when asking questions, please tell us which release of Yocto Project you are using, what MACHINE you are building for, which layers you are using (and at what release), and other information to help us help you.

Cheers,
--Tim
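To illustrate the naming convention Tim describes, here is a small sketch (plain Python, no third-party libraries, function name is my own) that splits a wheel filename into the compatibility tags defined by PEP 427. pip refuses a wheel when none of its tags match the installing interpreter, which is exactly the "not a supported wheel on this platform" error above.

```python
# Sketch: split a wheel filename into its PEP 427 components.
# The format is: {name}-{version}(-{build})?-{python tag}-{abi tag}-{platform tag}.whl

def parse_wheel_name(filename):
    """Return a dict with the name, version, and compatibility tags of a wheel filename."""
    if not filename.endswith(".whl"):
        raise ValueError("not a wheel filename: %s" % filename)
    parts = filename[: -len(".whl")].split("-")
    if len(parts) < 5:
        raise ValueError("too few fields in: %s" % filename)
    # The last three dash-separated fields are always the compatibility tags.
    python_tag, abi_tag, platform_tag = parts[-3:]
    return {
        "name": parts[0],
        "version": parts[1],
        "python": python_tag,      # e.g. cp36 -> CPython 3.6
        "abi": abi_tag,            # e.g. cp36m
        "platform": platform_tag,  # e.g. linux_aarch64
    }

tags = parse_wheel_name("torch-1.10.0-cp36-cp36m-linux_aarch64.whl")
print(tags["python"], tags["platform"])  # -> cp36 linux_aarch64
```

A cp36 tag will never match a Python 3.9 interpreter, so the same error would appear even on the right architecture.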
Nicolas Jeker
On Wed, 2021-11-24 at 09:55 -0800, Tim Orling wrote:
There's a section about building from source with a patch in the article he linked in his first message. I don't know much about Python in Yocto, but maybe doing that in a recipe could work?
David Babich
On Thu, Nov 25, 2021 at 02:45 AM, Nicolas Jeker wrote:
Ah OK, I wasn't aware of the Python naming convention. That is likely my problem, since I'm using Honister, which uses Python 3.9. I pulled the wheel from NVIDIA's forums for PyTorch; unfortunately they've not released one for Python 3.9, so I will likely have to create it myself using the build-from-source method described in the article I linked. Unfortunately this will likely open a new can of worms...

I'm using Honister and the machine is 'jetson-tx2-devkit-tx2i'. I'm making a custom distro based on the meta-tegra-demo from this:

A large part of the problem is that many of these custom packages don't provide a nice .tar.gz PyPI source distribution that I could use with the pypi class. My target install ends up on a spacecraft, so there is a strong desire to have a fully managed build that can just be flashed onto the target TX2i without the need for any post-installation or configuration. Sadly I'm finding that a lot of these third-party dependencies do not lend themselves well to the Yocto paradigm.

I'm thinking that I may have to set up the target board, install the third-party packages on the target using whatever is required to do that, then copy the build products back to the host and use a bitbake recipe to just do_install:append the built products into the rootfs during the Yocto build. Does that sound feasible? I've not had to do this before, but it seems like it might be a reasonable approach given the complexity of the situation.

Thanks
-David
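For the copy-prebuilt-artifacts idea above, a minimal recipe sketch might look like the following. All file names, paths, and the tarball name are hypothetical; prebuilt binaries typically also need some QA checks skipped via INSANE_SKIP because they weren't built by bitbake:

```
# Hypothetical recipe: install artifacts that were built on the target
# board and copied back to the host. Names and paths are examples only.
SUMMARY = "Prebuilt PyTorch artifacts for the TX2i"
LICENSE = "BSD-3-Clause"
LIC_FILES_CHKSUM = "file://LICENSE;md5=91a5dfdaccf53b27488cb3a639e986d5"

SRC_URI = "\
    file://prebuilt-torch.tar.gz \
    file://LICENSE \
"

COMPATIBLE_MACHINE = "jetson-tx2-devkit-tx2i"
PACKAGE_ARCH = "${MACHINE_ARCH}"

# Prebuilt binaries were not built under bitbake, so skip the QA
# checks that would otherwise reject them.
INSANE_SKIP:${PN} += "already-stripped ldflags file-rdeps"

do_install() {
    install -d ${D}${libdir}/python3.9/site-packages
    cp -r ${WORKDIR}/prebuilt-torch/* ${D}${libdir}/python3.9/site-packages/
}

FILES:${PN} += "${libdir}/python3.9/site-packages"
```

The main caveat is that the prebuilt binaries must have been built against the same glibc and Python as the Honister image, which is exactly the compatibility problem Tim raised earlier.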