May 17 01:35:02.706969 kernel: Booting Linux on physical CPU 0x0000120000 [0x413fd0c1]
May 17 01:35:02.706988 kernel: Linux version 5.15.182-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP PREEMPT Fri May 16 23:24:21 -00 2025
May 17 01:35:02.706996 kernel: efi: EFI v2.70 by American Megatrends
May 17 01:35:02.707002 kernel: efi: ACPI 2.0=0xec080000 SMBIOS 3.0=0xf0a1ff98 ESRT=0xea465818 RNG=0xebf10018 MEMRESERVE=0xe45cdf98
May 17 01:35:02.707007 kernel: random: crng init done
May 17 01:35:02.707013 kernel: esrt: Reserving ESRT space from 0x00000000ea465818 to 0x00000000ea465878.
May 17 01:35:02.707019 kernel: ACPI: Early table checksum verification disabled
May 17 01:35:02.707024 kernel: ACPI: RSDP 0x00000000EC080000 000024 (v02 Ampere)
May 17 01:35:02.707031 kernel: ACPI: XSDT 0x00000000EC070000 0000A4 (v01 Ampere Altra 00000000 AMI 01000013)
May 17 01:35:02.707037 kernel: ACPI: FACP 0x00000000EC050000 000114 (v06 Ampere Altra 00000000 INTL 20190509)
May 17 01:35:02.707042 kernel: ACPI: DSDT 0x00000000EBFF0000 019B57 (v02 Ampere Jade 00000001 INTL 20200717)
May 17 01:35:02.707052 kernel: ACPI: DBG2 0x00000000EC060000 00005C (v00 Ampere Altra 00000000 INTL 20190509)
May 17 01:35:02.707057 kernel: ACPI: GTDT 0x00000000EC040000 000110 (v03 Ampere Altra 00000000 INTL 20190509)
May 17 01:35:02.707063 kernel: ACPI: SSDT 0x00000000EC030000 00002D (v02 Ampere Altra 00000001 INTL 20190509)
May 17 01:35:02.707071 kernel: ACPI: FIDT 0x00000000EBFE0000 00009C (v01 ALASKA A M I 01072009 AMI 00010013)
May 17 01:35:02.707077 kernel: ACPI: SPCR 0x00000000EBFD0000 000050 (v02 ALASKA A M I 01072009 AMI 0005000F)
May 17 01:35:02.707083 kernel: ACPI: BGRT 0x00000000EBFC0000 000038 (v01 ALASKA A M I 01072009 AMI 00010013)
May 17 01:35:02.707088 kernel: ACPI: MCFG 0x00000000EBFB0000 0000AC (v01 Ampere Altra 00000001 AMP. 01000013)
May 17 01:35:02.707094 kernel: ACPI: IORT 0x00000000EBFA0000 000610 (v00 Ampere Altra 00000000 AMP. 01000013)
May 17 01:35:02.707100 kernel: ACPI: PPTT 0x00000000EBF80000 006E60 (v02 Ampere Altra 00000000 AMP. 01000013)
May 17 01:35:02.707105 kernel: ACPI: SLIT 0x00000000EBF70000 00002D (v01 Ampere Altra 00000000 AMP. 01000013)
May 17 01:35:02.707111 kernel: ACPI: SRAT 0x00000000EBF60000 0006D0 (v03 Ampere Altra 00000000 AMP. 01000013)
May 17 01:35:02.707117 kernel: ACPI: APIC 0x00000000EBF90000 0019F4 (v05 Ampere Altra 00000003 AMI 01000013)
May 17 01:35:02.707124 kernel: ACPI: PCCT 0x00000000EBF40000 000576 (v02 Ampere Altra 00000003 AMP. 01000013)
May 17 01:35:02.707129 kernel: ACPI: WSMT 0x00000000EBF30000 000028 (v01 ALASKA A M I 01072009 AMI 00010013)
May 17 01:35:02.707135 kernel: ACPI: FPDT 0x00000000EBF20000 000044 (v01 ALASKA A M I 01072009 AMI 01000013)
May 17 01:35:02.707141 kernel: ACPI: SPCR: console: pl011,mmio32,0x100002600000,115200
May 17 01:35:02.707146 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x88300000-0x883fffff]
May 17 01:35:02.707152 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x90000000-0xffffffff]
May 17 01:35:02.707157 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0x8007fffffff]
May 17 01:35:02.707163 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80100000000-0x83fffffffff]
May 17 01:35:02.707169 kernel: NUMA: NODE_DATA [mem 0x83fdffcf900-0x83fdffd4fff]
May 17 01:35:02.707174 kernel: Zone ranges:
May 17 01:35:02.707180 kernel: DMA [mem 0x0000000088300000-0x00000000ffffffff]
May 17 01:35:02.707186 kernel: DMA32 empty
May 17 01:35:02.707192 kernel: Normal [mem 0x0000000100000000-0x0000083fffffffff]
May 17 01:35:02.707198 kernel: Movable zone start for each node
May 17 01:35:02.707204 kernel: Early memory node ranges
May 17 01:35:02.707209 kernel: node 0: [mem 0x0000000088300000-0x00000000883fffff]
May 17 01:35:02.707215 kernel: node 0: [mem 0x0000000090000000-0x0000000091ffffff]
May 17 01:35:02.707223 kernel: node 0: [mem 0x0000000092000000-0x0000000093ffffff]
May 17 01:35:02.707229 kernel: node 0: [mem 0x0000000094000000-0x00000000eba36fff]
May 17 01:35:02.707236 kernel: node 0: [mem 0x00000000eba37000-0x00000000ebeadfff]
May 17 01:35:02.707242 kernel: node 0: [mem 0x00000000ebeae000-0x00000000ebeaefff]
May 17 01:35:02.707248 kernel: node 0: [mem 0x00000000ebeaf000-0x00000000ebeccfff]
May 17 01:35:02.707254 kernel: node 0: [mem 0x00000000ebecd000-0x00000000ebecdfff]
May 17 01:35:02.707260 kernel: node 0: [mem 0x00000000ebece000-0x00000000ebecffff]
May 17 01:35:02.707266 kernel: node 0: [mem 0x00000000ebed0000-0x00000000ec0effff]
May 17 01:35:02.707272 kernel: node 0: [mem 0x00000000ec0f0000-0x00000000ec0fffff]
May 17 01:35:02.707277 kernel: node 0: [mem 0x00000000ec100000-0x00000000ee54ffff]
May 17 01:35:02.707283 kernel: node 0: [mem 0x00000000ee550000-0x00000000f765ffff]
May 17 01:35:02.707290 kernel: node 0: [mem 0x00000000f7660000-0x00000000f784ffff]
May 17 01:35:02.707296 kernel: node 0: [mem 0x00000000f7850000-0x00000000f7fdffff]
May 17 01:35:02.707301 kernel: node 0: [mem 0x00000000f7fe0000-0x00000000ffc8efff]
May 17 01:35:02.707307 kernel: node 0: [mem 0x00000000ffc8f000-0x00000000ffc8ffff]
May 17 01:35:02.707313 kernel: node 0: [mem 0x00000000ffc90000-0x00000000ffffffff]
May 17 01:35:02.707319 kernel: node 0: [mem 0x0000080000000000-0x000008007fffffff]
May 17 01:35:02.707325 kernel: node 0: [mem 0x0000080100000000-0x0000083fffffffff]
May 17 01:35:02.707331 kernel: Initmem setup node 0 [mem 0x0000000088300000-0x0000083fffffffff]
May 17 01:35:02.707337 kernel: On node 0, zone DMA: 768 pages in unavailable ranges
May 17 01:35:02.707343 kernel: On node 0, zone DMA: 31744 pages in unavailable ranges
May 17 01:35:02.707349 kernel: psci: probing for conduit method from ACPI.
May 17 01:35:02.707354 kernel: psci: PSCIv1.1 detected in firmware.
May 17 01:35:02.707361 kernel: psci: Using standard PSCI v0.2 function IDs
May 17 01:35:02.707368 kernel: psci: MIGRATE_INFO_TYPE not supported.
May 17 01:35:02.707373 kernel: psci: SMC Calling Convention v1.2
May 17 01:35:02.707379 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x0 -> Node 0
May 17 01:35:02.707385 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x100 -> Node 0
May 17 01:35:02.707391 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x10000 -> Node 0
May 17 01:35:02.707397 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x10100 -> Node 0
May 17 01:35:02.707403 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x20000 -> Node 0
May 17 01:35:02.707409 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x20100 -> Node 0
May 17 01:35:02.707415 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x30000 -> Node 0
May 17 01:35:02.707421 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x30100 -> Node 0
May 17 01:35:02.707427 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x40000 -> Node 0
May 17 01:35:02.707434 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x40100 -> Node 0
May 17 01:35:02.707440 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x50000 -> Node 0
May 17 01:35:02.707446 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x50100 -> Node 0
May 17 01:35:02.707451 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x60000 -> Node 0
May 17 01:35:02.707457 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x60100 -> Node 0
May 17 01:35:02.707463 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x70000 -> Node 0
May 17 01:35:02.707469 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x70100 -> Node 0
May 17 01:35:02.707475 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x80000 -> Node 0
May 17 01:35:02.707481 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x80100 -> Node 0
May 17 01:35:02.707487 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x90000 -> Node 0
May 17 01:35:02.707493 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x90100 -> Node 0
May 17 01:35:02.707499 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0xa0000 -> Node 0
May 17 01:35:02.707506 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0xa0100 -> Node 0
May 17 01:35:02.707512 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0xb0000 -> Node 0
May 17 01:35:02.707517 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0xb0100 -> Node 0
May 17 01:35:02.707523 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0xc0000 -> Node 0
May 17 01:35:02.707529 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0xc0100 -> Node 0
May 17 01:35:02.707535 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0xd0000 -> Node 0
May 17 01:35:02.707541 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0xd0100 -> Node 0
May 17 01:35:02.707547 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0xe0000 -> Node 0
May 17 01:35:02.707553 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0xe0100 -> Node 0
May 17 01:35:02.707559 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0xf0000 -> Node 0
May 17 01:35:02.707564 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0xf0100 -> Node 0
May 17 01:35:02.707571 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x100000 -> Node 0
May 17 01:35:02.707577 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x100100 -> Node 0
May 17 01:35:02.707583 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x110000 -> Node 0
May 17 01:35:02.707589 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x110100 -> Node 0
May 17 01:35:02.707595 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x120000 -> Node 0
May 17 01:35:02.707601 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x120100 -> Node 0
May 17 01:35:02.707607 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x130000 -> Node 0
May 17 01:35:02.707613 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x130100 -> Node 0
May 17 01:35:02.707619 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x140000 -> Node 0
May 17 01:35:02.707625 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x140100 -> Node 0
May 17 01:35:02.707630 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x150000 -> Node 0
May 17 01:35:02.707637 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x150100 -> Node 0
May 17 01:35:02.707644 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x160000 -> Node 0
May 17 01:35:02.707650 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x160100 -> Node 0
May 17 01:35:02.707656 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x170000 -> Node 0
May 17 01:35:02.707662 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x170100 -> Node 0
May 17 01:35:02.707668 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x180000 -> Node 0
May 17 01:35:02.707673 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x180100 -> Node 0
May 17 01:35:02.707679 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x190000 -> Node 0
May 17 01:35:02.707685 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x190100 -> Node 0
May 17 01:35:02.707697 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1a0000 -> Node 0
May 17 01:35:02.707703 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1a0100 -> Node 0
May 17 01:35:02.707711 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1b0000 -> Node 0
May 17 01:35:02.707717 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1b0100 -> Node 0
May 17 01:35:02.707723 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1c0000 -> Node 0
May 17 01:35:02.707729 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1c0100 -> Node 0
May 17 01:35:02.707736 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1d0000 -> Node 0
May 17 01:35:02.707742 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1d0100 -> Node 0
May 17 01:35:02.707750 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1e0000 -> Node 0
May 17 01:35:02.707756 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1e0100 -> Node 0
May 17 01:35:02.707762 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1f0000 -> Node 0
May 17 01:35:02.707768 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1f0100 -> Node 0
May 17 01:35:02.707775 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x200000 -> Node 0
May 17 01:35:02.707781 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x200100 -> Node 0
May 17 01:35:02.707787 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x210000 -> Node 0
May 17 01:35:02.707793 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x210100 -> Node 0
May 17 01:35:02.707800 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x220000 -> Node 0
May 17 01:35:02.707806 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x220100 -> Node 0
May 17 01:35:02.707812 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x230000 -> Node 0
May 17 01:35:02.707819 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x230100 -> Node 0
May 17 01:35:02.707826 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x240000 -> Node 0
May 17 01:35:02.707832 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x240100 -> Node 0
May 17 01:35:02.707838 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x250000 -> Node 0
May 17 01:35:02.707844 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x250100 -> Node 0
May 17 01:35:02.707851 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x260000 -> Node 0
May 17 01:35:02.707857 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x260100 -> Node 0
May 17 01:35:02.707863 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x270000 -> Node 0
May 17 01:35:02.707870 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x270100 -> Node 0
May 17 01:35:02.707876 kernel: percpu: Embedded 30 pages/cpu s83032 r8192 d31656 u122880
May 17 01:35:02.707882 kernel: pcpu-alloc: s83032 r8192 d31656 u122880 alloc=30*4096
May 17 01:35:02.707889 kernel: pcpu-alloc: [0] 00 [0] 01 [0] 02 [0] 03 [0] 04 [0] 05 [0] 06 [0] 07
May 17 01:35:02.707896 kernel: pcpu-alloc: [0] 08 [0] 09 [0] 10 [0] 11 [0] 12 [0] 13 [0] 14 [0] 15
May 17 01:35:02.707903 kernel: pcpu-alloc: [0] 16 [0] 17 [0] 18 [0] 19 [0] 20 [0] 21 [0] 22 [0] 23
May 17 01:35:02.707909 kernel: pcpu-alloc: [0] 24 [0] 25 [0] 26 [0] 27 [0] 28 [0] 29 [0] 30 [0] 31
May 17 01:35:02.707915 kernel: pcpu-alloc: [0] 32 [0] 33 [0] 34 [0] 35 [0] 36 [0] 37 [0] 38 [0] 39
May 17 01:35:02.707922 kernel: pcpu-alloc: [0] 40 [0] 41 [0] 42 [0] 43 [0] 44 [0] 45 [0] 46 [0] 47
May 17 01:35:02.707928 kernel: pcpu-alloc: [0] 48 [0] 49 [0] 50 [0] 51 [0] 52 [0] 53 [0] 54 [0] 55
May 17 01:35:02.707934 kernel: pcpu-alloc: [0] 56 [0] 57 [0] 58 [0] 59 [0] 60 [0] 61 [0] 62 [0] 63
May 17 01:35:02.707940 kernel: pcpu-alloc: [0] 64 [0] 65 [0] 66 [0] 67 [0] 68 [0] 69 [0] 70 [0] 71
May 17 01:35:02.707947 kernel: pcpu-alloc: [0] 72 [0] 73 [0] 74 [0] 75 [0] 76 [0] 77 [0] 78 [0] 79
May 17 01:35:02.707953 kernel: Detected PIPT I-cache on CPU0
May 17 01:35:02.707959 kernel: CPU features: detected: GIC system register CPU interface
May 17 01:35:02.707967 kernel: CPU features: detected: Virtualization Host Extensions
May 17 01:35:02.707973 kernel: CPU features: detected: Hardware dirty bit management
May 17 01:35:02.707979 kernel: CPU features: detected: Spectre-v4
May 17 01:35:02.707985 kernel: CPU features: detected: Spectre-BHB
May 17 01:35:02.707992 kernel: CPU features: kernel page table isolation forced ON by KASLR
May 17 01:35:02.707998 kernel: CPU features: detected: Kernel page table isolation (KPTI)
May 17 01:35:02.708004 kernel: CPU features: detected: ARM erratum 1418040
May 17 01:35:02.708011 kernel: CPU features: detected: SSBS not fully self-synchronizing
May 17 01:35:02.708017 kernel: alternatives: patching kernel code
May 17 01:35:02.708023 kernel: Built 1 zonelists, mobility grouping on. Total pages: 65996028
May 17 01:35:02.708030 kernel: Policy zone: Normal
May 17 01:35:02.708038 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=packet flatcar.autologin verity.usrhash=2d48a3f066dcb37cd386b93b4921577cdf70daa76e7b097cf98da108968f8bb5
May 17 01:35:02.708047 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
May 17 01:35:02.708054 kernel: printk: log_buf_len individual max cpu contribution: 4096 bytes
May 17 01:35:02.708060 kernel: printk: log_buf_len total cpu_extra contributions: 323584 bytes
May 17 01:35:02.708066 kernel: printk: log_buf_len min size: 262144 bytes
May 17 01:35:02.708072 kernel: printk: log_buf_len: 1048576 bytes
May 17 01:35:02.708079 kernel: printk: early log buf free: 249816(95%)
May 17 01:35:02.708085 kernel: Dentry cache hash table entries: 16777216 (order: 15, 134217728 bytes, linear)
May 17 01:35:02.708092 kernel: Inode-cache hash table entries: 8388608 (order: 14, 67108864 bytes, linear)
May 17 01:35:02.708098 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
May 17 01:35:02.708104 kernel: software IO TLB: mapped [mem 0x00000000fbc8f000-0x00000000ffc8f000] (64MB)
May 17 01:35:02.708112 kernel: Memory: 262927068K/268174336K available (9792K kernel code, 2094K rwdata, 7584K rodata, 36416K init, 777K bss, 5247268K reserved, 0K cma-reserved)
May 17 01:35:02.708119 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=80, Nodes=1
May 17 01:35:02.708125 kernel: trace event string verifier disabled
May 17 01:35:02.708131 kernel: rcu: Preemptible hierarchical RCU implementation.
May 17 01:35:02.708138 kernel: rcu: RCU event tracing is enabled.
May 17 01:35:02.708145 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=80.
May 17 01:35:02.708151 kernel: Trampoline variant of Tasks RCU enabled.
May 17 01:35:02.708158 kernel: Tracing variant of Tasks RCU enabled.
May 17 01:35:02.708164 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
May 17 01:35:02.708171 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=80
May 17 01:35:02.708177 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
May 17 01:35:02.708185 kernel: GICv3: GIC: Using split EOI/Deactivate mode
May 17 01:35:02.708191 kernel: GICv3: 672 SPIs implemented
May 17 01:35:02.708197 kernel: GICv3: 0 Extended SPIs implemented
May 17 01:35:02.708204 kernel: GICv3: Distributor has no Range Selector support
May 17 01:35:02.708210 kernel: Root IRQ handler: gic_handle_irq
May 17 01:35:02.708216 kernel: GICv3: 16 PPIs implemented
May 17 01:35:02.708222 kernel: GICv3: CPU0: found redistributor 120000 region 0:0x00001001005c0000
May 17 01:35:02.708229 kernel: SRAT: PXM 0 -> ITS 0 -> Node 0
May 17 01:35:02.708235 kernel: SRAT: PXM 0 -> ITS 1 -> Node 0
May 17 01:35:02.708241 kernel: SRAT: PXM 0 -> ITS 2 -> Node 0
May 17 01:35:02.708247 kernel: SRAT: PXM 0 -> ITS 3 -> Node 0
May 17 01:35:02.708253 kernel: SRAT: PXM 0 -> ITS 4 -> Node 0
May 17 01:35:02.708261 kernel: SRAT: PXM 0 -> ITS 5 -> Node 0
May 17 01:35:02.708267 kernel: SRAT: PXM 0 -> ITS 6 -> Node 0
May 17 01:35:02.708273 kernel: SRAT: PXM 0 -> ITS 7 -> Node 0
May 17 01:35:02.708280 kernel: ITS [mem 0x100100040000-0x10010005ffff]
May 17 01:35:02.708286 kernel: ITS@0x0000100100040000: allocated 8192 Devices @80000200000 (indirect, esz 8, psz 64K, shr 1)
May 17 01:35:02.708293 kernel: ITS@0x0000100100040000: allocated 32768 Interrupt Collections @80000210000 (flat, esz 2, psz 64K, shr 1)
May 17 01:35:02.708299 kernel: ITS [mem 0x100100060000-0x10010007ffff]
May 17 01:35:02.708306 kernel: ITS@0x0000100100060000: allocated 8192 Devices @80000230000 (indirect, esz 8, psz 64K, shr 1)
May 17 01:35:02.708312 kernel: ITS@0x0000100100060000: allocated 32768 Interrupt Collections @80000240000 (flat, esz 2, psz 64K, shr 1)
May 17 01:35:02.708318 kernel: ITS [mem 0x100100080000-0x10010009ffff]
May 17 01:35:02.708325 kernel: ITS@0x0000100100080000: allocated 8192 Devices @80000260000 (indirect, esz 8, psz 64K, shr 1)
May 17 01:35:02.708331 kernel: ITS@0x0000100100080000: allocated 32768 Interrupt Collections @80000270000 (flat, esz 2, psz 64K, shr 1)
May 17 01:35:02.708339 kernel: ITS [mem 0x1001000a0000-0x1001000bffff]
May 17 01:35:02.708345 kernel: ITS@0x00001001000a0000: allocated 8192 Devices @80000290000 (indirect, esz 8, psz 64K, shr 1)
May 17 01:35:02.708352 kernel: ITS@0x00001001000a0000: allocated 32768 Interrupt Collections @800002a0000 (flat, esz 2, psz 64K, shr 1)
May 17 01:35:02.708358 kernel: ITS [mem 0x1001000c0000-0x1001000dffff]
May 17 01:35:02.708364 kernel: ITS@0x00001001000c0000: allocated 8192 Devices @800002c0000 (indirect, esz 8, psz 64K, shr 1)
May 17 01:35:02.708371 kernel: ITS@0x00001001000c0000: allocated 32768 Interrupt Collections @800002d0000 (flat, esz 2, psz 64K, shr 1)
May 17 01:35:02.708377 kernel: ITS [mem 0x1001000e0000-0x1001000fffff]
May 17 01:35:02.708383 kernel: ITS@0x00001001000e0000: allocated 8192 Devices @800002f0000 (indirect, esz 8, psz 64K, shr 1)
May 17 01:35:02.708390 kernel: ITS@0x00001001000e0000: allocated 32768 Interrupt Collections @80000300000 (flat, esz 2, psz 64K, shr 1)
May 17 01:35:02.708396 kernel: ITS [mem 0x100100100000-0x10010011ffff]
May 17 01:35:02.708402 kernel: ITS@0x0000100100100000: allocated 8192 Devices @80000320000 (indirect, esz 8, psz 64K, shr 1)
May 17 01:35:02.708410 kernel: ITS@0x0000100100100000: allocated 32768 Interrupt Collections @80000330000 (flat, esz 2, psz 64K, shr 1)
May 17 01:35:02.708416 kernel: ITS [mem 0x100100120000-0x10010013ffff]
May 17 01:35:02.708423 kernel: ITS@0x0000100100120000: allocated 8192 Devices @80000350000 (indirect, esz 8, psz 64K, shr 1)
May 17 01:35:02.708429 kernel: ITS@0x0000100100120000: allocated 32768 Interrupt Collections @80000360000 (flat, esz 2, psz 64K, shr 1)
May 17 01:35:02.708435 kernel: GICv3: using LPI property table @0x0000080000370000
May 17 01:35:02.708442 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000080000380000
May 17 01:35:02.708448 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 17 01:35:02.708455 kernel: ACPI GTDT: found 1 memory-mapped timer block(s).
May 17 01:35:02.708461 kernel: arch_timer: cp15 and mmio timer(s) running at 25.00MHz (phys/phys).
May 17 01:35:02.708467 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
May 17 01:35:02.708474 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
May 17 01:35:02.708481 kernel: Console: colour dummy device 80x25
May 17 01:35:02.708488 kernel: printk: console [tty0] enabled
May 17 01:35:02.708494 kernel: ACPI: Core revision 20210730
May 17 01:35:02.708501 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
May 17 01:35:02.708508 kernel: pid_max: default: 81920 minimum: 640
May 17 01:35:02.708514 kernel: LSM: Security Framework initializing
May 17 01:35:02.708520 kernel: SELinux: Initializing.
May 17 01:35:02.708527 kernel: Mount-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
May 17 01:35:02.708534 kernel: Mountpoint-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
May 17 01:35:02.708541 kernel: rcu: Hierarchical SRCU implementation.
May 17 01:35:02.708548 kernel: Platform MSI: ITS@0x100100040000 domain created
May 17 01:35:02.708555 kernel: Platform MSI: ITS@0x100100060000 domain created
May 17 01:35:02.708561 kernel: Platform MSI: ITS@0x100100080000 domain created
May 17 01:35:02.708567 kernel: Platform MSI: ITS@0x1001000a0000 domain created
May 17 01:35:02.708574 kernel: Platform MSI: ITS@0x1001000c0000 domain created
May 17 01:35:02.708580 kernel: Platform MSI: ITS@0x1001000e0000 domain created
May 17 01:35:02.708587 kernel: Platform MSI: ITS@0x100100100000 domain created
May 17 01:35:02.708593 kernel: Platform MSI: ITS@0x100100120000 domain created
May 17 01:35:02.708600 kernel: PCI/MSI: ITS@0x100100040000 domain created
May 17 01:35:02.708607 kernel: PCI/MSI: ITS@0x100100060000 domain created
May 17 01:35:02.708613 kernel: PCI/MSI: ITS@0x100100080000 domain created
May 17 01:35:02.708620 kernel: PCI/MSI: ITS@0x1001000a0000 domain created
May 17 01:35:02.708626 kernel: PCI/MSI: ITS@0x1001000c0000 domain created
May 17 01:35:02.708633 kernel: PCI/MSI: ITS@0x1001000e0000 domain created
May 17 01:35:02.708639 kernel: PCI/MSI: ITS@0x100100100000 domain created
May 17 01:35:02.708646 kernel: PCI/MSI: ITS@0x100100120000 domain created
May 17 01:35:02.708652 kernel: Remapping and enabling EFI services.
May 17 01:35:02.708658 kernel: smp: Bringing up secondary CPUs ...
May 17 01:35:02.708666 kernel: Detected PIPT I-cache on CPU1
May 17 01:35:02.708672 kernel: GICv3: CPU1: found redistributor 1a0000 region 0:0x00001001007c0000
May 17 01:35:02.708679 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000080000390000
May 17 01:35:02.708685 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 17 01:35:02.708692 kernel: CPU1: Booted secondary processor 0x00001a0000 [0x413fd0c1]
May 17 01:35:02.708698 kernel: Detected PIPT I-cache on CPU2
May 17 01:35:02.708705 kernel: GICv3: CPU2: found redistributor 140000 region 0:0x0000100100640000
May 17 01:35:02.708711 kernel: GICv3: CPU2: using allocated LPI pending table @0x00000800003a0000
May 17 01:35:02.708718 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 17 01:35:02.708725 kernel: CPU2: Booted secondary processor 0x0000140000 [0x413fd0c1]
May 17 01:35:02.708732 kernel: Detected PIPT I-cache on CPU3
May 17 01:35:02.708738 kernel: GICv3: CPU3: found redistributor 1c0000 region 0:0x0000100100840000
May 17 01:35:02.708745 kernel: GICv3: CPU3: using allocated LPI pending table @0x00000800003b0000
May 17 01:35:02.708751 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 17 01:35:02.708758 kernel: CPU3: Booted secondary processor 0x00001c0000 [0x413fd0c1]
May 17 01:35:02.708764 kernel: Detected PIPT I-cache on CPU4
May 17 01:35:02.708771 kernel: GICv3: CPU4: found redistributor 100000 region 0:0x0000100100540000
May 17 01:35:02.708777 kernel: GICv3: CPU4: using allocated LPI pending table @0x00000800003c0000
May 17 01:35:02.708784 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 17 01:35:02.708791 kernel: CPU4: Booted secondary processor 0x0000100000 [0x413fd0c1]
May 17 01:35:02.708797 kernel: Detected PIPT I-cache on CPU5
May 17 01:35:02.708804 kernel: GICv3: CPU5: found redistributor 180000 region 0:0x0000100100740000
May 17 01:35:02.708810 kernel: GICv3: CPU5: using allocated LPI pending table @0x00000800003d0000
May 17 01:35:02.708817 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 17 01:35:02.708823 kernel: CPU5: Booted secondary processor 0x0000180000 [0x413fd0c1]
May 17 01:35:02.708830 kernel: Detected PIPT I-cache on CPU6
May 17 01:35:02.708836 kernel: GICv3: CPU6: found redistributor 160000 region 0:0x00001001006c0000
May 17 01:35:02.708843 kernel: GICv3: CPU6: using allocated LPI pending table @0x00000800003e0000
May 17 01:35:02.708850 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 17 01:35:02.708857 kernel: CPU6: Booted secondary processor 0x0000160000 [0x413fd0c1]
May 17 01:35:02.708863 kernel: Detected PIPT I-cache on CPU7
May 17 01:35:02.708869 kernel: GICv3: CPU7: found redistributor 1e0000 region 0:0x00001001008c0000
May 17 01:35:02.708876 kernel: GICv3: CPU7: using allocated LPI pending table @0x00000800003f0000
May 17 01:35:02.708882 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 17 01:35:02.708888 kernel: CPU7: Booted secondary processor 0x00001e0000 [0x413fd0c1]
May 17 01:35:02.708895 kernel: Detected PIPT I-cache on CPU8
May 17 01:35:02.708901 kernel: GICv3: CPU8: found redistributor a0000 region 0:0x00001001003c0000
May 17 01:35:02.708908 kernel: GICv3: CPU8: using allocated LPI pending table @0x0000080000800000
May 17 01:35:02.708915 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 17 01:35:02.708922 kernel: CPU8: Booted secondary processor 0x00000a0000 [0x413fd0c1]
May 17 01:35:02.708928 kernel: Detected PIPT I-cache on CPU9
May 17 01:35:02.708934 kernel: GICv3: CPU9: found redistributor 220000 region 0:0x00001001009c0000
May 17 01:35:02.708941 kernel: GICv3: CPU9: using allocated LPI pending table @0x0000080000810000
May 17 01:35:02.708947 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 17 01:35:02.708954 kernel: CPU9: Booted secondary processor 0x0000220000 [0x413fd0c1]
May 17 01:35:02.708960 kernel: Detected PIPT I-cache on CPU10
May 17 01:35:02.708966 kernel: GICv3: CPU10: found redistributor c0000 region 0:0x0000100100440000
May 17 01:35:02.708974 kernel: GICv3: CPU10: using allocated LPI pending table @0x0000080000820000
May 17 01:35:02.708980 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 17 01:35:02.708987 kernel: CPU10: Booted secondary processor 0x00000c0000 [0x413fd0c1]
May 17 01:35:02.708993 kernel: Detected PIPT I-cache on CPU11
May 17 01:35:02.709000 kernel: GICv3: CPU11: found redistributor 240000 region 0:0x0000100100a40000
May 17 01:35:02.709006 kernel: GICv3: CPU11: using allocated LPI pending table @0x0000080000830000
May 17 01:35:02.709013 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 17 01:35:02.709019 kernel: CPU11: Booted secondary processor 0x0000240000 [0x413fd0c1]
May 17 01:35:02.709026 kernel: Detected PIPT I-cache on CPU12
May 17 01:35:02.709032 kernel: GICv3: CPU12: found redistributor 80000 region 0:0x0000100100340000
May 17 01:35:02.709040 kernel: GICv3: CPU12: using allocated LPI pending table @0x0000080000840000
May 17 01:35:02.709048 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 17 01:35:02.709054 kernel: CPU12: Booted secondary processor 0x0000080000 [0x413fd0c1]
May 17 01:35:02.709061 kernel: Detected PIPT I-cache on CPU13
May 17 01:35:02.709067 kernel: GICv3: CPU13: found redistributor 200000 region 0:0x0000100100940000
May 17 01:35:02.709074 kernel: GICv3: CPU13: using allocated LPI pending table @0x0000080000850000
May 17 01:35:02.709080 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 17 01:35:02.709087 kernel: CPU13: Booted secondary processor 0x0000200000 [0x413fd0c1]
May 17 01:35:02.709093 kernel: Detected PIPT I-cache on CPU14
May 17 01:35:02.709101 kernel: GICv3: CPU14: found redistributor e0000 region 0:0x00001001004c0000
May 17 01:35:02.709107 kernel: GICv3: CPU14: using allocated LPI pending table @0x0000080000860000
May 17 01:35:02.709114 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 17 01:35:02.709120 kernel: CPU14: Booted secondary processor 0x00000e0000 [0x413fd0c1]
May 17 01:35:02.709127 kernel: Detected PIPT I-cache on CPU15
May 17 01:35:02.709133 kernel: GICv3: CPU15: found redistributor 260000 region 0:0x0000100100ac0000
May 17 01:35:02.709140 kernel: GICv3: CPU15: using allocated LPI pending table @0x0000080000870000
May 17 01:35:02.709146 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 17 01:35:02.709153 kernel: CPU15: Booted secondary processor 0x0000260000 [0x413fd0c1]
May 17 01:35:02.709160 kernel: Detected PIPT I-cache on CPU16
May 17 01:35:02.709167 kernel: GICv3: CPU16: found redistributor 20000 region 0:0x00001001001c0000
May 17 01:35:02.709173 kernel: GICv3: CPU16: using allocated LPI pending table @0x0000080000880000
May 17 01:35:02.709180 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 17 01:35:02.709186 kernel: CPU16: Booted secondary processor 0x0000020000 [0x413fd0c1]
May 17 01:35:02.709192 kernel: Detected PIPT I-cache on CPU17
May 17 01:35:02.709199 kernel: GICv3: CPU17: found redistributor 40000 region 0:0x0000100100240000
May 17 01:35:02.709205 kernel: GICv3: CPU17: using allocated LPI pending table @0x0000080000890000
May 17 01:35:02.709212 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 17 01:35:02.709218 kernel: CPU17: Booted secondary processor 0x0000040000 [0x413fd0c1]
May 17 01:35:02.709232 kernel: Detected PIPT I-cache on CPU18
May 17 01:35:02.709240 kernel: GICv3: CPU18: found redistributor 0 region 0:0x0000100100140000
May 17 01:35:02.709247 kernel: GICv3: CPU18: using allocated LPI pending table @0x00000800008a0000
May 17 01:35:02.709254 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 17 01:35:02.709260 kernel: CPU18: Booted secondary processor 0x0000000000 [0x413fd0c1]
May 17 01:35:02.709267 kernel: Detected PIPT I-cache on CPU19
May 17 01:35:02.709274 kernel: GICv3: CPU19: found redistributor 60000 region 0:0x00001001002c0000
May 17 01:35:02.709281 kernel: GICv3: CPU19: using allocated LPI pending table @0x00000800008b0000
May 17 01:35:02.709288 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 17 01:35:02.709295 kernel: CPU19: Booted secondary processor 0x0000060000 [0x413fd0c1]
May 17 01:35:02.709302 kernel: Detected PIPT I-cache on CPU20
May 17 01:35:02.709309 kernel: GICv3: CPU20: found redistributor 130000 region 0:0x0000100100600000
May 17 01:35:02.709316 kernel: GICv3: CPU20: using allocated LPI pending table @0x00000800008c0000
May 17 01:35:02.709322 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 17 01:35:02.709329 kernel: CPU20: Booted secondary processor 0x0000130000 [0x413fd0c1]
May 17 01:35:02.709336 kernel: Detected PIPT I-cache on CPU21
May 17 01:35:02.709344 kernel: GICv3: CPU21: found redistributor 1b0000 region 0:0x0000100100800000
May 17 01:35:02.709351 kernel: GICv3: CPU21: using allocated LPI pending table @0x00000800008d0000
May 17 01:35:02.709358 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 17 01:35:02.709365 kernel: CPU21: Booted secondary processor 0x00001b0000 [0x413fd0c1]
May 17 01:35:02.709372 kernel: Detected PIPT I-cache on CPU22
May 17 01:35:02.709380 kernel: GICv3: CPU22: found redistributor 150000 region 0:0x0000100100680000
May 17 01:35:02.709387 kernel: GICv3: CPU22: using allocated LPI pending table @0x00000800008e0000
May 17 01:35:02.709394 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 17 01:35:02.709401 kernel: CPU22: Booted secondary processor 0x0000150000 [0x413fd0c1]
May 17 01:35:02.709408 kernel: Detected PIPT I-cache on CPU23
May 17 01:35:02.709415 kernel: GICv3: CPU23: found redistributor 1d0000 region 0:0x0000100100880000
May 17 01:35:02.709422 kernel: GICv3: CPU23: using allocated LPI pending table @0x00000800008f0000
May 17 01:35:02.709429 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 17 01:35:02.709436 kernel: CPU23: Booted secondary processor 0x00001d0000 [0x413fd0c1]
May 17 01:35:02.709443 kernel: Detected PIPT I-cache on CPU24
May 17 01:35:02.709449 kernel: GICv3: CPU24: found redistributor 110000 region 0:0x0000100100580000
May 17 01:35:02.709456 kernel: GICv3: CPU24: using allocated LPI pending table @0x0000080000900000
May 17 01:35:02.709463 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 17 01:35:02.709471 kernel: CPU24: Booted secondary processor 0x0000110000 [0x413fd0c1]
May 17 01:35:02.709478 kernel: Detected PIPT I-cache on CPU25
May 17 01:35:02.709484 kernel: GICv3: CPU25: found redistributor 190000 region 0:0x0000100100780000
May 17 01:35:02.709491 kernel: GICv3: CPU25: using allocated LPI pending table @0x0000080000910000
May 17 01:35:02.709498 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 17 01:35:02.709505 kernel: CPU25: Booted secondary processor 0x0000190000 [0x413fd0c1]
May 17 01:35:02.709512 kernel: Detected PIPT I-cache on CPU26
May 17 01:35:02.709519 kernel: GICv3: CPU26: found redistributor 170000 region 0:0x0000100100700000
May 17 01:35:02.709526 kernel: GICv3: CPU26: using allocated LPI pending table @0x0000080000920000
May 17 01:35:02.709533 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 17 01:35:02.709540 kernel: CPU26: Booted secondary processor 0x0000170000 [0x413fd0c1]
May 17 01:35:02.709547 kernel: Detected PIPT I-cache on CPU27
May 17 01:35:02.709554 kernel: GICv3: CPU27: found redistributor 1f0000 region 0:0x0000100100900000
May 17 01:35:02.709561 kernel: GICv3: CPU27: using allocated LPI pending table @0x0000080000930000
May 17 01:35:02.709567 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 17 01:35:02.709574 kernel: CPU27: Booted secondary processor 0x00001f0000 [0x413fd0c1]
May 17 01:35:02.709581 kernel: Detected PIPT I-cache on CPU28
May 17 01:35:02.709588 kernel: GICv3: CPU28: found redistributor b0000 region 0:0x0000100100400000
May 17 01:35:02.709595 kernel: GICv3: CPU28: using allocated LPI pending table @0x0000080000940000
May 17 01:35:02.709603 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 17 01:35:02.709610 kernel: CPU28: Booted secondary processor 0x00000b0000 [0x413fd0c1]
May 17 01:35:02.709617 kernel: Detected PIPT I-cache on CPU29
May 17 01:35:02.709623 kernel: GICv3: CPU29: found redistributor 230000 region 0:0x0000100100a00000
May 17 01:35:02.709630 kernel: GICv3: CPU29: using allocated LPI pending table @0x0000080000950000
May 17 01:35:02.709637 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 17 01:35:02.709644 kernel: CPU29: Booted secondary processor 0x0000230000 [0x413fd0c1]
May 17 01:35:02.709651 kernel: Detected PIPT I-cache on CPU30
May 17 01:35:02.709657 kernel: GICv3: CPU30: found redistributor d0000 region 0:0x0000100100480000
May 17 01:35:02.709665 kernel: GICv3: CPU30: using allocated LPI pending table @0x0000080000960000
May 17 01:35:02.709672 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 17 01:35:02.709679 kernel: CPU30: Booted secondary processor 0x00000d0000 [0x413fd0c1]
May 17 01:35:02.709686 kernel: Detected PIPT I-cache on CPU31
May 17 01:35:02.709693 kernel: GICv3: CPU31: found redistributor 250000 region 0:0x0000100100a80000
May 17 01:35:02.709700 kernel: GICv3: CPU31: using allocated LPI pending table @0x0000080000970000
May 17 01:35:02.709707 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 17 01:35:02.709713 kernel: CPU31: Booted secondary processor 0x0000250000 [0x413fd0c1]
May 17 01:35:02.709720 kernel: Detected PIPT I-cache on CPU32
May 17 01:35:02.709727 kernel: GICv3: CPU32: found redistributor 90000 region 0:0x0000100100380000
May 17 01:35:02.709735 kernel: GICv3: CPU32: using allocated LPI
pending table @0x0000080000980000 May 17 01:35:02.709742 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 17 01:35:02.709749 kernel: CPU32: Booted secondary processor 0x0000090000 [0x413fd0c1] May 17 01:35:02.709755 kernel: Detected PIPT I-cache on CPU33 May 17 01:35:02.709762 kernel: GICv3: CPU33: found redistributor 210000 region 0:0x0000100100980000 May 17 01:35:02.709769 kernel: GICv3: CPU33: using allocated LPI pending table @0x0000080000990000 May 17 01:35:02.709776 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 17 01:35:02.709782 kernel: CPU33: Booted secondary processor 0x0000210000 [0x413fd0c1] May 17 01:35:02.709789 kernel: Detected PIPT I-cache on CPU34 May 17 01:35:02.709797 kernel: GICv3: CPU34: found redistributor f0000 region 0:0x0000100100500000 May 17 01:35:02.709804 kernel: GICv3: CPU34: using allocated LPI pending table @0x00000800009a0000 May 17 01:35:02.709811 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 17 01:35:02.709818 kernel: CPU34: Booted secondary processor 0x00000f0000 [0x413fd0c1] May 17 01:35:02.709824 kernel: Detected PIPT I-cache on CPU35 May 17 01:35:02.709831 kernel: GICv3: CPU35: found redistributor 270000 region 0:0x0000100100b00000 May 17 01:35:02.709838 kernel: GICv3: CPU35: using allocated LPI pending table @0x00000800009b0000 May 17 01:35:02.709846 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 17 01:35:02.709853 kernel: CPU35: Booted secondary processor 0x0000270000 [0x413fd0c1] May 17 01:35:02.709859 kernel: Detected PIPT I-cache on CPU36 May 17 01:35:02.709867 kernel: GICv3: CPU36: found redistributor 30000 region 0:0x0000100100200000 May 17 01:35:02.709874 kernel: GICv3: CPU36: using allocated LPI pending table @0x00000800009c0000 May 17 01:35:02.709881 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 17 01:35:02.709888 kernel: CPU36: Booted secondary processor 0x0000030000 [0x413fd0c1] May 
17 01:35:02.709894 kernel: Detected PIPT I-cache on CPU37 May 17 01:35:02.709901 kernel: GICv3: CPU37: found redistributor 50000 region 0:0x0000100100280000 May 17 01:35:02.709908 kernel: GICv3: CPU37: using allocated LPI pending table @0x00000800009d0000 May 17 01:35:02.709915 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 17 01:35:02.709922 kernel: CPU37: Booted secondary processor 0x0000050000 [0x413fd0c1] May 17 01:35:02.709929 kernel: Detected PIPT I-cache on CPU38 May 17 01:35:02.709936 kernel: GICv3: CPU38: found redistributor 10000 region 0:0x0000100100180000 May 17 01:35:02.709943 kernel: GICv3: CPU38: using allocated LPI pending table @0x00000800009e0000 May 17 01:35:02.709950 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 17 01:35:02.709956 kernel: CPU38: Booted secondary processor 0x0000010000 [0x413fd0c1] May 17 01:35:02.709963 kernel: Detected PIPT I-cache on CPU39 May 17 01:35:02.709970 kernel: GICv3: CPU39: found redistributor 70000 region 0:0x0000100100300000 May 17 01:35:02.709977 kernel: GICv3: CPU39: using allocated LPI pending table @0x00000800009f0000 May 17 01:35:02.709984 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 17 01:35:02.709991 kernel: CPU39: Booted secondary processor 0x0000070000 [0x413fd0c1] May 17 01:35:02.709998 kernel: Detected PIPT I-cache on CPU40 May 17 01:35:02.710005 kernel: GICv3: CPU40: found redistributor 120100 region 0:0x00001001005e0000 May 17 01:35:02.710012 kernel: GICv3: CPU40: using allocated LPI pending table @0x0000080000a00000 May 17 01:35:02.710019 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 17 01:35:02.710026 kernel: CPU40: Booted secondary processor 0x0000120100 [0x413fd0c1] May 17 01:35:02.710033 kernel: Detected PIPT I-cache on CPU41 May 17 01:35:02.710039 kernel: GICv3: CPU41: found redistributor 1a0100 region 0:0x00001001007e0000 May 17 01:35:02.710049 kernel: GICv3: CPU41: using allocated LPI 
pending table @0x0000080000a10000 May 17 01:35:02.710056 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 17 01:35:02.710063 kernel: CPU41: Booted secondary processor 0x00001a0100 [0x413fd0c1] May 17 01:35:02.710069 kernel: Detected PIPT I-cache on CPU42 May 17 01:35:02.710076 kernel: GICv3: CPU42: found redistributor 140100 region 0:0x0000100100660000 May 17 01:35:02.710083 kernel: GICv3: CPU42: using allocated LPI pending table @0x0000080000a20000 May 17 01:35:02.710090 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 17 01:35:02.710097 kernel: CPU42: Booted secondary processor 0x0000140100 [0x413fd0c1] May 17 01:35:02.710104 kernel: Detected PIPT I-cache on CPU43 May 17 01:35:02.710110 kernel: GICv3: CPU43: found redistributor 1c0100 region 0:0x0000100100860000 May 17 01:35:02.710118 kernel: GICv3: CPU43: using allocated LPI pending table @0x0000080000a30000 May 17 01:35:02.710125 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 17 01:35:02.710132 kernel: CPU43: Booted secondary processor 0x00001c0100 [0x413fd0c1] May 17 01:35:02.710139 kernel: Detected PIPT I-cache on CPU44 May 17 01:35:02.710146 kernel: GICv3: CPU44: found redistributor 100100 region 0:0x0000100100560000 May 17 01:35:02.710153 kernel: GICv3: CPU44: using allocated LPI pending table @0x0000080000a40000 May 17 01:35:02.710159 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 17 01:35:02.710166 kernel: CPU44: Booted secondary processor 0x0000100100 [0x413fd0c1] May 17 01:35:02.710173 kernel: Detected PIPT I-cache on CPU45 May 17 01:35:02.710181 kernel: GICv3: CPU45: found redistributor 180100 region 0:0x0000100100760000 May 17 01:35:02.710188 kernel: GICv3: CPU45: using allocated LPI pending table @0x0000080000a50000 May 17 01:35:02.710195 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 17 01:35:02.710201 kernel: CPU45: Booted secondary processor 0x0000180100 [0x413fd0c1] 
May 17 01:35:02.710208 kernel: Detected PIPT I-cache on CPU46 May 17 01:35:02.710215 kernel: GICv3: CPU46: found redistributor 160100 region 0:0x00001001006e0000 May 17 01:35:02.710222 kernel: GICv3: CPU46: using allocated LPI pending table @0x0000080000a60000 May 17 01:35:02.710228 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 17 01:35:02.710235 kernel: CPU46: Booted secondary processor 0x0000160100 [0x413fd0c1] May 17 01:35:02.710242 kernel: Detected PIPT I-cache on CPU47 May 17 01:35:02.710250 kernel: GICv3: CPU47: found redistributor 1e0100 region 0:0x00001001008e0000 May 17 01:35:02.710258 kernel: GICv3: CPU47: using allocated LPI pending table @0x0000080000a70000 May 17 01:35:02.710264 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 17 01:35:02.710271 kernel: CPU47: Booted secondary processor 0x00001e0100 [0x413fd0c1] May 17 01:35:02.710278 kernel: Detected PIPT I-cache on CPU48 May 17 01:35:02.710285 kernel: GICv3: CPU48: found redistributor a0100 region 0:0x00001001003e0000 May 17 01:35:02.710292 kernel: GICv3: CPU48: using allocated LPI pending table @0x0000080000a80000 May 17 01:35:02.710298 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 17 01:35:02.710305 kernel: CPU48: Booted secondary processor 0x00000a0100 [0x413fd0c1] May 17 01:35:02.710313 kernel: Detected PIPT I-cache on CPU49 May 17 01:35:02.710320 kernel: GICv3: CPU49: found redistributor 220100 region 0:0x00001001009e0000 May 17 01:35:02.710326 kernel: GICv3: CPU49: using allocated LPI pending table @0x0000080000a90000 May 17 01:35:02.710333 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 17 01:35:02.710340 kernel: CPU49: Booted secondary processor 0x0000220100 [0x413fd0c1] May 17 01:35:02.710347 kernel: Detected PIPT I-cache on CPU50 May 17 01:35:02.710353 kernel: GICv3: CPU50: found redistributor c0100 region 0:0x0000100100460000 May 17 01:35:02.710360 kernel: GICv3: CPU50: using 
allocated LPI pending table @0x0000080000aa0000 May 17 01:35:02.710367 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 17 01:35:02.710374 kernel: CPU50: Booted secondary processor 0x00000c0100 [0x413fd0c1] May 17 01:35:02.710381 kernel: Detected PIPT I-cache on CPU51 May 17 01:35:02.710388 kernel: GICv3: CPU51: found redistributor 240100 region 0:0x0000100100a60000 May 17 01:35:02.710395 kernel: GICv3: CPU51: using allocated LPI pending table @0x0000080000ab0000 May 17 01:35:02.710402 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 17 01:35:02.710409 kernel: CPU51: Booted secondary processor 0x0000240100 [0x413fd0c1] May 17 01:35:02.710416 kernel: Detected PIPT I-cache on CPU52 May 17 01:35:02.710422 kernel: GICv3: CPU52: found redistributor 80100 region 0:0x0000100100360000 May 17 01:35:02.710429 kernel: GICv3: CPU52: using allocated LPI pending table @0x0000080000ac0000 May 17 01:35:02.710436 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 17 01:35:02.710444 kernel: CPU52: Booted secondary processor 0x0000080100 [0x413fd0c1] May 17 01:35:02.710450 kernel: Detected PIPT I-cache on CPU53 May 17 01:35:02.710457 kernel: GICv3: CPU53: found redistributor 200100 region 0:0x0000100100960000 May 17 01:35:02.710464 kernel: GICv3: CPU53: using allocated LPI pending table @0x0000080000ad0000 May 17 01:35:02.710471 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 17 01:35:02.710478 kernel: CPU53: Booted secondary processor 0x0000200100 [0x413fd0c1] May 17 01:35:02.710485 kernel: Detected PIPT I-cache on CPU54 May 17 01:35:02.710492 kernel: GICv3: CPU54: found redistributor e0100 region 0:0x00001001004e0000 May 17 01:35:02.710499 kernel: GICv3: CPU54: using allocated LPI pending table @0x0000080000ae0000 May 17 01:35:02.710507 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 17 01:35:02.710514 kernel: CPU54: Booted secondary processor 0x00000e0100 
[0x413fd0c1] May 17 01:35:02.710521 kernel: Detected PIPT I-cache on CPU55 May 17 01:35:02.710528 kernel: GICv3: CPU55: found redistributor 260100 region 0:0x0000100100ae0000 May 17 01:35:02.710535 kernel: GICv3: CPU55: using allocated LPI pending table @0x0000080000af0000 May 17 01:35:02.710541 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 17 01:35:02.710548 kernel: CPU55: Booted secondary processor 0x0000260100 [0x413fd0c1] May 17 01:35:02.710555 kernel: Detected PIPT I-cache on CPU56 May 17 01:35:02.710561 kernel: GICv3: CPU56: found redistributor 20100 region 0:0x00001001001e0000 May 17 01:35:02.710568 kernel: GICv3: CPU56: using allocated LPI pending table @0x0000080000b00000 May 17 01:35:02.710576 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 17 01:35:02.710583 kernel: CPU56: Booted secondary processor 0x0000020100 [0x413fd0c1] May 17 01:35:02.710590 kernel: Detected PIPT I-cache on CPU57 May 17 01:35:02.710596 kernel: GICv3: CPU57: found redistributor 40100 region 0:0x0000100100260000 May 17 01:35:02.710603 kernel: GICv3: CPU57: using allocated LPI pending table @0x0000080000b10000 May 17 01:35:02.710610 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 17 01:35:02.710617 kernel: CPU57: Booted secondary processor 0x0000040100 [0x413fd0c1] May 17 01:35:02.710623 kernel: Detected PIPT I-cache on CPU58 May 17 01:35:02.710630 kernel: GICv3: CPU58: found redistributor 100 region 0:0x0000100100160000 May 17 01:35:02.710638 kernel: GICv3: CPU58: using allocated LPI pending table @0x0000080000b20000 May 17 01:35:02.710645 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 17 01:35:02.710652 kernel: CPU58: Booted secondary processor 0x0000000100 [0x413fd0c1] May 17 01:35:02.710659 kernel: Detected PIPT I-cache on CPU59 May 17 01:35:02.710665 kernel: GICv3: CPU59: found redistributor 60100 region 0:0x00001001002e0000 May 17 01:35:02.710672 kernel: GICv3: CPU59: using 
allocated LPI pending table @0x0000080000b30000 May 17 01:35:02.710679 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 17 01:35:02.710686 kernel: CPU59: Booted secondary processor 0x0000060100 [0x413fd0c1] May 17 01:35:02.710693 kernel: Detected PIPT I-cache on CPU60 May 17 01:35:02.710699 kernel: GICv3: CPU60: found redistributor 130100 region 0:0x0000100100620000 May 17 01:35:02.710707 kernel: GICv3: CPU60: using allocated LPI pending table @0x0000080000b40000 May 17 01:35:02.710714 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 17 01:35:02.710721 kernel: CPU60: Booted secondary processor 0x0000130100 [0x413fd0c1] May 17 01:35:02.710727 kernel: Detected PIPT I-cache on CPU61 May 17 01:35:02.710734 kernel: GICv3: CPU61: found redistributor 1b0100 region 0:0x0000100100820000 May 17 01:35:02.710741 kernel: GICv3: CPU61: using allocated LPI pending table @0x0000080000b50000 May 17 01:35:02.710748 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 17 01:35:02.710755 kernel: CPU61: Booted secondary processor 0x00001b0100 [0x413fd0c1] May 17 01:35:02.710761 kernel: Detected PIPT I-cache on CPU62 May 17 01:35:02.710769 kernel: GICv3: CPU62: found redistributor 150100 region 0:0x00001001006a0000 May 17 01:35:02.710776 kernel: GICv3: CPU62: using allocated LPI pending table @0x0000080000b60000 May 17 01:35:02.710783 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 17 01:35:02.710790 kernel: CPU62: Booted secondary processor 0x0000150100 [0x413fd0c1] May 17 01:35:02.710796 kernel: Detected PIPT I-cache on CPU63 May 17 01:35:02.710803 kernel: GICv3: CPU63: found redistributor 1d0100 region 0:0x00001001008a0000 May 17 01:35:02.710810 kernel: GICv3: CPU63: using allocated LPI pending table @0x0000080000b70000 May 17 01:35:02.710817 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 17 01:35:02.710824 kernel: CPU63: Booted secondary processor 0x00001d0100 
[0x413fd0c1] May 17 01:35:02.710830 kernel: Detected PIPT I-cache on CPU64 May 17 01:35:02.710838 kernel: GICv3: CPU64: found redistributor 110100 region 0:0x00001001005a0000 May 17 01:35:02.710845 kernel: GICv3: CPU64: using allocated LPI pending table @0x0000080000b80000 May 17 01:35:02.710852 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 17 01:35:02.710859 kernel: CPU64: Booted secondary processor 0x0000110100 [0x413fd0c1] May 17 01:35:02.710865 kernel: Detected PIPT I-cache on CPU65 May 17 01:35:02.710872 kernel: GICv3: CPU65: found redistributor 190100 region 0:0x00001001007a0000 May 17 01:35:02.710879 kernel: GICv3: CPU65: using allocated LPI pending table @0x0000080000b90000 May 17 01:35:02.710886 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 17 01:35:02.710892 kernel: CPU65: Booted secondary processor 0x0000190100 [0x413fd0c1] May 17 01:35:02.710900 kernel: Detected PIPT I-cache on CPU66 May 17 01:35:02.710907 kernel: GICv3: CPU66: found redistributor 170100 region 0:0x0000100100720000 May 17 01:35:02.710914 kernel: GICv3: CPU66: using allocated LPI pending table @0x0000080000ba0000 May 17 01:35:02.710921 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 17 01:35:02.710927 kernel: CPU66: Booted secondary processor 0x0000170100 [0x413fd0c1] May 17 01:35:02.710934 kernel: Detected PIPT I-cache on CPU67 May 17 01:35:02.710941 kernel: GICv3: CPU67: found redistributor 1f0100 region 0:0x0000100100920000 May 17 01:35:02.710948 kernel: GICv3: CPU67: using allocated LPI pending table @0x0000080000bb0000 May 17 01:35:02.710955 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 17 01:35:02.710961 kernel: CPU67: Booted secondary processor 0x00001f0100 [0x413fd0c1] May 17 01:35:02.710969 kernel: Detected PIPT I-cache on CPU68 May 17 01:35:02.710976 kernel: GICv3: CPU68: found redistributor b0100 region 0:0x0000100100420000 May 17 01:35:02.710983 kernel: GICv3: CPU68: 
using allocated LPI pending table @0x0000080000bc0000 May 17 01:35:02.710990 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 17 01:35:02.710996 kernel: CPU68: Booted secondary processor 0x00000b0100 [0x413fd0c1] May 17 01:35:02.711003 kernel: Detected PIPT I-cache on CPU69 May 17 01:35:02.711010 kernel: GICv3: CPU69: found redistributor 230100 region 0:0x0000100100a20000 May 17 01:35:02.711017 kernel: GICv3: CPU69: using allocated LPI pending table @0x0000080000bd0000 May 17 01:35:02.711024 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 17 01:35:02.711032 kernel: CPU69: Booted secondary processor 0x0000230100 [0x413fd0c1] May 17 01:35:02.711038 kernel: Detected PIPT I-cache on CPU70 May 17 01:35:02.711047 kernel: GICv3: CPU70: found redistributor d0100 region 0:0x00001001004a0000 May 17 01:35:02.711054 kernel: GICv3: CPU70: using allocated LPI pending table @0x0000080000be0000 May 17 01:35:02.711061 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 17 01:35:02.711068 kernel: CPU70: Booted secondary processor 0x00000d0100 [0x413fd0c1] May 17 01:35:02.711074 kernel: Detected PIPT I-cache on CPU71 May 17 01:35:02.711081 kernel: GICv3: CPU71: found redistributor 250100 region 0:0x0000100100aa0000 May 17 01:35:02.711088 kernel: GICv3: CPU71: using allocated LPI pending table @0x0000080000bf0000 May 17 01:35:02.711096 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 17 01:35:02.711103 kernel: CPU71: Booted secondary processor 0x0000250100 [0x413fd0c1] May 17 01:35:02.711109 kernel: Detected PIPT I-cache on CPU72 May 17 01:35:02.711116 kernel: GICv3: CPU72: found redistributor 90100 region 0:0x00001001003a0000 May 17 01:35:02.711123 kernel: GICv3: CPU72: using allocated LPI pending table @0x0000080000c00000 May 17 01:35:02.711130 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 17 01:35:02.711137 kernel: CPU72: Booted secondary processor 
0x0000090100 [0x413fd0c1] May 17 01:35:02.711143 kernel: Detected PIPT I-cache on CPU73 May 17 01:35:02.711150 kernel: GICv3: CPU73: found redistributor 210100 region 0:0x00001001009a0000 May 17 01:35:02.711157 kernel: GICv3: CPU73: using allocated LPI pending table @0x0000080000c10000 May 17 01:35:02.711165 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 17 01:35:02.711171 kernel: CPU73: Booted secondary processor 0x0000210100 [0x413fd0c1] May 17 01:35:02.711178 kernel: Detected PIPT I-cache on CPU74 May 17 01:35:02.711185 kernel: GICv3: CPU74: found redistributor f0100 region 0:0x0000100100520000 May 17 01:35:02.711192 kernel: GICv3: CPU74: using allocated LPI pending table @0x0000080000c20000 May 17 01:35:02.711198 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 17 01:35:02.711205 kernel: CPU74: Booted secondary processor 0x00000f0100 [0x413fd0c1] May 17 01:35:02.711212 kernel: Detected PIPT I-cache on CPU75 May 17 01:35:02.711219 kernel: GICv3: CPU75: found redistributor 270100 region 0:0x0000100100b20000 May 17 01:35:02.711227 kernel: GICv3: CPU75: using allocated LPI pending table @0x0000080000c30000 May 17 01:35:02.711234 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 17 01:35:02.711241 kernel: CPU75: Booted secondary processor 0x0000270100 [0x413fd0c1] May 17 01:35:02.711247 kernel: Detected PIPT I-cache on CPU76 May 17 01:35:02.711254 kernel: GICv3: CPU76: found redistributor 30100 region 0:0x0000100100220000 May 17 01:35:02.711261 kernel: GICv3: CPU76: using allocated LPI pending table @0x0000080000c40000 May 17 01:35:02.711268 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 17 01:35:02.711274 kernel: CPU76: Booted secondary processor 0x0000030100 [0x413fd0c1] May 17 01:35:02.711281 kernel: Detected PIPT I-cache on CPU77 May 17 01:35:02.711288 kernel: GICv3: CPU77: found redistributor 50100 region 0:0x00001001002a0000 May 17 01:35:02.711295 kernel: 
GICv3: CPU77: using allocated LPI pending table @0x0000080000c50000 May 17 01:35:02.711302 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 17 01:35:02.711309 kernel: CPU77: Booted secondary processor 0x0000050100 [0x413fd0c1] May 17 01:35:02.711316 kernel: Detected PIPT I-cache on CPU78 May 17 01:35:02.711322 kernel: GICv3: CPU78: found redistributor 10100 region 0:0x00001001001a0000 May 17 01:35:02.711329 kernel: GICv3: CPU78: using allocated LPI pending table @0x0000080000c60000 May 17 01:35:02.711336 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 17 01:35:02.711343 kernel: CPU78: Booted secondary processor 0x0000010100 [0x413fd0c1] May 17 01:35:02.711349 kernel: Detected PIPT I-cache on CPU79 May 17 01:35:02.711357 kernel: GICv3: CPU79: found redistributor 70100 region 0:0x0000100100320000 May 17 01:35:02.711364 kernel: GICv3: CPU79: using allocated LPI pending table @0x0000080000c70000 May 17 01:35:02.711371 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 17 01:35:02.711378 kernel: CPU79: Booted secondary processor 0x0000070100 [0x413fd0c1] May 17 01:35:02.711384 kernel: smp: Brought up 1 node, 80 CPUs May 17 01:35:02.711391 kernel: SMP: Total of 80 processors activated. 
May 17 01:35:02.711398 kernel: CPU features: detected: 32-bit EL0 Support May 17 01:35:02.711405 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence May 17 01:35:02.711412 kernel: CPU features: detected: Common not Private translations May 17 01:35:02.711418 kernel: CPU features: detected: CRC32 instructions May 17 01:35:02.711426 kernel: CPU features: detected: RCpc load-acquire (LDAPR) May 17 01:35:02.711433 kernel: CPU features: detected: LSE atomic instructions May 17 01:35:02.711440 kernel: CPU features: detected: Privileged Access Never May 17 01:35:02.711447 kernel: CPU features: detected: RAS Extension Support May 17 01:35:02.711453 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS) May 17 01:35:02.711460 kernel: CPU: All CPU(s) started at EL2 May 17 01:35:02.711467 kernel: devtmpfs: initialized May 17 01:35:02.711474 kernel: KASLR enabled May 17 01:35:02.711481 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns May 17 01:35:02.711488 kernel: futex hash table entries: 32768 (order: 9, 2097152 bytes, linear) May 17 01:35:02.711495 kernel: pinctrl core: initialized pinctrl subsystem May 17 01:35:02.711502 kernel: SMBIOS 3.4.0 present. 
May 17 01:35:02.711509 kernel: DMI: GIGABYTE R272-P30-JG/MP32-AR0-JG, BIOS F17a (SCP: 1.07.20210713) 07/22/2021 May 17 01:35:02.711516 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family May 17 01:35:02.711523 kernel: DMA: preallocated 4096 KiB GFP_KERNEL pool for atomic allocations May 17 01:35:02.711529 kernel: DMA: preallocated 4096 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations May 17 01:35:02.711536 kernel: DMA: preallocated 4096 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations May 17 01:35:02.711543 kernel: audit: initializing netlink subsys (disabled) May 17 01:35:02.711551 kernel: audit: type=2000 audit(0.053:1): state=initialized audit_enabled=0 res=1 May 17 01:35:02.711558 kernel: thermal_sys: Registered thermal governor 'step_wise' May 17 01:35:02.711565 kernel: cpuidle: using governor menu May 17 01:35:02.711571 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. May 17 01:35:02.711578 kernel: ASID allocator initialised with 32768 entries May 17 01:35:02.711585 kernel: ACPI: bus type PCI registered May 17 01:35:02.711592 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 May 17 01:35:02.711598 kernel: Serial: AMBA PL011 UART driver May 17 01:35:02.711605 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages May 17 01:35:02.711613 kernel: HugeTLB registered 32.0 MiB page size, pre-allocated 0 pages May 17 01:35:02.711620 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages May 17 01:35:02.711627 kernel: HugeTLB registered 64.0 KiB page size, pre-allocated 0 pages May 17 01:35:02.711634 kernel: cryptd: max_cpu_qlen set to 1000 May 17 01:35:02.711641 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) May 17 01:35:02.711648 kernel: ACPI: Added _OSI(Module Device) May 17 01:35:02.711655 kernel: ACPI: Added _OSI(Processor Device) May 17 01:35:02.711661 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) May 17 01:35:02.711668 kernel: ACPI: Added _OSI(Processor Aggregator 
Device) May 17 01:35:02.711676 kernel: ACPI: Added _OSI(Linux-Dell-Video) May 17 01:35:02.711683 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio) May 17 01:35:02.711690 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics) May 17 01:35:02.711697 kernel: ACPI: 2 ACPI AML tables successfully acquired and loaded May 17 01:35:02.711704 kernel: ACPI: Interpreter enabled May 17 01:35:02.711710 kernel: ACPI: Using GIC for interrupt routing May 17 01:35:02.711717 kernel: ACPI: MCFG table detected, 8 entries May 17 01:35:02.711724 kernel: ACPI: IORT: SMMU-v3[33ffe0000000] Mapped to Proximity domain 0 May 17 01:35:02.711730 kernel: ACPI: IORT: SMMU-v3[37ffe0000000] Mapped to Proximity domain 0 May 17 01:35:02.711738 kernel: ACPI: IORT: SMMU-v3[3bffe0000000] Mapped to Proximity domain 0 May 17 01:35:02.711745 kernel: ACPI: IORT: SMMU-v3[3fffe0000000] Mapped to Proximity domain 0 May 17 01:35:02.711752 kernel: ACPI: IORT: SMMU-v3[23ffe0000000] Mapped to Proximity domain 0 May 17 01:35:02.711759 kernel: ACPI: IORT: SMMU-v3[27ffe0000000] Mapped to Proximity domain 0 May 17 01:35:02.711766 kernel: ACPI: IORT: SMMU-v3[2bffe0000000] Mapped to Proximity domain 0 May 17 01:35:02.711773 kernel: ACPI: IORT: SMMU-v3[2fffe0000000] Mapped to Proximity domain 0 May 17 01:35:02.711779 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x100002600000 (irq = 19, base_baud = 0) is a SBSA May 17 01:35:02.711786 kernel: printk: console [ttyAMA0] enabled May 17 01:35:02.711793 kernel: ARMH0011:01: ttyAMA1 at MMIO 0x100002620000 (irq = 20, base_baud = 0) is a SBSA May 17 01:35:02.711801 kernel: ACPI: PCI Root Bridge [PCI1] (domain 000d [bus 00-ff]) May 17 01:35:02.711918 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] May 17 01:35:02.711984 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug PME LTR] May 17 01:35:02.712042 kernel: acpi PNP0A08:00: _OSC: OS now controls [AER PCIeCapability] May 17 01:35:02.712104 kernel: acpi PNP0A08:00: MCFG 
quirk: ECAM at [mem 0x37fff0000000-0x37ffffffffff] for [bus 00-ff] with pci_32b_read_ops May 17 01:35:02.712160 kernel: acpi PNP0A08:00: ECAM area [mem 0x37fff0000000-0x37ffffffffff] reserved by PNP0C02:00 May 17 01:35:02.712217 kernel: acpi PNP0A08:00: ECAM at [mem 0x37fff0000000-0x37ffffffffff] for [bus 00-ff] May 17 01:35:02.712228 kernel: PCI host bridge to bus 000d:00 May 17 01:35:02.712292 kernel: pci_bus 000d:00: root bus resource [mem 0x50000000-0x5fffffff window] May 17 01:35:02.712346 kernel: pci_bus 000d:00: root bus resource [mem 0x340000000000-0x37ffdfffffff window] May 17 01:35:02.712397 kernel: pci_bus 000d:00: root bus resource [bus 00-ff] May 17 01:35:02.712467 kernel: pci 000d:00:00.0: [1def:e100] type 00 class 0x060000 May 17 01:35:02.712536 kernel: pci 000d:00:01.0: [1def:e101] type 01 class 0x060400 May 17 01:35:02.712597 kernel: pci 000d:00:01.0: enabling Extended Tags May 17 01:35:02.712657 kernel: pci 000d:00:01.0: supports D1 D2 May 17 01:35:02.712714 kernel: pci 000d:00:01.0: PME# supported from D0 D1 D3hot May 17 01:35:02.712781 kernel: pci 000d:00:02.0: [1def:e102] type 01 class 0x060400 May 17 01:35:02.712840 kernel: pci 000d:00:02.0: supports D1 D2 May 17 01:35:02.712898 kernel: pci 000d:00:02.0: PME# supported from D0 D1 D3hot May 17 01:35:02.712963 kernel: pci 000d:00:03.0: [1def:e103] type 01 class 0x060400 May 17 01:35:02.713022 kernel: pci 000d:00:03.0: supports D1 D2 May 17 01:35:02.713084 kernel: pci 000d:00:03.0: PME# supported from D0 D1 D3hot May 17 01:35:02.713150 kernel: pci 000d:00:04.0: [1def:e104] type 01 class 0x060400 May 17 01:35:02.713209 kernel: pci 000d:00:04.0: supports D1 D2 May 17 01:35:02.713267 kernel: pci 000d:00:04.0: PME# supported from D0 D1 D3hot May 17 01:35:02.713275 kernel: acpiphp: Slot [1] registered May 17 01:35:02.713284 kernel: acpiphp: Slot [2] registered May 17 01:35:02.713291 kernel: acpiphp: Slot [3] registered May 17 01:35:02.713298 kernel: acpiphp: Slot [4] registered May 17 01:35:02.713348 
kernel: pci_bus 000d:00: on NUMA node 0 May 17 01:35:02.713407 kernel: pci 000d:00:01.0: bridge window [io 0x1000-0x0fff] to [bus 01] add_size 1000 May 17 01:35:02.713465 kernel: pci 000d:00:01.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 01] add_size 200000 add_align 100000 May 17 01:35:02.713523 kernel: pci 000d:00:01.0: bridge window [mem 0x00100000-0x000fffff] to [bus 01] add_size 200000 add_align 100000 May 17 01:35:02.713581 kernel: pci 000d:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 02] add_size 1000 May 17 01:35:02.713642 kernel: pci 000d:00:02.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 02] add_size 200000 add_align 100000 May 17 01:35:02.713700 kernel: pci 000d:00:02.0: bridge window [mem 0x00100000-0x000fffff] to [bus 02] add_size 200000 add_align 100000 May 17 01:35:02.713765 kernel: pci 000d:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000 May 17 01:35:02.713822 kernel: pci 000d:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 03] add_size 200000 add_align 100000 May 17 01:35:02.713882 kernel: pci 000d:00:03.0: bridge window [mem 0x00100000-0x000fffff] to [bus 03] add_size 200000 add_align 100000 May 17 01:35:02.713943 kernel: pci 000d:00:04.0: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000 May 17 01:35:02.714001 kernel: pci 000d:00:04.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 04] add_size 200000 add_align 100000 May 17 01:35:02.714066 kernel: pci 000d:00:04.0: bridge window [mem 0x00100000-0x000fffff] to [bus 04] add_size 200000 add_align 100000 May 17 01:35:02.714126 kernel: pci 000d:00:01.0: BAR 14: assigned [mem 0x50000000-0x501fffff] May 17 01:35:02.714189 kernel: pci 000d:00:01.0: BAR 15: assigned [mem 0x340000000000-0x3400001fffff 64bit pref] May 17 01:35:02.714249 kernel: pci 000d:00:02.0: BAR 14: assigned [mem 0x50200000-0x503fffff] May 17 01:35:02.714307 kernel: pci 000d:00:02.0: BAR 15: assigned [mem 
0x340000200000-0x3400003fffff 64bit pref] May 17 01:35:02.714369 kernel: pci 000d:00:03.0: BAR 14: assigned [mem 0x50400000-0x505fffff] May 17 01:35:02.714429 kernel: pci 000d:00:03.0: BAR 15: assigned [mem 0x340000400000-0x3400005fffff 64bit pref] May 17 01:35:02.714491 kernel: pci 000d:00:04.0: BAR 14: assigned [mem 0x50600000-0x507fffff] May 17 01:35:02.714549 kernel: pci 000d:00:04.0: BAR 15: assigned [mem 0x340000600000-0x3400007fffff 64bit pref] May 17 01:35:02.714608 kernel: pci 000d:00:01.0: BAR 13: no space for [io size 0x1000] May 17 01:35:02.714666 kernel: pci 000d:00:01.0: BAR 13: failed to assign [io size 0x1000] May 17 01:35:02.714724 kernel: pci 000d:00:02.0: BAR 13: no space for [io size 0x1000] May 17 01:35:02.714782 kernel: pci 000d:00:02.0: BAR 13: failed to assign [io size 0x1000] May 17 01:35:02.714841 kernel: pci 000d:00:03.0: BAR 13: no space for [io size 0x1000] May 17 01:35:02.714900 kernel: pci 000d:00:03.0: BAR 13: failed to assign [io size 0x1000] May 17 01:35:02.714960 kernel: pci 000d:00:04.0: BAR 13: no space for [io size 0x1000] May 17 01:35:02.715020 kernel: pci 000d:00:04.0: BAR 13: failed to assign [io size 0x1000] May 17 01:35:02.715082 kernel: pci 000d:00:04.0: BAR 13: no space for [io size 0x1000] May 17 01:35:02.715141 kernel: pci 000d:00:04.0: BAR 13: failed to assign [io size 0x1000] May 17 01:35:02.715199 kernel: pci 000d:00:03.0: BAR 13: no space for [io size 0x1000] May 17 01:35:02.715258 kernel: pci 000d:00:03.0: BAR 13: failed to assign [io size 0x1000] May 17 01:35:02.715315 kernel: pci 000d:00:02.0: BAR 13: no space for [io size 0x1000] May 17 01:35:02.715378 kernel: pci 000d:00:02.0: BAR 13: failed to assign [io size 0x1000] May 17 01:35:02.715443 kernel: pci 000d:00:01.0: BAR 13: no space for [io size 0x1000] May 17 01:35:02.715500 kernel: pci 000d:00:01.0: BAR 13: failed to assign [io size 0x1000] May 17 01:35:02.715558 kernel: pci 000d:00:01.0: PCI bridge to [bus 01] May 17 01:35:02.715616 kernel: pci 
000d:00:01.0: bridge window [mem 0x50000000-0x501fffff] May 17 01:35:02.715674 kernel: pci 000d:00:01.0: bridge window [mem 0x340000000000-0x3400001fffff 64bit pref] May 17 01:35:02.715731 kernel: pci 000d:00:02.0: PCI bridge to [bus 02] May 17 01:35:02.715789 kernel: pci 000d:00:02.0: bridge window [mem 0x50200000-0x503fffff] May 17 01:35:02.715848 kernel: pci 000d:00:02.0: bridge window [mem 0x340000200000-0x3400003fffff 64bit pref] May 17 01:35:02.715905 kernel: pci 000d:00:03.0: PCI bridge to [bus 03] May 17 01:35:02.715963 kernel: pci 000d:00:03.0: bridge window [mem 0x50400000-0x505fffff] May 17 01:35:02.716022 kernel: pci 000d:00:03.0: bridge window [mem 0x340000400000-0x3400005fffff 64bit pref] May 17 01:35:02.716084 kernel: pci 000d:00:04.0: PCI bridge to [bus 04] May 17 01:35:02.716142 kernel: pci 000d:00:04.0: bridge window [mem 0x50600000-0x507fffff] May 17 01:35:02.716201 kernel: pci 000d:00:04.0: bridge window [mem 0x340000600000-0x3400007fffff 64bit pref] May 17 01:35:02.716255 kernel: pci_bus 000d:00: resource 4 [mem 0x50000000-0x5fffffff window] May 17 01:35:02.716305 kernel: pci_bus 000d:00: resource 5 [mem 0x340000000000-0x37ffdfffffff window] May 17 01:35:02.716371 kernel: pci_bus 000d:01: resource 1 [mem 0x50000000-0x501fffff] May 17 01:35:02.716427 kernel: pci_bus 000d:01: resource 2 [mem 0x340000000000-0x3400001fffff 64bit pref] May 17 01:35:02.716489 kernel: pci_bus 000d:02: resource 1 [mem 0x50200000-0x503fffff] May 17 01:35:02.716546 kernel: pci_bus 000d:02: resource 2 [mem 0x340000200000-0x3400003fffff 64bit pref] May 17 01:35:02.716616 kernel: pci_bus 000d:03: resource 1 [mem 0x50400000-0x505fffff] May 17 01:35:02.716671 kernel: pci_bus 000d:03: resource 2 [mem 0x340000400000-0x3400005fffff 64bit pref] May 17 01:35:02.716732 kernel: pci_bus 000d:04: resource 1 [mem 0x50600000-0x507fffff] May 17 01:35:02.716788 kernel: pci_bus 000d:04: resource 2 [mem 0x340000600000-0x3400007fffff 64bit pref] May 17 01:35:02.716797 kernel: ACPI: PCI Root 
Bridge [PCI3] (domain 0000 [bus 00-ff]) May 17 01:35:02.716860 kernel: acpi PNP0A08:01: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] May 17 01:35:02.716920 kernel: acpi PNP0A08:01: _OSC: platform does not support [PCIeHotplug PME LTR] May 17 01:35:02.716976 kernel: acpi PNP0A08:01: _OSC: OS now controls [AER PCIeCapability] May 17 01:35:02.717032 kernel: acpi PNP0A08:01: MCFG quirk: ECAM at [mem 0x3ffff0000000-0x3fffffffffff] for [bus 00-ff] with pci_32b_read_ops May 17 01:35:02.717099 kernel: acpi PNP0A08:01: ECAM area [mem 0x3ffff0000000-0x3fffffffffff] reserved by PNP0C02:00 May 17 01:35:02.717155 kernel: acpi PNP0A08:01: ECAM at [mem 0x3ffff0000000-0x3fffffffffff] for [bus 00-ff] May 17 01:35:02.717165 kernel: PCI host bridge to bus 0000:00 May 17 01:35:02.717225 kernel: pci_bus 0000:00: root bus resource [mem 0x70000000-0x7fffffff window] May 17 01:35:02.717279 kernel: pci_bus 0000:00: root bus resource [mem 0x3c0000000000-0x3fffdfffffff window] May 17 01:35:02.717331 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] May 17 01:35:02.717396 kernel: pci 0000:00:00.0: [1def:e100] type 00 class 0x060000 May 17 01:35:02.717461 kernel: pci 0000:00:01.0: [1def:e101] type 01 class 0x060400 May 17 01:35:02.717521 kernel: pci 0000:00:01.0: enabling Extended Tags May 17 01:35:02.717580 kernel: pci 0000:00:01.0: supports D1 D2 May 17 01:35:02.717640 kernel: pci 0000:00:01.0: PME# supported from D0 D1 D3hot May 17 01:35:02.717706 kernel: pci 0000:00:02.0: [1def:e102] type 01 class 0x060400 May 17 01:35:02.717765 kernel: pci 0000:00:02.0: supports D1 D2 May 17 01:35:02.717823 kernel: pci 0000:00:02.0: PME# supported from D0 D1 D3hot May 17 01:35:02.717887 kernel: pci 0000:00:03.0: [1def:e103] type 01 class 0x060400 May 17 01:35:02.717947 kernel: pci 0000:00:03.0: supports D1 D2 May 17 01:35:02.718004 kernel: pci 0000:00:03.0: PME# supported from D0 D1 D3hot May 17 01:35:02.718076 kernel: pci 0000:00:04.0: [1def:e104] type 01 class 0x060400 May 
17 01:35:02.718134 kernel: pci 0000:00:04.0: supports D1 D2 May 17 01:35:02.718196 kernel: pci 0000:00:04.0: PME# supported from D0 D1 D3hot May 17 01:35:02.718205 kernel: acpiphp: Slot [1-1] registered May 17 01:35:02.718212 kernel: acpiphp: Slot [2-1] registered May 17 01:35:02.718219 kernel: acpiphp: Slot [3-1] registered May 17 01:35:02.718226 kernel: acpiphp: Slot [4-1] registered May 17 01:35:02.718278 kernel: pci_bus 0000:00: on NUMA node 0 May 17 01:35:02.718343 kernel: pci 0000:00:01.0: bridge window [io 0x1000-0x0fff] to [bus 01] add_size 1000 May 17 01:35:02.718402 kernel: pci 0000:00:01.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 01] add_size 200000 add_align 100000 May 17 01:35:02.718460 kernel: pci 0000:00:01.0: bridge window [mem 0x00100000-0x000fffff] to [bus 01] add_size 200000 add_align 100000 May 17 01:35:02.718520 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 02] add_size 1000 May 17 01:35:02.718578 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 02] add_size 200000 add_align 100000 May 17 01:35:02.718637 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x000fffff] to [bus 02] add_size 200000 add_align 100000 May 17 01:35:02.718695 kernel: pci 0000:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000 May 17 01:35:02.718756 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 03] add_size 200000 add_align 100000 May 17 01:35:02.718814 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff] to [bus 03] add_size 200000 add_align 100000 May 17 01:35:02.718873 kernel: pci 0000:00:04.0: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000 May 17 01:35:02.718931 kernel: pci 0000:00:04.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 04] add_size 200000 add_align 100000 May 17 01:35:02.718988 kernel: pci 0000:00:04.0: bridge window [mem 0x00100000-0x000fffff] to [bus 04] add_size 200000 
add_align 100000 May 17 01:35:02.719049 kernel: pci 0000:00:01.0: BAR 14: assigned [mem 0x70000000-0x701fffff] May 17 01:35:02.719108 kernel: pci 0000:00:01.0: BAR 15: assigned [mem 0x3c0000000000-0x3c00001fffff 64bit pref] May 17 01:35:02.719169 kernel: pci 0000:00:02.0: BAR 14: assigned [mem 0x70200000-0x703fffff] May 17 01:35:02.719227 kernel: pci 0000:00:02.0: BAR 15: assigned [mem 0x3c0000200000-0x3c00003fffff 64bit pref] May 17 01:35:02.719288 kernel: pci 0000:00:03.0: BAR 14: assigned [mem 0x70400000-0x705fffff] May 17 01:35:02.719345 kernel: pci 0000:00:03.0: BAR 15: assigned [mem 0x3c0000400000-0x3c00005fffff 64bit pref] May 17 01:35:02.719402 kernel: pci 0000:00:04.0: BAR 14: assigned [mem 0x70600000-0x707fffff] May 17 01:35:02.719461 kernel: pci 0000:00:04.0: BAR 15: assigned [mem 0x3c0000600000-0x3c00007fffff 64bit pref] May 17 01:35:02.719518 kernel: pci 0000:00:01.0: BAR 13: no space for [io size 0x1000] May 17 01:35:02.719577 kernel: pci 0000:00:01.0: BAR 13: failed to assign [io size 0x1000] May 17 01:35:02.719637 kernel: pci 0000:00:02.0: BAR 13: no space for [io size 0x1000] May 17 01:35:02.719696 kernel: pci 0000:00:02.0: BAR 13: failed to assign [io size 0x1000] May 17 01:35:02.719754 kernel: pci 0000:00:03.0: BAR 13: no space for [io size 0x1000] May 17 01:35:02.719812 kernel: pci 0000:00:03.0: BAR 13: failed to assign [io size 0x1000] May 17 01:35:02.719871 kernel: pci 0000:00:04.0: BAR 13: no space for [io size 0x1000] May 17 01:35:02.719928 kernel: pci 0000:00:04.0: BAR 13: failed to assign [io size 0x1000] May 17 01:35:02.719986 kernel: pci 0000:00:04.0: BAR 13: no space for [io size 0x1000] May 17 01:35:02.720043 kernel: pci 0000:00:04.0: BAR 13: failed to assign [io size 0x1000] May 17 01:35:02.720112 kernel: pci 0000:00:03.0: BAR 13: no space for [io size 0x1000] May 17 01:35:02.720170 kernel: pci 0000:00:03.0: BAR 13: failed to assign [io size 0x1000] May 17 01:35:02.720229 kernel: pci 0000:00:02.0: BAR 13: no space for [io size 0x1000] 
May 17 01:35:02.720286 kernel: pci 0000:00:02.0: BAR 13: failed to assign [io size 0x1000] May 17 01:35:02.720344 kernel: pci 0000:00:01.0: BAR 13: no space for [io size 0x1000] May 17 01:35:02.720404 kernel: pci 0000:00:01.0: BAR 13: failed to assign [io size 0x1000] May 17 01:35:02.720461 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] May 17 01:35:02.720520 kernel: pci 0000:00:01.0: bridge window [mem 0x70000000-0x701fffff] May 17 01:35:02.720579 kernel: pci 0000:00:01.0: bridge window [mem 0x3c0000000000-0x3c00001fffff 64bit pref] May 17 01:35:02.720637 kernel: pci 0000:00:02.0: PCI bridge to [bus 02] May 17 01:35:02.720694 kernel: pci 0000:00:02.0: bridge window [mem 0x70200000-0x703fffff] May 17 01:35:02.720752 kernel: pci 0000:00:02.0: bridge window [mem 0x3c0000200000-0x3c00003fffff 64bit pref] May 17 01:35:02.720811 kernel: pci 0000:00:03.0: PCI bridge to [bus 03] May 17 01:35:02.720870 kernel: pci 0000:00:03.0: bridge window [mem 0x70400000-0x705fffff] May 17 01:35:02.720929 kernel: pci 0000:00:03.0: bridge window [mem 0x3c0000400000-0x3c00005fffff 64bit pref] May 17 01:35:02.720987 kernel: pci 0000:00:04.0: PCI bridge to [bus 04] May 17 01:35:02.721048 kernel: pci 0000:00:04.0: bridge window [mem 0x70600000-0x707fffff] May 17 01:35:02.721111 kernel: pci 0000:00:04.0: bridge window [mem 0x3c0000600000-0x3c00007fffff 64bit pref] May 17 01:35:02.721167 kernel: pci_bus 0000:00: resource 4 [mem 0x70000000-0x7fffffff window] May 17 01:35:02.721221 kernel: pci_bus 0000:00: resource 5 [mem 0x3c0000000000-0x3fffdfffffff window] May 17 01:35:02.721284 kernel: pci_bus 0000:01: resource 1 [mem 0x70000000-0x701fffff] May 17 01:35:02.721340 kernel: pci_bus 0000:01: resource 2 [mem 0x3c0000000000-0x3c00001fffff 64bit pref] May 17 01:35:02.721402 kernel: pci_bus 0000:02: resource 1 [mem 0x70200000-0x703fffff] May 17 01:35:02.721458 kernel: pci_bus 0000:02: resource 2 [mem 0x3c0000200000-0x3c00003fffff 64bit pref] May 17 01:35:02.721528 kernel: pci_bus 0000:03: resource 
1 [mem 0x70400000-0x705fffff] May 17 01:35:02.721584 kernel: pci_bus 0000:03: resource 2 [mem 0x3c0000400000-0x3c00005fffff 64bit pref] May 17 01:35:02.721645 kernel: pci_bus 0000:04: resource 1 [mem 0x70600000-0x707fffff] May 17 01:35:02.721699 kernel: pci_bus 0000:04: resource 2 [mem 0x3c0000600000-0x3c00007fffff 64bit pref] May 17 01:35:02.721709 kernel: ACPI: PCI Root Bridge [PCI7] (domain 0005 [bus 00-ff]) May 17 01:35:02.721774 kernel: acpi PNP0A08:02: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] May 17 01:35:02.721831 kernel: acpi PNP0A08:02: _OSC: platform does not support [PCIeHotplug PME LTR] May 17 01:35:02.721890 kernel: acpi PNP0A08:02: _OSC: OS now controls [AER PCIeCapability] May 17 01:35:02.721946 kernel: acpi PNP0A08:02: MCFG quirk: ECAM at [mem 0x2ffff0000000-0x2fffffffffff] for [bus 00-ff] with pci_32b_read_ops May 17 01:35:02.722003 kernel: acpi PNP0A08:02: ECAM area [mem 0x2ffff0000000-0x2fffffffffff] reserved by PNP0C02:00 May 17 01:35:02.722075 kernel: acpi PNP0A08:02: ECAM at [mem 0x2ffff0000000-0x2fffffffffff] for [bus 00-ff] May 17 01:35:02.722085 kernel: PCI host bridge to bus 0005:00 May 17 01:35:02.722145 kernel: pci_bus 0005:00: root bus resource [mem 0x30000000-0x3fffffff window] May 17 01:35:02.722198 kernel: pci_bus 0005:00: root bus resource [mem 0x2c0000000000-0x2fffdfffffff window] May 17 01:35:02.722248 kernel: pci_bus 0005:00: root bus resource [bus 00-ff] May 17 01:35:02.722313 kernel: pci 0005:00:00.0: [1def:e110] type 00 class 0x060000 May 17 01:35:02.722377 kernel: pci 0005:00:01.0: [1def:e111] type 01 class 0x060400 May 17 01:35:02.722434 kernel: pci 0005:00:01.0: supports D1 D2 May 17 01:35:02.722493 kernel: pci 0005:00:01.0: PME# supported from D0 D1 D3hot May 17 01:35:02.722557 kernel: pci 0005:00:03.0: [1def:e113] type 01 class 0x060400 May 17 01:35:02.722618 kernel: pci 0005:00:03.0: supports D1 D2 May 17 01:35:02.722675 kernel: pci 0005:00:03.0: PME# supported from D0 D1 D3hot May 17 
01:35:02.722741 kernel: pci 0005:00:05.0: [1def:e115] type 01 class 0x060400 May 17 01:35:02.722799 kernel: pci 0005:00:05.0: supports D1 D2 May 17 01:35:02.722857 kernel: pci 0005:00:05.0: PME# supported from D0 D1 D3hot May 17 01:35:02.722920 kernel: pci 0005:00:07.0: [1def:e117] type 01 class 0x060400 May 17 01:35:02.722979 kernel: pci 0005:00:07.0: supports D1 D2 May 17 01:35:02.723039 kernel: pci 0005:00:07.0: PME# supported from D0 D1 D3hot May 17 01:35:02.723051 kernel: acpiphp: Slot [1-2] registered May 17 01:35:02.723058 kernel: acpiphp: Slot [2-2] registered May 17 01:35:02.723124 kernel: pci 0005:03:00.0: [144d:a808] type 00 class 0x010802 May 17 01:35:02.723186 kernel: pci 0005:03:00.0: reg 0x10: [mem 0x30110000-0x30113fff 64bit] May 17 01:35:02.723248 kernel: pci 0005:03:00.0: reg 0x30: [mem 0x30100000-0x3010ffff pref] May 17 01:35:02.723318 kernel: pci 0005:04:00.0: [144d:a808] type 00 class 0x010802 May 17 01:35:02.723381 kernel: pci 0005:04:00.0: reg 0x10: [mem 0x30010000-0x30013fff 64bit] May 17 01:35:02.723442 kernel: pci 0005:04:00.0: reg 0x30: [mem 0x30000000-0x3000ffff pref] May 17 01:35:02.723494 kernel: pci_bus 0005:00: on NUMA node 0 May 17 01:35:02.723555 kernel: pci 0005:00:01.0: bridge window [io 0x1000-0x0fff] to [bus 01] add_size 1000 May 17 01:35:02.723614 kernel: pci 0005:00:01.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 01] add_size 200000 add_align 100000 May 17 01:35:02.723673 kernel: pci 0005:00:01.0: bridge window [mem 0x00100000-0x000fffff] to [bus 01] add_size 200000 add_align 100000 May 17 01:35:02.723734 kernel: pci 0005:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 02] add_size 1000 May 17 01:35:02.723793 kernel: pci 0005:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 02] add_size 200000 add_align 100000 May 17 01:35:02.723854 kernel: pci 0005:00:03.0: bridge window [mem 0x00100000-0x000fffff] to [bus 02] add_size 200000 add_align 100000 May 17 01:35:02.723911 kernel: pci 
0005:00:05.0: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000 May 17 01:35:02.723970 kernel: pci 0005:00:05.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 03] add_size 200000 add_align 100000 May 17 01:35:02.724027 kernel: pci 0005:00:05.0: bridge window [mem 0x00100000-0x001fffff] to [bus 03] add_size 100000 add_align 100000 May 17 01:35:02.724091 kernel: pci 0005:00:07.0: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000 May 17 01:35:02.724150 kernel: pci 0005:00:07.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 04] add_size 200000 add_align 100000 May 17 01:35:02.724209 kernel: pci 0005:00:07.0: bridge window [mem 0x00100000-0x001fffff] to [bus 04] add_size 100000 add_align 100000 May 17 01:35:02.724270 kernel: pci 0005:00:01.0: BAR 14: assigned [mem 0x30000000-0x301fffff] May 17 01:35:02.724328 kernel: pci 0005:00:01.0: BAR 15: assigned [mem 0x2c0000000000-0x2c00001fffff 64bit pref] May 17 01:35:02.724388 kernel: pci 0005:00:03.0: BAR 14: assigned [mem 0x30200000-0x303fffff] May 17 01:35:02.724445 kernel: pci 0005:00:03.0: BAR 15: assigned [mem 0x2c0000200000-0x2c00003fffff 64bit pref] May 17 01:35:02.724507 kernel: pci 0005:00:05.0: BAR 14: assigned [mem 0x30400000-0x305fffff] May 17 01:35:02.724564 kernel: pci 0005:00:05.0: BAR 15: assigned [mem 0x2c0000400000-0x2c00005fffff 64bit pref] May 17 01:35:02.724625 kernel: pci 0005:00:07.0: BAR 14: assigned [mem 0x30600000-0x307fffff] May 17 01:35:02.724683 kernel: pci 0005:00:07.0: BAR 15: assigned [mem 0x2c0000600000-0x2c00007fffff 64bit pref] May 17 01:35:02.724740 kernel: pci 0005:00:01.0: BAR 13: no space for [io size 0x1000] May 17 01:35:02.724799 kernel: pci 0005:00:01.0: BAR 13: failed to assign [io size 0x1000] May 17 01:35:02.724856 kernel: pci 0005:00:03.0: BAR 13: no space for [io size 0x1000] May 17 01:35:02.724915 kernel: pci 0005:00:03.0: BAR 13: failed to assign [io size 0x1000] May 17 01:35:02.724971 kernel: pci 0005:00:05.0: BAR 13: no space 
for [io size 0x1000] May 17 01:35:02.725028 kernel: pci 0005:00:05.0: BAR 13: failed to assign [io size 0x1000] May 17 01:35:02.725091 kernel: pci 0005:00:07.0: BAR 13: no space for [io size 0x1000] May 17 01:35:02.725148 kernel: pci 0005:00:07.0: BAR 13: failed to assign [io size 0x1000] May 17 01:35:02.725207 kernel: pci 0005:00:07.0: BAR 13: no space for [io size 0x1000] May 17 01:35:02.725264 kernel: pci 0005:00:07.0: BAR 13: failed to assign [io size 0x1000] May 17 01:35:02.725322 kernel: pci 0005:00:05.0: BAR 13: no space for [io size 0x1000] May 17 01:35:02.725379 kernel: pci 0005:00:05.0: BAR 13: failed to assign [io size 0x1000] May 17 01:35:02.725436 kernel: pci 0005:00:03.0: BAR 13: no space for [io size 0x1000] May 17 01:35:02.725493 kernel: pci 0005:00:03.0: BAR 13: failed to assign [io size 0x1000] May 17 01:35:02.725550 kernel: pci 0005:00:01.0: BAR 13: no space for [io size 0x1000] May 17 01:35:02.725609 kernel: pci 0005:00:01.0: BAR 13: failed to assign [io size 0x1000] May 17 01:35:02.725667 kernel: pci 0005:00:01.0: PCI bridge to [bus 01] May 17 01:35:02.725726 kernel: pci 0005:00:01.0: bridge window [mem 0x30000000-0x301fffff] May 17 01:35:02.725784 kernel: pci 0005:00:01.0: bridge window [mem 0x2c0000000000-0x2c00001fffff 64bit pref] May 17 01:35:02.725842 kernel: pci 0005:00:03.0: PCI bridge to [bus 02] May 17 01:35:02.725899 kernel: pci 0005:00:03.0: bridge window [mem 0x30200000-0x303fffff] May 17 01:35:02.725956 kernel: pci 0005:00:03.0: bridge window [mem 0x2c0000200000-0x2c00003fffff 64bit pref] May 17 01:35:02.726020 kernel: pci 0005:03:00.0: BAR 6: assigned [mem 0x30400000-0x3040ffff pref] May 17 01:35:02.726083 kernel: pci 0005:03:00.0: BAR 0: assigned [mem 0x30410000-0x30413fff 64bit] May 17 01:35:02.726141 kernel: pci 0005:00:05.0: PCI bridge to [bus 03] May 17 01:35:02.726199 kernel: pci 0005:00:05.0: bridge window [mem 0x30400000-0x305fffff] May 17 01:35:02.726258 kernel: pci 0005:00:05.0: bridge window [mem 
0x2c0000400000-0x2c00005fffff 64bit pref] May 17 01:35:02.726318 kernel: pci 0005:04:00.0: BAR 6: assigned [mem 0x30600000-0x3060ffff pref] May 17 01:35:02.726379 kernel: pci 0005:04:00.0: BAR 0: assigned [mem 0x30610000-0x30613fff 64bit] May 17 01:35:02.726438 kernel: pci 0005:00:07.0: PCI bridge to [bus 04] May 17 01:35:02.726496 kernel: pci 0005:00:07.0: bridge window [mem 0x30600000-0x307fffff] May 17 01:35:02.726554 kernel: pci 0005:00:07.0: bridge window [mem 0x2c0000600000-0x2c00007fffff 64bit pref] May 17 01:35:02.726606 kernel: pci_bus 0005:00: resource 4 [mem 0x30000000-0x3fffffff window] May 17 01:35:02.726659 kernel: pci_bus 0005:00: resource 5 [mem 0x2c0000000000-0x2fffdfffffff window] May 17 01:35:02.726721 kernel: pci_bus 0005:01: resource 1 [mem 0x30000000-0x301fffff] May 17 01:35:02.726779 kernel: pci_bus 0005:01: resource 2 [mem 0x2c0000000000-0x2c00001fffff 64bit pref] May 17 01:35:02.726847 kernel: pci_bus 0005:02: resource 1 [mem 0x30200000-0x303fffff] May 17 01:35:02.726903 kernel: pci_bus 0005:02: resource 2 [mem 0x2c0000200000-0x2c00003fffff 64bit pref] May 17 01:35:02.726965 kernel: pci_bus 0005:03: resource 1 [mem 0x30400000-0x305fffff] May 17 01:35:02.727018 kernel: pci_bus 0005:03: resource 2 [mem 0x2c0000400000-0x2c00005fffff 64bit pref] May 17 01:35:02.727084 kernel: pci_bus 0005:04: resource 1 [mem 0x30600000-0x307fffff] May 17 01:35:02.727141 kernel: pci_bus 0005:04: resource 2 [mem 0x2c0000600000-0x2c00007fffff 64bit pref] May 17 01:35:02.727150 kernel: ACPI: PCI Root Bridge [PCI5] (domain 0003 [bus 00-ff]) May 17 01:35:02.727213 kernel: acpi PNP0A08:03: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] May 17 01:35:02.727270 kernel: acpi PNP0A08:03: _OSC: platform does not support [PCIeHotplug PME LTR] May 17 01:35:02.727326 kernel: acpi PNP0A08:03: _OSC: OS now controls [AER PCIeCapability] May 17 01:35:02.727381 kernel: acpi PNP0A08:03: MCFG quirk: ECAM at [mem 0x27fff0000000-0x27ffffffffff] for [bus 00-ff] 
with pci_32b_read_ops May 17 01:35:02.727439 kernel: acpi PNP0A08:03: ECAM area [mem 0x27fff0000000-0x27ffffffffff] reserved by PNP0C02:00 May 17 01:35:02.727494 kernel: acpi PNP0A08:03: ECAM at [mem 0x27fff0000000-0x27ffffffffff] for [bus 00-ff] May 17 01:35:02.727503 kernel: PCI host bridge to bus 0003:00 May 17 01:35:02.727562 kernel: pci_bus 0003:00: root bus resource [mem 0x10000000-0x1fffffff window] May 17 01:35:02.727614 kernel: pci_bus 0003:00: root bus resource [mem 0x240000000000-0x27ffdfffffff window] May 17 01:35:02.727664 kernel: pci_bus 0003:00: root bus resource [bus 00-ff] May 17 01:35:02.727729 kernel: pci 0003:00:00.0: [1def:e110] type 00 class 0x060000 May 17 01:35:02.727797 kernel: pci 0003:00:01.0: [1def:e111] type 01 class 0x060400 May 17 01:35:02.727856 kernel: pci 0003:00:01.0: supports D1 D2 May 17 01:35:02.727914 kernel: pci 0003:00:01.0: PME# supported from D0 D1 D3hot May 17 01:35:02.727980 kernel: pci 0003:00:03.0: [1def:e113] type 01 class 0x060400 May 17 01:35:02.728038 kernel: pci 0003:00:03.0: supports D1 D2 May 17 01:35:02.728100 kernel: pci 0003:00:03.0: PME# supported from D0 D1 D3hot May 17 01:35:02.728167 kernel: pci 0003:00:05.0: [1def:e115] type 01 class 0x060400 May 17 01:35:02.728227 kernel: pci 0003:00:05.0: supports D1 D2 May 17 01:35:02.728286 kernel: pci 0003:00:05.0: PME# supported from D0 D1 D3hot May 17 01:35:02.728295 kernel: acpiphp: Slot [1-3] registered May 17 01:35:02.728302 kernel: acpiphp: Slot [2-3] registered May 17 01:35:02.728366 kernel: pci 0003:03:00.0: [8086:1521] type 00 class 0x020000 May 17 01:35:02.728429 kernel: pci 0003:03:00.0: reg 0x10: [mem 0x10020000-0x1003ffff] May 17 01:35:02.728489 kernel: pci 0003:03:00.0: reg 0x18: [io 0x0020-0x003f] May 17 01:35:02.728550 kernel: pci 0003:03:00.0: reg 0x1c: [mem 0x10044000-0x10047fff] May 17 01:35:02.728609 kernel: pci 0003:03:00.0: PME# supported from D0 D3hot D3cold May 17 01:35:02.728669 kernel: pci 0003:03:00.0: reg 0x184: [mem 
0x240000060000-0x240000063fff 64bit pref] May 17 01:35:02.728729 kernel: pci 0003:03:00.0: VF(n) BAR0 space: [mem 0x240000060000-0x24000007ffff 64bit pref] (contains BAR0 for 8 VFs) May 17 01:35:02.728789 kernel: pci 0003:03:00.0: reg 0x190: [mem 0x240000040000-0x240000043fff 64bit pref] May 17 01:35:02.728852 kernel: pci 0003:03:00.0: VF(n) BAR3 space: [mem 0x240000040000-0x24000005ffff 64bit pref] (contains BAR3 for 8 VFs) May 17 01:35:02.728912 kernel: pci 0003:03:00.0: 8.000 Gb/s available PCIe bandwidth, limited by 5.0 GT/s PCIe x2 link at 0003:00:05.0 (capable of 16.000 Gb/s with 5.0 GT/s PCIe x4 link) May 17 01:35:02.728979 kernel: pci 0003:03:00.1: [8086:1521] type 00 class 0x020000 May 17 01:35:02.729041 kernel: pci 0003:03:00.1: reg 0x10: [mem 0x10000000-0x1001ffff] May 17 01:35:02.729116 kernel: pci 0003:03:00.1: reg 0x18: [io 0x0000-0x001f] May 17 01:35:02.729179 kernel: pci 0003:03:00.1: reg 0x1c: [mem 0x10040000-0x10043fff] May 17 01:35:02.729243 kernel: pci 0003:03:00.1: PME# supported from D0 D3hot D3cold May 17 01:35:02.729305 kernel: pci 0003:03:00.1: reg 0x184: [mem 0x240000020000-0x240000023fff 64bit pref] May 17 01:35:02.729367 kernel: pci 0003:03:00.1: VF(n) BAR0 space: [mem 0x240000020000-0x24000003ffff 64bit pref] (contains BAR0 for 8 VFs) May 17 01:35:02.729427 kernel: pci 0003:03:00.1: reg 0x190: [mem 0x240000000000-0x240000003fff 64bit pref] May 17 01:35:02.729490 kernel: pci 0003:03:00.1: VF(n) BAR3 space: [mem 0x240000000000-0x24000001ffff 64bit pref] (contains BAR3 for 8 VFs) May 17 01:35:02.729543 kernel: pci_bus 0003:00: on NUMA node 0 May 17 01:35:02.729602 kernel: pci 0003:00:01.0: bridge window [io 0x1000-0x0fff] to [bus 01] add_size 1000 May 17 01:35:02.729661 kernel: pci 0003:00:01.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 01] add_size 200000 add_align 100000 May 17 01:35:02.729718 kernel: pci 0003:00:01.0: bridge window [mem 0x00100000-0x000fffff] to [bus 01] add_size 200000 add_align 100000 May 17 
01:35:02.729776 kernel: pci 0003:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 02] add_size 1000 May 17 01:35:02.729834 kernel: pci 0003:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 02] add_size 200000 add_align 100000 May 17 01:35:02.729894 kernel: pci 0003:00:03.0: bridge window [mem 0x00100000-0x000fffff] to [bus 02] add_size 200000 add_align 100000 May 17 01:35:02.729953 kernel: pci 0003:00:05.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 03-04] add_size 300000 add_align 100000 May 17 01:35:02.730011 kernel: pci 0003:00:05.0: bridge window [mem 0x00100000-0x001fffff] to [bus 03-04] add_size 100000 add_align 100000 May 17 01:35:02.730213 kernel: pci 0003:00:01.0: BAR 14: assigned [mem 0x10000000-0x101fffff] May 17 01:35:02.730274 kernel: pci 0003:00:01.0: BAR 15: assigned [mem 0x240000000000-0x2400001fffff 64bit pref] May 17 01:35:02.730333 kernel: pci 0003:00:03.0: BAR 14: assigned [mem 0x10200000-0x103fffff] May 17 01:35:02.730389 kernel: pci 0003:00:03.0: BAR 15: assigned [mem 0x240000200000-0x2400003fffff 64bit pref] May 17 01:35:02.730448 kernel: pci 0003:00:05.0: BAR 14: assigned [mem 0x10400000-0x105fffff] May 17 01:35:02.730504 kernel: pci 0003:00:05.0: BAR 15: assigned [mem 0x240000400000-0x2400006fffff 64bit pref] May 17 01:35:02.730560 kernel: pci 0003:00:01.0: BAR 13: no space for [io size 0x1000] May 17 01:35:02.730617 kernel: pci 0003:00:01.0: BAR 13: failed to assign [io size 0x1000] May 17 01:35:02.730673 kernel: pci 0003:00:03.0: BAR 13: no space for [io size 0x1000] May 17 01:35:02.730729 kernel: pci 0003:00:03.0: BAR 13: failed to assign [io size 0x1000] May 17 01:35:02.730795 kernel: pci 0003:00:05.0: BAR 13: no space for [io size 0x1000] May 17 01:35:02.730852 kernel: pci 0003:00:05.0: BAR 13: failed to assign [io size 0x1000] May 17 01:35:02.730910 kernel: pci 0003:00:05.0: BAR 13: no space for [io size 0x1000] May 17 01:35:02.730968 kernel: pci 0003:00:05.0: BAR 13: failed to assign [io size 
0x1000] May 17 01:35:02.731024 kernel: pci 0003:00:03.0: BAR 13: no space for [io size 0x1000] May 17 01:35:02.731083 kernel: pci 0003:00:03.0: BAR 13: failed to assign [io size 0x1000] May 17 01:35:02.731140 kernel: pci 0003:00:01.0: BAR 13: no space for [io size 0x1000] May 17 01:35:02.731196 kernel: pci 0003:00:01.0: BAR 13: failed to assign [io size 0x1000] May 17 01:35:02.731253 kernel: pci 0003:00:01.0: PCI bridge to [bus 01] May 17 01:35:02.731309 kernel: pci 0003:00:01.0: bridge window [mem 0x10000000-0x101fffff] May 17 01:35:02.731365 kernel: pci 0003:00:01.0: bridge window [mem 0x240000000000-0x2400001fffff 64bit pref] May 17 01:35:02.731423 kernel: pci 0003:00:03.0: PCI bridge to [bus 02] May 17 01:35:02.731480 kernel: pci 0003:00:03.0: bridge window [mem 0x10200000-0x103fffff] May 17 01:35:02.731537 kernel: pci 0003:00:03.0: bridge window [mem 0x240000200000-0x2400003fffff 64bit pref] May 17 01:35:02.731597 kernel: pci 0003:03:00.0: BAR 0: assigned [mem 0x10400000-0x1041ffff] May 17 01:35:02.731657 kernel: pci 0003:03:00.1: BAR 0: assigned [mem 0x10420000-0x1043ffff] May 17 01:35:02.731715 kernel: pci 0003:03:00.0: BAR 3: assigned [mem 0x10440000-0x10443fff] May 17 01:35:02.731776 kernel: pci 0003:03:00.0: BAR 7: assigned [mem 0x240000400000-0x24000041ffff 64bit pref] May 17 01:35:02.731837 kernel: pci 0003:03:00.0: BAR 10: assigned [mem 0x240000420000-0x24000043ffff 64bit pref] May 17 01:35:02.731897 kernel: pci 0003:03:00.1: BAR 3: assigned [mem 0x10444000-0x10447fff] May 17 01:35:02.731957 kernel: pci 0003:03:00.1: BAR 7: assigned [mem 0x240000440000-0x24000045ffff 64bit pref] May 17 01:35:02.732018 kernel: pci 0003:03:00.1: BAR 10: assigned [mem 0x240000460000-0x24000047ffff 64bit pref] May 17 01:35:02.732081 kernel: pci 0003:03:00.0: BAR 2: no space for [io size 0x0020] May 17 01:35:02.732141 kernel: pci 0003:03:00.0: BAR 2: failed to assign [io size 0x0020] May 17 01:35:02.732200 kernel: pci 0003:03:00.1: BAR 2: no space for [io size 0x0020] May 
17 01:35:02.732260 kernel: pci 0003:03:00.1: BAR 2: failed to assign [io size 0x0020] May 17 01:35:02.732319 kernel: pci 0003:03:00.0: BAR 2: no space for [io size 0x0020] May 17 01:35:02.732378 kernel: pci 0003:03:00.0: BAR 2: failed to assign [io size 0x0020] May 17 01:35:02.732436 kernel: pci 0003:03:00.1: BAR 2: no space for [io size 0x0020] May 17 01:35:02.732495 kernel: pci 0003:03:00.1: BAR 2: failed to assign [io size 0x0020] May 17 01:35:02.732552 kernel: pci 0003:00:05.0: PCI bridge to [bus 03-04] May 17 01:35:02.732611 kernel: pci 0003:00:05.0: bridge window [mem 0x10400000-0x105fffff] May 17 01:35:02.732669 kernel: pci 0003:00:05.0: bridge window [mem 0x240000400000-0x2400006fffff 64bit pref] May 17 01:35:02.732722 kernel: pci_bus 0003:00: Some PCI device resources are unassigned, try booting with pci=realloc May 17 01:35:02.732772 kernel: pci_bus 0003:00: resource 4 [mem 0x10000000-0x1fffffff window] May 17 01:35:02.732823 kernel: pci_bus 0003:00: resource 5 [mem 0x240000000000-0x27ffdfffffff window] May 17 01:35:02.732895 kernel: pci_bus 0003:01: resource 1 [mem 0x10000000-0x101fffff] May 17 01:35:02.732952 kernel: pci_bus 0003:01: resource 2 [mem 0x240000000000-0x2400001fffff 64bit pref] May 17 01:35:02.733014 kernel: pci_bus 0003:02: resource 1 [mem 0x10200000-0x103fffff] May 17 01:35:02.733071 kernel: pci_bus 0003:02: resource 2 [mem 0x240000200000-0x2400003fffff 64bit pref] May 17 01:35:02.733134 kernel: pci_bus 0003:03: resource 1 [mem 0x10400000-0x105fffff] May 17 01:35:02.733188 kernel: pci_bus 0003:03: resource 2 [mem 0x240000400000-0x2400006fffff 64bit pref] May 17 01:35:02.733198 kernel: ACPI: PCI Root Bridge [PCI0] (domain 000c [bus 00-ff]) May 17 01:35:02.733260 kernel: acpi PNP0A08:04: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] May 17 01:35:02.733318 kernel: acpi PNP0A08:04: _OSC: platform does not support [PCIeHotplug PME LTR] May 17 01:35:02.733374 kernel: acpi PNP0A08:04: _OSC: OS now controls [AER 
PCIeCapability] May 17 01:35:02.733429 kernel: acpi PNP0A08:04: MCFG quirk: ECAM at [mem 0x33fff0000000-0x33ffffffffff] for [bus 00-ff] with pci_32b_read_ops May 17 01:35:02.733483 kernel: acpi PNP0A08:04: ECAM area [mem 0x33fff0000000-0x33ffffffffff] reserved by PNP0C02:00 May 17 01:35:02.733539 kernel: acpi PNP0A08:04: ECAM at [mem 0x33fff0000000-0x33ffffffffff] for [bus 00-ff] May 17 01:35:02.733548 kernel: PCI host bridge to bus 000c:00 May 17 01:35:02.733606 kernel: pci_bus 000c:00: root bus resource [mem 0x40000000-0x4fffffff window] May 17 01:35:02.733660 kernel: pci_bus 000c:00: root bus resource [mem 0x300000000000-0x33ffdfffffff window] May 17 01:35:02.733710 kernel: pci_bus 000c:00: root bus resource [bus 00-ff] May 17 01:35:02.733775 kernel: pci 000c:00:00.0: [1def:e100] type 00 class 0x060000 May 17 01:35:02.733846 kernel: pci 000c:00:01.0: [1def:e101] type 01 class 0x060400 May 17 01:35:02.733908 kernel: pci 000c:00:01.0: enabling Extended Tags May 17 01:35:02.733970 kernel: pci 000c:00:01.0: supports D1 D2 May 17 01:35:02.734030 kernel: pci 000c:00:01.0: PME# supported from D0 D1 D3hot May 17 01:35:02.734199 kernel: pci 000c:00:02.0: [1def:e102] type 01 class 0x060400 May 17 01:35:02.734262 kernel: pci 000c:00:02.0: supports D1 D2 May 17 01:35:02.734320 kernel: pci 000c:00:02.0: PME# supported from D0 D1 D3hot May 17 01:35:02.734385 kernel: pci 000c:00:03.0: [1def:e103] type 01 class 0x060400 May 17 01:35:02.734443 kernel: pci 000c:00:03.0: supports D1 D2 May 17 01:35:02.734499 kernel: pci 000c:00:03.0: PME# supported from D0 D1 D3hot May 17 01:35:02.734563 kernel: pci 000c:00:04.0: [1def:e104] type 01 class 0x060400 May 17 01:35:02.734624 kernel: pci 000c:00:04.0: supports D1 D2 May 17 01:35:02.734681 kernel: pci 000c:00:04.0: PME# supported from D0 D1 D3hot May 17 01:35:02.734690 kernel: acpiphp: Slot [1-4] registered May 17 01:35:02.734698 kernel: acpiphp: Slot [2-4] registered May 17 01:35:02.734705 kernel: acpiphp: Slot [3-2] registered May 17 
01:35:02.734712 kernel: acpiphp: Slot [4-2] registered May 17 01:35:02.734763 kernel: pci_bus 000c:00: on NUMA node 0 May 17 01:35:02.734822 kernel: pci 000c:00:01.0: bridge window [io 0x1000-0x0fff] to [bus 01] add_size 1000 May 17 01:35:02.734881 kernel: pci 000c:00:01.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 01] add_size 200000 add_align 100000 May 17 01:35:02.734939 kernel: pci 000c:00:01.0: bridge window [mem 0x00100000-0x000fffff] to [bus 01] add_size 200000 add_align 100000 May 17 01:35:02.735013 kernel: pci 000c:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 02] add_size 1000 May 17 01:35:02.735075 kernel: pci 000c:00:02.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 02] add_size 200000 add_align 100000 May 17 01:35:02.735133 kernel: pci 000c:00:02.0: bridge window [mem 0x00100000-0x000fffff] to [bus 02] add_size 200000 add_align 100000 May 17 01:35:02.735191 kernel: pci 000c:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000 May 17 01:35:02.735249 kernel: pci 000c:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 03] add_size 200000 add_align 100000 May 17 01:35:02.735309 kernel: pci 000c:00:03.0: bridge window [mem 0x00100000-0x000fffff] to [bus 03] add_size 200000 add_align 100000 May 17 01:35:02.735367 kernel: pci 000c:00:04.0: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000 May 17 01:35:02.735425 kernel: pci 000c:00:04.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 04] add_size 200000 add_align 100000 May 17 01:35:02.735482 kernel: pci 000c:00:04.0: bridge window [mem 0x00100000-0x000fffff] to [bus 04] add_size 200000 add_align 100000 May 17 01:35:02.735540 kernel: pci 000c:00:01.0: BAR 14: assigned [mem 0x40000000-0x401fffff] May 17 01:35:02.735597 kernel: pci 000c:00:01.0: BAR 15: assigned [mem 0x300000000000-0x3000001fffff 64bit pref] May 17 01:35:02.735654 kernel: pci 000c:00:02.0: BAR 14: assigned [mem 0x40200000-0x403fffff] May 17 
01:35:02.735713 kernel: pci 000c:00:02.0: BAR 15: assigned [mem 0x300000200000-0x3000003fffff 64bit pref] May 17 01:35:02.735770 kernel: pci 000c:00:03.0: BAR 14: assigned [mem 0x40400000-0x405fffff] May 17 01:35:02.735827 kernel: pci 000c:00:03.0: BAR 15: assigned [mem 0x300000400000-0x3000005fffff 64bit pref] May 17 01:35:02.735884 kernel: pci 000c:00:04.0: BAR 14: assigned [mem 0x40600000-0x407fffff] May 17 01:35:02.735941 kernel: pci 000c:00:04.0: BAR 15: assigned [mem 0x300000600000-0x3000007fffff 64bit pref] May 17 01:35:02.735998 kernel: pci 000c:00:01.0: BAR 13: no space for [io size 0x1000] May 17 01:35:02.736057 kernel: pci 000c:00:01.0: BAR 13: failed to assign [io size 0x1000] May 17 01:35:02.736118 kernel: pci 000c:00:02.0: BAR 13: no space for [io size 0x1000] May 17 01:35:02.736175 kernel: pci 000c:00:02.0: BAR 13: failed to assign [io size 0x1000] May 17 01:35:02.736233 kernel: pci 000c:00:03.0: BAR 13: no space for [io size 0x1000] May 17 01:35:02.736290 kernel: pci 000c:00:03.0: BAR 13: failed to assign [io size 0x1000] May 17 01:35:02.736347 kernel: pci 000c:00:04.0: BAR 13: no space for [io size 0x1000] May 17 01:35:02.736406 kernel: pci 000c:00:04.0: BAR 13: failed to assign [io size 0x1000] May 17 01:35:02.736462 kernel: pci 000c:00:04.0: BAR 13: no space for [io size 0x1000] May 17 01:35:02.736520 kernel: pci 000c:00:04.0: BAR 13: failed to assign [io size 0x1000] May 17 01:35:02.736577 kernel: pci 000c:00:03.0: BAR 13: no space for [io size 0x1000] May 17 01:35:02.736636 kernel: pci 000c:00:03.0: BAR 13: failed to assign [io size 0x1000] May 17 01:35:02.736694 kernel: pci 000c:00:02.0: BAR 13: no space for [io size 0x1000] May 17 01:35:02.736750 kernel: pci 000c:00:02.0: BAR 13: failed to assign [io size 0x1000] May 17 01:35:02.736807 kernel: pci 000c:00:01.0: BAR 13: no space for [io size 0x1000] May 17 01:35:02.736864 kernel: pci 000c:00:01.0: BAR 13: failed to assign [io size 0x1000] May 17 01:35:02.736921 kernel: pci 000c:00:01.0: PCI 
bridge to [bus 01] May 17 01:35:02.736978 kernel: pci 000c:00:01.0: bridge window [mem 0x40000000-0x401fffff] May 17 01:35:02.737035 kernel: pci 000c:00:01.0: bridge window [mem 0x300000000000-0x3000001fffff 64bit pref] May 17 01:35:02.737096 kernel: pci 000c:00:02.0: PCI bridge to [bus 02] May 17 01:35:02.737154 kernel: pci 000c:00:02.0: bridge window [mem 0x40200000-0x403fffff] May 17 01:35:02.737211 kernel: pci 000c:00:02.0: bridge window [mem 0x300000200000-0x3000003fffff 64bit pref] May 17 01:35:02.737268 kernel: pci 000c:00:03.0: PCI bridge to [bus 03] May 17 01:35:02.737326 kernel: pci 000c:00:03.0: bridge window [mem 0x40400000-0x405fffff] May 17 01:35:02.737383 kernel: pci 000c:00:03.0: bridge window [mem 0x300000400000-0x3000005fffff 64bit pref] May 17 01:35:02.737443 kernel: pci 000c:00:04.0: PCI bridge to [bus 04] May 17 01:35:02.737499 kernel: pci 000c:00:04.0: bridge window [mem 0x40600000-0x407fffff] May 17 01:35:02.737557 kernel: pci 000c:00:04.0: bridge window [mem 0x300000600000-0x3000007fffff 64bit pref] May 17 01:35:02.737609 kernel: pci_bus 000c:00: resource 4 [mem 0x40000000-0x4fffffff window] May 17 01:35:02.737660 kernel: pci_bus 000c:00: resource 5 [mem 0x300000000000-0x33ffdfffffff window] May 17 01:35:02.737724 kernel: pci_bus 000c:01: resource 1 [mem 0x40000000-0x401fffff] May 17 01:35:02.737780 kernel: pci_bus 000c:01: resource 2 [mem 0x300000000000-0x3000001fffff 64bit pref] May 17 01:35:02.737843 kernel: pci_bus 000c:02: resource 1 [mem 0x40200000-0x403fffff] May 17 01:35:02.737899 kernel: pci_bus 000c:02: resource 2 [mem 0x300000200000-0x3000003fffff 64bit pref] May 17 01:35:02.737969 kernel: pci_bus 000c:03: resource 1 [mem 0x40400000-0x405fffff] May 17 01:35:02.738024 kernel: pci_bus 000c:03: resource 2 [mem 0x300000400000-0x3000005fffff 64bit pref] May 17 01:35:02.738088 kernel: pci_bus 000c:04: resource 1 [mem 0x40600000-0x407fffff] May 17 01:35:02.738143 kernel: pci_bus 000c:04: resource 2 [mem 0x300000600000-0x3000007fffff 
64bit pref] May 17 01:35:02.738154 kernel: ACPI: PCI Root Bridge [PCI4] (domain 0002 [bus 00-ff]) May 17 01:35:02.738218 kernel: acpi PNP0A08:05: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] May 17 01:35:02.738276 kernel: acpi PNP0A08:05: _OSC: platform does not support [PCIeHotplug PME LTR] May 17 01:35:02.738332 kernel: acpi PNP0A08:05: _OSC: OS now controls [AER PCIeCapability] May 17 01:35:02.738389 kernel: acpi PNP0A08:05: MCFG quirk: ECAM at [mem 0x23fff0000000-0x23ffffffffff] for [bus 00-ff] with pci_32b_read_ops May 17 01:35:02.738444 kernel: acpi PNP0A08:05: ECAM area [mem 0x23fff0000000-0x23ffffffffff] reserved by PNP0C02:00 May 17 01:35:02.738500 kernel: acpi PNP0A08:05: ECAM at [mem 0x23fff0000000-0x23ffffffffff] for [bus 00-ff] May 17 01:35:02.738511 kernel: PCI host bridge to bus 0002:00 May 17 01:35:02.738569 kernel: pci_bus 0002:00: root bus resource [mem 0x00800000-0x0fffffff window] May 17 01:35:02.738622 kernel: pci_bus 0002:00: root bus resource [mem 0x200000000000-0x23ffdfffffff window] May 17 01:35:02.738672 kernel: pci_bus 0002:00: root bus resource [bus 00-ff] May 17 01:35:02.738736 kernel: pci 0002:00:00.0: [1def:e110] type 00 class 0x060000 May 17 01:35:02.738802 kernel: pci 0002:00:01.0: [1def:e111] type 01 class 0x060400 May 17 01:35:02.738863 kernel: pci 0002:00:01.0: supports D1 D2 May 17 01:35:02.738921 kernel: pci 0002:00:01.0: PME# supported from D0 D1 D3hot May 17 01:35:02.738987 kernel: pci 0002:00:03.0: [1def:e113] type 01 class 0x060400 May 17 01:35:02.739048 kernel: pci 0002:00:03.0: supports D1 D2 May 17 01:35:02.739106 kernel: pci 0002:00:03.0: PME# supported from D0 D1 D3hot May 17 01:35:02.739169 kernel: pci 0002:00:05.0: [1def:e115] type 01 class 0x060400 May 17 01:35:02.739228 kernel: pci 0002:00:05.0: supports D1 D2 May 17 01:35:02.739286 kernel: pci 0002:00:05.0: PME# supported from D0 D1 D3hot May 17 01:35:02.739350 kernel: pci 0002:00:07.0: [1def:e117] type 01 class 0x060400 May 17 
01:35:02.739407 kernel: pci 0002:00:07.0: supports D1 D2 May 17 01:35:02.739466 kernel: pci 0002:00:07.0: PME# supported from D0 D1 D3hot May 17 01:35:02.739475 kernel: acpiphp: Slot [1-5] registered May 17 01:35:02.739483 kernel: acpiphp: Slot [2-5] registered May 17 01:35:02.739490 kernel: acpiphp: Slot [3-3] registered May 17 01:35:02.739497 kernel: acpiphp: Slot [4-3] registered May 17 01:35:02.739548 kernel: pci_bus 0002:00: on NUMA node 0 May 17 01:35:02.739605 kernel: pci 0002:00:01.0: bridge window [io 0x1000-0x0fff] to [bus 01] add_size 1000 May 17 01:35:02.739664 kernel: pci 0002:00:01.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 01] add_size 200000 add_align 100000 May 17 01:35:02.739721 kernel: pci 0002:00:01.0: bridge window [mem 0x00100000-0x000fffff] to [bus 01] add_size 200000 add_align 100000 May 17 01:35:02.739782 kernel: pci 0002:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 02] add_size 1000 May 17 01:35:02.739840 kernel: pci 0002:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 02] add_size 200000 add_align 100000 May 17 01:35:02.739899 kernel: pci 0002:00:03.0: bridge window [mem 0x00100000-0x000fffff] to [bus 02] add_size 200000 add_align 100000 May 17 01:35:02.739959 kernel: pci 0002:00:05.0: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000 May 17 01:35:02.740017 kernel: pci 0002:00:05.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 03] add_size 200000 add_align 100000 May 17 01:35:02.740077 kernel: pci 0002:00:05.0: bridge window [mem 0x00100000-0x000fffff] to [bus 03] add_size 200000 add_align 100000 May 17 01:35:02.740134 kernel: pci 0002:00:07.0: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000 May 17 01:35:02.740192 kernel: pci 0002:00:07.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 04] add_size 200000 add_align 100000 May 17 01:35:02.740251 kernel: pci 0002:00:07.0: bridge window [mem 0x00100000-0x000fffff] to [bus 04] add_size 200000 
add_align 100000 May 17 01:35:02.740308 kernel: pci 0002:00:01.0: BAR 14: assigned [mem 0x00800000-0x009fffff] May 17 01:35:02.740366 kernel: pci 0002:00:01.0: BAR 15: assigned [mem 0x200000000000-0x2000001fffff 64bit pref] May 17 01:35:02.740423 kernel: pci 0002:00:03.0: BAR 14: assigned [mem 0x00a00000-0x00bfffff] May 17 01:35:02.740480 kernel: pci 0002:00:03.0: BAR 15: assigned [mem 0x200000200000-0x2000003fffff 64bit pref] May 17 01:35:02.740537 kernel: pci 0002:00:05.0: BAR 14: assigned [mem 0x00c00000-0x00dfffff] May 17 01:35:02.740595 kernel: pci 0002:00:05.0: BAR 15: assigned [mem 0x200000400000-0x2000005fffff 64bit pref] May 17 01:35:02.740654 kernel: pci 0002:00:07.0: BAR 14: assigned [mem 0x00e00000-0x00ffffff] May 17 01:35:02.740711 kernel: pci 0002:00:07.0: BAR 15: assigned [mem 0x200000600000-0x2000007fffff 64bit pref] May 17 01:35:02.740769 kernel: pci 0002:00:01.0: BAR 13: no space for [io size 0x1000] May 17 01:35:02.740828 kernel: pci 0002:00:01.0: BAR 13: failed to assign [io size 0x1000] May 17 01:35:02.740885 kernel: pci 0002:00:03.0: BAR 13: no space for [io size 0x1000] May 17 01:35:02.740942 kernel: pci 0002:00:03.0: BAR 13: failed to assign [io size 0x1000] May 17 01:35:02.740999 kernel: pci 0002:00:05.0: BAR 13: no space for [io size 0x1000] May 17 01:35:02.741058 kernel: pci 0002:00:05.0: BAR 13: failed to assign [io size 0x1000] May 17 01:35:02.741118 kernel: pci 0002:00:07.0: BAR 13: no space for [io size 0x1000] May 17 01:35:02.741175 kernel: pci 0002:00:07.0: BAR 13: failed to assign [io size 0x1000] May 17 01:35:02.741233 kernel: pci 0002:00:07.0: BAR 13: no space for [io size 0x1000] May 17 01:35:02.741290 kernel: pci 0002:00:07.0: BAR 13: failed to assign [io size 0x1000] May 17 01:35:02.741348 kernel: pci 0002:00:05.0: BAR 13: no space for [io size 0x1000] May 17 01:35:02.741405 kernel: pci 0002:00:05.0: BAR 13: failed to assign [io size 0x1000] May 17 01:35:02.741463 kernel: pci 0002:00:03.0: BAR 13: no space for [io size 0x1000] 
May 17 01:35:02.741521 kernel: pci 0002:00:03.0: BAR 13: failed to assign [io size 0x1000] May 17 01:35:02.741578 kernel: pci 0002:00:01.0: BAR 13: no space for [io size 0x1000] May 17 01:35:02.741638 kernel: pci 0002:00:01.0: BAR 13: failed to assign [io size 0x1000] May 17 01:35:02.741695 kernel: pci 0002:00:01.0: PCI bridge to [bus 01] May 17 01:35:02.741754 kernel: pci 0002:00:01.0: bridge window [mem 0x00800000-0x009fffff] May 17 01:35:02.741812 kernel: pci 0002:00:01.0: bridge window [mem 0x200000000000-0x2000001fffff 64bit pref] May 17 01:35:02.741871 kernel: pci 0002:00:03.0: PCI bridge to [bus 02] May 17 01:35:02.741928 kernel: pci 0002:00:03.0: bridge window [mem 0x00a00000-0x00bfffff] May 17 01:35:02.741987 kernel: pci 0002:00:03.0: bridge window [mem 0x200000200000-0x2000003fffff 64bit pref] May 17 01:35:02.742049 kernel: pci 0002:00:05.0: PCI bridge to [bus 03] May 17 01:35:02.742108 kernel: pci 0002:00:05.0: bridge window [mem 0x00c00000-0x00dfffff] May 17 01:35:02.742167 kernel: pci 0002:00:05.0: bridge window [mem 0x200000400000-0x2000005fffff 64bit pref] May 17 01:35:02.742224 kernel: pci 0002:00:07.0: PCI bridge to [bus 04] May 17 01:35:02.742283 kernel: pci 0002:00:07.0: bridge window [mem 0x00e00000-0x00ffffff] May 17 01:35:02.742340 kernel: pci 0002:00:07.0: bridge window [mem 0x200000600000-0x2000007fffff 64bit pref] May 17 01:35:02.742395 kernel: pci_bus 0002:00: resource 4 [mem 0x00800000-0x0fffffff window] May 17 01:35:02.742447 kernel: pci_bus 0002:00: resource 5 [mem 0x200000000000-0x23ffdfffffff window] May 17 01:35:02.742511 kernel: pci_bus 0002:01: resource 1 [mem 0x00800000-0x009fffff] May 17 01:35:02.742568 kernel: pci_bus 0002:01: resource 2 [mem 0x200000000000-0x2000001fffff 64bit pref] May 17 01:35:02.742630 kernel: pci_bus 0002:02: resource 1 [mem 0x00a00000-0x00bfffff] May 17 01:35:02.742686 kernel: pci_bus 0002:02: resource 2 [mem 0x200000200000-0x2000003fffff 64bit pref] May 17 01:35:02.742754 kernel: pci_bus 0002:03: resource 
1 [mem 0x00c00000-0x00dfffff] May 17 01:35:02.742809 kernel: pci_bus 0002:03: resource 2 [mem 0x200000400000-0x2000005fffff 64bit pref] May 17 01:35:02.742872 kernel: pci_bus 0002:04: resource 1 [mem 0x00e00000-0x00ffffff] May 17 01:35:02.742926 kernel: pci_bus 0002:04: resource 2 [mem 0x200000600000-0x2000007fffff 64bit pref] May 17 01:35:02.742936 kernel: ACPI: PCI Root Bridge [PCI2] (domain 0001 [bus 00-ff]) May 17 01:35:02.742999 kernel: acpi PNP0A08:06: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] May 17 01:35:02.743060 kernel: acpi PNP0A08:06: _OSC: platform does not support [PCIeHotplug PME LTR] May 17 01:35:02.743120 kernel: acpi PNP0A08:06: _OSC: OS now controls [AER PCIeCapability] May 17 01:35:02.743176 kernel: acpi PNP0A08:06: MCFG quirk: ECAM at [mem 0x3bfff0000000-0x3bffffffffff] for [bus 00-ff] with pci_32b_read_ops May 17 01:35:02.743233 kernel: acpi PNP0A08:06: ECAM area [mem 0x3bfff0000000-0x3bffffffffff] reserved by PNP0C02:00 May 17 01:35:02.743288 kernel: acpi PNP0A08:06: ECAM at [mem 0x3bfff0000000-0x3bffffffffff] for [bus 00-ff] May 17 01:35:02.743298 kernel: PCI host bridge to bus 0001:00 May 17 01:35:02.743359 kernel: pci_bus 0001:00: root bus resource [mem 0x60000000-0x6fffffff window] May 17 01:35:02.743414 kernel: pci_bus 0001:00: root bus resource [mem 0x380000000000-0x3bffdfffffff window] May 17 01:35:02.743465 kernel: pci_bus 0001:00: root bus resource [bus 00-ff] May 17 01:35:02.743531 kernel: pci 0001:00:00.0: [1def:e100] type 00 class 0x060000 May 17 01:35:02.743599 kernel: pci 0001:00:01.0: [1def:e101] type 01 class 0x060400 May 17 01:35:02.743659 kernel: pci 0001:00:01.0: enabling Extended Tags May 17 01:35:02.743717 kernel: pci 0001:00:01.0: supports D1 D2 May 17 01:35:02.743777 kernel: pci 0001:00:01.0: PME# supported from D0 D1 D3hot May 17 01:35:02.743842 kernel: pci 0001:00:02.0: [1def:e102] type 01 class 0x060400 May 17 01:35:02.743900 kernel: pci 0001:00:02.0: supports D1 D2 May 17 01:35:02.743958 
kernel: pci 0001:00:02.0: PME# supported from D0 D1 D3hot May 17 01:35:02.744023 kernel: pci 0001:00:03.0: [1def:e103] type 01 class 0x060400 May 17 01:35:02.744200 kernel: pci 0001:00:03.0: supports D1 D2 May 17 01:35:02.744260 kernel: pci 0001:00:03.0: PME# supported from D0 D1 D3hot May 17 01:35:02.744328 kernel: pci 0001:00:04.0: [1def:e104] type 01 class 0x060400 May 17 01:35:02.744386 kernel: pci 0001:00:04.0: supports D1 D2 May 17 01:35:02.744443 kernel: pci 0001:00:04.0: PME# supported from D0 D1 D3hot May 17 01:35:02.744452 kernel: acpiphp: Slot [1-6] registered May 17 01:35:02.744517 kernel: pci 0001:01:00.0: [15b3:1015] type 00 class 0x020000 May 17 01:35:02.744577 kernel: pci 0001:01:00.0: reg 0x10: [mem 0x380002000000-0x380003ffffff 64bit pref] May 17 01:35:02.744637 kernel: pci 0001:01:00.0: reg 0x30: [mem 0x60100000-0x601fffff pref] May 17 01:35:02.744697 kernel: pci 0001:01:00.0: PME# supported from D3cold May 17 01:35:02.744757 kernel: pci 0001:01:00.0: reg 0x1a4: [mem 0x380004800000-0x3800048fffff 64bit pref] May 17 01:35:02.744818 kernel: pci 0001:01:00.0: VF(n) BAR0 space: [mem 0x380004800000-0x380004ffffff 64bit pref] (contains BAR0 for 8 VFs) May 17 01:35:02.744878 kernel: pci 0001:01:00.0: 31.504 Gb/s available PCIe bandwidth, limited by 8.0 GT/s PCIe x4 link at 0001:00:01.0 (capable of 63.008 Gb/s with 8.0 GT/s PCIe x8 link) May 17 01:35:02.744945 kernel: pci 0001:01:00.1: [15b3:1015] type 00 class 0x020000 May 17 01:35:02.745005 kernel: pci 0001:01:00.1: reg 0x10: [mem 0x380000000000-0x380001ffffff 64bit pref] May 17 01:35:02.745068 kernel: pci 0001:01:00.1: reg 0x30: [mem 0x60000000-0x600fffff pref] May 17 01:35:02.745129 kernel: pci 0001:01:00.1: PME# supported from D3cold May 17 01:35:02.745188 kernel: pci 0001:01:00.1: reg 0x1a4: [mem 0x380004000000-0x3800040fffff 64bit pref] May 17 01:35:02.745248 kernel: pci 0001:01:00.1: VF(n) BAR0 space: [mem 0x380004000000-0x3800047fffff 64bit pref] (contains BAR0 for 8 VFs) May 17 01:35:02.745258 
kernel: acpiphp: Slot [2-6] registered May 17 01:35:02.745265 kernel: acpiphp: Slot [3-4] registered May 17 01:35:02.745272 kernel: acpiphp: Slot [4-4] registered May 17 01:35:02.745324 kernel: pci_bus 0001:00: on NUMA node 0 May 17 01:35:02.745382 kernel: pci 0001:00:01.0: bridge window [io 0x1000-0x0fff] to [bus 01] add_size 1000 May 17 01:35:02.745442 kernel: pci 0001:00:01.0: bridge window [mem 0x02000000-0x05ffffff 64bit pref] to [bus 01] add_size 2000000 add_align 2000000 May 17 01:35:02.745501 kernel: pci 0001:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 02] add_size 1000 May 17 01:35:02.745558 kernel: pci 0001:00:02.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 02] add_size 200000 add_align 100000 May 17 01:35:02.745616 kernel: pci 0001:00:02.0: bridge window [mem 0x00100000-0x000fffff] to [bus 02] add_size 200000 add_align 100000 May 17 01:35:02.745674 kernel: pci 0001:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000 May 17 01:35:02.745731 kernel: pci 0001:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 03] add_size 200000 add_align 100000 May 17 01:35:02.745790 kernel: pci 0001:00:03.0: bridge window [mem 0x00100000-0x000fffff] to [bus 03] add_size 200000 add_align 100000 May 17 01:35:02.745850 kernel: pci 0001:00:04.0: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000 May 17 01:35:02.745907 kernel: pci 0001:00:04.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 04] add_size 200000 add_align 100000 May 17 01:35:02.745964 kernel: pci 0001:00:04.0: bridge window [mem 0x00100000-0x000fffff] to [bus 04] add_size 200000 add_align 100000 May 17 01:35:02.746022 kernel: pci 0001:00:01.0: BAR 15: assigned [mem 0x380000000000-0x380005ffffff 64bit pref] May 17 01:35:02.746083 kernel: pci 0001:00:01.0: BAR 14: assigned [mem 0x60000000-0x601fffff] May 17 01:35:02.746141 kernel: pci 0001:00:02.0: BAR 14: assigned [mem 0x60200000-0x603fffff] May 17 01:35:02.746198 kernel: pci 
0001:00:02.0: BAR 15: assigned [mem 0x380006000000-0x3800061fffff 64bit pref] May 17 01:35:02.746258 kernel: pci 0001:00:03.0: BAR 14: assigned [mem 0x60400000-0x605fffff] May 17 01:35:02.746317 kernel: pci 0001:00:03.0: BAR 15: assigned [mem 0x380006200000-0x3800063fffff 64bit pref] May 17 01:35:02.746374 kernel: pci 0001:00:04.0: BAR 14: assigned [mem 0x60600000-0x607fffff] May 17 01:35:02.746431 kernel: pci 0001:00:04.0: BAR 15: assigned [mem 0x380006400000-0x3800065fffff 64bit pref] May 17 01:35:02.746489 kernel: pci 0001:00:01.0: BAR 13: no space for [io size 0x1000] May 17 01:35:02.746545 kernel: pci 0001:00:01.0: BAR 13: failed to assign [io size 0x1000] May 17 01:35:02.746602 kernel: pci 0001:00:02.0: BAR 13: no space for [io size 0x1000] May 17 01:35:02.746659 kernel: pci 0001:00:02.0: BAR 13: failed to assign [io size 0x1000] May 17 01:35:02.746719 kernel: pci 0001:00:03.0: BAR 13: no space for [io size 0x1000] May 17 01:35:02.746776 kernel: pci 0001:00:03.0: BAR 13: failed to assign [io size 0x1000] May 17 01:35:02.746832 kernel: pci 0001:00:04.0: BAR 13: no space for [io size 0x1000] May 17 01:35:02.746890 kernel: pci 0001:00:04.0: BAR 13: failed to assign [io size 0x1000] May 17 01:35:02.746949 kernel: pci 0001:00:04.0: BAR 13: no space for [io size 0x1000] May 17 01:35:02.747006 kernel: pci 0001:00:04.0: BAR 13: failed to assign [io size 0x1000] May 17 01:35:02.747066 kernel: pci 0001:00:03.0: BAR 13: no space for [io size 0x1000] May 17 01:35:02.747125 kernel: pci 0001:00:03.0: BAR 13: failed to assign [io size 0x1000] May 17 01:35:02.747181 kernel: pci 0001:00:02.0: BAR 13: no space for [io size 0x1000] May 17 01:35:02.747241 kernel: pci 0001:00:02.0: BAR 13: failed to assign [io size 0x1000] May 17 01:35:02.747298 kernel: pci 0001:00:01.0: BAR 13: no space for [io size 0x1000] May 17 01:35:02.747354 kernel: pci 0001:00:01.0: BAR 13: failed to assign [io size 0x1000] May 17 01:35:02.747415 kernel: pci 0001:01:00.0: BAR 0: assigned [mem 
0x380000000000-0x380001ffffff 64bit pref] May 17 01:35:02.747474 kernel: pci 0001:01:00.1: BAR 0: assigned [mem 0x380002000000-0x380003ffffff 64bit pref] May 17 01:35:02.747534 kernel: pci 0001:01:00.0: BAR 6: assigned [mem 0x60000000-0x600fffff pref] May 17 01:35:02.747594 kernel: pci 0001:01:00.0: BAR 7: assigned [mem 0x380004000000-0x3800047fffff 64bit pref] May 17 01:35:02.747654 kernel: pci 0001:01:00.1: BAR 6: assigned [mem 0x60100000-0x601fffff pref] May 17 01:35:02.747714 kernel: pci 0001:01:00.1: BAR 7: assigned [mem 0x380004800000-0x380004ffffff 64bit pref] May 17 01:35:02.747772 kernel: pci 0001:00:01.0: PCI bridge to [bus 01] May 17 01:35:02.747829 kernel: pci 0001:00:01.0: bridge window [mem 0x60000000-0x601fffff] May 17 01:35:02.747888 kernel: pci 0001:00:01.0: bridge window [mem 0x380000000000-0x380005ffffff 64bit pref] May 17 01:35:02.747944 kernel: pci 0001:00:02.0: PCI bridge to [bus 02] May 17 01:35:02.748001 kernel: pci 0001:00:02.0: bridge window [mem 0x60200000-0x603fffff] May 17 01:35:02.748062 kernel: pci 0001:00:02.0: bridge window [mem 0x380006000000-0x3800061fffff 64bit pref] May 17 01:35:02.748120 kernel: pci 0001:00:03.0: PCI bridge to [bus 03] May 17 01:35:02.748178 kernel: pci 0001:00:03.0: bridge window [mem 0x60400000-0x605fffff] May 17 01:35:02.748235 kernel: pci 0001:00:03.0: bridge window [mem 0x380006200000-0x3800063fffff 64bit pref] May 17 01:35:02.748292 kernel: pci 0001:00:04.0: PCI bridge to [bus 04] May 17 01:35:02.748349 kernel: pci 0001:00:04.0: bridge window [mem 0x60600000-0x607fffff] May 17 01:35:02.748405 kernel: pci 0001:00:04.0: bridge window [mem 0x380006400000-0x3800065fffff 64bit pref] May 17 01:35:02.748459 kernel: pci_bus 0001:00: resource 4 [mem 0x60000000-0x6fffffff window] May 17 01:35:02.748510 kernel: pci_bus 0001:00: resource 5 [mem 0x380000000000-0x3bffdfffffff window] May 17 01:35:02.748580 kernel: pci_bus 0001:01: resource 1 [mem 0x60000000-0x601fffff] May 17 01:35:02.748634 kernel: pci_bus 0001:01: 
resource 2 [mem 0x380000000000-0x380005ffffff 64bit pref] May 17 01:35:02.748696 kernel: pci_bus 0001:02: resource 1 [mem 0x60200000-0x603fffff] May 17 01:35:02.748749 kernel: pci_bus 0001:02: resource 2 [mem 0x380006000000-0x3800061fffff 64bit pref] May 17 01:35:02.748811 kernel: pci_bus 0001:03: resource 1 [mem 0x60400000-0x605fffff] May 17 01:35:02.748865 kernel: pci_bus 0001:03: resource 2 [mem 0x380006200000-0x3800063fffff 64bit pref] May 17 01:35:02.748925 kernel: pci_bus 0001:04: resource 1 [mem 0x60600000-0x607fffff] May 17 01:35:02.748980 kernel: pci_bus 0001:04: resource 2 [mem 0x380006400000-0x3800065fffff 64bit pref] May 17 01:35:02.748989 kernel: ACPI: PCI Root Bridge [PCI6] (domain 0004 [bus 00-ff]) May 17 01:35:02.749054 kernel: acpi PNP0A08:07: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] May 17 01:35:02.749115 kernel: acpi PNP0A08:07: _OSC: platform does not support [PCIeHotplug PME LTR] May 17 01:35:02.749170 kernel: acpi PNP0A08:07: _OSC: OS now controls [AER PCIeCapability] May 17 01:35:02.749226 kernel: acpi PNP0A08:07: MCFG quirk: ECAM at [mem 0x2bfff0000000-0x2bffffffffff] for [bus 00-ff] with pci_32b_read_ops May 17 01:35:02.749281 kernel: acpi PNP0A08:07: ECAM area [mem 0x2bfff0000000-0x2bffffffffff] reserved by PNP0C02:00 May 17 01:35:02.749337 kernel: acpi PNP0A08:07: ECAM at [mem 0x2bfff0000000-0x2bffffffffff] for [bus 00-ff] May 17 01:35:02.749346 kernel: PCI host bridge to bus 0004:00 May 17 01:35:02.749404 kernel: pci_bus 0004:00: root bus resource [mem 0x20000000-0x2fffffff window] May 17 01:35:02.749458 kernel: pci_bus 0004:00: root bus resource [mem 0x280000000000-0x2bffdfffffff window] May 17 01:35:02.749509 kernel: pci_bus 0004:00: root bus resource [bus 00-ff] May 17 01:35:02.749574 kernel: pci 0004:00:00.0: [1def:e110] type 00 class 0x060000 May 17 01:35:02.749642 kernel: pci 0004:00:01.0: [1def:e111] type 01 class 0x060400 May 17 01:35:02.749700 kernel: pci 0004:00:01.0: supports D1 D2 May 17 
01:35:02.749758 kernel: pci 0004:00:01.0: PME# supported from D0 D1 D3hot May 17 01:35:02.749823 kernel: pci 0004:00:03.0: [1def:e113] type 01 class 0x060400 May 17 01:35:02.749883 kernel: pci 0004:00:03.0: supports D1 D2 May 17 01:35:02.749940 kernel: pci 0004:00:03.0: PME# supported from D0 D1 D3hot May 17 01:35:02.750004 kernel: pci 0004:00:05.0: [1def:e115] type 01 class 0x060400 May 17 01:35:02.750066 kernel: pci 0004:00:05.0: supports D1 D2 May 17 01:35:02.750124 kernel: pci 0004:00:05.0: PME# supported from D0 D1 D3hot May 17 01:35:02.750193 kernel: pci 0004:01:00.0: [1a03:1150] type 01 class 0x060400 May 17 01:35:02.750253 kernel: pci 0004:01:00.0: enabling Extended Tags May 17 01:35:02.750316 kernel: pci 0004:01:00.0: supports D1 D2 May 17 01:35:02.750375 kernel: pci 0004:01:00.0: PME# supported from D0 D1 D2 D3hot D3cold May 17 01:35:02.750442 kernel: pci_bus 0004:02: extended config space not accessible May 17 01:35:02.750519 kernel: pci 0004:02:00.0: [1a03:2000] type 00 class 0x030000 May 17 01:35:02.750584 kernel: pci 0004:02:00.0: reg 0x10: [mem 0x20000000-0x21ffffff] May 17 01:35:02.750647 kernel: pci 0004:02:00.0: reg 0x14: [mem 0x22000000-0x2201ffff] May 17 01:35:02.750710 kernel: pci 0004:02:00.0: reg 0x18: [io 0x0000-0x007f] May 17 01:35:02.750773 kernel: pci 0004:02:00.0: BAR 0: assigned to efifb May 17 01:35:02.750837 kernel: pci 0004:02:00.0: supports D1 D2 May 17 01:35:02.750898 kernel: pci 0004:02:00.0: PME# supported from D0 D1 D2 D3hot D3cold May 17 01:35:02.750965 kernel: pci 0004:03:00.0: [1912:0014] type 00 class 0x0c0330 May 17 01:35:02.751026 kernel: pci 0004:03:00.0: reg 0x10: [mem 0x22200000-0x22201fff 64bit] May 17 01:35:02.751090 kernel: pci 0004:03:00.0: PME# supported from D0 D3hot D3cold May 17 01:35:02.751144 kernel: pci_bus 0004:00: on NUMA node 0 May 17 01:35:02.751205 kernel: pci 0004:00:01.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 01-02] add_size 200000 add_align 100000 May 17 01:35:02.751266 kernel: 
pci 0004:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000
May 17 01:35:02.751325 kernel: pci 0004:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 03] add_size 200000 add_align 100000
May 17 01:35:02.751384 kernel: pci 0004:00:03.0: bridge window [mem 0x00100000-0x001fffff] to [bus 03] add_size 100000 add_align 100000
May 17 01:35:02.751443 kernel: pci 0004:00:05.0: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000
May 17 01:35:02.751502 kernel: pci 0004:00:05.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 04] add_size 200000 add_align 100000
May 17 01:35:02.751561 kernel: pci 0004:00:05.0: bridge window [mem 0x00100000-0x000fffff] to [bus 04] add_size 200000 add_align 100000
May 17 01:35:02.751620 kernel: pci 0004:00:01.0: BAR 14: assigned [mem 0x20000000-0x22ffffff]
May 17 01:35:02.751680 kernel: pci 0004:00:01.0: BAR 15: assigned [mem 0x280000000000-0x2800001fffff 64bit pref]
May 17 01:35:02.751738 kernel: pci 0004:00:03.0: BAR 14: assigned [mem 0x23000000-0x231fffff]
May 17 01:35:02.751797 kernel: pci 0004:00:03.0: BAR 15: assigned [mem 0x280000200000-0x2800003fffff 64bit pref]
May 17 01:35:02.751855 kernel: pci 0004:00:05.0: BAR 14: assigned [mem 0x23200000-0x233fffff]
May 17 01:35:02.751913 kernel: pci 0004:00:05.0: BAR 15: assigned [mem 0x280000400000-0x2800005fffff 64bit pref]
May 17 01:35:02.751971 kernel: pci 0004:00:01.0: BAR 13: no space for [io size 0x1000]
May 17 01:35:02.752029 kernel: pci 0004:00:01.0: BAR 13: failed to assign [io size 0x1000]
May 17 01:35:02.752093 kernel: pci 0004:00:03.0: BAR 13: no space for [io size 0x1000]
May 17 01:35:02.752151 kernel: pci 0004:00:03.0: BAR 13: failed to assign [io size 0x1000]
May 17 01:35:02.752210 kernel: pci 0004:00:05.0: BAR 13: no space for [io size 0x1000]
May 17 01:35:02.752267 kernel: pci 0004:00:05.0: BAR 13: failed to assign [io size 0x1000]
May 17 01:35:02.752326 kernel: pci 0004:00:01.0: BAR 13: no space for [io size 0x1000]
May 17 01:35:02.752384 kernel: pci 0004:00:01.0: BAR 13: failed to assign [io size 0x1000]
May 17 01:35:02.752442 kernel: pci 0004:00:05.0: BAR 13: no space for [io size 0x1000]
May 17 01:35:02.752501 kernel: pci 0004:00:05.0: BAR 13: failed to assign [io size 0x1000]
May 17 01:35:02.752562 kernel: pci 0004:00:03.0: BAR 13: no space for [io size 0x1000]
May 17 01:35:02.752622 kernel: pci 0004:00:03.0: BAR 13: failed to assign [io size 0x1000]
May 17 01:35:02.752684 kernel: pci 0004:01:00.0: BAR 14: assigned [mem 0x20000000-0x22ffffff]
May 17 01:35:02.752746 kernel: pci 0004:01:00.0: BAR 13: no space for [io size 0x1000]
May 17 01:35:02.752808 kernel: pci 0004:01:00.0: BAR 13: failed to assign [io size 0x1000]
May 17 01:35:02.752873 kernel: pci 0004:02:00.0: BAR 0: assigned [mem 0x20000000-0x21ffffff]
May 17 01:35:02.752937 kernel: pci 0004:02:00.0: BAR 1: assigned [mem 0x22000000-0x2201ffff]
May 17 01:35:02.753002 kernel: pci 0004:02:00.0: BAR 2: no space for [io size 0x0080]
May 17 01:35:02.753071 kernel: pci 0004:02:00.0: BAR 2: failed to assign [io size 0x0080]
May 17 01:35:02.753134 kernel: pci 0004:01:00.0: PCI bridge to [bus 02]
May 17 01:35:02.753195 kernel: pci 0004:01:00.0: bridge window [mem 0x20000000-0x22ffffff]
May 17 01:35:02.753255 kernel: pci 0004:00:01.0: PCI bridge to [bus 01-02]
May 17 01:35:02.753313 kernel: pci 0004:00:01.0: bridge window [mem 0x20000000-0x22ffffff]
May 17 01:35:02.753373 kernel: pci 0004:00:01.0: bridge window [mem 0x280000000000-0x2800001fffff 64bit pref]
May 17 01:35:02.753435 kernel: pci 0004:03:00.0: BAR 0: assigned [mem 0x23000000-0x23001fff 64bit]
May 17 01:35:02.753494 kernel: pci 0004:00:03.0: PCI bridge to [bus 03]
May 17 01:35:02.753556 kernel: pci 0004:00:03.0: bridge window [mem 0x23000000-0x231fffff]
May 17 01:35:02.753616 kernel: pci 0004:00:03.0: bridge window [mem 0x280000200000-0x2800003fffff 64bit pref]
May 17 01:35:02.753677 kernel: pci 0004:00:05.0: PCI bridge to [bus 04]
May 17 01:35:02.753733 kernel: pci 0004:00:05.0: bridge window [mem 0x23200000-0x233fffff]
May 17 01:35:02.753793 kernel: pci 0004:00:05.0: bridge window [mem 0x280000400000-0x2800005fffff 64bit pref]
May 17 01:35:02.753847 kernel: pci_bus 0004:00: Some PCI device resources are unassigned, try booting with pci=realloc
May 17 01:35:02.753901 kernel: pci_bus 0004:00: resource 4 [mem 0x20000000-0x2fffffff window]
May 17 01:35:02.753955 kernel: pci_bus 0004:00: resource 5 [mem 0x280000000000-0x2bffdfffffff window]
May 17 01:35:02.754019 kernel: pci_bus 0004:01: resource 1 [mem 0x20000000-0x22ffffff]
May 17 01:35:02.754080 kernel: pci_bus 0004:01: resource 2 [mem 0x280000000000-0x2800001fffff 64bit pref]
May 17 01:35:02.754142 kernel: pci_bus 0004:02: resource 1 [mem 0x20000000-0x22ffffff]
May 17 01:35:02.754204 kernel: pci_bus 0004:03: resource 1 [mem 0x23000000-0x231fffff]
May 17 01:35:02.754259 kernel: pci_bus 0004:03: resource 2 [mem 0x280000200000-0x2800003fffff 64bit pref]
May 17 01:35:02.754322 kernel: pci_bus 0004:04: resource 1 [mem 0x23200000-0x233fffff]
May 17 01:35:02.754377 kernel: pci_bus 0004:04: resource 2 [mem 0x280000400000-0x2800005fffff 64bit pref]
May 17 01:35:02.754387 kernel: iommu: Default domain type: Translated
May 17 01:35:02.754394 kernel: iommu: DMA domain TLB invalidation policy: strict mode
May 17 01:35:02.754457 kernel: pci 0004:02:00.0: vgaarb: VGA device added: decodes=io+mem,owns=none,locks=none
May 17 01:35:02.754522 kernel: pci 0004:02:00.0: vgaarb: bridge control possible
May 17 01:35:02.754588 kernel: pci 0004:02:00.0: vgaarb: setting as boot device (VGA legacy resources not available)
May 17 01:35:02.754599 kernel: vgaarb: loaded
May 17 01:35:02.754607 kernel: pps_core: LinuxPPS API ver. 1 registered
May 17 01:35:02.754614 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
May 17 01:35:02.754622 kernel: PTP clock support registered
May 17 01:35:02.754629 kernel: Registered efivars operations
May 17 01:35:02.754636 kernel: clocksource: Switched to clocksource arch_sys_counter
May 17 01:35:02.754643 kernel: VFS: Disk quotas dquot_6.6.0
May 17 01:35:02.754651 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
May 17 01:35:02.754659 kernel: pnp: PnP ACPI init
May 17 01:35:02.754727 kernel: system 00:00: [mem 0x3bfff0000000-0x3bffffffffff window] could not be reserved
May 17 01:35:02.754781 kernel: system 00:00: [mem 0x3ffff0000000-0x3fffffffffff window] could not be reserved
May 17 01:35:02.754834 kernel: system 00:00: [mem 0x23fff0000000-0x23ffffffffff window] could not be reserved
May 17 01:35:02.754886 kernel: system 00:00: [mem 0x27fff0000000-0x27ffffffffff window] could not be reserved
May 17 01:35:02.754939 kernel: system 00:00: [mem 0x2bfff0000000-0x2bffffffffff window] could not be reserved
May 17 01:35:02.754991 kernel: system 00:00: [mem 0x2ffff0000000-0x2fffffffffff window] could not be reserved
May 17 01:35:02.755047 kernel: system 00:00: [mem 0x33fff0000000-0x33ffffffffff window] could not be reserved
May 17 01:35:02.755102 kernel: system 00:00: [mem 0x37fff0000000-0x37ffffffffff window] could not be reserved
May 17 01:35:02.755111 kernel: pnp: PnP ACPI: found 1 devices
May 17 01:35:02.755118 kernel: NET: Registered PF_INET protocol family
May 17 01:35:02.755126 kernel: IP idents hash table entries: 262144 (order: 9, 2097152 bytes, linear)
May 17 01:35:02.755133 kernel: tcp_listen_portaddr_hash hash table entries: 65536 (order: 8, 1048576 bytes, linear)
May 17 01:35:02.755141 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
May 17 01:35:02.755148 kernel: TCP established hash table entries: 524288 (order: 10, 4194304 bytes, linear)
May 17 01:35:02.755157 kernel: TCP bind hash table entries: 65536 (order: 8, 1048576 bytes, linear)
May 17 01:35:02.755165 kernel: TCP: Hash tables configured (established 524288 bind 65536)
May 17 01:35:02.755173 kernel: UDP hash table entries: 65536 (order: 9, 2097152 bytes, linear)
May 17 01:35:02.755180 kernel: UDP-Lite hash table entries: 65536 (order: 9, 2097152 bytes, linear)
May 17 01:35:02.755187 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
May 17 01:35:02.755248 kernel: pci 0001:01:00.0: CLS mismatch (64 != 32), using 64 bytes
May 17 01:35:02.755258 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available
May 17 01:35:02.755265 kernel: kvm [1]: IPA Size Limit: 48 bits
May 17 01:35:02.755272 kernel: kvm [1]: GICv3: no GICV resource entry
May 17 01:35:02.755281 kernel: kvm [1]: disabling GICv2 emulation
May 17 01:35:02.755289 kernel: kvm [1]: GIC system register CPU interface enabled
May 17 01:35:02.755296 kernel: kvm [1]: vgic interrupt IRQ9
May 17 01:35:02.755303 kernel: kvm [1]: VHE mode initialized successfully
May 17 01:35:02.755310 kernel: Initialise system trusted keyrings
May 17 01:35:02.755318 kernel: workingset: timestamp_bits=39 max_order=26 bucket_order=0
May 17 01:35:02.755325 kernel: Key type asymmetric registered
May 17 01:35:02.755332 kernel: Asymmetric key parser 'x509' registered
May 17 01:35:02.755339 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
May 17 01:35:02.755347 kernel: io scheduler mq-deadline registered
May 17 01:35:02.755355 kernel: io scheduler kyber registered
May 17 01:35:02.755362 kernel: io scheduler bfq registered
May 17 01:35:02.755369 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
May 17 01:35:02.755376 kernel: ACPI: button: Power Button [PWRB]
May 17 01:35:02.755384 kernel: ACPI GTDT: found 1 SBSA generic Watchdog(s).
May 17 01:35:02.755391 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
May 17 01:35:02.755458 kernel: arm-smmu-v3 arm-smmu-v3.0.auto: option mask 0x0
May 17 01:35:02.755514 kernel: arm-smmu-v3 arm-smmu-v3.0.auto: IDR0.COHACC overridden by FW configuration (false)
May 17 01:35:02.755572 kernel: arm-smmu-v3 arm-smmu-v3.0.auto: ias 48-bit, oas 48-bit (features 0x000c1eff)
May 17 01:35:02.755628 kernel: arm-smmu-v3 arm-smmu-v3.0.auto: allocated 262144 entries for cmdq
May 17 01:35:02.755683 kernel: arm-smmu-v3 arm-smmu-v3.0.auto: allocated 128 entries for evtq
May 17 01:35:02.755740 kernel: arm-smmu-v3 arm-smmu-v3.0.auto: allocated 256 entries for priq
May 17 01:35:02.755802 kernel: arm-smmu-v3 arm-smmu-v3.1.auto: option mask 0x0
May 17 01:35:02.755857 kernel: arm-smmu-v3 arm-smmu-v3.1.auto: IDR0.COHACC overridden by FW configuration (false)
May 17 01:35:02.755913 kernel: arm-smmu-v3 arm-smmu-v3.1.auto: ias 48-bit, oas 48-bit (features 0x000c1eff)
May 17 01:35:02.755969 kernel: arm-smmu-v3 arm-smmu-v3.1.auto: allocated 262144 entries for cmdq
May 17 01:35:02.756023 kernel: arm-smmu-v3 arm-smmu-v3.1.auto: allocated 128 entries for evtq
May 17 01:35:02.756082 kernel: arm-smmu-v3 arm-smmu-v3.1.auto: allocated 256 entries for priq
May 17 01:35:02.756145 kernel: arm-smmu-v3 arm-smmu-v3.2.auto: option mask 0x0
May 17 01:35:02.756200 kernel: arm-smmu-v3 arm-smmu-v3.2.auto: IDR0.COHACC overridden by FW configuration (false)
May 17 01:35:02.756258 kernel: arm-smmu-v3 arm-smmu-v3.2.auto: ias 48-bit, oas 48-bit (features 0x000c1eff)
May 17 01:35:02.756313 kernel: arm-smmu-v3 arm-smmu-v3.2.auto: allocated 262144 entries for cmdq
May 17 01:35:02.756368 kernel: arm-smmu-v3 arm-smmu-v3.2.auto: allocated 128 entries for evtq
May 17 01:35:02.756421 kernel: arm-smmu-v3 arm-smmu-v3.2.auto: allocated 256 entries for priq
May 17 01:35:02.756483 kernel: arm-smmu-v3 arm-smmu-v3.3.auto: option mask 0x0
May 17 01:35:02.756538 kernel: arm-smmu-v3 arm-smmu-v3.3.auto: IDR0.COHACC overridden by FW configuration (false)
May 17 01:35:02.756592 kernel: arm-smmu-v3 arm-smmu-v3.3.auto: ias 48-bit, oas 48-bit (features 0x000c1eff)
May 17 01:35:02.756647 kernel: arm-smmu-v3 arm-smmu-v3.3.auto: allocated 262144 entries for cmdq
May 17 01:35:02.756700 kernel: arm-smmu-v3 arm-smmu-v3.3.auto: allocated 128 entries for evtq
May 17 01:35:02.756754 kernel: arm-smmu-v3 arm-smmu-v3.3.auto: allocated 256 entries for priq
May 17 01:35:02.756821 kernel: arm-smmu-v3 arm-smmu-v3.4.auto: option mask 0x0
May 17 01:35:02.756876 kernel: arm-smmu-v3 arm-smmu-v3.4.auto: IDR0.COHACC overridden by FW configuration (false)
May 17 01:35:02.756929 kernel: arm-smmu-v3 arm-smmu-v3.4.auto: ias 48-bit, oas 48-bit (features 0x000c1eff)
May 17 01:35:02.756983 kernel: arm-smmu-v3 arm-smmu-v3.4.auto: allocated 262144 entries for cmdq
May 17 01:35:02.757038 kernel: arm-smmu-v3 arm-smmu-v3.4.auto: allocated 128 entries for evtq
May 17 01:35:02.757095 kernel: arm-smmu-v3 arm-smmu-v3.4.auto: allocated 256 entries for priq
May 17 01:35:02.757156 kernel: arm-smmu-v3 arm-smmu-v3.5.auto: option mask 0x0
May 17 01:35:02.757210 kernel: arm-smmu-v3 arm-smmu-v3.5.auto: IDR0.COHACC overridden by FW configuration (false)
May 17 01:35:02.757265 kernel: arm-smmu-v3 arm-smmu-v3.5.auto: ias 48-bit, oas 48-bit (features 0x000c1eff)
May 17 01:35:02.757318 kernel: arm-smmu-v3 arm-smmu-v3.5.auto: allocated 262144 entries for cmdq
May 17 01:35:02.757372 kernel: arm-smmu-v3 arm-smmu-v3.5.auto: allocated 128 entries for evtq
May 17 01:35:02.757427 kernel: arm-smmu-v3 arm-smmu-v3.5.auto: allocated 256 entries for priq
May 17 01:35:02.757488 kernel: arm-smmu-v3 arm-smmu-v3.6.auto: option mask 0x0
May 17 01:35:02.757543 kernel: arm-smmu-v3 arm-smmu-v3.6.auto: IDR0.COHACC overridden by FW configuration (false)
May 17 01:35:02.757597 kernel: arm-smmu-v3 arm-smmu-v3.6.auto: ias 48-bit, oas 48-bit (features 0x000c1eff)
May 17 01:35:02.757652 kernel: arm-smmu-v3 arm-smmu-v3.6.auto: allocated 262144 entries for cmdq
May 17 01:35:02.757706 kernel: arm-smmu-v3 arm-smmu-v3.6.auto: allocated 128 entries for evtq
May 17 01:35:02.757761 kernel: arm-smmu-v3 arm-smmu-v3.6.auto: allocated 256 entries for priq
May 17 01:35:02.757821 kernel: arm-smmu-v3 arm-smmu-v3.7.auto: option mask 0x0
May 17 01:35:02.757876 kernel: arm-smmu-v3 arm-smmu-v3.7.auto: IDR0.COHACC overridden by FW configuration (false)
May 17 01:35:02.757929 kernel: arm-smmu-v3 arm-smmu-v3.7.auto: ias 48-bit, oas 48-bit (features 0x000c1eff)
May 17 01:35:02.757983 kernel: arm-smmu-v3 arm-smmu-v3.7.auto: allocated 262144 entries for cmdq
May 17 01:35:02.758037 kernel: arm-smmu-v3 arm-smmu-v3.7.auto: allocated 128 entries for evtq
May 17 01:35:02.758096 kernel: arm-smmu-v3 arm-smmu-v3.7.auto: allocated 256 entries for priq
May 17 01:35:02.758106 kernel: thunder_xcv, ver 1.0
May 17 01:35:02.758113 kernel: thunder_bgx, ver 1.0
May 17 01:35:02.758120 kernel: nicpf, ver 1.0
May 17 01:35:02.758128 kernel: nicvf, ver 1.0
May 17 01:35:02.758190 kernel: rtc-efi rtc-efi.0: registered as rtc0
May 17 01:35:02.758245 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-05-17T01:35:01 UTC (1747445701)
May 17 01:35:02.758254 kernel: efifb: probing for efifb
May 17 01:35:02.758263 kernel: efifb: framebuffer at 0x20000000, using 1876k, total 1875k
May 17 01:35:02.758271 kernel: efifb: mode is 800x600x32, linelength=3200, pages=1
May 17 01:35:02.758278 kernel: efifb: scrolling: redraw
May 17 01:35:02.758285 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
May 17 01:35:02.758292 kernel: Console: switching to colour frame buffer device 100x37
May 17 01:35:02.758299 kernel: fb0: EFI VGA frame buffer device
May 17 01:35:02.758306 kernel: SMCCC: SOC_ID: ID = jep106:0a16:0001 Revision = 0x000000a1
May 17 01:35:02.758314 kernel: hid: raw HID events driver (C) Jiri Kosina
May 17 01:35:02.758321 kernel: NET: Registered PF_INET6 protocol family
May 17 01:35:02.758331 kernel: Segment Routing with IPv6
May 17 01:35:02.758338 kernel: In-situ OAM (IOAM) with IPv6
May 17 01:35:02.758345 kernel: NET: Registered PF_PACKET protocol family
May 17 01:35:02.758352 kernel: Key type dns_resolver registered
May 17 01:35:02.758359 kernel: registered taskstats version 1
May 17 01:35:02.758367 kernel: Loading compiled-in X.509 certificates
May 17 01:35:02.758374 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.182-flatcar: 2fa973ae674d09a62938b8c6a2b9446b5340adb7'
May 17 01:35:02.758381 kernel: Key type .fscrypt registered
May 17 01:35:02.758388 kernel: Key type fscrypt-provisioning registered
May 17 01:35:02.758395 kernel: ima: No TPM chip found, activating TPM-bypass!
May 17 01:35:02.758404 kernel: ima: Allocated hash algorithm: sha1
May 17 01:35:02.758411 kernel: ima: No architecture policies found
May 17 01:35:02.758473 kernel: pcieport 000d:00:01.0: Adding to iommu group 0
May 17 01:35:02.758532 kernel: pcieport 000d:00:01.0: AER: enabled with IRQ 91
May 17 01:35:02.758592 kernel: pcieport 000d:00:02.0: Adding to iommu group 1
May 17 01:35:02.758651 kernel: pcieport 000d:00:02.0: AER: enabled with IRQ 91
May 17 01:35:02.758711 kernel: pcieport 000d:00:03.0: Adding to iommu group 2
May 17 01:35:02.758770 kernel: pcieport 000d:00:03.0: AER: enabled with IRQ 91
May 17 01:35:02.758831 kernel: pcieport 000d:00:04.0: Adding to iommu group 3
May 17 01:35:02.758890 kernel: pcieport 000d:00:04.0: AER: enabled with IRQ 91
May 17 01:35:02.758951 kernel: pcieport 0000:00:01.0: Adding to iommu group 4
May 17 01:35:02.759009 kernel: pcieport 0000:00:01.0: AER: enabled with IRQ 92
May 17 01:35:02.759072 kernel: pcieport 0000:00:02.0: Adding to iommu group 5
May 17 01:35:02.759130 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 92
May 17 01:35:02.759193 kernel: pcieport 0000:00:03.0: Adding to iommu group 6
May 17 01:35:02.759250 kernel: pcieport 0000:00:03.0: AER: enabled with IRQ 92
May 17 01:35:02.759312 kernel: pcieport 0000:00:04.0: Adding to iommu group 7
May 17 01:35:02.759370 kernel: pcieport 0000:00:04.0: AER: enabled with IRQ 92
May 17 01:35:02.759431 kernel: pcieport 0005:00:01.0: Adding to iommu group 8
May 17 01:35:02.759489 kernel: pcieport 0005:00:01.0: AER: enabled with IRQ 93
May 17 01:35:02.759548 kernel: pcieport 0005:00:03.0: Adding to iommu group 9
May 17 01:35:02.759606 kernel: pcieport 0005:00:03.0: AER: enabled with IRQ 93
May 17 01:35:02.759665 kernel: pcieport 0005:00:05.0: Adding to iommu group 10
May 17 01:35:02.759723 kernel: pcieport 0005:00:05.0: AER: enabled with IRQ 93
May 17 01:35:02.759782 kernel: pcieport 0005:00:07.0: Adding to iommu group 11
May 17 01:35:02.759844 kernel: pcieport 0005:00:07.0: AER: enabled with IRQ 93
May 17 01:35:02.759903 kernel: pcieport 0003:00:01.0: Adding to iommu group 12
May 17 01:35:02.759961 kernel: pcieport 0003:00:01.0: AER: enabled with IRQ 94
May 17 01:35:02.760020 kernel: pcieport 0003:00:03.0: Adding to iommu group 13
May 17 01:35:02.760081 kernel: pcieport 0003:00:03.0: AER: enabled with IRQ 94
May 17 01:35:02.760143 kernel: pcieport 0003:00:05.0: Adding to iommu group 14
May 17 01:35:02.760201 kernel: pcieport 0003:00:05.0: AER: enabled with IRQ 94
May 17 01:35:02.760261 kernel: pcieport 000c:00:01.0: Adding to iommu group 15
May 17 01:35:02.760321 kernel: pcieport 000c:00:01.0: AER: enabled with IRQ 95
May 17 01:35:02.760381 kernel: pcieport 000c:00:02.0: Adding to iommu group 16
May 17 01:35:02.760439 kernel: pcieport 000c:00:02.0: AER: enabled with IRQ 95
May 17 01:35:02.760499 kernel: pcieport 000c:00:03.0: Adding to iommu group 17
May 17 01:35:02.760558 kernel: pcieport 000c:00:03.0: AER: enabled with IRQ 95
May 17 01:35:02.760617 kernel: pcieport 000c:00:04.0: Adding to iommu group 18
May 17 01:35:02.760677 kernel: pcieport 000c:00:04.0: AER: enabled with IRQ 95
May 17 01:35:02.760737 kernel: pcieport 0002:00:01.0: Adding to iommu group 19
May 17 01:35:02.760797 kernel: pcieport 0002:00:01.0: AER: enabled with IRQ 96
May 17 01:35:02.760857 kernel: pcieport 0002:00:03.0: Adding to iommu group 20
May 17 01:35:02.760916 kernel: pcieport 0002:00:03.0: AER: enabled with IRQ 96
May 17 01:35:02.760975 kernel: pcieport 0002:00:05.0: Adding to iommu group 21
May 17 01:35:02.761033 kernel: pcieport 0002:00:05.0: AER: enabled with IRQ 96
May 17 01:35:02.761096 kernel: pcieport 0002:00:07.0: Adding to iommu group 22
May 17 01:35:02.761155 kernel: pcieport 0002:00:07.0: AER: enabled with IRQ 96
May 17 01:35:02.761216 kernel: pcieport 0001:00:01.0: Adding to iommu group 23
May 17 01:35:02.761275 kernel: pcieport 0001:00:01.0: AER: enabled with IRQ 97
May 17 01:35:02.761335 kernel: pcieport 0001:00:02.0: Adding to iommu group 24
May 17 01:35:02.761393 kernel: pcieport 0001:00:02.0: AER: enabled with IRQ 97
May 17 01:35:02.761451 kernel: pcieport 0001:00:03.0: Adding to iommu group 25
May 17 01:35:02.761510 kernel: pcieport 0001:00:03.0: AER: enabled with IRQ 97
May 17 01:35:02.761569 kernel: pcieport 0001:00:04.0: Adding to iommu group 26
May 17 01:35:02.761627 kernel: pcieport 0001:00:04.0: AER: enabled with IRQ 97
May 17 01:35:02.761686 kernel: pcieport 0004:00:01.0: Adding to iommu group 27
May 17 01:35:02.761746 kernel: pcieport 0004:00:01.0: AER: enabled with IRQ 98
May 17 01:35:02.761805 kernel: pcieport 0004:00:03.0: Adding to iommu group 28
May 17 01:35:02.761864 kernel: pcieport 0004:00:03.0: AER: enabled with IRQ 98
May 17 01:35:02.761922 kernel: pcieport 0004:00:05.0: Adding to iommu group 29
May 17 01:35:02.761980 kernel: pcieport 0004:00:05.0: AER: enabled with IRQ 98
May 17 01:35:02.762042 kernel: pcieport 0004:01:00.0: Adding to iommu group 30
May 17 01:35:02.762056 kernel: clk: Disabling unused clocks
May 17 01:35:02.762064 kernel: Freeing unused kernel memory: 36416K
May 17 01:35:02.762072 kernel: Run /init as init process
May 17 01:35:02.762080 kernel: with arguments:
May 17 01:35:02.762087 kernel: /init
May 17 01:35:02.762094 kernel: with environment:
May 17 01:35:02.762101 kernel: HOME=/
May 17 01:35:02.762108 kernel: TERM=linux
May 17 01:35:02.762115 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
May 17 01:35:02.762125 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
May 17 01:35:02.762136 systemd[1]: Detected architecture arm64.
May 17 01:35:02.762144 systemd[1]: Running in initrd.
May 17 01:35:02.762152 systemd[1]: No hostname configured, using default hostname.
May 17 01:35:02.762159 systemd[1]: Hostname set to .
May 17 01:35:02.762167 systemd[1]: Initializing machine ID from random generator.
May 17 01:35:02.762174 systemd[1]: Queued start job for default target initrd.target.
May 17 01:35:02.762182 systemd[1]: Started systemd-ask-password-console.path.
May 17 01:35:02.762189 systemd[1]: Reached target cryptsetup.target.
May 17 01:35:02.762198 systemd[1]: Reached target paths.target.
May 17 01:35:02.762205 systemd[1]: Reached target slices.target.
May 17 01:35:02.762213 systemd[1]: Reached target swap.target.
May 17 01:35:02.762220 systemd[1]: Reached target timers.target.
May 17 01:35:02.762227 systemd[1]: Listening on iscsid.socket.
May 17 01:35:02.762235 systemd[1]: Listening on iscsiuio.socket.
May 17 01:35:02.762243 systemd[1]: Listening on systemd-journald-audit.socket.
May 17 01:35:02.762250 systemd[1]: Listening on systemd-journald-dev-log.socket.
May 17 01:35:02.762259 systemd[1]: Listening on systemd-journald.socket.
May 17 01:35:02.762267 systemd[1]: Listening on systemd-networkd.socket.
May 17 01:35:02.762274 systemd[1]: Listening on systemd-udevd-control.socket.
May 17 01:35:02.762282 systemd[1]: Listening on systemd-udevd-kernel.socket.
May 17 01:35:02.762289 systemd[1]: Reached target sockets.target.
May 17 01:35:02.762297 systemd[1]: Starting kmod-static-nodes.service...
May 17 01:35:02.762304 systemd[1]: Finished network-cleanup.service.
May 17 01:35:02.762312 systemd[1]: Starting systemd-fsck-usr.service...
May 17 01:35:02.762319 systemd[1]: Starting systemd-journald.service...
May 17 01:35:02.762328 systemd[1]: Starting systemd-modules-load.service...
May 17 01:35:02.762335 systemd[1]: Starting systemd-resolved.service...
May 17 01:35:02.762343 systemd[1]: Starting systemd-vconsole-setup.service...
May 17 01:35:02.762351 systemd[1]: Finished kmod-static-nodes.service.
May 17 01:35:02.762358 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
May 17 01:35:02.762369 systemd-journald[874]: Journal started
May 17 01:35:02.762412 systemd-journald[874]: Runtime Journal (/run/log/journal/f36fb307983d4e52aa65178944432086) is 8.0M, max 4.0G, 3.9G free.
May 17 01:35:02.703155 systemd-modules-load[875]: Inserted module 'overlay'
May 17 01:35:02.834296 kernel: audit: type=1130 audit(1747445702.769:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 01:35:02.834324 systemd[1]: Started systemd-resolved.service.
May 17 01:35:02.834352 kernel: Bridge firewalling registered
May 17 01:35:02.834369 kernel: SCSI subsystem initialized
May 17 01:35:02.769000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 01:35:02.750853 systemd-resolved[876]: Positive Trust Anchors:
May 17 01:35:02.930291 kernel: audit: type=1130 audit(1747445702.838:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 01:35:02.930304 systemd[1]: Started systemd-journald.service.
May 17 01:35:02.930316 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
May 17 01:35:02.930326 kernel: device-mapper: uevent: version 1.0.3
May 17 01:35:02.930336 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com
May 17 01:35:02.838000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 01:35:02.750860 systemd-resolved[876]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 17 01:35:02.978968 kernel: audit: type=1130 audit(1747445702.935:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 01:35:02.935000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 01:35:02.750887 systemd-resolved[876]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
May 17 01:35:03.047918 kernel: audit: type=1130 audit(1747445702.983:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 01:35:02.983000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 01:35:02.755890 systemd-resolved[876]: Defaulting to hostname 'linux'.
May 17 01:35:03.092846 kernel: audit: type=1130 audit(1747445703.052:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 01:35:03.052000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 01:35:02.771754 systemd-modules-load[875]: Inserted module 'br_netfilter'
May 17 01:35:03.143225 kernel: audit: type=1130 audit(1747445703.097:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 01:35:03.097000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 01:35:02.919465 systemd-modules-load[875]: Inserted module 'dm_multipath'
May 17 01:35:02.935431 systemd[1]: Finished systemd-fsck-usr.service.
May 17 01:35:02.984041 systemd[1]: Finished systemd-modules-load.service.
May 17 01:35:03.053140 systemd[1]: Finished systemd-vconsole-setup.service.
May 17 01:35:03.098018 systemd[1]: Reached target nss-lookup.target.
May 17 01:35:03.228821 kernel: audit: type=1130 audit(1747445703.190:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 01:35:03.190000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 01:35:03.149511 systemd[1]: Starting dracut-cmdline-ask.service...
May 17 01:35:03.272335 kernel: audit: type=1130 audit(1747445703.233:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 01:35:03.233000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 01:35:03.160234 systemd[1]: Starting systemd-sysctl.service...
May 17 01:35:03.319837 kernel: audit: type=1130 audit(1747445703.277:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 01:35:03.277000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 01:35:03.171150 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
May 17 01:35:03.330885 dracut-cmdline[895]: dracut-dracut-053
May 17 01:35:03.330885 dracut-cmdline[895]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty0 console=t
May 17 01:35:03.330885 dracut-cmdline[895]: tyS1,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=packet flatcar.autologin verity.usrhash=2d48a3f066dcb37cd386b93b4921577cdf70daa76e7b097cf98da108968f8bb5
May 17 01:35:03.181705 systemd[1]: Finished systemd-sysctl.service.
May 17 01:35:03.190282 systemd[1]: Finished dracut-cmdline-ask.service.
May 17 01:35:03.233650 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
May 17 01:35:03.278619 systemd[1]: Starting dracut-cmdline.service...
May 17 01:35:03.412058 kernel: Loading iSCSI transport class v2.0-870.
May 17 01:35:03.434053 kernel: iscsi: registered transport (tcp)
May 17 01:35:03.465034 kernel: iscsi: registered transport (qla4xxx)
May 17 01:35:03.465059 kernel: QLogic iSCSI HBA Driver
May 17 01:35:03.495859 systemd[1]: Finished dracut-cmdline.service.
May 17 01:35:03.505000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 01:35:03.506136 systemd[1]: Starting dracut-pre-udev.service...
May 17 01:35:03.561058 kernel: raid6: neonx8 gen() 13781 MB/s
May 17 01:35:03.586055 kernel: raid6: neonx8 xor() 10855 MB/s
May 17 01:35:03.611055 kernel: raid6: neonx4 gen() 13617 MB/s
May 17 01:35:03.636054 kernel: raid6: neonx4 xor() 11369 MB/s
May 17 01:35:03.661055 kernel: raid6: neonx2 gen() 12990 MB/s
May 17 01:35:03.686055 kernel: raid6: neonx2 xor() 10688 MB/s
May 17 01:35:03.711055 kernel: raid6: neonx1 gen() 10552 MB/s
May 17 01:35:03.736055 kernel: raid6: neonx1 xor() 8809 MB/s
May 17 01:35:03.760055 kernel: raid6: int64x8 gen() 6288 MB/s
May 17 01:35:03.784054 kernel: raid6: int64x8 xor() 3554 MB/s
May 17 01:35:03.808055 kernel: raid6: int64x4 gen() 7199 MB/s
May 17 01:35:03.832055 kernel: raid6: int64x4 xor() 3873 MB/s
May 17 01:35:03.856055 kernel: raid6: int64x2 gen() 6174 MB/s
May 17 01:35:03.880054 kernel: raid6: int64x2 xor() 3333 MB/s
May 17 01:35:03.904055 kernel: raid6: int64x1 gen() 5059 MB/s
May 17 01:35:03.935472 kernel: raid6: int64x1 xor() 2656 MB/s
May 17 01:35:03.935491 kernel: raid6: using algorithm neonx8 gen() 13781 MB/s
May 17 01:35:03.935508 kernel: raid6: .... xor() 10855 MB/s, rmw enabled
May 17 01:35:03.943514 kernel: raid6: using neon recovery algorithm
May 17 01:35:03.975742 kernel: xor: measuring software checksum speed
May 17 01:35:03.975761 kernel: 8regs : 17213 MB/sec
May 17 01:35:03.982675 kernel: 32regs : 20717 MB/sec
May 17 01:35:03.989450 kernel: arm64_neon : 27993 MB/sec
May 17 01:35:03.989472 kernel: xor: using function: arm64_neon (27993 MB/sec)
May 17 01:35:04.057054 kernel: Btrfs loaded, crc32c=crc32c-generic, zoned=no, fsverity=no
May 17 01:35:04.065725 systemd[1]: Finished dracut-pre-udev.service.
May 17 01:35:04.124648 kernel: kauditd_printk_skb: 1 callbacks suppressed
May 17 01:35:04.124661 kernel: audit: type=1130 audit(1747445704.068:12): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 01:35:04.124671 kernel: audit: type=1334 audit(1747445704.068:13): prog-id=7 op=LOAD
May 17 01:35:04.124680 kernel: audit: type=1334 audit(1747445704.069:14): prog-id=8 op=LOAD
May 17 01:35:04.068000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 01:35:04.068000 audit: BPF prog-id=7 op=LOAD
May 17 01:35:04.069000 audit: BPF prog-id=8 op=LOAD
May 17 01:35:04.070106 systemd[1]: Starting systemd-udevd.service...
May 17 01:35:04.136279 systemd-udevd[1071]: Using default interface naming scheme 'v252'.
May 17 01:35:04.139650 systemd[1]: Started systemd-udevd.service.
May 17 01:35:04.176501 kernel: audit: type=1130 audit(1747445704.143:15): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 01:35:04.143000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 01:35:04.146090 systemd[1]: Starting dracut-pre-trigger.service...
May 17 01:35:04.184183 dracut-pre-trigger[1078]: rd.md=0: removing MD RAID activation
May 17 01:35:04.193003 systemd[1]: Finished dracut-pre-trigger.service.
May 17 01:35:04.233557 kernel: audit: type=1130 audit(1747445704.197:16): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 01:35:04.197000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 01:35:04.198518 systemd[1]: Starting systemd-udev-trigger.service...
May 17 01:35:04.285890 systemd[1]: Finished systemd-udev-trigger.service. May 17 01:35:04.296000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 01:35:04.332053 kernel: audit: type=1130 audit(1747445704.296:17): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 01:35:04.352928 kernel: ACPI: bus type USB registered May 17 01:35:04.352958 kernel: mlx5_core 0001:01:00.0: Adding to iommu group 31 May 17 01:35:05.308492 kernel: usbcore: registered new interface driver usbfs May 17 01:35:05.308525 kernel: mlx5_core 0001:01:00.0: firmware version: 14.31.1014 May 17 01:35:05.308635 kernel: usbcore: registered new interface driver hub May 17 01:35:05.308645 kernel: mlx5_core 0001:01:00.0: 31.504 Gb/s available PCIe bandwidth, limited by 8.0 GT/s PCIe x4 link at 0001:00:01.0 (capable of 63.008 Gb/s with 8.0 GT/s PCIe x8 link) May 17 01:35:05.308716 kernel: igb: Intel(R) Gigabit Ethernet Network Driver May 17 01:35:05.308725 kernel: igb: Copyright (c) 2007-2014 Intel Corporation. May 17 01:35:05.308739 kernel: igb 0003:03:00.0: Adding to iommu group 32 May 17 01:35:05.308811 kernel: usbcore: registered new device driver usb May 17 01:35:05.308821 kernel: nvme 0005:03:00.0: Adding to iommu group 33 May 17 01:35:05.308892 kernel: xhci_hcd 0004:03:00.0: Adding to iommu group 34 May 17 01:35:05.308963 kernel: igb 0003:03:00.0: added PHC on eth0 May 17 01:35:05.309029 kernel: igb 0003:03:00.0: Intel(R) Gigabit Ethernet Network Connection May 17 01:35:05.309100 kernel: igb 0003:03:00.0: eth0: (PCIe:5.0Gb/s:Width x2) 18:c0:4d:80:52:f2 May 17 01:35:05.309166 kernel: igb 0003:03:00.0: eth0: PBA No: 106300-000 May 17 01:35:05.309232 kernel: igb 0003:03:00.0: Using MSI-X interrupts. 
8 rx queue(s), 8 tx queue(s) May 17 01:35:05.309296 kernel: xhci_hcd 0004:03:00.0: xHCI Host Controller May 17 01:35:05.309360 kernel: igb 0003:03:00.1: Adding to iommu group 35 May 17 01:35:05.309428 kernel: xhci_hcd 0004:03:00.0: new USB bus registered, assigned bus number 1 May 17 01:35:05.309491 kernel: nvme nvme0: pci function 0005:03:00.0 May 17 01:35:05.309573 kernel: xhci_hcd 0004:03:00.0: Zeroing 64bit base registers, expecting fault May 17 01:35:05.309639 kernel: nvme 0005:04:00.0: Adding to iommu group 36 May 17 01:35:05.309711 kernel: nvme nvme0: Shutdown timeout set to 8 seconds May 17 01:35:05.309777 kernel: nvme nvme0: 32/0/0 default/read/poll queues May 17 01:35:05.309840 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. May 17 01:35:05.309851 kernel: GPT:9289727 != 1875385007 May 17 01:35:05.309860 kernel: GPT:Alternate GPT header not at the end of the disk. May 17 01:35:05.309869 kernel: GPT:9289727 != 1875385007 May 17 01:35:05.309877 kernel: GPT: Use GNU Parted to correct GPT errors. May 17 01:35:05.309886 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 May 17 01:35:05.309897 kernel: mlx5_core 0001:01:00.0: Port module event: module 0, Cable plugged May 17 01:35:05.309966 kernel: xhci_hcd 0004:03:00.0: hcc params 0x014051cf hci version 0x100 quirks 0x0000001100000410 May 17 01:35:05.310030 kernel: xhci_hcd 0004:03:00.0: xHCI Host Controller May 17 01:35:05.310099 kernel: xhci_hcd 0004:03:00.0: new USB bus registered, assigned bus number 2 May 17 01:35:05.310163 kernel: xhci_hcd 0004:03:00.0: Host supports USB 3.0 SuperSpeed May 17 01:35:05.310228 kernel: hub 1-0:1.0: USB hub found May 17 01:35:05.310310 kernel: hub 1-0:1.0: 4 ports detected May 17 01:35:05.310382 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM. 
May 17 01:35:05.394479 kernel: nvme nvme1: pci function 0005:04:00.0 May 17 01:35:05.394608 kernel: hub 2-0:1.0: USB hub found May 17 01:35:05.394691 kernel: nvme nvme1: Shutdown timeout set to 8 seconds May 17 01:35:05.394755 kernel: hub 2-0:1.0: 4 ports detected May 17 01:35:05.394828 kernel: igb 0003:03:00.1: added PHC on eth1 May 17 01:35:05.394907 kernel: igb 0003:03:00.1: Intel(R) Gigabit Ethernet Network Connection May 17 01:35:05.394976 kernel: nvme nvme1: 32/0/0 default/read/poll queues May 17 01:35:05.395044 kernel: igb 0003:03:00.1: eth1: (PCIe:5.0Gb/s:Width x2) 18:c0:4d:80:52:f3 May 17 01:35:05.395114 kernel: igb 0003:03:00.1: eth1: PBA No: 106300-000 May 17 01:35:05.395189 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/nvme0n1p6 scanned by (udev-worker) (1120) May 17 01:35:05.395199 kernel: igb 0003:03:00.1: Using MSI-X interrupts. 8 rx queue(s), 8 tx queue(s) May 17 01:35:05.395264 kernel: igb 0003:03:00.1 eno2: renamed from eth1 May 17 01:35:05.395329 kernel: igb 0003:03:00.0 eno1: renamed from eth0 May 17 01:35:05.395396 kernel: mlx5_core 0001:01:00.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0) May 17 01:35:05.395465 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 May 17 01:35:05.395474 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 May 17 01:35:05.395483 kernel: usb 1-3: new high-speed USB device number 2 using xhci_hcd May 17 01:35:05.395608 kernel: hub 1-3:1.0: USB hub found May 17 01:35:05.395691 kernel: hub 1-3:1.0: 4 ports detected May 17 01:35:05.395767 kernel: mlx5_core 0001:01:00.1: Adding to iommu group 37 May 17 01:35:06.091889 kernel: usb 2-3: new SuperSpeed USB device number 2 using xhci_hcd May 17 01:35:06.092089 kernel: mlx5_core 0001:01:00.1: firmware version: 14.31.1014 May 17 01:35:06.092164 kernel: mlx5_core 0001:01:00.1: 31.504 Gb/s available PCIe bandwidth, limited by 8.0 GT/s PCIe x4 link at 0001:00:01.0 (capable of 63.008 Gb/s with 8.0 GT/s PCIe x8 link) May 17 01:35:06.092228 kernel: hub 2-3:1.0: USB hub found May 17 
01:35:06.092316 kernel: hub 2-3:1.0: 4 ports detected May 17 01:35:06.092396 kernel: mlx5_core 0001:01:00.1: Port module event: module 1, Cable plugged May 17 01:35:06.092485 kernel: mlx5_core 0001:01:00.1: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0) May 17 01:35:06.092572 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 May 17 01:35:04.919513 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. May 17 01:35:06.108060 kernel: mlx5_core 0001:01:00.1 enP1p1s0f1np1: renamed from eth1 May 17 01:35:04.963442 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. May 17 01:35:06.128565 kernel: mlx5_core 0001:01:00.0 enP1p1s0f0np0: renamed from eth0 May 17 01:35:06.128662 disk-uuid[1172]: Primary Header is updated. May 17 01:35:06.128662 disk-uuid[1172]: Secondary Entries is updated. May 17 01:35:06.128662 disk-uuid[1172]: Secondary Header is updated. May 17 01:35:04.977921 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. May 17 01:35:06.148610 disk-uuid[1173]: The operation has completed successfully. May 17 01:35:04.998909 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. May 17 01:35:05.006631 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. May 17 01:35:05.013657 systemd[1]: Starting disk-uuid.service... May 17 01:35:06.209644 systemd[1]: disk-uuid.service: Deactivated successfully. May 17 01:35:06.209721 systemd[1]: Finished disk-uuid.service. May 17 01:35:06.286613 kernel: audit: type=1130 audit(1747445706.218:18): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 01:35:06.286633 kernel: audit: type=1131 audit(1747445706.218:19): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 17 01:35:06.218000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 01:35:06.218000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 01:35:06.219838 systemd[1]: Starting verity-setup.service... May 17 01:35:06.307411 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" May 17 01:35:06.317485 systemd[1]: Found device dev-mapper-usr.device. May 17 01:35:06.328085 systemd[1]: Mounting sysusr-usr.mount... May 17 01:35:06.338685 systemd[1]: Finished verity-setup.service. May 17 01:35:06.348000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 01:35:06.397691 systemd[1]: Mounted sysusr-usr.mount. May 17 01:35:06.402981 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. May 17 01:35:06.407641 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. May 17 01:35:06.408900 systemd[1]: Starting ignition-setup.service... May 17 01:35:06.474011 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm May 17 01:35:06.474023 kernel: BTRFS info (device nvme0n1p6): using free space tree May 17 01:35:06.474032 kernel: BTRFS info (device nvme0n1p6): has skinny extents May 17 01:35:06.474041 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations May 17 01:35:06.469740 systemd[1]: Starting parse-ip-for-networkd.service... May 17 01:35:06.482666 systemd[1]: Finished ignition-setup.service. 
May 17 01:35:06.492000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 01:35:06.494347 systemd[1]: Starting ignition-fetch-offline.service... May 17 01:35:06.561080 systemd[1]: Finished parse-ip-for-networkd.service. May 17 01:35:06.566371 ignition[1413]: Ignition 2.14.0 May 17 01:35:06.572000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 01:35:06.573000 audit: BPF prog-id=9 op=LOAD May 17 01:35:06.566378 ignition[1413]: Stage: fetch-offline May 17 01:35:06.575042 systemd[1]: Starting systemd-networkd.service... May 17 01:35:06.594000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 01:35:06.566446 ignition[1413]: reading system config file "/usr/lib/ignition/base.d/base.ign" May 17 01:35:06.580162 unknown[1413]: fetched base config from "system" May 17 01:35:06.566465 ignition[1413]: parsing config with SHA512: 0131bd505bfe1b1215ca4ec9809701a3323bf448114294874f7249d8d300440bd742a7532f60673bfa0746c04de0bd5ca68d0fe9a8ecd59464b13a6401323cb4 May 17 01:35:06.580169 unknown[1413]: fetched user config from "system" May 17 01:35:06.611000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 01:35:06.571076 ignition[1413]: no config dir at "/usr/lib/ignition/base.platform.d/packet" May 17 01:35:06.584086 systemd[1]: Finished ignition-fetch-offline.service. 
May 17 01:35:06.571192 ignition[1413]: parsed url from cmdline: "" May 17 01:35:06.605400 systemd-networkd[1546]: lo: Link UP May 17 01:35:06.571195 ignition[1413]: no config URL provided May 17 01:35:06.605403 systemd-networkd[1546]: lo: Gained carrier May 17 01:35:06.650000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 01:35:06.571200 ignition[1413]: reading system config file "/usr/lib/ignition/user.ign" May 17 01:35:06.605849 systemd-networkd[1546]: Enumeration completed May 17 01:35:06.571254 ignition[1413]: parsing config with SHA512: 44875212e2acdf07dc21ee30eb76f68b3e1a50cf407749ee4cd1cfe623f384f83557d2b15d1aa82f70aa409608d88ed3e294443e8528558f37c20f6597c994ad May 17 01:35:06.605942 systemd[1]: Started systemd-networkd.service. May 17 01:35:06.671000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 01:35:06.676290 iscsid[1564]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi May 17 01:35:06.676290 iscsid[1564]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log May 17 01:35:06.676290 iscsid[1564]: into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a string with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier]. May 17 01:35:06.676290 iscsid[1564]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. May 17 01:35:06.676290 iscsid[1564]: If using hardware iscsi like qla4xxx this message can be ignored. 
May 17 01:35:06.676290 iscsid[1564]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi May 17 01:35:06.676290 iscsid[1564]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf May 17 01:35:06.707000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 01:35:06.580719 ignition[1413]: fetch-offline: fetch-offline passed May 17 01:35:06.776000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 01:35:06.606542 systemd-networkd[1546]: enP1p1s0f1np1: Configuring with /usr/lib/systemd/network/zz-default.network. May 17 01:35:06.580723 ignition[1413]: POST message to Packet Timeline May 17 01:35:06.612006 systemd[1]: Reached target network.target. May 17 01:35:06.580728 ignition[1413]: POST Status error: resource requires networking May 17 01:35:06.620116 systemd[1]: ignition-fetch.service was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). May 17 01:35:06.580784 ignition[1413]: Ignition finished successfully May 17 01:35:06.621496 systemd[1]: Starting ignition-kargs.service... May 17 01:35:06.635823 ignition[1549]: Ignition 2.14.0 May 17 01:35:06.630929 systemd[1]: Starting iscsiuio.service... May 17 01:35:06.635830 ignition[1549]: Stage: kargs May 17 01:35:06.644927 systemd[1]: Started iscsiuio.service. May 17 01:35:06.635950 ignition[1549]: reading system config file "/usr/lib/ignition/base.d/base.ign" May 17 01:35:06.652411 systemd[1]: Starting iscsid.service... 
May 17 01:35:06.635966 ignition[1549]: parsing config with SHA512: 0131bd505bfe1b1215ca4ec9809701a3323bf448114294874f7249d8d300440bd742a7532f60673bfa0746c04de0bd5ca68d0fe9a8ecd59464b13a6401323cb4 May 17 01:35:06.665267 systemd[1]: Started iscsid.service. May 17 01:35:06.638357 ignition[1549]: no config dir at "/usr/lib/ignition/base.platform.d/packet" May 17 01:35:06.673603 systemd[1]: Starting dracut-initqueue.service... May 17 01:35:06.639338 ignition[1549]: kargs: kargs passed May 17 01:35:06.693517 systemd[1]: Finished dracut-initqueue.service. May 17 01:35:06.639343 ignition[1549]: POST message to Packet Timeline May 17 01:35:06.707904 systemd[1]: Reached target remote-fs-pre.target. May 17 01:35:06.639356 ignition[1549]: GET https://metadata.packet.net/metadata: attempt #1 May 17 01:35:06.726192 systemd[1]: Reached target remote-cryptsetup.target. May 17 01:35:06.644012 ignition[1549]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:34222->[::1]:53: read: connection refused May 17 01:35:06.735729 systemd[1]: Reached target remote-fs.target. May 17 01:35:06.844956 ignition[1549]: GET https://metadata.packet.net/metadata: attempt #2 May 17 01:35:06.746872 systemd[1]: Starting dracut-pre-mount.service... May 17 01:35:06.846644 ignition[1549]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:44076->[::1]:53: read: connection refused May 17 01:35:06.761826 systemd[1]: Finished dracut-pre-mount.service. 
May 17 01:35:07.246765 ignition[1549]: GET https://metadata.packet.net/metadata: attempt #3 May 17 01:35:07.247241 ignition[1549]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:60019->[::1]:53: read: connection refused May 17 01:35:07.416059 kernel: mlx5_core 0001:01:00.1 enP1p1s0f1np1: Link up May 17 01:35:07.419319 systemd-networkd[1546]: enP1p1s0f0np0: Configuring with /usr/lib/systemd/network/zz-default.network. May 17 01:35:07.442696 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): enP1p1s0f1np1: link becomes ready May 17 01:35:08.047682 ignition[1549]: GET https://metadata.packet.net/metadata: attempt #4 May 17 01:35:08.048158 ignition[1549]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:35777->[::1]:53: read: connection refused May 17 01:35:08.362056 kernel: mlx5_core 0001:01:00.0 enP1p1s0f0np0: Link up May 17 01:35:08.364669 systemd-networkd[1546]: eno2: Configuring with /usr/lib/systemd/network/zz-default.network. May 17 01:35:08.416942 systemd-networkd[1546]: eno1: Configuring with /usr/lib/systemd/network/zz-default.network. 
May 17 01:35:08.477828 systemd-networkd[1546]: enP1p1s0f1np1: Link UP May 17 01:35:08.483207 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): enP1p1s0f0np0: link becomes ready May 17 01:35:08.477974 systemd-networkd[1546]: enP1p1s0f1np1: Gained carrier May 17 01:35:08.489164 systemd-networkd[1546]: enP1p1s0f0np0: Link UP May 17 01:35:08.489275 systemd-networkd[1546]: eno2: Link UP May 17 01:35:08.489379 systemd-networkd[1546]: eno1: Link UP May 17 01:35:08.489497 systemd-networkd[1546]: enP1p1s0f0np0: Gained carrier May 17 01:35:08.544095 systemd-networkd[1546]: enP1p1s0f0np0: DHCPv4 address 86.109.9.158/30, gateway 86.109.9.157 acquired from 145.40.76.140 May 17 01:35:09.649633 ignition[1549]: GET https://metadata.packet.net/metadata: attempt #5 May 17 01:35:09.650406 ignition[1549]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:54853->[::1]:53: read: connection refused May 17 01:35:10.152257 systemd-networkd[1546]: enP1p1s0f1np1: Gained IPv6LL May 17 01:35:10.344236 systemd-networkd[1546]: enP1p1s0f0np0: Gained IPv6LL May 17 01:35:12.851453 ignition[1549]: GET https://metadata.packet.net/metadata: attempt #6 May 17 01:35:13.592205 ignition[1549]: GET result: OK May 17 01:35:13.906586 ignition[1549]: Ignition finished successfully May 17 01:35:13.909253 systemd[1]: Finished ignition-kargs.service. May 17 01:35:13.965586 kernel: kauditd_printk_skb: 10 callbacks suppressed May 17 01:35:13.965605 kernel: audit: type=1130 audit(1747445713.912:30): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 01:35:13.912000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 17 01:35:13.922796 ignition[1585]: Ignition 2.14.0 May 17 01:35:13.913831 systemd[1]: Starting ignition-disks.service... May 17 01:35:13.922802 ignition[1585]: Stage: disks May 17 01:35:13.922893 ignition[1585]: reading system config file "/usr/lib/ignition/base.d/base.ign" May 17 01:35:13.922908 ignition[1585]: parsing config with SHA512: 0131bd505bfe1b1215ca4ec9809701a3323bf448114294874f7249d8d300440bd742a7532f60673bfa0746c04de0bd5ca68d0fe9a8ecd59464b13a6401323cb4 May 17 01:35:13.926918 ignition[1585]: no config dir at "/usr/lib/ignition/base.platform.d/packet" May 17 01:35:13.927909 ignition[1585]: disks: disks passed May 17 01:35:13.927914 ignition[1585]: POST message to Packet Timeline May 17 01:35:13.927928 ignition[1585]: GET https://metadata.packet.net/metadata: attempt #1 May 17 01:35:14.652184 ignition[1585]: GET result: OK May 17 01:35:15.433485 ignition[1585]: Ignition finished successfully May 17 01:35:15.435240 systemd[1]: Finished ignition-disks.service. May 17 01:35:15.477696 kernel: audit: type=1130 audit(1747445715.441:31): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 01:35:15.441000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 01:35:15.442140 systemd[1]: Reached target initrd-root-device.target. May 17 01:35:15.482063 systemd[1]: Reached target local-fs-pre.target. May 17 01:35:15.490848 systemd[1]: Reached target local-fs.target. May 17 01:35:15.499664 systemd[1]: Reached target sysinit.target. May 17 01:35:15.508420 systemd[1]: Reached target basic.target. May 17 01:35:15.518274 systemd[1]: Starting systemd-fsck-root.service... 
May 17 01:35:15.534520 systemd-fsck[1602]: ROOT: clean, 619/553520 files, 56022/553472 blocks May 17 01:35:15.536781 systemd[1]: Finished systemd-fsck-root.service. May 17 01:35:15.600073 kernel: audit: type=1130 audit(1747445715.543:32): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 01:35:15.600100 kernel: EXT4-fs (nvme0n1p9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. May 17 01:35:15.543000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 01:35:15.547036 systemd[1]: Mounting sysroot.mount... May 17 01:35:15.606497 systemd[1]: Mounted sysroot.mount. May 17 01:35:15.613830 systemd[1]: Reached target initrd-root-fs.target. May 17 01:35:15.624125 systemd[1]: Mounting sysroot-usr.mount... May 17 01:35:15.633863 systemd[1]: Starting flatcar-metadata-hostname.service... May 17 01:35:15.643126 systemd[1]: Starting flatcar-static-network.service... May 17 01:35:15.651082 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). May 17 01:35:15.651141 systemd[1]: Reached target ignition-diskful.target. May 17 01:35:15.662259 systemd[1]: Mounted sysroot-usr.mount. 
May 17 01:35:15.745941 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/nvme0n1p6 scanned by mount (1617) May 17 01:35:15.745962 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm May 17 01:35:15.745972 kernel: BTRFS info (device nvme0n1p6): using free space tree May 17 01:35:15.745981 kernel: BTRFS info (device nvme0n1p6): has skinny extents May 17 01:35:15.745990 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations May 17 01:35:15.746014 coreos-metadata[1612]: May 17 01:35:15.714 INFO Fetching https://metadata.packet.net/metadata: Attempt #1 May 17 01:35:15.675812 systemd[1]: Mounting sysroot-usr-share-oem.mount... May 17 01:35:15.768725 coreos-metadata[1610]: May 17 01:35:15.714 INFO Fetching https://metadata.packet.net/metadata: Attempt #1 May 17 01:35:15.816568 kernel: audit: type=1130 audit(1747445715.773:33): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 01:35:15.773000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 01:35:15.684391 systemd[1]: Starting initrd-setup-root.service... May 17 01:35:15.753094 systemd[1]: Mounted sysroot-usr-share-oem.mount. May 17 01:35:15.834803 initrd-setup-root[1640]: cut: /sysroot/etc/passwd: No such file or directory May 17 01:35:15.764262 systemd[1]: Finished initrd-setup-root.service. May 17 01:35:15.844744 initrd-setup-root[1648]: cut: /sysroot/etc/group: No such file or directory May 17 01:35:15.774549 systemd[1]: Starting ignition-mount.service... May 17 01:35:15.892827 kernel: audit: type=1130 audit(1747445715.854:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 17 01:35:15.854000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 01:35:15.892868 initrd-setup-root[1656]: cut: /sysroot/etc/shadow: No such file or directory May 17 01:35:15.897802 ignition[1695]: INFO : Ignition 2.14.0 May 17 01:35:15.897802 ignition[1695]: INFO : Stage: mount May 17 01:35:15.897802 ignition[1695]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" May 17 01:35:15.897802 ignition[1695]: DEBUG : parsing config with SHA512: 0131bd505bfe1b1215ca4ec9809701a3323bf448114294874f7249d8d300440bd742a7532f60673bfa0746c04de0bd5ca68d0fe9a8ecd59464b13a6401323cb4 May 17 01:35:15.897802 ignition[1695]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet" May 17 01:35:15.897802 ignition[1695]: INFO : mount: mount passed May 17 01:35:15.897802 ignition[1695]: INFO : POST message to Packet Timeline May 17 01:35:15.897802 ignition[1695]: INFO : GET https://metadata.packet.net/metadata: attempt #1 May 17 01:35:15.822526 systemd[1]: Starting sysroot-boot.service... May 17 01:35:15.956111 initrd-setup-root[1664]: cut: /sysroot/etc/gshadow: No such file or directory May 17 01:35:15.831348 systemd[1]: sysusr-usr-share-oem.mount: Deactivated successfully. May 17 01:35:15.831508 systemd[1]: sysroot-usr-share-oem.mount: Deactivated successfully. May 17 01:35:15.847665 systemd[1]: Finished sysroot-boot.service. May 17 01:35:16.471006 coreos-metadata[1610]: May 17 01:35:16.470 INFO Fetch successful May 17 01:35:16.488308 coreos-metadata[1612]: May 17 01:35:16.488 INFO Fetch successful May 17 01:35:16.518836 coreos-metadata[1610]: May 17 01:35:16.518 INFO wrote hostname ci-3510.3.7-n-8226148b53 to /sysroot/etc/hostname May 17 01:35:16.519621 systemd[1]: Finished flatcar-metadata-hostname.service. 
May 17 01:35:16.578019 kernel: audit: type=1130 audit(1747445716.533:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 01:35:16.533000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 01:35:16.578065 ignition[1695]: INFO : GET result: OK May 17 01:35:16.658264 kernel: audit: type=1130 audit(1747445716.582:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-static-network comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 01:35:16.658276 kernel: audit: type=1131 audit(1747445716.582:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-static-network comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 01:35:16.582000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-static-network comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 01:35:16.582000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-static-network comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 01:35:16.574728 systemd[1]: flatcar-static-network.service: Deactivated successfully. May 17 01:35:16.574797 systemd[1]: Finished flatcar-static-network.service. May 17 01:35:16.898948 ignition[1695]: INFO : Ignition finished successfully May 17 01:35:16.900588 systemd[1]: Finished ignition-mount.service. 
May 17 01:35:16.949154 kernel: audit: type=1130 audit(1747445716.909:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 01:35:16.909000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 01:35:16.910480 systemd[1]: Starting ignition-files.service... May 17 01:35:16.958498 systemd[1]: Mounting sysroot-usr-share-oem.mount... May 17 01:35:17.028560 kernel: BTRFS: device label OEM devid 1 transid 17 /dev/nvme0n1p6 scanned by mount (1711) May 17 01:35:17.028577 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm May 17 01:35:17.028586 kernel: BTRFS info (device nvme0n1p6): using free space tree May 17 01:35:17.028596 kernel: BTRFS info (device nvme0n1p6): has skinny extents May 17 01:35:17.028604 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations May 17 01:35:16.969888 systemd[1]: Mounted sysroot-usr-share-oem.mount. 
May 17 01:35:17.044772 ignition[1730]: INFO : Ignition 2.14.0
May 17 01:35:17.044772 ignition[1730]: INFO : Stage: files
May 17 01:35:17.054754 ignition[1730]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
May 17 01:35:17.054754 ignition[1730]: DEBUG : parsing config with SHA512: 0131bd505bfe1b1215ca4ec9809701a3323bf448114294874f7249d8d300440bd742a7532f60673bfa0746c04de0bd5ca68d0fe9a8ecd59464b13a6401323cb4
May 17 01:35:17.054754 ignition[1730]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet"
May 17 01:35:17.054754 ignition[1730]: DEBUG : files: compiled without relabeling support, skipping
May 17 01:35:17.054754 ignition[1730]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
May 17 01:35:17.054754 ignition[1730]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
May 17 01:35:17.054754 ignition[1730]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
May 17 01:35:17.054754 ignition[1730]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
May 17 01:35:17.054754 ignition[1730]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
May 17 01:35:17.054754 ignition[1730]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1"
May 17 01:35:17.054754 ignition[1730]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1"
May 17 01:35:17.054754 ignition[1730]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
May 17 01:35:17.054754 ignition[1730]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
May 17 01:35:17.051713 unknown[1730]: wrote ssh authorized keys file for user: core
May 17 01:35:17.183905 ignition[1730]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
May 17 01:35:17.424127 ignition[1730]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
May 17 01:35:17.435496 ignition[1730]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
May 17 01:35:17.435496 ignition[1730]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
May 17 01:35:17.435496 ignition[1730]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
May 17 01:35:17.435496 ignition[1730]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
May 17 01:35:17.435496 ignition[1730]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
May 17 01:35:17.435496 ignition[1730]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
May 17 01:35:17.435496 ignition[1730]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 17 01:35:17.435496 ignition[1730]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 17 01:35:17.435496 ignition[1730]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
May 17 01:35:17.435496 ignition[1730]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
May 17 01:35:17.435496 ignition[1730]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw"
May 17 01:35:17.435496 ignition[1730]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw"
May 17 01:35:17.435496 ignition[1730]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/etc/systemd/system/packet-phone-home.service"
May 17 01:35:17.435496 ignition[1730]: INFO : files: createFilesystemsFiles: createFiles: op(b): oem config not found in "/usr/share/oem", looking on oem partition
May 17 01:35:17.435496 ignition[1730]: INFO : files: createFilesystemsFiles: createFiles: op(b): op(c): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3227612279"
May 17 01:35:17.630083 ignition[1730]: CRITICAL : files: createFilesystemsFiles: createFiles: op(b): op(c): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3227612279": device or resource busy
May 17 01:35:17.630083 ignition[1730]: ERROR : files: createFilesystemsFiles: createFiles: op(b): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem3227612279", trying btrfs: device or resource busy
May 17 01:35:17.630083 ignition[1730]: INFO : files: createFilesystemsFiles: createFiles: op(b): op(d): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3227612279"
May 17 01:35:17.630083 ignition[1730]: INFO : files: createFilesystemsFiles: createFiles: op(b): op(d): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3227612279"
May 17 01:35:17.630083 ignition[1730]: INFO : files: createFilesystemsFiles: createFiles: op(b): op(e): [started] unmounting "/mnt/oem3227612279"
May 17 01:35:17.630083 ignition[1730]: INFO : files: createFilesystemsFiles: createFiles: op(b): op(e): [finished] unmounting "/mnt/oem3227612279"
May 17 01:35:17.630083 ignition[1730]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/etc/systemd/system/packet-phone-home.service"
May 17 01:35:17.630083 ignition[1730]: INFO : files: createFilesystemsFiles: createFiles: op(f): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw"
May 17 01:35:17.630083 ignition[1730]: INFO : files: createFilesystemsFiles: createFiles: op(f): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-arm64.raw: attempt #1
May 17 01:35:18.200206 ignition[1730]: INFO : files: createFilesystemsFiles: createFiles: op(f): GET result: OK
May 17 01:35:18.635754 ignition[1730]: INFO : files: createFilesystemsFiles: createFiles: op(f): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw"
May 17 01:35:18.635754 ignition[1730]: INFO : files: op(10): [started] processing unit "coreos-metadata-sshkeys@.service"
May 17 01:35:18.635754 ignition[1730]: INFO : files: op(10): [finished] processing unit "coreos-metadata-sshkeys@.service"
May 17 01:35:18.635754 ignition[1730]: INFO : files: op(11): [started] processing unit "packet-phone-home.service"
May 17 01:35:18.635754 ignition[1730]: INFO : files: op(11): [finished] processing unit "packet-phone-home.service"
May 17 01:35:18.698435 ignition[1730]: INFO : files: op(12): [started] processing unit "containerd.service"
May 17 01:35:18.698435 ignition[1730]: INFO : files: op(12): op(13): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
May 17 01:35:18.698435 ignition[1730]: INFO : files: op(12): op(13): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
May 17 01:35:18.698435 ignition[1730]: INFO : files: op(12): [finished] processing unit "containerd.service"
May 17 01:35:18.698435 ignition[1730]: INFO : files: op(14): [started] processing unit "prepare-helm.service"
May 17 01:35:18.698435 ignition[1730]: INFO : files: op(14): op(15): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 17 01:35:18.698435 ignition[1730]: INFO : files: op(14): op(15): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 17 01:35:18.698435 ignition[1730]: INFO : files: op(14): [finished] processing unit "prepare-helm.service"
May 17 01:35:18.698435 ignition[1730]: INFO : files: op(16): [started] setting preset to enabled for "coreos-metadata-sshkeys@.service "
May 17 01:35:18.698435 ignition[1730]: INFO : files: op(16): [finished] setting preset to enabled for "coreos-metadata-sshkeys@.service "
May 17 01:35:18.698435 ignition[1730]: INFO : files: op(17): [started] setting preset to enabled for "packet-phone-home.service"
May 17 01:35:18.698435 ignition[1730]: INFO : files: op(17): [finished] setting preset to enabled for "packet-phone-home.service"
May 17 01:35:18.698435 ignition[1730]: INFO : files: op(18): [started] setting preset to enabled for "prepare-helm.service"
May 17 01:35:18.698435 ignition[1730]: INFO : files: op(18): [finished] setting preset to enabled for "prepare-helm.service"
May 17 01:35:18.698435 ignition[1730]: INFO : files: createResultFile: createFiles: op(19): [started] writing file "/sysroot/etc/.ignition-result.json"
May 17 01:35:18.698435 ignition[1730]: INFO : files: createResultFile: createFiles: op(19): [finished] writing file "/sysroot/etc/.ignition-result.json"
May 17 01:35:18.698435 ignition[1730]: INFO : files: files passed
May 17 01:35:18.698435 ignition[1730]: INFO : POST message to Packet Timeline
May 17 01:35:18.698435 ignition[1730]: INFO : GET https://metadata.packet.net/metadata: attempt #1
May 17 01:35:19.355456 ignition[1730]: INFO : GET result: OK
May 17 01:35:19.757421 ignition[1730]: INFO : Ignition finished successfully
May 17 01:35:19.759894 systemd[1]: Finished ignition-files.service.
May 17 01:35:19.807338 kernel: audit: type=1130 audit(1747445719.763:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 01:35:19.763000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 01:35:19.766811 systemd[1]: Starting initrd-setup-root-after-ignition.service...
May 17 01:35:19.813242 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile).
May 17 01:35:19.836130 initrd-setup-root-after-ignition[1766]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
May 17 01:35:19.890736 kernel: audit: type=1130 audit(1747445719.841:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 01:35:19.841000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 01:35:19.814741 systemd[1]: Starting ignition-quench.service...
May 17 01:35:19.973411 kernel: audit: type=1130 audit(1747445719.896:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 01:35:19.973424 kernel: audit: type=1131 audit(1747445719.896:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 01:35:19.896000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 01:35:19.896000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 01:35:19.825539 systemd[1]: Finished initrd-setup-root-after-ignition.service.
May 17 01:35:19.842119 systemd[1]: ignition-quench.service: Deactivated successfully.
May 17 01:35:19.842183 systemd[1]: Finished ignition-quench.service.
May 17 01:35:19.896370 systemd[1]: Reached target ignition-complete.target.
May 17 01:35:19.979989 systemd[1]: Starting initrd-parse-etc.service...
May 17 01:35:20.086839 kernel: audit: type=1130 audit(1747445720.011:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 01:35:20.086856 kernel: audit: type=1131 audit(1747445720.011:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 01:35:20.011000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 01:35:20.011000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 01:35:20.003232 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
May 17 01:35:20.003447 systemd[1]: Finished initrd-parse-etc.service.
May 17 01:35:20.011546 systemd[1]: Reached target initrd-fs.target.
May 17 01:35:20.092120 systemd[1]: Reached target initrd.target.
May 17 01:35:20.102772 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met.
May 17 01:35:20.174940 kernel: audit: type=1130 audit(1747445720.129:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 01:35:20.129000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 01:35:20.104349 systemd[1]: Starting dracut-pre-pivot.service...
May 17 01:35:20.124289 systemd[1]: Finished dracut-pre-pivot.service.
May 17 01:35:20.131559 systemd[1]: Starting initrd-cleanup.service...
May 17 01:35:20.186240 systemd[1]: Stopped target nss-lookup.target.
May 17 01:35:20.194471 systemd[1]: Stopped target remote-cryptsetup.target.
May 17 01:35:20.203745 systemd[1]: Stopped target timers.target.
May 17 01:35:20.259895 kernel: audit: type=1131 audit(1747445720.221:46): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 01:35:20.221000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 01:35:20.212732 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
May 17 01:35:20.212823 systemd[1]: Stopped dracut-pre-pivot.service.
May 17 01:35:20.221989 systemd[1]: Stopped target initrd.target.
May 17 01:35:20.264590 systemd[1]: Stopped target basic.target.
May 17 01:35:20.274002 systemd[1]: Stopped target ignition-complete.target.
May 17 01:35:20.283230 systemd[1]: Stopped target ignition-diskful.target.
May 17 01:35:20.292184 systemd[1]: Stopped target initrd-root-device.target.
May 17 01:35:20.300909 systemd[1]: Stopped target remote-fs.target.
May 17 01:35:20.309616 systemd[1]: Stopped target remote-fs-pre.target.
May 17 01:35:20.318234 systemd[1]: Stopped target sysinit.target.
May 17 01:35:20.326849 systemd[1]: Stopped target local-fs.target.
May 17 01:35:20.335692 systemd[1]: Stopped target local-fs-pre.target.
May 17 01:35:20.398815 kernel: audit: type=1131 audit(1747445720.361:47): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 01:35:20.361000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 01:35:20.344370 systemd[1]: Stopped target swap.target.
May 17 01:35:20.352710 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
May 17 01:35:20.455116 kernel: audit: type=1131 audit(1747445720.412:48): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 01:35:20.412000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 01:35:20.352795 systemd[1]: Stopped dracut-pre-mount.service.
May 17 01:35:20.459000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 01:35:20.361285 systemd[1]: Stopped target cryptsetup.target.
May 17 01:35:20.403399 systemd[1]: dracut-initqueue.service: Deactivated successfully.
May 17 01:35:20.403478 systemd[1]: Stopped dracut-initqueue.service.
May 17 01:35:20.412509 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
May 17 01:35:20.412588 systemd[1]: Stopped ignition-fetch-offline.service.
May 17 01:35:20.459675 systemd[1]: Stopped target paths.target.
May 17 01:35:20.513000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 01:35:20.468522 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
May 17 01:35:20.522000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 01:35:20.470063 systemd[1]: Stopped systemd-ask-password-console.path.
May 17 01:35:20.532000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 01:35:20.477442 systemd[1]: Stopped target slices.target.
May 17 01:35:20.546858 ignition[1781]: INFO : Ignition 2.14.0
May 17 01:35:20.546858 ignition[1781]: INFO : Stage: umount
May 17 01:35:20.546858 ignition[1781]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
May 17 01:35:20.546858 ignition[1781]: DEBUG : parsing config with SHA512: 0131bd505bfe1b1215ca4ec9809701a3323bf448114294874f7249d8d300440bd742a7532f60673bfa0746c04de0bd5ca68d0fe9a8ecd59464b13a6401323cb4
May 17 01:35:20.546858 ignition[1781]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet"
May 17 01:35:20.570000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 01:35:20.584000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 01:35:20.598000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 01:35:20.486453 systemd[1]: Stopped target sockets.target.
May 17 01:35:20.617192 ignition[1781]: INFO : umount: umount passed
May 17 01:35:20.617192 ignition[1781]: INFO : POST message to Packet Timeline
May 17 01:35:20.617192 ignition[1781]: INFO : GET https://metadata.packet.net/metadata: attempt #1
May 17 01:35:20.621000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 01:35:20.621000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 01:35:20.631000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 01:35:20.495443 systemd[1]: iscsid.socket: Deactivated successfully.
May 17 01:35:20.495526 systemd[1]: Closed iscsid.socket.
May 17 01:35:20.504600 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
May 17 01:35:20.504679 systemd[1]: Stopped initrd-setup-root-after-ignition.service.
May 17 01:35:20.513808 systemd[1]: ignition-files.service: Deactivated successfully.
May 17 01:35:20.513882 systemd[1]: Stopped ignition-files.service.
May 17 01:35:20.522962 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
May 17 01:35:20.523038 systemd[1]: Stopped flatcar-metadata-hostname.service.
May 17 01:35:20.534174 systemd[1]: Stopping ignition-mount.service...
May 17 01:35:20.542414 systemd[1]: Stopping iscsiuio.service...
May 17 01:35:20.553685 systemd[1]: Stopping sysroot-boot.service...
May 17 01:35:20.561592 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
May 17 01:35:20.561737 systemd[1]: Stopped systemd-udev-trigger.service.
May 17 01:35:20.571029 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
May 17 01:35:20.571113 systemd[1]: Stopped dracut-pre-trigger.service.
May 17 01:35:20.587632 systemd[1]: sysroot-boot.mount: Deactivated successfully.
May 17 01:35:20.588288 systemd[1]: iscsiuio.service: Deactivated successfully.
May 17 01:35:20.588427 systemd[1]: Stopped iscsiuio.service.
May 17 01:35:20.598811 systemd[1]: iscsiuio.socket: Deactivated successfully.
May 17 01:35:20.598891 systemd[1]: Closed iscsiuio.socket.
May 17 01:35:20.612670 systemd[1]: initrd-cleanup.service: Deactivated successfully.
May 17 01:35:20.612736 systemd[1]: Finished initrd-cleanup.service.
May 17 01:35:20.621993 systemd[1]: sysroot-boot.service: Deactivated successfully.
May 17 01:35:20.622060 systemd[1]: Stopped sysroot-boot.service.
May 17 01:35:21.328483 ignition[1781]: INFO : GET result: OK
May 17 01:35:21.835783 ignition[1781]: INFO : Ignition finished successfully
May 17 01:35:21.837662 systemd[1]: ignition-mount.service: Deactivated successfully.
May 17 01:35:21.844000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 01:35:21.837763 systemd[1]: Stopped ignition-mount.service.
May 17 01:35:21.844368 systemd[1]: Stopped target network.target.
May 17 01:35:21.860000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 01:35:21.852269 systemd[1]: ignition-disks.service: Deactivated successfully.
May 17 01:35:21.868000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 01:35:21.852310 systemd[1]: Stopped ignition-disks.service.
May 17 01:35:21.876000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 01:35:21.860259 systemd[1]: ignition-kargs.service: Deactivated successfully.
May 17 01:35:21.884000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 01:35:21.860287 systemd[1]: Stopped ignition-kargs.service.
May 17 01:35:21.868255 systemd[1]: ignition-setup.service: Deactivated successfully.
May 17 01:35:21.868302 systemd[1]: Stopped ignition-setup.service.
May 17 01:35:21.908000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 01:35:21.912000 audit: BPF prog-id=6 op=UNLOAD
May 17 01:35:21.876458 systemd[1]: initrd-setup-root.service: Deactivated successfully.
May 17 01:35:21.916000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 01:35:21.876487 systemd[1]: Stopped initrd-setup-root.service.
May 17 01:35:21.884561 systemd[1]: Stopping systemd-networkd.service...
May 17 01:35:21.890065 systemd-networkd[1546]: enP1p1s0f0np0: DHCPv6 lease lost
May 17 01:35:21.942000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 01:35:21.892680 systemd[1]: Stopping systemd-resolved.service...
May 17 01:35:21.951000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 01:35:21.899177 systemd-networkd[1546]: enP1p1s0f1np1: DHCPv6 lease lost
May 17 01:35:21.955000 audit: BPF prog-id=9 op=UNLOAD
May 17 01:35:21.959000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 01:35:21.901472 systemd[1]: systemd-resolved.service: Deactivated successfully.
May 17 01:35:21.901547 systemd[1]: Stopped systemd-resolved.service.
May 17 01:35:21.978000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 01:35:21.909518 systemd[1]: systemd-networkd.service: Deactivated successfully.
May 17 01:35:21.909593 systemd[1]: Stopped systemd-networkd.service.
May 17 01:35:21.917371 systemd[1]: systemd-networkd.socket: Deactivated successfully.
May 17 01:35:22.006000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 01:35:21.917396 systemd[1]: Closed systemd-networkd.socket.
May 17 01:35:22.015000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 01:35:21.926361 systemd[1]: Stopping network-cleanup.service...
May 17 01:35:22.024000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 01:35:21.933931 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
May 17 01:35:21.934015 systemd[1]: Stopped parse-ip-for-networkd.service.
May 17 01:35:22.043000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 01:35:21.942789 systemd[1]: systemd-sysctl.service: Deactivated successfully.
May 17 01:35:22.053000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 01:35:21.942875 systemd[1]: Stopped systemd-sysctl.service.
May 17 01:35:22.062000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 01:35:21.951533 systemd[1]: systemd-modules-load.service: Deactivated successfully.
May 17 01:35:22.072000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 01:35:22.072000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 01:35:21.951564 systemd[1]: Stopped systemd-modules-load.service.
May 17 01:35:21.960306 systemd[1]: Stopping systemd-udevd.service...
May 17 01:35:21.970958 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
May 17 01:35:21.971690 systemd[1]: systemd-udevd.service: Deactivated successfully.
May 17 01:35:21.971792 systemd[1]: Stopped systemd-udevd.service.
May 17 01:35:21.981101 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
May 17 01:35:21.981240 systemd[1]: Closed systemd-udevd-control.socket.
May 17 01:35:21.987812 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
May 17 01:35:21.987847 systemd[1]: Closed systemd-udevd-kernel.socket.
May 17 01:35:21.997149 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
May 17 01:35:21.997179 systemd[1]: Stopped dracut-pre-udev.service.
May 17 01:35:22.006419 systemd[1]: dracut-cmdline.service: Deactivated successfully.
May 17 01:35:22.006450 systemd[1]: Stopped dracut-cmdline.service.
May 17 01:35:22.015829 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
May 17 01:35:22.015859 systemd[1]: Stopped dracut-cmdline-ask.service.
May 17 01:35:22.026403 systemd[1]: Starting initrd-udevadm-cleanup-db.service...
May 17 01:35:22.034537 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
May 17 01:35:22.034621 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service.
May 17 01:35:22.044332 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
May 17 01:35:22.044438 systemd[1]: Stopped kmod-static-nodes.service.
May 17 01:35:22.053257 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 17 01:35:22.053302 systemd[1]: Stopped systemd-vconsole-setup.service.
May 17 01:35:22.064344 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
May 17 01:35:22.064958 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
May 17 01:35:22.065023 systemd[1]: Finished initrd-udevadm-cleanup-db.service.
May 17 01:35:22.717601 systemd[1]: network-cleanup.service: Deactivated successfully.
May 17 01:35:22.717687 systemd[1]: Stopped network-cleanup.service.
May 17 01:35:22.726000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 01:35:22.726875 systemd[1]: Reached target initrd-switch-root.target.
May 17 01:35:22.737025 systemd[1]: Starting initrd-switch-root.service...
May 17 01:35:22.747955 systemd[1]: Switching root.
May 17 01:35:22.751000 audit: BPF prog-id=8 op=UNLOAD
May 17 01:35:22.751000 audit: BPF prog-id=7 op=UNLOAD
May 17 01:35:22.752000 audit: BPF prog-id=5 op=UNLOAD
May 17 01:35:22.752000 audit: BPF prog-id=4 op=UNLOAD
May 17 01:35:22.752000 audit: BPF prog-id=3 op=UNLOAD
May 17 01:35:22.781528 iscsid[1564]: iscsid shutting down.
May 17 01:35:22.781611 systemd-journald[874]: Journal stopped
May 17 01:35:25.836820 systemd-journald[874]: Received SIGTERM from PID 1 (systemd).
May 17 01:35:25.836844 kernel: SELinux: Class mctp_socket not defined in policy.
May 17 01:35:25.836855 kernel: SELinux: Class anon_inode not defined in policy.
May 17 01:35:25.836864 kernel: SELinux: the above unknown classes and permissions will be allowed
May 17 01:35:25.836872 kernel: SELinux: policy capability network_peer_controls=1
May 17 01:35:25.836880 kernel: SELinux: policy capability open_perms=1
May 17 01:35:25.836888 kernel: SELinux: policy capability extended_socket_class=1
May 17 01:35:25.836898 kernel: SELinux: policy capability always_check_network=0
May 17 01:35:25.836906 kernel: SELinux: policy capability cgroup_seclabel=1
May 17 01:35:25.836914 kernel: SELinux: policy capability nnp_nosuid_transition=1
May 17 01:35:25.836922 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
May 17 01:35:25.836930 kernel: SELinux: policy capability ioctl_skip_cloexec=0
May 17 01:35:25.836938 systemd[1]: Successfully loaded SELinux policy in 145.406ms.
May 17 01:35:25.836948 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 7.304ms.
May 17 01:35:25.836960 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
May 17 01:35:25.836970 systemd[1]: Detected architecture arm64.
May 17 01:35:25.836979 systemd[1]: Detected first boot.
May 17 01:35:25.836988 systemd[1]: Hostname set to .
May 17 01:35:25.836997 systemd[1]: Initializing machine ID from random generator.
May 17 01:35:25.837007 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped).
May 17 01:35:25.837016 systemd[1]: Populated /etc with preset unit settings.
May 17 01:35:25.837025 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
May 17 01:35:25.837035 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
May 17 01:35:25.837047 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 17 01:35:25.837057 systemd[1]: Queued start job for default target multi-user.target.
May 17 01:35:25.837066 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device.
May 17 01:35:25.837077 systemd[1]: Created slice system-addon\x2dconfig.slice.
May 17 01:35:25.837087 systemd[1]: Created slice system-addon\x2drun.slice.
May 17 01:35:25.837097 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice.
May 17 01:35:25.837106 systemd[1]: Created slice system-getty.slice.
May 17 01:35:25.837115 systemd[1]: Created slice system-modprobe.slice.
May 17 01:35:25.837125 systemd[1]: Created slice system-serial\x2dgetty.slice.
May 17 01:35:25.837134 systemd[1]: Created slice system-system\x2dcloudinit.slice.
May 17 01:35:25.837144 systemd[1]: Created slice system-systemd\x2dfsck.slice.
May 17 01:35:25.837154 systemd[1]: Created slice user.slice.
May 17 01:35:25.837163 systemd[1]: Started systemd-ask-password-console.path.
May 17 01:35:25.837174 systemd[1]: Started systemd-ask-password-wall.path.
May 17 01:35:25.837184 systemd[1]: Set up automount boot.automount.
May 17 01:35:25.837193 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount.
May 17 01:35:25.837202 systemd[1]: Reached target integritysetup.target.
May 17 01:35:25.837213 systemd[1]: Reached target remote-cryptsetup.target.
May 17 01:35:25.837222 systemd[1]: Reached target remote-fs.target.
May 17 01:35:25.837231 systemd[1]: Reached target slices.target.
May 17 01:35:25.837242 systemd[1]: Reached target swap.target.
May 17 01:35:25.837252 systemd[1]: Reached target torcx.target.
May 17 01:35:25.837261 systemd[1]: Reached target veritysetup.target. May 17 01:35:25.837270 systemd[1]: Listening on systemd-coredump.socket. May 17 01:35:25.837279 systemd[1]: Listening on systemd-initctl.socket. May 17 01:35:25.837288 kernel: kauditd_printk_skb: 49 callbacks suppressed May 17 01:35:25.837297 kernel: audit: type=1400 audit(1747445725.313:91): avc: denied { audit_read } for pid=1 comm="systemd" capability=37 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 May 17 01:35:25.837308 systemd[1]: Listening on systemd-journald-audit.socket. May 17 01:35:25.837317 kernel: audit: type=1335 audit(1747445725.313:92): pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1 May 17 01:35:25.837326 systemd[1]: Listening on systemd-journald-dev-log.socket. May 17 01:35:25.837336 systemd[1]: Listening on systemd-journald.socket. May 17 01:35:25.837345 systemd[1]: Listening on systemd-networkd.socket. May 17 01:35:25.837354 systemd[1]: Listening on systemd-udevd-control.socket. May 17 01:35:25.837365 systemd[1]: Listening on systemd-udevd-kernel.socket. May 17 01:35:25.837375 systemd[1]: Listening on systemd-userdbd.socket. May 17 01:35:25.837384 systemd[1]: Mounting dev-hugepages.mount... May 17 01:35:25.837394 systemd[1]: Mounting dev-mqueue.mount... May 17 01:35:25.837403 systemd[1]: Mounting media.mount... May 17 01:35:25.837413 systemd[1]: Mounting sys-kernel-debug.mount... May 17 01:35:25.837422 systemd[1]: Mounting sys-kernel-tracing.mount... May 17 01:35:25.837433 systemd[1]: Mounting tmp.mount... May 17 01:35:25.837442 systemd[1]: Starting flatcar-tmpfiles.service... May 17 01:35:25.837451 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. May 17 01:35:25.837461 systemd[1]: Starting kmod-static-nodes.service... 
May 17 01:35:25.837470 systemd[1]: Starting modprobe@configfs.service... May 17 01:35:25.837479 systemd[1]: Starting modprobe@dm_mod.service... May 17 01:35:25.837488 systemd[1]: Starting modprobe@drm.service... May 17 01:35:25.837498 systemd[1]: Starting modprobe@efi_pstore.service... May 17 01:35:25.837507 systemd[1]: Starting modprobe@fuse.service... May 17 01:35:25.837517 kernel: fuse: init (API version 7.34) May 17 01:35:25.837526 systemd[1]: Starting modprobe@loop.service... May 17 01:35:25.837536 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). May 17 01:35:25.837545 kernel: loop: module loaded May 17 01:35:25.837554 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. May 17 01:35:25.837564 systemd[1]: (This warning is only shown for the first unit using IP firewalling.) May 17 01:35:25.837575 systemd[1]: Starting systemd-journald.service... May 17 01:35:25.837584 systemd[1]: Starting systemd-modules-load.service... May 17 01:35:25.837595 systemd[1]: Starting systemd-network-generator.service... May 17 01:35:25.837604 kernel: audit: type=1305 audit(1747445725.832:93): op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 May 17 01:35:25.837616 systemd-journald[1997]: Journal started May 17 01:35:25.837653 systemd-journald[1997]: Runtime Journal (/run/log/journal/e0a4cde71e00407abcf3a6e2aab26714) is 8.0M, max 4.0G, 3.9G free. 
May 17 01:35:25.313000 audit[1]: AVC avc: denied { audit_read } for pid=1 comm="systemd" capability=37 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 May 17 01:35:25.313000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1 May 17 01:35:25.832000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 May 17 01:35:25.832000 audit[1997]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=60 a0=6 a1=fffffc225d40 a2=4000 a3=1 items=0 ppid=1 pid=1997 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 01:35:25.915345 kernel: audit: type=1300 audit(1747445725.832:93): arch=c00000b7 syscall=211 success=yes exit=60 a0=6 a1=fffffc225d40 a2=4000 a3=1 items=0 ppid=1 pid=1997 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 01:35:25.915361 kernel: audit: type=1327 audit(1747445725.832:93): proctitle="/usr/lib/systemd/systemd-journald" May 17 01:35:25.832000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" May 17 01:35:25.951060 systemd[1]: Starting systemd-remount-fs.service... May 17 01:35:25.968061 systemd[1]: Starting systemd-udev-trigger.service... May 17 01:35:25.984060 systemd[1]: Started systemd-journald.service. May 17 01:35:25.988000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 17 01:35:25.989211 systemd[1]: Mounted dev-hugepages.mount. May 17 01:35:26.025050 kernel: audit: type=1130 audit(1747445725.988:94): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 01:35:26.029866 systemd[1]: Mounted dev-mqueue.mount. May 17 01:35:26.034771 systemd[1]: Mounted media.mount. May 17 01:35:26.039590 systemd[1]: Mounted sys-kernel-debug.mount. May 17 01:35:26.044363 systemd[1]: Mounted sys-kernel-tracing.mount. May 17 01:35:26.049108 systemd[1]: Mounted tmp.mount. May 17 01:35:26.054080 systemd[1]: Finished flatcar-tmpfiles.service. May 17 01:35:26.058000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 01:35:26.059301 systemd[1]: Finished kmod-static-nodes.service. May 17 01:35:26.095049 kernel: audit: type=1130 audit(1747445726.058:95): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 01:35:26.099000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 01:35:26.099608 systemd[1]: modprobe@configfs.service: Deactivated successfully. May 17 01:35:26.099749 systemd[1]: Finished modprobe@configfs.service. May 17 01:35:26.136049 kernel: audit: type=1130 audit(1747445726.099:96): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 17 01:35:26.139000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 01:35:26.140248 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 17 01:35:26.140384 systemd[1]: Finished modprobe@dm_mod.service. May 17 01:35:26.139000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 01:35:26.177047 kernel: audit: type=1130 audit(1747445726.139:97): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 01:35:26.177060 kernel: audit: type=1131 audit(1747445726.139:98): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 01:35:26.217000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 01:35:26.217000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 01:35:26.218200 systemd[1]: modprobe@drm.service: Deactivated successfully. May 17 01:35:26.218337 systemd[1]: Finished modprobe@drm.service. 
May 17 01:35:26.222000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 01:35:26.223000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 01:35:26.223247 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 17 01:35:26.223378 systemd[1]: Finished modprobe@efi_pstore.service. May 17 01:35:26.228000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 01:35:26.228000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 01:35:26.228326 systemd[1]: modprobe@fuse.service: Deactivated successfully. May 17 01:35:26.228462 systemd[1]: Finished modprobe@fuse.service. May 17 01:35:26.232000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 01:35:26.232000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 01:35:26.233320 systemd[1]: modprobe@loop.service: Deactivated successfully. May 17 01:35:26.233460 systemd[1]: Finished modprobe@loop.service. 
May 17 01:35:26.238000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 01:35:26.238000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 01:35:26.238435 systemd[1]: Finished systemd-modules-load.service. May 17 01:35:26.242000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 01:35:26.243010 systemd[1]: Finished systemd-network-generator.service. May 17 01:35:26.247000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 01:35:26.247627 systemd[1]: Finished systemd-remount-fs.service. May 17 01:35:26.251000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 01:35:26.252218 systemd[1]: Finished systemd-udev-trigger.service. May 17 01:35:26.256000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 01:35:26.256721 systemd[1]: Reached target network-pre.target. May 17 01:35:26.262074 systemd[1]: Mounting sys-fs-fuse-connections.mount... May 17 01:35:26.267636 systemd[1]: Mounting sys-kernel-config.mount... 
May 17 01:35:26.272027 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). May 17 01:35:26.274072 systemd[1]: Starting systemd-hwdb-update.service... May 17 01:35:26.279392 systemd[1]: Starting systemd-journal-flush.service... May 17 01:35:26.283314 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 17 01:35:26.284503 systemd[1]: Starting systemd-random-seed.service... May 17 01:35:26.285556 systemd-journald[1997]: Time spent on flushing to /var/log/journal/e0a4cde71e00407abcf3a6e2aab26714 is 23.953ms for 2501 entries. May 17 01:35:26.285556 systemd-journald[1997]: System Journal (/var/log/journal/e0a4cde71e00407abcf3a6e2aab26714) is 8.0M, max 195.6M, 187.6M free. May 17 01:35:26.314111 systemd-journald[1997]: Received client request to flush runtime journal. May 17 01:35:26.300329 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. May 17 01:35:26.301497 systemd[1]: Starting systemd-sysctl.service... May 17 01:35:26.306719 systemd[1]: Starting systemd-sysusers.service... May 17 01:35:26.311920 systemd[1]: Starting systemd-udev-settle.service... May 17 01:35:26.317891 systemd[1]: Mounted sys-fs-fuse-connections.mount. May 17 01:35:26.322063 systemd[1]: Mounted sys-kernel-config.mount. May 17 01:35:26.326323 systemd[1]: Finished systemd-journal-flush.service. May 17 01:35:26.330000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 01:35:26.330520 systemd[1]: Finished systemd-random-seed.service. May 17 01:35:26.334000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' May 17 01:35:26.334645 systemd[1]: Finished systemd-sysctl.service. May 17 01:35:26.338000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 01:35:26.338483 systemd[1]: Finished systemd-sysusers.service. May 17 01:35:26.341000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 01:35:26.343634 systemd[1]: Reached target first-boot-complete.target. May 17 01:35:26.348741 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... May 17 01:35:26.353800 udevadm[2022]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. May 17 01:35:26.365460 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. May 17 01:35:26.369000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 01:35:26.611056 systemd[1]: Finished systemd-hwdb-update.service. May 17 01:35:26.614000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 01:35:26.615742 systemd[1]: Starting systemd-udevd.service... May 17 01:35:26.634676 systemd-udevd[2031]: Using default interface naming scheme 'v252'. May 17 01:35:26.647531 systemd[1]: Started systemd-udevd.service. 
May 17 01:35:26.650000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 01:35:26.652980 systemd[1]: Starting systemd-networkd.service... May 17 01:35:26.658880 systemd[1]: Starting systemd-userdbd.service... May 17 01:35:26.665114 systemd[1]: Found device dev-ttyAMA0.device. May 17 01:35:26.691652 systemd[1]: Started systemd-userdbd.service. May 17 01:35:26.693056 kernel: IPMI message handler: version 39.2 May 17 01:35:26.695000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 01:35:26.704051 kernel: ipmi device interface May 17 01:35:26.697000 audit[2083]: AVC avc: denied { confidentiality } for pid=2083 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 May 17 01:35:26.704735 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. 
May 17 01:35:26.697000 audit[2083]: SYSCALL arch=c00000b7 syscall=105 success=yes exit=0 a0=ffff7d403010 a1=e4424 a2=ffff7f3824b0 a3=aaaae0ad8010 items=312 ppid=2031 pid=2083 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 01:35:26.697000 audit: CWD cwd="/" May 17 01:35:26.697000 audit: PATH item=0 name=(null) inode=40 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 01:35:26.697000 audit: PATH item=1 name=(null) inode=24613 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 01:35:26.697000 audit: PATH item=2 name=(null) inode=24613 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 01:35:26.697000 audit: PATH item=3 name=(null) inode=24614 dev=00:0a mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 01:35:26.697000 audit: PATH item=4 name=(null) inode=24613 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 01:35:26.697000 audit: PATH item=5 name=(null) inode=24615 dev=00:0a mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 01:35:26.697000 audit: PATH item=6 name=(null) inode=24613 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 01:35:26.697000 audit: PATH item=7 name=(null) inode=24616 dev=00:0a 
mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 01:35:26.697000 audit: PATH item=8 name=(null) inode=24616 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 01:35:26.697000 audit: PATH item=9 name=(null) inode=24617 dev=00:0a mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 01:35:26.697000 audit: PATH item=10 name=(null) inode=24616 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 01:35:26.697000 audit: PATH item=11 name=(null) inode=24618 dev=00:0a mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 01:35:26.697000 audit: PATH item=12 name=(null) inode=24616 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 01:35:26.697000 audit: PATH item=13 name=(null) inode=24619 dev=00:0a mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 01:35:26.697000 audit: PATH item=14 name=(null) inode=24616 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 01:35:26.697000 audit: PATH item=15 name=(null) inode=24620 dev=00:0a mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 01:35:26.697000 audit: PATH item=16 name=(null) inode=24616 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 01:35:26.697000 audit: PATH item=17 name=(null) inode=24621 dev=00:0a mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 01:35:26.697000 audit: PATH item=18 name=(null) inode=24613 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 01:35:26.697000 audit: PATH item=19 name=(null) inode=24622 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 01:35:26.697000 audit: PATH item=20 name=(null) inode=24622 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 01:35:26.697000 audit: PATH item=21 name=(null) inode=24623 dev=00:0a mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 01:35:26.697000 audit: PATH item=22 name=(null) inode=24622 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 01:35:26.697000 audit: PATH item=23 name=(null) inode=24624 dev=00:0a mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 01:35:26.697000 audit: PATH item=24 name=(null) inode=24622 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 01:35:26.697000 audit: PATH item=25 name=(null) inode=24625 dev=00:0a mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 
nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 01:35:26.697000 audit: PATH item=26 name=(null) inode=24622 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 01:35:26.697000 audit: PATH item=27 name=(null) inode=24626 dev=00:0a mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 01:35:26.697000 audit: PATH item=28 name=(null) inode=24622 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 01:35:26.697000 audit: PATH item=29 name=(null) inode=24627 dev=00:0a mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 01:35:26.697000 audit: PATH item=30 name=(null) inode=24613 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 01:35:26.697000 audit: PATH item=31 name=(null) inode=24628 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 01:35:26.697000 audit: PATH item=32 name=(null) inode=24628 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 01:35:26.697000 audit: PATH item=33 name=(null) inode=24629 dev=00:0a mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 01:35:26.697000 audit: PATH item=34 name=(null) inode=24628 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 
cap_fver=0 cap_frootid=0 May 17 01:35:26.697000 audit: PATH item=35 name=(null) inode=24630 dev=00:0a mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 01:35:26.697000 audit: PATH item=36 name=(null) inode=24628 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 01:35:26.697000 audit: PATH item=37 name=(null) inode=24631 dev=00:0a mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 01:35:26.697000 audit: PATH item=38 name=(null) inode=24628 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 01:35:26.697000 audit: PATH item=39 name=(null) inode=24632 dev=00:0a mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 01:35:26.697000 audit: PATH item=40 name=(null) inode=24628 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 01:35:26.697000 audit: PATH item=41 name=(null) inode=24633 dev=00:0a mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 01:35:26.697000 audit: PATH item=42 name=(null) inode=24613 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 01:35:26.697000 audit: PATH item=43 name=(null) inode=24634 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 
01:35:26.697000 audit: PATH item=44 name=(null) inode=24634 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 01:35:26.697000 audit: PATH item=45 name=(null) inode=24635 dev=00:0a mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 01:35:26.697000 audit: PATH item=46 name=(null) inode=24634 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 01:35:26.697000 audit: PATH item=47 name=(null) inode=24636 dev=00:0a mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 01:35:26.697000 audit: PATH item=48 name=(null) inode=24634 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 01:35:26.697000 audit: PATH item=49 name=(null) inode=24637 dev=00:0a mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 01:35:26.697000 audit: PATH item=50 name=(null) inode=24634 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 01:35:26.697000 audit: PATH item=51 name=(null) inode=24638 dev=00:0a mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 01:35:26.697000 audit: PATH item=52 name=(null) inode=24634 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 01:35:26.697000 audit: PATH item=53 
name=(null) inode=24639 dev=00:0a mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 01:35:26.697000 audit: PATH item=54 name=(null) inode=24613 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 01:35:26.697000 audit: PATH item=55 name=(null) inode=24640 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 01:35:26.697000 audit: PATH item=56 name=(null) inode=24640 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 01:35:26.697000 audit: PATH item=57 name=(null) inode=24641 dev=00:0a mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 01:35:26.697000 audit: PATH item=58 name=(null) inode=24640 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 01:35:26.697000 audit: PATH item=59 name=(null) inode=24642 dev=00:0a mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 01:35:26.697000 audit: PATH item=60 name=(null) inode=24640 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 01:35:26.697000 audit: PATH item=61 name=(null) inode=24643 dev=00:0a mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 01:35:26.697000 audit: PATH item=62 name=(null) inode=24640 dev=00:0a 
mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 01:35:26.697000 audit: PATH item=63 name=(null) inode=24644 dev=00:0a mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 01:35:26.697000 audit: PATH item=64 name=(null) inode=24640 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 01:35:26.697000 audit: PATH item=65 name=(null) inode=24645 dev=00:0a mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 01:35:26.697000 audit: PATH item=66 name=(null) inode=24613 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 01:35:26.697000 audit: PATH item=67 name=(null) inode=24646 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 01:35:26.697000 audit: PATH item=68 name=(null) inode=24646 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 01:35:26.697000 audit: PATH item=69 name=(null) inode=24647 dev=00:0a mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 01:35:26.697000 audit: PATH item=70 name=(null) inode=24646 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 01:35:26.697000 audit: PATH item=71 name=(null) inode=24648 dev=00:0a mode=0100440 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 01:35:26.697000 audit: PATH item=72 name=(null) inode=24646 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 01:35:26.697000 audit: PATH item=73 name=(null) inode=24649 dev=00:0a mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 01:35:26.697000 audit: PATH item=74 name=(null) inode=24646 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 01:35:26.697000 audit: PATH item=75 name=(null) inode=24650 dev=00:0a mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 01:35:26.697000 audit: PATH item=76 name=(null) inode=24646 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 01:35:26.697000 audit: PATH item=77 name=(null) inode=24651 dev=00:0a mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 01:35:26.697000 audit: PATH item=78 name=(null) inode=24613 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 01:35:26.697000 audit: PATH item=79 name=(null) inode=24652 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 01:35:26.697000 audit: PATH item=80 name=(null) inode=24652 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 
nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 01:35:26.697000 audit: PATH item=81 name=(null) inode=24653 dev=00:0a mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 01:35:26.697000 audit: PATH item=82 name=(null) inode=24652 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 01:35:26.697000 audit: PATH item=83 name=(null) inode=24654 dev=00:0a mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 01:35:26.697000 audit: PATH item=84 name=(null) inode=24652 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 01:35:26.697000 audit: PATH item=85 name=(null) inode=24655 dev=00:0a mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 01:35:26.697000 audit: PATH item=86 name=(null) inode=24652 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 01:35:26.697000 audit: PATH item=87 name=(null) inode=24656 dev=00:0a mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 01:35:26.697000 audit: PATH item=88 name=(null) inode=24652 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 01:35:26.697000 audit: PATH item=89 name=(null) inode=24657 dev=00:0a mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 
cap_fe=0 cap_fver=0 cap_frootid=0 May 17 01:35:26.697000 audit: PATH item=90 name=(null) inode=24613 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 01:35:26.697000 audit: PATH item=91 name=(null) inode=24658 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 01:35:26.697000 audit: PATH item=92 name=(null) inode=24658 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 01:35:26.697000 audit: PATH item=93 name=(null) inode=24659 dev=00:0a mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 01:35:26.697000 audit: PATH item=94 name=(null) inode=24658 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 01:35:26.697000 audit: PATH item=95 name=(null) inode=24660 dev=00:0a mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 01:35:26.697000 audit: PATH item=96 name=(null) inode=24658 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 01:35:26.697000 audit: PATH item=97 name=(null) inode=24661 dev=00:0a mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 01:35:26.697000 audit: PATH item=98 name=(null) inode=24658 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 
01:35:26.697000 audit: PATH item=99 name=(null) inode=24662 dev=00:0a mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 01:35:26.697000 audit: PATH item=100 name=(null) inode=24658 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 01:35:26.697000 audit: PATH item=101 name=(null) inode=24663 dev=00:0a mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 01:35:26.697000 audit: PATH item=102 name=(null) inode=24613 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 01:35:26.697000 audit: PATH item=103 name=(null) inode=24664 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 01:35:26.697000 audit: PATH item=104 name=(null) inode=24664 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 01:35:26.697000 audit: PATH item=105 name=(null) inode=24665 dev=00:0a mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 01:35:26.697000 audit: PATH item=106 name=(null) inode=24664 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 01:35:26.697000 audit: PATH item=107 name=(null) inode=24666 dev=00:0a mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 01:35:26.697000 audit: PATH 
item=108 name=(null) inode=24664 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 01:35:26.697000 audit: PATH item=109 name=(null) inode=24667 dev=00:0a mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 01:35:26.697000 audit: PATH item=110 name=(null) inode=24664 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 01:35:26.697000 audit: PATH item=111 name=(null) inode=24668 dev=00:0a mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 01:35:26.697000 audit: PATH item=112 name=(null) inode=24664 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 01:35:26.697000 audit: PATH item=113 name=(null) inode=24669 dev=00:0a mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 01:35:26.697000 audit: PATH item=114 name=(null) inode=24613 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 01:35:26.697000 audit: PATH item=115 name=(null) inode=24670 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 01:35:26.697000 audit: PATH item=116 name=(null) inode=24670 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 01:35:26.697000 audit: PATH item=117 name=(null) inode=24671 
dev=00:0a mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 01:35:26.697000 audit: PATH item=118 name=(null) inode=24670 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 01:35:26.697000 audit: PATH item=119 name=(null) inode=24672 dev=00:0a mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 01:35:26.697000 audit: PATH item=120 name=(null) inode=24670 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 01:35:26.697000 audit: PATH item=121 name=(null) inode=24673 dev=00:0a mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 01:35:26.697000 audit: PATH item=122 name=(null) inode=24670 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 01:35:26.697000 audit: PATH item=123 name=(null) inode=24674 dev=00:0a mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 01:35:26.697000 audit: PATH item=124 name=(null) inode=24670 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 01:35:26.697000 audit: PATH item=125 name=(null) inode=24675 dev=00:0a mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 01:35:26.697000 audit: PATH item=126 name=(null) inode=24613 dev=00:0a mode=040750 ouid=0 
ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 01:35:26.697000 audit: PATH item=127 name=(null) inode=24676 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 01:35:26.697000 audit: PATH item=128 name=(null) inode=24676 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 01:35:26.697000 audit: PATH item=129 name=(null) inode=24677 dev=00:0a mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 01:35:26.697000 audit: PATH item=130 name=(null) inode=24676 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 01:35:26.697000 audit: PATH item=131 name=(null) inode=24678 dev=00:0a mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 01:35:26.697000 audit: PATH item=132 name=(null) inode=24676 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 01:35:26.697000 audit: PATH item=133 name=(null) inode=24679 dev=00:0a mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 01:35:26.697000 audit: PATH item=134 name=(null) inode=24676 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 01:35:26.697000 audit: PATH item=135 name=(null) inode=24680 dev=00:0a mode=0100640 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 01:35:26.697000 audit: PATH item=136 name=(null) inode=24676 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 01:35:26.697000 audit: PATH item=137 name=(null) inode=24681 dev=00:0a mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 01:35:26.697000 audit: PATH item=138 name=(null) inode=24613 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 01:35:26.697000 audit: PATH item=139 name=(null) inode=24682 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 01:35:26.697000 audit: PATH item=140 name=(null) inode=24682 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 01:35:26.697000 audit: PATH item=141 name=(null) inode=24683 dev=00:0a mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 01:35:26.697000 audit: PATH item=142 name=(null) inode=24682 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 01:35:26.697000 audit: PATH item=143 name=(null) inode=24684 dev=00:0a mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 01:35:26.697000 audit: PATH item=144 name=(null) inode=24682 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 
nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 01:35:26.697000 audit: PATH item=145 name=(null) inode=24685 dev=00:0a mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 01:35:26.697000 audit: PATH item=146 name=(null) inode=24682 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 01:35:26.697000 audit: PATH item=147 name=(null) inode=24686 dev=00:0a mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 01:35:26.697000 audit: PATH item=148 name=(null) inode=24682 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 01:35:26.697000 audit: PATH item=149 name=(null) inode=24687 dev=00:0a mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 01:35:26.697000 audit: PATH item=150 name=(null) inode=24613 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 01:35:26.697000 audit: PATH item=151 name=(null) inode=24688 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 01:35:26.697000 audit: PATH item=152 name=(null) inode=24688 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 01:35:26.697000 audit: PATH item=153 name=(null) inode=24689 dev=00:0a mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 
cap_fe=0 cap_fver=0 cap_frootid=0 May 17 01:35:26.697000 audit: PATH item=154 name=(null) inode=24688 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 01:35:26.697000 audit: PATH item=155 name=(null) inode=24690 dev=00:0a mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 01:35:26.697000 audit: PATH item=156 name=(null) inode=24688 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 01:35:26.697000 audit: PATH item=157 name=(null) inode=24691 dev=00:0a mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 01:35:26.697000 audit: PATH item=158 name=(null) inode=24688 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 01:35:26.697000 audit: PATH item=159 name=(null) inode=24692 dev=00:0a mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 01:35:26.697000 audit: PATH item=160 name=(null) inode=24688 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 01:35:26.697000 audit: PATH item=161 name=(null) inode=24693 dev=00:0a mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 01:35:26.697000 audit: PATH item=162 name=(null) inode=40 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 
May 17 01:35:26.697000 audit: PATH item=163 name=(null) inode=24694 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 01:35:26.697000 audit: PATH item=164 name=(null) inode=24694 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 01:35:26.697000 audit: PATH item=165 name=(null) inode=24695 dev=00:0a mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 01:35:26.697000 audit: PATH item=166 name=(null) inode=24694 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 01:35:26.697000 audit: PATH item=167 name=(null) inode=24696 dev=00:0a mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 01:35:26.697000 audit: PATH item=168 name=(null) inode=24694 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 01:35:26.697000 audit: PATH item=169 name=(null) inode=24697 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 01:35:26.697000 audit: PATH item=170 name=(null) inode=24697 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 01:35:26.697000 audit: PATH item=171 name=(null) inode=24698 dev=00:0a mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 01:35:26.697000 audit: PATH 
item=172 name=(null) inode=24697 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 01:35:26.697000 audit: PATH item=173 name=(null) inode=24699 dev=00:0a mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 01:35:26.697000 audit: PATH item=174 name=(null) inode=24697 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 01:35:26.697000 audit: PATH item=175 name=(null) inode=24700 dev=00:0a mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 01:35:26.697000 audit: PATH item=176 name=(null) inode=24697 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 01:35:26.697000 audit: PATH item=177 name=(null) inode=24701 dev=00:0a mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 01:35:26.697000 audit: PATH item=178 name=(null) inode=24697 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 01:35:26.697000 audit: PATH item=179 name=(null) inode=24702 dev=00:0a mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 01:35:26.697000 audit: PATH item=180 name=(null) inode=24694 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 01:35:26.697000 audit: PATH item=181 name=(null) inode=24703 
dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 01:35:26.697000 audit: PATH item=182 name=(null) inode=24703 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 01:35:26.697000 audit: PATH item=183 name=(null) inode=24704 dev=00:0a mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 01:35:26.697000 audit: PATH item=184 name=(null) inode=24703 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 01:35:26.697000 audit: PATH item=185 name=(null) inode=24705 dev=00:0a mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 01:35:26.697000 audit: PATH item=186 name=(null) inode=24703 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 01:35:26.697000 audit: PATH item=187 name=(null) inode=24706 dev=00:0a mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 01:35:26.697000 audit: PATH item=188 name=(null) inode=24703 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 01:35:26.697000 audit: PATH item=189 name=(null) inode=24707 dev=00:0a mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 01:35:26.697000 audit: PATH item=190 name=(null) inode=24703 dev=00:0a mode=040750 ouid=0 
ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 01:35:26.697000 audit: PATH item=191 name=(null) inode=24708 dev=00:0a mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 01:35:26.697000 audit: PATH item=192 name=(null) inode=24694 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 01:35:26.697000 audit: PATH item=193 name=(null) inode=24709 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 01:35:26.697000 audit: PATH item=194 name=(null) inode=24709 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 01:35:26.697000 audit: PATH item=195 name=(null) inode=24710 dev=00:0a mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 01:35:26.697000 audit: PATH item=196 name=(null) inode=24709 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 01:35:26.697000 audit: PATH item=197 name=(null) inode=24711 dev=00:0a mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 01:35:26.697000 audit: PATH item=198 name=(null) inode=24709 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 01:35:26.697000 audit: PATH item=199 name=(null) inode=24712 dev=00:0a mode=0100640 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 01:35:26.697000 audit: PATH item=200 name=(null) inode=24709 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 01:35:26.697000 audit: PATH item=201 name=(null) inode=24713 dev=00:0a mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 01:35:26.697000 audit: PATH item=202 name=(null) inode=24709 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 01:35:26.697000 audit: PATH item=203 name=(null) inode=24714 dev=00:0a mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 01:35:26.697000 audit: PATH item=204 name=(null) inode=24694 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 01:35:26.697000 audit: PATH item=205 name=(null) inode=24715 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 01:35:26.697000 audit: PATH item=206 name=(null) inode=24715 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 01:35:26.697000 audit: PATH item=207 name=(null) inode=24716 dev=00:0a mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 01:35:26.697000 audit: PATH item=208 name=(null) inode=24715 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 
nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 01:35:26.697000 audit: PATH item=209 name=(null) inode=24717 dev=00:0a mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 01:35:26.697000 audit: PATH item=210 name=(null) inode=24715 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 01:35:26.697000 audit: PATH item=211 name=(null) inode=24718 dev=00:0a mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 01:35:26.697000 audit: PATH item=212 name=(null) inode=24715 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 01:35:26.697000 audit: PATH item=213 name=(null) inode=24719 dev=00:0a mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 01:35:26.697000 audit: PATH item=214 name=(null) inode=24715 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 01:35:26.697000 audit: PATH item=215 name=(null) inode=24720 dev=00:0a mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 01:35:26.697000 audit: PATH item=216 name=(null) inode=24694 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 01:35:26.697000 audit: PATH item=217 name=(null) inode=24721 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 
cap_fe=0 cap_fver=0 cap_frootid=0 May 17 01:35:26.697000 audit: PATH item=218 name=(null) inode=24721 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 01:35:26.697000 audit: PATH item=219 name=(null) inode=24722 dev=00:0a mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 01:35:26.697000 audit: PATH item=220 name=(null) inode=24721 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 01:35:26.697000 audit: PATH item=221 name=(null) inode=24723 dev=00:0a mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 01:35:26.697000 audit: PATH item=222 name=(null) inode=24721 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 01:35:26.697000 audit: PATH item=223 name=(null) inode=24724 dev=00:0a mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 01:35:26.697000 audit: PATH item=224 name=(null) inode=24721 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 01:35:26.697000 audit: PATH item=225 name=(null) inode=24725 dev=00:0a mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 01:35:26.697000 audit: PATH item=226 name=(null) inode=24721 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 
May 17 01:35:26.697000 audit: PATH item=227 name=(null) inode=24726 dev=00:0a mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 01:35:26.697000 audit: PATH item=228 name=(null) inode=24694 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 01:35:26.697000 audit: PATH item=229 name=(null) inode=24727 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 01:35:26.697000 audit: PATH item=230 name=(null) inode=24727 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 01:35:26.697000 audit: PATH item=231 name=(null) inode=24728 dev=00:0a mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 01:35:26.697000 audit: PATH item=232 name=(null) inode=24727 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 01:35:26.697000 audit: PATH item=233 name=(null) inode=24729 dev=00:0a mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 01:35:26.697000 audit: PATH item=234 name=(null) inode=24727 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 01:35:26.697000 audit: PATH item=235 name=(null) inode=24730 dev=00:0a mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 01:35:26.697000 audit: 
PATH item=236 name=(null) inode=24727 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 01:35:26.697000 audit: PATH item=237 name=(null) inode=24731 dev=00:0a mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 01:35:26.697000 audit: PATH item=238 name=(null) inode=24727 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 01:35:26.697000 audit: PATH item=239 name=(null) inode=24732 dev=00:0a mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 01:35:26.697000 audit: PATH item=240 name=(null) inode=24694 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 01:35:26.697000 audit: PATH item=241 name=(null) inode=24733 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 01:35:26.697000 audit: PATH item=242 name=(null) inode=24733 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 01:35:26.697000 audit: PATH item=243 name=(null) inode=24734 dev=00:0a mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 01:35:26.697000 audit: PATH item=244 name=(null) inode=24733 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 01:35:26.697000 audit: PATH item=245 name=(null) 
inode=24735 dev=00:0a mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 01:35:26.697000 audit: PATH item=246 name=(null) inode=24733 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 01:35:26.697000 audit: PATH item=247 name=(null) inode=24736 dev=00:0a mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 01:35:26.697000 audit: PATH item=248 name=(null) inode=24733 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 01:35:26.697000 audit: PATH item=249 name=(null) inode=24737 dev=00:0a mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 01:35:26.697000 audit: PATH item=250 name=(null) inode=24733 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 01:35:26.697000 audit: PATH item=251 name=(null) inode=24738 dev=00:0a mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 01:35:26.697000 audit: PATH item=252 name=(null) inode=24694 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 01:35:26.697000 audit: PATH item=253 name=(null) inode=24739 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 01:35:26.697000 audit: PATH item=254 name=(null) inode=24739 dev=00:0a mode=040750 
ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 01:35:26.697000 audit: PATH item=255 name=(null) inode=24740 dev=00:0a mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 01:35:26.697000 audit: PATH item=256 name=(null) inode=24739 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 01:35:26.716072 kernel: ipmi_ssif: IPMI SSIF Interface driver May 17 01:35:26.716093 kernel: ipmi_si: IPMI System Interface driver May 17 01:35:26.697000 audit: PATH item=257 name=(null) inode=24741 dev=00:0a mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 01:35:26.697000 audit: PATH item=258 name=(null) inode=24739 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 01:35:26.697000 audit: PATH item=259 name=(null) inode=24742 dev=00:0a mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 01:35:26.697000 audit: PATH item=260 name=(null) inode=24739 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 01:35:26.697000 audit: PATH item=261 name=(null) inode=24743 dev=00:0a mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 01:35:26.697000 audit: PATH item=262 name=(null) inode=24739 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 
cap_frootid=0 May 17 01:35:26.697000 audit: PATH item=263 name=(null) inode=24744 dev=00:0a mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 01:35:26.697000 audit: PATH item=264 name=(null) inode=24694 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 01:35:26.697000 audit: PATH item=265 name=(null) inode=24745 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 01:35:26.697000 audit: PATH item=266 name=(null) inode=24745 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 01:35:26.697000 audit: PATH item=267 name=(null) inode=24746 dev=00:0a mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 01:35:26.697000 audit: PATH item=268 name=(null) inode=24745 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 01:35:26.697000 audit: PATH item=269 name=(null) inode=24747 dev=00:0a mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 01:35:26.697000 audit: PATH item=270 name=(null) inode=24745 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 01:35:26.697000 audit: PATH item=271 name=(null) inode=24748 dev=00:0a mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 
01:35:26.697000 audit: PATH item=272 name=(null) inode=24745 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 01:35:26.697000 audit: PATH item=273 name=(null) inode=24749 dev=00:0a mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 01:35:26.697000 audit: PATH item=274 name=(null) inode=24745 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 01:35:26.697000 audit: PATH item=275 name=(null) inode=24750 dev=00:0a mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 01:35:26.697000 audit: PATH item=276 name=(null) inode=24694 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 01:35:26.697000 audit: PATH item=277 name=(null) inode=24751 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 01:35:26.697000 audit: PATH item=278 name=(null) inode=24751 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 01:35:26.697000 audit: PATH item=279 name=(null) inode=24752 dev=00:0a mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 01:35:26.697000 audit: PATH item=280 name=(null) inode=24751 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 01:35:26.697000 audit: PATH 
item=281 name=(null) inode=24753 dev=00:0a mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 01:35:26.697000 audit: PATH item=282 name=(null) inode=24751 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 01:35:26.697000 audit: PATH item=283 name=(null) inode=24754 dev=00:0a mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 01:35:26.697000 audit: PATH item=284 name=(null) inode=24751 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 01:35:26.697000 audit: PATH item=285 name=(null) inode=24755 dev=00:0a mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 01:35:26.697000 audit: PATH item=286 name=(null) inode=24751 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 01:35:26.697000 audit: PATH item=287 name=(null) inode=24756 dev=00:0a mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 01:35:26.697000 audit: PATH item=288 name=(null) inode=24694 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 01:35:26.697000 audit: PATH item=289 name=(null) inode=24757 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 01:35:26.697000 audit: PATH item=290 name=(null) inode=24757 
dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 01:35:26.697000 audit: PATH item=291 name=(null) inode=24758 dev=00:0a mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 01:35:26.697000 audit: PATH item=292 name=(null) inode=24757 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 01:35:26.697000 audit: PATH item=293 name=(null) inode=24759 dev=00:0a mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 01:35:26.697000 audit: PATH item=294 name=(null) inode=24757 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 01:35:26.697000 audit: PATH item=295 name=(null) inode=24760 dev=00:0a mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 01:35:26.697000 audit: PATH item=296 name=(null) inode=24757 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 01:35:26.697000 audit: PATH item=297 name=(null) inode=24761 dev=00:0a mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 01:35:26.697000 audit: PATH item=298 name=(null) inode=24757 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 01:35:26.697000 audit: PATH item=299 name=(null) inode=24762 dev=00:0a mode=0100440 ouid=0 
ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 01:35:26.697000 audit: PATH item=300 name=(null) inode=24694 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 01:35:26.697000 audit: PATH item=301 name=(null) inode=24763 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 01:35:26.697000 audit: PATH item=302 name=(null) inode=24763 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 01:35:26.697000 audit: PATH item=303 name=(null) inode=24764 dev=00:0a mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 01:35:26.697000 audit: PATH item=304 name=(null) inode=24763 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 01:35:26.697000 audit: PATH item=305 name=(null) inode=24765 dev=00:0a mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 01:35:26.697000 audit: PATH item=306 name=(null) inode=24763 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 01:35:26.697000 audit: PATH item=307 name=(null) inode=24766 dev=00:0a mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 01:35:26.697000 audit: PATH item=308 name=(null) inode=24763 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 01:35:26.697000 audit: PATH item=309 name=(null) inode=24767 dev=00:0a mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 01:35:26.697000 audit: PATH item=310 name=(null) inode=24763 dev=00:0a mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 01:35:26.697000 audit: PATH item=311 name=(null) inode=24768 dev=00:0a mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 01:35:26.697000 audit: PROCTITLE proctitle="(udev-worker)" May 17 01:35:26.730866 kernel: ipmi_si: Unable to find any System Interface(s) May 17 01:35:26.766245 systemd-networkd[2041]: bond0: netdev ready May 17 01:35:26.769587 systemd-networkd[2041]: lo: Link UP May 17 01:35:26.769593 systemd-networkd[2041]: lo: Gained carrier May 17 01:35:26.774920 systemd-networkd[2041]: Enumeration completed May 17 01:35:26.775037 systemd[1]: Started systemd-networkd.service. May 17 01:35:26.775406 systemd-networkd[2041]: bond0: Configuring with /etc/systemd/network/05-bond0.network. May 17 01:35:26.777451 systemd-networkd[2041]: enP1p1s0f1np1: Configuring with /etc/systemd/network/10-0c:42:a1:49:d8:3d.network. May 17 01:35:26.777000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 01:35:26.813473 systemd[1]: Finished systemd-udev-settle.service. May 17 01:35:26.816000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? 
addr=? terminal=? res=success' May 17 01:35:26.817837 systemd[1]: Starting lvm2-activation-early.service... May 17 01:35:26.828691 lvm[2130]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. May 17 01:35:26.862862 systemd[1]: Finished lvm2-activation-early.service. May 17 01:35:26.865000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 01:35:26.865975 systemd[1]: Reached target cryptsetup.target. May 17 01:35:26.870087 systemd[1]: Starting lvm2-activation.service... May 17 01:35:26.874710 lvm[2133]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. May 17 01:35:26.909739 systemd[1]: Finished lvm2-activation.service. May 17 01:35:26.912000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 01:35:26.912978 systemd[1]: Reached target local-fs-pre.target. May 17 01:35:26.915858 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). May 17 01:35:26.915885 systemd[1]: Reached target local-fs.target. May 17 01:35:26.918715 systemd[1]: Reached target machines.target. May 17 01:35:26.922842 systemd[1]: Starting ldconfig.service... May 17 01:35:26.925929 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. May 17 01:35:26.925982 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 17 01:35:26.927247 systemd[1]: Starting systemd-boot-update.service... 
May 17 01:35:26.930875 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... May 17 01:35:26.935016 systemd[1]: Starting systemd-machine-id-commit.service... May 17 01:35:26.939300 systemd[1]: Starting systemd-sysext.service... May 17 01:35:26.942472 systemd[1]: boot.automount: Got automount request for /boot, triggered by 2137 (bootctl) May 17 01:35:26.943615 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... May 17 01:35:26.947044 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. May 17 01:35:26.949000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 01:35:26.953213 systemd[1]: Unmounting usr-share-oem.mount... May 17 01:35:26.957751 systemd[1]: usr-share-oem.mount: Deactivated successfully. May 17 01:35:26.957951 systemd[1]: Unmounted usr-share-oem.mount. May 17 01:35:26.978050 kernel: loop0: detected capacity change from 0 to 203944 May 17 01:35:26.984923 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. May 17 01:35:26.985480 systemd[1]: Finished systemd-machine-id-commit.service. May 17 01:35:26.988000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 01:35:27.001641 systemd-fsck[2153]: fsck.fat 4.2 (2021-01-31) May 17 01:35:27.001641 systemd-fsck[2153]: /dev/nvme0n1p1: 236 files, 117182/258078 clusters May 17 01:35:27.002050 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher May 17 01:35:27.003109 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. 
May 17 01:35:27.006000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 01:35:27.007987 systemd[1]: Mounting boot.mount... May 17 01:35:27.016730 systemd[1]: Mounted boot.mount. May 17 01:35:27.028147 systemd[1]: Finished systemd-boot-update.service. May 17 01:35:27.031000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 01:35:27.042050 kernel: loop1: detected capacity change from 0 to 203944 May 17 01:35:27.047150 (sd-sysext)[2165]: Using extensions 'kubernetes'. May 17 01:35:27.047503 (sd-sysext)[2165]: Merged extensions into '/usr'. May 17 01:35:27.062977 systemd[1]: Mounting usr-share-oem.mount... May 17 01:35:27.066533 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. May 17 01:35:27.067769 systemd[1]: Starting modprobe@dm_mod.service... May 17 01:35:27.072647 systemd[1]: Starting modprobe@efi_pstore.service... May 17 01:35:27.077076 systemd[1]: Starting modprobe@loop.service... May 17 01:35:27.080473 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. May 17 01:35:27.080596 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 17 01:35:27.082987 systemd[1]: Mounted usr-share-oem.mount. May 17 01:35:27.086456 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 17 01:35:27.086607 systemd[1]: Finished modprobe@dm_mod.service. 
May 17 01:35:27.089000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 01:35:27.089000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 01:35:27.090069 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 17 01:35:27.090193 systemd[1]: Finished modprobe@efi_pstore.service. May 17 01:35:27.093000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 01:35:27.093000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 01:35:27.093552 systemd[1]: modprobe@loop.service: Deactivated successfully. May 17 01:35:27.093689 systemd[1]: Finished modprobe@loop.service. May 17 01:35:27.096000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 01:35:27.096000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 01:35:27.097192 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). 
May 17 01:35:27.097258 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
May 17 01:35:27.098174 systemd[1]: Finished systemd-sysext.service.
May 17 01:35:27.101000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 01:35:27.102771 systemd[1]: Starting ensure-sysext.service...
May 17 01:35:27.107509 systemd[1]: Starting systemd-tmpfiles-setup.service...
May 17 01:35:27.113811 systemd[1]: Reloading.
May 17 01:35:27.117575 systemd-tmpfiles[2179]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring.
May 17 01:35:27.118555 systemd-tmpfiles[2179]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
May 17 01:35:27.119779 systemd-tmpfiles[2179]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
May 17 01:35:27.121331 ldconfig[2136]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
May 17 01:35:27.132909 /usr/lib/systemd/system-generators/torcx-generator[2202]: time="2025-05-17T01:35:27Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]"
May 17 01:35:27.132934 /usr/lib/systemd/system-generators/torcx-generator[2202]: time="2025-05-17T01:35:27Z" level=info msg="torcx already run"
May 17 01:35:27.214491 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
May 17 01:35:27.214504 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
May 17 01:35:27.230881 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 17 01:35:27.283811 systemd[1]: Finished ldconfig.service.
May 17 01:35:27.286000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 01:35:27.287990 systemd[1]: Finished systemd-tmpfiles-setup.service.
May 17 01:35:27.291000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 01:35:27.293744 systemd[1]: Starting audit-rules.service...
May 17 01:35:27.298222 systemd[1]: Starting clean-ca-certificates.service...
May 17 01:35:27.302959 systemd[1]: Starting systemd-journal-catalog-update.service...
May 17 01:35:27.307978 systemd[1]: Starting systemd-resolved.service...
May 17 01:35:27.312968 systemd[1]: Starting systemd-timesyncd.service...
May 17 01:35:27.317570 systemd[1]: Starting systemd-update-utmp.service...
May 17 01:35:27.321473 systemd[1]: Finished clean-ca-certificates.service.
May 17 01:35:27.323000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1
May 17 01:35:27.323000 audit[2287]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffecedc020 a2=420 a3=0 items=0 ppid=2265 pid=2287 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null)
May 17 01:35:27.323000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573
May 17 01:35:27.323842 augenrules[2287]: No rules
May 17 01:35:27.325347 systemd[1]: Finished audit-rules.service.
May 17 01:35:27.328799 systemd[1]: Finished systemd-journal-catalog-update.service.
May 17 01:35:27.336623 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
May 17 01:35:27.337892 systemd[1]: Starting modprobe@dm_mod.service...
May 17 01:35:27.342595 systemd[1]: Starting modprobe@efi_pstore.service...
May 17 01:35:27.346949 systemd[1]: Starting modprobe@loop.service...
May 17 01:35:27.350494 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
May 17 01:35:27.350621 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
May 17 01:35:27.352202 systemd[1]: Starting systemd-update-done.service...
May 17 01:35:27.355270 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
May 17 01:35:27.356323 systemd[1]: Finished systemd-update-utmp.service.
May 17 01:35:27.359755 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 17 01:35:27.359894 systemd[1]: Finished modprobe@dm_mod.service.
May 17 01:35:27.363274 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 17 01:35:27.363401 systemd[1]: Finished modprobe@efi_pstore.service.
May 17 01:35:27.366660 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 17 01:35:27.366800 systemd[1]: Finished modprobe@loop.service.
May 17 01:35:27.370438 systemd[1]: Finished systemd-update-done.service.
May 17 01:35:27.375638 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
May 17 01:35:27.377076 systemd[1]: Starting modprobe@dm_mod.service...
May 17 01:35:27.381454 systemd[1]: Starting modprobe@efi_pstore.service...
May 17 01:35:27.384510 systemd-resolved[2277]: Positive Trust Anchors:
May 17 01:35:27.384520 systemd-resolved[2277]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 17 01:35:27.384547 systemd-resolved[2277]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
May 17 01:35:27.385325 systemd[1]: Starting modprobe@loop.service...
May 17 01:35:27.388217 systemd-resolved[2277]: Using system hostname 'ci-3510.3.7-n-8226148b53'.
May 17 01:35:27.388309 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
May 17 01:35:27.388421 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
May 17 01:35:27.388504 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
May 17 01:35:27.389162 systemd[1]: Started systemd-timesyncd.service.
May 17 01:35:27.392376 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 17 01:35:27.392514 systemd[1]: Finished modprobe@dm_mod.service.
May 17 01:35:27.395367 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 17 01:35:27.395493 systemd[1]: Finished modprobe@efi_pstore.service.
May 17 01:35:27.398500 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 17 01:35:27.398639 systemd[1]: Finished modprobe@loop.service.
May 17 01:35:27.403449 systemd[1]: Reached target time-set.target.
May 17 01:35:27.406348 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
May 17 01:35:27.407523 systemd[1]: Starting modprobe@dm_mod.service...
May 17 01:35:27.411413 systemd[1]: Starting modprobe@drm.service...
May 17 01:35:27.415233 systemd[1]: Starting modprobe@efi_pstore.service...
May 17 01:35:27.419017 systemd[1]: Starting modprobe@loop.service...
May 17 01:35:27.421758 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
May 17 01:35:27.421862 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
May 17 01:35:27.423281 systemd[1]: Starting systemd-networkd-wait-online.service...
May 17 01:35:27.425888 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
May 17 01:35:27.426921 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 17 01:35:27.427062 systemd[1]: Finished modprobe@dm_mod.service.
May 17 01:35:27.430078 systemd[1]: modprobe@drm.service: Deactivated successfully.
May 17 01:35:27.430210 systemd[1]: Finished modprobe@drm.service.
May 17 01:35:27.432963 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 17 01:35:27.433164 systemd[1]: Finished modprobe@efi_pstore.service.
May 17 01:35:27.435916 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 17 01:35:27.436062 systemd[1]: Finished modprobe@loop.service.
May 17 01:35:27.439088 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 17 01:35:27.439149 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
May 17 01:35:27.440199 systemd[1]: Finished ensure-sysext.service.
May 17 01:35:28.725059 kernel: mlx5_core 0001:01:00.1 enP1p1s0f1np1: Link up
May 17 01:35:28.740062 kernel: bond0: (slave enP1p1s0f1np1): Enslaving as a backup interface with an up link
May 17 01:35:28.741596 systemd-networkd[2041]: enP1p1s0f0np0: Configuring with /etc/systemd/network/10-0c:42:a1:49:d8:3c.network.
May 17 01:35:28.767059 kernel: bond0: Warning: No 802.3ad response from the link partner for any adapters in the bond
May 17 01:35:28.887058 kernel: bond0: Warning: No 802.3ad response from the link partner for any adapters in the bond
May 17 01:35:29.552057 kernel: mlx5_core 0001:01:00.0 enP1p1s0f0np0: Link up
May 17 01:35:29.567049 kernel: bond0: (slave enP1p1s0f0np0): Enslaving as a backup interface with an up link
May 17 01:35:29.567107 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): bond0: link becomes ready
May 17 01:35:29.577462 systemd-networkd[2041]: bond0: Link UP
May 17 01:35:29.577794 systemd-networkd[2041]: enP1p1s0f1np1: Link UP
May 17 01:35:29.578029 systemd-networkd[2041]: enP1p1s0f1np1: Gained carrier
May 17 01:35:29.578087 systemd[1]: Started systemd-resolved.service.
May 17 01:35:29.579089 systemd-networkd[2041]: enP1p1s0f1np1: Reconfiguring with /etc/systemd/network/10-0c:42:a1:49:d8:3c.network.
May 17 01:35:29.581049 kernel: bond0: (slave enP1p1s0f1np1): link status definitely up, 25000 Mbps full duplex
May 17 01:35:29.581069 kernel: bond0: active interface up!
May 17 01:35:29.582054 kernel: bond0: (slave enP1p1s0f0np0): link status definitely up, 25000 Mbps full duplex
May 17 01:35:29.611525 systemd[1]: Reached target network.target.
May 17 01:35:29.615005 systemd[1]: Reached target nss-lookup.target.
May 17 01:35:29.618470 systemd[1]: Reached target sysinit.target.
May 17 01:35:29.621978 systemd[1]: Started motdgen.path.
May 17 01:35:29.625507 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path.
May 17 01:35:29.629236 systemd[1]: Started logrotate.timer.
May 17 01:35:29.632824 systemd[1]: Started mdadm.timer.
May 17 01:35:29.636351 systemd[1]: Started systemd-tmpfiles-clean.timer.
May 17 01:35:29.639900 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
May 17 01:35:29.639922 systemd[1]: Reached target paths.target.
May 17 01:35:29.643408 systemd[1]: Reached target timers.target.
May 17 01:35:29.647144 systemd[1]: Listening on dbus.socket.
May 17 01:35:29.651979 systemd[1]: Starting docker.socket...
May 17 01:35:29.656602 systemd[1]: Listening on sshd.socket.
May 17 01:35:29.660267 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
May 17 01:35:29.660608 systemd[1]: Listening on docker.socket.
May 17 01:35:29.664168 systemd[1]: Reached target sockets.target.
May 17 01:35:29.667691 systemd[1]: Reached target basic.target.
May 17 01:35:29.671310 systemd[1]: System is tainted: cgroupsv1
May 17 01:35:29.671348 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met.
May 17 01:35:29.671370 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met.
May 17 01:35:29.672568 systemd[1]: Starting containerd.service...
May 17 01:35:29.677215 systemd[1]: Starting coreos-metadata-sshkeys@core.service...
May 17 01:35:29.682157 systemd[1]: Starting coreos-metadata.service...
May 17 01:35:29.687193 systemd[1]: Starting dbus.service...
May 17 01:35:29.690049 kernel: bond0: (slave enP1p1s0f1np1): link status down for interface, disabling it in 200 ms
May 17 01:35:29.704052 dbus-daemon[2332]: [system] SELinux support is enabled
May 17 01:35:29.716049 kernel: bond0: (slave enP1p1s0f1np1): link status down for interface, disabling it in 200 ms
May 17 01:35:29.716279 coreos-metadata[2326]: May 17 01:35:29.716 INFO Fetching https://metadata.packet.net/metadata: Attempt #1
May 17 01:35:29.716962 systemd[1]: Starting enable-oem-cloudinit.service...
May 17 01:35:29.719049 kernel: bond0: (slave enP1p1s0f1np1): link status down for interface, disabling it in 200 ms
May 17 01:35:29.720056 coreos-metadata[2329]: May 17 01:35:29.720 INFO Fetching https://metadata.packet.net/metadata: Attempt #1
May 17 01:35:29.720263 coreos-metadata[2326]: May 17 01:35:29.720 INFO Failed to fetch: error sending request for url (https://metadata.packet.net/metadata): error trying to connect: dns error: failed to lookup address information: Temporary failure in name resolution
May 17 01:35:29.721049 kernel: bond0: (slave enP1p1s0f1np1): link status down for interface, disabling it in 200 ms
May 17 01:35:29.721083 coreos-metadata[2329]: May 17 01:35:29.721 INFO Failed to fetch: error sending request for url (https://metadata.packet.net/metadata): error trying to connect: dns error: failed to lookup address information: Temporary failure in name resolution
May 17 01:35:29.723382 jq[2334]: false
May 17 01:35:29.758051 kernel: bond0: (slave enP1p1s0f1np1): link status down for interface, disabling it in 200 ms
May 17 01:35:29.759424 systemd[1]: Starting extend-filesystems.service...
May 17 01:35:29.761049 kernel: bond0: (slave enP1p1s0f1np1): link status down for interface, disabling it in 200 ms
May 17 01:35:29.771673 extend-filesystems[2337]: Found loop1
May 17 01:35:29.839807 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 233815889 blocks
May 17 01:35:29.811726 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment).
May 17 01:35:29.840004 extend-filesystems[2337]: Found nvme0n1
May 17 01:35:29.840004 extend-filesystems[2337]: Found nvme0n1p1
May 17 01:35:29.840004 extend-filesystems[2337]: Found nvme0n1p2
May 17 01:35:29.840004 extend-filesystems[2337]: Found nvme0n1p3
May 17 01:35:29.840004 extend-filesystems[2337]: Found usr
May 17 01:35:29.840004 extend-filesystems[2337]: Found nvme0n1p4
May 17 01:35:29.840004 extend-filesystems[2337]: Found nvme0n1p6
May 17 01:35:29.840004 extend-filesystems[2337]: Found nvme0n1p7
May 17 01:35:29.840004 extend-filesystems[2337]: Found nvme0n1p9
May 17 01:35:29.840004 extend-filesystems[2337]: Checking size of /dev/nvme0n1p9
May 17 01:35:29.840004 extend-filesystems[2337]: Resized partition /dev/nvme0n1p9
May 17 01:35:30.589584 kernel: bond0: (slave enP1p1s0f1np1): link status down for interface, disabling it in 200 ms
May 17 01:35:30.590209 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 233815889
May 17 01:35:29.814204 systemd[1]: Starting motdgen.service...
May 17 01:35:30.568106 dbus-daemon[2332]: [system] Successfully activated service 'org.freedesktop.systemd1'
May 17 01:35:30.590839 extend-filesystems[2344]: resize2fs 1.46.5 (30-Dec-2021)
May 17 01:35:30.590839 extend-filesystems[2344]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required
May 17 01:35:30.590839 extend-filesystems[2344]: old_desc_blocks = 1, new_desc_blocks = 112
May 17 01:35:30.590839 extend-filesystems[2344]: The filesystem on /dev/nvme0n1p9 is now 233815889 (4k) blocks long.
May 17 01:35:31.011133 kernel: bond0: (slave enP1p1s0f1np1): link status down for interface, disabling it in 200 ms
May 17 01:35:29.869790 systemd[1]: Starting prepare-helm.service...
May 17 01:35:31.011845 coreos-metadata[2329]: May 17 01:35:30.721 INFO Fetching https://metadata.packet.net/metadata: Attempt #2
May 17 01:35:31.011845 coreos-metadata[2329]: May 17 01:35:30.721 INFO Failed to fetch: error sending request for url (https://metadata.packet.net/metadata): error trying to connect: dns error: failed to lookup address information: Temporary failure in name resolution
May 17 01:35:31.012094 coreos-metadata[2326]: May 17 01:35:30.720 INFO Fetching https://metadata.packet.net/metadata: Attempt #2
May 17 01:35:31.012094 coreos-metadata[2326]: May 17 01:35:30.720 INFO Failed to fetch: error sending request for url (https://metadata.packet.net/metadata): error trying to connect: dns error: failed to lookup address information: Temporary failure in name resolution
May 17 01:35:31.012285 extend-filesystems[2337]: Resized filesystem in /dev/nvme0n1p9
May 17 01:35:31.012285 extend-filesystems[2337]: Found nvme1n1
May 17 01:35:31.128223 kernel: bond0: (slave enP1p1s0f1np1): link status down for interface, disabling it in 200 ms
May 17 01:35:29.925531 systemd[1]: Starting ssh-key-proc-cmdline.service...
May 17 01:35:29.983471 systemd[1]: Starting sshd-keygen.service...
May 17 01:35:30.044765 systemd[1]: Starting systemd-logind.service...
May 17 01:35:31.128836 sshd_keygen[2370]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
May 17 01:35:30.103322 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
May 17 01:35:31.128960 update_engine[2372]: I0517 01:35:30.140610 2372 main.cc:92] Flatcar Update Engine starting
May 17 01:35:31.128960 update_engine[2372]: I0517 01:35:30.144105 2372 update_check_scheduler.cc:74] Next update check in 10m8s
May 17 01:35:30.103629 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
May 17 01:35:31.129233 jq[2373]: true
May 17 01:35:30.105941 systemd[1]: Starting update-engine.service...
May 17 01:35:30.166663 systemd-logind[2371]: Watching system buttons on /dev/input/event0 (Power Button)
May 17 01:35:31.129726 tar[2379]: linux-arm64/helm
May 17 01:35:31.129726 tar[2379]: linux-arm64/LICENSE
May 17 01:35:31.129726 tar[2379]: linux-arm64/README.md
May 17 01:35:30.167670 systemd[1]: Starting update-ssh-keys-after-ignition.service...
May 17 01:35:31.129951 jq[2381]: true
May 17 01:35:30.196554 systemd-logind[2371]: New seat seat0.
May 17 01:35:31.130102 env[2382]: time="2025-05-17T01:35:30.606288880Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16
May 17 01:35:31.130102 env[2382]: time="2025-05-17T01:35:30.623030680Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
May 17 01:35:31.130102 env[2382]: time="2025-05-17T01:35:30.623193040Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
May 17 01:35:31.130102 env[2382]: time="2025-05-17T01:35:30.624687080Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.1
May 17 01:35:31.130102 env[2382]: 82-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
May 17 01:35:31.130102 env[2382]: time="2025-05-17T01:35:30.624710440Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
May 17 01:35:31.130102 env[2382]: time="2025-05-17T01:35:30.624941440Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter
May 17 01:35:31.130102 env[2382]: : skip plugin" type=io.containerd.snapshotter.v1
May 17 01:35:31.130102 env[2382]: time="2025-05-17T01:35:30.624959160Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
May 17 01:35:31.130102 env[2382]: time="2025-05-17T01:35:30.624970960Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
May 17 01:35:31.130102 env[2382]: time="2025-05-17T01:35:30.624979840Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
May 17 01:35:31.130102 env[2382]: time="2025-05-17T01:35:30.625054880Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
May 17 01:35:31.130102 env[2382]: time="2025-05-17T01:35:30.625281080Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
May 17 01:35:30.228531 systemd[1]: Started dbus.service.
May 17 01:35:31.130594 env[2382]: time="2025-05-17T01:35:30.625430920Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
May 17 01:35:31.130594 env[2382]: time="2025-05-17T01:35:30.625446600Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
May 17 01:35:31.130594 env[2382]: time="2025-05-17T01:35:30.625508120Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
May 17 01:35:31.130594 env[2382]: time="2025-05-17T01:35:30.625529400Z" level=info msg="metadata content store policy set" policy=shared
May 17 01:35:31.130594 env[2382]: time="2025-05-17T01:35:30.626664320Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
May 17 01:35:31.130594 env[2382]: time="2025-05-17T01:35:30.626686720Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
May 17 01:35:31.130594 env[2382]: time="2025-05-17T01:35:30.626698840Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
May 17 01:35:31.130594 env[2382]: time="2025-05-17T01:35:30.626740920Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
May 17 01:35:31.130594 env[2382]: time="2025-05-17T01:35:30.626753720Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
May 17 01:35:31.130594 env[2382]: time="2025-05-17T01:35:30.626766200Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
May 17 01:35:31.130594 env[2382]: time="2025-05-17T01:35:30.626778920Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
May 17 01:35:31.130594 env[2382]: time="2025-05-17T01:35:30.627032600Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
May 17 01:35:31.130594 env[2382]: time="2025-05-17T01:35:30.627053880Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1
May 17 01:35:30.307526 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
May 17 01:35:31.131054 env[2382]: time="2025-05-17T01:35:30.627067600Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
May 17 01:35:31.131054 env[2382]: time="2025-05-17T01:35:30.627079560Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
May 17 01:35:31.131054 env[2382]: time="2025-05-17T01:35:30.627090960Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
May 17 01:35:31.131054 env[2382]: time="2025-05-17T01:35:30.627201800Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
May 17 01:35:31.131054 env[2382]: time="2025-05-17T01:35:30.627271280Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
May 17 01:35:31.131054 env[2382]: time="2025-05-17T01:35:30.627593640Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
May 17 01:35:31.131054 env[2382]: time="2025-05-17T01:35:30.627616680Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
May 17 01:35:31.131054 env[2382]: time="2025-05-17T01:35:30.627628760Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
May 17 01:35:31.131054 env[2382]: time="2025-05-17T01:35:30.627909800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
May 17 01:35:31.131054 env[2382]: time="2025-05-17T01:35:30.627924680Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
May 17 01:35:31.131054 env[2382]: time="2025-05-17T01:35:30.627935760Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
May 17 01:35:31.131054 env[2382]: time="2025-05-17T01:35:30.627946000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
May 17 01:35:31.131054 env[2382]: time="2025-05-17T01:35:30.627957120Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
May 17 01:35:31.131054 env[2382]: time="2025-05-17T01:35:30.627970800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
May 17 01:35:31.131358 bash[2414]: Updated "/home/core/.ssh/authorized_keys"
May 17 01:35:30.307941 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped.
May 17 01:35:31.131538 env[2382]: time="2025-05-17T01:35:30.627982200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
May 17 01:35:31.131538 env[2382]: time="2025-05-17T01:35:30.627993040Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
May 17 01:35:31.131538 env[2382]: time="2025-05-17T01:35:30.628005160Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..."
type=io.containerd.internal.v1 May 17 01:35:31.131538 env[2382]: time="2025-05-17T01:35:30.628127600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 May 17 01:35:31.131538 env[2382]: time="2025-05-17T01:35:30.628143040Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 May 17 01:35:31.131538 env[2382]: time="2025-05-17T01:35:30.628153800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 May 17 01:35:31.131538 env[2382]: time="2025-05-17T01:35:30.628164320Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 May 17 01:35:31.131538 env[2382]: time="2025-05-17T01:35:30.628213560Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 May 17 01:35:31.131538 env[2382]: time="2025-05-17T01:35:30.628223520Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 May 17 01:35:31.131538 env[2382]: time="2025-05-17T01:35:30.628240680Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" May 17 01:35:31.131538 env[2382]: time="2025-05-17T01:35:30.628280400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 May 17 01:35:30.308423 systemd[1]: motdgen.service: Deactivated successfully. May 17 01:35:30.308640 systemd[1]: Finished motdgen.service. 
May 17 01:35:31.131859 env[2382]: time="2025-05-17T01:35:30.628840360Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock 
RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" May 17 01:35:31.131859 env[2382]: time="2025-05-17T01:35:30.629000080Z" level=info msg="Connect containerd service" May 17 01:35:31.131859 env[2382]: time="2025-05-17T01:35:30.629079120Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" May 17 01:35:31.131859 env[2382]: time="2025-05-17T01:35:30.629795080Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 17 01:35:31.131859 env[2382]: time="2025-05-17T01:35:30.629996920Z" level=info msg="Start subscribing containerd event" May 17 01:35:31.131859 env[2382]: time="2025-05-17T01:35:30.630052960Z" level=info msg="Start recovering state" May 17 01:35:31.131859 env[2382]: time="2025-05-17T01:35:30.630120840Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc May 17 01:35:31.131859 env[2382]: time="2025-05-17T01:35:30.630128880Z" level=info msg="Start event monitor" May 17 01:35:31.131859 env[2382]: time="2025-05-17T01:35:30.630150560Z" level=info msg="Start snapshots syncer" May 17 01:35:31.131859 env[2382]: time="2025-05-17T01:35:30.630160200Z" level=info msg="Start cni network conf syncer for default" May 17 01:35:31.131859 env[2382]: time="2025-05-17T01:35:30.630161120Z" level=info msg=serving... 
address=/run/containerd/containerd.sock May 17 01:35:31.131859 env[2382]: time="2025-05-17T01:35:30.630168080Z" level=info msg="Start streaming server" May 17 01:35:31.131859 env[2382]: time="2025-05-17T01:35:30.630201000Z" level=info msg="containerd successfully booted in 0.024576s" May 17 01:35:31.133616 kernel: bond0: (slave enP1p1s0f1np1): link status down for interface, disabling it in 200 ms May 17 01:35:31.133639 kernel: bond0: (slave enP1p1s0f1np1): link status down for interface, disabling it in 200 ms May 17 01:35:30.413758 systemd[1]: extend-filesystems.service: Deactivated successfully. May 17 01:35:30.414183 systemd[1]: Finished extend-filesystems.service. May 17 01:35:30.488010 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. May 17 01:35:30.488537 systemd[1]: Finished ssh-key-proc-cmdline.service. May 17 01:35:30.573128 systemd[1]: Started update-engine.service. May 17 01:35:30.641270 systemd[1]: Started containerd.service. May 17 01:35:30.693324 systemd[1]: Finished update-ssh-keys-after-ignition.service. May 17 01:35:30.838020 systemd[1]: Started systemd-logind.service. May 17 01:35:30.929346 systemd[1]: Started locksmithd.service. May 17 01:35:31.041118 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). May 17 01:35:31.041563 systemd[1]: Reached target system-config.target. May 17 01:35:31.060814 locksmithd[2419]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" May 17 01:35:31.074647 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). May 17 01:35:31.074779 systemd[1]: Reached target user-config.target. May 17 01:35:31.160631 systemd[1]: Finished sshd-keygen.service. 
May 17 01:35:31.171050 kernel: bond0: (slave enP1p1s0f1np1): link status down for interface, disabling it in 200 ms May 17 01:35:31.174048 kernel: bond0: (slave enP1p1s0f1np1): link status down for interface, disabling it in 200 ms May 17 01:35:31.200058 kernel: bond0: (slave enP1p1s0f1np1): link status down for interface, disabling it in 200 ms May 17 01:35:31.200217 systemd[1]: Finished prepare-helm.service. May 17 01:35:31.202051 kernel: bond0: (slave enP1p1s0f1np1): link status down for interface, disabling it in 200 ms May 17 01:35:31.204050 kernel: bond0: (slave enP1p1s0f1np1): link status down for interface, disabling it in 200 ms May 17 01:35:31.243048 kernel: bond0: (slave enP1p1s0f1np1): link status down for interface, disabling it in 200 ms May 17 01:35:31.243970 systemd[1]: Starting issuegen.service... May 17 01:35:31.246048 kernel: bond0: (slave enP1p1s0f1np1): link status down for interface, disabling it in 200 ms May 17 01:35:31.274055 kernel: bond0: (slave enP1p1s0f1np1): link status down for interface, disabling it in 200 ms May 17 01:35:31.274024 systemd[1]: issuegen.service: Deactivated successfully. May 17 01:35:31.274215 systemd[1]: Finished issuegen.service. May 17 01:35:31.277049 kernel: bond0: (slave enP1p1s0f1np1): link status down for interface, disabling it in 200 ms May 17 01:35:31.281237 systemd-networkd[2041]: enP1p1s0f0np0: Link UP May 17 01:35:31.281620 systemd-networkd[2041]: bond0: Gained carrier May 17 01:35:31.281792 systemd-timesyncd[2281]: Network configuration changed, trying to establish connection. May 17 01:35:31.281797 systemd-networkd[2041]: enP1p1s0f0np0: Gained carrier May 17 01:35:31.304051 kernel: bond0: (slave enP1p1s0f1np1): link status down for interface, disabling it in 200 ms May 17 01:35:31.304076 kernel: bond0: (slave enP1p1s0f1np1): invalid new link 1 on slave May 17 01:35:31.316587 systemd[1]: Starting systemd-user-sessions.service... 
May 17 01:35:31.323223 systemd-timesyncd[2281]: Network configuration changed, trying to establish connection. May 17 01:35:31.323271 systemd-timesyncd[2281]: Network configuration changed, trying to establish connection. May 17 01:35:31.323490 systemd[1]: Finished systemd-user-sessions.service. May 17 01:35:31.323555 systemd-networkd[2041]: enP1p1s0f1np1: Link DOWN May 17 01:35:31.323560 systemd-networkd[2041]: enP1p1s0f1np1: Lost carrier May 17 01:35:31.328978 systemd[1]: Started getty@tty1.service. May 17 01:35:31.331226 systemd-timesyncd[2281]: Network configuration changed, trying to establish connection. May 17 01:35:31.331268 systemd-timesyncd[2281]: Network configuration changed, trying to establish connection. May 17 01:35:31.331647 systemd-timesyncd[2281]: Network configuration changed, trying to establish connection. May 17 01:35:31.334205 systemd[1]: Started serial-getty@ttyAMA0.service. May 17 01:35:31.338064 systemd[1]: Reached target getty.target. May 17 01:35:32.148061 kernel: mlx5_core 0001:01:00.1 enP1p1s0f1np1: Link up May 17 01:35:32.148328 kernel: bond0: (slave enP1p1s0f1np1): link status up again after 200 ms May 17 01:35:32.159053 kernel: bond0: (slave enP1p1s0f1np1): speed changed to 0 on port 1 May 17 01:35:32.169050 kernel: bond0: (slave enP1p1s0f1np1): link status up again after 200 ms May 17 01:35:32.170289 systemd-networkd[2041]: enP1p1s0f1np1: Link UP May 17 01:35:32.170486 systemd-timesyncd[2281]: Network configuration changed, trying to establish connection. May 17 01:35:32.170616 systemd-timesyncd[2281]: Network configuration changed, trying to establish connection. May 17 01:35:32.170637 systemd-networkd[2041]: enP1p1s0f1np1: Gained carrier May 17 01:35:32.193049 kernel: bond0: (slave enP1p1s0f1np1): link status definitely up, 25000 Mbps full duplex May 17 01:35:32.203225 systemd-timesyncd[2281]: Network configuration changed, trying to establish connection. 
May 17 01:35:32.203270 systemd-timesyncd[2281]: Network configuration changed, trying to establish connection. May 17 01:35:32.203416 systemd-timesyncd[2281]: Network configuration changed, trying to establish connection. May 17 01:35:32.296105 systemd-networkd[2041]: bond0: Gained IPv6LL May 17 01:35:32.296371 systemd-timesyncd[2281]: Network configuration changed, trying to establish connection. May 17 01:35:32.720898 coreos-metadata[2326]: May 17 01:35:32.720 INFO Fetching https://metadata.packet.net/metadata: Attempt #3 May 17 01:35:32.721538 coreos-metadata[2329]: May 17 01:35:32.721 INFO Fetching https://metadata.packet.net/metadata: Attempt #3 May 17 01:35:33.192358 systemd-timesyncd[2281]: Network configuration changed, trying to establish connection. May 17 01:35:33.192537 systemd-timesyncd[2281]: Network configuration changed, trying to establish connection. May 17 01:35:33.194624 systemd[1]: Finished systemd-networkd-wait-online.service. May 17 01:35:33.198702 systemd[1]: Reached target network-online.target. May 17 01:35:33.204293 systemd[1]: Starting kubelet.service... May 17 01:35:33.892472 systemd[1]: Started kubelet.service. May 17 01:35:33.963054 kernel: mlx5_core 0001:01:00.0: lag map port 1:1 port 2:2 shared_fdb:0 May 17 01:35:34.349571 kubelet[2457]: E0517 01:35:34.349508 2457 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 17 01:35:34.351105 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 17 01:35:34.351312 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
May 17 01:35:36.349181 login[2444]: pam_lastlog(login:session): file /var/log/lastlog is locked/write May 17 01:35:36.350582 login[2445]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) May 17 01:35:36.359143 systemd-logind[2371]: New session 1 of user core. May 17 01:35:36.360742 systemd[1]: Created slice user-500.slice. May 17 01:35:36.361837 systemd[1]: Starting user-runtime-dir@500.service... May 17 01:35:36.370688 systemd[1]: Finished user-runtime-dir@500.service. May 17 01:35:36.372088 systemd[1]: Starting user@500.service... May 17 01:35:36.374899 (systemd)[2486]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) May 17 01:35:36.454174 systemd[2486]: Queued start job for default target default.target. May 17 01:35:36.454357 systemd[2486]: Reached target paths.target. May 17 01:35:36.454372 systemd[2486]: Reached target sockets.target. May 17 01:35:36.454383 systemd[2486]: Reached target timers.target. May 17 01:35:36.454392 systemd[2486]: Reached target basic.target. May 17 01:35:36.454428 systemd[2486]: Reached target default.target. May 17 01:35:36.454449 systemd[2486]: Startup finished in 74ms. May 17 01:35:36.454519 systemd[1]: Started user@500.service. May 17 01:35:36.455798 systemd[1]: Started session-1.scope. May 17 01:35:37.141297 kernel: mlx5_core 0001:01:00.0: modify lag map port 1:2 port 2:2 May 17 01:35:37.141556 kernel: mlx5_core 0001:01:00.0: modify lag map port 1:1 port 2:2 May 17 01:35:37.349503 login[2444]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) May 17 01:35:37.352226 systemd-logind[2371]: New session 2 of user core. May 17 01:35:37.353736 systemd[1]: Started session-2.scope. May 17 01:35:38.148174 systemd[1]: Created slice system-sshd.slice. May 17 01:35:38.149614 systemd[1]: Started sshd@0-86.109.9.158:22-147.75.109.163:48368.service. 
May 17 01:35:38.448512 sshd[2508]: Accepted publickey for core from 147.75.109.163 port 48368 ssh2: RSA SHA256:fZ9RKMPoNtsGsuPDx2ZfBlD6iQ7iXI7TVtnVuOXHzRg May 17 01:35:38.449931 sshd[2508]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 01:35:38.452279 systemd-logind[2371]: New session 3 of user core. May 17 01:35:38.453637 systemd[1]: Started session-3.scope. May 17 01:35:38.576925 coreos-metadata[2329]: May 17 01:35:38.576 INFO Fetch successful May 17 01:35:38.629740 systemd[1]: Finished coreos-metadata.service. May 17 01:35:38.631525 systemd[1]: Started packet-phone-home.service. May 17 01:35:38.637916 curl[2518]: % Total % Received % Xferd Average Speed Time Time Time Current May 17 01:35:38.638097 curl[2518]: Dload Upload Total Spent Left Speed May 17 01:35:38.660074 coreos-metadata[2326]: May 17 01:35:38.660 INFO Fetch successful May 17 01:35:38.695240 systemd[1]: Started sshd@1-86.109.9.158:22-147.75.109.163:55734.service. May 17 01:35:38.713273 unknown[2326]: wrote ssh authorized keys file for user: core May 17 01:35:38.727996 update-ssh-keys[2522]: Updated "/home/core/.ssh/authorized_keys" May 17 01:35:38.728498 systemd[1]: Finished coreos-metadata-sshkeys@core.service. May 17 01:35:38.728754 systemd[1]: Reached target multi-user.target. May 17 01:35:38.730168 systemd[1]: Starting systemd-update-utmp-runlevel.service... May 17 01:35:38.736157 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. May 17 01:35:38.736342 systemd[1]: Finished systemd-update-utmp-runlevel.service. May 17 01:35:38.737097 systemd[1]: Startup finished in 23.864s (kernel) + 15.887s (userspace) = 39.752s. May 17 01:35:38.944328 curl[2518]: \u000d 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0\u000d 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 May 17 01:35:38.945229 systemd[1]: packet-phone-home.service: Deactivated successfully. 
May 17 01:35:38.963603 sshd[2519]: Accepted publickey for core from 147.75.109.163 port 55734 ssh2: RSA SHA256:fZ9RKMPoNtsGsuPDx2ZfBlD6iQ7iXI7TVtnVuOXHzRg May 17 01:35:38.964684 sshd[2519]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 01:35:38.967005 systemd-logind[2371]: New session 4 of user core. May 17 01:35:38.967745 systemd[1]: Started session-4.scope. May 17 01:35:39.164515 sshd[2519]: pam_unix(sshd:session): session closed for user core May 17 01:35:39.166735 systemd[1]: sshd@1-86.109.9.158:22-147.75.109.163:55734.service: Deactivated successfully. May 17 01:35:39.167439 systemd-logind[2371]: Session 4 logged out. Waiting for processes to exit. May 17 01:35:39.167500 systemd[1]: session-4.scope: Deactivated successfully. May 17 01:35:39.167944 systemd-logind[2371]: Removed session 4. May 17 01:35:39.204262 systemd[1]: Started sshd@2-86.109.9.158:22-147.75.109.163:55750.service. May 17 01:35:39.480110 sshd[2535]: Accepted publickey for core from 147.75.109.163 port 55750 ssh2: RSA SHA256:fZ9RKMPoNtsGsuPDx2ZfBlD6iQ7iXI7TVtnVuOXHzRg May 17 01:35:39.481093 sshd[2535]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 01:35:39.483327 systemd-logind[2371]: New session 5 of user core. May 17 01:35:39.484063 systemd[1]: Started session-5.scope. May 17 01:35:39.681927 sshd[2535]: pam_unix(sshd:session): session closed for user core May 17 01:35:39.683727 systemd[1]: sshd@2-86.109.9.158:22-147.75.109.163:55750.service: Deactivated successfully. May 17 01:35:39.684348 systemd-logind[2371]: Session 5 logged out. Waiting for processes to exit. May 17 01:35:39.684406 systemd[1]: session-5.scope: Deactivated successfully. May 17 01:35:39.684825 systemd-logind[2371]: Removed session 5. May 17 01:35:39.720993 systemd[1]: Started sshd@3-86.109.9.158:22-147.75.109.163:55758.service. 
May 17 01:35:39.990945 sshd[2542]: Accepted publickey for core from 147.75.109.163 port 55758 ssh2: RSA SHA256:fZ9RKMPoNtsGsuPDx2ZfBlD6iQ7iXI7TVtnVuOXHzRg May 17 01:35:39.991911 sshd[2542]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 01:35:39.994161 systemd-logind[2371]: New session 6 of user core. May 17 01:35:39.994891 systemd[1]: Started session-6.scope. May 17 01:35:40.195555 sshd[2542]: pam_unix(sshd:session): session closed for user core May 17 01:35:40.197507 systemd[1]: sshd@3-86.109.9.158:22-147.75.109.163:55758.service: Deactivated successfully. May 17 01:35:40.198161 systemd-logind[2371]: Session 6 logged out. Waiting for processes to exit. May 17 01:35:40.198223 systemd[1]: session-6.scope: Deactivated successfully. May 17 01:35:40.198673 systemd-logind[2371]: Removed session 6. May 17 01:35:40.242500 systemd[1]: Started sshd@4-86.109.9.158:22-147.75.109.163:55768.service. May 17 01:35:40.541679 sshd[2550]: Accepted publickey for core from 147.75.109.163 port 55768 ssh2: RSA SHA256:fZ9RKMPoNtsGsuPDx2ZfBlD6iQ7iXI7TVtnVuOXHzRg May 17 01:35:40.542623 sshd[2550]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 01:35:40.544832 systemd-logind[2371]: New session 7 of user core. May 17 01:35:40.545571 systemd[1]: Started session-7.scope. May 17 01:35:40.723778 sudo[2554]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 May 17 01:35:40.723982 sudo[2554]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) May 17 01:35:40.739223 dbus-daemon[2332]: avc: received setenforce notice (enforcing=1) May 17 01:35:40.740976 sudo[2554]: pam_unix(sudo:session): session closed for user root May 17 01:35:40.785962 sshd[2550]: pam_unix(sshd:session): session closed for user core May 17 01:35:40.788216 systemd[1]: sshd@4-86.109.9.158:22-147.75.109.163:55768.service: Deactivated successfully. May 17 01:35:40.788883 systemd-logind[2371]: Session 7 logged out. 
Waiting for processes to exit. May 17 01:35:40.788945 systemd[1]: session-7.scope: Deactivated successfully. May 17 01:35:40.789473 systemd-logind[2371]: Removed session 7. May 17 01:35:40.820953 systemd[1]: Started sshd@5-86.109.9.158:22-147.75.109.163:55772.service. May 17 01:35:41.093932 sshd[2559]: Accepted publickey for core from 147.75.109.163 port 55772 ssh2: RSA SHA256:fZ9RKMPoNtsGsuPDx2ZfBlD6iQ7iXI7TVtnVuOXHzRg May 17 01:35:41.094918 sshd[2559]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 01:35:41.097146 systemd-logind[2371]: New session 8 of user core. May 17 01:35:41.097943 systemd[1]: Started session-8.scope. May 17 01:35:41.258362 sudo[2564]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules May 17 01:35:41.258569 sudo[2564]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) May 17 01:35:41.260631 sudo[2564]: pam_unix(sudo:session): session closed for user root May 17 01:35:41.264666 sudo[2563]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules May 17 01:35:41.264860 sudo[2563]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) May 17 01:35:41.271880 systemd[1]: Stopping audit-rules.service... May 17 01:35:41.273000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 May 17 01:35:41.273227 auditctl[2567]: No rules May 17 01:35:41.278576 kernel: kauditd_printk_skb: 358 callbacks suppressed May 17 01:35:41.278604 kernel: audit: type=1305 audit(1747445741.273:140): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 May 17 01:35:41.279052 systemd[1]: audit-rules.service: Deactivated successfully. May 17 01:35:41.279251 systemd[1]: Stopped audit-rules.service. May 17 01:35:41.280839 systemd[1]: Starting audit-rules.service... 
May 17 01:35:41.273000 audit[2567]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffdc7058d0 a2=420 a3=0 items=0 ppid=1 pid=2567 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 01:35:41.297065 augenrules[2585]: No rules May 17 01:35:41.297652 systemd[1]: Finished audit-rules.service. May 17 01:35:41.298341 sudo[2563]: pam_unix(sudo:session): session closed for user root May 17 01:35:41.324169 kernel: audit: type=1300 audit(1747445741.273:140): arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffdc7058d0 a2=420 a3=0 items=0 ppid=1 pid=2567 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 01:35:41.324212 kernel: audit: type=1327 audit(1747445741.273:140): proctitle=2F7362696E2F617564697463746C002D44 May 17 01:35:41.273000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D44 May 17 01:35:41.279000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 01:35:41.337337 sshd[2559]: pam_unix(sshd:session): session closed for user core May 17 01:35:41.339208 systemd-logind[2371]: Session 8 logged out. Waiting for processes to exit. May 17 01:35:41.339402 systemd[1]: sshd@5-86.109.9.158:22-147.75.109.163:55772.service: Deactivated successfully. May 17 01:35:41.340028 systemd[1]: session-8.scope: Deactivated successfully. May 17 01:35:41.340369 systemd-logind[2371]: Removed session 8. 
May 17 01:35:41.355379 kernel: audit: type=1131 audit(1747445741.279:141): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 01:35:41.355408 kernel: audit: type=1130 audit(1747445741.297:142): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 01:35:41.297000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 01:35:41.298000 audit[2563]: USER_END pid=2563 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' May 17 01:35:41.382545 systemd[1]: Started sshd@6-86.109.9.158:22-147.75.109.163:55778.service. May 17 01:35:41.407358 kernel: audit: type=1106 audit(1747445741.298:143): pid=2563 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' May 17 01:35:41.407387 kernel: audit: type=1104 audit(1747445741.298:144): pid=2563 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' May 17 01:35:41.298000 audit[2563]: CRED_DISP pid=2563 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' May 17 01:35:41.337000 audit[2559]: USER_END pid=2559 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' May 17 01:35:41.495557 kernel: audit: type=1106 audit(1747445741.337:145): pid=2559 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' May 17 01:35:41.495580 kernel: audit: type=1104 audit(1747445741.337:146): pid=2559 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' May 17 01:35:41.337000 audit[2559]: CRED_DISP pid=2559 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' May 17 01:35:41.537088 kernel: audit: type=1131 audit(1747445741.339:147): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-86.109.9.158:22-147.75.109.163:55772 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 01:35:41.339000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-86.109.9.158:22-147.75.109.163:55772 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 17 01:35:41.382000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-86.109.9.158:22-147.75.109.163:55778 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 01:35:41.651000 audit[2592]: USER_ACCT pid=2592 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' May 17 01:35:41.652077 sshd[2592]: Accepted publickey for core from 147.75.109.163 port 55778 ssh2: RSA SHA256:fZ9RKMPoNtsGsuPDx2ZfBlD6iQ7iXI7TVtnVuOXHzRg May 17 01:35:41.652000 audit[2592]: CRED_ACQ pid=2592 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' May 17 01:35:41.652000 audit[2592]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffda11fa70 a2=3 a3=1 items=0 ppid=1 pid=2592 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=9 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 01:35:41.652000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D May 17 01:35:41.653043 sshd[2592]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 01:35:41.655361 systemd-logind[2371]: New session 9 of user core. May 17 01:35:41.656131 systemd[1]: Started session-9.scope. 
May 17 01:35:41.658000 audit[2592]: USER_START pid=2592 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' May 17 01:35:41.659000 audit[2595]: CRED_ACQ pid=2595 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' May 17 01:35:41.815000 audit[2596]: USER_ACCT pid=2596 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' May 17 01:35:41.815377 sudo[2596]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh May 17 01:35:41.815000 audit[2596]: CRED_REFR pid=2596 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' May 17 01:35:41.815591 sudo[2596]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) May 17 01:35:41.816000 audit[2596]: USER_START pid=2596 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' May 17 01:35:41.852809 systemd[1]: Starting docker.service... 
May 17 01:35:41.904145 env[2612]: time="2025-05-17T01:35:41.904070520Z" level=info msg="Starting up" May 17 01:35:41.905617 env[2612]: time="2025-05-17T01:35:41.905596240Z" level=info msg="parsed scheme: \"unix\"" module=grpc May 17 01:35:41.905617 env[2612]: time="2025-05-17T01:35:41.905616480Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc May 17 01:35:41.905684 env[2612]: time="2025-05-17T01:35:41.905634920Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc May 17 01:35:41.905684 env[2612]: time="2025-05-17T01:35:41.905645320Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc May 17 01:35:41.908505 env[2612]: time="2025-05-17T01:35:41.908487160Z" level=info msg="parsed scheme: \"unix\"" module=grpc May 17 01:35:41.908573 env[2612]: time="2025-05-17T01:35:41.908504520Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc May 17 01:35:41.908573 env[2612]: time="2025-05-17T01:35:41.908517680Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc May 17 01:35:41.908573 env[2612]: time="2025-05-17T01:35:41.908528400Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc May 17 01:35:42.040076 env[2612]: time="2025-05-17T01:35:42.040053800Z" level=warning msg="Your kernel does not support cgroup blkio weight" May 17 01:35:42.040076 env[2612]: time="2025-05-17T01:35:42.040071720Z" level=warning msg="Your kernel does not support cgroup blkio weight_device" May 17 01:35:42.040193 env[2612]: time="2025-05-17T01:35:42.040183640Z" level=info msg="Loading containers: start." 
May 17 01:35:42.084000 audit[2664]: NETFILTER_CFG table=nat:2 family=2 entries=2 op=nft_register_chain pid=2664 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 17 01:35:42.084000 audit[2664]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=116 a0=3 a1=ffffd8d94420 a2=0 a3=1 items=0 ppid=2612 pid=2664 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 01:35:42.084000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 May 17 01:35:42.085000 audit[2666]: NETFILTER_CFG table=filter:3 family=2 entries=2 op=nft_register_chain pid=2666 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 17 01:35:42.085000 audit[2666]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=124 a0=3 a1=ffffe6736790 a2=0 a3=1 items=0 ppid=2612 pid=2666 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 01:35:42.085000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 May 17 01:35:42.087000 audit[2668]: NETFILTER_CFG table=filter:4 family=2 entries=1 op=nft_register_chain pid=2668 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 17 01:35:42.087000 audit[2668]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=112 a0=3 a1=ffffe67bf690 a2=0 a3=1 items=0 ppid=2612 pid=2668 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 01:35:42.087000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 May 17 01:35:42.089000 
audit[2670]: NETFILTER_CFG table=filter:5 family=2 entries=1 op=nft_register_chain pid=2670 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 17 01:35:42.089000 audit[2670]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=112 a0=3 a1=fffff15f6420 a2=0 a3=1 items=0 ppid=2612 pid=2670 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 01:35:42.089000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 May 17 01:35:42.091000 audit[2672]: NETFILTER_CFG table=filter:6 family=2 entries=1 op=nft_register_rule pid=2672 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 17 01:35:42.091000 audit[2672]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=228 a0=3 a1=ffffe4ac8fb0 a2=0 a3=1 items=0 ppid=2612 pid=2672 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 01:35:42.091000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D49534F4C4154494F4E2D53544147452D31002D6A0052455455524E May 17 01:35:42.128000 audit[2677]: NETFILTER_CFG table=filter:7 family=2 entries=1 op=nft_register_rule pid=2677 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 17 01:35:42.128000 audit[2677]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=228 a0=3 a1=ffffe21d5150 a2=0 a3=1 items=0 ppid=2612 pid=2677 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 01:35:42.128000 audit: PROCTITLE 
proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D49534F4C4154494F4E2D53544147452D32002D6A0052455455524E May 17 01:35:42.131000 audit[2679]: NETFILTER_CFG table=filter:8 family=2 entries=1 op=nft_register_chain pid=2679 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 17 01:35:42.131000 audit[2679]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffe19d05f0 a2=0 a3=1 items=0 ppid=2612 pid=2679 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 01:35:42.131000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 May 17 01:35:42.133000 audit[2681]: NETFILTER_CFG table=filter:9 family=2 entries=1 op=nft_register_rule pid=2681 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 17 01:35:42.133000 audit[2681]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=212 a0=3 a1=ffffdd419e90 a2=0 a3=1 items=0 ppid=2612 pid=2681 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 01:35:42.133000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E May 17 01:35:42.135000 audit[2683]: NETFILTER_CFG table=filter:10 family=2 entries=2 op=nft_register_chain pid=2683 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 17 01:35:42.135000 audit[2683]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=308 a0=3 a1=ffffea54b020 a2=0 a3=1 items=0 ppid=2612 pid=2683 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 01:35:42.135000 audit: 
PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 May 17 01:35:42.139000 audit[2687]: NETFILTER_CFG table=filter:11 family=2 entries=1 op=nft_unregister_rule pid=2687 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 17 01:35:42.139000 audit[2687]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=216 a0=3 a1=fffff2f70aa0 a2=0 a3=1 items=0 ppid=2612 pid=2687 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 01:35:42.139000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4400464F5257415244002D6A00444F434B45522D55534552 May 17 01:35:42.147000 audit[2688]: NETFILTER_CFG table=filter:12 family=2 entries=1 op=nft_register_rule pid=2688 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 17 01:35:42.147000 audit[2688]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=224 a0=3 a1=ffffebcf7fa0 a2=0 a3=1 items=0 ppid=2612 pid=2688 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 01:35:42.147000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 May 17 01:35:42.166052 kernel: Initializing XFRM netlink socket May 17 01:35:42.193776 env[2612]: time="2025-05-17T01:35:42.193737560Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address" May 17 01:35:42.194649 systemd-timesyncd[2281]: Network configuration changed, trying to establish connection. 
May 17 01:35:42.207000 audit[2696]: NETFILTER_CFG table=nat:13 family=2 entries=2 op=nft_register_chain pid=2696 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 17 01:35:42.207000 audit[2696]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=492 a0=3 a1=fffff25d3330 a2=0 a3=1 items=0 ppid=2612 pid=2696 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 01:35:42.207000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4900504F5354524F5554494E47002D73003137322E31372E302E302F31360000002D6F00646F636B657230002D6A004D415351554552414445 May 17 01:35:42.218000 audit[2699]: NETFILTER_CFG table=nat:14 family=2 entries=1 op=nft_register_rule pid=2699 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 17 01:35:42.218000 audit[2699]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=288 a0=3 a1=fffff1cd8c40 a2=0 a3=1 items=0 ppid=2612 pid=2699 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 01:35:42.218000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4900444F434B4552002D6900646F636B657230002D6A0052455455524E May 17 01:35:42.221000 audit[2702]: NETFILTER_CFG table=filter:15 family=2 entries=1 op=nft_register_rule pid=2702 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 17 01:35:42.221000 audit[2702]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=376 a0=3 a1=fffffc322690 a2=0 a3=1 items=0 ppid=2612 pid=2702 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 01:35:42.221000 audit: PROCTITLE 
proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6900646F636B657230002D6F00646F636B657230002D6A00414343455054 May 17 01:35:42.223000 audit[2704]: NETFILTER_CFG table=filter:16 family=2 entries=1 op=nft_register_rule pid=2704 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 17 01:35:42.223000 audit[2704]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=376 a0=3 a1=ffffc38ef180 a2=0 a3=1 items=0 ppid=2612 pid=2704 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 01:35:42.223000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6900646F636B6572300000002D6F00646F636B657230002D6A00414343455054 May 17 01:35:42.224000 audit[2706]: NETFILTER_CFG table=nat:17 family=2 entries=2 op=nft_register_chain pid=2706 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 17 01:35:42.224000 audit[2706]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=356 a0=3 a1=ffffe1922760 a2=0 a3=1 items=0 ppid=2612 pid=2706 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 01:35:42.224000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 May 17 01:35:42.226000 audit[2708]: NETFILTER_CFG table=nat:18 family=2 entries=2 op=nft_register_chain pid=2708 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 17 01:35:42.226000 audit[2708]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=444 a0=3 a1=ffffc1bcf1e0 a2=0 a3=1 items=0 ppid=2612 pid=2708 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 01:35:42.226000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003132372E302E302E302F38 May 17 01:35:42.228000 audit[2710]: NETFILTER_CFG table=filter:19 family=2 entries=1 op=nft_register_rule pid=2710 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 17 01:35:42.228000 audit[2710]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=304 a0=3 a1=ffffd9cf9750 a2=0 a3=1 items=0 ppid=2612 pid=2710 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 01:35:42.228000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6F00646F636B657230002D6A00444F434B4552 May 17 01:35:42.235000 audit[2713]: NETFILTER_CFG table=filter:20 family=2 entries=1 op=nft_register_rule pid=2713 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 17 01:35:42.235000 audit[2713]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=508 a0=3 a1=fffffe78fd50 a2=0 a3=1 items=0 ppid=2612 pid=2713 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 01:35:42.235000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6F00646F636B657230002D6D00636F6E6E747261636B002D2D637473746174650052454C415445442C45535441424C4953484544002D6A00414343455054 May 17 01:35:42.237000 audit[2715]: NETFILTER_CFG table=filter:21 family=2 entries=1 op=nft_register_rule pid=2715 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 17 01:35:42.237000 audit[2715]: SYSCALL 
arch=c00000b7 syscall=211 success=yes exit=240 a0=3 a1=ffffcb55fe30 a2=0 a3=1 items=0 ppid=2612 pid=2715 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 01:35:42.237000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 May 17 01:35:42.239000 audit[2717]: NETFILTER_CFG table=filter:22 family=2 entries=1 op=nft_register_rule pid=2717 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 17 01:35:42.239000 audit[2717]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=428 a0=3 a1=ffffcd3e17b0 a2=0 a3=1 items=0 ppid=2612 pid=2717 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 01:35:42.239000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D31002D6900646F636B6572300000002D6F00646F636B657230002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D32 May 17 01:35:42.240000 audit[2719]: NETFILTER_CFG table=filter:23 family=2 entries=1 op=nft_register_rule pid=2719 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 17 01:35:42.240000 audit[2719]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=312 a0=3 a1=ffffc4eb72c0 a2=0 a3=1 items=0 ppid=2612 pid=2719 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 01:35:42.240000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D32002D6F00646F636B657230002D6A0044524F50 May 17 
01:35:42.241178 systemd-networkd[2041]: docker0: Link UP May 17 01:35:42.245000 audit[2723]: NETFILTER_CFG table=filter:24 family=2 entries=1 op=nft_unregister_rule pid=2723 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 17 01:35:42.245000 audit[2723]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=228 a0=3 a1=fffff84a3110 a2=0 a3=1 items=0 ppid=2612 pid=2723 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 01:35:42.245000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4400464F5257415244002D6A00444F434B45522D55534552 May 17 01:35:42.258000 audit[2724]: NETFILTER_CFG table=filter:25 family=2 entries=1 op=nft_register_rule pid=2724 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 17 01:35:42.258000 audit[2724]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=224 a0=3 a1=ffffde1200d0 a2=0 a3=1 items=0 ppid=2612 pid=2724 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 01:35:42.258000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 May 17 01:35:42.259245 env[2612]: time="2025-05-17T01:35:42.259224680Z" level=info msg="Loading containers: done." 
May 17 01:35:42.272070 env[2612]: time="2025-05-17T01:35:42.272040640Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 May 17 01:35:42.272193 env[2612]: time="2025-05-17T01:35:42.272181800Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23 May 17 01:35:42.272290 env[2612]: time="2025-05-17T01:35:42.272278800Z" level=info msg="Daemon has completed initialization" May 17 01:35:42.273039 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1064740700-merged.mount: Deactivated successfully. May 17 01:35:42.279927 systemd[1]: Started docker.service. May 17 01:35:42.279000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=docker comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 01:35:42.285110 env[2612]: time="2025-05-17T01:35:42.285075840Z" level=info msg="API listen on /run/docker.sock" May 17 01:35:43.124437 systemd-resolved[2277]: Clock change detected. Flushing caches. May 17 01:35:43.124604 systemd-timesyncd[2281]: Contacted time server [2606:4700:f1::1]:123 (2.flatcar.pool.ntp.org). May 17 01:35:43.124650 systemd-timesyncd[2281]: Initial clock synchronization to Sat 2025-05-17 01:35:43.124397 UTC. May 17 01:35:43.705547 env[2382]: time="2025-05-17T01:35:43.705514960Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.9\"" May 17 01:35:44.527120 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1204444852.mount: Deactivated successfully. May 17 01:35:45.272533 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. May 17 01:35:45.272776 systemd[1]: Stopped kubelet.service. 
May 17 01:35:45.271000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 01:35:45.271000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 01:35:45.274884 systemd[1]: Starting kubelet.service... May 17 01:35:45.367782 systemd[1]: Started kubelet.service. May 17 01:35:45.367000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 01:35:45.400405 kubelet[2815]: E0517 01:35:45.400373 2815 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 17 01:35:45.402624 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 17 01:35:45.402800 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 17 01:35:45.402000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=failed' May 17 01:35:46.323937 env[2382]: time="2025-05-17T01:35:46.323900360Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.31.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 01:35:46.324844 env[2382]: time="2025-05-17T01:35:46.324823480Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:90d52158b7646075e7e560c1bd670904ba3f4f4c8c199106bf96ee0944663d61,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 01:35:46.326422 env[2382]: time="2025-05-17T01:35:46.326403520Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.31.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 01:35:46.328348 env[2382]: time="2025-05-17T01:35:46.328312960Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:5b68f0df22013422dc8fb9ddfcff513eb6fc92f9dbf8aae41555c895efef5a20,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 01:35:46.329091 env[2382]: time="2025-05-17T01:35:46.329070920Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.9\" returns image reference \"sha256:90d52158b7646075e7e560c1bd670904ba3f4f4c8c199106bf96ee0944663d61\"" May 17 01:35:46.331411 env[2382]: time="2025-05-17T01:35:46.331381440Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.9\"" May 17 01:35:47.748911 env[2382]: time="2025-05-17T01:35:47.748871960Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.31.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 01:35:47.749747 env[2382]: time="2025-05-17T01:35:47.749728960Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:2d03fe540daca1d9520c403342787715eab3b05fb6773ea41153572716c82dba,Labels:map[string]string{io.cri-containerd.image: 
managed,},XXX_unrecognized:[],}" May 17 01:35:47.751419 env[2382]: time="2025-05-17T01:35:47.751393080Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.31.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 01:35:47.752996 env[2382]: time="2025-05-17T01:35:47.752971600Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:be9e7987d323b38a12e28436cff6d6ec6fc31ffdd3ea11eaa9d74852e9d31248,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 01:35:47.753851 env[2382]: time="2025-05-17T01:35:47.753822040Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.9\" returns image reference \"sha256:2d03fe540daca1d9520c403342787715eab3b05fb6773ea41153572716c82dba\"" May 17 01:35:47.754203 env[2382]: time="2025-05-17T01:35:47.754179880Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.9\"" May 17 01:35:49.072290 env[2382]: time="2025-05-17T01:35:49.072251560Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.31.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 01:35:49.073260 env[2382]: time="2025-05-17T01:35:49.073233240Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:b333fec06af219faaf48f1784baa0b7274945b2e5be5bd2fca2681f7d1baff5f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 01:35:49.074972 env[2382]: time="2025-05-17T01:35:49.074951720Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.31.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 01:35:49.076480 env[2382]: time="2025-05-17T01:35:49.076458840Z" level=info msg="ImageCreate event 
&ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:eb358c7346bb17ab2c639c3ff8ab76a147dec7ae609f5c0c2800233e42253ed1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 01:35:49.077207 env[2382]: time="2025-05-17T01:35:49.077182720Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.9\" returns image reference \"sha256:b333fec06af219faaf48f1784baa0b7274945b2e5be5bd2fca2681f7d1baff5f\"" May 17 01:35:49.077578 env[2382]: time="2025-05-17T01:35:49.077559600Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.9\"" May 17 01:35:50.107366 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1470800874.mount: Deactivated successfully. May 17 01:35:50.526414 env[2382]: time="2025-05-17T01:35:50.526336160Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.31.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 01:35:50.527092 env[2382]: time="2025-05-17T01:35:50.527067440Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:cbfba5e6542fe387b24d9e73bf5a054a6b07b95af1392268fd82b6f449ef1c27,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 01:35:50.529591 env[2382]: time="2025-05-17T01:35:50.529566760Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.31.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 01:35:50.530599 env[2382]: time="2025-05-17T01:35:50.530575400Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:fdf026cf2434537e499e9c739d189ca8fc57101d929ac5ccd8e24f979a9738c1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 01:35:50.531122 env[2382]: time="2025-05-17T01:35:50.531091400Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.9\" returns image reference 
\"sha256:cbfba5e6542fe387b24d9e73bf5a054a6b07b95af1392268fd82b6f449ef1c27\"" May 17 01:35:50.531532 env[2382]: time="2025-05-17T01:35:50.531512080Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" May 17 01:35:51.101066 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2004974524.mount: Deactivated successfully. May 17 01:35:52.079206 env[2382]: time="2025-05-17T01:35:52.079163720Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.11.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 01:35:52.080235 env[2382]: time="2025-05-17T01:35:52.080207960Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 01:35:52.082130 env[2382]: time="2025-05-17T01:35:52.082107040Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.11.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 01:35:52.083910 env[2382]: time="2025-05-17T01:35:52.083884760Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 01:35:52.084777 env[2382]: time="2025-05-17T01:35:52.084754400Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\"" May 17 01:35:52.085558 env[2382]: time="2025-05-17T01:35:52.085541800Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" May 17 01:35:52.523616 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount665979692.mount: Deactivated successfully. 
May 17 01:35:52.523918 env[2382]: time="2025-05-17T01:35:52.523783080Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 01:35:52.524394 env[2382]: time="2025-05-17T01:35:52.524376720Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 01:35:52.525527 env[2382]: time="2025-05-17T01:35:52.525508000Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 01:35:52.526608 env[2382]: time="2025-05-17T01:35:52.526593760Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 01:35:52.527147 env[2382]: time="2025-05-17T01:35:52.527121800Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" May 17 01:35:52.527441 env[2382]: time="2025-05-17T01:35:52.527424280Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" May 17 01:35:53.061045 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2922773677.mount: Deactivated successfully. 
May 17 01:35:55.363279 env[2382]: time="2025-05-17T01:35:55.363233600Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.15-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 01:35:55.365336 env[2382]: time="2025-05-17T01:35:55.365294360Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 01:35:55.368875 env[2382]: time="2025-05-17T01:35:55.368836600Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.15-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 01:35:55.370885 env[2382]: time="2025-05-17T01:35:55.370849280Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 01:35:55.371934 env[2382]: time="2025-05-17T01:35:55.371900440Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\"" May 17 01:35:55.624536 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. May 17 01:35:55.623000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 01:35:55.624722 systemd[1]: Stopped kubelet.service. May 17 01:35:55.627939 systemd[1]: Starting kubelet.service... 
May 17 01:35:55.637804 kernel: kauditd_printk_skb: 88 callbacks suppressed May 17 01:35:55.637931 kernel: audit: type=1130 audit(1747445755.623:186): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 01:35:55.623000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 01:35:55.720867 kernel: audit: type=1131 audit(1747445755.623:187): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 01:35:55.721737 systemd[1]: Started kubelet.service. May 17 01:35:55.721000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 01:35:55.760691 kubelet[2939]: E0517 01:35:55.760664 2939 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 17 01:35:55.762751 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 17 01:35:55.762878 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 17 01:35:55.762000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=failed' May 17 01:35:55.809410 kernel: audit: type=1130 audit(1747445755.721:188): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 01:35:55.809441 kernel: audit: type=1131 audit(1747445755.762:189): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' May 17 01:35:59.637899 systemd[1]: Stopped kubelet.service. May 17 01:35:59.637000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 01:35:59.641737 systemd[1]: Starting kubelet.service... May 17 01:35:59.667997 systemd[1]: Reloading. May 17 01:35:59.637000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 01:35:59.712824 /usr/lib/systemd/system-generators/torcx-generator[2999]: time="2025-05-17T01:35:59Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]" May 17 01:35:59.712848 /usr/lib/systemd/system-generators/torcx-generator[2999]: time="2025-05-17T01:35:59Z" level=info msg="torcx already run" May 17 01:35:59.719518 kernel: audit: type=1130 audit(1747445759.637:190): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 17 01:35:59.719550 kernel: audit: type=1131 audit(1747445759.637:191): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 01:35:59.797295 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. May 17 01:35:59.797306 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. May 17 01:35:59.813521 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 17 01:35:59.882699 systemd[1]: Started kubelet.service. May 17 01:35:59.881000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 01:35:59.886383 systemd[1]: Stopping kubelet.service... May 17 01:35:59.886696 systemd[1]: kubelet.service: Deactivated successfully. May 17 01:35:59.886956 systemd[1]: Stopped kubelet.service. May 17 01:35:59.890562 systemd[1]: Starting kubelet.service... May 17 01:35:59.886000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 01:35:59.963551 kernel: audit: type=1130 audit(1747445759.881:192): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 17 01:35:59.963591 kernel: audit: type=1131 audit(1747445759.886:193): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 01:35:59.984181 systemd[1]: Started kubelet.service. May 17 01:35:59.983000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 01:36:00.020612 kubelet[3075]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 17 01:36:00.020612 kubelet[3075]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. May 17 01:36:00.020612 kubelet[3075]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 17 01:36:00.020864 kubelet[3075]: I0517 01:36:00.020666 3075 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 17 01:36:00.025862 kernel: audit: type=1130 audit(1747445759.983:194): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 17 01:36:00.830622 kubelet[3075]: I0517 01:36:00.830593 3075 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" May 17 01:36:00.830622 kubelet[3075]: I0517 01:36:00.830617 3075 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 17 01:36:00.830829 kubelet[3075]: I0517 01:36:00.830819 3075 server.go:934] "Client rotation is on, will bootstrap in background" May 17 01:36:00.847299 kubelet[3075]: E0517 01:36:00.847273 3075 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://86.109.9.158:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 86.109.9.158:6443: connect: connection refused" logger="UnhandledError" May 17 01:36:00.848041 kubelet[3075]: I0517 01:36:00.848023 3075 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 17 01:36:00.858007 kubelet[3075]: E0517 01:36:00.857986 3075 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" May 17 01:36:00.858036 kubelet[3075]: I0517 01:36:00.858011 3075 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." May 17 01:36:00.881187 kubelet[3075]: I0517 01:36:00.881166 3075 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 17 01:36:00.882287 kubelet[3075]: I0517 01:36:00.882271 3075 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" May 17 01:36:00.882418 kubelet[3075]: I0517 01:36:00.882395 3075 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 17 01:36:00.882579 kubelet[3075]: I0517 01:36:00.882421 3075 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-3510.3.7-n-8226148b53","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","Topolo
gyManagerPolicyOptions":null,"CgroupVersion":1} May 17 01:36:00.882661 kubelet[3075]: I0517 01:36:00.882587 3075 topology_manager.go:138] "Creating topology manager with none policy" May 17 01:36:00.882661 kubelet[3075]: I0517 01:36:00.882596 3075 container_manager_linux.go:300] "Creating device plugin manager" May 17 01:36:00.882830 kubelet[3075]: I0517 01:36:00.882820 3075 state_mem.go:36] "Initialized new in-memory state store" May 17 01:36:00.889339 kubelet[3075]: I0517 01:36:00.889324 3075 kubelet.go:408] "Attempting to sync node with API server" May 17 01:36:00.889366 kubelet[3075]: I0517 01:36:00.889349 3075 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" May 17 01:36:00.889393 kubelet[3075]: I0517 01:36:00.889372 3075 kubelet.go:314] "Adding apiserver pod source" May 17 01:36:00.889447 kubelet[3075]: I0517 01:36:00.889441 3075 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 17 01:36:00.901107 kubelet[3075]: W0517 01:36:00.901073 3075 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://86.109.9.158:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.7-n-8226148b53&limit=500&resourceVersion=0": dial tcp 86.109.9.158:6443: connect: connection refused May 17 01:36:00.901136 kubelet[3075]: E0517 01:36:00.901122 3075 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://86.109.9.158:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.7-n-8226148b53&limit=500&resourceVersion=0\": dial tcp 86.109.9.158:6443: connect: connection refused" logger="UnhandledError" May 17 01:36:00.918459 kubelet[3075]: W0517 01:36:00.918415 3075 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://86.109.9.158:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 86.109.9.158:6443: connect: 
connection refused May 17 01:36:00.918508 kubelet[3075]: E0517 01:36:00.918463 3075 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://86.109.9.158:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 86.109.9.158:6443: connect: connection refused" logger="UnhandledError" May 17 01:36:00.918917 kubelet[3075]: I0517 01:36:00.918896 3075 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" May 17 01:36:00.919569 kubelet[3075]: I0517 01:36:00.919558 3075 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 17 01:36:00.919673 kubelet[3075]: W0517 01:36:00.919665 3075 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. May 17 01:36:00.920654 kubelet[3075]: I0517 01:36:00.920642 3075 server.go:1274] "Started kubelet" May 17 01:36:00.920765 kubelet[3075]: I0517 01:36:00.920708 3075 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 17 01:36:00.920948 kubelet[3075]: I0517 01:36:00.920909 3075 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 May 17 01:36:00.920990 kubelet[3075]: I0517 01:36:00.920979 3075 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 17 01:36:00.921000 audit[3075]: AVC avc: denied { mac_admin } for pid=3075 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 01:36:00.922069 kubelet[3075]: I0517 01:36:00.921993 3075 kubelet.go:1430] "Unprivileged containerized plugins might not work, could not set selinux context on plugin registration dir" path="/var/lib/kubelet/plugins_registry" 
err="setxattr /var/lib/kubelet/plugins_registry: invalid argument" May 17 01:36:00.922069 kubelet[3075]: I0517 01:36:00.922031 3075 kubelet.go:1434] "Unprivileged containerized plugins might not work, could not set selinux context on plugins dir" path="/var/lib/kubelet/plugins" err="setxattr /var/lib/kubelet/plugins: invalid argument" May 17 01:36:00.922129 kubelet[3075]: I0517 01:36:00.922091 3075 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 17 01:36:00.922129 kubelet[3075]: I0517 01:36:00.922107 3075 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 17 01:36:00.922268 kubelet[3075]: I0517 01:36:00.922143 3075 volume_manager.go:289] "Starting Kubelet Volume Manager" May 17 01:36:00.922268 kubelet[3075]: I0517 01:36:00.922195 3075 desired_state_of_world_populator.go:147] "Desired state populator starts to run" May 17 01:36:00.922268 kubelet[3075]: I0517 01:36:00.922220 3075 reconciler.go:26] "Reconciler: start to sync state" May 17 01:36:00.922866 kubelet[3075]: E0517 01:36:00.922746 3075 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-3510.3.7-n-8226148b53\" not found" May 17 01:36:00.923257 kubelet[3075]: I0517 01:36:00.923240 3075 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 17 01:36:00.923585 kubelet[3075]: I0517 01:36:00.923571 3075 server.go:449] "Adding debug handlers to kubelet server" May 17 01:36:00.924297 kubelet[3075]: E0517 01:36:00.924281 3075 kubelet.go:1478] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 17 01:36:00.925029 kubelet[3075]: I0517 01:36:00.925016 3075 factory.go:221] Registration of the containerd container factory successfully May 17 01:36:00.925053 kubelet[3075]: I0517 01:36:00.925030 3075 factory.go:221] Registration of the systemd container factory successfully May 17 01:36:00.928874 kubelet[3075]: W0517 01:36:00.928831 3075 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://86.109.9.158:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 86.109.9.158:6443: connect: connection refused May 17 01:36:00.928902 kubelet[3075]: E0517 01:36:00.928889 3075 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://86.109.9.158:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 86.109.9.158:6443: connect: connection refused" logger="UnhandledError" May 17 01:36:00.929447 kubelet[3075]: E0517 01:36:00.929240 3075 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://86.109.9.158:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.7-n-8226148b53?timeout=10s\": dial tcp 86.109.9.158:6443: connect: connection refused" interval="200ms" May 17 01:36:00.937597 kubelet[3075]: E0517 01:36:00.931257 3075 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://86.109.9.158:6443/api/v1/namespaces/default/events\": dial tcp 86.109.9.158:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-3510.3.7-n-8226148b53.18402ca943ef8d20 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-3510.3.7-n-8226148b53,UID:ci-3510.3.7-n-8226148b53,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting 
kubelet.,Source:EventSource{Component:kubelet,Host:ci-3510.3.7-n-8226148b53,},FirstTimestamp:2025-05-17 01:36:00.92062032 +0000 UTC m=+0.930203521,LastTimestamp:2025-05-17 01:36:00.92062032 +0000 UTC m=+0.930203521,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-3510.3.7-n-8226148b53,}" May 17 01:36:00.921000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" May 17 01:36:00.985892 kernel: audit: type=1400 audit(1747445760.921:195): avc: denied { mac_admin } for pid=3075 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 01:36:00.985934 kernel: audit: type=1401 audit(1747445760.921:195): op=setxattr invalid_context="system_u:object_r:container_file_t:s0" May 17 01:36:00.986001 kernel: audit: type=1300 audit(1747445760.921:195): arch=c00000b7 syscall=5 success=no exit=-22 a0=4000c12c60 a1=4000e90cf0 a2=4000c12c30 a3=25 items=0 ppid=1 pid=3075 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 01:36:00.921000 audit[3075]: SYSCALL arch=c00000b7 syscall=5 success=no exit=-22 a0=4000c12c60 a1=4000e90cf0 a2=4000c12c30 a3=25 items=0 ppid=1 pid=3075 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 01:36:00.989651 kubelet[3075]: I0517 01:36:00.989584 3075 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv4" May 17 01:36:00.990803 kubelet[3075]: I0517 01:36:00.990788 3075 cpu_manager.go:214] "Starting CPU manager" policy="none" May 17 01:36:00.990827 kubelet[3075]: I0517 01:36:00.990804 3075 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 17 01:36:00.990827 kubelet[3075]: I0517 01:36:00.990825 3075 state_mem.go:36] "Initialized new in-memory state store" May 17 01:36:00.991375 kubelet[3075]: I0517 01:36:00.991364 3075 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" May 17 01:36:00.991402 kubelet[3075]: I0517 01:36:00.991383 3075 status_manager.go:217] "Starting to sync pod status with apiserver" May 17 01:36:00.991402 kubelet[3075]: I0517 01:36:00.991402 3075 kubelet.go:2321] "Starting kubelet main sync loop" May 17 01:36:00.991442 kubelet[3075]: I0517 01:36:00.991422 3075 policy_none.go:49] "None policy: Start" May 17 01:36:00.991482 kubelet[3075]: E0517 01:36:00.991442 3075 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 17 01:36:00.991940 kubelet[3075]: I0517 01:36:00.991928 3075 memory_manager.go:170] "Starting memorymanager" policy="None" May 17 01:36:00.991962 kubelet[3075]: I0517 01:36:00.991950 3075 state_mem.go:35] "Initializing new in-memory state store" May 17 01:36:00.993157 kubelet[3075]: W0517 01:36:00.993118 3075 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://86.109.9.158:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 86.109.9.158:6443: connect: connection refused May 17 01:36:00.993191 kubelet[3075]: E0517 01:36:00.993171 3075 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://86.109.9.158:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 
86.109.9.158:6443: connect: connection refused" logger="UnhandledError" May 17 01:36:00.996530 kubelet[3075]: I0517 01:36:00.996512 3075 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 17 01:36:00.996580 kubelet[3075]: I0517 01:36:00.996568 3075 server.go:88] "Unprivileged containerized plugins might not work. Could not set selinux context on socket dir" path="/var/lib/kubelet/device-plugins/" err="setxattr /var/lib/kubelet/device-plugins/: invalid argument" May 17 01:36:00.996692 kubelet[3075]: I0517 01:36:00.996684 3075 eviction_manager.go:189] "Eviction manager: starting control loop" May 17 01:36:00.996723 kubelet[3075]: I0517 01:36:00.996696 3075 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 17 01:36:00.996844 kubelet[3075]: I0517 01:36:00.996827 3075 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 17 01:36:00.998055 kubelet[3075]: E0517 01:36:00.998039 3075 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-3510.3.7-n-8226148b53\" not found" May 17 01:36:00.921000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 May 17 01:36:01.095915 kernel: audit: type=1327 audit(1747445760.921:195): proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 May 17 01:36:01.095995 kernel: audit: type=1400 audit(1747445760.921:196): avc: denied { mac_admin } for pid=3075 comm="kubelet" capability=33 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 01:36:00.921000 audit[3075]: AVC avc: denied { mac_admin } for pid=3075 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 01:36:01.098248 kubelet[3075]: I0517 01:36:01.098213 3075 kubelet_node_status.go:72] "Attempting to register node" node="ci-3510.3.7-n-8226148b53" May 17 01:36:01.098940 kubelet[3075]: E0517 01:36:01.098919 3075 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://86.109.9.158:6443/api/v1/nodes\": dial tcp 86.109.9.158:6443: connect: connection refused" node="ci-3510.3.7-n-8226148b53" May 17 01:36:01.130323 kubelet[3075]: E0517 01:36:01.130296 3075 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://86.109.9.158:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.7-n-8226148b53?timeout=10s\": dial tcp 86.109.9.158:6443: connect: connection refused" interval="400ms" May 17 01:36:00.921000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" May 17 01:36:01.159024 kernel: audit: type=1401 audit(1747445760.921:196): op=setxattr invalid_context="system_u:object_r:container_file_t:s0" May 17 01:36:01.159057 kernel: audit: type=1300 audit(1747445760.921:196): arch=c00000b7 syscall=5 success=no exit=-22 a0=40007f4400 a1=4000e90d08 a2=4000c12cf0 a3=25 items=0 ppid=1 pid=3075 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 01:36:00.921000 audit[3075]: SYSCALL arch=c00000b7 syscall=5 success=no exit=-22 a0=40007f4400 a1=4000e90d08 a2=4000c12cf0 a3=25 items=0 ppid=1 pid=3075 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" 
exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 01:36:00.921000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 May 17 01:36:01.224351 kubelet[3075]: I0517 01:36:01.224328 3075 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7b9221b222a2eab39ffc9e914822776d-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3510.3.7-n-8226148b53\" (UID: \"7b9221b222a2eab39ffc9e914822776d\") " pod="kube-system/kube-controller-manager-ci-3510.3.7-n-8226148b53" May 17 01:36:01.224379 kubelet[3075]: I0517 01:36:01.224363 3075 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/75a00eb07575fef7fa0e145f7292c7f8-kubeconfig\") pod \"kube-scheduler-ci-3510.3.7-n-8226148b53\" (UID: \"75a00eb07575fef7fa0e145f7292c7f8\") " pod="kube-system/kube-scheduler-ci-3510.3.7-n-8226148b53" May 17 01:36:01.224436 kubelet[3075]: I0517 01:36:01.224382 3075 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/402ea78e406508b77dd7adfb50944dfd-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3510.3.7-n-8226148b53\" (UID: \"402ea78e406508b77dd7adfb50944dfd\") " pod="kube-system/kube-apiserver-ci-3510.3.7-n-8226148b53" May 17 01:36:01.224436 kubelet[3075]: I0517 01:36:01.224415 3075 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7b9221b222a2eab39ffc9e914822776d-ca-certs\") pod 
\"kube-controller-manager-ci-3510.3.7-n-8226148b53\" (UID: \"7b9221b222a2eab39ffc9e914822776d\") " pod="kube-system/kube-controller-manager-ci-3510.3.7-n-8226148b53" May 17 01:36:01.224497 kubelet[3075]: I0517 01:36:01.224435 3075 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/7b9221b222a2eab39ffc9e914822776d-flexvolume-dir\") pod \"kube-controller-manager-ci-3510.3.7-n-8226148b53\" (UID: \"7b9221b222a2eab39ffc9e914822776d\") " pod="kube-system/kube-controller-manager-ci-3510.3.7-n-8226148b53" May 17 01:36:01.224497 kubelet[3075]: I0517 01:36:01.224452 3075 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7b9221b222a2eab39ffc9e914822776d-k8s-certs\") pod \"kube-controller-manager-ci-3510.3.7-n-8226148b53\" (UID: \"7b9221b222a2eab39ffc9e914822776d\") " pod="kube-system/kube-controller-manager-ci-3510.3.7-n-8226148b53" May 17 01:36:01.224497 kubelet[3075]: I0517 01:36:01.224468 3075 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7b9221b222a2eab39ffc9e914822776d-kubeconfig\") pod \"kube-controller-manager-ci-3510.3.7-n-8226148b53\" (UID: \"7b9221b222a2eab39ffc9e914822776d\") " pod="kube-system/kube-controller-manager-ci-3510.3.7-n-8226148b53" May 17 01:36:01.224497 kubelet[3075]: I0517 01:36:01.224487 3075 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/402ea78e406508b77dd7adfb50944dfd-ca-certs\") pod \"kube-apiserver-ci-3510.3.7-n-8226148b53\" (UID: \"402ea78e406508b77dd7adfb50944dfd\") " pod="kube-system/kube-apiserver-ci-3510.3.7-n-8226148b53" May 17 01:36:01.224577 kubelet[3075]: I0517 01:36:01.224503 3075 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/402ea78e406508b77dd7adfb50944dfd-k8s-certs\") pod \"kube-apiserver-ci-3510.3.7-n-8226148b53\" (UID: \"402ea78e406508b77dd7adfb50944dfd\") " pod="kube-system/kube-apiserver-ci-3510.3.7-n-8226148b53" May 17 01:36:01.267989 kernel: audit: type=1327 audit(1747445760.921:196): proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 May 17 01:36:01.268008 kernel: audit: type=1325 audit(1747445760.926:197): table=mangle:26 family=2 entries=2 op=nft_register_chain pid=3104 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 17 01:36:00.926000 audit[3104]: NETFILTER_CFG table=mangle:26 family=2 entries=2 op=nft_register_chain pid=3104 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 17 01:36:00.926000 audit[3104]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=136 a0=3 a1=ffffd65a1610 a2=0 a3=1 items=0 ppid=3075 pid=3104 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 01:36:01.300531 kubelet[3075]: I0517 01:36:01.300518 3075 kubelet_node_status.go:72] "Attempting to register node" node="ci-3510.3.7-n-8226148b53" May 17 01:36:01.300773 kubelet[3075]: E0517 01:36:01.300752 3075 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://86.109.9.158:6443/api/v1/nodes\": dial tcp 86.109.9.158:6443: connect: connection refused" node="ci-3510.3.7-n-8226148b53" May 17 01:36:01.351565 kernel: audit: type=1300 audit(1747445760.926:197): arch=c00000b7 syscall=211 success=yes exit=136 a0=3 a1=ffffd65a1610 a2=0 a3=1 items=0 ppid=3075 pid=3104 auid=4294967295 uid=0 
gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 01:36:00.926000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 May 17 01:36:00.927000 audit[3105]: NETFILTER_CFG table=filter:27 family=2 entries=1 op=nft_register_chain pid=3105 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 17 01:36:00.927000 audit[3105]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffc1024360 a2=0 a3=1 items=0 ppid=3075 pid=3105 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 01:36:00.927000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 May 17 01:36:00.931000 audit[3111]: NETFILTER_CFG table=filter:28 family=2 entries=2 op=nft_register_chain pid=3111 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 17 01:36:00.931000 audit[3111]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=312 a0=3 a1=fffffe9fdd70 a2=0 a3=1 items=0 ppid=3075 pid=3111 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 01:36:00.931000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C May 17 01:36:00.934000 audit[3113]: NETFILTER_CFG table=filter:29 family=2 entries=2 op=nft_register_chain pid=3113 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 17 01:36:00.934000 audit[3113]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=312 a0=3 a1=fffffd16c930 a2=0 a3=1 items=0 ppid=3075 pid=3113 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 01:36:00.934000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C May 17 01:36:00.988000 audit[3116]: NETFILTER_CFG table=filter:30 family=2 entries=1 op=nft_register_rule pid=3116 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 17 01:36:00.988000 audit[3116]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=924 a0=3 a1=ffffdd77f0b0 a2=0 a3=1 items=0 ppid=3075 pid=3116 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 01:36:00.988000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E7400626C6F636B20696E636F6D696E67206C6F63616C6E657420636F6E6E656374696F6E73002D2D647374003132372E302E302E302F38 May 17 01:36:00.990000 audit[3118]: NETFILTER_CFG table=mangle:31 family=10 entries=2 op=nft_register_chain pid=3118 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 17 01:36:00.990000 audit[3118]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=136 a0=3 a1=ffffd9fb1030 a2=0 a3=1 items=0 ppid=3075 pid=3118 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 01:36:00.990000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 May 17 01:36:00.990000 audit[3119]: NETFILTER_CFG table=mangle:32 family=2 entries=1 op=nft_register_chain pid=3119 subj=system_u:system_r:kernel_t:s0 
comm="iptables" May 17 01:36:00.990000 audit[3119]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffc984c5c0 a2=0 a3=1 items=0 ppid=3075 pid=3119 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 01:36:00.990000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 May 17 01:36:00.992000 audit[3120]: NETFILTER_CFG table=mangle:33 family=10 entries=1 op=nft_register_chain pid=3120 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 17 01:36:00.992000 audit[3120]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffee5a7820 a2=0 a3=1 items=0 ppid=3075 pid=3120 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 01:36:00.992000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 May 17 01:36:00.992000 audit[3121]: NETFILTER_CFG table=nat:34 family=2 entries=1 op=nft_register_chain pid=3121 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 17 01:36:00.992000 audit[3121]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffffdb15ce0 a2=0 a3=1 items=0 ppid=3075 pid=3121 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 01:36:00.992000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 May 17 01:36:00.993000 audit[3122]: NETFILTER_CFG table=nat:35 family=10 entries=2 op=nft_register_chain pid=3122 
subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 17 01:36:00.993000 audit[3122]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=128 a0=3 a1=fffff11471e0 a2=0 a3=1 items=0 ppid=3075 pid=3122 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 01:36:00.993000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 May 17 01:36:00.993000 audit[3123]: NETFILTER_CFG table=filter:36 family=2 entries=1 op=nft_register_chain pid=3123 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 17 01:36:00.993000 audit[3123]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffec4bf660 a2=0 a3=1 items=0 ppid=3075 pid=3123 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 01:36:00.993000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 May 17 01:36:00.994000 audit[3124]: NETFILTER_CFG table=filter:37 family=10 entries=2 op=nft_register_chain pid=3124 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 17 01:36:00.994000 audit[3124]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=136 a0=3 a1=fffff71359f0 a2=0 a3=1 items=0 ppid=3075 pid=3124 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 01:36:00.994000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 May 17 01:36:00.995000 audit[3075]: AVC avc: denied { mac_admin } for pid=3075 comm="kubelet" 
capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 01:36:00.995000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" May 17 01:36:00.995000 audit[3075]: SYSCALL arch=c00000b7 syscall=5 success=no exit=-22 a0=4000fb2d50 a1=4000bd3920 a2=4000fb2d20 a3=25 items=0 ppid=1 pid=3075 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 01:36:00.995000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 May 17 01:36:01.398613 env[2382]: time="2025-05-17T01:36:01.398563400Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3510.3.7-n-8226148b53,Uid:402ea78e406508b77dd7adfb50944dfd,Namespace:kube-system,Attempt:0,}" May 17 01:36:01.399973 env[2382]: time="2025-05-17T01:36:01.399943720Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3510.3.7-n-8226148b53,Uid:7b9221b222a2eab39ffc9e914822776d,Namespace:kube-system,Attempt:0,}" May 17 01:36:01.401545 env[2382]: time="2025-05-17T01:36:01.401519600Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3510.3.7-n-8226148b53,Uid:75a00eb07575fef7fa0e145f7292c7f8,Namespace:kube-system,Attempt:0,}" May 17 01:36:01.530749 kubelet[3075]: E0517 01:36:01.530712 3075 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://86.109.9.158:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.7-n-8226148b53?timeout=10s\": dial tcp 86.109.9.158:6443: connect: connection refused" interval="800ms" May 17 01:36:01.702694 kubelet[3075]: 
I0517 01:36:01.702674 3075 kubelet_node_status.go:72] "Attempting to register node" node="ci-3510.3.7-n-8226148b53" May 17 01:36:01.702961 kubelet[3075]: E0517 01:36:01.702939 3075 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://86.109.9.158:6443/api/v1/nodes\": dial tcp 86.109.9.158:6443: connect: connection refused" node="ci-3510.3.7-n-8226148b53" May 17 01:36:01.877115 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2898441753.mount: Deactivated successfully. May 17 01:36:01.877816 env[2382]: time="2025-05-17T01:36:01.877789240Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 01:36:01.878517 env[2382]: time="2025-05-17T01:36:01.878498920Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 01:36:01.879165 env[2382]: time="2025-05-17T01:36:01.879142280Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 01:36:01.881880 env[2382]: time="2025-05-17T01:36:01.881862600Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 01:36:01.884554 env[2382]: time="2025-05-17T01:36:01.884530080Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 01:36:01.887281 env[2382]: time="2025-05-17T01:36:01.887260640Z" level=info msg="ImageUpdate event 
&ImageUpdate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 01:36:01.887908 env[2382]: time="2025-05-17T01:36:01.887890680Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 01:36:01.888542 env[2382]: time="2025-05-17T01:36:01.888526760Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 01:36:01.889245 env[2382]: time="2025-05-17T01:36:01.889225680Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 01:36:01.889884 env[2382]: time="2025-05-17T01:36:01.889869000Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 01:36:01.890653 env[2382]: time="2025-05-17T01:36:01.890626240Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 01:36:01.891480 env[2382]: time="2025-05-17T01:36:01.891454480Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 01:36:01.897329 env[2382]: time="2025-05-17T01:36:01.897287880Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 01:36:01.897354 env[2382]: time="2025-05-17T01:36:01.897326360Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 01:36:01.897354 env[2382]: time="2025-05-17T01:36:01.897337840Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 01:36:01.897547 env[2382]: time="2025-05-17T01:36:01.897515560Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 01:36:01.897547 env[2382]: time="2025-05-17T01:36:01.897533200Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/76191df5774963bb2337e6765bbc54c998a51061ff09a95b72a6ab4517d7bf25 pid=3148 runtime=io.containerd.runc.v2 May 17 01:36:01.897591 env[2382]: time="2025-05-17T01:36:01.897547240Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 01:36:01.897591 env[2382]: time="2025-05-17T01:36:01.897528880Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 01:36:01.897591 env[2382]: time="2025-05-17T01:36:01.897560440Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 01:36:01.897591 env[2382]: time="2025-05-17T01:36:01.897557760Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 01:36:01.897591 env[2382]: time="2025-05-17T01:36:01.897571280Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 01:36:01.897688 env[2382]: time="2025-05-17T01:36:01.897672080Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/e99bffaa2641155827088ceed80b18b8a29e042d5b907c0236d7278c560bf6ac pid=3147 runtime=io.containerd.runc.v2 May 17 01:36:01.897714 env[2382]: time="2025-05-17T01:36:01.897685360Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/5f755adfad68a5db74b53e24a77b9b291f83fa1ba17dc211c260c7e89379c3f8 pid=3149 runtime=io.containerd.runc.v2 May 17 01:36:01.924988 kubelet[3075]: W0517 01:36:01.924930 3075 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://86.109.9.158:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.7-n-8226148b53&limit=500&resourceVersion=0": dial tcp 86.109.9.158:6443: connect: connection refused May 17 01:36:01.925074 kubelet[3075]: E0517 01:36:01.924997 3075 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://86.109.9.158:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.7-n-8226148b53&limit=500&resourceVersion=0\": dial tcp 86.109.9.158:6443: connect: connection refused" logger="UnhandledError" May 17 01:36:01.942207 env[2382]: time="2025-05-17T01:36:01.942173960Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3510.3.7-n-8226148b53,Uid:402ea78e406508b77dd7adfb50944dfd,Namespace:kube-system,Attempt:0,} returns sandbox id \"5f755adfad68a5db74b53e24a77b9b291f83fa1ba17dc211c260c7e89379c3f8\"" May 17 01:36:01.942207 env[2382]: time="2025-05-17T01:36:01.942182280Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3510.3.7-n-8226148b53,Uid:75a00eb07575fef7fa0e145f7292c7f8,Namespace:kube-system,Attempt:0,} returns sandbox id 
\"76191df5774963bb2337e6765bbc54c998a51061ff09a95b72a6ab4517d7bf25\"" May 17 01:36:01.942718 env[2382]: time="2025-05-17T01:36:01.942692520Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3510.3.7-n-8226148b53,Uid:7b9221b222a2eab39ffc9e914822776d,Namespace:kube-system,Attempt:0,} returns sandbox id \"e99bffaa2641155827088ceed80b18b8a29e042d5b907c0236d7278c560bf6ac\"" May 17 01:36:01.944580 env[2382]: time="2025-05-17T01:36:01.944556080Z" level=info msg="CreateContainer within sandbox \"76191df5774963bb2337e6765bbc54c998a51061ff09a95b72a6ab4517d7bf25\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" May 17 01:36:01.944647 env[2382]: time="2025-05-17T01:36:01.944557040Z" level=info msg="CreateContainer within sandbox \"e99bffaa2641155827088ceed80b18b8a29e042d5b907c0236d7278c560bf6ac\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" May 17 01:36:01.944725 env[2382]: time="2025-05-17T01:36:01.944556920Z" level=info msg="CreateContainer within sandbox \"5f755adfad68a5db74b53e24a77b9b291f83fa1ba17dc211c260c7e89379c3f8\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" May 17 01:36:01.950363 env[2382]: time="2025-05-17T01:36:01.950335520Z" level=info msg="CreateContainer within sandbox \"76191df5774963bb2337e6765bbc54c998a51061ff09a95b72a6ab4517d7bf25\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"a10b1a21aaf442f13116885d308c10ede4c7835eca2957f53f6227075d5ff1a7\"" May 17 01:36:01.950740 env[2382]: time="2025-05-17T01:36:01.950718720Z" level=info msg="StartContainer for \"a10b1a21aaf442f13116885d308c10ede4c7835eca2957f53f6227075d5ff1a7\"" May 17 01:36:01.950954 env[2382]: time="2025-05-17T01:36:01.950925600Z" level=info msg="CreateContainer within sandbox \"e99bffaa2641155827088ceed80b18b8a29e042d5b907c0236d7278c560bf6ac\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id 
\"b8dd7fbcda74e382a80c19dcb97d461b2701c9ef40e3049a0afb4eb49e6bb16e\"" May 17 01:36:01.951104 env[2382]: time="2025-05-17T01:36:01.951073240Z" level=info msg="CreateContainer within sandbox \"5f755adfad68a5db74b53e24a77b9b291f83fa1ba17dc211c260c7e89379c3f8\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"bb3c12dbf12ef4c4a40e0c262a20ca35a1a4fc3761fc4b61898c144754fcd4e3\"" May 17 01:36:01.951248 env[2382]: time="2025-05-17T01:36:01.951221920Z" level=info msg="StartContainer for \"b8dd7fbcda74e382a80c19dcb97d461b2701c9ef40e3049a0afb4eb49e6bb16e\"" May 17 01:36:01.951405 env[2382]: time="2025-05-17T01:36:01.951378360Z" level=info msg="StartContainer for \"bb3c12dbf12ef4c4a40e0c262a20ca35a1a4fc3761fc4b61898c144754fcd4e3\"" May 17 01:36:02.004457 env[2382]: time="2025-05-17T01:36:02.004377880Z" level=info msg="StartContainer for \"a10b1a21aaf442f13116885d308c10ede4c7835eca2957f53f6227075d5ff1a7\" returns successfully" May 17 01:36:02.004529 env[2382]: time="2025-05-17T01:36:02.004450960Z" level=info msg="StartContainer for \"b8dd7fbcda74e382a80c19dcb97d461b2701c9ef40e3049a0afb4eb49e6bb16e\" returns successfully" May 17 01:36:02.004529 env[2382]: time="2025-05-17T01:36:02.004484680Z" level=info msg="StartContainer for \"bb3c12dbf12ef4c4a40e0c262a20ca35a1a4fc3761fc4b61898c144754fcd4e3\" returns successfully" May 17 01:36:02.505056 kubelet[3075]: I0517 01:36:02.505033 3075 kubelet_node_status.go:72] "Attempting to register node" node="ci-3510.3.7-n-8226148b53" May 17 01:36:03.234014 kubelet[3075]: E0517 01:36:03.233970 3075 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-3510.3.7-n-8226148b53\" not found" node="ci-3510.3.7-n-8226148b53" May 17 01:36:03.337734 kubelet[3075]: I0517 01:36:03.337707 3075 kubelet_node_status.go:75] "Successfully registered node" node="ci-3510.3.7-n-8226148b53" May 17 01:36:03.337769 kubelet[3075]: E0517 01:36:03.337740 3075 kubelet_node_status.go:535] "Error 
updating node status, will retry" err="error getting node \"ci-3510.3.7-n-8226148b53\": node \"ci-3510.3.7-n-8226148b53\" not found" May 17 01:36:03.345045 kubelet[3075]: E0517 01:36:03.345022 3075 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-3510.3.7-n-8226148b53\" not found" May 17 01:36:03.445727 kubelet[3075]: E0517 01:36:03.445698 3075 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-3510.3.7-n-8226148b53\" not found" May 17 01:36:03.546222 kubelet[3075]: E0517 01:36:03.546155 3075 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-3510.3.7-n-8226148b53\" not found" May 17 01:36:03.646480 kubelet[3075]: E0517 01:36:03.646459 3075 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-3510.3.7-n-8226148b53\" not found" May 17 01:36:03.746947 kubelet[3075]: E0517 01:36:03.746929 3075 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-3510.3.7-n-8226148b53\" not found" May 17 01:36:03.890469 kubelet[3075]: I0517 01:36:03.890447 3075 apiserver.go:52] "Watching apiserver" May 17 01:36:03.922909 kubelet[3075]: I0517 01:36:03.922888 3075 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" May 17 01:36:04.009422 kubelet[3075]: E0517 01:36:04.009246 3075 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-3510.3.7-n-8226148b53\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-3510.3.7-n-8226148b53" May 17 01:36:04.009422 kubelet[3075]: E0517 01:36:04.009259 3075 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ci-3510.3.7-n-8226148b53\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-3510.3.7-n-8226148b53" May 17 01:36:04.009422 kubelet[3075]: 
E0517 01:36:04.009250 3075 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-scheduler-ci-3510.3.7-n-8226148b53\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-3510.3.7-n-8226148b53" May 17 01:36:05.008422 kubelet[3075]: W0517 01:36:05.008391 3075 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] May 17 01:36:05.429110 systemd[1]: Reloading. May 17 01:36:05.461946 /usr/lib/systemd/system-generators/torcx-generator[3507]: time="2025-05-17T01:36:05Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]" May 17 01:36:05.462263 /usr/lib/systemd/system-generators/torcx-generator[3507]: time="2025-05-17T01:36:05Z" level=info msg="torcx already run" May 17 01:36:05.511866 kubelet[3075]: W0517 01:36:05.511837 3075 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] May 17 01:36:05.540994 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. May 17 01:36:05.541129 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. May 17 01:36:05.557483 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 17 01:36:05.631777 systemd[1]: Stopping kubelet.service... 
May 17 01:36:05.648107 systemd[1]: kubelet.service: Deactivated successfully. May 17 01:36:05.648380 systemd[1]: Stopped kubelet.service. May 17 01:36:05.647000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 01:36:05.650084 systemd[1]: Starting kubelet.service... May 17 01:36:05.742261 systemd[1]: Started kubelet.service. May 17 01:36:05.741000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 01:36:05.774213 kubelet[3578]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 17 01:36:05.774213 kubelet[3578]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. May 17 01:36:05.774213 kubelet[3578]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
May 17 01:36:05.774510 kubelet[3578]: I0517 01:36:05.774264 3578 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 17 01:36:05.779209 kubelet[3578]: I0517 01:36:05.779188 3578 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" May 17 01:36:05.779234 kubelet[3578]: I0517 01:36:05.779211 3578 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 17 01:36:05.779408 kubelet[3578]: I0517 01:36:05.779399 3578 server.go:934] "Client rotation is on, will bootstrap in background" May 17 01:36:05.780602 kubelet[3578]: I0517 01:36:05.780590 3578 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". May 17 01:36:05.783590 kubelet[3578]: I0517 01:36:05.783568 3578 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 17 01:36:05.785941 kubelet[3578]: E0517 01:36:05.785921 3578 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" May 17 01:36:05.785964 kubelet[3578]: I0517 01:36:05.785945 3578 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." May 17 01:36:05.803758 kubelet[3578]: I0517 01:36:05.803730 3578 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 17 01:36:05.804112 kubelet[3578]: I0517 01:36:05.804101 3578 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" May 17 01:36:05.804232 kubelet[3578]: I0517 01:36:05.804209 3578 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 17 01:36:05.804413 kubelet[3578]: I0517 01:36:05.804237 3578 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-3510.3.7-n-8226148b53","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","Topolo
gyManagerPolicyOptions":null,"CgroupVersion":1} May 17 01:36:05.804481 kubelet[3578]: I0517 01:36:05.804421 3578 topology_manager.go:138] "Creating topology manager with none policy" May 17 01:36:05.804481 kubelet[3578]: I0517 01:36:05.804430 3578 container_manager_linux.go:300] "Creating device plugin manager" May 17 01:36:05.804481 kubelet[3578]: I0517 01:36:05.804461 3578 state_mem.go:36] "Initialized new in-memory state store" May 17 01:36:05.804554 kubelet[3578]: I0517 01:36:05.804547 3578 kubelet.go:408] "Attempting to sync node with API server" May 17 01:36:05.804577 kubelet[3578]: I0517 01:36:05.804561 3578 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" May 17 01:36:05.804599 kubelet[3578]: I0517 01:36:05.804579 3578 kubelet.go:314] "Adding apiserver pod source" May 17 01:36:05.804599 kubelet[3578]: I0517 01:36:05.804592 3578 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 17 01:36:05.805078 kubelet[3578]: I0517 01:36:05.805056 3578 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" May 17 01:36:05.805546 kubelet[3578]: I0517 01:36:05.805536 3578 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 17 01:36:05.805958 kubelet[3578]: I0517 01:36:05.805947 3578 server.go:1274] "Started kubelet" May 17 01:36:05.806025 kubelet[3578]: I0517 01:36:05.805977 3578 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 May 17 01:36:05.806056 kubelet[3578]: I0517 01:36:05.806022 3578 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 17 01:36:05.806230 kubelet[3578]: I0517 01:36:05.806220 3578 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 17 01:36:05.805000 audit[3578]: AVC avc: denied { mac_admin } for pid=3578 comm="kubelet" capability=33 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 01:36:05.805000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" May 17 01:36:05.805000 audit[3578]: SYSCALL arch=c00000b7 syscall=5 success=no exit=-22 a0=40007013b0 a1=4000b0a720 a2=4000701380 a3=25 items=0 ppid=1 pid=3578 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 01:36:05.805000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 May 17 01:36:05.806000 audit[3578]: AVC avc: denied { mac_admin } for pid=3578 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 01:36:05.806000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" May 17 01:36:05.806000 audit[3578]: SYSCALL arch=c00000b7 syscall=5 success=no exit=-22 a0=4000b2e620 a1=4000b0a738 a2=4000701440 a3=25 items=0 ppid=1 pid=3578 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 01:36:05.806000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 May 17 01:36:05.807094 kubelet[3578]: I0517 01:36:05.806878 3578 kubelet.go:1430] "Unprivileged containerized plugins might not work, could not set 
selinux context on plugin registration dir" path="/var/lib/kubelet/plugins_registry" err="setxattr /var/lib/kubelet/plugins_registry: invalid argument" May 17 01:36:05.807094 kubelet[3578]: I0517 01:36:05.806921 3578 kubelet.go:1434] "Unprivileged containerized plugins might not work, could not set selinux context on plugins dir" path="/var/lib/kubelet/plugins" err="setxattr /var/lib/kubelet/plugins: invalid argument" May 17 01:36:05.807094 kubelet[3578]: I0517 01:36:05.806945 3578 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 17 01:36:05.807094 kubelet[3578]: I0517 01:36:05.806956 3578 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 17 01:36:05.807094 kubelet[3578]: I0517 01:36:05.806987 3578 volume_manager.go:289] "Starting Kubelet Volume Manager" May 17 01:36:05.807094 kubelet[3578]: E0517 01:36:05.807001 3578 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-3510.3.7-n-8226148b53\" not found" May 17 01:36:05.807094 kubelet[3578]: I0517 01:36:05.807011 3578 desired_state_of_world_populator.go:147] "Desired state populator starts to run" May 17 01:36:05.808298 kubelet[3578]: I0517 01:36:05.807999 3578 reconciler.go:26] "Reconciler: start to sync state" May 17 01:36:05.808515 kubelet[3578]: I0517 01:36:05.808498 3578 factory.go:221] Registration of the systemd container factory successfully May 17 01:36:05.809161 kubelet[3578]: I0517 01:36:05.809136 3578 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 17 01:36:05.809209 kubelet[3578]: E0517 01:36:05.809192 3578 kubelet.go:1478] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 17 01:36:05.809886 kubelet[3578]: I0517 01:36:05.809870 3578 server.go:449] "Adding debug handlers to kubelet server" May 17 01:36:05.809989 kubelet[3578]: I0517 01:36:05.809974 3578 factory.go:221] Registration of the containerd container factory successfully May 17 01:36:05.814096 kubelet[3578]: I0517 01:36:05.814071 3578 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 17 01:36:05.814993 kubelet[3578]: I0517 01:36:05.814983 3578 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" May 17 01:36:05.815030 kubelet[3578]: I0517 01:36:05.814998 3578 status_manager.go:217] "Starting to sync pod status with apiserver" May 17 01:36:05.815030 kubelet[3578]: I0517 01:36:05.815015 3578 kubelet.go:2321] "Starting kubelet main sync loop" May 17 01:36:05.815084 kubelet[3578]: E0517 01:36:05.815057 3578 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 17 01:36:05.848903 kubelet[3578]: I0517 01:36:05.848866 3578 cpu_manager.go:214] "Starting CPU manager" policy="none" May 17 01:36:05.848903 kubelet[3578]: I0517 01:36:05.848886 3578 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 17 01:36:05.848903 kubelet[3578]: I0517 01:36:05.848906 3578 state_mem.go:36] "Initialized new in-memory state store" May 17 01:36:05.849144 kubelet[3578]: I0517 01:36:05.849050 3578 state_mem.go:88] "Updated default CPUSet" cpuSet="" May 17 01:36:05.849144 kubelet[3578]: I0517 01:36:05.849061 3578 state_mem.go:96] "Updated CPUSet assignments" assignments={} May 17 01:36:05.849144 kubelet[3578]: I0517 01:36:05.849082 3578 policy_none.go:49] "None policy: Start" May 17 01:36:05.849498 kubelet[3578]: I0517 01:36:05.849481 3578 memory_manager.go:170] "Starting memorymanager" policy="None" May 17 01:36:05.849528 kubelet[3578]: 
I0517 01:36:05.849510 3578 state_mem.go:35] "Initializing new in-memory state store" May 17 01:36:05.849670 kubelet[3578]: I0517 01:36:05.849662 3578 state_mem.go:75] "Updated machine memory state" May 17 01:36:05.850760 kubelet[3578]: I0517 01:36:05.850744 3578 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 17 01:36:05.849000 audit[3578]: AVC avc: denied { mac_admin } for pid=3578 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 01:36:05.849000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" May 17 01:36:05.849000 audit[3578]: SYSCALL arch=c00000b7 syscall=5 success=no exit=-22 a0=4001340930 a1=400182d8c0 a2=4001340900 a3=25 items=0 ppid=1 pid=3578 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 01:36:05.849000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 May 17 01:36:05.850956 kubelet[3578]: I0517 01:36:05.850804 3578 server.go:88] "Unprivileged containerized plugins might not work. 
Could not set selinux context on socket dir" path="/var/lib/kubelet/device-plugins/" err="setxattr /var/lib/kubelet/device-plugins/: invalid argument" May 17 01:36:05.850956 kubelet[3578]: I0517 01:36:05.850947 3578 eviction_manager.go:189] "Eviction manager: starting control loop" May 17 01:36:05.851005 kubelet[3578]: I0517 01:36:05.850958 3578 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 17 01:36:05.851132 kubelet[3578]: I0517 01:36:05.851116 3578 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 17 01:36:05.918996 kubelet[3578]: W0517 01:36:05.918971 3578 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] May 17 01:36:05.918996 kubelet[3578]: W0517 01:36:05.918989 3578 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] May 17 01:36:05.919108 kubelet[3578]: E0517 01:36:05.919041 3578 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-scheduler-ci-3510.3.7-n-8226148b53\" already exists" pod="kube-system/kube-scheduler-ci-3510.3.7-n-8226148b53" May 17 01:36:05.919310 kubelet[3578]: W0517 01:36:05.919296 3578 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] May 17 01:36:05.919351 kubelet[3578]: E0517 01:36:05.919339 3578 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-3510.3.7-n-8226148b53\" already exists" pod="kube-system/kube-apiserver-ci-3510.3.7-n-8226148b53" May 17 01:36:05.953773 kubelet[3578]: I0517 01:36:05.953756 3578 kubelet_node_status.go:72] "Attempting to register node" node="ci-3510.3.7-n-8226148b53" May 17 01:36:05.958271 kubelet[3578]: I0517 01:36:05.958249 3578 kubelet_node_status.go:111] "Node was 
previously registered" node="ci-3510.3.7-n-8226148b53" May 17 01:36:05.958338 kubelet[3578]: I0517 01:36:05.958316 3578 kubelet_node_status.go:75] "Successfully registered node" node="ci-3510.3.7-n-8226148b53" May 17 01:36:06.008624 kubelet[3578]: I0517 01:36:06.008540 3578 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7b9221b222a2eab39ffc9e914822776d-ca-certs\") pod \"kube-controller-manager-ci-3510.3.7-n-8226148b53\" (UID: \"7b9221b222a2eab39ffc9e914822776d\") " pod="kube-system/kube-controller-manager-ci-3510.3.7-n-8226148b53" May 17 01:36:06.008624 kubelet[3578]: I0517 01:36:06.008572 3578 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7b9221b222a2eab39ffc9e914822776d-k8s-certs\") pod \"kube-controller-manager-ci-3510.3.7-n-8226148b53\" (UID: \"7b9221b222a2eab39ffc9e914822776d\") " pod="kube-system/kube-controller-manager-ci-3510.3.7-n-8226148b53" May 17 01:36:06.008624 kubelet[3578]: I0517 01:36:06.008593 3578 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7b9221b222a2eab39ffc9e914822776d-kubeconfig\") pod \"kube-controller-manager-ci-3510.3.7-n-8226148b53\" (UID: \"7b9221b222a2eab39ffc9e914822776d\") " pod="kube-system/kube-controller-manager-ci-3510.3.7-n-8226148b53" May 17 01:36:06.008783 kubelet[3578]: I0517 01:36:06.008655 3578 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7b9221b222a2eab39ffc9e914822776d-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3510.3.7-n-8226148b53\" (UID: \"7b9221b222a2eab39ffc9e914822776d\") " pod="kube-system/kube-controller-manager-ci-3510.3.7-n-8226148b53" May 17 01:36:06.008783 kubelet[3578]: 
I0517 01:36:06.008710 3578 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/75a00eb07575fef7fa0e145f7292c7f8-kubeconfig\") pod \"kube-scheduler-ci-3510.3.7-n-8226148b53\" (UID: \"75a00eb07575fef7fa0e145f7292c7f8\") " pod="kube-system/kube-scheduler-ci-3510.3.7-n-8226148b53" May 17 01:36:06.008783 kubelet[3578]: I0517 01:36:06.008741 3578 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/402ea78e406508b77dd7adfb50944dfd-ca-certs\") pod \"kube-apiserver-ci-3510.3.7-n-8226148b53\" (UID: \"402ea78e406508b77dd7adfb50944dfd\") " pod="kube-system/kube-apiserver-ci-3510.3.7-n-8226148b53" May 17 01:36:06.008783 kubelet[3578]: I0517 01:36:06.008768 3578 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/402ea78e406508b77dd7adfb50944dfd-k8s-certs\") pod \"kube-apiserver-ci-3510.3.7-n-8226148b53\" (UID: \"402ea78e406508b77dd7adfb50944dfd\") " pod="kube-system/kube-apiserver-ci-3510.3.7-n-8226148b53" May 17 01:36:06.008875 kubelet[3578]: I0517 01:36:06.008799 3578 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/402ea78e406508b77dd7adfb50944dfd-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3510.3.7-n-8226148b53\" (UID: \"402ea78e406508b77dd7adfb50944dfd\") " pod="kube-system/kube-apiserver-ci-3510.3.7-n-8226148b53" May 17 01:36:06.008875 kubelet[3578]: I0517 01:36:06.008817 3578 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/7b9221b222a2eab39ffc9e914822776d-flexvolume-dir\") pod \"kube-controller-manager-ci-3510.3.7-n-8226148b53\" (UID: 
\"7b9221b222a2eab39ffc9e914822776d\") " pod="kube-system/kube-controller-manager-ci-3510.3.7-n-8226148b53" May 17 01:36:06.805754 kubelet[3578]: I0517 01:36:06.805712 3578 apiserver.go:52] "Watching apiserver" May 17 01:36:06.825892 kubelet[3578]: W0517 01:36:06.825865 3578 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] May 17 01:36:06.825983 kubelet[3578]: E0517 01:36:06.825928 3578 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-3510.3.7-n-8226148b53\" already exists" pod="kube-system/kube-apiserver-ci-3510.3.7-n-8226148b53" May 17 01:36:06.836139 kubelet[3578]: I0517 01:36:06.836099 3578 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-3510.3.7-n-8226148b53" podStartSLOduration=1.8360872000000001 podStartE2EDuration="1.8360872s" podCreationTimestamp="2025-05-17 01:36:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-17 01:36:06.83595128 +0000 UTC m=+1.090315601" watchObservedRunningTime="2025-05-17 01:36:06.8360872 +0000 UTC m=+1.090451481" May 17 01:36:06.841433 kubelet[3578]: I0517 01:36:06.841396 3578 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-3510.3.7-n-8226148b53" podStartSLOduration=1.8413836799999999 podStartE2EDuration="1.84138368s" podCreationTimestamp="2025-05-17 01:36:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-17 01:36:06.8412854 +0000 UTC m=+1.095649721" watchObservedRunningTime="2025-05-17 01:36:06.84138368 +0000 UTC m=+1.095748001" May 17 01:36:06.853746 kubelet[3578]: I0517 01:36:06.853706 3578 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="kube-system/kube-apiserver-ci-3510.3.7-n-8226148b53" podStartSLOduration=1.8536915600000001 podStartE2EDuration="1.85369156s" podCreationTimestamp="2025-05-17 01:36:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-17 01:36:06.84796036 +0000 UTC m=+1.102324681" watchObservedRunningTime="2025-05-17 01:36:06.85369156 +0000 UTC m=+1.108055881" May 17 01:36:06.907594 kubelet[3578]: I0517 01:36:06.907557 3578 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" May 17 01:36:11.791311 kubelet[3578]: I0517 01:36:11.791273 3578 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" May 17 01:36:11.791696 env[2382]: time="2025-05-17T01:36:11.791588320Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." May 17 01:36:11.791878 kubelet[3578]: I0517 01:36:11.791744 3578 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" May 17 01:36:12.655265 kubelet[3578]: I0517 01:36:12.655235 3578 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/fb7fba75-04ff-464d-b078-c8751450bd5f-xtables-lock\") pod \"kube-proxy-q5c2b\" (UID: \"fb7fba75-04ff-464d-b078-c8751450bd5f\") " pod="kube-system/kube-proxy-q5c2b" May 17 01:36:12.655265 kubelet[3578]: I0517 01:36:12.655268 3578 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/fb7fba75-04ff-464d-b078-c8751450bd5f-lib-modules\") pod \"kube-proxy-q5c2b\" (UID: \"fb7fba75-04ff-464d-b078-c8751450bd5f\") " pod="kube-system/kube-proxy-q5c2b" May 17 01:36:12.655404 kubelet[3578]: I0517 01:36:12.655289 3578 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/fb7fba75-04ff-464d-b078-c8751450bd5f-kube-proxy\") pod \"kube-proxy-q5c2b\" (UID: \"fb7fba75-04ff-464d-b078-c8751450bd5f\") " pod="kube-system/kube-proxy-q5c2b" May 17 01:36:12.655404 kubelet[3578]: I0517 01:36:12.655306 3578 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nrc54\" (UniqueName: \"kubernetes.io/projected/fb7fba75-04ff-464d-b078-c8751450bd5f-kube-api-access-nrc54\") pod \"kube-proxy-q5c2b\" (UID: \"fb7fba75-04ff-464d-b078-c8751450bd5f\") " pod="kube-system/kube-proxy-q5c2b" May 17 01:36:12.762498 kubelet[3578]: I0517 01:36:12.762468 3578 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" May 17 01:36:12.901128 env[2382]: time="2025-05-17T01:36:12.901097360Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-q5c2b,Uid:fb7fba75-04ff-464d-b078-c8751450bd5f,Namespace:kube-system,Attempt:0,}" May 17 01:36:12.908482 env[2382]: time="2025-05-17T01:36:12.908398280Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 01:36:12.908482 env[2382]: time="2025-05-17T01:36:12.908436880Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 01:36:12.908482 env[2382]: time="2025-05-17T01:36:12.908447800Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 01:36:12.908617 env[2382]: time="2025-05-17T01:36:12.908589720Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/99ef87af35f2ed1ec73fd5322aef2bb8414bcc544e705531920eb728f3e3dc51 pid=3757 runtime=io.containerd.runc.v2 May 17 01:36:12.937897 env[2382]: time="2025-05-17T01:36:12.937861520Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-q5c2b,Uid:fb7fba75-04ff-464d-b078-c8751450bd5f,Namespace:kube-system,Attempt:0,} returns sandbox id \"99ef87af35f2ed1ec73fd5322aef2bb8414bcc544e705531920eb728f3e3dc51\"" May 17 01:36:12.939799 env[2382]: time="2025-05-17T01:36:12.939774440Z" level=info msg="CreateContainer within sandbox \"99ef87af35f2ed1ec73fd5322aef2bb8414bcc544e705531920eb728f3e3dc51\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" May 17 01:36:12.946921 env[2382]: time="2025-05-17T01:36:12.946891800Z" level=info msg="CreateContainer within sandbox \"99ef87af35f2ed1ec73fd5322aef2bb8414bcc544e705531920eb728f3e3dc51\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"d432334b20a871573984ddbc426d36007d390001180377a79c9857c5d8104c6d\"" May 17 01:36:12.947352 env[2382]: time="2025-05-17T01:36:12.947328480Z" level=info msg="StartContainer for \"d432334b20a871573984ddbc426d36007d390001180377a79c9857c5d8104c6d\"" May 17 01:36:12.957199 kubelet[3578]: I0517 01:36:12.957171 3578 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/dbc962af-7c7a-460f-917b-8bc4f476d7ae-var-lib-calico\") pod \"tigera-operator-7c5755cdcb-pzfdb\" (UID: \"dbc962af-7c7a-460f-917b-8bc4f476d7ae\") " pod="tigera-operator/tigera-operator-7c5755cdcb-pzfdb" May 17 01:36:12.957409 kubelet[3578]: I0517 01:36:12.957221 3578 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-4n2d4\" (UniqueName: \"kubernetes.io/projected/dbc962af-7c7a-460f-917b-8bc4f476d7ae-kube-api-access-4n2d4\") pod \"tigera-operator-7c5755cdcb-pzfdb\" (UID: \"dbc962af-7c7a-460f-917b-8bc4f476d7ae\") " pod="tigera-operator/tigera-operator-7c5755cdcb-pzfdb" May 17 01:36:12.987439 env[2382]: time="2025-05-17T01:36:12.987409520Z" level=info msg="StartContainer for \"d432334b20a871573984ddbc426d36007d390001180377a79c9857c5d8104c6d\" returns successfully" May 17 01:36:13.128000 audit[3878]: NETFILTER_CFG table=mangle:38 family=2 entries=1 op=nft_register_chain pid=3878 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 17 01:36:13.140780 kernel: kauditd_printk_skb: 52 callbacks suppressed May 17 01:36:13.140820 kernel: audit: type=1325 audit(1747445773.128:215): table=mangle:38 family=2 entries=1 op=nft_register_chain pid=3878 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 17 01:36:13.128000 audit[3878]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffe713fa90 a2=0 a3=1 items=0 ppid=3808 pid=3878 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 01:36:13.208684 env[2382]: time="2025-05-17T01:36:13.208650640Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7c5755cdcb-pzfdb,Uid:dbc962af-7c7a-460f-917b-8bc4f476d7ae,Namespace:tigera-operator,Attempt:0,}" May 17 01:36:13.223628 kernel: audit: type=1300 audit(1747445773.128:215): arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffe713fa90 a2=0 a3=1 items=0 ppid=3808 pid=3878 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 01:36:13.223686 kernel: audit: type=1327 audit(1747445773.128:215): 
proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 May 17 01:36:13.128000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 May 17 01:36:13.128000 audit[3879]: NETFILTER_CFG table=mangle:39 family=10 entries=1 op=nft_register_chain pid=3879 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 17 01:36:13.279490 kernel: audit: type=1325 audit(1747445773.128:216): table=mangle:39 family=10 entries=1 op=nft_register_chain pid=3879 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 17 01:36:13.128000 audit[3879]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffe54cc040 a2=0 a3=1 items=0 ppid=3808 pid=3879 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 01:36:13.335080 kernel: audit: type=1300 audit(1747445773.128:216): arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffe54cc040 a2=0 a3=1 items=0 ppid=3808 pid=3879 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 01:36:13.128000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 May 17 01:36:13.362939 kernel: audit: type=1327 audit(1747445773.128:216): proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 May 17 01:36:13.129000 audit[3882]: NETFILTER_CFG table=nat:40 family=2 entries=1 op=nft_register_chain pid=3882 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 17 01:36:13.371034 env[2382]: time="2025-05-17T01:36:13.370972680Z" level=info msg="loading plugin 
\"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 01:36:13.371173 env[2382]: time="2025-05-17T01:36:13.371012040Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 01:36:13.371173 env[2382]: time="2025-05-17T01:36:13.371022400Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 01:36:13.371325 env[2382]: time="2025-05-17T01:36:13.371172600Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/e851e175dd955e6dd125037ff13ce992c260d23e70395df6ac5241d22b95492e pid=3915 runtime=io.containerd.runc.v2 May 17 01:36:13.390572 kernel: audit: type=1325 audit(1747445773.129:217): table=nat:40 family=2 entries=1 op=nft_register_chain pid=3882 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 17 01:36:13.390651 kernel: audit: type=1300 audit(1747445773.129:217): arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffca33cd50 a2=0 a3=1 items=0 ppid=3808 pid=3882 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 01:36:13.129000 audit[3882]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffca33cd50 a2=0 a3=1 items=0 ppid=3808 pid=3882 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 01:36:13.129000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 May 17 01:36:13.473431 kernel: audit: type=1327 audit(1747445773.129:217): 
proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 May 17 01:36:13.130000 audit[3883]: NETFILTER_CFG table=nat:41 family=10 entries=1 op=nft_register_chain pid=3883 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 17 01:36:13.500927 kernel: audit: type=1325 audit(1747445773.130:218): table=nat:41 family=10 entries=1 op=nft_register_chain pid=3883 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 17 01:36:13.130000 audit[3883]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffc5b798d0 a2=0 a3=1 items=0 ppid=3808 pid=3883 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 01:36:13.130000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 May 17 01:36:13.131000 audit[3884]: NETFILTER_CFG table=filter:42 family=2 entries=1 op=nft_register_chain pid=3884 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 17 01:36:13.131000 audit[3884]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffecdd71d0 a2=0 a3=1 items=0 ppid=3808 pid=3884 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 01:36:13.131000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 May 17 01:36:13.132000 audit[3885]: NETFILTER_CFG table=filter:43 family=10 entries=1 op=nft_register_chain pid=3885 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 17 01:36:13.132000 audit[3885]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffe5a11f10 a2=0 a3=1 items=0 ppid=3808 pid=3885 auid=4294967295 uid=0 gid=0 euid=0 suid=0 
fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 01:36:13.132000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 May 17 01:36:13.233000 audit[3887]: NETFILTER_CFG table=filter:44 family=2 entries=1 op=nft_register_chain pid=3887 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 17 01:36:13.233000 audit[3887]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=108 a0=3 a1=ffffd7e37d40 a2=0 a3=1 items=0 ppid=3808 pid=3887 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 01:36:13.233000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 May 17 01:36:13.236000 audit[3889]: NETFILTER_CFG table=filter:45 family=2 entries=1 op=nft_register_rule pid=3889 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 17 01:36:13.236000 audit[3889]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=752 a0=3 a1=ffffd8db93a0 a2=0 a3=1 items=0 ppid=3808 pid=3889 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 01:36:13.236000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276696365 May 17 01:36:13.239000 audit[3892]: NETFILTER_CFG table=filter:46 family=2 entries=1 op=nft_register_rule pid=3892 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 17 01:36:13.239000 
audit[3892]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=752 a0=3 a1=ffffe1b037a0 a2=0 a3=1 items=0 ppid=3808 pid=3892 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 01:36:13.239000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669 May 17 01:36:13.240000 audit[3893]: NETFILTER_CFG table=filter:47 family=2 entries=1 op=nft_register_chain pid=3893 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 17 01:36:13.240000 audit[3893]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffd092eab0 a2=0 a3=1 items=0 ppid=3808 pid=3893 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 01:36:13.240000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 May 17 01:36:13.242000 audit[3895]: NETFILTER_CFG table=filter:48 family=2 entries=1 op=nft_register_rule pid=3895 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 17 01:36:13.242000 audit[3895]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffe5a6d440 a2=0 a3=1 items=0 ppid=3808 pid=3895 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 01:36:13.242000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 May 17 01:36:13.243000 audit[3896]: NETFILTER_CFG table=filter:49 family=2 entries=1 op=nft_register_chain pid=3896 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 17 01:36:13.243000 audit[3896]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffff0ff3e0 a2=0 a3=1 items=0 ppid=3808 pid=3896 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 01:36:13.243000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 May 17 01:36:13.246000 audit[3898]: NETFILTER_CFG table=filter:50 family=2 entries=1 op=nft_register_rule pid=3898 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 17 01:36:13.246000 audit[3898]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=744 a0=3 a1=ffffc19f3870 a2=0 a3=1 items=0 ppid=3808 pid=3898 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 01:36:13.246000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D May 17 01:36:13.249000 audit[3901]: NETFILTER_CFG table=filter:51 family=2 entries=1 op=nft_register_rule pid=3901 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 17 01:36:13.249000 audit[3901]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=744 a0=3 
a1=fffff2bb6040 a2=0 a3=1 items=0 ppid=3808 pid=3901 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 01:36:13.249000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D53 May 17 01:36:13.250000 audit[3902]: NETFILTER_CFG table=filter:52 family=2 entries=1 op=nft_register_chain pid=3902 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 17 01:36:13.250000 audit[3902]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffd098d620 a2=0 a3=1 items=0 ppid=3808 pid=3902 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 01:36:13.250000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 May 17 01:36:13.253000 audit[3904]: NETFILTER_CFG table=filter:53 family=2 entries=1 op=nft_register_rule pid=3904 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 17 01:36:13.253000 audit[3904]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=fffff2d4f8f0 a2=0 a3=1 items=0 ppid=3808 pid=3904 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 01:36:13.253000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 May 17 
01:36:13.254000 audit[3905]: NETFILTER_CFG table=filter:54 family=2 entries=1 op=nft_register_chain pid=3905 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 17 01:36:13.254000 audit[3905]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=fffff67acfe0 a2=0 a3=1 items=0 ppid=3808 pid=3905 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 01:36:13.254000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 May 17 01:36:13.257000 audit[3907]: NETFILTER_CFG table=filter:55 family=2 entries=1 op=nft_register_rule pid=3907 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 17 01:36:13.257000 audit[3907]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffe7a2fc40 a2=0 a3=1 items=0 ppid=3808 pid=3907 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 01:36:13.257000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A May 17 01:36:13.503000 audit[3945]: NETFILTER_CFG table=filter:56 family=2 entries=1 op=nft_register_rule pid=3945 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 17 01:36:13.503000 audit[3945]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=fffff15c77b0 a2=0 a3=1 items=0 ppid=3808 pid=3945 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 
01:36:13.503000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A May 17 01:36:13.506000 audit[3954]: NETFILTER_CFG table=filter:57 family=2 entries=1 op=nft_register_rule pid=3954 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 17 01:36:13.506000 audit[3954]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffd9c8c0d0 a2=0 a3=1 items=0 ppid=3808 pid=3954 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 01:36:13.506000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D May 17 01:36:13.507000 audit[3955]: NETFILTER_CFG table=nat:58 family=2 entries=1 op=nft_register_chain pid=3955 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 17 01:36:13.507000 audit[3955]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffc1025350 a2=0 a3=1 items=0 ppid=3808 pid=3955 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 01:36:13.507000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 May 17 01:36:13.508649 env[2382]: time="2025-05-17T01:36:13.508620080Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:tigera-operator-7c5755cdcb-pzfdb,Uid:dbc962af-7c7a-460f-917b-8bc4f476d7ae,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"e851e175dd955e6dd125037ff13ce992c260d23e70395df6ac5241d22b95492e\"" May 17 01:36:13.509687 env[2382]: time="2025-05-17T01:36:13.509667000Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.0\"" May 17 01:36:13.509000 audit[3957]: NETFILTER_CFG table=nat:59 family=2 entries=1 op=nft_register_rule pid=3957 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 17 01:36:13.509000 audit[3957]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=524 a0=3 a1=ffffcca2b2b0 a2=0 a3=1 items=0 ppid=3808 pid=3957 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 01:36:13.509000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 May 17 01:36:13.512000 audit[3960]: NETFILTER_CFG table=nat:60 family=2 entries=1 op=nft_register_rule pid=3960 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 17 01:36:13.512000 audit[3960]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=fffff54df330 a2=0 a3=1 items=0 ppid=3808 pid=3960 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 01:36:13.512000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 May 17 01:36:13.513000 audit[3961]: NETFILTER_CFG table=nat:61 family=2 entries=1 
op=nft_register_chain pid=3961 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 17 01:36:13.513000 audit[3961]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffc0e34ac0 a2=0 a3=1 items=0 ppid=3808 pid=3961 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 01:36:13.513000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 May 17 01:36:13.515000 audit[3963]: NETFILTER_CFG table=nat:62 family=2 entries=1 op=nft_register_rule pid=3963 subj=system_u:system_r:kernel_t:s0 comm="iptables" May 17 01:36:13.515000 audit[3963]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=532 a0=3 a1=ffffe2676000 a2=0 a3=1 items=0 ppid=3808 pid=3963 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 01:36:13.515000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 May 17 01:36:13.535000 audit[3969]: NETFILTER_CFG table=filter:63 family=2 entries=8 op=nft_register_rule pid=3969 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 17 01:36:13.535000 audit[3969]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=ffffebc2caa0 a2=0 a3=1 items=0 ppid=3808 pid=3969 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 01:36:13.535000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 17 01:36:13.550000 audit[3969]: NETFILTER_CFG table=nat:64 family=2 entries=14 op=nft_register_chain pid=3969 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 17 01:36:13.550000 audit[3969]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5508 a0=3 a1=ffffebc2caa0 a2=0 a3=1 items=0 ppid=3808 pid=3969 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 01:36:13.550000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 17 01:36:13.551000 audit[3974]: NETFILTER_CFG table=filter:65 family=10 entries=1 op=nft_register_chain pid=3974 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 17 01:36:13.551000 audit[3974]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=108 a0=3 a1=fffff07d2900 a2=0 a3=1 items=0 ppid=3808 pid=3974 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 01:36:13.551000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 May 17 01:36:13.553000 audit[3976]: NETFILTER_CFG table=filter:66 family=10 entries=2 op=nft_register_chain pid=3976 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 17 01:36:13.553000 audit[3976]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=836 a0=3 a1=ffffffaf3c30 a2=0 a3=1 items=0 ppid=3808 pid=3976 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 
01:36:13.553000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C6520736572766963 May 17 01:36:13.556000 audit[3979]: NETFILTER_CFG table=filter:67 family=10 entries=2 op=nft_register_chain pid=3979 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 17 01:36:13.556000 audit[3979]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=836 a0=3 a1=ffffe50ff230 a2=0 a3=1 items=0 ppid=3808 pid=3979 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 01:36:13.556000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276 May 17 01:36:13.557000 audit[3980]: NETFILTER_CFG table=filter:68 family=10 entries=1 op=nft_register_chain pid=3980 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 17 01:36:13.557000 audit[3980]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffdecacd40 a2=0 a3=1 items=0 ppid=3808 pid=3980 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 01:36:13.557000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 May 17 01:36:13.559000 audit[3982]: NETFILTER_CFG table=filter:69 family=10 entries=1 op=nft_register_rule pid=3982 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 17 01:36:13.559000 audit[3982]: SYSCALL 
arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffedc76250 a2=0 a3=1 items=0 ppid=3808 pid=3982 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 01:36:13.559000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 May 17 01:36:13.560000 audit[3983]: NETFILTER_CFG table=filter:70 family=10 entries=1 op=nft_register_chain pid=3983 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 17 01:36:13.560000 audit[3983]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffff440fe60 a2=0 a3=1 items=0 ppid=3808 pid=3983 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 01:36:13.560000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 May 17 01:36:13.562000 audit[3985]: NETFILTER_CFG table=filter:71 family=10 entries=1 op=nft_register_rule pid=3985 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 17 01:36:13.562000 audit[3985]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=744 a0=3 a1=ffffd6a03010 a2=0 a3=1 items=0 ppid=3808 pid=3985 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 01:36:13.562000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B554245 May 17 01:36:13.565000 audit[3988]: NETFILTER_CFG table=filter:72 family=10 entries=2 op=nft_register_chain pid=3988 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 17 01:36:13.565000 audit[3988]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=828 a0=3 a1=ffffe26b9470 a2=0 a3=1 items=0 ppid=3808 pid=3988 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 01:36:13.565000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D May 17 01:36:13.566000 audit[3989]: NETFILTER_CFG table=filter:73 family=10 entries=1 op=nft_register_chain pid=3989 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 17 01:36:13.566000 audit[3989]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffd21f4af0 a2=0 a3=1 items=0 ppid=3808 pid=3989 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 01:36:13.566000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 May 17 01:36:13.568000 audit[3991]: NETFILTER_CFG table=filter:74 family=10 entries=1 op=nft_register_rule pid=3991 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 17 01:36:13.568000 audit[3991]: SYSCALL arch=c00000b7 syscall=211 
success=yes exit=528 a0=3 a1=ffffc134d980 a2=0 a3=1 items=0 ppid=3808 pid=3991 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 01:36:13.568000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 May 17 01:36:13.569000 audit[3992]: NETFILTER_CFG table=filter:75 family=10 entries=1 op=nft_register_chain pid=3992 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 17 01:36:13.569000 audit[3992]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=fffff32d8790 a2=0 a3=1 items=0 ppid=3808 pid=3992 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 01:36:13.569000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 May 17 01:36:13.571000 audit[3994]: NETFILTER_CFG table=filter:76 family=10 entries=1 op=nft_register_rule pid=3994 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 17 01:36:13.571000 audit[3994]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffd4c95550 a2=0 a3=1 items=0 ppid=3808 pid=3994 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 01:36:13.571000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A May 17 01:36:13.574000 audit[3997]: NETFILTER_CFG table=filter:77 family=10 entries=1 op=nft_register_rule pid=3997 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 17 01:36:13.574000 audit[3997]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffefbff370 a2=0 a3=1 items=0 ppid=3808 pid=3997 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 01:36:13.574000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D May 17 01:36:13.577000 audit[4000]: NETFILTER_CFG table=filter:78 family=10 entries=1 op=nft_register_rule pid=4000 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 17 01:36:13.577000 audit[4000]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=fffff9249fa0 a2=0 a3=1 items=0 ppid=3808 pid=4000 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 01:36:13.577000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C May 17 01:36:13.578000 audit[4001]: NETFILTER_CFG table=nat:79 family=10 entries=1 
op=nft_register_chain pid=4001 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 17 01:36:13.578000 audit[4001]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=fffff6fe5990 a2=0 a3=1 items=0 ppid=3808 pid=4001 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 01:36:13.578000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 May 17 01:36:13.580000 audit[4003]: NETFILTER_CFG table=nat:80 family=10 entries=2 op=nft_register_chain pid=4003 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 17 01:36:13.580000 audit[4003]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=600 a0=3 a1=fffff19b8540 a2=0 a3=1 items=0 ppid=3808 pid=4003 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 01:36:13.580000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 May 17 01:36:13.583000 audit[4006]: NETFILTER_CFG table=nat:81 family=10 entries=2 op=nft_register_chain pid=4006 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 17 01:36:13.583000 audit[4006]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=608 a0=3 a1=fffffd9ab830 a2=0 a3=1 items=0 ppid=3808 pid=4006 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 01:36:13.583000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 May 17 01:36:13.584000 audit[4007]: NETFILTER_CFG table=nat:82 family=10 entries=1 op=nft_register_chain pid=4007 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 17 01:36:13.584000 audit[4007]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffffc2645e0 a2=0 a3=1 items=0 ppid=3808 pid=4007 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 01:36:13.584000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 May 17 01:36:13.586000 audit[4009]: NETFILTER_CFG table=nat:83 family=10 entries=2 op=nft_register_chain pid=4009 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 17 01:36:13.586000 audit[4009]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=612 a0=3 a1=ffffd6011e00 a2=0 a3=1 items=0 ppid=3808 pid=4009 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 01:36:13.586000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 May 17 01:36:13.587000 audit[4010]: NETFILTER_CFG table=filter:84 family=10 entries=1 op=nft_register_chain pid=4010 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 17 01:36:13.587000 audit[4010]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffd9fcd5f0 a2=0 a3=1 items=0 ppid=3808 
pid=4010 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 01:36:13.587000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 May 17 01:36:13.589000 audit[4012]: NETFILTER_CFG table=filter:85 family=10 entries=1 op=nft_register_rule pid=4012 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 17 01:36:13.589000 audit[4012]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=228 a0=3 a1=ffffea0248c0 a2=0 a3=1 items=0 ppid=3808 pid=4012 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 01:36:13.589000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C May 17 01:36:13.592000 audit[4015]: NETFILTER_CFG table=filter:86 family=10 entries=1 op=nft_register_rule pid=4015 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" May 17 01:36:13.592000 audit[4015]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=228 a0=3 a1=ffffc7d19f90 a2=0 a3=1 items=0 ppid=3808 pid=4015 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 01:36:13.592000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C May 17 01:36:13.594000 audit[4017]: NETFILTER_CFG table=filter:87 family=10 entries=3 op=nft_register_rule pid=4017 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" May 17 01:36:13.594000 audit[4017]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2088 a0=3 
a1=ffffe58f15a0 a2=0 a3=1 items=0 ppid=3808 pid=4017 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 01:36:13.594000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 17 01:36:13.594000 audit[4017]: NETFILTER_CFG table=nat:88 family=10 entries=7 op=nft_register_chain pid=4017 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" May 17 01:36:13.594000 audit[4017]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2056 a0=3 a1=ffffe58f15a0 a2=0 a3=1 items=0 ppid=3808 pid=4017 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 01:36:13.594000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 17 01:36:13.996164 kubelet[3578]: I0517 01:36:13.996111 3578 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-q5c2b" podStartSLOduration=1.99609492 podStartE2EDuration="1.99609492s" podCreationTimestamp="2025-05-17 01:36:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-17 01:36:13.8379712 +0000 UTC m=+8.092335521" watchObservedRunningTime="2025-05-17 01:36:13.99609492 +0000 UTC m=+8.250459241" May 17 01:36:14.732257 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2469711250.mount: Deactivated successfully. 
May 17 01:36:15.268129 env[2382]: time="2025-05-17T01:36:15.268091800Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/tigera/operator:v1.38.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 01:36:15.268890 env[2382]: time="2025-05-17T01:36:15.268866920Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:171854d50ba608218142ad5d32c7dd12ce55d536f02872e56e7c04c1f0a96a6b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 01:36:15.270037 env[2382]: time="2025-05-17T01:36:15.270019000Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/tigera/operator:v1.38.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 01:36:15.271210 env[2382]: time="2025-05-17T01:36:15.271192600Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/tigera/operator@sha256:e0a34b265aebce1a2db906d8dad99190706e8bf3910cae626b9c2eb6bbb21775,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 01:36:15.271705 env[2382]: time="2025-05-17T01:36:15.271681960Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.0\" returns image reference \"sha256:171854d50ba608218142ad5d32c7dd12ce55d536f02872e56e7c04c1f0a96a6b\"" May 17 01:36:15.273449 env[2382]: time="2025-05-17T01:36:15.273426360Z" level=info msg="CreateContainer within sandbox \"e851e175dd955e6dd125037ff13ce992c260d23e70395df6ac5241d22b95492e\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" May 17 01:36:15.277187 env[2382]: time="2025-05-17T01:36:15.277163080Z" level=info msg="CreateContainer within sandbox \"e851e175dd955e6dd125037ff13ce992c260d23e70395df6ac5241d22b95492e\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"e764a61dc3ed6f7ed540bb80efdb8910bf135cd040d27449938106686e761442\"" May 17 01:36:15.277457 env[2382]: time="2025-05-17T01:36:15.277432800Z" level=info msg="StartContainer for 
\"e764a61dc3ed6f7ed540bb80efdb8910bf135cd040d27449938106686e761442\"" May 17 01:36:15.313012 env[2382]: time="2025-05-17T01:36:15.312978280Z" level=info msg="StartContainer for \"e764a61dc3ed6f7ed540bb80efdb8910bf135cd040d27449938106686e761442\" returns successfully" May 17 01:36:15.664306 update_engine[2372]: I0517 01:36:15.663913 2372 update_attempter.cc:509] Updating boot flags... May 17 01:36:15.847957 kubelet[3578]: I0517 01:36:15.847910 3578 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7c5755cdcb-pzfdb" podStartSLOduration=2.08481244 podStartE2EDuration="3.84789284s" podCreationTimestamp="2025-05-17 01:36:12 +0000 UTC" firstStartedPulling="2025-05-17 01:36:13.5093296 +0000 UTC m=+7.763693921" lastFinishedPulling="2025-05-17 01:36:15.27240992 +0000 UTC m=+9.526774321" observedRunningTime="2025-05-17 01:36:15.8477118 +0000 UTC m=+10.102076121" watchObservedRunningTime="2025-05-17 01:36:15.84789284 +0000 UTC m=+10.102257161" May 17 01:36:17.024987 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e764a61dc3ed6f7ed540bb80efdb8910bf135cd040d27449938106686e761442-rootfs.mount: Deactivated successfully. 
May 17 01:36:17.532826 env[2382]: time="2025-05-17T01:36:17.532767538Z" level=info msg="shim disconnected" id=e764a61dc3ed6f7ed540bb80efdb8910bf135cd040d27449938106686e761442 May 17 01:36:17.533257 env[2382]: time="2025-05-17T01:36:17.533235653Z" level=warning msg="cleaning up after shim disconnected" id=e764a61dc3ed6f7ed540bb80efdb8910bf135cd040d27449938106686e761442 namespace=k8s.io May 17 01:36:17.533318 env[2382]: time="2025-05-17T01:36:17.533305052Z" level=info msg="cleaning up dead shim" May 17 01:36:17.538888 env[2382]: time="2025-05-17T01:36:17.538851913Z" level=warning msg="cleanup warnings time=\"2025-05-17T01:36:17Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4152 runtime=io.containerd.runc.v2\n" May 17 01:36:17.839020 kubelet[3578]: I0517 01:36:17.838935 3578 scope.go:117] "RemoveContainer" containerID="e764a61dc3ed6f7ed540bb80efdb8910bf135cd040d27449938106686e761442" May 17 01:36:17.840537 env[2382]: time="2025-05-17T01:36:17.840500253Z" level=info msg="CreateContainer within sandbox \"e851e175dd955e6dd125037ff13ce992c260d23e70395df6ac5241d22b95492e\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}" May 17 01:36:17.844311 env[2382]: time="2025-05-17T01:36:17.844279573Z" level=info msg="CreateContainer within sandbox \"e851e175dd955e6dd125037ff13ce992c260d23e70395df6ac5241d22b95492e\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"05406b0fc07bf44e8a4a0758ff99a07307cd07076cfa342a724ae80f78c50e64\"" May 17 01:36:17.844692 env[2382]: time="2025-05-17T01:36:17.844663289Z" level=info msg="StartContainer for \"05406b0fc07bf44e8a4a0758ff99a07307cd07076cfa342a724ae80f78c50e64\"" May 17 01:36:17.846783 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount111483440.mount: Deactivated successfully. 
May 17 01:36:17.894932 env[2382]: time="2025-05-17T01:36:17.894891793Z" level=info msg="StartContainer for \"05406b0fc07bf44e8a4a0758ff99a07307cd07076cfa342a724ae80f78c50e64\" returns successfully" May 17 01:36:20.283647 sudo[2596]: pam_unix(sudo:session): session closed for user root May 17 01:36:20.283000 audit[2596]: USER_END pid=2596 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' May 17 01:36:20.295480 kernel: kauditd_printk_skb: 143 callbacks suppressed May 17 01:36:20.295648 kernel: audit: type=1106 audit(1747445780.283:266): pid=2596 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' May 17 01:36:20.324225 sshd[2592]: pam_unix(sshd:session): session closed for user core May 17 01:36:20.326640 systemd-logind[2371]: Session 9 logged out. Waiting for processes to exit. May 17 01:36:20.326870 systemd[1]: sshd@6-86.109.9.158:22-147.75.109.163:55778.service: Deactivated successfully. May 17 01:36:20.327609 systemd[1]: session-9.scope: Deactivated successfully. May 17 01:36:20.327972 systemd-logind[2371]: Removed session 9. May 17 01:36:20.283000 audit[2596]: CRED_DISP pid=2596 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' May 17 01:36:20.378930 kernel: audit: type=1104 audit(1747445780.283:267): pid=2596 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' May 17 01:36:20.324000 audit[2592]: USER_END pid=2592 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' May 17 01:36:20.433970 kernel: audit: type=1106 audit(1747445780.324:268): pid=2592 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' May 17 01:36:20.324000 audit[2592]: CRED_DISP pid=2592 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' May 17 01:36:20.477212 kernel: audit: type=1104 audit(1747445780.324:269): pid=2592 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' May 17 01:36:20.477248 kernel: audit: type=1131 audit(1747445780.326:270): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-86.109.9.158:22-147.75.109.163:55778 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 01:36:20.326000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-86.109.9.158:22-147.75.109.163:55778 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 17 01:36:21.775000 audit[4430]: NETFILTER_CFG table=filter:89 family=2 entries=15 op=nft_register_rule pid=4430 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 17 01:36:21.804870 kernel: audit: type=1325 audit(1747445781.775:271): table=filter:89 family=2 entries=15 op=nft_register_rule pid=4430 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 17 01:36:21.775000 audit[4430]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5992 a0=3 a1=ffffe40fcb10 a2=0 a3=1 items=0 ppid=3808 pid=4430 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 01:36:21.775000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 17 01:36:21.888685 kernel: audit: type=1300 audit(1747445781.775:271): arch=c00000b7 syscall=211 success=yes exit=5992 a0=3 a1=ffffe40fcb10 a2=0 a3=1 items=0 ppid=3808 pid=4430 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 01:36:21.888722 kernel: audit: type=1327 audit(1747445781.775:271): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 17 01:36:21.893000 audit[4430]: NETFILTER_CFG table=nat:90 family=2 entries=12 op=nft_register_rule pid=4430 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 17 01:36:21.893000 audit[4430]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffe40fcb10 a2=0 a3=1 items=0 ppid=3808 pid=4430 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 
01:36:21.978746 kernel: audit: type=1325 audit(1747445781.893:272): table=nat:90 family=2 entries=12 op=nft_register_rule pid=4430 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 17 01:36:21.978834 kernel: audit: type=1300 audit(1747445781.893:272): arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffe40fcb10 a2=0 a3=1 items=0 ppid=3808 pid=4430 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 01:36:21.893000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 17 01:36:21.988000 audit[4432]: NETFILTER_CFG table=filter:91 family=2 entries=16 op=nft_register_rule pid=4432 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 17 01:36:21.988000 audit[4432]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5992 a0=3 a1=ffffef4b8d50 a2=0 a3=1 items=0 ppid=3808 pid=4432 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 01:36:21.988000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 17 01:36:21.995000 audit[4432]: NETFILTER_CFG table=nat:92 family=2 entries=12 op=nft_register_rule pid=4432 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 17 01:36:21.995000 audit[4432]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffef4b8d50 a2=0 a3=1 items=0 ppid=3808 pid=4432 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 01:36:21.995000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 17 01:36:24.789000 audit[4437]: NETFILTER_CFG table=filter:93 family=2 entries=17 op=nft_register_rule pid=4437 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 17 01:36:24.789000 audit[4437]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=6736 a0=3 a1=ffffd97c1330 a2=0 a3=1 items=0 ppid=3808 pid=4437 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 01:36:24.789000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 17 01:36:24.796000 audit[4437]: NETFILTER_CFG table=nat:94 family=2 entries=12 op=nft_register_rule pid=4437 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 17 01:36:24.796000 audit[4437]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffd97c1330 a2=0 a3=1 items=0 ppid=3808 pid=4437 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 01:36:24.796000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 17 01:36:24.815000 audit[4439]: NETFILTER_CFG table=filter:95 family=2 entries=18 op=nft_register_rule pid=4439 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 17 01:36:24.815000 audit[4439]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=6736 a0=3 a1=ffffc2ca7790 a2=0 a3=1 items=0 ppid=3808 pid=4439 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) 
May 17 01:36:24.815000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 17 01:36:24.826000 audit[4439]: NETFILTER_CFG table=nat:96 family=2 entries=12 op=nft_register_rule pid=4439 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 17 01:36:24.826000 audit[4439]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffc2ca7790 a2=0 a3=1 items=0 ppid=3808 pid=4439 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 01:36:24.826000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 17 01:36:24.828639 kubelet[3578]: I0517 01:36:24.828612 3578 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/e5d0ed29-a536-4526-8ceb-2e7d0cc537b7-typha-certs\") pod \"calico-typha-5dcb5bd6df-gp24s\" (UID: \"e5d0ed29-a536-4526-8ceb-2e7d0cc537b7\") " pod="calico-system/calico-typha-5dcb5bd6df-gp24s" May 17 01:36:24.828965 kubelet[3578]: I0517 01:36:24.828650 3578 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9p5q2\" (UniqueName: \"kubernetes.io/projected/e5d0ed29-a536-4526-8ceb-2e7d0cc537b7-kube-api-access-9p5q2\") pod \"calico-typha-5dcb5bd6df-gp24s\" (UID: \"e5d0ed29-a536-4526-8ceb-2e7d0cc537b7\") " pod="calico-system/calico-typha-5dcb5bd6df-gp24s" May 17 01:36:24.828965 kubelet[3578]: I0517 01:36:24.828675 3578 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e5d0ed29-a536-4526-8ceb-2e7d0cc537b7-tigera-ca-bundle\") pod \"calico-typha-5dcb5bd6df-gp24s\" (UID: 
\"e5d0ed29-a536-4526-8ceb-2e7d0cc537b7\") " pod="calico-system/calico-typha-5dcb5bd6df-gp24s" May 17 01:36:25.103747 env[2382]: time="2025-05-17T01:36:25.103652013Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5dcb5bd6df-gp24s,Uid:e5d0ed29-a536-4526-8ceb-2e7d0cc537b7,Namespace:calico-system,Attempt:0,}" May 17 01:36:25.111991 env[2382]: time="2025-05-17T01:36:25.111936920Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 01:36:25.112107 env[2382]: time="2025-05-17T01:36:25.111975080Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 01:36:25.112107 env[2382]: time="2025-05-17T01:36:25.111985240Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 01:36:25.112172 env[2382]: time="2025-05-17T01:36:25.112111599Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/0e7e8798a65a18532d7854bfb73e6dd7cd7bd27765033291b5a42f8dd4cf9989 pid=4448 runtime=io.containerd.runc.v2 May 17 01:36:25.130336 kubelet[3578]: I0517 01:36:25.130310 3578 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a5ccdb10-c847-4931-9de1-dce58c5c3f6c-lib-modules\") pod \"calico-node-9kj5p\" (UID: \"a5ccdb10-c847-4931-9de1-dce58c5c3f6c\") " pod="calico-system/calico-node-9kj5p" May 17 01:36:25.130489 kubelet[3578]: I0517 01:36:25.130474 3578 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/a5ccdb10-c847-4931-9de1-dce58c5c3f6c-policysync\") pod \"calico-node-9kj5p\" (UID: \"a5ccdb10-c847-4931-9de1-dce58c5c3f6c\") " pod="calico-system/calico-node-9kj5p" May 17 
01:36:25.130569 kubelet[3578]: I0517 01:36:25.130557 3578 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a5ccdb10-c847-4931-9de1-dce58c5c3f6c-tigera-ca-bundle\") pod \"calico-node-9kj5p\" (UID: \"a5ccdb10-c847-4931-9de1-dce58c5c3f6c\") " pod="calico-system/calico-node-9kj5p" May 17 01:36:25.130635 kubelet[3578]: I0517 01:36:25.130623 3578 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/a5ccdb10-c847-4931-9de1-dce58c5c3f6c-var-run-calico\") pod \"calico-node-9kj5p\" (UID: \"a5ccdb10-c847-4931-9de1-dce58c5c3f6c\") " pod="calico-system/calico-node-9kj5p" May 17 01:36:25.130704 kubelet[3578]: I0517 01:36:25.130693 3578 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a5ccdb10-c847-4931-9de1-dce58c5c3f6c-xtables-lock\") pod \"calico-node-9kj5p\" (UID: \"a5ccdb10-c847-4931-9de1-dce58c5c3f6c\") " pod="calico-system/calico-node-9kj5p" May 17 01:36:25.130785 kubelet[3578]: I0517 01:36:25.130768 3578 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v9vs6\" (UniqueName: \"kubernetes.io/projected/a5ccdb10-c847-4931-9de1-dce58c5c3f6c-kube-api-access-v9vs6\") pod \"calico-node-9kj5p\" (UID: \"a5ccdb10-c847-4931-9de1-dce58c5c3f6c\") " pod="calico-system/calico-node-9kj5p" May 17 01:36:25.130866 kubelet[3578]: I0517 01:36:25.130850 3578 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/a5ccdb10-c847-4931-9de1-dce58c5c3f6c-cni-bin-dir\") pod \"calico-node-9kj5p\" (UID: \"a5ccdb10-c847-4931-9de1-dce58c5c3f6c\") " pod="calico-system/calico-node-9kj5p" May 17 01:36:25.130948 kubelet[3578]: I0517 01:36:25.130936 
3578 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/a5ccdb10-c847-4931-9de1-dce58c5c3f6c-node-certs\") pod \"calico-node-9kj5p\" (UID: \"a5ccdb10-c847-4931-9de1-dce58c5c3f6c\") " pod="calico-system/calico-node-9kj5p" May 17 01:36:25.131027 kubelet[3578]: I0517 01:36:25.131012 3578 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/a5ccdb10-c847-4931-9de1-dce58c5c3f6c-var-lib-calico\") pod \"calico-node-9kj5p\" (UID: \"a5ccdb10-c847-4931-9de1-dce58c5c3f6c\") " pod="calico-system/calico-node-9kj5p" May 17 01:36:25.131096 kubelet[3578]: I0517 01:36:25.131083 3578 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/a5ccdb10-c847-4931-9de1-dce58c5c3f6c-cni-log-dir\") pod \"calico-node-9kj5p\" (UID: \"a5ccdb10-c847-4931-9de1-dce58c5c3f6c\") " pod="calico-system/calico-node-9kj5p" May 17 01:36:25.131163 kubelet[3578]: I0517 01:36:25.131152 3578 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/a5ccdb10-c847-4931-9de1-dce58c5c3f6c-cni-net-dir\") pod \"calico-node-9kj5p\" (UID: \"a5ccdb10-c847-4931-9de1-dce58c5c3f6c\") " pod="calico-system/calico-node-9kj5p" May 17 01:36:25.131237 kubelet[3578]: I0517 01:36:25.131224 3578 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/a5ccdb10-c847-4931-9de1-dce58c5c3f6c-flexvol-driver-host\") pod \"calico-node-9kj5p\" (UID: \"a5ccdb10-c847-4931-9de1-dce58c5c3f6c\") " pod="calico-system/calico-node-9kj5p" May 17 01:36:25.148466 env[2382]: time="2025-05-17T01:36:25.148425688Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-typha-5dcb5bd6df-gp24s,Uid:e5d0ed29-a536-4526-8ceb-2e7d0cc537b7,Namespace:calico-system,Attempt:0,} returns sandbox id \"0e7e8798a65a18532d7854bfb73e6dd7cd7bd27765033291b5a42f8dd4cf9989\"" May 17 01:36:25.149640 env[2382]: time="2025-05-17T01:36:25.149617000Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.0\"" May 17 01:36:25.232980 kubelet[3578]: E0517 01:36:25.232955 3578 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 01:36:25.233089 kubelet[3578]: W0517 01:36:25.233074 3578 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 01:36:25.233206 kubelet[3578]: E0517 01:36:25.233192 3578 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 01:36:25.234877 kubelet[3578]: E0517 01:36:25.234853 3578 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 01:36:25.234971 kubelet[3578]: W0517 01:36:25.234957 3578 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 01:36:25.235027 kubelet[3578]: E0517 01:36:25.235016 3578 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 01:36:25.240741 kubelet[3578]: E0517 01:36:25.240726 3578 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 01:36:25.240847 kubelet[3578]: W0517 01:36:25.240834 3578 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 01:36:25.240919 kubelet[3578]: E0517 01:36:25.240908 3578 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 01:36:25.394993 kubelet[3578]: E0517 01:36:25.394959 3578 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-jxv7j" podUID="865bbbf9-e470-43da-b4e3-f1b9c7f9d88b" May 17 01:36:25.400963 env[2382]: time="2025-05-17T01:36:25.400932519Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-9kj5p,Uid:a5ccdb10-c847-4931-9de1-dce58c5c3f6c,Namespace:calico-system,Attempt:0,}" May 17 01:36:25.408911 env[2382]: time="2025-05-17T01:36:25.408864509Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 01:36:25.409018 env[2382]: time="2025-05-17T01:36:25.408902549Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 01:36:25.409018 env[2382]: time="2025-05-17T01:36:25.408913149Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 01:36:25.409089 env[2382]: time="2025-05-17T01:36:25.409047148Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/e26ace54e5182ca0cea4fce43501917ed6091b24ac35c4dfe9e62e7b4ce15f50 pid=4505 runtime=io.containerd.runc.v2 May 17 01:36:25.445242 kubelet[3578]: E0517 01:36:25.443970 3578 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 01:36:25.445242 kubelet[3578]: W0517 01:36:25.443993 3578 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 01:36:25.445242 kubelet[3578]: E0517 01:36:25.444013 3578 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 01:36:25.445242 kubelet[3578]: E0517 01:36:25.444205 3578 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 01:36:25.445242 kubelet[3578]: W0517 01:36:25.444212 3578 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 01:36:25.445242 kubelet[3578]: E0517 01:36:25.444220 3578 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 01:36:25.445242 kubelet[3578]: E0517 01:36:25.444415 3578 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 01:36:25.445242 kubelet[3578]: W0517 01:36:25.444425 3578 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 01:36:25.445242 kubelet[3578]: E0517 01:36:25.444433 3578 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 01:36:25.445242 kubelet[3578]: E0517 01:36:25.444557 3578 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 01:36:25.445576 kubelet[3578]: W0517 01:36:25.444563 3578 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 01:36:25.445576 kubelet[3578]: E0517 01:36:25.444570 3578 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 01:36:25.445576 kubelet[3578]: E0517 01:36:25.444702 3578 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 01:36:25.445576 kubelet[3578]: W0517 01:36:25.444709 3578 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 01:36:25.445576 kubelet[3578]: E0517 01:36:25.444716 3578 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 01:36:25.445576 kubelet[3578]: E0517 01:36:25.444832 3578 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 01:36:25.445576 kubelet[3578]: W0517 01:36:25.444838 3578 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 01:36:25.445576 kubelet[3578]: E0517 01:36:25.444845 3578 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 01:36:25.445576 kubelet[3578]: E0517 01:36:25.444969 3578 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 01:36:25.445576 kubelet[3578]: W0517 01:36:25.444974 3578 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 01:36:25.445782 kubelet[3578]: E0517 01:36:25.444981 3578 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 01:36:25.445782 kubelet[3578]: E0517 01:36:25.445098 3578 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 01:36:25.445782 kubelet[3578]: W0517 01:36:25.445104 3578 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 01:36:25.445782 kubelet[3578]: E0517 01:36:25.445111 3578 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 01:36:25.445782 kubelet[3578]: E0517 01:36:25.445243 3578 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 01:36:25.445782 kubelet[3578]: W0517 01:36:25.445249 3578 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 01:36:25.445782 kubelet[3578]: E0517 01:36:25.445255 3578 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 01:36:25.445782 kubelet[3578]: E0517 01:36:25.445442 3578 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 01:36:25.445782 kubelet[3578]: W0517 01:36:25.445448 3578 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 01:36:25.445782 kubelet[3578]: E0517 01:36:25.445455 3578 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 01:36:25.445997 kubelet[3578]: E0517 01:36:25.445576 3578 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 01:36:25.445997 kubelet[3578]: W0517 01:36:25.445582 3578 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 01:36:25.445997 kubelet[3578]: E0517 01:36:25.445589 3578 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 01:36:25.445997 kubelet[3578]: E0517 01:36:25.445749 3578 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 01:36:25.445997 kubelet[3578]: W0517 01:36:25.445755 3578 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 01:36:25.445997 kubelet[3578]: E0517 01:36:25.445762 3578 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 01:36:25.445997 kubelet[3578]: E0517 01:36:25.445956 3578 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 01:36:25.445997 kubelet[3578]: W0517 01:36:25.445962 3578 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 01:36:25.445997 kubelet[3578]: E0517 01:36:25.445969 3578 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 01:36:25.446175 kubelet[3578]: E0517 01:36:25.446141 3578 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 01:36:25.446175 kubelet[3578]: W0517 01:36:25.446147 3578 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 01:36:25.446175 kubelet[3578]: E0517 01:36:25.446153 3578 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 01:36:25.446284 kubelet[3578]: E0517 01:36:25.446271 3578 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 01:36:25.446394 kubelet[3578]: W0517 01:36:25.446382 3578 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 01:36:25.446449 kubelet[3578]: E0517 01:36:25.446439 3578 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 01:36:25.446669 kubelet[3578]: E0517 01:36:25.446659 3578 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 01:36:25.446737 kubelet[3578]: W0517 01:36:25.446726 3578 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 01:36:25.446804 kubelet[3578]: E0517 01:36:25.446793 3578 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 01:36:25.447082 kubelet[3578]: E0517 01:36:25.447070 3578 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 01:36:25.447148 kubelet[3578]: W0517 01:36:25.447137 3578 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 01:36:25.447214 kubelet[3578]: E0517 01:36:25.447203 3578 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 01:36:25.447423 kubelet[3578]: E0517 01:36:25.447413 3578 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 01:36:25.447485 kubelet[3578]: W0517 01:36:25.447474 3578 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 01:36:25.447544 kubelet[3578]: E0517 01:36:25.447534 3578 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 01:36:25.447740 kubelet[3578]: E0517 01:36:25.447730 3578 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 01:36:25.447805 kubelet[3578]: W0517 01:36:25.447794 3578 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 01:36:25.447867 kubelet[3578]: E0517 01:36:25.447852 3578 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 01:36:25.448061 kubelet[3578]: E0517 01:36:25.448051 3578 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 01:36:25.448126 kubelet[3578]: W0517 01:36:25.448114 3578 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 01:36:25.448196 kubelet[3578]: E0517 01:36:25.448184 3578 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 01:36:25.451400 env[2382]: time="2025-05-17T01:36:25.451365118Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-9kj5p,Uid:a5ccdb10-c847-4931-9de1-dce58c5c3f6c,Namespace:calico-system,Attempt:0,} returns sandbox id \"e26ace54e5182ca0cea4fce43501917ed6091b24ac35c4dfe9e62e7b4ce15f50\"" May 17 01:36:25.544552 kubelet[3578]: E0517 01:36:25.544530 3578 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 01:36:25.544673 kubelet[3578]: W0517 01:36:25.544659 3578 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 01:36:25.544753 kubelet[3578]: E0517 01:36:25.544741 3578 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 01:36:25.544830 kubelet[3578]: I0517 01:36:25.544818 3578 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/865bbbf9-e470-43da-b4e3-f1b9c7f9d88b-kubelet-dir\") pod \"csi-node-driver-jxv7j\" (UID: \"865bbbf9-e470-43da-b4e3-f1b9c7f9d88b\") " pod="calico-system/csi-node-driver-jxv7j" May 17 01:36:25.545094 kubelet[3578]: E0517 01:36:25.545070 3578 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 01:36:25.545209 kubelet[3578]: W0517 01:36:25.545193 3578 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 01:36:25.545285 kubelet[3578]: E0517 01:36:25.545272 3578 plugins.go:691] "Error dynamically probing plugins" err="error 
creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 01:36:25.545523 kubelet[3578]: E0517 01:36:25.545512 3578 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 01:36:25.545597 kubelet[3578]: W0517 01:36:25.545585 3578 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 01:36:25.545660 kubelet[3578]: E0517 01:36:25.545649 3578 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 01:36:25.545945 kubelet[3578]: E0517 01:36:25.545926 3578 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 01:36:25.546029 kubelet[3578]: W0517 01:36:25.546016 3578 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 01:36:25.546097 kubelet[3578]: E0517 01:36:25.546084 3578 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 01:36:25.546165 kubelet[3578]: I0517 01:36:25.546152 3578 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xrdlh\" (UniqueName: \"kubernetes.io/projected/865bbbf9-e470-43da-b4e3-f1b9c7f9d88b-kube-api-access-xrdlh\") pod \"csi-node-driver-jxv7j\" (UID: \"865bbbf9-e470-43da-b4e3-f1b9c7f9d88b\") " pod="calico-system/csi-node-driver-jxv7j" May 17 01:36:25.546456 kubelet[3578]: E0517 01:36:25.546438 3578 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 01:36:25.546560 kubelet[3578]: W0517 01:36:25.546545 3578 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 01:36:25.546625 kubelet[3578]: E0517 01:36:25.546614 3578 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 01:36:25.546921 kubelet[3578]: E0517 01:36:25.546911 3578 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 01:36:25.546987 kubelet[3578]: W0517 01:36:25.546975 3578 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 01:36:25.547045 kubelet[3578]: E0517 01:36:25.547034 3578 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 01:36:25.547275 kubelet[3578]: E0517 01:36:25.547260 3578 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 01:36:25.547365 kubelet[3578]: W0517 01:36:25.547352 3578 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 01:36:25.547419 kubelet[3578]: E0517 01:36:25.547409 3578 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 01:36:25.547491 kubelet[3578]: I0517 01:36:25.547476 3578 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/865bbbf9-e470-43da-b4e3-f1b9c7f9d88b-registration-dir\") pod \"csi-node-driver-jxv7j\" (UID: \"865bbbf9-e470-43da-b4e3-f1b9c7f9d88b\") " pod="calico-system/csi-node-driver-jxv7j" May 17 01:36:25.547737 kubelet[3578]: E0517 01:36:25.547724 3578 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 01:36:25.547818 kubelet[3578]: W0517 01:36:25.547807 3578 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 01:36:25.547880 kubelet[3578]: E0517 01:36:25.547870 3578 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 01:36:25.548113 kubelet[3578]: E0517 01:36:25.548097 3578 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 01:36:25.548189 kubelet[3578]: W0517 01:36:25.548176 3578 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 01:36:25.548261 kubelet[3578]: E0517 01:36:25.548250 3578 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 01:36:25.548326 kubelet[3578]: I0517 01:36:25.548311 3578 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/865bbbf9-e470-43da-b4e3-f1b9c7f9d88b-varrun\") pod \"csi-node-driver-jxv7j\" (UID: \"865bbbf9-e470-43da-b4e3-f1b9c7f9d88b\") " pod="calico-system/csi-node-driver-jxv7j" May 17 01:36:25.548537 kubelet[3578]: E0517 01:36:25.548524 3578 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 01:36:25.548608 kubelet[3578]: W0517 01:36:25.548597 3578 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 01:36:25.548668 kubelet[3578]: E0517 01:36:25.548658 3578 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 01:36:25.548895 kubelet[3578]: E0517 01:36:25.548884 3578 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 01:36:25.548963 kubelet[3578]: W0517 01:36:25.548952 3578 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 01:36:25.549017 kubelet[3578]: E0517 01:36:25.549007 3578 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 01:36:25.549295 kubelet[3578]: E0517 01:36:25.549284 3578 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 01:36:25.549359 kubelet[3578]: W0517 01:36:25.549348 3578 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 01:36:25.549431 kubelet[3578]: E0517 01:36:25.549421 3578 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 01:36:25.549731 kubelet[3578]: E0517 01:36:25.549720 3578 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 01:36:25.549804 kubelet[3578]: W0517 01:36:25.549792 3578 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 01:36:25.549864 kubelet[3578]: E0517 01:36:25.549850 3578 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 01:36:25.549941 kubelet[3578]: I0517 01:36:25.549926 3578 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/865bbbf9-e470-43da-b4e3-f1b9c7f9d88b-socket-dir\") pod \"csi-node-driver-jxv7j\" (UID: \"865bbbf9-e470-43da-b4e3-f1b9c7f9d88b\") " pod="calico-system/csi-node-driver-jxv7j" May 17 01:36:25.550169 kubelet[3578]: E0517 01:36:25.550150 3578 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 01:36:25.550255 kubelet[3578]: W0517 01:36:25.550242 3578 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 01:36:25.550309 kubelet[3578]: E0517 01:36:25.550299 3578 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 01:36:25.550594 kubelet[3578]: E0517 01:36:25.550583 3578 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 01:36:25.550669 kubelet[3578]: W0517 01:36:25.550659 3578 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 01:36:25.550721 kubelet[3578]: E0517 01:36:25.550711 3578 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 01:36:25.651336 kubelet[3578]: E0517 01:36:25.651268 3578 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 01:36:25.651435 kubelet[3578]: W0517 01:36:25.651421 3578 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 01:36:25.651505 kubelet[3578]: E0517 01:36:25.651493 3578 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 01:36:25.651775 kubelet[3578]: E0517 01:36:25.651763 3578 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 01:36:25.651864 kubelet[3578]: W0517 01:36:25.651845 3578 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 01:36:25.651927 kubelet[3578]: E0517 01:36:25.651915 3578 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 01:36:25.652149 kubelet[3578]: E0517 01:36:25.652137 3578 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 01:36:25.652221 kubelet[3578]: W0517 01:36:25.652209 3578 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 01:36:25.652297 kubelet[3578]: E0517 01:36:25.652285 3578 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 01:36:25.652551 kubelet[3578]: E0517 01:36:25.652539 3578 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 01:36:25.652682 kubelet[3578]: W0517 01:36:25.652668 3578 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 01:36:25.652750 kubelet[3578]: E0517 01:36:25.652739 3578 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 01:36:25.653009 kubelet[3578]: E0517 01:36:25.652988 3578 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 01:36:25.653102 kubelet[3578]: W0517 01:36:25.653088 3578 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 01:36:25.653183 kubelet[3578]: E0517 01:36:25.653171 3578 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 01:36:25.653413 kubelet[3578]: E0517 01:36:25.653393 3578 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 01:36:25.653515 kubelet[3578]: W0517 01:36:25.653501 3578 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 01:36:25.653588 kubelet[3578]: E0517 01:36:25.653577 3578 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 01:36:25.653804 kubelet[3578]: E0517 01:36:25.653792 3578 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 01:36:25.653887 kubelet[3578]: W0517 01:36:25.653873 3578 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 01:36:25.653968 kubelet[3578]: E0517 01:36:25.653946 3578 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 01:36:25.654108 kubelet[3578]: E0517 01:36:25.654095 3578 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 01:36:25.654190 kubelet[3578]: W0517 01:36:25.654178 3578 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 01:36:25.654266 kubelet[3578]: E0517 01:36:25.654250 3578 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 01:36:25.654406 kubelet[3578]: E0517 01:36:25.654393 3578 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 01:36:25.654477 kubelet[3578]: W0517 01:36:25.654465 3578 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 01:36:25.654547 kubelet[3578]: E0517 01:36:25.654534 3578 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 01:36:25.654716 kubelet[3578]: E0517 01:36:25.654703 3578 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 01:36:25.654794 kubelet[3578]: W0517 01:36:25.654780 3578 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 01:36:25.654878 kubelet[3578]: E0517 01:36:25.654856 3578 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 01:36:25.655012 kubelet[3578]: E0517 01:36:25.654998 3578 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 01:36:25.655084 kubelet[3578]: W0517 01:36:25.655072 3578 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 01:36:25.655168 kubelet[3578]: E0517 01:36:25.655154 3578 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 01:36:25.655336 kubelet[3578]: E0517 01:36:25.655322 3578 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 01:36:25.655417 kubelet[3578]: W0517 01:36:25.655406 3578 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 01:36:25.655490 kubelet[3578]: E0517 01:36:25.655476 3578 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 01:36:25.655655 kubelet[3578]: E0517 01:36:25.655642 3578 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 01:36:25.655728 kubelet[3578]: W0517 01:36:25.655716 3578 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 01:36:25.655792 kubelet[3578]: E0517 01:36:25.655779 3578 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 01:36:25.655939 kubelet[3578]: E0517 01:36:25.655925 3578 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 01:36:25.656012 kubelet[3578]: W0517 01:36:25.656000 3578 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 01:36:25.656091 kubelet[3578]: E0517 01:36:25.656078 3578 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 01:36:25.656325 kubelet[3578]: E0517 01:36:25.656311 3578 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 01:36:25.656403 kubelet[3578]: W0517 01:36:25.656391 3578 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 01:36:25.656468 kubelet[3578]: E0517 01:36:25.656455 3578 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 01:36:25.656702 kubelet[3578]: E0517 01:36:25.656690 3578 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 01:36:25.656772 kubelet[3578]: W0517 01:36:25.656760 3578 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 01:36:25.656843 kubelet[3578]: E0517 01:36:25.656830 3578 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 01:36:25.657005 kubelet[3578]: E0517 01:36:25.656991 3578 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 01:36:25.657079 kubelet[3578]: W0517 01:36:25.657068 3578 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 01:36:25.657151 kubelet[3578]: E0517 01:36:25.657137 3578 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 01:36:25.657277 kubelet[3578]: E0517 01:36:25.657265 3578 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 01:36:25.657346 kubelet[3578]: W0517 01:36:25.657335 3578 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 01:36:25.657423 kubelet[3578]: E0517 01:36:25.657403 3578 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 01:36:25.657550 kubelet[3578]: E0517 01:36:25.657537 3578 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 01:36:25.657622 kubelet[3578]: W0517 01:36:25.657610 3578 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 01:36:25.657691 kubelet[3578]: E0517 01:36:25.657676 3578 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 01:36:25.657826 kubelet[3578]: E0517 01:36:25.657813 3578 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 01:36:25.657915 kubelet[3578]: W0517 01:36:25.657902 3578 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 01:36:25.657989 kubelet[3578]: E0517 01:36:25.657974 3578 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 01:36:25.658176 kubelet[3578]: E0517 01:36:25.658161 3578 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 01:36:25.658248 kubelet[3578]: W0517 01:36:25.658236 3578 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 01:36:25.658318 kubelet[3578]: E0517 01:36:25.658303 3578 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 01:36:25.658474 kubelet[3578]: E0517 01:36:25.658461 3578 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 01:36:25.658554 kubelet[3578]: W0517 01:36:25.658542 3578 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 01:36:25.658618 kubelet[3578]: E0517 01:36:25.658607 3578 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 01:36:25.658852 kubelet[3578]: E0517 01:36:25.658837 3578 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 01:36:25.658952 kubelet[3578]: W0517 01:36:25.658938 3578 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 01:36:25.659012 kubelet[3578]: E0517 01:36:25.659002 3578 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 01:36:25.659234 kubelet[3578]: E0517 01:36:25.659220 3578 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 01:36:25.659318 kubelet[3578]: W0517 01:36:25.659305 3578 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 01:36:25.659385 kubelet[3578]: E0517 01:36:25.659374 3578 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 01:36:25.659619 kubelet[3578]: E0517 01:36:25.659605 3578 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 01:36:25.659703 kubelet[3578]: W0517 01:36:25.659689 3578 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 01:36:25.659764 kubelet[3578]: E0517 01:36:25.659752 3578 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 01:36:25.666113 kubelet[3578]: E0517 01:36:25.666095 3578 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 01:36:25.666207 kubelet[3578]: W0517 01:36:25.666194 3578 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 01:36:25.666280 kubelet[3578]: E0517 01:36:25.666268 3578 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 01:36:25.849000 audit[4602]: NETFILTER_CFG table=filter:97 family=2 entries=20 op=nft_register_rule pid=4602 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 17 01:36:25.861690 kernel: kauditd_printk_skb: 19 callbacks suppressed May 17 01:36:25.861778 kernel: audit: type=1325 audit(1747445785.849:279): table=filter:97 family=2 entries=20 op=nft_register_rule pid=4602 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 17 01:36:25.849000 audit[4602]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=8224 a0=3 a1=ffffc37cacb0 a2=0 a3=1 items=0 ppid=3808 pid=4602 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 01:36:25.889898 kernel: audit: type=1300 audit(1747445785.849:279): arch=c00000b7 syscall=211 success=yes exit=8224 a0=3 a1=ffffc37cacb0 a2=0 a3=1 items=0 ppid=3808 pid=4602 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 01:36:25.849000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 17 01:36:25.973413 kernel: audit: type=1327 audit(1747445785.849:279): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 17 01:36:25.973000 audit[4602]: NETFILTER_CFG table=nat:98 family=2 entries=12 op=nft_register_rule pid=4602 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 17 01:36:25.973000 audit[4602]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffc37cacb0 a2=0 a3=1 items=0 ppid=3808 pid=4602 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 01:36:26.058400 kernel: audit: type=1325 audit(1747445785.973:280): table=nat:98 family=2 entries=12 op=nft_register_rule pid=4602 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 17 01:36:26.058429 kernel: audit: type=1300 audit(1747445785.973:280): arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffc37cacb0 a2=0 a3=1 items=0 ppid=3808 pid=4602 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 01:36:25.973000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 17 01:36:26.085819 kernel: audit: type=1327 audit(1747445785.973:280): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 17 01:36:26.645923 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1045495316.mount: Deactivated successfully. 
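(Annotation: the `proctitle=` field in the audit records above is a hex-encoded argv with NUL separators, as auditd emits it. A minimal sketch of decoding it with only the Python standard library — the hex blob below is copied from the log lines above:)

```python
# Decode an auditd PROCTITLE hex field into the original argv list.
# auditd joins argv entries with NUL bytes before hex-encoding the blob.
hex_proctitle = (
    "69707461626C65732D726573746F7265002D770035002D5700"
    "313030303030002D2D6E6F666C757368002D2D636F756E74657273"
)
argv = [part.decode() for part in bytes.fromhex(hex_proctitle).split(b"\x00")]
print(argv)
```

This recovers the iptables-restore invocation behind the NETFILTER_CFG events recorded above.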
May 17 01:36:26.816214 kubelet[3578]: E0517 01:36:26.816176 3578 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-jxv7j" podUID="865bbbf9-e470-43da-b4e3-f1b9c7f9d88b" May 17 01:36:27.285647 env[2382]: time="2025-05-17T01:36:27.285614454Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/typha:v3.30.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 01:36:27.287119 env[2382]: time="2025-05-17T01:36:27.287090566Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:05ca98cdd7b8267a0dc5550048c0a195c8d42f85d92f090a669493485d8a6beb,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 01:36:27.288174 env[2382]: time="2025-05-17T01:36:27.288148840Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/typha:v3.30.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 01:36:27.288669 env[2382]: time="2025-05-17T01:36:27.288644757Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.0\" returns image reference \"sha256:05ca98cdd7b8267a0dc5550048c0a195c8d42f85d92f090a669493485d8a6beb\"" May 17 01:36:27.289259 env[2382]: time="2025-05-17T01:36:27.289233354Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/typha@sha256:d282f6c773c4631b9dc8379eb093c54ca34c7728d55d6509cb45da5e1f5baf8f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 01:36:27.289588 env[2382]: time="2025-05-17T01:36:27.289561392Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.0\"" May 17 01:36:27.294923 env[2382]: time="2025-05-17T01:36:27.294892962Z" level=info msg="CreateContainer within sandbox 
\"0e7e8798a65a18532d7854bfb73e6dd7cd7bd27765033291b5a42f8dd4cf9989\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" May 17 01:36:27.298412 env[2382]: time="2025-05-17T01:36:27.298380343Z" level=info msg="CreateContainer within sandbox \"0e7e8798a65a18532d7854bfb73e6dd7cd7bd27765033291b5a42f8dd4cf9989\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"d9cac5c8e56e46fdff0dc6c134ccec11ba98b00f509084ce7f9fa3be98c3ec16\"" May 17 01:36:27.298801 env[2382]: time="2025-05-17T01:36:27.298772420Z" level=info msg="StartContainer for \"d9cac5c8e56e46fdff0dc6c134ccec11ba98b00f509084ce7f9fa3be98c3ec16\"" May 17 01:36:27.342495 env[2382]: time="2025-05-17T01:36:27.342457656Z" level=info msg="StartContainer for \"d9cac5c8e56e46fdff0dc6c134ccec11ba98b00f509084ce7f9fa3be98c3ec16\" returns successfully" May 17 01:36:27.863892 kubelet[3578]: E0517 01:36:27.863863 3578 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 01:36:27.864251 kubelet[3578]: W0517 01:36:27.864231 3578 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 01:36:27.864333 kubelet[3578]: E0517 01:36:27.864317 3578 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 01:36:27.864557 kubelet[3578]: E0517 01:36:27.864545 3578 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 01:36:27.864632 kubelet[3578]: W0517 01:36:27.864619 3578 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 01:36:27.864694 kubelet[3578]: E0517 01:36:27.864683 3578 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 01:36:27.864907 kubelet[3578]: E0517 01:36:27.864896 3578 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 01:36:27.864993 kubelet[3578]: W0517 01:36:27.864981 3578 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 01:36:27.865058 kubelet[3578]: E0517 01:36:27.865047 3578 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 01:36:27.865324 kubelet[3578]: E0517 01:36:27.865312 3578 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 01:36:27.865399 kubelet[3578]: W0517 01:36:27.865385 3578 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 01:36:27.865460 kubelet[3578]: E0517 01:36:27.865449 3578 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 01:36:27.865573 kubelet[3578]: I0517 01:36:27.865466 3578 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-5dcb5bd6df-gp24s" podStartSLOduration=1.725438496 podStartE2EDuration="3.865452768s" podCreationTimestamp="2025-05-17 01:36:24 +0000 UTC" firstStartedPulling="2025-05-17 01:36:25.149402961 +0000 UTC m=+19.403767282" lastFinishedPulling="2025-05-17 01:36:27.289417233 +0000 UTC m=+21.543781554" observedRunningTime="2025-05-17 01:36:27.86522573 +0000 UTC m=+22.119590051" watchObservedRunningTime="2025-05-17 01:36:27.865452768 +0000 UTC m=+22.119817089" May 17 01:36:27.865768 kubelet[3578]: E0517 01:36:27.865666 3578 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 01:36:27.865865 kubelet[3578]: W0517 01:36:27.865843 3578 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 01:36:27.865926 kubelet[3578]: E0517 01:36:27.865915 3578 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 01:36:27.866136 kubelet[3578]: E0517 01:36:27.866124 3578 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 01:36:27.866221 kubelet[3578]: W0517 01:36:27.866208 3578 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 01:36:27.866279 kubelet[3578]: E0517 01:36:27.866269 3578 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 01:36:27.866512 kubelet[3578]: E0517 01:36:27.866501 3578 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 01:36:27.866590 kubelet[3578]: W0517 01:36:27.866578 3578 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 01:36:27.866642 kubelet[3578]: E0517 01:36:27.866632 3578 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 01:36:27.866897 kubelet[3578]: E0517 01:36:27.866885 3578 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 01:36:27.866969 kubelet[3578]: W0517 01:36:27.866957 3578 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 01:36:27.867023 kubelet[3578]: E0517 01:36:27.867012 3578 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 01:36:27.867334 kubelet[3578]: E0517 01:36:27.867322 3578 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 01:36:27.867421 kubelet[3578]: W0517 01:36:27.867408 3578 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 01:36:27.867479 kubelet[3578]: E0517 01:36:27.867466 3578 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 01:36:27.867698 kubelet[3578]: E0517 01:36:27.867687 3578 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 01:36:27.867767 kubelet[3578]: W0517 01:36:27.867754 3578 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 01:36:27.867827 kubelet[3578]: E0517 01:36:27.867816 3578 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 01:36:27.868063 kubelet[3578]: E0517 01:36:27.868052 3578 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 01:36:27.868149 kubelet[3578]: W0517 01:36:27.868136 3578 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 01:36:27.868206 kubelet[3578]: E0517 01:36:27.868195 3578 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 01:36:27.868516 kubelet[3578]: E0517 01:36:27.868505 3578 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 01:36:27.868587 kubelet[3578]: W0517 01:36:27.868575 3578 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 01:36:27.868639 kubelet[3578]: E0517 01:36:27.868628 3578 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 01:36:27.868968 kubelet[3578]: E0517 01:36:27.868956 3578 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 01:36:27.869043 kubelet[3578]: W0517 01:36:27.869031 3578 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 01:36:27.869096 kubelet[3578]: E0517 01:36:27.869085 3578 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 01:36:27.869409 kubelet[3578]: E0517 01:36:27.869398 3578 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 01:36:27.869480 kubelet[3578]: W0517 01:36:27.869467 3578 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 01:36:27.869606 kubelet[3578]: E0517 01:36:27.869593 3578 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 01:36:27.869943 kubelet[3578]: E0517 01:36:27.869931 3578 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 01:36:27.870038 kubelet[3578]: W0517 01:36:27.870024 3578 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 01:36:27.870094 kubelet[3578]: E0517 01:36:27.870083 3578 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 01:36:27.870420 kubelet[3578]: E0517 01:36:27.870409 3578 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 01:36:27.870498 kubelet[3578]: W0517 01:36:27.870485 3578 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 01:36:27.870552 kubelet[3578]: E0517 01:36:27.870542 3578 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 01:36:27.870862 kubelet[3578]: E0517 01:36:27.870846 3578 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 01:36:27.870960 kubelet[3578]: W0517 01:36:27.870946 3578 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 01:36:27.871021 kubelet[3578]: E0517 01:36:27.871010 3578 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 01:36:27.871312 kubelet[3578]: E0517 01:36:27.871293 3578 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 01:36:27.871404 kubelet[3578]: W0517 01:36:27.871390 3578 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 01:36:27.871468 kubelet[3578]: E0517 01:36:27.871457 3578 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 01:36:27.871782 kubelet[3578]: E0517 01:36:27.871762 3578 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 01:36:27.871897 kubelet[3578]: W0517 01:36:27.871882 3578 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 01:36:27.871966 kubelet[3578]: E0517 01:36:27.871952 3578 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 01:36:27.872276 kubelet[3578]: E0517 01:36:27.872263 3578 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 01:36:27.872350 kubelet[3578]: W0517 01:36:27.872337 3578 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 01:36:27.872417 kubelet[3578]: E0517 01:36:27.872405 3578 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 01:36:27.872677 kubelet[3578]: E0517 01:36:27.872665 3578 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 01:36:27.872756 kubelet[3578]: W0517 01:36:27.872742 3578 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 01:36:27.872814 kubelet[3578]: E0517 01:36:27.872804 3578 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 01:36:27.873109 kubelet[3578]: E0517 01:36:27.873097 3578 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 01:36:27.873193 kubelet[3578]: W0517 01:36:27.873180 3578 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 01:36:27.873261 kubelet[3578]: E0517 01:36:27.873245 3578 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 01:36:27.873477 kubelet[3578]: E0517 01:36:27.873463 3578 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 01:36:27.873556 kubelet[3578]: W0517 01:36:27.873544 3578 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 01:36:27.873624 kubelet[3578]: E0517 01:36:27.873609 3578 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 01:36:27.873849 kubelet[3578]: E0517 01:36:27.873836 3578 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 01:36:27.873924 kubelet[3578]: W0517 01:36:27.873912 3578 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 01:36:27.874002 kubelet[3578]: E0517 01:36:27.873988 3578 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 01:36:27.874262 kubelet[3578]: E0517 01:36:27.874248 3578 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 01:36:27.874346 kubelet[3578]: W0517 01:36:27.874333 3578 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 01:36:27.874410 kubelet[3578]: E0517 01:36:27.874395 3578 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 01:36:27.874717 kubelet[3578]: E0517 01:36:27.874702 3578 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 01:36:27.874792 kubelet[3578]: W0517 01:36:27.874779 3578 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 01:36:27.874853 kubelet[3578]: E0517 01:36:27.874843 3578 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 01:36:27.875126 kubelet[3578]: E0517 01:36:27.875111 3578 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 01:36:27.875223 kubelet[3578]: W0517 01:36:27.875209 3578 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 01:36:27.875283 kubelet[3578]: E0517 01:36:27.875272 3578 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 01:36:27.875580 kubelet[3578]: E0517 01:36:27.875569 3578 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 01:36:27.875658 kubelet[3578]: W0517 01:36:27.875645 3578 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 01:36:27.875715 kubelet[3578]: E0517 01:36:27.875704 3578 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 01:36:27.875984 kubelet[3578]: E0517 01:36:27.875972 3578 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 01:36:27.876071 kubelet[3578]: W0517 01:36:27.876057 3578 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 01:36:27.876138 kubelet[3578]: E0517 01:36:27.876123 3578 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 01:36:27.876374 kubelet[3578]: E0517 01:36:27.876360 3578 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 01:36:27.876450 kubelet[3578]: W0517 01:36:27.876438 3578 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 01:36:27.876527 kubelet[3578]: E0517 01:36:27.876513 3578 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 01:36:27.876777 kubelet[3578]: E0517 01:36:27.876763 3578 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 01:36:27.876856 kubelet[3578]: W0517 01:36:27.876844 3578 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 01:36:27.876943 kubelet[3578]: E0517 01:36:27.876917 3578 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 01:36:27.877148 kubelet[3578]: E0517 01:36:27.877136 3578 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 01:36:27.877218 kubelet[3578]: W0517 01:36:27.877205 3578 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 01:36:27.877276 kubelet[3578]: E0517 01:36:27.877265 3578 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 01:36:27.877484 kubelet[3578]: E0517 01:36:27.877467 3578 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 01:36:27.877596 kubelet[3578]: W0517 01:36:27.877581 3578 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 01:36:27.877659 kubelet[3578]: E0517 01:36:27.877647 3578 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 01:36:28.482689 env[2382]: time="2025-05-17T01:36:28.482648842Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 01:36:28.484188 env[2382]: time="2025-05-17T01:36:28.484156234Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:080eaf4c238c85534b61055c31b109c96ce3d20075391e58988541a442c7c701,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 01:36:28.485253 env[2382]: time="2025-05-17T01:36:28.485226229Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 01:36:28.485789 env[2382]: time="2025-05-17T01:36:28.485762426Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.0\" returns image reference \"sha256:080eaf4c238c85534b61055c31b109c96ce3d20075391e58988541a442c7c701\"" May 17 01:36:28.486377 env[2382]: time="2025-05-17T01:36:28.486349943Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:ce76dd87f11d3fd0054c35ad2e0e9f833748d007f77a9bfe859d0ddcb66fcb2c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 01:36:28.487589 env[2382]: time="2025-05-17T01:36:28.487560736Z" level=info msg="CreateContainer within sandbox \"e26ace54e5182ca0cea4fce43501917ed6091b24ac35c4dfe9e62e7b4ce15f50\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" May 17 01:36:28.492289 env[2382]: time="2025-05-17T01:36:28.492256832Z" level=info msg="CreateContainer within sandbox \"e26ace54e5182ca0cea4fce43501917ed6091b24ac35c4dfe9e62e7b4ce15f50\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"cb4a55a69035509262605cc7e5c7e9a6a1344c29df3a03b0030f777d47ad089c\"" May 17 
01:36:28.492680 env[2382]: time="2025-05-17T01:36:28.492656630Z" level=info msg="StartContainer for \"cb4a55a69035509262605cc7e5c7e9a6a1344c29df3a03b0030f777d47ad089c\"" May 17 01:36:28.532419 env[2382]: time="2025-05-17T01:36:28.532375381Z" level=info msg="StartContainer for \"cb4a55a69035509262605cc7e5c7e9a6a1344c29df3a03b0030f777d47ad089c\" returns successfully" May 17 01:36:28.630081 env[2382]: time="2025-05-17T01:36:28.630039829Z" level=info msg="shim disconnected" id=cb4a55a69035509262605cc7e5c7e9a6a1344c29df3a03b0030f777d47ad089c May 17 01:36:28.630285 env[2382]: time="2025-05-17T01:36:28.630267707Z" level=warning msg="cleaning up after shim disconnected" id=cb4a55a69035509262605cc7e5c7e9a6a1344c29df3a03b0030f777d47ad089c namespace=k8s.io May 17 01:36:28.630346 env[2382]: time="2025-05-17T01:36:28.630333307Z" level=info msg="cleaning up dead shim" May 17 01:36:28.636528 env[2382]: time="2025-05-17T01:36:28.636500355Z" level=warning msg="cleanup warnings time=\"2025-05-17T01:36:28Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4752 runtime=io.containerd.runc.v2\n" May 17 01:36:28.815983 kubelet[3578]: E0517 01:36:28.815897 3578 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-jxv7j" podUID="865bbbf9-e470-43da-b4e3-f1b9c7f9d88b" May 17 01:36:28.859925 kubelet[3578]: I0517 01:36:28.859881 3578 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 17 01:36:28.861296 env[2382]: time="2025-05-17T01:36:28.861270055Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.0\"" May 17 01:36:29.293746 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-cb4a55a69035509262605cc7e5c7e9a6a1344c29df3a03b0030f777d47ad089c-rootfs.mount: Deactivated successfully. 
May 17 01:36:30.815654 kubelet[3578]: E0517 01:36:30.815609 3578 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-jxv7j" podUID="865bbbf9-e470-43da-b4e3-f1b9c7f9d88b" May 17 01:36:31.339258 env[2382]: time="2025-05-17T01:36:31.339218728Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/cni:v3.30.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 01:36:31.339992 env[2382]: time="2025-05-17T01:36:31.339962084Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:0a1b3d5412de2974bc057a3463a132f935c307bc06d5b990ad54031e1f5a351d,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 01:36:31.341158 env[2382]: time="2025-05-17T01:36:31.341133239Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/cni:v3.30.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 01:36:31.342175 env[2382]: time="2025-05-17T01:36:31.342157275Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/cni@sha256:3dd06656abdc03fbd51782d5f6fe4d70e6825a1c0c5bce2a165bbd2ff9e0f7df,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 01:36:31.342722 env[2382]: time="2025-05-17T01:36:31.342705113Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.0\" returns image reference \"sha256:0a1b3d5412de2974bc057a3463a132f935c307bc06d5b990ad54031e1f5a351d\"" May 17 01:36:31.344447 env[2382]: time="2025-05-17T01:36:31.344423825Z" level=info msg="CreateContainer within sandbox \"e26ace54e5182ca0cea4fce43501917ed6091b24ac35c4dfe9e62e7b4ce15f50\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" May 17 01:36:31.349883 env[2382]: 
time="2025-05-17T01:36:31.349847562Z" level=info msg="CreateContainer within sandbox \"e26ace54e5182ca0cea4fce43501917ed6091b24ac35c4dfe9e62e7b4ce15f50\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"8b53c0bfdb26f09d67565238fc0bab1069f96e529abee1340da3190eaf07d607\"" May 17 01:36:31.350324 env[2382]: time="2025-05-17T01:36:31.350300160Z" level=info msg="StartContainer for \"8b53c0bfdb26f09d67565238fc0bab1069f96e529abee1340da3190eaf07d607\"" May 17 01:36:31.392237 env[2382]: time="2025-05-17T01:36:31.392198539Z" level=info msg="StartContainer for \"8b53c0bfdb26f09d67565238fc0bab1069f96e529abee1340da3190eaf07d607\" returns successfully" May 17 01:36:31.800552 env[2382]: time="2025-05-17T01:36:31.800503653Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/calico-kubeconfig\": WRITE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 17 01:36:31.882211 kubelet[3578]: I0517 01:36:31.882186 3578 kubelet_node_status.go:488] "Fast updating node status as it just became ready" May 17 01:36:31.898481 kubelet[3578]: I0517 01:36:31.898447 3578 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5zsj7\" (UniqueName: \"kubernetes.io/projected/90e3149c-0a7a-40ac-8123-c0855cbea15e-kube-api-access-5zsj7\") pod \"whisker-88b655df4-xc9hf\" (UID: \"90e3149c-0a7a-40ac-8123-c0855cbea15e\") " pod="calico-system/whisker-88b655df4-xc9hf" May 17 01:36:31.898585 kubelet[3578]: I0517 01:36:31.898491 3578 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a4fc78cc-d2a7-4712-9751-e0c08da51294-config-volume\") pod \"coredns-7c65d6cfc9-nf2f9\" (UID: \"a4fc78cc-d2a7-4712-9751-e0c08da51294\") " pod="kube-system/coredns-7c65d6cfc9-nf2f9" May 17 01:36:31.898585 kubelet[3578]: 
I0517 01:36:31.898537 3578 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c4bca88a-aef7-4116-a834-44caa9bfe63e-tigera-ca-bundle\") pod \"calico-kube-controllers-54b699d664-blxw6\" (UID: \"c4bca88a-aef7-4116-a834-44caa9bfe63e\") " pod="calico-system/calico-kube-controllers-54b699d664-blxw6" May 17 01:36:31.898585 kubelet[3578]: I0517 01:36:31.898558 3578 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xlgrn\" (UniqueName: \"kubernetes.io/projected/c4bca88a-aef7-4116-a834-44caa9bfe63e-kube-api-access-xlgrn\") pod \"calico-kube-controllers-54b699d664-blxw6\" (UID: \"c4bca88a-aef7-4116-a834-44caa9bfe63e\") " pod="calico-system/calico-kube-controllers-54b699d664-blxw6" May 17 01:36:31.898692 kubelet[3578]: I0517 01:36:31.898669 3578 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qk4w9\" (UniqueName: \"kubernetes.io/projected/a4fc78cc-d2a7-4712-9751-e0c08da51294-kube-api-access-qk4w9\") pod \"coredns-7c65d6cfc9-nf2f9\" (UID: \"a4fc78cc-d2a7-4712-9751-e0c08da51294\") " pod="kube-system/coredns-7c65d6cfc9-nf2f9" May 17 01:36:31.898723 kubelet[3578]: I0517 01:36:31.898711 3578 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/90e3149c-0a7a-40ac-8123-c0855cbea15e-whisker-backend-key-pair\") pod \"whisker-88b655df4-xc9hf\" (UID: \"90e3149c-0a7a-40ac-8123-c0855cbea15e\") " pod="calico-system/whisker-88b655df4-xc9hf" May 17 01:36:31.898750 kubelet[3578]: I0517 01:36:31.898732 3578 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/90e3149c-0a7a-40ac-8123-c0855cbea15e-whisker-ca-bundle\") pod \"whisker-88b655df4-xc9hf\" (UID: 
\"90e3149c-0a7a-40ac-8123-c0855cbea15e\") " pod="calico-system/whisker-88b655df4-xc9hf" May 17 01:36:31.929037 env[2382]: time="2025-05-17T01:36:31.928990698Z" level=info msg="shim disconnected" id=8b53c0bfdb26f09d67565238fc0bab1069f96e529abee1340da3190eaf07d607 May 17 01:36:31.929110 env[2382]: time="2025-05-17T01:36:31.929036297Z" level=warning msg="cleaning up after shim disconnected" id=8b53c0bfdb26f09d67565238fc0bab1069f96e529abee1340da3190eaf07d607 namespace=k8s.io May 17 01:36:31.929110 env[2382]: time="2025-05-17T01:36:31.929048217Z" level=info msg="cleaning up dead shim" May 17 01:36:31.934727 env[2382]: time="2025-05-17T01:36:31.934700233Z" level=warning msg="cleanup warnings time=\"2025-05-17T01:36:31Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4854 runtime=io.containerd.runc.v2\n" May 17 01:36:31.998992 kubelet[3578]: I0517 01:36:31.998950 3578 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h2gnj\" (UniqueName: \"kubernetes.io/projected/cbe86ea2-e6e9-4576-adbf-bcd51e11fb88-kube-api-access-h2gnj\") pod \"coredns-7c65d6cfc9-sxm46\" (UID: \"cbe86ea2-e6e9-4576-adbf-bcd51e11fb88\") " pod="kube-system/coredns-7c65d6cfc9-sxm46" May 17 01:36:31.999080 kubelet[3578]: I0517 01:36:31.999020 3578 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j8bfv\" (UniqueName: \"kubernetes.io/projected/66187612-efc7-4292-8594-c39d74617f30-kube-api-access-j8bfv\") pod \"calico-apiserver-889cbc855-tdbbg\" (UID: \"66187612-efc7-4292-8594-c39d74617f30\") " pod="calico-apiserver/calico-apiserver-889cbc855-tdbbg" May 17 01:36:31.999080 kubelet[3578]: I0517 01:36:31.999039 3578 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/2f14c134-cd19-42e5-bbc4-b6bd9f685215-calico-apiserver-certs\") pod \"calico-apiserver-889cbc855-v2bvs\" 
(UID: \"2f14c134-cd19-42e5-bbc4-b6bd9f685215\") " pod="calico-apiserver/calico-apiserver-889cbc855-v2bvs" May 17 01:36:31.999183 kubelet[3578]: I0517 01:36:31.999144 3578 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z44fh\" (UniqueName: \"kubernetes.io/projected/2f14c134-cd19-42e5-bbc4-b6bd9f685215-kube-api-access-z44fh\") pod \"calico-apiserver-889cbc855-v2bvs\" (UID: \"2f14c134-cd19-42e5-bbc4-b6bd9f685215\") " pod="calico-apiserver/calico-apiserver-889cbc855-v2bvs" May 17 01:36:31.999234 kubelet[3578]: I0517 01:36:31.999211 3578 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c4371095-cd92-4ea8-9564-c86ac0de3064-config\") pod \"goldmane-8f77d7b6c-czxt2\" (UID: \"c4371095-cd92-4ea8-9564-c86ac0de3064\") " pod="calico-system/goldmane-8f77d7b6c-czxt2" May 17 01:36:31.999276 kubelet[3578]: I0517 01:36:31.999244 3578 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pnmns\" (UniqueName: \"kubernetes.io/projected/c4371095-cd92-4ea8-9564-c86ac0de3064-kube-api-access-pnmns\") pod \"goldmane-8f77d7b6c-czxt2\" (UID: \"c4371095-cd92-4ea8-9564-c86ac0de3064\") " pod="calico-system/goldmane-8f77d7b6c-czxt2" May 17 01:36:31.999323 kubelet[3578]: I0517 01:36:31.999309 3578 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/66187612-efc7-4292-8594-c39d74617f30-calico-apiserver-certs\") pod \"calico-apiserver-889cbc855-tdbbg\" (UID: \"66187612-efc7-4292-8594-c39d74617f30\") " pod="calico-apiserver/calico-apiserver-889cbc855-tdbbg" May 17 01:36:31.999351 kubelet[3578]: I0517 01:36:31.999331 3578 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/c4371095-cd92-4ea8-9564-c86ac0de3064-goldmane-ca-bundle\") pod \"goldmane-8f77d7b6c-czxt2\" (UID: \"c4371095-cd92-4ea8-9564-c86ac0de3064\") " pod="calico-system/goldmane-8f77d7b6c-czxt2" May 17 01:36:31.999377 kubelet[3578]: I0517 01:36:31.999349 3578 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/c4371095-cd92-4ea8-9564-c86ac0de3064-goldmane-key-pair\") pod \"goldmane-8f77d7b6c-czxt2\" (UID: \"c4371095-cd92-4ea8-9564-c86ac0de3064\") " pod="calico-system/goldmane-8f77d7b6c-czxt2" May 17 01:36:31.999435 kubelet[3578]: I0517 01:36:31.999413 3578 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cbe86ea2-e6e9-4576-adbf-bcd51e11fb88-config-volume\") pod \"coredns-7c65d6cfc9-sxm46\" (UID: \"cbe86ea2-e6e9-4576-adbf-bcd51e11fb88\") " pod="kube-system/coredns-7c65d6cfc9-sxm46" May 17 01:36:32.204997 env[2382]: time="2025-05-17T01:36:32.204961319Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-88b655df4-xc9hf,Uid:90e3149c-0a7a-40ac-8123-c0855cbea15e,Namespace:calico-system,Attempt:0,}" May 17 01:36:32.205141 env[2382]: time="2025-05-17T01:36:32.205116039Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-nf2f9,Uid:a4fc78cc-d2a7-4712-9751-e0c08da51294,Namespace:kube-system,Attempt:0,}" May 17 01:36:32.205224 env[2382]: time="2025-05-17T01:36:32.205199158Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-889cbc855-v2bvs,Uid:2f14c134-cd19-42e5-bbc4-b6bd9f685215,Namespace:calico-apiserver,Attempt:0,}" May 17 01:36:32.205305 env[2382]: time="2025-05-17T01:36:32.205120799Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-54b699d664-blxw6,Uid:c4bca88a-aef7-4116-a834-44caa9bfe63e,Namespace:calico-system,Attempt:0,}" May 17 01:36:32.207725 
env[2382]: time="2025-05-17T01:36:32.207701988Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-8f77d7b6c-czxt2,Uid:c4371095-cd92-4ea8-9564-c86ac0de3064,Namespace:calico-system,Attempt:0,}" May 17 01:36:32.207812 env[2382]: time="2025-05-17T01:36:32.207786068Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-889cbc855-tdbbg,Uid:66187612-efc7-4292-8594-c39d74617f30,Namespace:calico-apiserver,Attempt:0,}" May 17 01:36:32.209193 env[2382]: time="2025-05-17T01:36:32.209171542Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-sxm46,Uid:cbe86ea2-e6e9-4576-adbf-bcd51e11fb88,Namespace:kube-system,Attempt:0,}" May 17 01:36:32.263324 env[2382]: time="2025-05-17T01:36:32.263244483Z" level=error msg="Failed to destroy network for sandbox \"8561c7510ebd1db8d63a65ba5d57c9c27a8b222a60f0ae510ae9abfcd441710c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 01:36:32.263736 env[2382]: time="2025-05-17T01:36:32.263709001Z" level=error msg="encountered an error cleaning up failed sandbox \"8561c7510ebd1db8d63a65ba5d57c9c27a8b222a60f0ae510ae9abfcd441710c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 01:36:32.263776 env[2382]: time="2025-05-17T01:36:32.263755881Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-88b655df4-xc9hf,Uid:90e3149c-0a7a-40ac-8123-c0855cbea15e,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"8561c7510ebd1db8d63a65ba5d57c9c27a8b222a60f0ae510ae9abfcd441710c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/" May 17 01:36:32.264005 kubelet[3578]: E0517 01:36:32.263956 3578 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8561c7510ebd1db8d63a65ba5d57c9c27a8b222a60f0ae510ae9abfcd441710c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 01:36:32.264053 kubelet[3578]: E0517 01:36:32.264042 3578 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8561c7510ebd1db8d63a65ba5d57c9c27a8b222a60f0ae510ae9abfcd441710c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-88b655df4-xc9hf" May 17 01:36:32.264087 kubelet[3578]: E0517 01:36:32.264061 3578 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8561c7510ebd1db8d63a65ba5d57c9c27a8b222a60f0ae510ae9abfcd441710c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-88b655df4-xc9hf" May 17 01:36:32.264114 env[2382]: time="2025-05-17T01:36:32.264083080Z" level=error msg="Failed to destroy network for sandbox \"95dbf81c53e15240c7ad43018302a5719885f82435672b3ee5102448965fec01\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 01:36:32.264139 kubelet[3578]: E0517 01:36:32.264101 3578 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"CreatePodSandbox\" for \"whisker-88b655df4-xc9hf_calico-system(90e3149c-0a7a-40ac-8123-c0855cbea15e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-88b655df4-xc9hf_calico-system(90e3149c-0a7a-40ac-8123-c0855cbea15e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8561c7510ebd1db8d63a65ba5d57c9c27a8b222a60f0ae510ae9abfcd441710c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-88b655df4-xc9hf" podUID="90e3149c-0a7a-40ac-8123-c0855cbea15e" May 17 01:36:32.264386 env[2382]: time="2025-05-17T01:36:32.264347159Z" level=error msg="Failed to destroy network for sandbox \"4c1dc3e85c19b8facb4a29579507b1ec757fa83bf8a0b5ae8e9996b0223367f8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 01:36:32.264469 env[2382]: time="2025-05-17T01:36:32.264437758Z" level=error msg="encountered an error cleaning up failed sandbox \"95dbf81c53e15240c7ad43018302a5719885f82435672b3ee5102448965fec01\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 01:36:32.264516 env[2382]: time="2025-05-17T01:36:32.264487838Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-889cbc855-v2bvs,Uid:2f14c134-cd19-42e5-bbc4-b6bd9f685215,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"95dbf81c53e15240c7ad43018302a5719885f82435672b3ee5102448965fec01\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is 
running and has mounted /var/lib/calico/" May 17 01:36:32.264617 kubelet[3578]: E0517 01:36:32.264594 3578 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"95dbf81c53e15240c7ad43018302a5719885f82435672b3ee5102448965fec01\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 01:36:32.264648 kubelet[3578]: E0517 01:36:32.264633 3578 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"95dbf81c53e15240c7ad43018302a5719885f82435672b3ee5102448965fec01\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-889cbc855-v2bvs" May 17 01:36:32.264672 env[2382]: time="2025-05-17T01:36:32.264599918Z" level=error msg="Failed to destroy network for sandbox \"9436b8bd3da051194c642769449b03cc2ee0d34218c534e468283cf060b40c92\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 01:36:32.264672 env[2382]: time="2025-05-17T01:36:32.264616838Z" level=error msg="Failed to destroy network for sandbox \"19df9716bf6905a955a54fe5b5c367cc75070f9bf8995578b6d4130b576a1bde\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 01:36:32.264743 kubelet[3578]: E0517 01:36:32.264651 3578 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"95dbf81c53e15240c7ad43018302a5719885f82435672b3ee5102448965fec01\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-889cbc855-v2bvs" May 17 01:36:32.264743 kubelet[3578]: E0517 01:36:32.264684 3578 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-889cbc855-v2bvs_calico-apiserver(2f14c134-cd19-42e5-bbc4-b6bd9f685215)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-889cbc855-v2bvs_calico-apiserver(2f14c134-cd19-42e5-bbc4-b6bd9f685215)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"95dbf81c53e15240c7ad43018302a5719885f82435672b3ee5102448965fec01\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-889cbc855-v2bvs" podUID="2f14c134-cd19-42e5-bbc4-b6bd9f685215" May 17 01:36:32.264809 env[2382]: time="2025-05-17T01:36:32.264759797Z" level=error msg="encountered an error cleaning up failed sandbox \"4c1dc3e85c19b8facb4a29579507b1ec757fa83bf8a0b5ae8e9996b0223367f8\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 01:36:32.264835 env[2382]: time="2025-05-17T01:36:32.264800157Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-sxm46,Uid:cbe86ea2-e6e9-4576-adbf-bcd51e11fb88,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"4c1dc3e85c19b8facb4a29579507b1ec757fa83bf8a0b5ae8e9996b0223367f8\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 01:36:32.264963 kubelet[3578]: E0517 01:36:32.264935 3578 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4c1dc3e85c19b8facb4a29579507b1ec757fa83bf8a0b5ae8e9996b0223367f8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 01:36:32.264994 kubelet[3578]: E0517 01:36:32.264978 3578 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4c1dc3e85c19b8facb4a29579507b1ec757fa83bf8a0b5ae8e9996b0223367f8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-sxm46" May 17 01:36:32.265025 env[2382]: time="2025-05-17T01:36:32.264958756Z" level=error msg="encountered an error cleaning up failed sandbox \"9436b8bd3da051194c642769449b03cc2ee0d34218c534e468283cf060b40c92\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 01:36:32.265025 env[2382]: time="2025-05-17T01:36:32.264980316Z" level=error msg="encountered an error cleaning up failed sandbox \"19df9716bf6905a955a54fe5b5c367cc75070f9bf8995578b6d4130b576a1bde\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 01:36:32.265025 env[2382]: 
time="2025-05-17T01:36:32.264996676Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-8f77d7b6c-czxt2,Uid:c4371095-cd92-4ea8-9564-c86ac0de3064,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"9436b8bd3da051194c642769449b03cc2ee0d34218c534e468283cf060b40c92\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 01:36:32.265107 kubelet[3578]: E0517 01:36:32.264997 3578 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4c1dc3e85c19b8facb4a29579507b1ec757fa83bf8a0b5ae8e9996b0223367f8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-sxm46" May 17 01:36:32.265107 kubelet[3578]: E0517 01:36:32.265033 3578 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-sxm46_kube-system(cbe86ea2-e6e9-4576-adbf-bcd51e11fb88)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-sxm46_kube-system(cbe86ea2-e6e9-4576-adbf-bcd51e11fb88)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4c1dc3e85c19b8facb4a29579507b1ec757fa83bf8a0b5ae8e9996b0223367f8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-sxm46" podUID="cbe86ea2-e6e9-4576-adbf-bcd51e11fb88" May 17 01:36:32.265107 kubelet[3578]: E0517 01:36:32.265094 3578 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"9436b8bd3da051194c642769449b03cc2ee0d34218c534e468283cf060b40c92\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 01:36:32.265241 env[2382]: time="2025-05-17T01:36:32.265017836Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-54b699d664-blxw6,Uid:c4bca88a-aef7-4116-a834-44caa9bfe63e,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"19df9716bf6905a955a54fe5b5c367cc75070f9bf8995578b6d4130b576a1bde\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 01:36:32.265241 env[2382]: time="2025-05-17T01:36:32.265151915Z" level=error msg="Failed to destroy network for sandbox \"f9e66e5dd1f9b3baecffe1bc1c987d707df016acab95ce74eedf01262fe87aa5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 01:36:32.265313 kubelet[3578]: E0517 01:36:32.265126 3578 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9436b8bd3da051194c642769449b03cc2ee0d34218c534e468283cf060b40c92\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-8f77d7b6c-czxt2" May 17 01:36:32.265313 kubelet[3578]: E0517 01:36:32.265141 3578 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9436b8bd3da051194c642769449b03cc2ee0d34218c534e468283cf060b40c92\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-8f77d7b6c-czxt2" May 17 01:36:32.265313 kubelet[3578]: E0517 01:36:32.265095 3578 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"19df9716bf6905a955a54fe5b5c367cc75070f9bf8995578b6d4130b576a1bde\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 01:36:32.265313 kubelet[3578]: E0517 01:36:32.265192 3578 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"19df9716bf6905a955a54fe5b5c367cc75070f9bf8995578b6d4130b576a1bde\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-54b699d664-blxw6" May 17 01:36:32.265407 kubelet[3578]: E0517 01:36:32.265207 3578 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"19df9716bf6905a955a54fe5b5c367cc75070f9bf8995578b6d4130b576a1bde\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-54b699d664-blxw6" May 17 01:36:32.265407 kubelet[3578]: E0517 01:36:32.265164 3578 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-8f77d7b6c-czxt2_calico-system(c4371095-cd92-4ea8-9564-c86ac0de3064)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"goldmane-8f77d7b6c-czxt2_calico-system(c4371095-cd92-4ea8-9564-c86ac0de3064)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9436b8bd3da051194c642769449b03cc2ee0d34218c534e468283cf060b40c92\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-8f77d7b6c-czxt2" podUID="c4371095-cd92-4ea8-9564-c86ac0de3064" May 17 01:36:32.265468 kubelet[3578]: E0517 01:36:32.265235 3578 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-54b699d664-blxw6_calico-system(c4bca88a-aef7-4116-a834-44caa9bfe63e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-54b699d664-blxw6_calico-system(c4bca88a-aef7-4116-a834-44caa9bfe63e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"19df9716bf6905a955a54fe5b5c367cc75070f9bf8995578b6d4130b576a1bde\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-54b699d664-blxw6" podUID="c4bca88a-aef7-4116-a834-44caa9bfe63e" May 17 01:36:32.265509 env[2382]: time="2025-05-17T01:36:32.265458874Z" level=error msg="encountered an error cleaning up failed sandbox \"f9e66e5dd1f9b3baecffe1bc1c987d707df016acab95ce74eedf01262fe87aa5\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 01:36:32.265534 env[2382]: time="2025-05-17T01:36:32.265499874Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-889cbc855-tdbbg,Uid:66187612-efc7-4292-8594-c39d74617f30,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"f9e66e5dd1f9b3baecffe1bc1c987d707df016acab95ce74eedf01262fe87aa5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 01:36:32.265622 kubelet[3578]: E0517 01:36:32.265607 3578 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f9e66e5dd1f9b3baecffe1bc1c987d707df016acab95ce74eedf01262fe87aa5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 01:36:32.265651 kubelet[3578]: E0517 01:36:32.265630 3578 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f9e66e5dd1f9b3baecffe1bc1c987d707df016acab95ce74eedf01262fe87aa5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-889cbc855-tdbbg" May 17 01:36:32.265651 kubelet[3578]: E0517 01:36:32.265643 3578 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f9e66e5dd1f9b3baecffe1bc1c987d707df016acab95ce74eedf01262fe87aa5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-889cbc855-tdbbg" May 17 01:36:32.265702 kubelet[3578]: E0517 01:36:32.265671 3578 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-889cbc855-tdbbg_calico-apiserver(66187612-efc7-4292-8594-c39d74617f30)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-889cbc855-tdbbg_calico-apiserver(66187612-efc7-4292-8594-c39d74617f30)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f9e66e5dd1f9b3baecffe1bc1c987d707df016acab95ce74eedf01262fe87aa5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-889cbc855-tdbbg" podUID="66187612-efc7-4292-8594-c39d74617f30" May 17 01:36:32.266255 env[2382]: time="2025-05-17T01:36:32.266227311Z" level=error msg="Failed to destroy network for sandbox \"28f231b52b1155e4f6bec2ac6458f7ca37f5a74c8a127964684b77a03a3697c3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 01:36:32.266543 env[2382]: time="2025-05-17T01:36:32.266520870Z" level=error msg="encountered an error cleaning up failed sandbox \"28f231b52b1155e4f6bec2ac6458f7ca37f5a74c8a127964684b77a03a3697c3\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 01:36:32.266577 env[2382]: time="2025-05-17T01:36:32.266558790Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-nf2f9,Uid:a4fc78cc-d2a7-4712-9751-e0c08da51294,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"28f231b52b1155e4f6bec2ac6458f7ca37f5a74c8a127964684b77a03a3697c3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 01:36:32.266680 kubelet[3578]: E0517 01:36:32.266662 3578 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"28f231b52b1155e4f6bec2ac6458f7ca37f5a74c8a127964684b77a03a3697c3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 01:36:32.266707 kubelet[3578]: E0517 01:36:32.266694 3578 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"28f231b52b1155e4f6bec2ac6458f7ca37f5a74c8a127964684b77a03a3697c3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-nf2f9" May 17 01:36:32.266734 kubelet[3578]: E0517 01:36:32.266711 3578 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"28f231b52b1155e4f6bec2ac6458f7ca37f5a74c8a127964684b77a03a3697c3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-nf2f9" May 17 01:36:32.266758 kubelet[3578]: E0517 01:36:32.266743 3578 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-nf2f9_kube-system(a4fc78cc-d2a7-4712-9751-e0c08da51294)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-nf2f9_kube-system(a4fc78cc-d2a7-4712-9751-e0c08da51294)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"28f231b52b1155e4f6bec2ac6458f7ca37f5a74c8a127964684b77a03a3697c3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-nf2f9" podUID="a4fc78cc-d2a7-4712-9751-e0c08da51294" May 17 01:36:32.359363 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8b53c0bfdb26f09d67565238fc0bab1069f96e529abee1340da3190eaf07d607-rootfs.mount: Deactivated successfully. May 17 01:36:32.818487 env[2382]: time="2025-05-17T01:36:32.818461952Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-jxv7j,Uid:865bbbf9-e470-43da-b4e3-f1b9c7f9d88b,Namespace:calico-system,Attempt:0,}" May 17 01:36:32.859137 env[2382]: time="2025-05-17T01:36:32.859073668Z" level=error msg="Failed to destroy network for sandbox \"11b52f82144de40182a6e5987d2cc226fef23efb714c59b1f3b99bd7d4cd317d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 01:36:32.859476 env[2382]: time="2025-05-17T01:36:32.859449546Z" level=error msg="encountered an error cleaning up failed sandbox \"11b52f82144de40182a6e5987d2cc226fef23efb714c59b1f3b99bd7d4cd317d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 01:36:32.859511 env[2382]: time="2025-05-17T01:36:32.859491506Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-jxv7j,Uid:865bbbf9-e470-43da-b4e3-f1b9c7f9d88b,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"11b52f82144de40182a6e5987d2cc226fef23efb714c59b1f3b99bd7d4cd317d\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 01:36:32.859703 kubelet[3578]: E0517 01:36:32.859673 3578 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"11b52f82144de40182a6e5987d2cc226fef23efb714c59b1f3b99bd7d4cd317d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 01:36:32.859747 kubelet[3578]: E0517 01:36:32.859727 3578 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"11b52f82144de40182a6e5987d2cc226fef23efb714c59b1f3b99bd7d4cd317d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-jxv7j" May 17 01:36:32.859776 kubelet[3578]: E0517 01:36:32.859746 3578 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"11b52f82144de40182a6e5987d2cc226fef23efb714c59b1f3b99bd7d4cd317d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-jxv7j" May 17 01:36:32.859805 kubelet[3578]: E0517 01:36:32.859784 3578 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-jxv7j_calico-system(865bbbf9-e470-43da-b4e3-f1b9c7f9d88b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-jxv7j_calico-system(865bbbf9-e470-43da-b4e3-f1b9c7f9d88b)\\\": rpc error: code = Unknown desc = failed to setup network for 
sandbox \\\"11b52f82144de40182a6e5987d2cc226fef23efb714c59b1f3b99bd7d4cd317d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-jxv7j" podUID="865bbbf9-e470-43da-b4e3-f1b9c7f9d88b" May 17 01:36:32.861078 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-11b52f82144de40182a6e5987d2cc226fef23efb714c59b1f3b99bd7d4cd317d-shm.mount: Deactivated successfully. May 17 01:36:32.868467 kubelet[3578]: I0517 01:36:32.868453 3578 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9436b8bd3da051194c642769449b03cc2ee0d34218c534e468283cf060b40c92" May 17 01:36:32.869010 env[2382]: time="2025-05-17T01:36:32.868984468Z" level=info msg="StopPodSandbox for \"9436b8bd3da051194c642769449b03cc2ee0d34218c534e468283cf060b40c92\"" May 17 01:36:32.869070 kubelet[3578]: I0517 01:36:32.869055 3578 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="95dbf81c53e15240c7ad43018302a5719885f82435672b3ee5102448965fec01" May 17 01:36:32.869413 env[2382]: time="2025-05-17T01:36:32.869394266Z" level=info msg="StopPodSandbox for \"95dbf81c53e15240c7ad43018302a5719885f82435672b3ee5102448965fec01\"" May 17 01:36:32.869645 kubelet[3578]: I0517 01:36:32.869634 3578 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8561c7510ebd1db8d63a65ba5d57c9c27a8b222a60f0ae510ae9abfcd441710c" May 17 01:36:32.870048 env[2382]: time="2025-05-17T01:36:32.870028623Z" level=info msg="StopPodSandbox for \"8561c7510ebd1db8d63a65ba5d57c9c27a8b222a60f0ae510ae9abfcd441710c\"" May 17 01:36:32.870300 kubelet[3578]: I0517 01:36:32.870290 3578 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="19df9716bf6905a955a54fe5b5c367cc75070f9bf8995578b6d4130b576a1bde" May 17 01:36:32.870655 env[2382]: 
time="2025-05-17T01:36:32.870635981Z" level=info msg="StopPodSandbox for \"19df9716bf6905a955a54fe5b5c367cc75070f9bf8995578b6d4130b576a1bde\"" May 17 01:36:32.870886 kubelet[3578]: I0517 01:36:32.870875 3578 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="11b52f82144de40182a6e5987d2cc226fef23efb714c59b1f3b99bd7d4cd317d" May 17 01:36:32.871251 env[2382]: time="2025-05-17T01:36:32.871228499Z" level=info msg="StopPodSandbox for \"11b52f82144de40182a6e5987d2cc226fef23efb714c59b1f3b99bd7d4cd317d\"" May 17 01:36:32.871521 kubelet[3578]: I0517 01:36:32.871508 3578 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4c1dc3e85c19b8facb4a29579507b1ec757fa83bf8a0b5ae8e9996b0223367f8" May 17 01:36:32.871871 env[2382]: time="2025-05-17T01:36:32.871844696Z" level=info msg="StopPodSandbox for \"4c1dc3e85c19b8facb4a29579507b1ec757fa83bf8a0b5ae8e9996b0223367f8\"" May 17 01:36:32.872591 kubelet[3578]: I0517 01:36:32.872573 3578 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f9e66e5dd1f9b3baecffe1bc1c987d707df016acab95ce74eedf01262fe87aa5" May 17 01:36:32.873015 env[2382]: time="2025-05-17T01:36:32.872990491Z" level=info msg="StopPodSandbox for \"f9e66e5dd1f9b3baecffe1bc1c987d707df016acab95ce74eedf01262fe87aa5\"" May 17 01:36:32.873398 kubelet[3578]: I0517 01:36:32.873381 3578 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="28f231b52b1155e4f6bec2ac6458f7ca37f5a74c8a127964684b77a03a3697c3" May 17 01:36:32.873803 env[2382]: time="2025-05-17T01:36:32.873780008Z" level=info msg="StopPodSandbox for \"28f231b52b1155e4f6bec2ac6458f7ca37f5a74c8a127964684b77a03a3697c3\"" May 17 01:36:32.875906 env[2382]: time="2025-05-17T01:36:32.875871880Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.0\"" May 17 01:36:32.890561 env[2382]: time="2025-05-17T01:36:32.890501260Z" level=error msg="StopPodSandbox for 
\"9436b8bd3da051194c642769449b03cc2ee0d34218c534e468283cf060b40c92\" failed" error="failed to destroy network for sandbox \"9436b8bd3da051194c642769449b03cc2ee0d34218c534e468283cf060b40c92\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 01:36:32.890753 kubelet[3578]: E0517 01:36:32.890714 3578 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"9436b8bd3da051194c642769449b03cc2ee0d34218c534e468283cf060b40c92\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="9436b8bd3da051194c642769449b03cc2ee0d34218c534e468283cf060b40c92" May 17 01:36:32.891011 kubelet[3578]: E0517 01:36:32.890776 3578 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"9436b8bd3da051194c642769449b03cc2ee0d34218c534e468283cf060b40c92"} May 17 01:36:32.891011 kubelet[3578]: E0517 01:36:32.890832 3578 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"c4371095-cd92-4ea8-9564-c86ac0de3064\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9436b8bd3da051194c642769449b03cc2ee0d34218c534e468283cf060b40c92\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 17 01:36:32.891011 kubelet[3578]: E0517 01:36:32.890854 3578 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"c4371095-cd92-4ea8-9564-c86ac0de3064\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox 
\\\"9436b8bd3da051194c642769449b03cc2ee0d34218c534e468283cf060b40c92\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-8f77d7b6c-czxt2" podUID="c4371095-cd92-4ea8-9564-c86ac0de3064" May 17 01:36:32.891892 env[2382]: time="2025-05-17T01:36:32.891846175Z" level=error msg="StopPodSandbox for \"8561c7510ebd1db8d63a65ba5d57c9c27a8b222a60f0ae510ae9abfcd441710c\" failed" error="failed to destroy network for sandbox \"8561c7510ebd1db8d63a65ba5d57c9c27a8b222a60f0ae510ae9abfcd441710c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 01:36:32.892011 kubelet[3578]: E0517 01:36:32.891984 3578 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"8561c7510ebd1db8d63a65ba5d57c9c27a8b222a60f0ae510ae9abfcd441710c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="8561c7510ebd1db8d63a65ba5d57c9c27a8b222a60f0ae510ae9abfcd441710c" May 17 01:36:32.892047 kubelet[3578]: E0517 01:36:32.892021 3578 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"8561c7510ebd1db8d63a65ba5d57c9c27a8b222a60f0ae510ae9abfcd441710c"} May 17 01:36:32.892074 kubelet[3578]: E0517 01:36:32.892047 3578 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"90e3149c-0a7a-40ac-8123-c0855cbea15e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8561c7510ebd1db8d63a65ba5d57c9c27a8b222a60f0ae510ae9abfcd441710c\\\": plugin type=\\\"calico\\\" failed 
(delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 17 01:36:32.892115 kubelet[3578]: E0517 01:36:32.892067 3578 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"90e3149c-0a7a-40ac-8123-c0855cbea15e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8561c7510ebd1db8d63a65ba5d57c9c27a8b222a60f0ae510ae9abfcd441710c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-88b655df4-xc9hf" podUID="90e3149c-0a7a-40ac-8123-c0855cbea15e" May 17 01:36:32.892154 env[2382]: time="2025-05-17T01:36:32.892120494Z" level=error msg="StopPodSandbox for \"19df9716bf6905a955a54fe5b5c367cc75070f9bf8995578b6d4130b576a1bde\" failed" error="failed to destroy network for sandbox \"19df9716bf6905a955a54fe5b5c367cc75070f9bf8995578b6d4130b576a1bde\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 01:36:32.892264 kubelet[3578]: E0517 01:36:32.892247 3578 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"19df9716bf6905a955a54fe5b5c367cc75070f9bf8995578b6d4130b576a1bde\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="19df9716bf6905a955a54fe5b5c367cc75070f9bf8995578b6d4130b576a1bde" May 17 01:36:32.892292 kubelet[3578]: E0517 01:36:32.892267 3578 kuberuntime_manager.go:1479] "Failed to stop sandbox" 
podSandboxID={"Type":"containerd","ID":"19df9716bf6905a955a54fe5b5c367cc75070f9bf8995578b6d4130b576a1bde"} May 17 01:36:32.892314 kubelet[3578]: E0517 01:36:32.892294 3578 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"c4bca88a-aef7-4116-a834-44caa9bfe63e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"19df9716bf6905a955a54fe5b5c367cc75070f9bf8995578b6d4130b576a1bde\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 17 01:36:32.892358 kubelet[3578]: E0517 01:36:32.892309 3578 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"c4bca88a-aef7-4116-a834-44caa9bfe63e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"19df9716bf6905a955a54fe5b5c367cc75070f9bf8995578b6d4130b576a1bde\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-54b699d664-blxw6" podUID="c4bca88a-aef7-4116-a834-44caa9bfe63e" May 17 01:36:32.893219 env[2382]: time="2025-05-17T01:36:32.893170330Z" level=error msg="StopPodSandbox for \"11b52f82144de40182a6e5987d2cc226fef23efb714c59b1f3b99bd7d4cd317d\" failed" error="failed to destroy network for sandbox \"11b52f82144de40182a6e5987d2cc226fef23efb714c59b1f3b99bd7d4cd317d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 01:36:32.893275 env[2382]: time="2025-05-17T01:36:32.893187490Z" level=error msg="StopPodSandbox for \"95dbf81c53e15240c7ad43018302a5719885f82435672b3ee5102448965fec01\" failed" 
error="failed to destroy network for sandbox \"95dbf81c53e15240c7ad43018302a5719885f82435672b3ee5102448965fec01\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 01:36:32.893306 env[2382]: time="2025-05-17T01:36:32.893246249Z" level=error msg="StopPodSandbox for \"4c1dc3e85c19b8facb4a29579507b1ec757fa83bf8a0b5ae8e9996b0223367f8\" failed" error="failed to destroy network for sandbox \"4c1dc3e85c19b8facb4a29579507b1ec757fa83bf8a0b5ae8e9996b0223367f8\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 01:36:32.893334 kubelet[3578]: E0517 01:36:32.893301 3578 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"11b52f82144de40182a6e5987d2cc226fef23efb714c59b1f3b99bd7d4cd317d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="11b52f82144de40182a6e5987d2cc226fef23efb714c59b1f3b99bd7d4cd317d" May 17 01:36:32.893334 kubelet[3578]: E0517 01:36:32.893317 3578 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"11b52f82144de40182a6e5987d2cc226fef23efb714c59b1f3b99bd7d4cd317d"} May 17 01:36:32.893388 kubelet[3578]: E0517 01:36:32.893336 3578 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"865bbbf9-e470-43da-b4e3-f1b9c7f9d88b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"11b52f82144de40182a6e5987d2cc226fef23efb714c59b1f3b99bd7d4cd317d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 17 01:36:32.893388 kubelet[3578]: E0517 01:36:32.893352 3578 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"865bbbf9-e470-43da-b4e3-f1b9c7f9d88b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"11b52f82144de40182a6e5987d2cc226fef23efb714c59b1f3b99bd7d4cd317d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-jxv7j" podUID="865bbbf9-e470-43da-b4e3-f1b9c7f9d88b" May 17 01:36:32.893388 kubelet[3578]: E0517 01:36:32.893362 3578 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"4c1dc3e85c19b8facb4a29579507b1ec757fa83bf8a0b5ae8e9996b0223367f8\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="4c1dc3e85c19b8facb4a29579507b1ec757fa83bf8a0b5ae8e9996b0223367f8" May 17 01:36:32.893483 kubelet[3578]: E0517 01:36:32.893366 3578 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"95dbf81c53e15240c7ad43018302a5719885f82435672b3ee5102448965fec01\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="95dbf81c53e15240c7ad43018302a5719885f82435672b3ee5102448965fec01" May 17 01:36:32.893483 kubelet[3578]: E0517 01:36:32.893387 3578 kuberuntime_manager.go:1479] "Failed to stop sandbox" 
podSandboxID={"Type":"containerd","ID":"4c1dc3e85c19b8facb4a29579507b1ec757fa83bf8a0b5ae8e9996b0223367f8"} May 17 01:36:32.893483 kubelet[3578]: E0517 01:36:32.893401 3578 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"95dbf81c53e15240c7ad43018302a5719885f82435672b3ee5102448965fec01"} May 17 01:36:32.893483 kubelet[3578]: E0517 01:36:32.893408 3578 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"cbe86ea2-e6e9-4576-adbf-bcd51e11fb88\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4c1dc3e85c19b8facb4a29579507b1ec757fa83bf8a0b5ae8e9996b0223367f8\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 17 01:36:32.893483 kubelet[3578]: E0517 01:36:32.893426 3578 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"cbe86ea2-e6e9-4576-adbf-bcd51e11fb88\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4c1dc3e85c19b8facb4a29579507b1ec757fa83bf8a0b5ae8e9996b0223367f8\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-sxm46" podUID="cbe86ea2-e6e9-4576-adbf-bcd51e11fb88" May 17 01:36:32.893626 kubelet[3578]: E0517 01:36:32.893429 3578 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"2f14c134-cd19-42e5-bbc4-b6bd9f685215\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"95dbf81c53e15240c7ad43018302a5719885f82435672b3ee5102448965fec01\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such 
file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 17 01:36:32.893626 kubelet[3578]: E0517 01:36:32.893460 3578 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"2f14c134-cd19-42e5-bbc4-b6bd9f685215\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"95dbf81c53e15240c7ad43018302a5719885f82435672b3ee5102448965fec01\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-889cbc855-v2bvs" podUID="2f14c134-cd19-42e5-bbc4-b6bd9f685215" May 17 01:36:32.896173 env[2382]: time="2025-05-17T01:36:32.896134518Z" level=error msg="StopPodSandbox for \"f9e66e5dd1f9b3baecffe1bc1c987d707df016acab95ce74eedf01262fe87aa5\" failed" error="failed to destroy network for sandbox \"f9e66e5dd1f9b3baecffe1bc1c987d707df016acab95ce74eedf01262fe87aa5\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 01:36:32.896312 kubelet[3578]: E0517 01:36:32.896285 3578 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"f9e66e5dd1f9b3baecffe1bc1c987d707df016acab95ce74eedf01262fe87aa5\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="f9e66e5dd1f9b3baecffe1bc1c987d707df016acab95ce74eedf01262fe87aa5" May 17 01:36:32.896343 kubelet[3578]: E0517 01:36:32.896319 3578 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"f9e66e5dd1f9b3baecffe1bc1c987d707df016acab95ce74eedf01262fe87aa5"} May 17 
01:36:32.896383 kubelet[3578]: E0517 01:36:32.896353 3578 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"66187612-efc7-4292-8594-c39d74617f30\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f9e66e5dd1f9b3baecffe1bc1c987d707df016acab95ce74eedf01262fe87aa5\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 17 01:36:32.896383 kubelet[3578]: E0517 01:36:32.896374 3578 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"66187612-efc7-4292-8594-c39d74617f30\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f9e66e5dd1f9b3baecffe1bc1c987d707df016acab95ce74eedf01262fe87aa5\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-889cbc855-tdbbg" podUID="66187612-efc7-4292-8594-c39d74617f30" May 17 01:36:32.896660 env[2382]: time="2025-05-17T01:36:32.896626756Z" level=error msg="StopPodSandbox for \"28f231b52b1155e4f6bec2ac6458f7ca37f5a74c8a127964684b77a03a3697c3\" failed" error="failed to destroy network for sandbox \"28f231b52b1155e4f6bec2ac6458f7ca37f5a74c8a127964684b77a03a3697c3\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 01:36:32.896762 kubelet[3578]: E0517 01:36:32.896747 3578 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"28f231b52b1155e4f6bec2ac6458f7ca37f5a74c8a127964684b77a03a3697c3\": plugin type=\"calico\" failed (delete): 
stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="28f231b52b1155e4f6bec2ac6458f7ca37f5a74c8a127964684b77a03a3697c3" May 17 01:36:32.896795 kubelet[3578]: E0517 01:36:32.896765 3578 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"28f231b52b1155e4f6bec2ac6458f7ca37f5a74c8a127964684b77a03a3697c3"} May 17 01:36:32.896795 kubelet[3578]: E0517 01:36:32.896784 3578 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"a4fc78cc-d2a7-4712-9751-e0c08da51294\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"28f231b52b1155e4f6bec2ac6458f7ca37f5a74c8a127964684b77a03a3697c3\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" May 17 01:36:32.896850 kubelet[3578]: E0517 01:36:32.896801 3578 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"a4fc78cc-d2a7-4712-9751-e0c08da51294\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"28f231b52b1155e4f6bec2ac6458f7ca37f5a74c8a127964684b77a03a3697c3\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-nf2f9" podUID="a4fc78cc-d2a7-4712-9751-e0c08da51294" May 17 01:36:37.556744 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2295526010.mount: Deactivated successfully. 
May 17 01:36:37.573296 env[2382]: time="2025-05-17T01:36:37.573262258Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node:v3.30.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 01:36:37.573913 env[2382]: time="2025-05-17T01:36:37.573890016Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:f7148fde8e28b27da58f84cac134cdc53b5df321cda13c660192f06839670732,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 01:36:37.574921 env[2382]: time="2025-05-17T01:36:37.574900173Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/node:v3.30.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 01:36:37.575818 env[2382]: time="2025-05-17T01:36:37.575797130Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node@sha256:7cb61ea47ca0a8e6d0526a42da4f1e399b37ccd13339d0776d272465cb7ee012,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 01:36:37.576333 env[2382]: time="2025-05-17T01:36:37.576312129Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.0\" returns image reference \"sha256:f7148fde8e28b27da58f84cac134cdc53b5df321cda13c660192f06839670732\"" May 17 01:36:37.582472 env[2382]: time="2025-05-17T01:36:37.582446391Z" level=info msg="CreateContainer within sandbox \"e26ace54e5182ca0cea4fce43501917ed6091b24ac35c4dfe9e62e7b4ce15f50\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" May 17 01:36:37.587414 env[2382]: time="2025-05-17T01:36:37.587372296Z" level=info msg="CreateContainer within sandbox \"e26ace54e5182ca0cea4fce43501917ed6091b24ac35c4dfe9e62e7b4ce15f50\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"b9ea8be57e567994c125e013c6a1328ec187984ff82576da15049a9360e80fea\"" May 17 01:36:37.587724 env[2382]: time="2025-05-17T01:36:37.587701695Z" level=info msg="StartContainer for 
\"b9ea8be57e567994c125e013c6a1328ec187984ff82576da15049a9360e80fea\"" May 17 01:36:37.630313 env[2382]: time="2025-05-17T01:36:37.630279650Z" level=info msg="StartContainer for \"b9ea8be57e567994c125e013c6a1328ec187984ff82576da15049a9360e80fea\" returns successfully" May 17 01:36:37.783940 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. May 17 01:36:37.784004 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. May 17 01:36:37.840757 env[2382]: time="2025-05-17T01:36:37.840678912Z" level=info msg="StopPodSandbox for \"8561c7510ebd1db8d63a65ba5d57c9c27a8b222a60f0ae510ae9abfcd441710c\"" May 17 01:36:37.893661 kubelet[3578]: I0517 01:36:37.893612 3578 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-9kj5p" podStartSLOduration=0.768853263 podStartE2EDuration="12.893596317s" podCreationTimestamp="2025-05-17 01:36:25 +0000 UTC" firstStartedPulling="2025-05-17 01:36:25.452194953 +0000 UTC m=+19.706559274" lastFinishedPulling="2025-05-17 01:36:37.576938007 +0000 UTC m=+31.831302328" observedRunningTime="2025-05-17 01:36:37.893471998 +0000 UTC m=+32.147836319" watchObservedRunningTime="2025-05-17 01:36:37.893596317 +0000 UTC m=+32.147960638" May 17 01:36:37.925999 env[2382]: 2025-05-17 01:36:37.880 [INFO][5592] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="8561c7510ebd1db8d63a65ba5d57c9c27a8b222a60f0ae510ae9abfcd441710c" May 17 01:36:37.925999 env[2382]: 2025-05-17 01:36:37.880 [INFO][5592] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="8561c7510ebd1db8d63a65ba5d57c9c27a8b222a60f0ae510ae9abfcd441710c" iface="eth0" netns="/var/run/netns/cni-1977331a-5669-382b-6735-600034189c91" May 17 01:36:37.925999 env[2382]: 2025-05-17 01:36:37.881 [INFO][5592] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="8561c7510ebd1db8d63a65ba5d57c9c27a8b222a60f0ae510ae9abfcd441710c" iface="eth0" netns="/var/run/netns/cni-1977331a-5669-382b-6735-600034189c91" May 17 01:36:37.925999 env[2382]: 2025-05-17 01:36:37.881 [INFO][5592] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="8561c7510ebd1db8d63a65ba5d57c9c27a8b222a60f0ae510ae9abfcd441710c" iface="eth0" netns="/var/run/netns/cni-1977331a-5669-382b-6735-600034189c91" May 17 01:36:37.925999 env[2382]: 2025-05-17 01:36:37.881 [INFO][5592] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="8561c7510ebd1db8d63a65ba5d57c9c27a8b222a60f0ae510ae9abfcd441710c" May 17 01:36:37.925999 env[2382]: 2025-05-17 01:36:37.881 [INFO][5592] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8561c7510ebd1db8d63a65ba5d57c9c27a8b222a60f0ae510ae9abfcd441710c" May 17 01:36:37.925999 env[2382]: 2025-05-17 01:36:37.913 [INFO][5623] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="8561c7510ebd1db8d63a65ba5d57c9c27a8b222a60f0ae510ae9abfcd441710c" HandleID="k8s-pod-network.8561c7510ebd1db8d63a65ba5d57c9c27a8b222a60f0ae510ae9abfcd441710c" Workload="ci--3510.3.7--n--8226148b53-k8s-whisker--88b655df4--xc9hf-eth0" May 17 01:36:37.925999 env[2382]: 2025-05-17 01:36:37.913 [INFO][5623] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 01:36:37.925999 env[2382]: 2025-05-17 01:36:37.914 [INFO][5623] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 01:36:37.925999 env[2382]: 2025-05-17 01:36:37.921 [WARNING][5623] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8561c7510ebd1db8d63a65ba5d57c9c27a8b222a60f0ae510ae9abfcd441710c" HandleID="k8s-pod-network.8561c7510ebd1db8d63a65ba5d57c9c27a8b222a60f0ae510ae9abfcd441710c" Workload="ci--3510.3.7--n--8226148b53-k8s-whisker--88b655df4--xc9hf-eth0" May 17 01:36:37.925999 env[2382]: 2025-05-17 01:36:37.921 [INFO][5623] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="8561c7510ebd1db8d63a65ba5d57c9c27a8b222a60f0ae510ae9abfcd441710c" HandleID="k8s-pod-network.8561c7510ebd1db8d63a65ba5d57c9c27a8b222a60f0ae510ae9abfcd441710c" Workload="ci--3510.3.7--n--8226148b53-k8s-whisker--88b655df4--xc9hf-eth0" May 17 01:36:37.925999 env[2382]: 2025-05-17 01:36:37.922 [INFO][5623] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 01:36:37.925999 env[2382]: 2025-05-17 01:36:37.924 [INFO][5592] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="8561c7510ebd1db8d63a65ba5d57c9c27a8b222a60f0ae510ae9abfcd441710c" May 17 01:36:37.926342 env[2382]: time="2025-05-17T01:36:37.926151662Z" level=info msg="TearDown network for sandbox \"8561c7510ebd1db8d63a65ba5d57c9c27a8b222a60f0ae510ae9abfcd441710c\" successfully" May 17 01:36:37.926342 env[2382]: time="2025-05-17T01:36:37.926178221Z" level=info msg="StopPodSandbox for \"8561c7510ebd1db8d63a65ba5d57c9c27a8b222a60f0ae510ae9abfcd441710c\" returns successfully" May 17 01:36:38.127117 kubelet[3578]: I0517 01:36:38.127079 3578 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/90e3149c-0a7a-40ac-8123-c0855cbea15e-whisker-backend-key-pair\") pod \"90e3149c-0a7a-40ac-8123-c0855cbea15e\" (UID: \"90e3149c-0a7a-40ac-8123-c0855cbea15e\") " May 17 01:36:38.127221 kubelet[3578]: I0517 01:36:38.127153 3578 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5zsj7\" (UniqueName: \"kubernetes.io/projected/90e3149c-0a7a-40ac-8123-c0855cbea15e-kube-api-access-5zsj7\") pod 
\"90e3149c-0a7a-40ac-8123-c0855cbea15e\" (UID: \"90e3149c-0a7a-40ac-8123-c0855cbea15e\") " May 17 01:36:38.127221 kubelet[3578]: I0517 01:36:38.127198 3578 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/90e3149c-0a7a-40ac-8123-c0855cbea15e-whisker-ca-bundle\") pod \"90e3149c-0a7a-40ac-8123-c0855cbea15e\" (UID: \"90e3149c-0a7a-40ac-8123-c0855cbea15e\") " May 17 01:36:38.127590 kubelet[3578]: I0517 01:36:38.127573 3578 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/90e3149c-0a7a-40ac-8123-c0855cbea15e-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "90e3149c-0a7a-40ac-8123-c0855cbea15e" (UID: "90e3149c-0a7a-40ac-8123-c0855cbea15e"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" May 17 01:36:38.129778 kubelet[3578]: I0517 01:36:38.129756 3578 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/90e3149c-0a7a-40ac-8123-c0855cbea15e-kube-api-access-5zsj7" (OuterVolumeSpecName: "kube-api-access-5zsj7") pod "90e3149c-0a7a-40ac-8123-c0855cbea15e" (UID: "90e3149c-0a7a-40ac-8123-c0855cbea15e"). InnerVolumeSpecName "kube-api-access-5zsj7". PluginName "kubernetes.io/projected", VolumeGidValue "" May 17 01:36:38.129847 kubelet[3578]: I0517 01:36:38.129824 3578 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/90e3149c-0a7a-40ac-8123-c0855cbea15e-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "90e3149c-0a7a-40ac-8123-c0855cbea15e" (UID: "90e3149c-0a7a-40ac-8123-c0855cbea15e"). InnerVolumeSpecName "whisker-backend-key-pair". 
PluginName "kubernetes.io/secret", VolumeGidValue "" May 17 01:36:38.228132 kubelet[3578]: I0517 01:36:38.228106 3578 reconciler_common.go:293] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/90e3149c-0a7a-40ac-8123-c0855cbea15e-whisker-backend-key-pair\") on node \"ci-3510.3.7-n-8226148b53\" DevicePath \"\"" May 17 01:36:38.228132 kubelet[3578]: I0517 01:36:38.228128 3578 reconciler_common.go:293] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/90e3149c-0a7a-40ac-8123-c0855cbea15e-whisker-ca-bundle\") on node \"ci-3510.3.7-n-8226148b53\" DevicePath \"\"" May 17 01:36:38.228236 kubelet[3578]: I0517 01:36:38.228139 3578 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5zsj7\" (UniqueName: \"kubernetes.io/projected/90e3149c-0a7a-40ac-8123-c0855cbea15e-kube-api-access-5zsj7\") on node \"ci-3510.3.7-n-8226148b53\" DevicePath \"\"" May 17 01:36:38.557689 systemd[1]: run-netns-cni\x2d1977331a\x2d5669\x2d382b\x2d6735\x2d600034189c91.mount: Deactivated successfully. May 17 01:36:38.557813 systemd[1]: var-lib-kubelet-pods-90e3149c\x2d0a7a\x2d40ac\x2d8123\x2dc0855cbea15e-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d5zsj7.mount: Deactivated successfully. May 17 01:36:38.557913 systemd[1]: var-lib-kubelet-pods-90e3149c\x2d0a7a\x2d40ac\x2d8123\x2dc0855cbea15e-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. 
May 17 01:36:38.884951 kubelet[3578]: I0517 01:36:38.884930 3578 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 17 01:36:39.031890 kubelet[3578]: I0517 01:36:39.031851 3578 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/34828fa1-51fa-4503-8a6b-b736774334d2-whisker-backend-key-pair\") pod \"whisker-7cfb685fbd-76hs6\" (UID: \"34828fa1-51fa-4503-8a6b-b736774334d2\") " pod="calico-system/whisker-7cfb685fbd-76hs6" May 17 01:36:39.031890 kubelet[3578]: I0517 01:36:39.031896 3578 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nnbsj\" (UniqueName: \"kubernetes.io/projected/34828fa1-51fa-4503-8a6b-b736774334d2-kube-api-access-nnbsj\") pod \"whisker-7cfb685fbd-76hs6\" (UID: \"34828fa1-51fa-4503-8a6b-b736774334d2\") " pod="calico-system/whisker-7cfb685fbd-76hs6" May 17 01:36:39.032240 kubelet[3578]: I0517 01:36:39.031933 3578 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/34828fa1-51fa-4503-8a6b-b736774334d2-whisker-ca-bundle\") pod \"whisker-7cfb685fbd-76hs6\" (UID: \"34828fa1-51fa-4503-8a6b-b736774334d2\") " pod="calico-system/whisker-7cfb685fbd-76hs6" May 17 01:36:39.051000 audit[5756]: AVC avc: denied { write } for pid=5756 comm="tee" name="fd" dev="proc" ino=31684 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 May 17 01:36:39.051000 audit[5757]: AVC avc: denied { write } for pid=5757 comm="tee" name="fd" dev="proc" ino=94195 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 May 17 01:36:39.135740 kernel: audit: type=1400 audit(1747445799.051:281): avc: denied { write } for pid=5756 comm="tee" name="fd" dev="proc" ino=31684 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 May 17 01:36:39.135846 kernel: audit: type=1400 audit(1747445799.051:282): avc: denied { write } for pid=5757 comm="tee" name="fd" dev="proc" ino=94195 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 May 17 01:36:39.135871 kernel: audit: type=1300 audit(1747445799.051:281): arch=c00000b7 syscall=56 success=yes exit=3 a0=ffffffffffffff9c a1=ffffe5ce77d5 a2=241 a3=1b6 items=1 ppid=5721 pid=5756 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 01:36:39.051000 audit[5756]: SYSCALL arch=c00000b7 syscall=56 success=yes exit=3 a0=ffffffffffffff9c a1=ffffe5ce77d5 a2=241 a3=1b6 items=1 ppid=5721 pid=5756 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 01:36:39.051000 audit[5757]: SYSCALL arch=c00000b7 syscall=56 success=yes exit=3 a0=ffffffffffffff9c a1=fffff7e647d3 a2=241 a3=1b6 items=1 ppid=5723 pid=5757 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 01:36:39.212402 env[2382]: time="2025-05-17T01:36:39.212372984Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7cfb685fbd-76hs6,Uid:34828fa1-51fa-4503-8a6b-b736774334d2,Namespace:calico-system,Attempt:0,}" May 17 01:36:39.247203 kernel: audit: type=1300 audit(1747445799.051:282): arch=c00000b7 syscall=56 success=yes exit=3 a0=ffffffffffffff9c a1=fffff7e647d3 a2=241 a3=1b6 items=1 ppid=5723 pid=5757 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" 
subj=system_u:system_r:kernel_t:s0 key=(null) May 17 01:36:39.247279 kernel: audit: type=1307 audit(1747445799.051:281): cwd="/etc/service/enabled/cni/log" May 17 01:36:39.051000 audit: CWD cwd="/etc/service/enabled/cni/log" May 17 01:36:39.051000 audit: CWD cwd="/etc/service/enabled/bird6/log" May 17 01:36:39.276589 kernel: audit: type=1307 audit(1747445799.051:282): cwd="/etc/service/enabled/bird6/log" May 17 01:36:39.276659 kernel: audit: type=1302 audit(1747445799.051:281): item=0 name="/dev/fd/63" inode=31861 dev=00:0b mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 01:36:39.051000 audit: PATH item=0 name="/dev/fd/63" inode=31861 dev=00:0b mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 01:36:39.051000 audit: PATH item=0 name="/dev/fd/63" inode=40086 dev=00:0b mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 01:36:39.354831 kernel: audit: type=1302 audit(1747445799.051:282): item=0 name="/dev/fd/63" inode=40086 dev=00:0b mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 01:36:39.354927 kernel: audit: type=1327 audit(1747445799.051:281): proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 May 17 01:36:39.051000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 May 17 01:36:39.360439 systemd-networkd[2041]: cali930030f60c9: Link UP May 17 01:36:39.392474 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready May 17 01:36:39.392500 kernel: audit: type=1327 
audit(1747445799.051:282): proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 May 17 01:36:39.051000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 May 17 01:36:39.440548 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali930030f60c9: link becomes ready May 17 01:36:39.051000 audit[5759]: AVC avc: denied { write } for pid=5759 comm="tee" name="fd" dev="proc" ino=59506 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 May 17 01:36:39.051000 audit[5760]: AVC avc: denied { write } for pid=5760 comm="tee" name="fd" dev="proc" ino=40089 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 May 17 01:36:39.051000 audit[5759]: SYSCALL arch=c00000b7 syscall=56 success=yes exit=3 a0=ffffffffffffff9c a1=fffff47607d3 a2=241 a3=1b6 items=1 ppid=5720 pid=5759 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 01:36:39.051000 audit: CWD cwd="/etc/service/enabled/felix/log" May 17 01:36:39.051000 audit: PATH item=0 name="/dev/fd/63" inode=18495 dev=00:0b mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 01:36:39.051000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 May 17 01:36:39.051000 audit[5760]: SYSCALL arch=c00000b7 syscall=56 success=yes exit=3 a0=ffffffffffffff9c a1=ffffee1547c4 a2=241 a3=1b6 items=1 ppid=5724 pid=5760 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 01:36:39.051000 audit: CWD cwd="/etc/service/enabled/node-status-reporter/log" May 17 01:36:39.051000 audit: PATH item=0 name="/dev/fd/63" inode=45333 dev=00:0b mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 01:36:39.051000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 May 17 01:36:39.051000 audit[5767]: AVC avc: denied { write } for pid=5767 comm="tee" name="fd" dev="proc" ino=71772 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 May 17 01:36:39.051000 audit[5767]: SYSCALL arch=c00000b7 syscall=56 success=yes exit=3 a0=ffffffffffffff9c a1=ffffeefc67c3 a2=241 a3=1b6 items=1 ppid=5725 pid=5767 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 01:36:39.051000 audit: CWD cwd="/etc/service/enabled/allocate-tunnel-addrs/log" May 17 01:36:39.051000 audit: PATH item=0 name="/dev/fd/63" inode=49470 dev=00:0b mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 01:36:39.051000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 May 17 01:36:39.051000 audit[5766]: AVC avc: denied { write } for pid=5766 comm="tee" name="fd" dev="proc" ino=51956 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 May 17 01:36:39.051000 audit[5766]: SYSCALL arch=c00000b7 syscall=56 success=yes exit=3 a0=ffffffffffffff9c 
a1=fffffdd5a7d4 a2=241 a3=1b6 items=1 ppid=5728 pid=5766 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 01:36:39.051000 audit: CWD cwd="/etc/service/enabled/bird/log" May 17 01:36:39.051000 audit: PATH item=0 name="/dev/fd/63" inode=61552 dev=00:0b mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 01:36:39.051000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 May 17 01:36:39.051000 audit[5768]: AVC avc: denied { write } for pid=5768 comm="tee" name="fd" dev="proc" ino=43298 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 May 17 01:36:39.051000 audit[5768]: SYSCALL arch=c00000b7 syscall=56 success=yes exit=3 a0=ffffffffffffff9c a1=ffffd49187d3 a2=241 a3=1b6 items=1 ppid=5727 pid=5768 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 01:36:39.051000 audit: CWD cwd="/etc/service/enabled/confd/log" May 17 01:36:39.051000 audit: PATH item=0 name="/dev/fd/63" inode=85277 dev=00:0b mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 01:36:39.051000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 May 17 01:36:39.454060 systemd-networkd[2041]: cali930030f60c9: Gained carrier May 17 01:36:39.462029 env[2382]: 2025-05-17 01:36:39.234 [INFO][5872] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist May 17 
01:36:39.462029 env[2382]: 2025-05-17 01:36:39.245 [INFO][5872] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3510.3.7--n--8226148b53-k8s-whisker--7cfb685fbd--76hs6-eth0 whisker-7cfb685fbd- calico-system 34828fa1-51fa-4503-8a6b-b736774334d2 875 0 2025-05-17 01:36:38 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:7cfb685fbd projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-3510.3.7-n-8226148b53 whisker-7cfb685fbd-76hs6 eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali930030f60c9 [] [] }} ContainerID="70c5d5791999d002635fcfeb4ef8b5596068709de1561d3e06210435a1b4a24b" Namespace="calico-system" Pod="whisker-7cfb685fbd-76hs6" WorkloadEndpoint="ci--3510.3.7--n--8226148b53-k8s-whisker--7cfb685fbd--76hs6-" May 17 01:36:39.462029 env[2382]: 2025-05-17 01:36:39.245 [INFO][5872] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="70c5d5791999d002635fcfeb4ef8b5596068709de1561d3e06210435a1b4a24b" Namespace="calico-system" Pod="whisker-7cfb685fbd-76hs6" WorkloadEndpoint="ci--3510.3.7--n--8226148b53-k8s-whisker--7cfb685fbd--76hs6-eth0" May 17 01:36:39.462029 env[2382]: 2025-05-17 01:36:39.269 [INFO][5896] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="70c5d5791999d002635fcfeb4ef8b5596068709de1561d3e06210435a1b4a24b" HandleID="k8s-pod-network.70c5d5791999d002635fcfeb4ef8b5596068709de1561d3e06210435a1b4a24b" Workload="ci--3510.3.7--n--8226148b53-k8s-whisker--7cfb685fbd--76hs6-eth0" May 17 01:36:39.462029 env[2382]: 2025-05-17 01:36:39.270 [INFO][5896] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="70c5d5791999d002635fcfeb4ef8b5596068709de1561d3e06210435a1b4a24b" HandleID="k8s-pod-network.70c5d5791999d002635fcfeb4ef8b5596068709de1561d3e06210435a1b4a24b" 
Workload="ci--3510.3.7--n--8226148b53-k8s-whisker--7cfb685fbd--76hs6-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40006d02b0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-3510.3.7-n-8226148b53", "pod":"whisker-7cfb685fbd-76hs6", "timestamp":"2025-05-17 01:36:39.269870516 +0000 UTC"}, Hostname:"ci-3510.3.7-n-8226148b53", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 17 01:36:39.462029 env[2382]: 2025-05-17 01:36:39.270 [INFO][5896] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 01:36:39.462029 env[2382]: 2025-05-17 01:36:39.270 [INFO][5896] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 01:36:39.462029 env[2382]: 2025-05-17 01:36:39.270 [INFO][5896] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3510.3.7-n-8226148b53' May 17 01:36:39.462029 env[2382]: 2025-05-17 01:36:39.279 [INFO][5896] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.70c5d5791999d002635fcfeb4ef8b5596068709de1561d3e06210435a1b4a24b" host="ci-3510.3.7-n-8226148b53" May 17 01:36:39.462029 env[2382]: 2025-05-17 01:36:39.283 [INFO][5896] ipam/ipam.go 394: Looking up existing affinities for host host="ci-3510.3.7-n-8226148b53" May 17 01:36:39.462029 env[2382]: 2025-05-17 01:36:39.286 [INFO][5896] ipam/ipam.go 511: Trying affinity for 192.168.63.0/26 host="ci-3510.3.7-n-8226148b53" May 17 01:36:39.462029 env[2382]: 2025-05-17 01:36:39.288 [INFO][5896] ipam/ipam.go 158: Attempting to load block cidr=192.168.63.0/26 host="ci-3510.3.7-n-8226148b53" May 17 01:36:39.462029 env[2382]: 2025-05-17 01:36:39.289 [INFO][5896] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.63.0/26 host="ci-3510.3.7-n-8226148b53" May 17 01:36:39.462029 env[2382]: 2025-05-17 01:36:39.289 [INFO][5896] ipam/ipam.go 
1220: Attempting to assign 1 addresses from block block=192.168.63.0/26 handle="k8s-pod-network.70c5d5791999d002635fcfeb4ef8b5596068709de1561d3e06210435a1b4a24b" host="ci-3510.3.7-n-8226148b53" May 17 01:36:39.462029 env[2382]: 2025-05-17 01:36:39.291 [INFO][5896] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.70c5d5791999d002635fcfeb4ef8b5596068709de1561d3e06210435a1b4a24b May 17 01:36:39.462029 env[2382]: 2025-05-17 01:36:39.293 [INFO][5896] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.63.0/26 handle="k8s-pod-network.70c5d5791999d002635fcfeb4ef8b5596068709de1561d3e06210435a1b4a24b" host="ci-3510.3.7-n-8226148b53" May 17 01:36:39.462029 env[2382]: 2025-05-17 01:36:39.296 [INFO][5896] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.63.1/26] block=192.168.63.0/26 handle="k8s-pod-network.70c5d5791999d002635fcfeb4ef8b5596068709de1561d3e06210435a1b4a24b" host="ci-3510.3.7-n-8226148b53" May 17 01:36:39.462029 env[2382]: 2025-05-17 01:36:39.296 [INFO][5896] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.63.1/26] handle="k8s-pod-network.70c5d5791999d002635fcfeb4ef8b5596068709de1561d3e06210435a1b4a24b" host="ci-3510.3.7-n-8226148b53" May 17 01:36:39.462029 env[2382]: 2025-05-17 01:36:39.296 [INFO][5896] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 17 01:36:39.462029 env[2382]: 2025-05-17 01:36:39.296 [INFO][5896] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.63.1/26] IPv6=[] ContainerID="70c5d5791999d002635fcfeb4ef8b5596068709de1561d3e06210435a1b4a24b" HandleID="k8s-pod-network.70c5d5791999d002635fcfeb4ef8b5596068709de1561d3e06210435a1b4a24b" Workload="ci--3510.3.7--n--8226148b53-k8s-whisker--7cfb685fbd--76hs6-eth0" May 17 01:36:39.462493 env[2382]: 2025-05-17 01:36:39.298 [INFO][5872] cni-plugin/k8s.go 418: Populated endpoint ContainerID="70c5d5791999d002635fcfeb4ef8b5596068709de1561d3e06210435a1b4a24b" Namespace="calico-system" Pod="whisker-7cfb685fbd-76hs6" WorkloadEndpoint="ci--3510.3.7--n--8226148b53-k8s-whisker--7cfb685fbd--76hs6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.7--n--8226148b53-k8s-whisker--7cfb685fbd--76hs6-eth0", GenerateName:"whisker-7cfb685fbd-", Namespace:"calico-system", SelfLink:"", UID:"34828fa1-51fa-4503-8a6b-b736774334d2", ResourceVersion:"875", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 1, 36, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"7cfb685fbd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.7-n-8226148b53", ContainerID:"", Pod:"whisker-7cfb685fbd-76hs6", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.63.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, 
InterfaceName:"cali930030f60c9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 01:36:39.462493 env[2382]: 2025-05-17 01:36:39.298 [INFO][5872] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.63.1/32] ContainerID="70c5d5791999d002635fcfeb4ef8b5596068709de1561d3e06210435a1b4a24b" Namespace="calico-system" Pod="whisker-7cfb685fbd-76hs6" WorkloadEndpoint="ci--3510.3.7--n--8226148b53-k8s-whisker--7cfb685fbd--76hs6-eth0" May 17 01:36:39.462493 env[2382]: 2025-05-17 01:36:39.298 [INFO][5872] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali930030f60c9 ContainerID="70c5d5791999d002635fcfeb4ef8b5596068709de1561d3e06210435a1b4a24b" Namespace="calico-system" Pod="whisker-7cfb685fbd-76hs6" WorkloadEndpoint="ci--3510.3.7--n--8226148b53-k8s-whisker--7cfb685fbd--76hs6-eth0" May 17 01:36:39.462493 env[2382]: 2025-05-17 01:36:39.453 [INFO][5872] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="70c5d5791999d002635fcfeb4ef8b5596068709de1561d3e06210435a1b4a24b" Namespace="calico-system" Pod="whisker-7cfb685fbd-76hs6" WorkloadEndpoint="ci--3510.3.7--n--8226148b53-k8s-whisker--7cfb685fbd--76hs6-eth0" May 17 01:36:39.462493 env[2382]: 2025-05-17 01:36:39.454 [INFO][5872] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="70c5d5791999d002635fcfeb4ef8b5596068709de1561d3e06210435a1b4a24b" Namespace="calico-system" Pod="whisker-7cfb685fbd-76hs6" WorkloadEndpoint="ci--3510.3.7--n--8226148b53-k8s-whisker--7cfb685fbd--76hs6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.7--n--8226148b53-k8s-whisker--7cfb685fbd--76hs6-eth0", GenerateName:"whisker-7cfb685fbd-", Namespace:"calico-system", SelfLink:"", UID:"34828fa1-51fa-4503-8a6b-b736774334d2", ResourceVersion:"875", Generation:0, 
CreationTimestamp:time.Date(2025, time.May, 17, 1, 36, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"7cfb685fbd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.7-n-8226148b53", ContainerID:"70c5d5791999d002635fcfeb4ef8b5596068709de1561d3e06210435a1b4a24b", Pod:"whisker-7cfb685fbd-76hs6", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.63.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali930030f60c9", MAC:"16:49:ed:da:a8:25", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 01:36:39.462493 env[2382]: 2025-05-17 01:36:39.460 [INFO][5872] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="70c5d5791999d002635fcfeb4ef8b5596068709de1561d3e06210435a1b4a24b" Namespace="calico-system" Pod="whisker-7cfb685fbd-76hs6" WorkloadEndpoint="ci--3510.3.7--n--8226148b53-k8s-whisker--7cfb685fbd--76hs6-eth0" May 17 01:36:39.470765 env[2382]: time="2025-05-17T01:36:39.470714878Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 01:36:39.470765 env[2382]: time="2025-05-17T01:36:39.470752358Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 01:36:39.470765 env[2382]: time="2025-05-17T01:36:39.470763038Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 01:36:39.470931 env[2382]: time="2025-05-17T01:36:39.470904797Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/70c5d5791999d002635fcfeb4ef8b5596068709de1561d3e06210435a1b4a24b pid=5929 runtime=io.containerd.runc.v2 May 17 01:36:39.508076 env[2382]: time="2025-05-17T01:36:39.508012741Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7cfb685fbd-76hs6,Uid:34828fa1-51fa-4503-8a6b-b736774334d2,Namespace:calico-system,Attempt:0,} returns sandbox id \"70c5d5791999d002635fcfeb4ef8b5596068709de1561d3e06210435a1b4a24b\"" May 17 01:36:39.509214 env[2382]: time="2025-05-17T01:36:39.509187218Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\"" May 17 01:36:39.669925 env[2382]: time="2025-05-17T01:36:39.669805604Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" host=ghcr.io May 17 01:36:39.670243 env[2382]: time="2025-05-17T01:36:39.670207123Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" May 17 01:36:39.670495 kubelet[3578]: E0517 01:36:39.670461 3578 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 
Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 17 01:36:39.670552 kubelet[3578]: E0517 01:36:39.670506 3578 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 17 01:36:39.670643 kubelet[3578]: E0517 01:36:39.670615 3578 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.0,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:98d2fa92e9aa4ed787c6df8c04195565,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-nnbsj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]Cont
ainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-7cfb685fbd-76hs6_calico-system(34828fa1-51fa-4503-8a6b-b736774334d2): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" logger="UnhandledError" May 17 01:36:39.672223 env[2382]: time="2025-05-17T01:36:39.672200398Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\"" May 17 01:36:39.817882 kubelet[3578]: I0517 01:36:39.817845 3578 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="90e3149c-0a7a-40ac-8123-c0855cbea15e" path="/var/lib/kubelet/pods/90e3149c-0a7a-40ac-8123-c0855cbea15e/volumes" May 17 01:36:39.830336 env[2382]: time="2025-05-17T01:36:39.830298150Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" host=ghcr.io May 17 01:36:39.830563 env[2382]: time="2025-05-17T01:36:39.830540789Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" May 17 01:36:39.830689 kubelet[3578]: E0517 01:36:39.830663 3578 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 17 01:36:39.830716 kubelet[3578]: E0517 01:36:39.830699 3578 kuberuntime_image.go:55] "Failed to 
pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 17 01:36:39.830869 kubelet[3578]: E0517 01:36:39.830807 3578 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nnbsj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeD
efault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-7cfb685fbd-76hs6_calico-system(34828fa1-51fa-4503-8a6b-b736774334d2): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" logger="UnhandledError" May 17 01:36:39.831996 kubelet[3578]: E0517 01:36:39.831970 3578 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden\"]" pod="calico-system/whisker-7cfb685fbd-76hs6" podUID="34828fa1-51fa-4503-8a6b-b736774334d2" May 17 01:36:39.887298 kubelet[3578]: E0517 01:36:39.887272 3578 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\"\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\"\"]" pod="calico-system/whisker-7cfb685fbd-76hs6" 
podUID="34828fa1-51fa-4503-8a6b-b736774334d2" May 17 01:36:39.898000 audit[5971]: NETFILTER_CFG table=filter:99 family=2 entries=22 op=nft_register_rule pid=5971 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 17 01:36:39.898000 audit[5971]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=8224 a0=3 a1=ffffc47bed30 a2=0 a3=1 items=0 ppid=3808 pid=5971 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 01:36:39.898000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 17 01:36:39.907000 audit[5971]: NETFILTER_CFG table=nat:100 family=2 entries=12 op=nft_register_rule pid=5971 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 17 01:36:39.907000 audit[5971]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffc47bed30 a2=0 a3=1 items=0 ppid=3808 pid=5971 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 01:36:39.907000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 17 01:36:40.677962 systemd-networkd[2041]: cali930030f60c9: Gained IPv6LL May 17 01:36:40.888868 kubelet[3578]: E0517 01:36:40.888831 3578 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\"\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\"\"]" pod="calico-system/whisker-7cfb685fbd-76hs6" podUID="34828fa1-51fa-4503-8a6b-b736774334d2" 
May 17 01:36:43.816496 env[2382]: time="2025-05-17T01:36:43.816440351Z" level=info msg="StopPodSandbox for \"95dbf81c53e15240c7ad43018302a5719885f82435672b3ee5102448965fec01\"" May 17 01:36:43.883404 env[2382]: 2025-05-17 01:36:43.851 [INFO][6223] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="95dbf81c53e15240c7ad43018302a5719885f82435672b3ee5102448965fec01" May 17 01:36:43.883404 env[2382]: 2025-05-17 01:36:43.851 [INFO][6223] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="95dbf81c53e15240c7ad43018302a5719885f82435672b3ee5102448965fec01" iface="eth0" netns="/var/run/netns/cni-e9aa2ecf-f268-692d-ead8-13e73a80be2e" May 17 01:36:43.883404 env[2382]: 2025-05-17 01:36:43.851 [INFO][6223] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="95dbf81c53e15240c7ad43018302a5719885f82435672b3ee5102448965fec01" iface="eth0" netns="/var/run/netns/cni-e9aa2ecf-f268-692d-ead8-13e73a80be2e" May 17 01:36:43.883404 env[2382]: 2025-05-17 01:36:43.852 [INFO][6223] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="95dbf81c53e15240c7ad43018302a5719885f82435672b3ee5102448965fec01" iface="eth0" netns="/var/run/netns/cni-e9aa2ecf-f268-692d-ead8-13e73a80be2e" May 17 01:36:43.883404 env[2382]: 2025-05-17 01:36:43.852 [INFO][6223] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="95dbf81c53e15240c7ad43018302a5719885f82435672b3ee5102448965fec01" May 17 01:36:43.883404 env[2382]: 2025-05-17 01:36:43.852 [INFO][6223] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="95dbf81c53e15240c7ad43018302a5719885f82435672b3ee5102448965fec01" May 17 01:36:43.883404 env[2382]: 2025-05-17 01:36:43.869 [INFO][6242] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="95dbf81c53e15240c7ad43018302a5719885f82435672b3ee5102448965fec01" HandleID="k8s-pod-network.95dbf81c53e15240c7ad43018302a5719885f82435672b3ee5102448965fec01" Workload="ci--3510.3.7--n--8226148b53-k8s-calico--apiserver--889cbc855--v2bvs-eth0" May 17 01:36:43.883404 env[2382]: 2025-05-17 01:36:43.869 [INFO][6242] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 01:36:43.883404 env[2382]: 2025-05-17 01:36:43.869 [INFO][6242] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 01:36:43.883404 env[2382]: 2025-05-17 01:36:43.877 [WARNING][6242] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="95dbf81c53e15240c7ad43018302a5719885f82435672b3ee5102448965fec01" HandleID="k8s-pod-network.95dbf81c53e15240c7ad43018302a5719885f82435672b3ee5102448965fec01" Workload="ci--3510.3.7--n--8226148b53-k8s-calico--apiserver--889cbc855--v2bvs-eth0" May 17 01:36:43.883404 env[2382]: 2025-05-17 01:36:43.877 [INFO][6242] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="95dbf81c53e15240c7ad43018302a5719885f82435672b3ee5102448965fec01" HandleID="k8s-pod-network.95dbf81c53e15240c7ad43018302a5719885f82435672b3ee5102448965fec01" Workload="ci--3510.3.7--n--8226148b53-k8s-calico--apiserver--889cbc855--v2bvs-eth0" May 17 01:36:43.883404 env[2382]: 2025-05-17 01:36:43.880 [INFO][6242] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 01:36:43.883404 env[2382]: 2025-05-17 01:36:43.881 [INFO][6223] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="95dbf81c53e15240c7ad43018302a5719885f82435672b3ee5102448965fec01" May 17 01:36:43.883837 env[2382]: time="2025-05-17T01:36:43.883562418Z" level=info msg="TearDown network for sandbox \"95dbf81c53e15240c7ad43018302a5719885f82435672b3ee5102448965fec01\" successfully" May 17 01:36:43.883837 env[2382]: time="2025-05-17T01:36:43.883589298Z" level=info msg="StopPodSandbox for \"95dbf81c53e15240c7ad43018302a5719885f82435672b3ee5102448965fec01\" returns successfully" May 17 01:36:43.884082 env[2382]: time="2025-05-17T01:36:43.884052377Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-889cbc855-v2bvs,Uid:2f14c134-cd19-42e5-bbc4-b6bd9f685215,Namespace:calico-apiserver,Attempt:1,}" May 17 01:36:43.885672 systemd[1]: run-netns-cni\x2de9aa2ecf\x2df268\x2d692d\x2dead8\x2d13e73a80be2e.mount: Deactivated successfully. 
May 17 01:36:43.964287 systemd-networkd[2041]: cali3f821384a61: Link UP May 17 01:36:43.989648 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready May 17 01:36:43.989730 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali3f821384a61: link becomes ready May 17 01:36:43.989720 systemd-networkd[2041]: cali3f821384a61: Gained carrier May 17 01:36:43.997348 env[2382]: 2025-05-17 01:36:43.904 [INFO][6263] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist May 17 01:36:43.997348 env[2382]: 2025-05-17 01:36:43.916 [INFO][6263] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3510.3.7--n--8226148b53-k8s-calico--apiserver--889cbc855--v2bvs-eth0 calico-apiserver-889cbc855- calico-apiserver 2f14c134-cd19-42e5-bbc4-b6bd9f685215 912 0 2025-05-17 01:36:21 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:889cbc855 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-3510.3.7-n-8226148b53 calico-apiserver-889cbc855-v2bvs eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali3f821384a61 [] [] }} ContainerID="5047bd5c28fef562a41c623ffa68f4cda9c8f2db2a43050c26b999adccf06de9" Namespace="calico-apiserver" Pod="calico-apiserver-889cbc855-v2bvs" WorkloadEndpoint="ci--3510.3.7--n--8226148b53-k8s-calico--apiserver--889cbc855--v2bvs-" May 17 01:36:43.997348 env[2382]: 2025-05-17 01:36:43.916 [INFO][6263] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="5047bd5c28fef562a41c623ffa68f4cda9c8f2db2a43050c26b999adccf06de9" Namespace="calico-apiserver" Pod="calico-apiserver-889cbc855-v2bvs" WorkloadEndpoint="ci--3510.3.7--n--8226148b53-k8s-calico--apiserver--889cbc855--v2bvs-eth0" May 17 01:36:43.997348 env[2382]: 2025-05-17 01:36:43.936 [INFO][6288] ipam/ipam_plugin.go 225: Calico CNI IPAM request 
count IPv4=1 IPv6=0 ContainerID="5047bd5c28fef562a41c623ffa68f4cda9c8f2db2a43050c26b999adccf06de9" HandleID="k8s-pod-network.5047bd5c28fef562a41c623ffa68f4cda9c8f2db2a43050c26b999adccf06de9" Workload="ci--3510.3.7--n--8226148b53-k8s-calico--apiserver--889cbc855--v2bvs-eth0" May 17 01:36:43.997348 env[2382]: 2025-05-17 01:36:43.937 [INFO][6288] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="5047bd5c28fef562a41c623ffa68f4cda9c8f2db2a43050c26b999adccf06de9" HandleID="k8s-pod-network.5047bd5c28fef562a41c623ffa68f4cda9c8f2db2a43050c26b999adccf06de9" Workload="ci--3510.3.7--n--8226148b53-k8s-calico--apiserver--889cbc855--v2bvs-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004cf30), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-3510.3.7-n-8226148b53", "pod":"calico-apiserver-889cbc855-v2bvs", "timestamp":"2025-05-17 01:36:43.936978671 +0000 UTC"}, Hostname:"ci-3510.3.7-n-8226148b53", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 17 01:36:43.997348 env[2382]: 2025-05-17 01:36:43.937 [INFO][6288] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 01:36:43.997348 env[2382]: 2025-05-17 01:36:43.937 [INFO][6288] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 17 01:36:43.997348 env[2382]: 2025-05-17 01:36:43.937 [INFO][6288] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3510.3.7-n-8226148b53' May 17 01:36:43.997348 env[2382]: 2025-05-17 01:36:43.945 [INFO][6288] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.5047bd5c28fef562a41c623ffa68f4cda9c8f2db2a43050c26b999adccf06de9" host="ci-3510.3.7-n-8226148b53" May 17 01:36:43.997348 env[2382]: 2025-05-17 01:36:43.948 [INFO][6288] ipam/ipam.go 394: Looking up existing affinities for host host="ci-3510.3.7-n-8226148b53" May 17 01:36:43.997348 env[2382]: 2025-05-17 01:36:43.951 [INFO][6288] ipam/ipam.go 511: Trying affinity for 192.168.63.0/26 host="ci-3510.3.7-n-8226148b53" May 17 01:36:43.997348 env[2382]: 2025-05-17 01:36:43.952 [INFO][6288] ipam/ipam.go 158: Attempting to load block cidr=192.168.63.0/26 host="ci-3510.3.7-n-8226148b53" May 17 01:36:43.997348 env[2382]: 2025-05-17 01:36:43.954 [INFO][6288] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.63.0/26 host="ci-3510.3.7-n-8226148b53" May 17 01:36:43.997348 env[2382]: 2025-05-17 01:36:43.954 [INFO][6288] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.63.0/26 handle="k8s-pod-network.5047bd5c28fef562a41c623ffa68f4cda9c8f2db2a43050c26b999adccf06de9" host="ci-3510.3.7-n-8226148b53" May 17 01:36:43.997348 env[2382]: 2025-05-17 01:36:43.955 [INFO][6288] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.5047bd5c28fef562a41c623ffa68f4cda9c8f2db2a43050c26b999adccf06de9 May 17 01:36:43.997348 env[2382]: 2025-05-17 01:36:43.957 [INFO][6288] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.63.0/26 handle="k8s-pod-network.5047bd5c28fef562a41c623ffa68f4cda9c8f2db2a43050c26b999adccf06de9" host="ci-3510.3.7-n-8226148b53" May 17 01:36:43.997348 env[2382]: 2025-05-17 01:36:43.961 [INFO][6288] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.63.2/26] block=192.168.63.0/26 
handle="k8s-pod-network.5047bd5c28fef562a41c623ffa68f4cda9c8f2db2a43050c26b999adccf06de9" host="ci-3510.3.7-n-8226148b53" May 17 01:36:43.997348 env[2382]: 2025-05-17 01:36:43.961 [INFO][6288] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.63.2/26] handle="k8s-pod-network.5047bd5c28fef562a41c623ffa68f4cda9c8f2db2a43050c26b999adccf06de9" host="ci-3510.3.7-n-8226148b53" May 17 01:36:43.997348 env[2382]: 2025-05-17 01:36:43.961 [INFO][6288] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 01:36:43.997348 env[2382]: 2025-05-17 01:36:43.961 [INFO][6288] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.63.2/26] IPv6=[] ContainerID="5047bd5c28fef562a41c623ffa68f4cda9c8f2db2a43050c26b999adccf06de9" HandleID="k8s-pod-network.5047bd5c28fef562a41c623ffa68f4cda9c8f2db2a43050c26b999adccf06de9" Workload="ci--3510.3.7--n--8226148b53-k8s-calico--apiserver--889cbc855--v2bvs-eth0" May 17 01:36:43.997871 env[2382]: 2025-05-17 01:36:43.963 [INFO][6263] cni-plugin/k8s.go 418: Populated endpoint ContainerID="5047bd5c28fef562a41c623ffa68f4cda9c8f2db2a43050c26b999adccf06de9" Namespace="calico-apiserver" Pod="calico-apiserver-889cbc855-v2bvs" WorkloadEndpoint="ci--3510.3.7--n--8226148b53-k8s-calico--apiserver--889cbc855--v2bvs-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.7--n--8226148b53-k8s-calico--apiserver--889cbc855--v2bvs-eth0", GenerateName:"calico-apiserver-889cbc855-", Namespace:"calico-apiserver", SelfLink:"", UID:"2f14c134-cd19-42e5-bbc4-b6bd9f685215", ResourceVersion:"912", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 1, 36, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"889cbc855", 
"projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.7-n-8226148b53", ContainerID:"", Pod:"calico-apiserver-889cbc855-v2bvs", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.63.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali3f821384a61", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 01:36:43.997871 env[2382]: 2025-05-17 01:36:43.963 [INFO][6263] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.63.2/32] ContainerID="5047bd5c28fef562a41c623ffa68f4cda9c8f2db2a43050c26b999adccf06de9" Namespace="calico-apiserver" Pod="calico-apiserver-889cbc855-v2bvs" WorkloadEndpoint="ci--3510.3.7--n--8226148b53-k8s-calico--apiserver--889cbc855--v2bvs-eth0" May 17 01:36:43.997871 env[2382]: 2025-05-17 01:36:43.963 [INFO][6263] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3f821384a61 ContainerID="5047bd5c28fef562a41c623ffa68f4cda9c8f2db2a43050c26b999adccf06de9" Namespace="calico-apiserver" Pod="calico-apiserver-889cbc855-v2bvs" WorkloadEndpoint="ci--3510.3.7--n--8226148b53-k8s-calico--apiserver--889cbc855--v2bvs-eth0" May 17 01:36:43.997871 env[2382]: 2025-05-17 01:36:43.989 [INFO][6263] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="5047bd5c28fef562a41c623ffa68f4cda9c8f2db2a43050c26b999adccf06de9" Namespace="calico-apiserver" Pod="calico-apiserver-889cbc855-v2bvs" WorkloadEndpoint="ci--3510.3.7--n--8226148b53-k8s-calico--apiserver--889cbc855--v2bvs-eth0" May 17 
01:36:43.997871 env[2382]: 2025-05-17 01:36:43.990 [INFO][6263] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="5047bd5c28fef562a41c623ffa68f4cda9c8f2db2a43050c26b999adccf06de9" Namespace="calico-apiserver" Pod="calico-apiserver-889cbc855-v2bvs" WorkloadEndpoint="ci--3510.3.7--n--8226148b53-k8s-calico--apiserver--889cbc855--v2bvs-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.7--n--8226148b53-k8s-calico--apiserver--889cbc855--v2bvs-eth0", GenerateName:"calico-apiserver-889cbc855-", Namespace:"calico-apiserver", SelfLink:"", UID:"2f14c134-cd19-42e5-bbc4-b6bd9f685215", ResourceVersion:"912", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 1, 36, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"889cbc855", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.7-n-8226148b53", ContainerID:"5047bd5c28fef562a41c623ffa68f4cda9c8f2db2a43050c26b999adccf06de9", Pod:"calico-apiserver-889cbc855-v2bvs", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.63.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali3f821384a61", MAC:"b2:4e:a3:d7:20:9c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 
01:36:43.997871 env[2382]: 2025-05-17 01:36:43.995 [INFO][6263] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="5047bd5c28fef562a41c623ffa68f4cda9c8f2db2a43050c26b999adccf06de9" Namespace="calico-apiserver" Pod="calico-apiserver-889cbc855-v2bvs" WorkloadEndpoint="ci--3510.3.7--n--8226148b53-k8s-calico--apiserver--889cbc855--v2bvs-eth0" May 17 01:36:44.005149 env[2382]: time="2025-05-17T01:36:44.005102976Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 01:36:44.005149 env[2382]: time="2025-05-17T01:36:44.005138936Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 01:36:44.005219 env[2382]: time="2025-05-17T01:36:44.005149016Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 01:36:44.005323 env[2382]: time="2025-05-17T01:36:44.005297016Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/5047bd5c28fef562a41c623ffa68f4cda9c8f2db2a43050c26b999adccf06de9 pid=6322 runtime=io.containerd.runc.v2 May 17 01:36:44.042619 env[2382]: time="2025-05-17T01:36:44.042581626Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-889cbc855-v2bvs,Uid:2f14c134-cd19-42e5-bbc4-b6bd9f685215,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"5047bd5c28fef562a41c623ffa68f4cda9c8f2db2a43050c26b999adccf06de9\"" May 17 01:36:44.043690 env[2382]: time="2025-05-17T01:36:44.043672464Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.0\"" May 17 01:36:44.816128 env[2382]: time="2025-05-17T01:36:44.816093061Z" level=info msg="StopPodSandbox for \"9436b8bd3da051194c642769449b03cc2ee0d34218c534e468283cf060b40c92\"" May 17 01:36:44.816315 env[2382]: time="2025-05-17T01:36:44.816101300Z" level=info 
msg="StopPodSandbox for \"f9e66e5dd1f9b3baecffe1bc1c987d707df016acab95ce74eedf01262fe87aa5\"" May 17 01:36:44.880042 env[2382]: 2025-05-17 01:36:44.851 [INFO][6430] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="f9e66e5dd1f9b3baecffe1bc1c987d707df016acab95ce74eedf01262fe87aa5" May 17 01:36:44.880042 env[2382]: 2025-05-17 01:36:44.851 [INFO][6430] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="f9e66e5dd1f9b3baecffe1bc1c987d707df016acab95ce74eedf01262fe87aa5" iface="eth0" netns="/var/run/netns/cni-66fa337c-4a55-ca34-8043-cb4df0d736c3" May 17 01:36:44.880042 env[2382]: 2025-05-17 01:36:44.852 [INFO][6430] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="f9e66e5dd1f9b3baecffe1bc1c987d707df016acab95ce74eedf01262fe87aa5" iface="eth0" netns="/var/run/netns/cni-66fa337c-4a55-ca34-8043-cb4df0d736c3" May 17 01:36:44.880042 env[2382]: 2025-05-17 01:36:44.852 [INFO][6430] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="f9e66e5dd1f9b3baecffe1bc1c987d707df016acab95ce74eedf01262fe87aa5" iface="eth0" netns="/var/run/netns/cni-66fa337c-4a55-ca34-8043-cb4df0d736c3" May 17 01:36:44.880042 env[2382]: 2025-05-17 01:36:44.852 [INFO][6430] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="f9e66e5dd1f9b3baecffe1bc1c987d707df016acab95ce74eedf01262fe87aa5" May 17 01:36:44.880042 env[2382]: 2025-05-17 01:36:44.852 [INFO][6430] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f9e66e5dd1f9b3baecffe1bc1c987d707df016acab95ce74eedf01262fe87aa5" May 17 01:36:44.880042 env[2382]: 2025-05-17 01:36:44.869 [INFO][6469] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f9e66e5dd1f9b3baecffe1bc1c987d707df016acab95ce74eedf01262fe87aa5" HandleID="k8s-pod-network.f9e66e5dd1f9b3baecffe1bc1c987d707df016acab95ce74eedf01262fe87aa5" Workload="ci--3510.3.7--n--8226148b53-k8s-calico--apiserver--889cbc855--tdbbg-eth0" May 17 01:36:44.880042 env[2382]: 2025-05-17 01:36:44.869 [INFO][6469] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 01:36:44.880042 env[2382]: 2025-05-17 01:36:44.869 [INFO][6469] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 01:36:44.880042 env[2382]: 2025-05-17 01:36:44.876 [WARNING][6469] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f9e66e5dd1f9b3baecffe1bc1c987d707df016acab95ce74eedf01262fe87aa5" HandleID="k8s-pod-network.f9e66e5dd1f9b3baecffe1bc1c987d707df016acab95ce74eedf01262fe87aa5" Workload="ci--3510.3.7--n--8226148b53-k8s-calico--apiserver--889cbc855--tdbbg-eth0" May 17 01:36:44.880042 env[2382]: 2025-05-17 01:36:44.876 [INFO][6469] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f9e66e5dd1f9b3baecffe1bc1c987d707df016acab95ce74eedf01262fe87aa5" HandleID="k8s-pod-network.f9e66e5dd1f9b3baecffe1bc1c987d707df016acab95ce74eedf01262fe87aa5" Workload="ci--3510.3.7--n--8226148b53-k8s-calico--apiserver--889cbc855--tdbbg-eth0" May 17 01:36:44.880042 env[2382]: 2025-05-17 01:36:44.877 [INFO][6469] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 01:36:44.880042 env[2382]: 2025-05-17 01:36:44.878 [INFO][6430] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="f9e66e5dd1f9b3baecffe1bc1c987d707df016acab95ce74eedf01262fe87aa5" May 17 01:36:44.880740 env[2382]: time="2025-05-17T01:36:44.880705660Z" level=info msg="TearDown network for sandbox \"f9e66e5dd1f9b3baecffe1bc1c987d707df016acab95ce74eedf01262fe87aa5\" successfully" May 17 01:36:44.880768 env[2382]: time="2025-05-17T01:36:44.880739980Z" level=info msg="StopPodSandbox for \"f9e66e5dd1f9b3baecffe1bc1c987d707df016acab95ce74eedf01262fe87aa5\" returns successfully" May 17 01:36:44.881361 env[2382]: time="2025-05-17T01:36:44.881342179Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-889cbc855-tdbbg,Uid:66187612-efc7-4292-8594-c39d74617f30,Namespace:calico-apiserver,Attempt:1,}" May 17 01:36:44.886445 systemd[1]: run-netns-cni\x2d66fa337c\x2d4a55\x2dca34\x2d8043\x2dcb4df0d736c3.mount: Deactivated successfully. 
May 17 01:36:44.891582 env[2382]: 2025-05-17 01:36:44.851 [INFO][6429] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="9436b8bd3da051194c642769449b03cc2ee0d34218c534e468283cf060b40c92" May 17 01:36:44.891582 env[2382]: 2025-05-17 01:36:44.851 [INFO][6429] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="9436b8bd3da051194c642769449b03cc2ee0d34218c534e468283cf060b40c92" iface="eth0" netns="/var/run/netns/cni-1cb0f47c-a8b3-22a2-6ce7-736b358c2742" May 17 01:36:44.891582 env[2382]: 2025-05-17 01:36:44.851 [INFO][6429] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="9436b8bd3da051194c642769449b03cc2ee0d34218c534e468283cf060b40c92" iface="eth0" netns="/var/run/netns/cni-1cb0f47c-a8b3-22a2-6ce7-736b358c2742" May 17 01:36:44.891582 env[2382]: 2025-05-17 01:36:44.851 [INFO][6429] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="9436b8bd3da051194c642769449b03cc2ee0d34218c534e468283cf060b40c92" iface="eth0" netns="/var/run/netns/cni-1cb0f47c-a8b3-22a2-6ce7-736b358c2742" May 17 01:36:44.891582 env[2382]: 2025-05-17 01:36:44.851 [INFO][6429] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="9436b8bd3da051194c642769449b03cc2ee0d34218c534e468283cf060b40c92" May 17 01:36:44.891582 env[2382]: 2025-05-17 01:36:44.851 [INFO][6429] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9436b8bd3da051194c642769449b03cc2ee0d34218c534e468283cf060b40c92" May 17 01:36:44.891582 env[2382]: 2025-05-17 01:36:44.869 [INFO][6467] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9436b8bd3da051194c642769449b03cc2ee0d34218c534e468283cf060b40c92" HandleID="k8s-pod-network.9436b8bd3da051194c642769449b03cc2ee0d34218c534e468283cf060b40c92" Workload="ci--3510.3.7--n--8226148b53-k8s-goldmane--8f77d7b6c--czxt2-eth0" May 17 01:36:44.891582 env[2382]: 2025-05-17 01:36:44.869 [INFO][6467] ipam/ipam_plugin.go 353: About to acquire host-wide 
IPAM lock. May 17 01:36:44.891582 env[2382]: 2025-05-17 01:36:44.877 [INFO][6467] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 01:36:44.891582 env[2382]: 2025-05-17 01:36:44.886 [WARNING][6467] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="9436b8bd3da051194c642769449b03cc2ee0d34218c534e468283cf060b40c92" HandleID="k8s-pod-network.9436b8bd3da051194c642769449b03cc2ee0d34218c534e468283cf060b40c92" Workload="ci--3510.3.7--n--8226148b53-k8s-goldmane--8f77d7b6c--czxt2-eth0" May 17 01:36:44.891582 env[2382]: 2025-05-17 01:36:44.886 [INFO][6467] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9436b8bd3da051194c642769449b03cc2ee0d34218c534e468283cf060b40c92" HandleID="k8s-pod-network.9436b8bd3da051194c642769449b03cc2ee0d34218c534e468283cf060b40c92" Workload="ci--3510.3.7--n--8226148b53-k8s-goldmane--8f77d7b6c--czxt2-eth0" May 17 01:36:44.891582 env[2382]: 2025-05-17 01:36:44.887 [INFO][6467] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 01:36:44.891582 env[2382]: 2025-05-17 01:36:44.888 [INFO][6429] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="9436b8bd3da051194c642769449b03cc2ee0d34218c534e468283cf060b40c92" May 17 01:36:44.891921 env[2382]: time="2025-05-17T01:36:44.891737359Z" level=info msg="TearDown network for sandbox \"9436b8bd3da051194c642769449b03cc2ee0d34218c534e468283cf060b40c92\" successfully" May 17 01:36:44.891921 env[2382]: time="2025-05-17T01:36:44.891768399Z" level=info msg="StopPodSandbox for \"9436b8bd3da051194c642769449b03cc2ee0d34218c534e468283cf060b40c92\" returns successfully" May 17 01:36:44.892311 env[2382]: time="2025-05-17T01:36:44.892285238Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-8f77d7b6c-czxt2,Uid:c4371095-cd92-4ea8-9564-c86ac0de3064,Namespace:calico-system,Attempt:1,}" May 17 01:36:44.893655 systemd[1]: run-netns-cni\x2d1cb0f47c\x2da8b3\x2d22a2\x2d6ce7\x2d736b358c2742.mount: Deactivated successfully. May 17 01:36:44.963990 systemd-networkd[2041]: calib6fbd1a473b: Link UP May 17 01:36:44.989034 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready May 17 01:36:44.989108 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): calib6fbd1a473b: link becomes ready May 17 01:36:44.989332 systemd-networkd[2041]: calib6fbd1a473b: Gained carrier May 17 01:36:44.998486 env[2382]: 2025-05-17 01:36:44.903 [INFO][6509] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist May 17 01:36:44.998486 env[2382]: 2025-05-17 01:36:44.914 [INFO][6509] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3510.3.7--n--8226148b53-k8s-calico--apiserver--889cbc855--tdbbg-eth0 calico-apiserver-889cbc855- calico-apiserver 66187612-efc7-4292-8594-c39d74617f30 922 0 2025-05-17 01:36:21 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:889cbc855 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-3510.3.7-n-8226148b53 
calico-apiserver-889cbc855-tdbbg eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calib6fbd1a473b [] [] }} ContainerID="25401365bdd327f1bc01dfb8001f01b31a0e06b7ae921882691ca290fce49565" Namespace="calico-apiserver" Pod="calico-apiserver-889cbc855-tdbbg" WorkloadEndpoint="ci--3510.3.7--n--8226148b53-k8s-calico--apiserver--889cbc855--tdbbg-" May 17 01:36:44.998486 env[2382]: 2025-05-17 01:36:44.914 [INFO][6509] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="25401365bdd327f1bc01dfb8001f01b31a0e06b7ae921882691ca290fce49565" Namespace="calico-apiserver" Pod="calico-apiserver-889cbc855-tdbbg" WorkloadEndpoint="ci--3510.3.7--n--8226148b53-k8s-calico--apiserver--889cbc855--tdbbg-eth0" May 17 01:36:44.998486 env[2382]: 2025-05-17 01:36:44.937 [INFO][6557] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="25401365bdd327f1bc01dfb8001f01b31a0e06b7ae921882691ca290fce49565" HandleID="k8s-pod-network.25401365bdd327f1bc01dfb8001f01b31a0e06b7ae921882691ca290fce49565" Workload="ci--3510.3.7--n--8226148b53-k8s-calico--apiserver--889cbc855--tdbbg-eth0" May 17 01:36:44.998486 env[2382]: 2025-05-17 01:36:44.937 [INFO][6557] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="25401365bdd327f1bc01dfb8001f01b31a0e06b7ae921882691ca290fce49565" HandleID="k8s-pod-network.25401365bdd327f1bc01dfb8001f01b31a0e06b7ae921882691ca290fce49565" Workload="ci--3510.3.7--n--8226148b53-k8s-calico--apiserver--889cbc855--tdbbg-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000367e50), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-3510.3.7-n-8226148b53", "pod":"calico-apiserver-889cbc855-tdbbg", "timestamp":"2025-05-17 01:36:44.937146314 +0000 UTC"}, Hostname:"ci-3510.3.7-n-8226148b53", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), 
IntendedUse:"Workload"} May 17 01:36:44.998486 env[2382]: 2025-05-17 01:36:44.937 [INFO][6557] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 01:36:44.998486 env[2382]: 2025-05-17 01:36:44.937 [INFO][6557] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 01:36:44.998486 env[2382]: 2025-05-17 01:36:44.937 [INFO][6557] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3510.3.7-n-8226148b53' May 17 01:36:44.998486 env[2382]: 2025-05-17 01:36:44.945 [INFO][6557] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.25401365bdd327f1bc01dfb8001f01b31a0e06b7ae921882691ca290fce49565" host="ci-3510.3.7-n-8226148b53" May 17 01:36:44.998486 env[2382]: 2025-05-17 01:36:44.948 [INFO][6557] ipam/ipam.go 394: Looking up existing affinities for host host="ci-3510.3.7-n-8226148b53" May 17 01:36:44.998486 env[2382]: 2025-05-17 01:36:44.951 [INFO][6557] ipam/ipam.go 511: Trying affinity for 192.168.63.0/26 host="ci-3510.3.7-n-8226148b53" May 17 01:36:44.998486 env[2382]: 2025-05-17 01:36:44.952 [INFO][6557] ipam/ipam.go 158: Attempting to load block cidr=192.168.63.0/26 host="ci-3510.3.7-n-8226148b53" May 17 01:36:44.998486 env[2382]: 2025-05-17 01:36:44.954 [INFO][6557] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.63.0/26 host="ci-3510.3.7-n-8226148b53" May 17 01:36:44.998486 env[2382]: 2025-05-17 01:36:44.954 [INFO][6557] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.63.0/26 handle="k8s-pod-network.25401365bdd327f1bc01dfb8001f01b31a0e06b7ae921882691ca290fce49565" host="ci-3510.3.7-n-8226148b53" May 17 01:36:44.998486 env[2382]: 2025-05-17 01:36:44.955 [INFO][6557] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.25401365bdd327f1bc01dfb8001f01b31a0e06b7ae921882691ca290fce49565 May 17 01:36:44.998486 env[2382]: 2025-05-17 01:36:44.957 [INFO][6557] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.63.0/26 
handle="k8s-pod-network.25401365bdd327f1bc01dfb8001f01b31a0e06b7ae921882691ca290fce49565" host="ci-3510.3.7-n-8226148b53" May 17 01:36:44.998486 env[2382]: 2025-05-17 01:36:44.961 [INFO][6557] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.63.3/26] block=192.168.63.0/26 handle="k8s-pod-network.25401365bdd327f1bc01dfb8001f01b31a0e06b7ae921882691ca290fce49565" host="ci-3510.3.7-n-8226148b53" May 17 01:36:44.998486 env[2382]: 2025-05-17 01:36:44.961 [INFO][6557] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.63.3/26] handle="k8s-pod-network.25401365bdd327f1bc01dfb8001f01b31a0e06b7ae921882691ca290fce49565" host="ci-3510.3.7-n-8226148b53" May 17 01:36:44.998486 env[2382]: 2025-05-17 01:36:44.961 [INFO][6557] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 01:36:44.998486 env[2382]: 2025-05-17 01:36:44.961 [INFO][6557] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.63.3/26] IPv6=[] ContainerID="25401365bdd327f1bc01dfb8001f01b31a0e06b7ae921882691ca290fce49565" HandleID="k8s-pod-network.25401365bdd327f1bc01dfb8001f01b31a0e06b7ae921882691ca290fce49565" Workload="ci--3510.3.7--n--8226148b53-k8s-calico--apiserver--889cbc855--tdbbg-eth0" May 17 01:36:44.998950 env[2382]: 2025-05-17 01:36:44.962 [INFO][6509] cni-plugin/k8s.go 418: Populated endpoint ContainerID="25401365bdd327f1bc01dfb8001f01b31a0e06b7ae921882691ca290fce49565" Namespace="calico-apiserver" Pod="calico-apiserver-889cbc855-tdbbg" WorkloadEndpoint="ci--3510.3.7--n--8226148b53-k8s-calico--apiserver--889cbc855--tdbbg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.7--n--8226148b53-k8s-calico--apiserver--889cbc855--tdbbg-eth0", GenerateName:"calico-apiserver-889cbc855-", Namespace:"calico-apiserver", SelfLink:"", UID:"66187612-efc7-4292-8594-c39d74617f30", ResourceVersion:"922", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 1, 36, 
21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"889cbc855", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.7-n-8226148b53", ContainerID:"", Pod:"calico-apiserver-889cbc855-tdbbg", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.63.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calib6fbd1a473b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 01:36:44.998950 env[2382]: 2025-05-17 01:36:44.962 [INFO][6509] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.63.3/32] ContainerID="25401365bdd327f1bc01dfb8001f01b31a0e06b7ae921882691ca290fce49565" Namespace="calico-apiserver" Pod="calico-apiserver-889cbc855-tdbbg" WorkloadEndpoint="ci--3510.3.7--n--8226148b53-k8s-calico--apiserver--889cbc855--tdbbg-eth0" May 17 01:36:44.998950 env[2382]: 2025-05-17 01:36:44.962 [INFO][6509] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib6fbd1a473b ContainerID="25401365bdd327f1bc01dfb8001f01b31a0e06b7ae921882691ca290fce49565" Namespace="calico-apiserver" Pod="calico-apiserver-889cbc855-tdbbg" WorkloadEndpoint="ci--3510.3.7--n--8226148b53-k8s-calico--apiserver--889cbc855--tdbbg-eth0" May 17 01:36:44.998950 env[2382]: 2025-05-17 01:36:44.989 [INFO][6509] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="25401365bdd327f1bc01dfb8001f01b31a0e06b7ae921882691ca290fce49565" Namespace="calico-apiserver" Pod="calico-apiserver-889cbc855-tdbbg" WorkloadEndpoint="ci--3510.3.7--n--8226148b53-k8s-calico--apiserver--889cbc855--tdbbg-eth0" May 17 01:36:44.998950 env[2382]: 2025-05-17 01:36:44.989 [INFO][6509] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="25401365bdd327f1bc01dfb8001f01b31a0e06b7ae921882691ca290fce49565" Namespace="calico-apiserver" Pod="calico-apiserver-889cbc855-tdbbg" WorkloadEndpoint="ci--3510.3.7--n--8226148b53-k8s-calico--apiserver--889cbc855--tdbbg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.7--n--8226148b53-k8s-calico--apiserver--889cbc855--tdbbg-eth0", GenerateName:"calico-apiserver-889cbc855-", Namespace:"calico-apiserver", SelfLink:"", UID:"66187612-efc7-4292-8594-c39d74617f30", ResourceVersion:"922", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 1, 36, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"889cbc855", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.7-n-8226148b53", ContainerID:"25401365bdd327f1bc01dfb8001f01b31a0e06b7ae921882691ca290fce49565", Pod:"calico-apiserver-889cbc855-tdbbg", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.63.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calib6fbd1a473b", MAC:"ce:1d:a9:9a:f0:e1", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 01:36:44.998950 env[2382]: 2025-05-17 01:36:44.997 [INFO][6509] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="25401365bdd327f1bc01dfb8001f01b31a0e06b7ae921882691ca290fce49565" Namespace="calico-apiserver" Pod="calico-apiserver-889cbc855-tdbbg" WorkloadEndpoint="ci--3510.3.7--n--8226148b53-k8s-calico--apiserver--889cbc855--tdbbg-eth0" May 17 01:36:45.006088 env[2382]: time="2025-05-17T01:36:45.006040426Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 01:36:45.006088 env[2382]: time="2025-05-17T01:36:45.006076266Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 01:36:45.006088 env[2382]: time="2025-05-17T01:36:45.006086706Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 01:36:45.006231 env[2382]: time="2025-05-17T01:36:45.006203506Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/25401365bdd327f1bc01dfb8001f01b31a0e06b7ae921882691ca290fce49565 pid=6615 runtime=io.containerd.runc.v2 May 17 01:36:45.044326 env[2382]: time="2025-05-17T01:36:45.044279679Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-889cbc855-tdbbg,Uid:66187612-efc7-4292-8594-c39d74617f30,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"25401365bdd327f1bc01dfb8001f01b31a0e06b7ae921882691ca290fce49565\"" May 17 01:36:45.067174 systemd-networkd[2041]: cali2092006e175: Link UP May 17 01:36:45.080787 systemd-networkd[2041]: cali2092006e175: Gained carrier May 17 01:36:45.080866 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali2092006e175: link becomes ready May 17 01:36:45.088734 env[2382]: 2025-05-17 01:36:44.912 [INFO][6529] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist May 17 01:36:45.088734 env[2382]: 2025-05-17 01:36:44.922 [INFO][6529] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3510.3.7--n--8226148b53-k8s-goldmane--8f77d7b6c--czxt2-eth0 goldmane-8f77d7b6c- calico-system c4371095-cd92-4ea8-9564-c86ac0de3064 923 0 2025-05-17 01:36:25 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:8f77d7b6c projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-3510.3.7-n-8226148b53 goldmane-8f77d7b6c-czxt2 eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali2092006e175 [] [] }} ContainerID="25a2d084b605452e7f2b021cece860eebcf56fa0a307bf93e5e95eafea9a74c7" Namespace="calico-system" Pod="goldmane-8f77d7b6c-czxt2" WorkloadEndpoint="ci--3510.3.7--n--8226148b53-k8s-goldmane--8f77d7b6c--czxt2-" May 17 
01:36:45.088734 env[2382]: 2025-05-17 01:36:44.922 [INFO][6529] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="25a2d084b605452e7f2b021cece860eebcf56fa0a307bf93e5e95eafea9a74c7" Namespace="calico-system" Pod="goldmane-8f77d7b6c-czxt2" WorkloadEndpoint="ci--3510.3.7--n--8226148b53-k8s-goldmane--8f77d7b6c--czxt2-eth0" May 17 01:36:45.088734 env[2382]: 2025-05-17 01:36:44.942 [INFO][6563] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="25a2d084b605452e7f2b021cece860eebcf56fa0a307bf93e5e95eafea9a74c7" HandleID="k8s-pod-network.25a2d084b605452e7f2b021cece860eebcf56fa0a307bf93e5e95eafea9a74c7" Workload="ci--3510.3.7--n--8226148b53-k8s-goldmane--8f77d7b6c--czxt2-eth0" May 17 01:36:45.088734 env[2382]: 2025-05-17 01:36:44.942 [INFO][6563] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="25a2d084b605452e7f2b021cece860eebcf56fa0a307bf93e5e95eafea9a74c7" HandleID="k8s-pod-network.25a2d084b605452e7f2b021cece860eebcf56fa0a307bf93e5e95eafea9a74c7" Workload="ci--3510.3.7--n--8226148b53-k8s-goldmane--8f77d7b6c--czxt2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003df6c0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-3510.3.7-n-8226148b53", "pod":"goldmane-8f77d7b6c-czxt2", "timestamp":"2025-05-17 01:36:44.942088585 +0000 UTC"}, Hostname:"ci-3510.3.7-n-8226148b53", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 17 01:36:45.088734 env[2382]: 2025-05-17 01:36:44.942 [INFO][6563] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 01:36:45.088734 env[2382]: 2025-05-17 01:36:44.961 [INFO][6563] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 17 01:36:45.088734 env[2382]: 2025-05-17 01:36:44.961 [INFO][6563] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3510.3.7-n-8226148b53' May 17 01:36:45.088734 env[2382]: 2025-05-17 01:36:45.046 [INFO][6563] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.25a2d084b605452e7f2b021cece860eebcf56fa0a307bf93e5e95eafea9a74c7" host="ci-3510.3.7-n-8226148b53" May 17 01:36:45.088734 env[2382]: 2025-05-17 01:36:45.049 [INFO][6563] ipam/ipam.go 394: Looking up existing affinities for host host="ci-3510.3.7-n-8226148b53" May 17 01:36:45.088734 env[2382]: 2025-05-17 01:36:45.052 [INFO][6563] ipam/ipam.go 511: Trying affinity for 192.168.63.0/26 host="ci-3510.3.7-n-8226148b53" May 17 01:36:45.088734 env[2382]: 2025-05-17 01:36:45.054 [INFO][6563] ipam/ipam.go 158: Attempting to load block cidr=192.168.63.0/26 host="ci-3510.3.7-n-8226148b53" May 17 01:36:45.088734 env[2382]: 2025-05-17 01:36:45.055 [INFO][6563] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.63.0/26 host="ci-3510.3.7-n-8226148b53" May 17 01:36:45.088734 env[2382]: 2025-05-17 01:36:45.055 [INFO][6563] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.63.0/26 handle="k8s-pod-network.25a2d084b605452e7f2b021cece860eebcf56fa0a307bf93e5e95eafea9a74c7" host="ci-3510.3.7-n-8226148b53" May 17 01:36:45.088734 env[2382]: 2025-05-17 01:36:45.056 [INFO][6563] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.25a2d084b605452e7f2b021cece860eebcf56fa0a307bf93e5e95eafea9a74c7 May 17 01:36:45.088734 env[2382]: 2025-05-17 01:36:45.059 [INFO][6563] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.63.0/26 handle="k8s-pod-network.25a2d084b605452e7f2b021cece860eebcf56fa0a307bf93e5e95eafea9a74c7" host="ci-3510.3.7-n-8226148b53" May 17 01:36:45.088734 env[2382]: 2025-05-17 01:36:45.063 [INFO][6563] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.63.4/26] block=192.168.63.0/26 
handle="k8s-pod-network.25a2d084b605452e7f2b021cece860eebcf56fa0a307bf93e5e95eafea9a74c7" host="ci-3510.3.7-n-8226148b53" May 17 01:36:45.088734 env[2382]: 2025-05-17 01:36:45.063 [INFO][6563] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.63.4/26] handle="k8s-pod-network.25a2d084b605452e7f2b021cece860eebcf56fa0a307bf93e5e95eafea9a74c7" host="ci-3510.3.7-n-8226148b53" May 17 01:36:45.088734 env[2382]: 2025-05-17 01:36:45.063 [INFO][6563] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 01:36:45.088734 env[2382]: 2025-05-17 01:36:45.063 [INFO][6563] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.63.4/26] IPv6=[] ContainerID="25a2d084b605452e7f2b021cece860eebcf56fa0a307bf93e5e95eafea9a74c7" HandleID="k8s-pod-network.25a2d084b605452e7f2b021cece860eebcf56fa0a307bf93e5e95eafea9a74c7" Workload="ci--3510.3.7--n--8226148b53-k8s-goldmane--8f77d7b6c--czxt2-eth0" May 17 01:36:45.089192 env[2382]: 2025-05-17 01:36:45.065 [INFO][6529] cni-plugin/k8s.go 418: Populated endpoint ContainerID="25a2d084b605452e7f2b021cece860eebcf56fa0a307bf93e5e95eafea9a74c7" Namespace="calico-system" Pod="goldmane-8f77d7b6c-czxt2" WorkloadEndpoint="ci--3510.3.7--n--8226148b53-k8s-goldmane--8f77d7b6c--czxt2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.7--n--8226148b53-k8s-goldmane--8f77d7b6c--czxt2-eth0", GenerateName:"goldmane-8f77d7b6c-", Namespace:"calico-system", SelfLink:"", UID:"c4371095-cd92-4ea8-9564-c86ac0de3064", ResourceVersion:"923", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 1, 36, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"8f77d7b6c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.7-n-8226148b53", ContainerID:"", Pod:"goldmane-8f77d7b6c-czxt2", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.63.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali2092006e175", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 01:36:45.089192 env[2382]: 2025-05-17 01:36:45.065 [INFO][6529] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.63.4/32] ContainerID="25a2d084b605452e7f2b021cece860eebcf56fa0a307bf93e5e95eafea9a74c7" Namespace="calico-system" Pod="goldmane-8f77d7b6c-czxt2" WorkloadEndpoint="ci--3510.3.7--n--8226148b53-k8s-goldmane--8f77d7b6c--czxt2-eth0" May 17 01:36:45.089192 env[2382]: 2025-05-17 01:36:45.066 [INFO][6529] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2092006e175 ContainerID="25a2d084b605452e7f2b021cece860eebcf56fa0a307bf93e5e95eafea9a74c7" Namespace="calico-system" Pod="goldmane-8f77d7b6c-czxt2" WorkloadEndpoint="ci--3510.3.7--n--8226148b53-k8s-goldmane--8f77d7b6c--czxt2-eth0" May 17 01:36:45.089192 env[2382]: 2025-05-17 01:36:45.080 [INFO][6529] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="25a2d084b605452e7f2b021cece860eebcf56fa0a307bf93e5e95eafea9a74c7" Namespace="calico-system" Pod="goldmane-8f77d7b6c-czxt2" WorkloadEndpoint="ci--3510.3.7--n--8226148b53-k8s-goldmane--8f77d7b6c--czxt2-eth0" May 17 01:36:45.089192 env[2382]: 2025-05-17 01:36:45.080 [INFO][6529] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="25a2d084b605452e7f2b021cece860eebcf56fa0a307bf93e5e95eafea9a74c7" Namespace="calico-system" Pod="goldmane-8f77d7b6c-czxt2" WorkloadEndpoint="ci--3510.3.7--n--8226148b53-k8s-goldmane--8f77d7b6c--czxt2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.7--n--8226148b53-k8s-goldmane--8f77d7b6c--czxt2-eth0", GenerateName:"goldmane-8f77d7b6c-", Namespace:"calico-system", SelfLink:"", UID:"c4371095-cd92-4ea8-9564-c86ac0de3064", ResourceVersion:"923", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 1, 36, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"8f77d7b6c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.7-n-8226148b53", ContainerID:"25a2d084b605452e7f2b021cece860eebcf56fa0a307bf93e5e95eafea9a74c7", Pod:"goldmane-8f77d7b6c-czxt2", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.63.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali2092006e175", MAC:"2e:2b:93:df:d9:a6", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 01:36:45.089192 env[2382]: 2025-05-17 01:36:45.087 [INFO][6529] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="25a2d084b605452e7f2b021cece860eebcf56fa0a307bf93e5e95eafea9a74c7" Namespace="calico-system" Pod="goldmane-8f77d7b6c-czxt2" 
WorkloadEndpoint="ci--3510.3.7--n--8226148b53-k8s-goldmane--8f77d7b6c--czxt2-eth0" May 17 01:36:45.096147 env[2382]: time="2025-05-17T01:36:45.096103268Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 01:36:45.096147 env[2382]: time="2025-05-17T01:36:45.096140988Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 01:36:45.096195 env[2382]: time="2025-05-17T01:36:45.096150868Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 01:36:45.097606 env[2382]: time="2025-05-17T01:36:45.096336068Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/25a2d084b605452e7f2b021cece860eebcf56fa0a307bf93e5e95eafea9a74c7 pid=6668 runtime=io.containerd.runc.v2 May 17 01:36:45.134028 env[2382]: time="2025-05-17T01:36:45.133991042Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-8f77d7b6c-czxt2,Uid:c4371095-cd92-4ea8-9564-c86ac0de3064,Namespace:calico-system,Attempt:1,} returns sandbox id \"25a2d084b605452e7f2b021cece860eebcf56fa0a307bf93e5e95eafea9a74c7\"" May 17 01:36:45.816911 env[2382]: time="2025-05-17T01:36:45.816870446Z" level=info msg="StopPodSandbox for \"4c1dc3e85c19b8facb4a29579507b1ec757fa83bf8a0b5ae8e9996b0223367f8\"" May 17 01:36:45.860552 env[2382]: time="2025-05-17T01:36:45.860523809Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/apiserver:v3.30.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 01:36:45.861293 env[2382]: time="2025-05-17T01:36:45.861269688Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:0d503660232383641bf9af3b7e4ef066c0e96a8ec586f123e5b56b6a196c983d,Labels:map[string]string{io.cri-containerd.image: 
managed,},XXX_unrecognized:[],}" May 17 01:36:45.862403 env[2382]: time="2025-05-17T01:36:45.862382526Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/apiserver:v3.30.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 01:36:45.863409 env[2382]: time="2025-05-17T01:36:45.863390644Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/apiserver@sha256:ad7d2e76f15777636c5d91c108d7655659b38fe8970255050ffa51223eb96ff4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 01:36:45.863930 env[2382]: time="2025-05-17T01:36:45.863911843Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.0\" returns image reference \"sha256:0d503660232383641bf9af3b7e4ef066c0e96a8ec586f123e5b56b6a196c983d\"" May 17 01:36:45.864816 env[2382]: time="2025-05-17T01:36:45.864794162Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.0\"" May 17 01:36:45.865571 env[2382]: time="2025-05-17T01:36:45.865551721Z" level=info msg="CreateContainer within sandbox \"5047bd5c28fef562a41c623ffa68f4cda9c8f2db2a43050c26b999adccf06de9\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" May 17 01:36:45.870680 env[2382]: time="2025-05-17T01:36:45.870644192Z" level=info msg="CreateContainer within sandbox \"5047bd5c28fef562a41c623ffa68f4cda9c8f2db2a43050c26b999adccf06de9\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"015a1661128433d48f0fcf29fb392363e39f01a3a21455687db8e65f7dabe1ed\"" May 17 01:36:45.871072 env[2382]: time="2025-05-17T01:36:45.871052191Z" level=info msg="StartContainer for \"015a1661128433d48f0fcf29fb392363e39f01a3a21455687db8e65f7dabe1ed\"" May 17 01:36:45.880262 env[2382]: 2025-05-17 01:36:45.852 [INFO][6762] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="4c1dc3e85c19b8facb4a29579507b1ec757fa83bf8a0b5ae8e9996b0223367f8" May 17 01:36:45.880262 env[2382]: 2025-05-17 01:36:45.852 
[INFO][6762] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="4c1dc3e85c19b8facb4a29579507b1ec757fa83bf8a0b5ae8e9996b0223367f8" iface="eth0" netns="/var/run/netns/cni-e6874f3d-249c-5425-b9d9-74ce45ba7e41" May 17 01:36:45.880262 env[2382]: 2025-05-17 01:36:45.852 [INFO][6762] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="4c1dc3e85c19b8facb4a29579507b1ec757fa83bf8a0b5ae8e9996b0223367f8" iface="eth0" netns="/var/run/netns/cni-e6874f3d-249c-5425-b9d9-74ce45ba7e41" May 17 01:36:45.880262 env[2382]: 2025-05-17 01:36:45.852 [INFO][6762] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="4c1dc3e85c19b8facb4a29579507b1ec757fa83bf8a0b5ae8e9996b0223367f8" iface="eth0" netns="/var/run/netns/cni-e6874f3d-249c-5425-b9d9-74ce45ba7e41" May 17 01:36:45.880262 env[2382]: 2025-05-17 01:36:45.852 [INFO][6762] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="4c1dc3e85c19b8facb4a29579507b1ec757fa83bf8a0b5ae8e9996b0223367f8" May 17 01:36:45.880262 env[2382]: 2025-05-17 01:36:45.852 [INFO][6762] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4c1dc3e85c19b8facb4a29579507b1ec757fa83bf8a0b5ae8e9996b0223367f8" May 17 01:36:45.880262 env[2382]: 2025-05-17 01:36:45.869 [INFO][6780] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4c1dc3e85c19b8facb4a29579507b1ec757fa83bf8a0b5ae8e9996b0223367f8" HandleID="k8s-pod-network.4c1dc3e85c19b8facb4a29579507b1ec757fa83bf8a0b5ae8e9996b0223367f8" Workload="ci--3510.3.7--n--8226148b53-k8s-coredns--7c65d6cfc9--sxm46-eth0" May 17 01:36:45.880262 env[2382]: 2025-05-17 01:36:45.869 [INFO][6780] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 01:36:45.880262 env[2382]: 2025-05-17 01:36:45.869 [INFO][6780] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 17 01:36:45.880262 env[2382]: 2025-05-17 01:36:45.876 [WARNING][6780] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="4c1dc3e85c19b8facb4a29579507b1ec757fa83bf8a0b5ae8e9996b0223367f8" HandleID="k8s-pod-network.4c1dc3e85c19b8facb4a29579507b1ec757fa83bf8a0b5ae8e9996b0223367f8" Workload="ci--3510.3.7--n--8226148b53-k8s-coredns--7c65d6cfc9--sxm46-eth0" May 17 01:36:45.880262 env[2382]: 2025-05-17 01:36:45.876 [INFO][6780] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4c1dc3e85c19b8facb4a29579507b1ec757fa83bf8a0b5ae8e9996b0223367f8" HandleID="k8s-pod-network.4c1dc3e85c19b8facb4a29579507b1ec757fa83bf8a0b5ae8e9996b0223367f8" Workload="ci--3510.3.7--n--8226148b53-k8s-coredns--7c65d6cfc9--sxm46-eth0" May 17 01:36:45.880262 env[2382]: 2025-05-17 01:36:45.877 [INFO][6780] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 01:36:45.880262 env[2382]: 2025-05-17 01:36:45.878 [INFO][6762] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="4c1dc3e85c19b8facb4a29579507b1ec757fa83bf8a0b5ae8e9996b0223367f8" May 17 01:36:45.880729 env[2382]: time="2025-05-17T01:36:45.880397775Z" level=info msg="TearDown network for sandbox \"4c1dc3e85c19b8facb4a29579507b1ec757fa83bf8a0b5ae8e9996b0223367f8\" successfully" May 17 01:36:45.880729 env[2382]: time="2025-05-17T01:36:45.880424014Z" level=info msg="StopPodSandbox for \"4c1dc3e85c19b8facb4a29579507b1ec757fa83bf8a0b5ae8e9996b0223367f8\" returns successfully" May 17 01:36:45.880877 env[2382]: time="2025-05-17T01:36:45.880849414Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-sxm46,Uid:cbe86ea2-e6e9-4576-adbf-bcd51e11fb88,Namespace:kube-system,Attempt:1,}" May 17 01:36:45.887459 systemd[1]: run-netns-cni\x2de6874f3d\x2d249c\x2d5425\x2db9d9\x2d74ce45ba7e41.mount: Deactivated successfully. 
May 17 01:36:45.915318 env[2382]: time="2025-05-17T01:36:45.915279393Z" level=info msg="StartContainer for \"015a1661128433d48f0fcf29fb392363e39f01a3a21455687db8e65f7dabe1ed\" returns successfully" May 17 01:36:45.963468 systemd-networkd[2041]: cali43b8052e3ad: Link UP May 17 01:36:45.976490 systemd-networkd[2041]: cali43b8052e3ad: Gained carrier May 17 01:36:45.976871 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali43b8052e3ad: link becomes ready May 17 01:36:45.984998 env[2382]: 2025-05-17 01:36:45.902 [INFO][6818] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist May 17 01:36:45.984998 env[2382]: 2025-05-17 01:36:45.913 [INFO][6818] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3510.3.7--n--8226148b53-k8s-coredns--7c65d6cfc9--sxm46-eth0 coredns-7c65d6cfc9- kube-system cbe86ea2-e6e9-4576-adbf-bcd51e11fb88 935 0 2025-05-17 01:36:12 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-3510.3.7-n-8226148b53 coredns-7c65d6cfc9-sxm46 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali43b8052e3ad [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="2bae6bab4a3086ea787251ad80ad26dc31129203d4de07aef33220eb4fd5195b" Namespace="kube-system" Pod="coredns-7c65d6cfc9-sxm46" WorkloadEndpoint="ci--3510.3.7--n--8226148b53-k8s-coredns--7c65d6cfc9--sxm46-" May 17 01:36:45.984998 env[2382]: 2025-05-17 01:36:45.913 [INFO][6818] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="2bae6bab4a3086ea787251ad80ad26dc31129203d4de07aef33220eb4fd5195b" Namespace="kube-system" Pod="coredns-7c65d6cfc9-sxm46" WorkloadEndpoint="ci--3510.3.7--n--8226148b53-k8s-coredns--7c65d6cfc9--sxm46-eth0" May 17 01:36:45.984998 env[2382]: 2025-05-17 01:36:45.935 [INFO][6862] ipam/ipam_plugin.go 225: Calico CNI IPAM request count 
IPv4=1 IPv6=0 ContainerID="2bae6bab4a3086ea787251ad80ad26dc31129203d4de07aef33220eb4fd5195b" HandleID="k8s-pod-network.2bae6bab4a3086ea787251ad80ad26dc31129203d4de07aef33220eb4fd5195b" Workload="ci--3510.3.7--n--8226148b53-k8s-coredns--7c65d6cfc9--sxm46-eth0" May 17 01:36:45.984998 env[2382]: 2025-05-17 01:36:45.935 [INFO][6862] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="2bae6bab4a3086ea787251ad80ad26dc31129203d4de07aef33220eb4fd5195b" HandleID="k8s-pod-network.2bae6bab4a3086ea787251ad80ad26dc31129203d4de07aef33220eb4fd5195b" Workload="ci--3510.3.7--n--8226148b53-k8s-coredns--7c65d6cfc9--sxm46-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40005167a0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-3510.3.7-n-8226148b53", "pod":"coredns-7c65d6cfc9-sxm46", "timestamp":"2025-05-17 01:36:45.935373518 +0000 UTC"}, Hostname:"ci-3510.3.7-n-8226148b53", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 17 01:36:45.984998 env[2382]: 2025-05-17 01:36:45.935 [INFO][6862] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 01:36:45.984998 env[2382]: 2025-05-17 01:36:45.935 [INFO][6862] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 17 01:36:45.984998 env[2382]: 2025-05-17 01:36:45.935 [INFO][6862] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3510.3.7-n-8226148b53' May 17 01:36:45.984998 env[2382]: 2025-05-17 01:36:45.943 [INFO][6862] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.2bae6bab4a3086ea787251ad80ad26dc31129203d4de07aef33220eb4fd5195b" host="ci-3510.3.7-n-8226148b53" May 17 01:36:45.984998 env[2382]: 2025-05-17 01:36:45.946 [INFO][6862] ipam/ipam.go 394: Looking up existing affinities for host host="ci-3510.3.7-n-8226148b53" May 17 01:36:45.984998 env[2382]: 2025-05-17 01:36:45.949 [INFO][6862] ipam/ipam.go 511: Trying affinity for 192.168.63.0/26 host="ci-3510.3.7-n-8226148b53" May 17 01:36:45.984998 env[2382]: 2025-05-17 01:36:45.950 [INFO][6862] ipam/ipam.go 158: Attempting to load block cidr=192.168.63.0/26 host="ci-3510.3.7-n-8226148b53" May 17 01:36:45.984998 env[2382]: 2025-05-17 01:36:45.952 [INFO][6862] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.63.0/26 host="ci-3510.3.7-n-8226148b53" May 17 01:36:45.984998 env[2382]: 2025-05-17 01:36:45.952 [INFO][6862] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.63.0/26 handle="k8s-pod-network.2bae6bab4a3086ea787251ad80ad26dc31129203d4de07aef33220eb4fd5195b" host="ci-3510.3.7-n-8226148b53" May 17 01:36:45.984998 env[2382]: 2025-05-17 01:36:45.952 [INFO][6862] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.2bae6bab4a3086ea787251ad80ad26dc31129203d4de07aef33220eb4fd5195b May 17 01:36:45.984998 env[2382]: 2025-05-17 01:36:45.955 [INFO][6862] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.63.0/26 handle="k8s-pod-network.2bae6bab4a3086ea787251ad80ad26dc31129203d4de07aef33220eb4fd5195b" host="ci-3510.3.7-n-8226148b53" May 17 01:36:45.984998 env[2382]: 2025-05-17 01:36:45.960 [INFO][6862] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.63.5/26] block=192.168.63.0/26 
handle="k8s-pod-network.2bae6bab4a3086ea787251ad80ad26dc31129203d4de07aef33220eb4fd5195b" host="ci-3510.3.7-n-8226148b53" May 17 01:36:45.984998 env[2382]: 2025-05-17 01:36:45.960 [INFO][6862] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.63.5/26] handle="k8s-pod-network.2bae6bab4a3086ea787251ad80ad26dc31129203d4de07aef33220eb4fd5195b" host="ci-3510.3.7-n-8226148b53" May 17 01:36:45.984998 env[2382]: 2025-05-17 01:36:45.960 [INFO][6862] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 01:36:45.984998 env[2382]: 2025-05-17 01:36:45.960 [INFO][6862] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.63.5/26] IPv6=[] ContainerID="2bae6bab4a3086ea787251ad80ad26dc31129203d4de07aef33220eb4fd5195b" HandleID="k8s-pod-network.2bae6bab4a3086ea787251ad80ad26dc31129203d4de07aef33220eb4fd5195b" Workload="ci--3510.3.7--n--8226148b53-k8s-coredns--7c65d6cfc9--sxm46-eth0" May 17 01:36:45.985530 env[2382]: 2025-05-17 01:36:45.961 [INFO][6818] cni-plugin/k8s.go 418: Populated endpoint ContainerID="2bae6bab4a3086ea787251ad80ad26dc31129203d4de07aef33220eb4fd5195b" Namespace="kube-system" Pod="coredns-7c65d6cfc9-sxm46" WorkloadEndpoint="ci--3510.3.7--n--8226148b53-k8s-coredns--7c65d6cfc9--sxm46-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.7--n--8226148b53-k8s-coredns--7c65d6cfc9--sxm46-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"cbe86ea2-e6e9-4576-adbf-bcd51e11fb88", ResourceVersion:"935", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 1, 36, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.7-n-8226148b53", ContainerID:"", Pod:"coredns-7c65d6cfc9-sxm46", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.63.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali43b8052e3ad", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 01:36:45.985530 env[2382]: 2025-05-17 01:36:45.961 [INFO][6818] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.63.5/32] ContainerID="2bae6bab4a3086ea787251ad80ad26dc31129203d4de07aef33220eb4fd5195b" Namespace="kube-system" Pod="coredns-7c65d6cfc9-sxm46" WorkloadEndpoint="ci--3510.3.7--n--8226148b53-k8s-coredns--7c65d6cfc9--sxm46-eth0" May 17 01:36:45.985530 env[2382]: 2025-05-17 01:36:45.961 [INFO][6818] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali43b8052e3ad ContainerID="2bae6bab4a3086ea787251ad80ad26dc31129203d4de07aef33220eb4fd5195b" Namespace="kube-system" Pod="coredns-7c65d6cfc9-sxm46" WorkloadEndpoint="ci--3510.3.7--n--8226148b53-k8s-coredns--7c65d6cfc9--sxm46-eth0" May 17 01:36:45.985530 env[2382]: 2025-05-17 01:36:45.976 [INFO][6818] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2bae6bab4a3086ea787251ad80ad26dc31129203d4de07aef33220eb4fd5195b" Namespace="kube-system" 
Pod="coredns-7c65d6cfc9-sxm46" WorkloadEndpoint="ci--3510.3.7--n--8226148b53-k8s-coredns--7c65d6cfc9--sxm46-eth0" May 17 01:36:45.985530 env[2382]: 2025-05-17 01:36:45.976 [INFO][6818] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="2bae6bab4a3086ea787251ad80ad26dc31129203d4de07aef33220eb4fd5195b" Namespace="kube-system" Pod="coredns-7c65d6cfc9-sxm46" WorkloadEndpoint="ci--3510.3.7--n--8226148b53-k8s-coredns--7c65d6cfc9--sxm46-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.7--n--8226148b53-k8s-coredns--7c65d6cfc9--sxm46-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"cbe86ea2-e6e9-4576-adbf-bcd51e11fb88", ResourceVersion:"935", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 1, 36, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.7-n-8226148b53", ContainerID:"2bae6bab4a3086ea787251ad80ad26dc31129203d4de07aef33220eb4fd5195b", Pod:"coredns-7c65d6cfc9-sxm46", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.63.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali43b8052e3ad", MAC:"ce:8a:74:d7:be:62", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, 
v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 01:36:45.985530 env[2382]: 2025-05-17 01:36:45.982 [INFO][6818] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="2bae6bab4a3086ea787251ad80ad26dc31129203d4de07aef33220eb4fd5195b" Namespace="kube-system" Pod="coredns-7c65d6cfc9-sxm46" WorkloadEndpoint="ci--3510.3.7--n--8226148b53-k8s-coredns--7c65d6cfc9--sxm46-eth0" May 17 01:36:45.990942 systemd-networkd[2041]: cali3f821384a61: Gained IPv6LL May 17 01:36:45.995053 env[2382]: time="2025-05-17T01:36:45.995004414Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 01:36:45.995053 env[2382]: time="2025-05-17T01:36:45.995040374Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 01:36:45.995104 env[2382]: time="2025-05-17T01:36:45.995056014Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 01:36:45.995191 env[2382]: time="2025-05-17T01:36:45.995173773Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/2bae6bab4a3086ea787251ad80ad26dc31129203d4de07aef33220eb4fd5195b pid=6926 runtime=io.containerd.runc.v2 May 17 01:36:46.032439 env[2382]: time="2025-05-17T01:36:46.032406752Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-sxm46,Uid:cbe86ea2-e6e9-4576-adbf-bcd51e11fb88,Namespace:kube-system,Attempt:1,} returns sandbox id \"2bae6bab4a3086ea787251ad80ad26dc31129203d4de07aef33220eb4fd5195b\"" May 17 01:36:46.034343 env[2382]: time="2025-05-17T01:36:46.034320829Z" level=info msg="CreateContainer within sandbox \"2bae6bab4a3086ea787251ad80ad26dc31129203d4de07aef33220eb4fd5195b\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 17 01:36:46.038709 env[2382]: time="2025-05-17T01:36:46.038683501Z" level=info msg="CreateContainer within sandbox \"2bae6bab4a3086ea787251ad80ad26dc31129203d4de07aef33220eb4fd5195b\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"5f15741efc31bff737d5eca1b71a9f9093bcb8a5e451dc6096724fc3bb6dfa88\"" May 17 01:36:46.039084 env[2382]: time="2025-05-17T01:36:46.039063581Z" level=info msg="StartContainer for \"5f15741efc31bff737d5eca1b71a9f9093bcb8a5e451dc6096724fc3bb6dfa88\"" May 17 01:36:46.075976 env[2382]: time="2025-05-17T01:36:46.075908480Z" level=info msg="StartContainer for \"5f15741efc31bff737d5eca1b71a9f9093bcb8a5e451dc6096724fc3bb6dfa88\" returns successfully" May 17 01:36:46.091324 env[2382]: time="2025-05-17T01:36:46.091297055Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/apiserver:v3.30.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 01:36:46.091938 env[2382]: time="2025-05-17T01:36:46.091916694Z" level=info msg="ImageUpdate event 
&ImageUpdate{Name:sha256:0d503660232383641bf9af3b7e4ef066c0e96a8ec586f123e5b56b6a196c983d,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 01:36:46.096283 env[2382]: time="2025-05-17T01:36:46.096260607Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/apiserver:v3.30.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 01:36:46.097311 env[2382]: time="2025-05-17T01:36:46.097289405Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/apiserver@sha256:ad7d2e76f15777636c5d91c108d7655659b38fe8970255050ffa51223eb96ff4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 01:36:46.097766 env[2382]: time="2025-05-17T01:36:46.097745884Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.0\" returns image reference \"sha256:0d503660232383641bf9af3b7e4ef066c0e96a8ec586f123e5b56b6a196c983d\"" May 17 01:36:46.098679 env[2382]: time="2025-05-17T01:36:46.098659243Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\"" May 17 01:36:46.099541 env[2382]: time="2025-05-17T01:36:46.099514601Z" level=info msg="CreateContainer within sandbox \"25401365bdd327f1bc01dfb8001f01b31a0e06b7ae921882691ca290fce49565\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" May 17 01:36:46.102982 env[2382]: time="2025-05-17T01:36:46.102955556Z" level=info msg="CreateContainer within sandbox \"25401365bdd327f1bc01dfb8001f01b31a0e06b7ae921882691ca290fce49565\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"0fd11be46b69a82401416e092b204fbf1ea6e1784dc4cc0a05da09a527e78d01\"" May 17 01:36:46.103297 env[2382]: time="2025-05-17T01:36:46.103272315Z" level=info msg="StartContainer for \"0fd11be46b69a82401416e092b204fbf1ea6e1784dc4cc0a05da09a527e78d01\"" May 17 01:36:46.117935 systemd-networkd[2041]: calib6fbd1a473b: Gained IPv6LL May 17 01:36:46.149229 
env[2382]: time="2025-05-17T01:36:46.149197040Z" level=info msg="StartContainer for \"0fd11be46b69a82401416e092b204fbf1ea6e1784dc4cc0a05da09a527e78d01\" returns successfully" May 17 01:36:46.239333 env[2382]: time="2025-05-17T01:36:46.239280692Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" host=ghcr.io May 17 01:36:46.239543 env[2382]: time="2025-05-17T01:36:46.239515372Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" May 17 01:36:46.239758 kubelet[3578]: E0517 01:36:46.239715 3578 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 17 01:36:46.239998 kubelet[3578]: E0517 01:36:46.239777 3578 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 17 01:36:46.239998 kubelet[3578]: E0517 01:36:46.239938 3578 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pnmns,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-8f77d7b6c-czxt2_calico-system(c4371095-cd92-4ea8-9564-c86ac0de3064): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" logger="UnhandledError" May 17 01:36:46.241117 kubelet[3578]: E0517 01:36:46.241084 3578 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden\"" pod="calico-system/goldmane-8f77d7b6c-czxt2" podUID="c4371095-cd92-4ea8-9564-c86ac0de3064" May 17 01:36:46.309957 systemd-networkd[2041]: cali2092006e175: Gained IPv6LL May 17 01:36:46.815817 env[2382]: time="2025-05-17T01:36:46.815767705Z" level=info msg="StopPodSandbox for \"28f231b52b1155e4f6bec2ac6458f7ca37f5a74c8a127964684b77a03a3697c3\"" May 17 
01:36:46.815958 env[2382]: time="2025-05-17T01:36:46.815811945Z" level=info msg="StopPodSandbox for \"19df9716bf6905a955a54fe5b5c367cc75070f9bf8995578b6d4130b576a1bde\"" May 17 01:36:46.884714 env[2382]: 2025-05-17 01:36:46.854 [INFO][7153] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="28f231b52b1155e4f6bec2ac6458f7ca37f5a74c8a127964684b77a03a3697c3" May 17 01:36:46.884714 env[2382]: 2025-05-17 01:36:46.854 [INFO][7153] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="28f231b52b1155e4f6bec2ac6458f7ca37f5a74c8a127964684b77a03a3697c3" iface="eth0" netns="/var/run/netns/cni-bc2b7be4-f157-0a97-9b5a-452b49dc1042" May 17 01:36:46.884714 env[2382]: 2025-05-17 01:36:46.854 [INFO][7153] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="28f231b52b1155e4f6bec2ac6458f7ca37f5a74c8a127964684b77a03a3697c3" iface="eth0" netns="/var/run/netns/cni-bc2b7be4-f157-0a97-9b5a-452b49dc1042" May 17 01:36:46.884714 env[2382]: 2025-05-17 01:36:46.855 [INFO][7153] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="28f231b52b1155e4f6bec2ac6458f7ca37f5a74c8a127964684b77a03a3697c3" iface="eth0" netns="/var/run/netns/cni-bc2b7be4-f157-0a97-9b5a-452b49dc1042" May 17 01:36:46.884714 env[2382]: 2025-05-17 01:36:46.855 [INFO][7153] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="28f231b52b1155e4f6bec2ac6458f7ca37f5a74c8a127964684b77a03a3697c3" May 17 01:36:46.884714 env[2382]: 2025-05-17 01:36:46.855 [INFO][7153] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="28f231b52b1155e4f6bec2ac6458f7ca37f5a74c8a127964684b77a03a3697c3" May 17 01:36:46.884714 env[2382]: 2025-05-17 01:36:46.874 [INFO][7204] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="28f231b52b1155e4f6bec2ac6458f7ca37f5a74c8a127964684b77a03a3697c3" HandleID="k8s-pod-network.28f231b52b1155e4f6bec2ac6458f7ca37f5a74c8a127964684b77a03a3697c3" Workload="ci--3510.3.7--n--8226148b53-k8s-coredns--7c65d6cfc9--nf2f9-eth0" May 17 01:36:46.884714 env[2382]: 2025-05-17 01:36:46.874 [INFO][7204] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 01:36:46.884714 env[2382]: 2025-05-17 01:36:46.874 [INFO][7204] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 01:36:46.884714 env[2382]: 2025-05-17 01:36:46.881 [WARNING][7204] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="28f231b52b1155e4f6bec2ac6458f7ca37f5a74c8a127964684b77a03a3697c3" HandleID="k8s-pod-network.28f231b52b1155e4f6bec2ac6458f7ca37f5a74c8a127964684b77a03a3697c3" Workload="ci--3510.3.7--n--8226148b53-k8s-coredns--7c65d6cfc9--nf2f9-eth0" May 17 01:36:46.884714 env[2382]: 2025-05-17 01:36:46.881 [INFO][7204] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="28f231b52b1155e4f6bec2ac6458f7ca37f5a74c8a127964684b77a03a3697c3" HandleID="k8s-pod-network.28f231b52b1155e4f6bec2ac6458f7ca37f5a74c8a127964684b77a03a3697c3" Workload="ci--3510.3.7--n--8226148b53-k8s-coredns--7c65d6cfc9--nf2f9-eth0" May 17 01:36:46.884714 env[2382]: 2025-05-17 01:36:46.882 [INFO][7204] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 01:36:46.884714 env[2382]: 2025-05-17 01:36:46.883 [INFO][7153] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="28f231b52b1155e4f6bec2ac6458f7ca37f5a74c8a127964684b77a03a3697c3" May 17 01:36:46.885276 env[2382]: time="2025-05-17T01:36:46.884923712Z" level=info msg="TearDown network for sandbox \"28f231b52b1155e4f6bec2ac6458f7ca37f5a74c8a127964684b77a03a3697c3\" successfully" May 17 01:36:46.885276 env[2382]: time="2025-05-17T01:36:46.884953832Z" level=info msg="StopPodSandbox for \"28f231b52b1155e4f6bec2ac6458f7ca37f5a74c8a127964684b77a03a3697c3\" returns successfully" May 17 01:36:46.885501 env[2382]: time="2025-05-17T01:36:46.885475231Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-nf2f9,Uid:a4fc78cc-d2a7-4712-9751-e0c08da51294,Namespace:kube-system,Attempt:1,}" May 17 01:36:46.889578 systemd[1]: run-netns-cni\x2dbc2b7be4\x2df157\x2d0a97\x2d9b5a\x2d452b49dc1042.mount: Deactivated successfully. 
May 17 01:36:46.893792 env[2382]: 2025-05-17 01:36:46.855 [INFO][7149] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="19df9716bf6905a955a54fe5b5c367cc75070f9bf8995578b6d4130b576a1bde" May 17 01:36:46.893792 env[2382]: 2025-05-17 01:36:46.855 [INFO][7149] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="19df9716bf6905a955a54fe5b5c367cc75070f9bf8995578b6d4130b576a1bde" iface="eth0" netns="/var/run/netns/cni-930cd683-4cce-2fc4-a7d4-bb254b1359ba" May 17 01:36:46.893792 env[2382]: 2025-05-17 01:36:46.855 [INFO][7149] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="19df9716bf6905a955a54fe5b5c367cc75070f9bf8995578b6d4130b576a1bde" iface="eth0" netns="/var/run/netns/cni-930cd683-4cce-2fc4-a7d4-bb254b1359ba" May 17 01:36:46.893792 env[2382]: 2025-05-17 01:36:46.855 [INFO][7149] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="19df9716bf6905a955a54fe5b5c367cc75070f9bf8995578b6d4130b576a1bde" iface="eth0" netns="/var/run/netns/cni-930cd683-4cce-2fc4-a7d4-bb254b1359ba" May 17 01:36:46.893792 env[2382]: 2025-05-17 01:36:46.855 [INFO][7149] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="19df9716bf6905a955a54fe5b5c367cc75070f9bf8995578b6d4130b576a1bde" May 17 01:36:46.893792 env[2382]: 2025-05-17 01:36:46.855 [INFO][7149] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="19df9716bf6905a955a54fe5b5c367cc75070f9bf8995578b6d4130b576a1bde" May 17 01:36:46.893792 env[2382]: 2025-05-17 01:36:46.874 [INFO][7206] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="19df9716bf6905a955a54fe5b5c367cc75070f9bf8995578b6d4130b576a1bde" HandleID="k8s-pod-network.19df9716bf6905a955a54fe5b5c367cc75070f9bf8995578b6d4130b576a1bde" Workload="ci--3510.3.7--n--8226148b53-k8s-calico--kube--controllers--54b699d664--blxw6-eth0" May 17 01:36:46.893792 env[2382]: 2025-05-17 01:36:46.874 [INFO][7206] ipam/ipam_plugin.go 353: About to 
acquire host-wide IPAM lock. May 17 01:36:46.893792 env[2382]: 2025-05-17 01:36:46.882 [INFO][7206] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 01:36:46.893792 env[2382]: 2025-05-17 01:36:46.889 [WARNING][7206] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="19df9716bf6905a955a54fe5b5c367cc75070f9bf8995578b6d4130b576a1bde" HandleID="k8s-pod-network.19df9716bf6905a955a54fe5b5c367cc75070f9bf8995578b6d4130b576a1bde" Workload="ci--3510.3.7--n--8226148b53-k8s-calico--kube--controllers--54b699d664--blxw6-eth0" May 17 01:36:46.893792 env[2382]: 2025-05-17 01:36:46.889 [INFO][7206] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="19df9716bf6905a955a54fe5b5c367cc75070f9bf8995578b6d4130b576a1bde" HandleID="k8s-pod-network.19df9716bf6905a955a54fe5b5c367cc75070f9bf8995578b6d4130b576a1bde" Workload="ci--3510.3.7--n--8226148b53-k8s-calico--kube--controllers--54b699d664--blxw6-eth0" May 17 01:36:46.893792 env[2382]: 2025-05-17 01:36:46.890 [INFO][7206] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 01:36:46.893792 env[2382]: 2025-05-17 01:36:46.891 [INFO][7149] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="19df9716bf6905a955a54fe5b5c367cc75070f9bf8995578b6d4130b576a1bde" May 17 01:36:46.894115 env[2382]: time="2025-05-17T01:36:46.893924617Z" level=info msg="TearDown network for sandbox \"19df9716bf6905a955a54fe5b5c367cc75070f9bf8995578b6d4130b576a1bde\" successfully" May 17 01:36:46.894115 env[2382]: time="2025-05-17T01:36:46.893957617Z" level=info msg="StopPodSandbox for \"19df9716bf6905a955a54fe5b5c367cc75070f9bf8995578b6d4130b576a1bde\" returns successfully" May 17 01:36:46.894391 env[2382]: time="2025-05-17T01:36:46.894366056Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-54b699d664-blxw6,Uid:c4bca88a-aef7-4116-a834-44caa9bfe63e,Namespace:calico-system,Attempt:1,}" May 17 01:36:46.898422 systemd[1]: run-netns-cni\x2d930cd683\x2d4cce\x2d2fc4\x2da7d4\x2dbb254b1359ba.mount: Deactivated successfully. May 17 01:36:46.905339 kubelet[3578]: E0517 01:36:46.905308 3578 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\"\"" pod="calico-system/goldmane-8f77d7b6c-czxt2" podUID="c4371095-cd92-4ea8-9564-c86ac0de3064" May 17 01:36:46.909653 kubelet[3578]: I0517 01:36:46.909610 3578 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-sxm46" podStartSLOduration=34.909595991 podStartE2EDuration="34.909595991s" podCreationTimestamp="2025-05-17 01:36:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-17 01:36:46.909333232 +0000 UTC m=+41.163697553" watchObservedRunningTime="2025-05-17 01:36:46.909595991 +0000 UTC m=+41.163960312" May 17 01:36:46.914000 audit[7294]: NETFILTER_CFG table=filter:101 family=2 entries=22 op=nft_register_rule pid=7294 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 17 01:36:46.924887 kubelet[3578]: I0517 
01:36:46.924838 3578 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-889cbc855-tdbbg" podStartSLOduration=24.871449040999998 podStartE2EDuration="25.924821286s" podCreationTimestamp="2025-05-17 01:36:21 +0000 UTC" firstStartedPulling="2025-05-17 01:36:45.045149758 +0000 UTC m=+39.299514079" lastFinishedPulling="2025-05-17 01:36:46.098522003 +0000 UTC m=+40.352886324" observedRunningTime="2025-05-17 01:36:46.91656058 +0000 UTC m=+41.170924901" watchObservedRunningTime="2025-05-17 01:36:46.924821286 +0000 UTC m=+41.179185567" May 17 01:36:46.925939 kernel: kauditd_printk_skb: 31 callbacks suppressed May 17 01:36:46.926020 kernel: audit: type=1325 audit(1747445806.914:290): table=filter:101 family=2 entries=22 op=nft_register_rule pid=7294 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 17 01:36:46.932554 kubelet[3578]: I0517 01:36:46.932505 3578 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-889cbc855-v2bvs" podStartSLOduration=24.111290935 podStartE2EDuration="25.932490833s" podCreationTimestamp="2025-05-17 01:36:21 +0000 UTC" firstStartedPulling="2025-05-17 01:36:44.043454784 +0000 UTC m=+38.297819065" lastFinishedPulling="2025-05-17 01:36:45.864654642 +0000 UTC m=+40.119018963" observedRunningTime="2025-05-17 01:36:46.932135274 +0000 UTC m=+41.186499595" watchObservedRunningTime="2025-05-17 01:36:46.932490833 +0000 UTC m=+41.186855154" May 17 01:36:46.914000 audit[7294]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=8224 a0=3 a1=fffff87d0180 a2=0 a3=1 items=0 ppid=3808 pid=7294 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 01:36:46.979832 systemd-networkd[2041]: cali1d845f7869a: Link UP May 17 01:36:47.006367 kernel: audit: type=1300 audit(1747445806.914:290): 
arch=c00000b7 syscall=211 success=yes exit=8224 a0=3 a1=fffff87d0180 a2=0 a3=1 items=0 ppid=3808 pid=7294 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 01:36:47.006474 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready May 17 01:36:47.006492 kernel: audit: type=1327 audit(1747445806.914:290): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 17 01:36:46.914000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 17 01:36:47.043866 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali1d845f7869a: link becomes ready May 17 01:36:47.045000 audit[7294]: NETFILTER_CFG table=nat:102 family=2 entries=12 op=nft_register_rule pid=7294 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 17 01:36:47.055813 systemd-networkd[2041]: cali1d845f7869a: Gained carrier May 17 01:36:47.056082 kernel: audit: type=1325 audit(1747445807.045:291): table=nat:102 family=2 entries=12 op=nft_register_rule pid=7294 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 17 01:36:47.064572 env[2382]: 2025-05-17 01:36:46.908 [INFO][7246] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist May 17 01:36:47.064572 env[2382]: 2025-05-17 01:36:46.923 [INFO][7246] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3510.3.7--n--8226148b53-k8s-coredns--7c65d6cfc9--nf2f9-eth0 coredns-7c65d6cfc9- kube-system a4fc78cc-d2a7-4712-9751-e0c08da51294 954 0 2025-05-17 01:36:12 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-3510.3.7-n-8226148b53 
coredns-7c65d6cfc9-nf2f9 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali1d845f7869a [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="08c1c610361b869b2a269551c8433b57d6ff063a1ea7a0a88ddb37002769a555" Namespace="kube-system" Pod="coredns-7c65d6cfc9-nf2f9" WorkloadEndpoint="ci--3510.3.7--n--8226148b53-k8s-coredns--7c65d6cfc9--nf2f9-" May 17 01:36:47.064572 env[2382]: 2025-05-17 01:36:46.923 [INFO][7246] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="08c1c610361b869b2a269551c8433b57d6ff063a1ea7a0a88ddb37002769a555" Namespace="kube-system" Pod="coredns-7c65d6cfc9-nf2f9" WorkloadEndpoint="ci--3510.3.7--n--8226148b53-k8s-coredns--7c65d6cfc9--nf2f9-eth0" May 17 01:36:47.064572 env[2382]: 2025-05-17 01:36:46.951 [INFO][7301] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="08c1c610361b869b2a269551c8433b57d6ff063a1ea7a0a88ddb37002769a555" HandleID="k8s-pod-network.08c1c610361b869b2a269551c8433b57d6ff063a1ea7a0a88ddb37002769a555" Workload="ci--3510.3.7--n--8226148b53-k8s-coredns--7c65d6cfc9--nf2f9-eth0" May 17 01:36:47.064572 env[2382]: 2025-05-17 01:36:46.951 [INFO][7301] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="08c1c610361b869b2a269551c8433b57d6ff063a1ea7a0a88ddb37002769a555" HandleID="k8s-pod-network.08c1c610361b869b2a269551c8433b57d6ff063a1ea7a0a88ddb37002769a555" Workload="ci--3510.3.7--n--8226148b53-k8s-coredns--7c65d6cfc9--nf2f9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40006a1820), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-3510.3.7-n-8226148b53", "pod":"coredns-7c65d6cfc9-nf2f9", "timestamp":"2025-05-17 01:36:46.951172003 +0000 UTC"}, Hostname:"ci-3510.3.7-n-8226148b53", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 17 01:36:47.064572 
env[2382]: 2025-05-17 01:36:46.951 [INFO][7301] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 01:36:47.064572 env[2382]: 2025-05-17 01:36:46.951 [INFO][7301] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 01:36:47.064572 env[2382]: 2025-05-17 01:36:46.951 [INFO][7301] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3510.3.7-n-8226148b53' May 17 01:36:47.064572 env[2382]: 2025-05-17 01:36:46.959 [INFO][7301] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.08c1c610361b869b2a269551c8433b57d6ff063a1ea7a0a88ddb37002769a555" host="ci-3510.3.7-n-8226148b53" May 17 01:36:47.064572 env[2382]: 2025-05-17 01:36:46.962 [INFO][7301] ipam/ipam.go 394: Looking up existing affinities for host host="ci-3510.3.7-n-8226148b53" May 17 01:36:47.064572 env[2382]: 2025-05-17 01:36:46.966 [INFO][7301] ipam/ipam.go 511: Trying affinity for 192.168.63.0/26 host="ci-3510.3.7-n-8226148b53" May 17 01:36:47.064572 env[2382]: 2025-05-17 01:36:46.967 [INFO][7301] ipam/ipam.go 158: Attempting to load block cidr=192.168.63.0/26 host="ci-3510.3.7-n-8226148b53" May 17 01:36:47.064572 env[2382]: 2025-05-17 01:36:46.969 [INFO][7301] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.63.0/26 host="ci-3510.3.7-n-8226148b53" May 17 01:36:47.064572 env[2382]: 2025-05-17 01:36:46.969 [INFO][7301] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.63.0/26 handle="k8s-pod-network.08c1c610361b869b2a269551c8433b57d6ff063a1ea7a0a88ddb37002769a555" host="ci-3510.3.7-n-8226148b53" May 17 01:36:47.064572 env[2382]: 2025-05-17 01:36:46.970 [INFO][7301] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.08c1c610361b869b2a269551c8433b57d6ff063a1ea7a0a88ddb37002769a555 May 17 01:36:47.064572 env[2382]: 2025-05-17 01:36:46.973 [INFO][7301] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.63.0/26 
handle="k8s-pod-network.08c1c610361b869b2a269551c8433b57d6ff063a1ea7a0a88ddb37002769a555" host="ci-3510.3.7-n-8226148b53" May 17 01:36:47.064572 env[2382]: 2025-05-17 01:36:46.976 [INFO][7301] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.63.6/26] block=192.168.63.0/26 handle="k8s-pod-network.08c1c610361b869b2a269551c8433b57d6ff063a1ea7a0a88ddb37002769a555" host="ci-3510.3.7-n-8226148b53" May 17 01:36:47.064572 env[2382]: 2025-05-17 01:36:46.976 [INFO][7301] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.63.6/26] handle="k8s-pod-network.08c1c610361b869b2a269551c8433b57d6ff063a1ea7a0a88ddb37002769a555" host="ci-3510.3.7-n-8226148b53" May 17 01:36:47.064572 env[2382]: 2025-05-17 01:36:46.976 [INFO][7301] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 01:36:47.064572 env[2382]: 2025-05-17 01:36:46.976 [INFO][7301] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.63.6/26] IPv6=[] ContainerID="08c1c610361b869b2a269551c8433b57d6ff063a1ea7a0a88ddb37002769a555" HandleID="k8s-pod-network.08c1c610361b869b2a269551c8433b57d6ff063a1ea7a0a88ddb37002769a555" Workload="ci--3510.3.7--n--8226148b53-k8s-coredns--7c65d6cfc9--nf2f9-eth0" May 17 01:36:47.065150 env[2382]: 2025-05-17 01:36:46.978 [INFO][7246] cni-plugin/k8s.go 418: Populated endpoint ContainerID="08c1c610361b869b2a269551c8433b57d6ff063a1ea7a0a88ddb37002769a555" Namespace="kube-system" Pod="coredns-7c65d6cfc9-nf2f9" WorkloadEndpoint="ci--3510.3.7--n--8226148b53-k8s-coredns--7c65d6cfc9--nf2f9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.7--n--8226148b53-k8s-coredns--7c65d6cfc9--nf2f9-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"a4fc78cc-d2a7-4712-9751-e0c08da51294", ResourceVersion:"954", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 1, 36, 12, 0, time.Local), DeletionTimestamp:, 
DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.7-n-8226148b53", ContainerID:"", Pod:"coredns-7c65d6cfc9-nf2f9", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.63.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali1d845f7869a", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 01:36:47.065150 env[2382]: 2025-05-17 01:36:46.978 [INFO][7246] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.63.6/32] ContainerID="08c1c610361b869b2a269551c8433b57d6ff063a1ea7a0a88ddb37002769a555" Namespace="kube-system" Pod="coredns-7c65d6cfc9-nf2f9" WorkloadEndpoint="ci--3510.3.7--n--8226148b53-k8s-coredns--7c65d6cfc9--nf2f9-eth0" May 17 01:36:47.065150 env[2382]: 2025-05-17 01:36:46.978 [INFO][7246] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1d845f7869a ContainerID="08c1c610361b869b2a269551c8433b57d6ff063a1ea7a0a88ddb37002769a555" Namespace="kube-system" Pod="coredns-7c65d6cfc9-nf2f9" 
WorkloadEndpoint="ci--3510.3.7--n--8226148b53-k8s-coredns--7c65d6cfc9--nf2f9-eth0" May 17 01:36:47.065150 env[2382]: 2025-05-17 01:36:47.055 [INFO][7246] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="08c1c610361b869b2a269551c8433b57d6ff063a1ea7a0a88ddb37002769a555" Namespace="kube-system" Pod="coredns-7c65d6cfc9-nf2f9" WorkloadEndpoint="ci--3510.3.7--n--8226148b53-k8s-coredns--7c65d6cfc9--nf2f9-eth0" May 17 01:36:47.065150 env[2382]: 2025-05-17 01:36:47.056 [INFO][7246] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="08c1c610361b869b2a269551c8433b57d6ff063a1ea7a0a88ddb37002769a555" Namespace="kube-system" Pod="coredns-7c65d6cfc9-nf2f9" WorkloadEndpoint="ci--3510.3.7--n--8226148b53-k8s-coredns--7c65d6cfc9--nf2f9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.7--n--8226148b53-k8s-coredns--7c65d6cfc9--nf2f9-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"a4fc78cc-d2a7-4712-9751-e0c08da51294", ResourceVersion:"954", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 1, 36, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.7-n-8226148b53", ContainerID:"08c1c610361b869b2a269551c8433b57d6ff063a1ea7a0a88ddb37002769a555", Pod:"coredns-7c65d6cfc9-nf2f9", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.63.6/32"}, IPNATs:[]v3.IPNAT(nil), 
IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali1d845f7869a", MAC:"42:1a:51:cd:f3:e3", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 01:36:47.065150 env[2382]: 2025-05-17 01:36:47.063 [INFO][7246] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="08c1c610361b869b2a269551c8433b57d6ff063a1ea7a0a88ddb37002769a555" Namespace="kube-system" Pod="coredns-7c65d6cfc9-nf2f9" WorkloadEndpoint="ci--3510.3.7--n--8226148b53-k8s-coredns--7c65d6cfc9--nf2f9-eth0" May 17 01:36:47.072958 env[2382]: time="2025-05-17T01:36:47.072852930Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 01:36:47.072958 env[2382]: time="2025-05-17T01:36:47.072893970Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 01:36:47.072958 env[2382]: time="2025-05-17T01:36:47.072904690Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 01:36:47.073109 env[2382]: time="2025-05-17T01:36:47.073058210Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/08c1c610361b869b2a269551c8433b57d6ff063a1ea7a0a88ddb37002769a555 pid=7352 runtime=io.containerd.runc.v2 May 17 01:36:47.045000 audit[7294]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=fffff87d0180 a2=0 a3=1 items=0 ppid=3808 pid=7294 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 01:36:47.084604 systemd-networkd[2041]: cali0092f3aaa1e: Link UP May 17 01:36:47.136023 kernel: audit: type=1300 audit(1747445807.045:291): arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=fffff87d0180 a2=0 a3=1 items=0 ppid=3808 pid=7294 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 01:36:47.136082 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali0092f3aaa1e: link becomes ready May 17 01:36:47.136128 kernel: audit: type=1327 audit(1747445807.045:291): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 17 01:36:47.045000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 17 01:36:47.149297 systemd-networkd[2041]: cali0092f3aaa1e: Gained carrier May 17 01:36:47.157419 env[2382]: 2025-05-17 01:36:46.920 [INFO][7262] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist May 17 01:36:47.157419 env[2382]: 2025-05-17 01:36:46.932 [INFO][7262] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} 
{ci--3510.3.7--n--8226148b53-k8s-calico--kube--controllers--54b699d664--blxw6-eth0 calico-kube-controllers-54b699d664- calico-system c4bca88a-aef7-4116-a834-44caa9bfe63e 955 0 2025-05-17 01:36:25 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:54b699d664 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-3510.3.7-n-8226148b53 calico-kube-controllers-54b699d664-blxw6 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali0092f3aaa1e [] [] }} ContainerID="3583b33bcc0002854bf43c727e1eb36679e83a9ee3cb97fbad33205fa7364eab" Namespace="calico-system" Pod="calico-kube-controllers-54b699d664-blxw6" WorkloadEndpoint="ci--3510.3.7--n--8226148b53-k8s-calico--kube--controllers--54b699d664--blxw6-" May 17 01:36:47.157419 env[2382]: 2025-05-17 01:36:46.932 [INFO][7262] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="3583b33bcc0002854bf43c727e1eb36679e83a9ee3cb97fbad33205fa7364eab" Namespace="calico-system" Pod="calico-kube-controllers-54b699d664-blxw6" WorkloadEndpoint="ci--3510.3.7--n--8226148b53-k8s-calico--kube--controllers--54b699d664--blxw6-eth0" May 17 01:36:47.157419 env[2382]: 2025-05-17 01:36:46.953 [INFO][7307] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3583b33bcc0002854bf43c727e1eb36679e83a9ee3cb97fbad33205fa7364eab" HandleID="k8s-pod-network.3583b33bcc0002854bf43c727e1eb36679e83a9ee3cb97fbad33205fa7364eab" Workload="ci--3510.3.7--n--8226148b53-k8s-calico--kube--controllers--54b699d664--blxw6-eth0" May 17 01:36:47.157419 env[2382]: 2025-05-17 01:36:46.953 [INFO][7307] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="3583b33bcc0002854bf43c727e1eb36679e83a9ee3cb97fbad33205fa7364eab" HandleID="k8s-pod-network.3583b33bcc0002854bf43c727e1eb36679e83a9ee3cb97fbad33205fa7364eab" 
Workload="ci--3510.3.7--n--8226148b53-k8s-calico--kube--controllers--54b699d664--blxw6-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400051e7b0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-3510.3.7-n-8226148b53", "pod":"calico-kube-controllers-54b699d664-blxw6", "timestamp":"2025-05-17 01:36:46.953671759 +0000 UTC"}, Hostname:"ci-3510.3.7-n-8226148b53", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 17 01:36:47.157419 env[2382]: 2025-05-17 01:36:46.953 [INFO][7307] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 01:36:47.157419 env[2382]: 2025-05-17 01:36:46.976 [INFO][7307] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 01:36:47.157419 env[2382]: 2025-05-17 01:36:46.976 [INFO][7307] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3510.3.7-n-8226148b53' May 17 01:36:47.157419 env[2382]: 2025-05-17 01:36:47.060 [INFO][7307] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.3583b33bcc0002854bf43c727e1eb36679e83a9ee3cb97fbad33205fa7364eab" host="ci-3510.3.7-n-8226148b53" May 17 01:36:47.157419 env[2382]: 2025-05-17 01:36:47.066 [INFO][7307] ipam/ipam.go 394: Looking up existing affinities for host host="ci-3510.3.7-n-8226148b53" May 17 01:36:47.157419 env[2382]: 2025-05-17 01:36:47.070 [INFO][7307] ipam/ipam.go 511: Trying affinity for 192.168.63.0/26 host="ci-3510.3.7-n-8226148b53" May 17 01:36:47.157419 env[2382]: 2025-05-17 01:36:47.071 [INFO][7307] ipam/ipam.go 158: Attempting to load block cidr=192.168.63.0/26 host="ci-3510.3.7-n-8226148b53" May 17 01:36:47.157419 env[2382]: 2025-05-17 01:36:47.073 [INFO][7307] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.63.0/26 host="ci-3510.3.7-n-8226148b53" May 17 01:36:47.157419 env[2382]: 2025-05-17 
01:36:47.073 [INFO][7307] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.63.0/26 handle="k8s-pod-network.3583b33bcc0002854bf43c727e1eb36679e83a9ee3cb97fbad33205fa7364eab" host="ci-3510.3.7-n-8226148b53" May 17 01:36:47.157419 env[2382]: 2025-05-17 01:36:47.074 [INFO][7307] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.3583b33bcc0002854bf43c727e1eb36679e83a9ee3cb97fbad33205fa7364eab May 17 01:36:47.157419 env[2382]: 2025-05-17 01:36:47.076 [INFO][7307] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.63.0/26 handle="k8s-pod-network.3583b33bcc0002854bf43c727e1eb36679e83a9ee3cb97fbad33205fa7364eab" host="ci-3510.3.7-n-8226148b53" May 17 01:36:47.157419 env[2382]: 2025-05-17 01:36:47.081 [INFO][7307] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.63.7/26] block=192.168.63.0/26 handle="k8s-pod-network.3583b33bcc0002854bf43c727e1eb36679e83a9ee3cb97fbad33205fa7364eab" host="ci-3510.3.7-n-8226148b53" May 17 01:36:47.157419 env[2382]: 2025-05-17 01:36:47.081 [INFO][7307] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.63.7/26] handle="k8s-pod-network.3583b33bcc0002854bf43c727e1eb36679e83a9ee3cb97fbad33205fa7364eab" host="ci-3510.3.7-n-8226148b53" May 17 01:36:47.157419 env[2382]: 2025-05-17 01:36:47.081 [INFO][7307] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 17 01:36:47.157419 env[2382]: 2025-05-17 01:36:47.081 [INFO][7307] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.63.7/26] IPv6=[] ContainerID="3583b33bcc0002854bf43c727e1eb36679e83a9ee3cb97fbad33205fa7364eab" HandleID="k8s-pod-network.3583b33bcc0002854bf43c727e1eb36679e83a9ee3cb97fbad33205fa7364eab" Workload="ci--3510.3.7--n--8226148b53-k8s-calico--kube--controllers--54b699d664--blxw6-eth0" May 17 01:36:47.158036 env[2382]: 2025-05-17 01:36:47.082 [INFO][7262] cni-plugin/k8s.go 418: Populated endpoint ContainerID="3583b33bcc0002854bf43c727e1eb36679e83a9ee3cb97fbad33205fa7364eab" Namespace="calico-system" Pod="calico-kube-controllers-54b699d664-blxw6" WorkloadEndpoint="ci--3510.3.7--n--8226148b53-k8s-calico--kube--controllers--54b699d664--blxw6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.7--n--8226148b53-k8s-calico--kube--controllers--54b699d664--blxw6-eth0", GenerateName:"calico-kube-controllers-54b699d664-", Namespace:"calico-system", SelfLink:"", UID:"c4bca88a-aef7-4116-a834-44caa9bfe63e", ResourceVersion:"955", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 1, 36, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"54b699d664", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.7-n-8226148b53", ContainerID:"", Pod:"calico-kube-controllers-54b699d664-blxw6", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", 
IPNetworks:[]string{"192.168.63.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali0092f3aaa1e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 01:36:47.158036 env[2382]: 2025-05-17 01:36:47.082 [INFO][7262] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.63.7/32] ContainerID="3583b33bcc0002854bf43c727e1eb36679e83a9ee3cb97fbad33205fa7364eab" Namespace="calico-system" Pod="calico-kube-controllers-54b699d664-blxw6" WorkloadEndpoint="ci--3510.3.7--n--8226148b53-k8s-calico--kube--controllers--54b699d664--blxw6-eth0" May 17 01:36:47.158036 env[2382]: 2025-05-17 01:36:47.083 [INFO][7262] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0092f3aaa1e ContainerID="3583b33bcc0002854bf43c727e1eb36679e83a9ee3cb97fbad33205fa7364eab" Namespace="calico-system" Pod="calico-kube-controllers-54b699d664-blxw6" WorkloadEndpoint="ci--3510.3.7--n--8226148b53-k8s-calico--kube--controllers--54b699d664--blxw6-eth0" May 17 01:36:47.158036 env[2382]: 2025-05-17 01:36:47.149 [INFO][7262] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3583b33bcc0002854bf43c727e1eb36679e83a9ee3cb97fbad33205fa7364eab" Namespace="calico-system" Pod="calico-kube-controllers-54b699d664-blxw6" WorkloadEndpoint="ci--3510.3.7--n--8226148b53-k8s-calico--kube--controllers--54b699d664--blxw6-eth0" May 17 01:36:47.158036 env[2382]: 2025-05-17 01:36:47.149 [INFO][7262] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="3583b33bcc0002854bf43c727e1eb36679e83a9ee3cb97fbad33205fa7364eab" Namespace="calico-system" Pod="calico-kube-controllers-54b699d664-blxw6" WorkloadEndpoint="ci--3510.3.7--n--8226148b53-k8s-calico--kube--controllers--54b699d664--blxw6-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.7--n--8226148b53-k8s-calico--kube--controllers--54b699d664--blxw6-eth0", GenerateName:"calico-kube-controllers-54b699d664-", Namespace:"calico-system", SelfLink:"", UID:"c4bca88a-aef7-4116-a834-44caa9bfe63e", ResourceVersion:"955", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 1, 36, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"54b699d664", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.7-n-8226148b53", ContainerID:"3583b33bcc0002854bf43c727e1eb36679e83a9ee3cb97fbad33205fa7364eab", Pod:"calico-kube-controllers-54b699d664-blxw6", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.63.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali0092f3aaa1e", MAC:"36:5d:79:28:ac:81", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 01:36:47.158036 env[2382]: 2025-05-17 01:36:47.155 [INFO][7262] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="3583b33bcc0002854bf43c727e1eb36679e83a9ee3cb97fbad33205fa7364eab" Namespace="calico-system" Pod="calico-kube-controllers-54b699d664-blxw6" 
WorkloadEndpoint="ci--3510.3.7--n--8226148b53-k8s-calico--kube--controllers--54b699d664--blxw6-eth0" May 17 01:36:47.165904 env[2382]: time="2025-05-17T01:36:47.165852707Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 01:36:47.165904 env[2382]: time="2025-05-17T01:36:47.165890947Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 01:36:47.165904 env[2382]: time="2025-05-17T01:36:47.165901267Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 01:36:47.166128 env[2382]: time="2025-05-17T01:36:47.166109387Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/3583b33bcc0002854bf43c727e1eb36679e83a9ee3cb97fbad33205fa7364eab pid=7402 runtime=io.containerd.runc.v2 May 17 01:36:47.141000 audit[7382]: NETFILTER_CFG table=filter:103 family=2 entries=19 op=nft_register_rule pid=7382 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 17 01:36:47.202177 kernel: audit: type=1325 audit(1747445807.141:292): table=filter:103 family=2 entries=19 op=nft_register_rule pid=7382 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 17 01:36:47.202253 kernel: audit: type=1300 audit(1747445807.141:292): arch=c00000b7 syscall=211 success=yes exit=5992 a0=3 a1=ffffd4b4fe40 a2=0 a3=1 items=0 ppid=3808 pid=7382 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 01:36:47.141000 audit[7382]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5992 a0=3 a1=ffffd4b4fe40 a2=0 a3=1 items=0 ppid=3808 pid=7382 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 01:36:47.255449 kernel: audit: type=1327 audit(1747445807.141:292): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 17 01:36:47.141000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 17 01:36:47.291033 env[2382]: time="2025-05-17T01:36:47.291001994Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-nf2f9,Uid:a4fc78cc-d2a7-4712-9751-e0c08da51294,Namespace:kube-system,Attempt:1,} returns sandbox id \"08c1c610361b869b2a269551c8433b57d6ff063a1ea7a0a88ddb37002769a555\"" May 17 01:36:47.292790 env[2382]: time="2025-05-17T01:36:47.292767432Z" level=info msg="CreateContainer within sandbox \"08c1c610361b869b2a269551c8433b57d6ff063a1ea7a0a88ddb37002769a555\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 17 01:36:47.294000 audit[7382]: NETFILTER_CFG table=nat:104 family=2 entries=33 op=nft_register_chain pid=7382 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 17 01:36:47.296332 env[2382]: time="2025-05-17T01:36:47.296307986Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-54b699d664-blxw6,Uid:c4bca88a-aef7-4116-a834-44caa9bfe63e,Namespace:calico-system,Attempt:1,} returns sandbox id \"3583b33bcc0002854bf43c727e1eb36679e83a9ee3cb97fbad33205fa7364eab\"" May 17 01:36:47.297313 env[2382]: time="2025-05-17T01:36:47.297293505Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.0\"" May 17 01:36:47.294000 audit[7382]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=13428 a0=3 a1=ffffd4b4fe40 a2=0 a3=1 items=0 ppid=3808 pid=7382 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) May 17 01:36:47.294000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 17 01:36:47.322868 kernel: audit: type=1325 audit(1747445807.294:293): table=nat:104 family=2 entries=33 op=nft_register_chain pid=7382 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 17 01:36:47.324393 env[2382]: time="2025-05-17T01:36:47.324335183Z" level=info msg="CreateContainer within sandbox \"08c1c610361b869b2a269551c8433b57d6ff063a1ea7a0a88ddb37002769a555\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"8eaa74d3ef1d11027c2fe257c42b5d519f4fc166227511d58cbb6a75c61a2a54\"" May 17 01:36:47.324662 env[2382]: time="2025-05-17T01:36:47.324639543Z" level=info msg="StartContainer for \"8eaa74d3ef1d11027c2fe257c42b5d519f4fc166227511d58cbb6a75c61a2a54\"" May 17 01:36:47.361356 env[2382]: time="2025-05-17T01:36:47.361322406Z" level=info msg="StartContainer for \"8eaa74d3ef1d11027c2fe257c42b5d519f4fc166227511d58cbb6a75c61a2a54\" returns successfully" May 17 01:36:47.816534 env[2382]: time="2025-05-17T01:36:47.816490105Z" level=info msg="StopPodSandbox for \"11b52f82144de40182a6e5987d2cc226fef23efb714c59b1f3b99bd7d4cd317d\"" May 17 01:36:47.879990 env[2382]: 2025-05-17 01:36:47.851 [INFO][7564] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="11b52f82144de40182a6e5987d2cc226fef23efb714c59b1f3b99bd7d4cd317d" May 17 01:36:47.879990 env[2382]: 2025-05-17 01:36:47.851 [INFO][7564] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="11b52f82144de40182a6e5987d2cc226fef23efb714c59b1f3b99bd7d4cd317d" iface="eth0" netns="/var/run/netns/cni-83314789-c3ac-b0fb-bdc6-d57f920ed6ea" May 17 01:36:47.879990 env[2382]: 2025-05-17 01:36:47.851 [INFO][7564] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="11b52f82144de40182a6e5987d2cc226fef23efb714c59b1f3b99bd7d4cd317d" iface="eth0" netns="/var/run/netns/cni-83314789-c3ac-b0fb-bdc6-d57f920ed6ea" May 17 01:36:47.879990 env[2382]: 2025-05-17 01:36:47.851 [INFO][7564] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="11b52f82144de40182a6e5987d2cc226fef23efb714c59b1f3b99bd7d4cd317d" iface="eth0" netns="/var/run/netns/cni-83314789-c3ac-b0fb-bdc6-d57f920ed6ea" May 17 01:36:47.879990 env[2382]: 2025-05-17 01:36:47.851 [INFO][7564] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="11b52f82144de40182a6e5987d2cc226fef23efb714c59b1f3b99bd7d4cd317d" May 17 01:36:47.879990 env[2382]: 2025-05-17 01:36:47.851 [INFO][7564] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="11b52f82144de40182a6e5987d2cc226fef23efb714c59b1f3b99bd7d4cd317d" May 17 01:36:47.879990 env[2382]: 2025-05-17 01:36:47.869 [INFO][7583] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="11b52f82144de40182a6e5987d2cc226fef23efb714c59b1f3b99bd7d4cd317d" HandleID="k8s-pod-network.11b52f82144de40182a6e5987d2cc226fef23efb714c59b1f3b99bd7d4cd317d" Workload="ci--3510.3.7--n--8226148b53-k8s-csi--node--driver--jxv7j-eth0" May 17 01:36:47.879990 env[2382]: 2025-05-17 01:36:47.869 [INFO][7583] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 01:36:47.879990 env[2382]: 2025-05-17 01:36:47.869 [INFO][7583] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 01:36:47.879990 env[2382]: 2025-05-17 01:36:47.876 [WARNING][7583] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="11b52f82144de40182a6e5987d2cc226fef23efb714c59b1f3b99bd7d4cd317d" HandleID="k8s-pod-network.11b52f82144de40182a6e5987d2cc226fef23efb714c59b1f3b99bd7d4cd317d" Workload="ci--3510.3.7--n--8226148b53-k8s-csi--node--driver--jxv7j-eth0" May 17 01:36:47.879990 env[2382]: 2025-05-17 01:36:47.876 [INFO][7583] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="11b52f82144de40182a6e5987d2cc226fef23efb714c59b1f3b99bd7d4cd317d" HandleID="k8s-pod-network.11b52f82144de40182a6e5987d2cc226fef23efb714c59b1f3b99bd7d4cd317d" Workload="ci--3510.3.7--n--8226148b53-k8s-csi--node--driver--jxv7j-eth0" May 17 01:36:47.879990 env[2382]: 2025-05-17 01:36:47.877 [INFO][7583] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 01:36:47.879990 env[2382]: 2025-05-17 01:36:47.878 [INFO][7564] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="11b52f82144de40182a6e5987d2cc226fef23efb714c59b1f3b99bd7d4cd317d" May 17 01:36:47.880370 env[2382]: time="2025-05-17T01:36:47.880159847Z" level=info msg="TearDown network for sandbox \"11b52f82144de40182a6e5987d2cc226fef23efb714c59b1f3b99bd7d4cd317d\" successfully" May 17 01:36:47.880370 env[2382]: time="2025-05-17T01:36:47.880191967Z" level=info msg="StopPodSandbox for \"11b52f82144de40182a6e5987d2cc226fef23efb714c59b1f3b99bd7d4cd317d\" returns successfully" May 17 01:36:47.880738 env[2382]: time="2025-05-17T01:36:47.880711727Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-jxv7j,Uid:865bbbf9-e470-43da-b4e3-f1b9c7f9d88b,Namespace:calico-system,Attempt:1,}" May 17 01:36:47.889480 systemd[1]: run-netns-cni\x2d83314789\x2dc3ac\x2db0fb\x2dbdc6\x2dd57f920ed6ea.mount: Deactivated successfully. 
May 17 01:36:47.909949 systemd-networkd[2041]: cali43b8052e3ad: Gained IPv6LL May 17 01:36:47.910503 kubelet[3578]: I0517 01:36:47.910482 3578 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 17 01:36:47.910728 kubelet[3578]: I0517 01:36:47.910482 3578 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 17 01:36:47.918337 kubelet[3578]: I0517 01:36:47.918300 3578 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-nf2f9" podStartSLOduration=35.918287109 podStartE2EDuration="35.918287109s" podCreationTimestamp="2025-05-17 01:36:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-17 01:36:47.917863669 +0000 UTC m=+42.172227990" watchObservedRunningTime="2025-05-17 01:36:47.918287109 +0000 UTC m=+42.172651430" May 17 01:36:47.922000 audit[7634]: NETFILTER_CFG table=filter:105 family=2 entries=16 op=nft_register_rule pid=7634 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 17 01:36:47.922000 audit[7634]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5992 a0=3 a1=ffffcb7b9200 a2=0 a3=1 items=0 ppid=3808 pid=7634 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 01:36:47.922000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 17 01:36:47.936000 audit[7634]: NETFILTER_CFG table=nat:106 family=2 entries=42 op=nft_register_rule pid=7634 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 17 01:36:47.936000 audit[7634]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=13428 a0=3 a1=ffffcb7b9200 a2=0 a3=1 items=0 ppid=3808 pid=7634 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 
tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 01:36:47.936000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 17 01:36:47.960956 systemd-networkd[2041]: calif27f47d5103: Link UP May 17 01:36:47.974380 systemd-networkd[2041]: calif27f47d5103: Gained carrier May 17 01:36:47.974865 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): calif27f47d5103: link becomes ready May 17 01:36:47.981941 env[2382]: 2025-05-17 01:36:47.901 [INFO][7603] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist May 17 01:36:47.981941 env[2382]: 2025-05-17 01:36:47.912 [INFO][7603] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3510.3.7--n--8226148b53-k8s-csi--node--driver--jxv7j-eth0 csi-node-driver- calico-system 865bbbf9-e470-43da-b4e3-f1b9c7f9d88b 988 0 2025-05-17 01:36:25 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:68bf44dd5 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-3510.3.7-n-8226148b53 csi-node-driver-jxv7j eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calif27f47d5103 [] [] }} ContainerID="73b6d6e1d01e85a9f19aa211f76cb5900e9ef28247604ecfb7f8a330e7c689bf" Namespace="calico-system" Pod="csi-node-driver-jxv7j" WorkloadEndpoint="ci--3510.3.7--n--8226148b53-k8s-csi--node--driver--jxv7j-" May 17 01:36:47.981941 env[2382]: 2025-05-17 01:36:47.912 [INFO][7603] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="73b6d6e1d01e85a9f19aa211f76cb5900e9ef28247604ecfb7f8a330e7c689bf" Namespace="calico-system" Pod="csi-node-driver-jxv7j" 
WorkloadEndpoint="ci--3510.3.7--n--8226148b53-k8s-csi--node--driver--jxv7j-eth0" May 17 01:36:47.981941 env[2382]: 2025-05-17 01:36:47.932 [INFO][7628] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="73b6d6e1d01e85a9f19aa211f76cb5900e9ef28247604ecfb7f8a330e7c689bf" HandleID="k8s-pod-network.73b6d6e1d01e85a9f19aa211f76cb5900e9ef28247604ecfb7f8a330e7c689bf" Workload="ci--3510.3.7--n--8226148b53-k8s-csi--node--driver--jxv7j-eth0" May 17 01:36:47.981941 env[2382]: 2025-05-17 01:36:47.932 [INFO][7628] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="73b6d6e1d01e85a9f19aa211f76cb5900e9ef28247604ecfb7f8a330e7c689bf" HandleID="k8s-pod-network.73b6d6e1d01e85a9f19aa211f76cb5900e9ef28247604ecfb7f8a330e7c689bf" Workload="ci--3510.3.7--n--8226148b53-k8s-csi--node--driver--jxv7j-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000369e40), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-3510.3.7-n-8226148b53", "pod":"csi-node-driver-jxv7j", "timestamp":"2025-05-17 01:36:47.932207967 +0000 UTC"}, Hostname:"ci-3510.3.7-n-8226148b53", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 17 01:36:47.981941 env[2382]: 2025-05-17 01:36:47.932 [INFO][7628] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 01:36:47.981941 env[2382]: 2025-05-17 01:36:47.932 [INFO][7628] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 17 01:36:47.981941 env[2382]: 2025-05-17 01:36:47.932 [INFO][7628] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3510.3.7-n-8226148b53' May 17 01:36:47.981941 env[2382]: 2025-05-17 01:36:47.941 [INFO][7628] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.73b6d6e1d01e85a9f19aa211f76cb5900e9ef28247604ecfb7f8a330e7c689bf" host="ci-3510.3.7-n-8226148b53" May 17 01:36:47.981941 env[2382]: 2025-05-17 01:36:47.944 [INFO][7628] ipam/ipam.go 394: Looking up existing affinities for host host="ci-3510.3.7-n-8226148b53" May 17 01:36:47.981941 env[2382]: 2025-05-17 01:36:47.947 [INFO][7628] ipam/ipam.go 511: Trying affinity for 192.168.63.0/26 host="ci-3510.3.7-n-8226148b53" May 17 01:36:47.981941 env[2382]: 2025-05-17 01:36:47.948 [INFO][7628] ipam/ipam.go 158: Attempting to load block cidr=192.168.63.0/26 host="ci-3510.3.7-n-8226148b53" May 17 01:36:47.981941 env[2382]: 2025-05-17 01:36:47.950 [INFO][7628] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.63.0/26 host="ci-3510.3.7-n-8226148b53" May 17 01:36:47.981941 env[2382]: 2025-05-17 01:36:47.950 [INFO][7628] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.63.0/26 handle="k8s-pod-network.73b6d6e1d01e85a9f19aa211f76cb5900e9ef28247604ecfb7f8a330e7c689bf" host="ci-3510.3.7-n-8226148b53" May 17 01:36:47.981941 env[2382]: 2025-05-17 01:36:47.951 [INFO][7628] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.73b6d6e1d01e85a9f19aa211f76cb5900e9ef28247604ecfb7f8a330e7c689bf May 17 01:36:47.981941 env[2382]: 2025-05-17 01:36:47.953 [INFO][7628] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.63.0/26 handle="k8s-pod-network.73b6d6e1d01e85a9f19aa211f76cb5900e9ef28247604ecfb7f8a330e7c689bf" host="ci-3510.3.7-n-8226148b53" May 17 01:36:47.981941 env[2382]: 2025-05-17 01:36:47.958 [INFO][7628] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.63.8/26] block=192.168.63.0/26 
handle="k8s-pod-network.73b6d6e1d01e85a9f19aa211f76cb5900e9ef28247604ecfb7f8a330e7c689bf" host="ci-3510.3.7-n-8226148b53" May 17 01:36:47.981941 env[2382]: 2025-05-17 01:36:47.958 [INFO][7628] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.63.8/26] handle="k8s-pod-network.73b6d6e1d01e85a9f19aa211f76cb5900e9ef28247604ecfb7f8a330e7c689bf" host="ci-3510.3.7-n-8226148b53" May 17 01:36:47.981941 env[2382]: 2025-05-17 01:36:47.958 [INFO][7628] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 01:36:47.981941 env[2382]: 2025-05-17 01:36:47.958 [INFO][7628] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.63.8/26] IPv6=[] ContainerID="73b6d6e1d01e85a9f19aa211f76cb5900e9ef28247604ecfb7f8a330e7c689bf" HandleID="k8s-pod-network.73b6d6e1d01e85a9f19aa211f76cb5900e9ef28247604ecfb7f8a330e7c689bf" Workload="ci--3510.3.7--n--8226148b53-k8s-csi--node--driver--jxv7j-eth0" May 17 01:36:47.982581 env[2382]: 2025-05-17 01:36:47.959 [INFO][7603] cni-plugin/k8s.go 418: Populated endpoint ContainerID="73b6d6e1d01e85a9f19aa211f76cb5900e9ef28247604ecfb7f8a330e7c689bf" Namespace="calico-system" Pod="csi-node-driver-jxv7j" WorkloadEndpoint="ci--3510.3.7--n--8226148b53-k8s-csi--node--driver--jxv7j-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.7--n--8226148b53-k8s-csi--node--driver--jxv7j-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"865bbbf9-e470-43da-b4e3-f1b9c7f9d88b", ResourceVersion:"988", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 1, 36, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"68bf44dd5", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.7-n-8226148b53", ContainerID:"", Pod:"csi-node-driver-jxv7j", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.63.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calif27f47d5103", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 01:36:47.982581 env[2382]: 2025-05-17 01:36:47.959 [INFO][7603] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.63.8/32] ContainerID="73b6d6e1d01e85a9f19aa211f76cb5900e9ef28247604ecfb7f8a330e7c689bf" Namespace="calico-system" Pod="csi-node-driver-jxv7j" WorkloadEndpoint="ci--3510.3.7--n--8226148b53-k8s-csi--node--driver--jxv7j-eth0" May 17 01:36:47.982581 env[2382]: 2025-05-17 01:36:47.959 [INFO][7603] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif27f47d5103 ContainerID="73b6d6e1d01e85a9f19aa211f76cb5900e9ef28247604ecfb7f8a330e7c689bf" Namespace="calico-system" Pod="csi-node-driver-jxv7j" WorkloadEndpoint="ci--3510.3.7--n--8226148b53-k8s-csi--node--driver--jxv7j-eth0" May 17 01:36:47.982581 env[2382]: 2025-05-17 01:36:47.974 [INFO][7603] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="73b6d6e1d01e85a9f19aa211f76cb5900e9ef28247604ecfb7f8a330e7c689bf" Namespace="calico-system" Pod="csi-node-driver-jxv7j" WorkloadEndpoint="ci--3510.3.7--n--8226148b53-k8s-csi--node--driver--jxv7j-eth0" May 17 01:36:47.982581 env[2382]: 2025-05-17 01:36:47.974 [INFO][7603] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="73b6d6e1d01e85a9f19aa211f76cb5900e9ef28247604ecfb7f8a330e7c689bf" Namespace="calico-system" Pod="csi-node-driver-jxv7j" WorkloadEndpoint="ci--3510.3.7--n--8226148b53-k8s-csi--node--driver--jxv7j-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.7--n--8226148b53-k8s-csi--node--driver--jxv7j-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"865bbbf9-e470-43da-b4e3-f1b9c7f9d88b", ResourceVersion:"988", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 1, 36, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"68bf44dd5", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.7-n-8226148b53", ContainerID:"73b6d6e1d01e85a9f19aa211f76cb5900e9ef28247604ecfb7f8a330e7c689bf", Pod:"csi-node-driver-jxv7j", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.63.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calif27f47d5103", MAC:"72:05:65:bb:e5:a0", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 01:36:47.982581 env[2382]: 2025-05-17 01:36:47.980 [INFO][7603] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="73b6d6e1d01e85a9f19aa211f76cb5900e9ef28247604ecfb7f8a330e7c689bf" Namespace="calico-system" Pod="csi-node-driver-jxv7j" WorkloadEndpoint="ci--3510.3.7--n--8226148b53-k8s-csi--node--driver--jxv7j-eth0" May 17 01:36:47.989331 env[2382]: time="2025-05-17T01:36:47.989287919Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 01:36:47.989331 env[2382]: time="2025-05-17T01:36:47.989323359Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 01:36:47.989377 env[2382]: time="2025-05-17T01:36:47.989333879Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 01:36:47.989502 env[2382]: time="2025-05-17T01:36:47.989476599Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/73b6d6e1d01e85a9f19aa211f76cb5900e9ef28247604ecfb7f8a330e7c689bf pid=7663 runtime=io.containerd.runc.v2 May 17 01:36:48.020291 env[2382]: time="2025-05-17T01:36:48.020256194Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-jxv7j,Uid:865bbbf9-e470-43da-b4e3-f1b9c7f9d88b,Namespace:calico-system,Attempt:1,} returns sandbox id \"73b6d6e1d01e85a9f19aa211f76cb5900e9ef28247604ecfb7f8a330e7c689bf\"" May 17 01:36:48.549956 systemd-networkd[2041]: cali0092f3aaa1e: Gained IPv6LL May 17 01:36:48.550227 systemd-networkd[2041]: cali1d845f7869a: Gained IPv6LL May 17 01:36:48.928000 audit[7751]: NETFILTER_CFG table=filter:107 family=2 entries=16 op=nft_register_rule pid=7751 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 17 01:36:48.928000 audit[7751]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5992 a0=3 a1=fffff14e0870 a2=0 a3=1 items=0 ppid=3808 pid=7751 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 01:36:48.928000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 17 01:36:48.944000 audit[7751]: NETFILTER_CFG table=nat:108 family=2 entries=54 op=nft_register_chain pid=7751 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 17 01:36:48.944000 audit[7751]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=19092 a0=3 a1=fffff14e0870 a2=0 a3=1 items=0 ppid=3808 pid=7751 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 01:36:48.944000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 17 01:36:49.125923 systemd-networkd[2041]: calif27f47d5103: Gained IPv6LL May 17 01:36:49.288880 env[2382]: time="2025-05-17T01:36:49.288804468Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/kube-controllers:v3.30.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 01:36:49.289532 env[2382]: time="2025-05-17T01:36:49.289509588Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:4188fe2931435deda58a0dc1767a2f6ad2bb27e47662ccec626bd07006f56373,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 01:36:49.290545 env[2382]: time="2025-05-17T01:36:49.290528946Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/kube-controllers:v3.30.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 01:36:49.291504 env[2382]: time="2025-05-17T01:36:49.291489425Z" level=info msg="ImageCreate event 
&ImageCreate{Name:ghcr.io/flatcar/calico/kube-controllers@sha256:eb5bc5c9e7a71f1d8ea69bbcc8e54b84fb7ec1e32d919c8b148f80b770f20182,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 01:36:49.292004 env[2382]: time="2025-05-17T01:36:49.291976624Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.0\" returns image reference \"sha256:4188fe2931435deda58a0dc1767a2f6ad2bb27e47662ccec626bd07006f56373\"" May 17 01:36:49.292839 env[2382]: time="2025-05-17T01:36:49.292815303Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.0\"" May 17 01:36:49.298061 env[2382]: time="2025-05-17T01:36:49.298037936Z" level=info msg="CreateContainer within sandbox \"3583b33bcc0002854bf43c727e1eb36679e83a9ee3cb97fbad33205fa7364eab\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" May 17 01:36:49.301524 env[2382]: time="2025-05-17T01:36:49.301498851Z" level=info msg="CreateContainer within sandbox \"3583b33bcc0002854bf43c727e1eb36679e83a9ee3cb97fbad33205fa7364eab\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"454fb8431ea3f863af26804335a27655bff809951d8299cef59ba8bf88a08f92\"" May 17 01:36:49.301821 env[2382]: time="2025-05-17T01:36:49.301798291Z" level=info msg="StartContainer for \"454fb8431ea3f863af26804335a27655bff809951d8299cef59ba8bf88a08f92\"" May 17 01:36:49.345208 env[2382]: time="2025-05-17T01:36:49.345176032Z" level=info msg="StartContainer for \"454fb8431ea3f863af26804335a27655bff809951d8299cef59ba8bf88a08f92\" returns successfully" May 17 01:36:49.923870 kubelet[3578]: I0517 01:36:49.923815 3578 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-54b699d664-blxw6" podStartSLOduration=22.928117331 podStartE2EDuration="24.923799729s" podCreationTimestamp="2025-05-17 01:36:25 +0000 UTC" firstStartedPulling="2025-05-17 01:36:47.297022385 +0000 UTC m=+41.551386706" 
lastFinishedPulling="2025-05-17 01:36:49.292704823 +0000 UTC m=+43.547069104" observedRunningTime="2025-05-17 01:36:49.923797649 +0000 UTC m=+44.178161970" watchObservedRunningTime="2025-05-17 01:36:49.923799729 +0000 UTC m=+44.178164050" May 17 01:36:50.282125 env[2382]: time="2025-05-17T01:36:50.282040788Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/csi:v3.30.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 01:36:50.282692 env[2382]: time="2025-05-17T01:36:50.282666507Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ebe7e098653491dec9f15f87d7f5d33f47b09d1d6f3ef83deeaaa6237024c045,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 01:36:50.283843 env[2382]: time="2025-05-17T01:36:50.283829346Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/csi:v3.30.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 01:36:50.284844 env[2382]: time="2025-05-17T01:36:50.284829385Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/csi@sha256:27883a4104876fe239311dd93ce6efd0c4a87de7163d57a4c8d96bd65a287ffd,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 01:36:50.285357 env[2382]: time="2025-05-17T01:36:50.285332384Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.0\" returns image reference \"sha256:ebe7e098653491dec9f15f87d7f5d33f47b09d1d6f3ef83deeaaa6237024c045\"" May 17 01:36:50.287226 env[2382]: time="2025-05-17T01:36:50.287202022Z" level=info msg="CreateContainer within sandbox \"73b6d6e1d01e85a9f19aa211f76cb5900e9ef28247604ecfb7f8a330e7c689bf\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" May 17 01:36:50.292505 env[2382]: time="2025-05-17T01:36:50.292477935Z" level=info msg="CreateContainer within sandbox \"73b6d6e1d01e85a9f19aa211f76cb5900e9ef28247604ecfb7f8a330e7c689bf\" for 
&ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"87ea44efa106a0edc8ab464ce899d4666b7fbd1d459d631a56c7a8431725766b\"" May 17 01:36:50.292835 env[2382]: time="2025-05-17T01:36:50.292812735Z" level=info msg="StartContainer for \"87ea44efa106a0edc8ab464ce899d4666b7fbd1d459d631a56c7a8431725766b\"" May 17 01:36:50.334389 env[2382]: time="2025-05-17T01:36:50.334349762Z" level=info msg="StartContainer for \"87ea44efa106a0edc8ab464ce899d4666b7fbd1d459d631a56c7a8431725766b\" returns successfully" May 17 01:36:50.335216 env[2382]: time="2025-05-17T01:36:50.335193681Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.0\"" May 17 01:36:50.918897 kubelet[3578]: I0517 01:36:50.918792 3578 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 17 01:36:51.442178 env[2382]: time="2025-05-17T01:36:51.442139271Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 01:36:51.443746 env[2382]: time="2025-05-17T01:36:51.443718230Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:a5d5f2a68204ed0dbc50f8778616ee92a63c0e342d178a4620e6271484e5c8b2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 01:36:51.444701 env[2382]: time="2025-05-17T01:36:51.444675588Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 01:36:51.445221 env[2382]: time="2025-05-17T01:36:51.445195668Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.0\" returns image reference \"sha256:a5d5f2a68204ed0dbc50f8778616ee92a63c0e342d178a4620e6271484e5c8b2\"" May 17 01:36:51.445808 env[2382]: time="2025-05-17T01:36:51.445781667Z" level=info msg="ImageCreate event 
&ImageCreate{Name:ghcr.io/flatcar/calico/node-driver-registrar@sha256:dca5c16181edde2e860463615523ce457cd9dcfca85b7cfdcd6f3ea7de6f2ac8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 01:36:51.447072 env[2382]: time="2025-05-17T01:36:51.447040826Z" level=info msg="CreateContainer within sandbox \"73b6d6e1d01e85a9f19aa211f76cb5900e9ef28247604ecfb7f8a330e7c689bf\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" May 17 01:36:51.451719 env[2382]: time="2025-05-17T01:36:51.451687460Z" level=info msg="CreateContainer within sandbox \"73b6d6e1d01e85a9f19aa211f76cb5900e9ef28247604ecfb7f8a330e7c689bf\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"2c980ea8511e618d74c05a9b1fd088521998b4226bb8600f3c4247171c042426\"" May 17 01:36:51.452091 env[2382]: time="2025-05-17T01:36:51.452062580Z" level=info msg="StartContainer for \"2c980ea8511e618d74c05a9b1fd088521998b4226bb8600f3c4247171c042426\"" May 17 01:36:51.492844 env[2382]: time="2025-05-17T01:36:51.492800931Z" level=info msg="StartContainer for \"2c980ea8511e618d74c05a9b1fd088521998b4226bb8600f3c4247171c042426\" returns successfully" May 17 01:36:51.880519 kubelet[3578]: I0517 01:36:51.880461 3578 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 May 17 01:36:51.880519 kubelet[3578]: I0517 01:36:51.880530 3578 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock May 17 01:36:51.930169 kubelet[3578]: I0517 01:36:51.930124 3578 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-jxv7j" podStartSLOduration=23.505301776 podStartE2EDuration="26.930111851s" podCreationTimestamp="2025-05-17 01:36:25 +0000 UTC" firstStartedPulling="2025-05-17 01:36:48.021088952 
+0000 UTC m=+42.275453273" lastFinishedPulling="2025-05-17 01:36:51.445899067 +0000 UTC m=+45.700263348" observedRunningTime="2025-05-17 01:36:51.929523652 +0000 UTC m=+46.183887973" watchObservedRunningTime="2025-05-17 01:36:51.930111851 +0000 UTC m=+46.184476172" May 17 01:36:53.816634 env[2382]: time="2025-05-17T01:36:53.816596679Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\"" May 17 01:36:53.977797 env[2382]: time="2025-05-17T01:36:53.977737391Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" host=ghcr.io May 17 01:36:53.978224 env[2382]: time="2025-05-17T01:36:53.978183270Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" May 17 01:36:53.978466 kubelet[3578]: E0517 01:36:53.978430 3578 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 17 01:36:53.978800 kubelet[3578]: E0517 01:36:53.978781 3578 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 17 01:36:53.978969 kubelet[3578]: E0517 01:36:53.978937 3578 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.0,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:98d2fa92e9aa4ed787c6df8c04195565,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-nnbsj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-7cfb685fbd-76hs6_calico-system(34828fa1-51fa-4503-8a6b-b736774334d2): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" logger="UnhandledError" May 17 01:36:53.980732 env[2382]: time="2025-05-17T01:36:53.980706228Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\"" May 17 01:36:54.121569 
env[2382]: time="2025-05-17T01:36:54.121511528Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" host=ghcr.io May 17 01:36:54.121816 env[2382]: time="2025-05-17T01:36:54.121793488Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" May 17 01:36:54.121986 kubelet[3578]: E0517 01:36:54.121948 3578 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 17 01:36:54.122029 kubelet[3578]: E0517 01:36:54.122000 3578 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 17 01:36:54.122198 kubelet[3578]: E0517 01:36:54.122163 3578 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nnbsj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-7cfb685fbd-76hs6_calico-system(34828fa1-51fa-4503-8a6b-b736774334d2): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": 
failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" logger="UnhandledError" May 17 01:36:54.123333 kubelet[3578]: E0517 01:36:54.123304 3578 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden\"]" pod="calico-system/whisker-7cfb685fbd-76hs6" podUID="34828fa1-51fa-4503-8a6b-b736774334d2" May 17 01:36:55.017630 kubelet[3578]: I0517 01:36:55.017593 3578 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 17 01:36:55.031000 audit[8206]: NETFILTER_CFG table=filter:109 family=2 entries=15 op=nft_register_rule pid=8206 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 17 01:36:55.043322 kernel: kauditd_printk_skb: 14 callbacks suppressed May 17 01:36:55.043376 kernel: audit: type=1325 audit(1747445815.031:298): table=filter:109 family=2 entries=15 op=nft_register_rule pid=8206 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 17 01:36:55.031000 audit[8206]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=ffffcb9bae60 a2=0 a3=1 items=0 ppid=3808 pid=8206 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 
17 01:36:55.126105 kernel: audit: type=1300 audit(1747445815.031:298): arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=ffffcb9bae60 a2=0 a3=1 items=0 ppid=3808 pid=8206 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 01:36:55.126152 kernel: audit: type=1327 audit(1747445815.031:298): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 17 01:36:55.031000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 17 01:36:55.154000 audit[8206]: NETFILTER_CFG table=nat:110 family=2 entries=25 op=nft_register_chain pid=8206 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 17 01:36:55.154000 audit[8206]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=8580 a0=3 a1=ffffcb9bae60 a2=0 a3=1 items=0 ppid=3808 pid=8206 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 01:36:55.238995 kernel: audit: type=1325 audit(1747445815.154:299): table=nat:110 family=2 entries=25 op=nft_register_chain pid=8206 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 17 01:36:55.239058 kernel: audit: type=1300 audit(1747445815.154:299): arch=c00000b7 syscall=211 success=yes exit=8580 a0=3 a1=ffffcb9bae60 a2=0 a3=1 items=0 ppid=3808 pid=8206 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 01:36:55.239088 kernel: audit: type=1327 audit(1747445815.154:299): 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 17 01:36:55.154000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 17 01:36:55.457000 audit[8240]: AVC avc: denied { bpf } for pid=8240 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 01:36:55.457000 audit[8240]: AVC avc: denied { bpf } for pid=8240 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 01:36:55.535130 kernel: audit: type=1400 audit(1747445815.457:300): avc: denied { bpf } for pid=8240 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 01:36:55.535172 kernel: audit: type=1400 audit(1747445815.457:300): avc: denied { bpf } for pid=8240 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 01:36:55.535187 kernel: audit: type=1400 audit(1747445815.457:300): avc: denied { perfmon } for pid=8240 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 01:36:55.457000 audit[8240]: AVC avc: denied { perfmon } for pid=8240 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 01:36:55.457000 audit[8240]: AVC avc: denied { perfmon } for pid=8240 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 01:36:55.612523 kernel: audit: type=1400 audit(1747445815.457:300): avc: denied { perfmon } for pid=8240 comm="bpftool" 
capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 01:36:55.457000 audit[8240]: AVC avc: denied { perfmon } for pid=8240 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 01:36:55.457000 audit[8240]: AVC avc: denied { perfmon } for pid=8240 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 01:36:55.457000 audit[8240]: AVC avc: denied { perfmon } for pid=8240 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 01:36:55.457000 audit[8240]: AVC avc: denied { bpf } for pid=8240 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 01:36:55.457000 audit[8240]: AVC avc: denied { bpf } for pid=8240 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 01:36:55.457000 audit: BPF prog-id=10 op=LOAD May 17 01:36:55.457000 audit[8240]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffeb06d0c8 a2=98 a3=ffffeb06d0b8 items=0 ppid=8209 pid=8240 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 01:36:55.457000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E May 17 01:36:55.496000 audit: BPF prog-id=10 op=UNLOAD May 17 01:36:55.496000 audit[8240]: AVC avc: denied { bpf } for pid=8240 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 01:36:55.496000 audit[8240]: AVC avc: 
denied { bpf } for pid=8240 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 01:36:55.496000 audit[8240]: AVC avc: denied { perfmon } for pid=8240 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 01:36:55.496000 audit[8240]: AVC avc: denied { perfmon } for pid=8240 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 01:36:55.496000 audit[8240]: AVC avc: denied { perfmon } for pid=8240 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 01:36:55.496000 audit[8240]: AVC avc: denied { perfmon } for pid=8240 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 01:36:55.496000 audit[8240]: AVC avc: denied { perfmon } for pid=8240 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 01:36:55.496000 audit[8240]: AVC avc: denied { bpf } for pid=8240 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 01:36:55.496000 audit[8240]: AVC avc: denied { bpf } for pid=8240 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 01:36:55.496000 audit: BPF prog-id=11 op=LOAD May 17 01:36:55.496000 audit[8240]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=ffffeb06cd58 a2=74 a3=95 items=0 ppid=8209 pid=8240 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" 
exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 01:36:55.496000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E May 17 01:36:55.534000 audit: BPF prog-id=11 op=UNLOAD May 17 01:36:55.534000 audit[8240]: AVC avc: denied { bpf } for pid=8240 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 01:36:55.534000 audit[8240]: AVC avc: denied { bpf } for pid=8240 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 01:36:55.534000 audit[8240]: AVC avc: denied { perfmon } for pid=8240 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 01:36:55.534000 audit[8240]: AVC avc: denied { perfmon } for pid=8240 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 01:36:55.534000 audit[8240]: AVC avc: denied { perfmon } for pid=8240 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 01:36:55.534000 audit[8240]: AVC avc: denied { perfmon } for pid=8240 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 01:36:55.534000 audit[8240]: AVC avc: denied { perfmon } for pid=8240 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 01:36:55.534000 audit[8240]: AVC avc: denied { bpf } for pid=8240 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 01:36:55.534000 audit[8240]: AVC avc: denied { bpf } 
for pid=8240 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 01:36:55.534000 audit: BPF prog-id=12 op=LOAD May 17 01:36:55.534000 audit[8240]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=ffffeb06cdb8 a2=94 a3=2 items=0 ppid=8209 pid=8240 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 01:36:55.534000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E May 17 01:36:55.573000 audit: BPF prog-id=12 op=UNLOAD May 17 01:36:55.668000 audit[8240]: AVC avc: denied { bpf } for pid=8240 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 01:36:55.668000 audit[8240]: AVC avc: denied { bpf } for pid=8240 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 01:36:55.668000 audit[8240]: AVC avc: denied { perfmon } for pid=8240 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 01:36:55.668000 audit[8240]: AVC avc: denied { perfmon } for pid=8240 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 01:36:55.668000 audit[8240]: AVC avc: denied { perfmon } for pid=8240 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 01:36:55.668000 audit[8240]: AVC avc: denied { perfmon } for pid=8240 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 01:36:55.668000 
audit[8240]: AVC avc: denied { perfmon } for pid=8240 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 01:36:55.668000 audit[8240]: AVC avc: denied { bpf } for pid=8240 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 01:36:55.668000 audit[8240]: AVC avc: denied { bpf } for pid=8240 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 01:36:55.668000 audit: BPF prog-id=13 op=LOAD May 17 01:36:55.668000 audit[8240]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=ffffeb06cd78 a2=40 a3=ffffeb06cda8 items=0 ppid=8209 pid=8240 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 01:36:55.668000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E May 17 01:36:55.668000 audit: BPF prog-id=13 op=UNLOAD May 17 01:36:55.668000 audit[8240]: AVC avc: denied { perfmon } for pid=8240 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 01:36:55.668000 audit[8240]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=0 a1=ffffeb06ce90 a2=50 a3=0 items=0 ppid=8209 pid=8240 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 01:36:55.668000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E May 17 01:36:55.676000 audit[8240]: AVC avc: denied { bpf } for pid=8240 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 
tclass=capability2 permissive=0 May 17 01:36:55.676000 audit[8240]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=12 a1=ffffeb06cde8 a2=28 a3=ffffeb06cf18 items=0 ppid=8209 pid=8240 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 01:36:55.676000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E May 17 01:36:55.676000 audit[8240]: AVC avc: denied { bpf } for pid=8240 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 01:36:55.676000 audit[8240]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=12 a1=ffffeb06ce18 a2=28 a3=ffffeb06cf48 items=0 ppid=8209 pid=8240 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 01:36:55.676000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E May 17 01:36:55.676000 audit[8240]: AVC avc: denied { bpf } for pid=8240 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 01:36:55.676000 audit[8240]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=12 a1=ffffeb06ccc8 a2=28 a3=ffffeb06cdf8 items=0 ppid=8209 pid=8240 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 01:36:55.676000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E May 17 01:36:55.676000 audit[8240]: AVC avc: denied { bpf } for pid=8240 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 
01:36:55.676000 audit[8240]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=12 a1=ffffeb06ce38 a2=28 a3=ffffeb06cf68 items=0 ppid=8209 pid=8240 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 01:36:55.676000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E May 17 01:36:55.676000 audit[8240]: AVC avc: denied { bpf } for pid=8240 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 01:36:55.676000 audit[8240]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=12 a1=ffffeb06ce18 a2=28 a3=ffffeb06cf48 items=0 ppid=8209 pid=8240 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 01:36:55.676000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E May 17 01:36:55.676000 audit[8240]: AVC avc: denied { bpf } for pid=8240 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 01:36:55.676000 audit[8240]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=12 a1=ffffeb06ce08 a2=28 a3=ffffeb06cf38 items=0 ppid=8209 pid=8240 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 01:36:55.676000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E May 17 01:36:55.676000 audit[8240]: AVC avc: denied { bpf } for pid=8240 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 01:36:55.676000 audit[8240]: SYSCALL 
arch=c00000b7 syscall=280 success=yes exit=4 a0=12 a1=ffffeb06ce38 a2=28 a3=ffffeb06cf68 items=0 ppid=8209 pid=8240 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 01:36:55.676000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E May 17 01:36:55.676000 audit[8240]: AVC avc: denied { bpf } for pid=8240 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 01:36:55.676000 audit[8240]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=12 a1=ffffeb06ce18 a2=28 a3=ffffeb06cf48 items=0 ppid=8209 pid=8240 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 01:36:55.676000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E May 17 01:36:55.676000 audit[8240]: AVC avc: denied { bpf } for pid=8240 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 01:36:55.676000 audit[8240]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=12 a1=ffffeb06ce38 a2=28 a3=ffffeb06cf68 items=0 ppid=8209 pid=8240 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 01:36:55.676000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E May 17 01:36:55.676000 audit[8240]: AVC avc: denied { bpf } for pid=8240 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 01:36:55.676000 audit[8240]: SYSCALL arch=c00000b7 syscall=280 success=no 
exit=-22 a0=12 a1=ffffeb06ce08 a2=28 a3=ffffeb06cf38 items=0 ppid=8209 pid=8240 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 01:36:55.676000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E May 17 01:36:55.676000 audit[8240]: AVC avc: denied { bpf } for pid=8240 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 01:36:55.676000 audit[8240]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=12 a1=ffffeb06ce88 a2=28 a3=ffffeb06cfc8 items=0 ppid=8209 pid=8240 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 01:36:55.676000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E May 17 01:36:55.676000 audit[8240]: AVC avc: denied { perfmon } for pid=8240 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 01:36:55.676000 audit[8240]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=0 a1=ffffeb06cbc0 a2=50 a3=0 items=0 ppid=8209 pid=8240 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 01:36:55.676000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E May 17 01:36:55.676000 audit[8240]: AVC avc: denied { bpf } for pid=8240 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 01:36:55.676000 audit[8240]: AVC avc: denied { bpf } for pid=8240 comm="bpftool" capability=39 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 01:36:55.676000 audit[8240]: AVC avc: denied { perfmon } for pid=8240 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 01:36:55.676000 audit[8240]: AVC avc: denied { perfmon } for pid=8240 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 01:36:55.676000 audit[8240]: AVC avc: denied { perfmon } for pid=8240 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 01:36:55.676000 audit[8240]: AVC avc: denied { perfmon } for pid=8240 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 01:36:55.676000 audit[8240]: AVC avc: denied { perfmon } for pid=8240 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 01:36:55.676000 audit[8240]: AVC avc: denied { bpf } for pid=8240 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 01:36:55.676000 audit[8240]: AVC avc: denied { bpf } for pid=8240 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 01:36:55.676000 audit: BPF prog-id=14 op=LOAD May 17 01:36:55.676000 audit[8240]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=ffffeb06cbc8 a2=94 a3=5 items=0 ppid=8209 pid=8240 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) 
May 17 01:36:55.676000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E May 17 01:36:55.677000 audit: BPF prog-id=14 op=UNLOAD May 17 01:36:55.677000 audit[8240]: AVC avc: denied { perfmon } for pid=8240 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 01:36:55.677000 audit[8240]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=0 a1=ffffeb06ccd0 a2=50 a3=0 items=0 ppid=8209 pid=8240 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 01:36:55.677000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E May 17 01:36:55.677000 audit[8240]: AVC avc: denied { bpf } for pid=8240 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 01:36:55.677000 audit[8240]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=0 a0=16 a1=ffffeb06ce18 a2=4 a3=3 items=0 ppid=8209 pid=8240 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 01:36:55.677000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E May 17 01:36:55.677000 audit[8240]: AVC avc: denied { bpf } for pid=8240 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 01:36:55.677000 audit[8240]: AVC avc: denied { bpf } for pid=8240 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 01:36:55.677000 audit[8240]: AVC avc: denied { perfmon } for pid=8240 comm="bpftool" capability=38 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 01:36:55.677000 audit[8240]: AVC avc: denied { bpf } for pid=8240 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 01:36:55.677000 audit[8240]: AVC avc: denied { perfmon } for pid=8240 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 01:36:55.677000 audit[8240]: AVC avc: denied { perfmon } for pid=8240 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 01:36:55.677000 audit[8240]: AVC avc: denied { perfmon } for pid=8240 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 01:36:55.677000 audit[8240]: AVC avc: denied { perfmon } for pid=8240 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 01:36:55.677000 audit[8240]: AVC avc: denied { perfmon } for pid=8240 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 01:36:55.677000 audit[8240]: AVC avc: denied { bpf } for pid=8240 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 01:36:55.677000 audit[8240]: AVC avc: denied { confidentiality } for pid=8240 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 May 17 01:36:55.677000 audit[8240]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=5 a1=ffffeb06cdf8 a2=94 a3=6 items=0 
ppid=8209 pid=8240 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 01:36:55.677000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E May 17 01:36:55.677000 audit[8240]: AVC avc: denied { bpf } for pid=8240 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 01:36:55.677000 audit[8240]: AVC avc: denied { bpf } for pid=8240 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 01:36:55.677000 audit[8240]: AVC avc: denied { perfmon } for pid=8240 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 01:36:55.677000 audit[8240]: AVC avc: denied { bpf } for pid=8240 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 01:36:55.677000 audit[8240]: AVC avc: denied { perfmon } for pid=8240 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 01:36:55.677000 audit[8240]: AVC avc: denied { perfmon } for pid=8240 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 01:36:55.677000 audit[8240]: AVC avc: denied { perfmon } for pid=8240 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 01:36:55.677000 audit[8240]: AVC avc: denied { perfmon } for pid=8240 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 
tclass=capability2 permissive=0 May 17 01:36:55.677000 audit[8240]: AVC avc: denied { perfmon } for pid=8240 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 01:36:55.677000 audit[8240]: AVC avc: denied { bpf } for pid=8240 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 01:36:55.677000 audit[8240]: AVC avc: denied { confidentiality } for pid=8240 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 May 17 01:36:55.677000 audit[8240]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=5 a1=ffffeb06c5c8 a2=94 a3=83 items=0 ppid=8209 pid=8240 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 01:36:55.677000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E May 17 01:36:55.677000 audit[8240]: AVC avc: denied { bpf } for pid=8240 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 01:36:55.677000 audit[8240]: AVC avc: denied { bpf } for pid=8240 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 01:36:55.677000 audit[8240]: AVC avc: denied { perfmon } for pid=8240 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 01:36:55.677000 audit[8240]: AVC avc: denied { bpf } for pid=8240 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 
17 01:36:55.677000 audit[8240]: AVC avc: denied { perfmon } for pid=8240 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 01:36:55.677000 audit[8240]: AVC avc: denied { perfmon } for pid=8240 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 01:36:55.677000 audit[8240]: AVC avc: denied { perfmon } for pid=8240 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 01:36:55.677000 audit[8240]: AVC avc: denied { perfmon } for pid=8240 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 01:36:55.677000 audit[8240]: AVC avc: denied { perfmon } for pid=8240 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 01:36:55.677000 audit[8240]: AVC avc: denied { bpf } for pid=8240 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 01:36:55.677000 audit[8240]: AVC avc: denied { confidentiality } for pid=8240 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 May 17 01:36:55.677000 audit[8240]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=5 a1=ffffeb06c5c8 a2=94 a3=83 items=0 ppid=8209 pid=8240 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 01:36:55.677000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E May 17 01:36:55.689000 
audit[8243]: AVC avc: denied { bpf } for pid=8243 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 01:36:55.689000 audit[8243]: AVC avc: denied { bpf } for pid=8243 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 01:36:55.689000 audit[8243]: AVC avc: denied { perfmon } for pid=8243 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 01:36:55.689000 audit[8243]: AVC avc: denied { perfmon } for pid=8243 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 01:36:55.689000 audit[8243]: AVC avc: denied { perfmon } for pid=8243 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 01:36:55.689000 audit[8243]: AVC avc: denied { perfmon } for pid=8243 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 01:36:55.689000 audit[8243]: AVC avc: denied { perfmon } for pid=8243 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 01:36:55.689000 audit[8243]: AVC avc: denied { bpf } for pid=8243 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 01:36:55.689000 audit[8243]: AVC avc: denied { bpf } for pid=8243 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 01:36:55.689000 audit: BPF prog-id=15 op=LOAD May 17 01:36:55.689000 audit[8243]: 
SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffe1aaf8c8 a2=98 a3=ffffe1aaf8b8 items=0 ppid=8209 pid=8243 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 01:36:55.689000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F May 17 01:36:55.689000 audit: BPF prog-id=15 op=UNLOAD May 17 01:36:55.689000 audit[8243]: AVC avc: denied { bpf } for pid=8243 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 01:36:55.689000 audit[8243]: AVC avc: denied { bpf } for pid=8243 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 01:36:55.689000 audit[8243]: AVC avc: denied { perfmon } for pid=8243 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 01:36:55.689000 audit[8243]: AVC avc: denied { perfmon } for pid=8243 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 01:36:55.689000 audit[8243]: AVC avc: denied { perfmon } for pid=8243 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 01:36:55.689000 audit[8243]: AVC avc: denied { perfmon } for pid=8243 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 01:36:55.689000 audit[8243]: AVC avc: 
denied { perfmon } for pid=8243 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 01:36:55.689000 audit[8243]: AVC avc: denied { bpf } for pid=8243 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 01:36:55.689000 audit[8243]: AVC avc: denied { bpf } for pid=8243 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 01:36:55.689000 audit: BPF prog-id=16 op=LOAD May 17 01:36:55.689000 audit[8243]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffe1aaf778 a2=74 a3=95 items=0 ppid=8209 pid=8243 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 01:36:55.689000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F May 17 01:36:55.690000 audit: BPF prog-id=16 op=UNLOAD May 17 01:36:55.690000 audit[8243]: AVC avc: denied { bpf } for pid=8243 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 01:36:55.690000 audit[8243]: AVC avc: denied { bpf } for pid=8243 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 01:36:55.690000 audit[8243]: AVC avc: denied { perfmon } for pid=8243 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 
01:36:55.690000 audit[8243]: AVC avc: denied { perfmon } for pid=8243 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 01:36:55.690000 audit[8243]: AVC avc: denied { perfmon } for pid=8243 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 01:36:55.690000 audit[8243]: AVC avc: denied { perfmon } for pid=8243 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 01:36:55.690000 audit[8243]: AVC avc: denied { perfmon } for pid=8243 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 01:36:55.690000 audit[8243]: AVC avc: denied { bpf } for pid=8243 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 01:36:55.690000 audit[8243]: AVC avc: denied { bpf } for pid=8243 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 01:36:55.690000 audit: BPF prog-id=17 op=LOAD May 17 01:36:55.690000 audit[8243]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffe1aaf7a8 a2=40 a3=ffffe1aaf7d8 items=0 ppid=8209 pid=8243 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 01:36:55.690000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F May 17 
01:36:55.690000 audit: BPF prog-id=17 op=UNLOAD May 17 01:36:55.743297 systemd-networkd[2041]: vxlan.calico: Link UP May 17 01:36:55.743302 systemd-networkd[2041]: vxlan.calico: Gained carrier May 17 01:36:55.771000 audit[8282]: AVC avc: denied { bpf } for pid=8282 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 01:36:55.771000 audit[8282]: AVC avc: denied { bpf } for pid=8282 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 01:36:55.771000 audit[8282]: AVC avc: denied { perfmon } for pid=8282 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 01:36:55.771000 audit[8282]: AVC avc: denied { perfmon } for pid=8282 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 01:36:55.771000 audit[8282]: AVC avc: denied { perfmon } for pid=8282 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 01:36:55.771000 audit[8282]: AVC avc: denied { perfmon } for pid=8282 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 01:36:55.771000 audit[8282]: AVC avc: denied { perfmon } for pid=8282 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 01:36:55.771000 audit[8282]: AVC avc: denied { bpf } for pid=8282 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 01:36:55.771000 audit[8282]: AVC avc: denied { bpf } for pid=8282 comm="bpftool" 
capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 01:36:55.771000 audit: BPF prog-id=18 op=LOAD May 17 01:36:55.771000 audit[8282]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffc0599a48 a2=98 a3=ffffc0599a38 items=0 ppid=8209 pid=8282 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 01:36:55.771000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 May 17 01:36:55.771000 audit: BPF prog-id=18 op=UNLOAD May 17 01:36:55.771000 audit[8282]: AVC avc: denied { bpf } for pid=8282 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 01:36:55.771000 audit[8282]: AVC avc: denied { bpf } for pid=8282 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 01:36:55.771000 audit[8282]: AVC avc: denied { perfmon } for pid=8282 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 01:36:55.771000 audit[8282]: AVC avc: denied { perfmon } for pid=8282 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 01:36:55.771000 audit[8282]: AVC avc: denied { perfmon } for pid=8282 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 01:36:55.771000 audit[8282]: AVC avc: denied { perfmon } for pid=8282 comm="bpftool" 
capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 01:36:55.771000 audit[8282]: AVC avc: denied { perfmon } for pid=8282 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 01:36:55.771000 audit[8282]: AVC avc: denied { bpf } for pid=8282 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 01:36:55.771000 audit[8282]: AVC avc: denied { bpf } for pid=8282 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 01:36:55.771000 audit: BPF prog-id=19 op=LOAD May 17 01:36:55.771000 audit[8282]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffc0599728 a2=74 a3=95 items=0 ppid=8209 pid=8282 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 01:36:55.771000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 May 17 01:36:55.772000 audit: BPF prog-id=19 op=UNLOAD May 17 01:36:55.772000 audit[8282]: AVC avc: denied { bpf } for pid=8282 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 01:36:55.772000 audit[8282]: AVC avc: denied { bpf } for pid=8282 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 01:36:55.772000 audit[8282]: AVC avc: denied { perfmon } for pid=8282 comm="bpftool" capability=38 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 01:36:55.772000 audit[8282]: AVC avc: denied { perfmon } for pid=8282 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 01:36:55.772000 audit[8282]: AVC avc: denied { perfmon } for pid=8282 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 01:36:55.772000 audit[8282]: AVC avc: denied { perfmon } for pid=8282 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 01:36:55.772000 audit[8282]: AVC avc: denied { perfmon } for pid=8282 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 01:36:55.772000 audit[8282]: AVC avc: denied { bpf } for pid=8282 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 01:36:55.772000 audit[8282]: AVC avc: denied { bpf } for pid=8282 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 01:36:55.772000 audit: BPF prog-id=20 op=LOAD May 17 01:36:55.772000 audit[8282]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffc0599788 a2=94 a3=2 items=0 ppid=8209 pid=8282 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 01:36:55.772000 audit: PROCTITLE 
proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 May 17 01:36:55.772000 audit: BPF prog-id=20 op=UNLOAD May 17 01:36:55.772000 audit[8282]: AVC avc: denied { bpf } for pid=8282 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 01:36:55.772000 audit[8282]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=12 a1=ffffc05997b8 a2=28 a3=ffffc05998e8 items=0 ppid=8209 pid=8282 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 01:36:55.772000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 May 17 01:36:55.772000 audit[8282]: AVC avc: denied { bpf } for pid=8282 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 01:36:55.772000 audit[8282]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=12 a1=ffffc05997e8 a2=28 a3=ffffc0599918 items=0 ppid=8209 pid=8282 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 01:36:55.772000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 May 17 01:36:55.772000 audit[8282]: AVC avc: denied { bpf } for pid=8282 comm="bpftool" 
capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 01:36:55.772000 audit[8282]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=12 a1=ffffc0599698 a2=28 a3=ffffc05997c8 items=0 ppid=8209 pid=8282 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 01:36:55.772000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 May 17 01:36:55.772000 audit[8282]: AVC avc: denied { bpf } for pid=8282 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 01:36:55.772000 audit[8282]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=12 a1=ffffc0599808 a2=28 a3=ffffc0599938 items=0 ppid=8209 pid=8282 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 01:36:55.772000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 May 17 01:36:55.772000 audit[8282]: AVC avc: denied { bpf } for pid=8282 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 01:36:55.772000 audit[8282]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=12 a1=ffffc05997e8 a2=28 a3=ffffc0599918 items=0 ppid=8209 pid=8282 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 01:36:55.772000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 May 17 01:36:55.772000 audit[8282]: AVC avc: denied { bpf } for pid=8282 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 01:36:55.772000 audit[8282]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=12 a1=ffffc05997d8 a2=28 a3=ffffc0599908 items=0 ppid=8209 pid=8282 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 01:36:55.772000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 May 17 01:36:55.772000 audit[8282]: AVC avc: denied { bpf } for pid=8282 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 01:36:55.772000 audit[8282]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=12 a1=ffffc0599808 a2=28 a3=ffffc0599938 items=0 ppid=8209 pid=8282 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 01:36:55.772000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 May 17 
01:36:55.772000 audit[8282]: AVC avc: denied { bpf } for pid=8282 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 01:36:55.772000 audit[8282]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=12 a1=ffffc05997e8 a2=28 a3=ffffc0599918 items=0 ppid=8209 pid=8282 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 01:36:55.772000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 May 17 01:36:55.772000 audit[8282]: AVC avc: denied { bpf } for pid=8282 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 01:36:55.772000 audit[8282]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=12 a1=ffffc0599808 a2=28 a3=ffffc0599938 items=0 ppid=8209 pid=8282 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 01:36:55.772000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 May 17 01:36:55.772000 audit[8282]: AVC avc: denied { bpf } for pid=8282 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 01:36:55.772000 audit[8282]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=12 a1=ffffc05997d8 a2=28 a3=ffffc0599908 items=0 ppid=8209 pid=8282 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 01:36:55.772000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 May 17 01:36:55.772000 audit[8282]: AVC avc: denied { bpf } for pid=8282 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 01:36:55.772000 audit[8282]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=12 a1=ffffc0599858 a2=28 a3=ffffc0599998 items=0 ppid=8209 pid=8282 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 01:36:55.772000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 May 17 01:36:55.772000 audit[8282]: AVC avc: denied { bpf } for pid=8282 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 01:36:55.772000 audit[8282]: AVC avc: denied { bpf } for pid=8282 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 01:36:55.772000 audit[8282]: AVC avc: denied { perfmon } for pid=8282 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 01:36:55.772000 audit[8282]: AVC avc: denied { perfmon } for pid=8282 comm="bpftool" capability=38 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 01:36:55.772000 audit[8282]: AVC avc: denied { perfmon } for pid=8282 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 01:36:55.772000 audit[8282]: AVC avc: denied { perfmon } for pid=8282 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 01:36:55.772000 audit[8282]: AVC avc: denied { perfmon } for pid=8282 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 01:36:55.772000 audit[8282]: AVC avc: denied { bpf } for pid=8282 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 01:36:55.772000 audit[8282]: AVC avc: denied { bpf } for pid=8282 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 01:36:55.772000 audit: BPF prog-id=21 op=LOAD May 17 01:36:55.772000 audit[8282]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=ffffc0599678 a2=40 a3=ffffc05996a8 items=0 ppid=8209 pid=8282 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 01:36:55.772000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 May 17 01:36:55.772000 audit: BPF prog-id=21 op=UNLOAD May 17 01:36:55.772000 audit[8282]: AVC avc: denied { bpf } for pid=8282 comm="bpftool" capability=39 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 01:36:55.772000 audit[8282]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=0 a1=ffffc05996a0 a2=50 a3=0 items=0 ppid=8209 pid=8282 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 01:36:55.772000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 May 17 01:36:55.772000 audit[8282]: AVC avc: denied { bpf } for pid=8282 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 01:36:55.772000 audit[8282]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=0 a1=ffffc05996a0 a2=50 a3=0 items=0 ppid=8209 pid=8282 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 01:36:55.772000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 May 17 01:36:55.772000 audit[8282]: AVC avc: denied { bpf } for pid=8282 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 01:36:55.772000 audit[8282]: AVC avc: denied { bpf } for pid=8282 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 01:36:55.772000 audit[8282]: AVC avc: denied { bpf } for pid=8282 
comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 01:36:55.772000 audit[8282]: AVC avc: denied { perfmon } for pid=8282 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 01:36:55.772000 audit[8282]: AVC avc: denied { perfmon } for pid=8282 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 01:36:55.772000 audit[8282]: AVC avc: denied { perfmon } for pid=8282 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 01:36:55.772000 audit[8282]: AVC avc: denied { perfmon } for pid=8282 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 01:36:55.772000 audit[8282]: AVC avc: denied { perfmon } for pid=8282 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 01:36:55.772000 audit[8282]: AVC avc: denied { bpf } for pid=8282 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 01:36:55.772000 audit[8282]: AVC avc: denied { bpf } for pid=8282 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 01:36:55.772000 audit: BPF prog-id=22 op=LOAD May 17 01:36:55.772000 audit[8282]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=ffffc0598e08 a2=94 a3=2 items=0 ppid=8209 pid=8282 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" 
subj=system_u:system_r:kernel_t:s0 key=(null) May 17 01:36:55.772000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 May 17 01:36:55.773000 audit: BPF prog-id=22 op=UNLOAD May 17 01:36:55.773000 audit[8282]: AVC avc: denied { bpf } for pid=8282 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 01:36:55.773000 audit[8282]: AVC avc: denied { bpf } for pid=8282 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 01:36:55.773000 audit[8282]: AVC avc: denied { bpf } for pid=8282 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 01:36:55.773000 audit[8282]: AVC avc: denied { perfmon } for pid=8282 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 01:36:55.773000 audit[8282]: AVC avc: denied { perfmon } for pid=8282 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 01:36:55.773000 audit[8282]: AVC avc: denied { perfmon } for pid=8282 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 01:36:55.773000 audit[8282]: AVC avc: denied { perfmon } for pid=8282 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 01:36:55.773000 audit[8282]: AVC avc: denied { perfmon } for pid=8282 comm="bpftool" capability=38 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 01:36:55.773000 audit[8282]: AVC avc: denied { bpf } for pid=8282 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 01:36:55.773000 audit[8282]: AVC avc: denied { bpf } for pid=8282 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 01:36:55.773000 audit: BPF prog-id=23 op=LOAD May 17 01:36:55.773000 audit[8282]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=ffffc0598f98 a2=94 a3=30 items=0 ppid=8209 pid=8282 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 01:36:55.773000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 May 17 01:36:55.775000 audit[8286]: AVC avc: denied { bpf } for pid=8286 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 01:36:55.775000 audit[8286]: AVC avc: denied { bpf } for pid=8286 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 01:36:55.775000 audit[8286]: AVC avc: denied { perfmon } for pid=8286 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 01:36:55.775000 audit[8286]: AVC avc: denied { perfmon } for pid=8286 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 01:36:55.775000 audit[8286]: AVC avc: denied { perfmon } for pid=8286 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 01:36:55.775000 audit[8286]: AVC avc: denied { perfmon } for pid=8286 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 01:36:55.775000 audit[8286]: AVC avc: denied { perfmon } for pid=8286 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 01:36:55.775000 audit[8286]: AVC avc: denied { bpf } for pid=8286 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 01:36:55.775000 audit[8286]: AVC avc: denied { bpf } for pid=8286 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 01:36:55.775000 audit: BPF prog-id=24 op=LOAD May 17 01:36:55.775000 audit[8286]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffc2c28318 a2=98 a3=ffffc2c28308 items=0 ppid=8209 pid=8286 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 01:36:55.775000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 17 01:36:55.775000 audit: BPF prog-id=24 op=UNLOAD May 17 01:36:55.775000 audit[8286]: AVC avc: denied { bpf } for pid=8286 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 01:36:55.775000 audit[8286]: AVC avc: denied { bpf } for pid=8286 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 01:36:55.775000 audit[8286]: AVC avc: denied { perfmon } for pid=8286 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 01:36:55.775000 audit[8286]: AVC avc: denied { perfmon } for pid=8286 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 01:36:55.775000 audit[8286]: AVC avc: denied { perfmon } for pid=8286 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 01:36:55.775000 audit[8286]: AVC avc: denied { perfmon } for pid=8286 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 01:36:55.775000 audit[8286]: AVC avc: denied { perfmon } for pid=8286 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 01:36:55.775000 audit[8286]: AVC avc: denied { bpf } for pid=8286 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 01:36:55.775000 audit[8286]: AVC avc: denied { bpf } for pid=8286 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 01:36:55.775000 audit: BPF prog-id=25 op=LOAD May 17 01:36:55.775000 audit[8286]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=ffffc2c27fa8 a2=74 a3=95 items=0 ppid=8209 pid=8286 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 01:36:55.775000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 17 01:36:55.775000 audit: BPF prog-id=25 op=UNLOAD May 17 01:36:55.775000 audit[8286]: AVC avc: denied { bpf } for pid=8286 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 01:36:55.775000 audit[8286]: AVC avc: denied { bpf } for pid=8286 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 01:36:55.775000 audit[8286]: AVC avc: denied { perfmon } for pid=8286 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 01:36:55.775000 audit[8286]: AVC avc: denied { perfmon } for pid=8286 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 01:36:55.775000 audit[8286]: AVC avc: denied { perfmon } for pid=8286 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 01:36:55.775000 audit[8286]: AVC avc: denied { perfmon } for pid=8286 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 01:36:55.775000 audit[8286]: AVC avc: denied { perfmon } for pid=8286 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 01:36:55.775000 
audit[8286]: AVC avc: denied { bpf } for pid=8286 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 01:36:55.775000 audit[8286]: AVC avc: denied { bpf } for pid=8286 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 01:36:55.775000 audit: BPF prog-id=26 op=LOAD May 17 01:36:55.775000 audit[8286]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=ffffc2c28008 a2=94 a3=2 items=0 ppid=8209 pid=8286 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 01:36:55.775000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 17 01:36:55.775000 audit: BPF prog-id=26 op=UNLOAD May 17 01:36:55.872000 audit[8286]: AVC avc: denied { bpf } for pid=8286 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 01:36:55.872000 audit[8286]: AVC avc: denied { bpf } for pid=8286 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 01:36:55.872000 audit[8286]: AVC avc: denied { perfmon } for pid=8286 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 01:36:55.872000 audit[8286]: AVC avc: denied { perfmon } for pid=8286 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 01:36:55.872000 audit[8286]: AVC avc: denied { perfmon } for 
pid=8286 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 01:36:55.872000 audit[8286]: AVC avc: denied { perfmon } for pid=8286 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 01:36:55.872000 audit[8286]: AVC avc: denied { perfmon } for pid=8286 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 01:36:55.872000 audit[8286]: AVC avc: denied { bpf } for pid=8286 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 01:36:55.872000 audit[8286]: AVC avc: denied { bpf } for pid=8286 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 01:36:55.872000 audit: BPF prog-id=27 op=LOAD May 17 01:36:55.872000 audit[8286]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=ffffc2c27fc8 a2=40 a3=ffffc2c27ff8 items=0 ppid=8209 pid=8286 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 01:36:55.872000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 17 01:36:55.873000 audit: BPF prog-id=27 op=UNLOAD May 17 01:36:55.873000 audit[8286]: AVC avc: denied { perfmon } for pid=8286 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 01:36:55.873000 audit[8286]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=0 
a1=ffffc2c280e0 a2=50 a3=0 items=0 ppid=8209 pid=8286 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 01:36:55.873000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 17 01:36:55.881000 audit[8286]: AVC avc: denied { bpf } for pid=8286 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 01:36:55.881000 audit[8286]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=12 a1=ffffc2c28038 a2=28 a3=ffffc2c28168 items=0 ppid=8209 pid=8286 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 01:36:55.881000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 17 01:36:55.881000 audit[8286]: AVC avc: denied { bpf } for pid=8286 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 01:36:55.881000 audit[8286]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=12 a1=ffffc2c28068 a2=28 a3=ffffc2c28198 items=0 ppid=8209 pid=8286 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 01:36:55.881000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 17 01:36:55.881000 audit[8286]: AVC avc: denied { bpf } for pid=8286 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 01:36:55.881000 audit[8286]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=12 a1=ffffc2c27f18 a2=28 a3=ffffc2c28048 items=0 ppid=8209 pid=8286 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 01:36:55.881000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 17 01:36:55.881000 audit[8286]: AVC avc: denied { bpf } for pid=8286 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 01:36:55.881000 audit[8286]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=12 a1=ffffc2c28088 a2=28 a3=ffffc2c281b8 items=0 ppid=8209 pid=8286 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 01:36:55.881000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 17 01:36:55.881000 audit[8286]: AVC avc: denied { bpf } for pid=8286 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 01:36:55.881000 
audit[8286]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=12 a1=ffffc2c28068 a2=28 a3=ffffc2c28198 items=0 ppid=8209 pid=8286 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 01:36:55.881000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 17 01:36:55.881000 audit[8286]: AVC avc: denied { bpf } for pid=8286 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 01:36:55.881000 audit[8286]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=12 a1=ffffc2c28058 a2=28 a3=ffffc2c28188 items=0 ppid=8209 pid=8286 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 01:36:55.881000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 17 01:36:55.881000 audit[8286]: AVC avc: denied { bpf } for pid=8286 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 01:36:55.881000 audit[8286]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=12 a1=ffffc2c28088 a2=28 a3=ffffc2c281b8 items=0 ppid=8209 pid=8286 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 01:36:55.881000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 17 01:36:55.881000 audit[8286]: AVC avc: denied { bpf } for pid=8286 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 01:36:55.881000 audit[8286]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=12 a1=ffffc2c28068 a2=28 a3=ffffc2c28198 items=0 ppid=8209 pid=8286 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 01:36:55.881000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 17 01:36:55.881000 audit[8286]: AVC avc: denied { bpf } for pid=8286 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 01:36:55.881000 audit[8286]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=12 a1=ffffc2c28088 a2=28 a3=ffffc2c281b8 items=0 ppid=8209 pid=8286 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 01:36:55.881000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 17 01:36:55.881000 audit[8286]: AVC avc: denied { bpf } for pid=8286 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 01:36:55.881000 
audit[8286]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=12 a1=ffffc2c28058 a2=28 a3=ffffc2c28188 items=0 ppid=8209 pid=8286 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 01:36:55.881000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 17 01:36:55.881000 audit[8286]: AVC avc: denied { bpf } for pid=8286 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 01:36:55.881000 audit[8286]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=12 a1=ffffc2c280d8 a2=28 a3=ffffc2c28218 items=0 ppid=8209 pid=8286 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 01:36:55.881000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 17 01:36:55.881000 audit[8286]: AVC avc: denied { perfmon } for pid=8286 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 01:36:55.881000 audit[8286]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=0 a1=ffffc2c27e10 a2=50 a3=0 items=0 ppid=8209 pid=8286 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 01:36:55.881000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 17 01:36:55.881000 audit[8286]: AVC avc: denied { bpf } for pid=8286 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 01:36:55.881000 audit[8286]: AVC avc: denied { bpf } for pid=8286 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 01:36:55.881000 audit[8286]: AVC avc: denied { perfmon } for pid=8286 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 01:36:55.881000 audit[8286]: AVC avc: denied { perfmon } for pid=8286 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 01:36:55.881000 audit[8286]: AVC avc: denied { perfmon } for pid=8286 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 01:36:55.881000 audit[8286]: AVC avc: denied { perfmon } for pid=8286 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 01:36:55.881000 audit[8286]: AVC avc: denied { perfmon } for pid=8286 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 01:36:55.881000 audit[8286]: AVC avc: denied { bpf } for pid=8286 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 01:36:55.881000 audit[8286]: AVC avc: denied { bpf } for pid=8286 comm="bpftool" 
capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 01:36:55.881000 audit: BPF prog-id=28 op=LOAD May 17 01:36:55.881000 audit[8286]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=ffffc2c27e18 a2=94 a3=5 items=0 ppid=8209 pid=8286 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 01:36:55.881000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 17 01:36:55.881000 audit: BPF prog-id=28 op=UNLOAD May 17 01:36:55.881000 audit[8286]: AVC avc: denied { perfmon } for pid=8286 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 01:36:55.881000 audit[8286]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=0 a1=ffffc2c27f20 a2=50 a3=0 items=0 ppid=8209 pid=8286 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 01:36:55.881000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 17 01:36:55.881000 audit[8286]: AVC avc: denied { bpf } for pid=8286 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 01:36:55.881000 audit[8286]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=0 a0=16 a1=ffffc2c28068 a2=4 a3=3 items=0 ppid=8209 pid=8286 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 
tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 01:36:55.881000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 17 01:36:55.881000 audit[8286]: AVC avc: denied { bpf } for pid=8286 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 01:36:55.881000 audit[8286]: AVC avc: denied { bpf } for pid=8286 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 01:36:55.881000 audit[8286]: AVC avc: denied { perfmon } for pid=8286 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 01:36:55.881000 audit[8286]: AVC avc: denied { bpf } for pid=8286 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 01:36:55.881000 audit[8286]: AVC avc: denied { perfmon } for pid=8286 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 01:36:55.881000 audit[8286]: AVC avc: denied { perfmon } for pid=8286 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 01:36:55.881000 audit[8286]: AVC avc: denied { perfmon } for pid=8286 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 01:36:55.881000 audit[8286]: AVC avc: denied { perfmon } for pid=8286 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 01:36:55.881000 audit[8286]: AVC avc: denied { perfmon } for pid=8286 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 01:36:55.881000 audit[8286]: AVC avc: denied { bpf } for pid=8286 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 01:36:55.881000 audit[8286]: AVC avc: denied { confidentiality } for pid=8286 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 May 17 01:36:55.881000 audit[8286]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=5 a1=ffffc2c28048 a2=94 a3=6 items=0 ppid=8209 pid=8286 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 01:36:55.881000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 17 01:36:55.881000 audit[8286]: AVC avc: denied { bpf } for pid=8286 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 01:36:55.881000 audit[8286]: AVC avc: denied { bpf } for pid=8286 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 01:36:55.881000 audit[8286]: AVC avc: denied { perfmon } for pid=8286 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 01:36:55.881000 audit[8286]: AVC avc: 
denied { bpf } for pid=8286 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 01:36:55.881000 audit[8286]: AVC avc: denied { perfmon } for pid=8286 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 01:36:55.881000 audit[8286]: AVC avc: denied { perfmon } for pid=8286 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 01:36:55.881000 audit[8286]: AVC avc: denied { perfmon } for pid=8286 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 01:36:55.881000 audit[8286]: AVC avc: denied { perfmon } for pid=8286 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 01:36:55.881000 audit[8286]: AVC avc: denied { perfmon } for pid=8286 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 01:36:55.881000 audit[8286]: AVC avc: denied { bpf } for pid=8286 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 01:36:55.881000 audit[8286]: AVC avc: denied { confidentiality } for pid=8286 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 May 17 01:36:55.881000 audit[8286]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=5 a1=ffffc2c27818 a2=94 a3=83 items=0 ppid=8209 pid=8286 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" 
subj=system_u:system_r:kernel_t:s0 key=(null) May 17 01:36:55.881000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 17 01:36:55.882000 audit[8286]: AVC avc: denied { bpf } for pid=8286 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 01:36:55.882000 audit[8286]: AVC avc: denied { bpf } for pid=8286 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 01:36:55.882000 audit[8286]: AVC avc: denied { perfmon } for pid=8286 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 01:36:55.882000 audit[8286]: AVC avc: denied { bpf } for pid=8286 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 01:36:55.882000 audit[8286]: AVC avc: denied { perfmon } for pid=8286 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 01:36:55.882000 audit[8286]: AVC avc: denied { perfmon } for pid=8286 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 01:36:55.882000 audit[8286]: AVC avc: denied { perfmon } for pid=8286 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 01:36:55.882000 audit[8286]: AVC avc: denied { perfmon } for pid=8286 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 
01:36:55.882000 audit[8286]: AVC avc: denied { perfmon } for pid=8286 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 01:36:55.882000 audit[8286]: AVC avc: denied { bpf } for pid=8286 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 01:36:55.882000 audit[8286]: AVC avc: denied { confidentiality } for pid=8286 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 May 17 01:36:55.882000 audit[8286]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=5 a1=ffffc2c27818 a2=94 a3=83 items=0 ppid=8209 pid=8286 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 01:36:55.882000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 17 01:36:55.882000 audit[8286]: AVC avc: denied { bpf } for pid=8286 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 01:36:55.882000 audit[8286]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=0 a0=f a1=ffffc2c29258 a2=10 a3=ffffc2c29348 items=0 ppid=8209 pid=8286 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 01:36:55.882000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 17 01:36:55.882000 audit[8286]: AVC avc: denied { bpf } for pid=8286 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 01:36:55.882000 audit[8286]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=0 a0=f a1=ffffc2c29118 a2=10 a3=ffffc2c29208 items=0 ppid=8209 pid=8286 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 01:36:55.882000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 17 01:36:55.882000 audit[8286]: AVC avc: denied { bpf } for pid=8286 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 01:36:55.882000 audit[8286]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=0 a0=f a1=ffffc2c29088 a2=10 a3=ffffc2c29208 items=0 ppid=8209 pid=8286 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 01:36:55.882000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 17 01:36:55.882000 audit[8286]: AVC avc: denied { bpf } for pid=8286 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 May 17 01:36:55.882000 
audit[8286]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=0 a0=f a1=ffffc2c29088 a2=10 a3=ffffc2c29208 items=0 ppid=8209 pid=8286 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 01:36:55.882000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 May 17 01:36:55.896000 audit: BPF prog-id=23 op=UNLOAD May 17 01:36:55.948000 audit[8529]: NETFILTER_CFG table=mangle:111 family=2 entries=16 op=nft_register_chain pid=8529 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" May 17 01:36:55.948000 audit[8529]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=6868 a0=3 a1=ffffee21e080 a2=0 a3=ffff98725fa8 items=0 ppid=8209 pid=8529 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 01:36:55.948000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 May 17 01:36:55.953000 audit[8535]: NETFILTER_CFG table=nat:112 family=2 entries=15 op=nft_register_chain pid=8535 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" May 17 01:36:55.953000 audit[8535]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5084 a0=3 a1=ffffdad47b50 a2=0 a3=ffffbf395fa8 items=0 ppid=8209 pid=8535 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 01:36:55.953000 audit: PROCTITLE 
proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 May 17 01:36:55.958000 audit[8528]: NETFILTER_CFG table=raw:113 family=2 entries=21 op=nft_register_chain pid=8528 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" May 17 01:36:55.958000 audit[8528]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=8452 a0=3 a1=fffffdda25e0 a2=0 a3=ffffbd638fa8 items=0 ppid=8209 pid=8528 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 01:36:55.958000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 May 17 01:36:55.963000 audit[8538]: NETFILTER_CFG table=filter:114 family=2 entries=315 op=nft_register_chain pid=8538 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" May 17 01:36:55.963000 audit[8538]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=187764 a0=3 a1=ffffcfddc070 a2=0 a3=ffffa529ffa8 items=0 ppid=8209 pid=8538 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 01:36:55.963000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 May 17 01:36:57.381995 systemd-networkd[2041]: vxlan.calico: Gained IPv6LL May 17 01:37:00.816935 env[2382]: time="2025-05-17T01:37:00.816895149Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\"" May 17 01:37:00.969822 env[2382]: time="2025-05-17T01:37:00.969762088Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous 
token: unexpected status: 403 Forbidden" host=ghcr.io May 17 01:37:00.970086 env[2382]: time="2025-05-17T01:37:00.970060767Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" May 17 01:37:00.970243 kubelet[3578]: E0517 01:37:00.970187 3578 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 17 01:37:00.970465 kubelet[3578]: E0517 01:37:00.970257 3578 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 17 01:37:00.970465 kubelet[3578]: E0517 01:37:00.970414 3578 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pnmns,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-8f77d7b6c-czxt2_calico-system(c4371095-cd92-4ea8-9564-c86ac0de3064): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" logger="UnhandledError" May 17 01:37:00.971583 kubelet[3578]: E0517 01:37:00.971559 3578 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden\"" pod="calico-system/goldmane-8f77d7b6c-czxt2" podUID="c4371095-cd92-4ea8-9564-c86ac0de3064" May 17 01:37:00.993910 kubelet[3578]: I0517 01:37:00.993885 3578 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 17 01:37:01.017000 audit[8575]: NETFILTER_CFG table=filter:115 family=2 entries=13 op=nft_register_rule pid=8575 
subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 17 01:37:01.029346 kernel: kauditd_printk_skb: 476 callbacks suppressed May 17 01:37:01.029380 kernel: audit: type=1325 audit(1747445821.017:395): table=filter:115 family=2 entries=13 op=nft_register_rule pid=8575 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 17 01:37:01.017000 audit[8575]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=4504 a0=3 a1=ffffc3975fd0 a2=0 a3=1 items=0 ppid=3808 pid=8575 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 01:37:01.112594 kernel: audit: type=1300 audit(1747445821.017:395): arch=c00000b7 syscall=211 success=yes exit=4504 a0=3 a1=ffffc3975fd0 a2=0 a3=1 items=0 ppid=3808 pid=8575 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 01:37:01.112664 kernel: audit: type=1327 audit(1747445821.017:395): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 17 01:37:01.017000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 17 01:37:01.112000 audit[8575]: NETFILTER_CFG table=nat:116 family=2 entries=27 op=nft_register_chain pid=8575 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 17 01:37:01.167409 kernel: audit: type=1325 audit(1747445821.112:396): table=nat:116 family=2 entries=27 op=nft_register_chain pid=8575 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 17 01:37:01.167434 kernel: audit: type=1300 audit(1747445821.112:396): arch=c00000b7 syscall=211 success=yes exit=9348 a0=3 a1=ffffc3975fd0 a2=0 a3=1 items=0 ppid=3808 pid=8575 auid=4294967295 uid=0 gid=0 euid=0 
suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 01:37:01.112000 audit[8575]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=9348 a0=3 a1=ffffc3975fd0 a2=0 a3=1 items=0 ppid=3808 pid=8575 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 01:37:01.112000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 17 01:37:01.250005 kernel: audit: type=1327 audit(1747445821.112:396): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 17 01:37:01.390826 kubelet[3578]: I0517 01:37:01.390789 3578 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 17 01:37:04.817088 kubelet[3578]: E0517 01:37:04.817041 3578 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\"\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\"\"]" pod="calico-system/whisker-7cfb685fbd-76hs6" podUID="34828fa1-51fa-4503-8a6b-b736774334d2" May 17 01:37:05.808845 env[2382]: time="2025-05-17T01:37:05.808811607Z" level=info msg="StopPodSandbox for \"8561c7510ebd1db8d63a65ba5d57c9c27a8b222a60f0ae510ae9abfcd441710c\"" May 17 01:37:05.867629 env[2382]: 2025-05-17 01:37:05.838 [WARNING][8627] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="8561c7510ebd1db8d63a65ba5d57c9c27a8b222a60f0ae510ae9abfcd441710c" 
WorkloadEndpoint="ci--3510.3.7--n--8226148b53-k8s-whisker--88b655df4--xc9hf-eth0" May 17 01:37:05.867629 env[2382]: 2025-05-17 01:37:05.838 [INFO][8627] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="8561c7510ebd1db8d63a65ba5d57c9c27a8b222a60f0ae510ae9abfcd441710c" May 17 01:37:05.867629 env[2382]: 2025-05-17 01:37:05.838 [INFO][8627] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="8561c7510ebd1db8d63a65ba5d57c9c27a8b222a60f0ae510ae9abfcd441710c" iface="eth0" netns="" May 17 01:37:05.867629 env[2382]: 2025-05-17 01:37:05.838 [INFO][8627] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="8561c7510ebd1db8d63a65ba5d57c9c27a8b222a60f0ae510ae9abfcd441710c" May 17 01:37:05.867629 env[2382]: 2025-05-17 01:37:05.838 [INFO][8627] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8561c7510ebd1db8d63a65ba5d57c9c27a8b222a60f0ae510ae9abfcd441710c" May 17 01:37:05.867629 env[2382]: 2025-05-17 01:37:05.855 [INFO][8648] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="8561c7510ebd1db8d63a65ba5d57c9c27a8b222a60f0ae510ae9abfcd441710c" HandleID="k8s-pod-network.8561c7510ebd1db8d63a65ba5d57c9c27a8b222a60f0ae510ae9abfcd441710c" Workload="ci--3510.3.7--n--8226148b53-k8s-whisker--88b655df4--xc9hf-eth0" May 17 01:37:05.867629 env[2382]: 2025-05-17 01:37:05.855 [INFO][8648] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 01:37:05.867629 env[2382]: 2025-05-17 01:37:05.855 [INFO][8648] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 01:37:05.867629 env[2382]: 2025-05-17 01:37:05.864 [WARNING][8648] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8561c7510ebd1db8d63a65ba5d57c9c27a8b222a60f0ae510ae9abfcd441710c" HandleID="k8s-pod-network.8561c7510ebd1db8d63a65ba5d57c9c27a8b222a60f0ae510ae9abfcd441710c" Workload="ci--3510.3.7--n--8226148b53-k8s-whisker--88b655df4--xc9hf-eth0" May 17 01:37:05.867629 env[2382]: 2025-05-17 01:37:05.864 [INFO][8648] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="8561c7510ebd1db8d63a65ba5d57c9c27a8b222a60f0ae510ae9abfcd441710c" HandleID="k8s-pod-network.8561c7510ebd1db8d63a65ba5d57c9c27a8b222a60f0ae510ae9abfcd441710c" Workload="ci--3510.3.7--n--8226148b53-k8s-whisker--88b655df4--xc9hf-eth0" May 17 01:37:05.867629 env[2382]: 2025-05-17 01:37:05.865 [INFO][8648] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 01:37:05.867629 env[2382]: 2025-05-17 01:37:05.866 [INFO][8627] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="8561c7510ebd1db8d63a65ba5d57c9c27a8b222a60f0ae510ae9abfcd441710c" May 17 01:37:05.868000 env[2382]: time="2025-05-17T01:37:05.867649818Z" level=info msg="TearDown network for sandbox \"8561c7510ebd1db8d63a65ba5d57c9c27a8b222a60f0ae510ae9abfcd441710c\" successfully" May 17 01:37:05.868000 env[2382]: time="2025-05-17T01:37:05.867675498Z" level=info msg="StopPodSandbox for \"8561c7510ebd1db8d63a65ba5d57c9c27a8b222a60f0ae510ae9abfcd441710c\" returns successfully" May 17 01:37:05.868049 env[2382]: time="2025-05-17T01:37:05.868013098Z" level=info msg="RemovePodSandbox for \"8561c7510ebd1db8d63a65ba5d57c9c27a8b222a60f0ae510ae9abfcd441710c\"" May 17 01:37:05.868074 env[2382]: time="2025-05-17T01:37:05.868046778Z" level=info msg="Forcibly stopping sandbox \"8561c7510ebd1db8d63a65ba5d57c9c27a8b222a60f0ae510ae9abfcd441710c\"" May 17 01:37:05.928109 env[2382]: 2025-05-17 01:37:05.899 [WARNING][8675] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="8561c7510ebd1db8d63a65ba5d57c9c27a8b222a60f0ae510ae9abfcd441710c" 
WorkloadEndpoint="ci--3510.3.7--n--8226148b53-k8s-whisker--88b655df4--xc9hf-eth0" May 17 01:37:05.928109 env[2382]: 2025-05-17 01:37:05.899 [INFO][8675] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="8561c7510ebd1db8d63a65ba5d57c9c27a8b222a60f0ae510ae9abfcd441710c" May 17 01:37:05.928109 env[2382]: 2025-05-17 01:37:05.899 [INFO][8675] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="8561c7510ebd1db8d63a65ba5d57c9c27a8b222a60f0ae510ae9abfcd441710c" iface="eth0" netns="" May 17 01:37:05.928109 env[2382]: 2025-05-17 01:37:05.899 [INFO][8675] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="8561c7510ebd1db8d63a65ba5d57c9c27a8b222a60f0ae510ae9abfcd441710c" May 17 01:37:05.928109 env[2382]: 2025-05-17 01:37:05.899 [INFO][8675] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8561c7510ebd1db8d63a65ba5d57c9c27a8b222a60f0ae510ae9abfcd441710c" May 17 01:37:05.928109 env[2382]: 2025-05-17 01:37:05.916 [INFO][8693] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="8561c7510ebd1db8d63a65ba5d57c9c27a8b222a60f0ae510ae9abfcd441710c" HandleID="k8s-pod-network.8561c7510ebd1db8d63a65ba5d57c9c27a8b222a60f0ae510ae9abfcd441710c" Workload="ci--3510.3.7--n--8226148b53-k8s-whisker--88b655df4--xc9hf-eth0" May 17 01:37:05.928109 env[2382]: 2025-05-17 01:37:05.916 [INFO][8693] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 01:37:05.928109 env[2382]: 2025-05-17 01:37:05.917 [INFO][8693] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 01:37:05.928109 env[2382]: 2025-05-17 01:37:05.924 [WARNING][8693] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8561c7510ebd1db8d63a65ba5d57c9c27a8b222a60f0ae510ae9abfcd441710c" HandleID="k8s-pod-network.8561c7510ebd1db8d63a65ba5d57c9c27a8b222a60f0ae510ae9abfcd441710c" Workload="ci--3510.3.7--n--8226148b53-k8s-whisker--88b655df4--xc9hf-eth0" May 17 01:37:05.928109 env[2382]: 2025-05-17 01:37:05.924 [INFO][8693] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="8561c7510ebd1db8d63a65ba5d57c9c27a8b222a60f0ae510ae9abfcd441710c" HandleID="k8s-pod-network.8561c7510ebd1db8d63a65ba5d57c9c27a8b222a60f0ae510ae9abfcd441710c" Workload="ci--3510.3.7--n--8226148b53-k8s-whisker--88b655df4--xc9hf-eth0" May 17 01:37:05.928109 env[2382]: 2025-05-17 01:37:05.925 [INFO][8693] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 01:37:05.928109 env[2382]: 2025-05-17 01:37:05.926 [INFO][8675] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="8561c7510ebd1db8d63a65ba5d57c9c27a8b222a60f0ae510ae9abfcd441710c" May 17 01:37:05.928404 env[2382]: time="2025-05-17T01:37:05.928144229Z" level=info msg="TearDown network for sandbox \"8561c7510ebd1db8d63a65ba5d57c9c27a8b222a60f0ae510ae9abfcd441710c\" successfully" May 17 01:37:05.929768 env[2382]: time="2025-05-17T01:37:05.929743349Z" level=info msg="RemovePodSandbox \"8561c7510ebd1db8d63a65ba5d57c9c27a8b222a60f0ae510ae9abfcd441710c\" returns successfully" May 17 01:37:05.930213 env[2382]: time="2025-05-17T01:37:05.930175148Z" level=info msg="StopPodSandbox for \"f9e66e5dd1f9b3baecffe1bc1c987d707df016acab95ce74eedf01262fe87aa5\"" May 17 01:37:05.993484 env[2382]: 2025-05-17 01:37:05.960 [WARNING][8719] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="f9e66e5dd1f9b3baecffe1bc1c987d707df016acab95ce74eedf01262fe87aa5" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.7--n--8226148b53-k8s-calico--apiserver--889cbc855--tdbbg-eth0", GenerateName:"calico-apiserver-889cbc855-", Namespace:"calico-apiserver", SelfLink:"", UID:"66187612-efc7-4292-8594-c39d74617f30", ResourceVersion:"961", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 1, 36, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"889cbc855", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.7-n-8226148b53", ContainerID:"25401365bdd327f1bc01dfb8001f01b31a0e06b7ae921882691ca290fce49565", Pod:"calico-apiserver-889cbc855-tdbbg", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.63.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calib6fbd1a473b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 01:37:05.993484 env[2382]: 2025-05-17 01:37:05.960 [INFO][8719] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="f9e66e5dd1f9b3baecffe1bc1c987d707df016acab95ce74eedf01262fe87aa5" May 17 01:37:05.993484 env[2382]: 2025-05-17 01:37:05.960 [INFO][8719] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no 
netns name, ignoring. ContainerID="f9e66e5dd1f9b3baecffe1bc1c987d707df016acab95ce74eedf01262fe87aa5" iface="eth0" netns="" May 17 01:37:05.993484 env[2382]: 2025-05-17 01:37:05.960 [INFO][8719] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="f9e66e5dd1f9b3baecffe1bc1c987d707df016acab95ce74eedf01262fe87aa5" May 17 01:37:05.993484 env[2382]: 2025-05-17 01:37:05.960 [INFO][8719] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f9e66e5dd1f9b3baecffe1bc1c987d707df016acab95ce74eedf01262fe87aa5" May 17 01:37:05.993484 env[2382]: 2025-05-17 01:37:05.978 [INFO][8745] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f9e66e5dd1f9b3baecffe1bc1c987d707df016acab95ce74eedf01262fe87aa5" HandleID="k8s-pod-network.f9e66e5dd1f9b3baecffe1bc1c987d707df016acab95ce74eedf01262fe87aa5" Workload="ci--3510.3.7--n--8226148b53-k8s-calico--apiserver--889cbc855--tdbbg-eth0" May 17 01:37:05.993484 env[2382]: 2025-05-17 01:37:05.978 [INFO][8745] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 01:37:05.993484 env[2382]: 2025-05-17 01:37:05.978 [INFO][8745] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 01:37:05.993484 env[2382]: 2025-05-17 01:37:05.989 [WARNING][8745] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f9e66e5dd1f9b3baecffe1bc1c987d707df016acab95ce74eedf01262fe87aa5" HandleID="k8s-pod-network.f9e66e5dd1f9b3baecffe1bc1c987d707df016acab95ce74eedf01262fe87aa5" Workload="ci--3510.3.7--n--8226148b53-k8s-calico--apiserver--889cbc855--tdbbg-eth0" May 17 01:37:05.993484 env[2382]: 2025-05-17 01:37:05.989 [INFO][8745] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f9e66e5dd1f9b3baecffe1bc1c987d707df016acab95ce74eedf01262fe87aa5" HandleID="k8s-pod-network.f9e66e5dd1f9b3baecffe1bc1c987d707df016acab95ce74eedf01262fe87aa5" Workload="ci--3510.3.7--n--8226148b53-k8s-calico--apiserver--889cbc855--tdbbg-eth0" May 17 01:37:05.993484 env[2382]: 2025-05-17 01:37:05.990 [INFO][8745] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 01:37:05.993484 env[2382]: 2025-05-17 01:37:05.992 [INFO][8719] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="f9e66e5dd1f9b3baecffe1bc1c987d707df016acab95ce74eedf01262fe87aa5" May 17 01:37:05.993788 env[2382]: time="2025-05-17T01:37:05.993514758Z" level=info msg="TearDown network for sandbox \"f9e66e5dd1f9b3baecffe1bc1c987d707df016acab95ce74eedf01262fe87aa5\" successfully" May 17 01:37:05.993788 env[2382]: time="2025-05-17T01:37:05.993545118Z" level=info msg="StopPodSandbox for \"f9e66e5dd1f9b3baecffe1bc1c987d707df016acab95ce74eedf01262fe87aa5\" returns successfully" May 17 01:37:05.993788 env[2382]: time="2025-05-17T01:37:05.993755238Z" level=info msg="RemovePodSandbox for \"f9e66e5dd1f9b3baecffe1bc1c987d707df016acab95ce74eedf01262fe87aa5\"" May 17 01:37:05.993861 env[2382]: time="2025-05-17T01:37:05.993783478Z" level=info msg="Forcibly stopping sandbox \"f9e66e5dd1f9b3baecffe1bc1c987d707df016acab95ce74eedf01262fe87aa5\"" May 17 01:37:06.053358 env[2382]: 2025-05-17 01:37:06.024 [WARNING][8775] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="f9e66e5dd1f9b3baecffe1bc1c987d707df016acab95ce74eedf01262fe87aa5" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.7--n--8226148b53-k8s-calico--apiserver--889cbc855--tdbbg-eth0", GenerateName:"calico-apiserver-889cbc855-", Namespace:"calico-apiserver", SelfLink:"", UID:"66187612-efc7-4292-8594-c39d74617f30", ResourceVersion:"961", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 1, 36, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"889cbc855", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.7-n-8226148b53", ContainerID:"25401365bdd327f1bc01dfb8001f01b31a0e06b7ae921882691ca290fce49565", Pod:"calico-apiserver-889cbc855-tdbbg", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.63.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calib6fbd1a473b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 01:37:06.053358 env[2382]: 2025-05-17 01:37:06.024 [INFO][8775] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="f9e66e5dd1f9b3baecffe1bc1c987d707df016acab95ce74eedf01262fe87aa5" May 17 01:37:06.053358 env[2382]: 2025-05-17 01:37:06.024 [INFO][8775] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no 
netns name, ignoring. ContainerID="f9e66e5dd1f9b3baecffe1bc1c987d707df016acab95ce74eedf01262fe87aa5" iface="eth0" netns="" May 17 01:37:06.053358 env[2382]: 2025-05-17 01:37:06.024 [INFO][8775] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="f9e66e5dd1f9b3baecffe1bc1c987d707df016acab95ce74eedf01262fe87aa5" May 17 01:37:06.053358 env[2382]: 2025-05-17 01:37:06.024 [INFO][8775] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f9e66e5dd1f9b3baecffe1bc1c987d707df016acab95ce74eedf01262fe87aa5" May 17 01:37:06.053358 env[2382]: 2025-05-17 01:37:06.042 [INFO][8796] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f9e66e5dd1f9b3baecffe1bc1c987d707df016acab95ce74eedf01262fe87aa5" HandleID="k8s-pod-network.f9e66e5dd1f9b3baecffe1bc1c987d707df016acab95ce74eedf01262fe87aa5" Workload="ci--3510.3.7--n--8226148b53-k8s-calico--apiserver--889cbc855--tdbbg-eth0" May 17 01:37:06.053358 env[2382]: 2025-05-17 01:37:06.042 [INFO][8796] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 01:37:06.053358 env[2382]: 2025-05-17 01:37:06.042 [INFO][8796] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 01:37:06.053358 env[2382]: 2025-05-17 01:37:06.049 [WARNING][8796] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f9e66e5dd1f9b3baecffe1bc1c987d707df016acab95ce74eedf01262fe87aa5" HandleID="k8s-pod-network.f9e66e5dd1f9b3baecffe1bc1c987d707df016acab95ce74eedf01262fe87aa5" Workload="ci--3510.3.7--n--8226148b53-k8s-calico--apiserver--889cbc855--tdbbg-eth0" May 17 01:37:06.053358 env[2382]: 2025-05-17 01:37:06.049 [INFO][8796] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f9e66e5dd1f9b3baecffe1bc1c987d707df016acab95ce74eedf01262fe87aa5" HandleID="k8s-pod-network.f9e66e5dd1f9b3baecffe1bc1c987d707df016acab95ce74eedf01262fe87aa5" Workload="ci--3510.3.7--n--8226148b53-k8s-calico--apiserver--889cbc855--tdbbg-eth0" May 17 01:37:06.053358 env[2382]: 2025-05-17 01:37:06.050 [INFO][8796] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 01:37:06.053358 env[2382]: 2025-05-17 01:37:06.052 [INFO][8775] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="f9e66e5dd1f9b3baecffe1bc1c987d707df016acab95ce74eedf01262fe87aa5" May 17 01:37:06.053766 env[2382]: time="2025-05-17T01:37:06.053390891Z" level=info msg="TearDown network for sandbox \"f9e66e5dd1f9b3baecffe1bc1c987d707df016acab95ce74eedf01262fe87aa5\" successfully" May 17 01:37:06.057442 env[2382]: time="2025-05-17T01:37:06.057417209Z" level=info msg="RemovePodSandbox \"f9e66e5dd1f9b3baecffe1bc1c987d707df016acab95ce74eedf01262fe87aa5\" returns successfully" May 17 01:37:06.057852 env[2382]: time="2025-05-17T01:37:06.057831609Z" level=info msg="StopPodSandbox for \"4c1dc3e85c19b8facb4a29579507b1ec757fa83bf8a0b5ae8e9996b0223367f8\"" May 17 01:37:06.116264 env[2382]: 2025-05-17 01:37:06.087 [WARNING][8827] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="4c1dc3e85c19b8facb4a29579507b1ec757fa83bf8a0b5ae8e9996b0223367f8" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.7--n--8226148b53-k8s-coredns--7c65d6cfc9--sxm46-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"cbe86ea2-e6e9-4576-adbf-bcd51e11fb88", ResourceVersion:"970", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 1, 36, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.7-n-8226148b53", ContainerID:"2bae6bab4a3086ea787251ad80ad26dc31129203d4de07aef33220eb4fd5195b", Pod:"coredns-7c65d6cfc9-sxm46", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.63.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali43b8052e3ad", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 01:37:06.116264 env[2382]: 2025-05-17 01:37:06.088 
[INFO][8827] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="4c1dc3e85c19b8facb4a29579507b1ec757fa83bf8a0b5ae8e9996b0223367f8" May 17 01:37:06.116264 env[2382]: 2025-05-17 01:37:06.088 [INFO][8827] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="4c1dc3e85c19b8facb4a29579507b1ec757fa83bf8a0b5ae8e9996b0223367f8" iface="eth0" netns="" May 17 01:37:06.116264 env[2382]: 2025-05-17 01:37:06.088 [INFO][8827] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="4c1dc3e85c19b8facb4a29579507b1ec757fa83bf8a0b5ae8e9996b0223367f8" May 17 01:37:06.116264 env[2382]: 2025-05-17 01:37:06.088 [INFO][8827] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4c1dc3e85c19b8facb4a29579507b1ec757fa83bf8a0b5ae8e9996b0223367f8" May 17 01:37:06.116264 env[2382]: 2025-05-17 01:37:06.105 [INFO][8848] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4c1dc3e85c19b8facb4a29579507b1ec757fa83bf8a0b5ae8e9996b0223367f8" HandleID="k8s-pod-network.4c1dc3e85c19b8facb4a29579507b1ec757fa83bf8a0b5ae8e9996b0223367f8" Workload="ci--3510.3.7--n--8226148b53-k8s-coredns--7c65d6cfc9--sxm46-eth0" May 17 01:37:06.116264 env[2382]: 2025-05-17 01:37:06.105 [INFO][8848] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 01:37:06.116264 env[2382]: 2025-05-17 01:37:06.105 [INFO][8848] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 01:37:06.116264 env[2382]: 2025-05-17 01:37:06.112 [WARNING][8848] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4c1dc3e85c19b8facb4a29579507b1ec757fa83bf8a0b5ae8e9996b0223367f8" HandleID="k8s-pod-network.4c1dc3e85c19b8facb4a29579507b1ec757fa83bf8a0b5ae8e9996b0223367f8" Workload="ci--3510.3.7--n--8226148b53-k8s-coredns--7c65d6cfc9--sxm46-eth0" May 17 01:37:06.116264 env[2382]: 2025-05-17 01:37:06.112 [INFO][8848] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4c1dc3e85c19b8facb4a29579507b1ec757fa83bf8a0b5ae8e9996b0223367f8" HandleID="k8s-pod-network.4c1dc3e85c19b8facb4a29579507b1ec757fa83bf8a0b5ae8e9996b0223367f8" Workload="ci--3510.3.7--n--8226148b53-k8s-coredns--7c65d6cfc9--sxm46-eth0" May 17 01:37:06.116264 env[2382]: 2025-05-17 01:37:06.113 [INFO][8848] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 01:37:06.116264 env[2382]: 2025-05-17 01:37:06.114 [INFO][8827] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="4c1dc3e85c19b8facb4a29579507b1ec757fa83bf8a0b5ae8e9996b0223367f8" May 17 01:37:06.116264 env[2382]: time="2025-05-17T01:37:06.116242702Z" level=info msg="TearDown network for sandbox \"4c1dc3e85c19b8facb4a29579507b1ec757fa83bf8a0b5ae8e9996b0223367f8\" successfully" May 17 01:37:06.116664 env[2382]: time="2025-05-17T01:37:06.116270102Z" level=info msg="StopPodSandbox for \"4c1dc3e85c19b8facb4a29579507b1ec757fa83bf8a0b5ae8e9996b0223367f8\" returns successfully" May 17 01:37:06.116691 env[2382]: time="2025-05-17T01:37:06.116661342Z" level=info msg="RemovePodSandbox for \"4c1dc3e85c19b8facb4a29579507b1ec757fa83bf8a0b5ae8e9996b0223367f8\"" May 17 01:37:06.116737 env[2382]: time="2025-05-17T01:37:06.116696382Z" level=info msg="Forcibly stopping sandbox \"4c1dc3e85c19b8facb4a29579507b1ec757fa83bf8a0b5ae8e9996b0223367f8\"" May 17 01:37:06.174728 env[2382]: 2025-05-17 01:37:06.146 [WARNING][8876] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="4c1dc3e85c19b8facb4a29579507b1ec757fa83bf8a0b5ae8e9996b0223367f8" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.7--n--8226148b53-k8s-coredns--7c65d6cfc9--sxm46-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"cbe86ea2-e6e9-4576-adbf-bcd51e11fb88", ResourceVersion:"970", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 1, 36, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.7-n-8226148b53", ContainerID:"2bae6bab4a3086ea787251ad80ad26dc31129203d4de07aef33220eb4fd5195b", Pod:"coredns-7c65d6cfc9-sxm46", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.63.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali43b8052e3ad", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 01:37:06.174728 env[2382]: 2025-05-17 01:37:06.146 
[INFO][8876] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="4c1dc3e85c19b8facb4a29579507b1ec757fa83bf8a0b5ae8e9996b0223367f8" May 17 01:37:06.174728 env[2382]: 2025-05-17 01:37:06.146 [INFO][8876] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="4c1dc3e85c19b8facb4a29579507b1ec757fa83bf8a0b5ae8e9996b0223367f8" iface="eth0" netns="" May 17 01:37:06.174728 env[2382]: 2025-05-17 01:37:06.146 [INFO][8876] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="4c1dc3e85c19b8facb4a29579507b1ec757fa83bf8a0b5ae8e9996b0223367f8" May 17 01:37:06.174728 env[2382]: 2025-05-17 01:37:06.146 [INFO][8876] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4c1dc3e85c19b8facb4a29579507b1ec757fa83bf8a0b5ae8e9996b0223367f8" May 17 01:37:06.174728 env[2382]: 2025-05-17 01:37:06.164 [INFO][8897] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4c1dc3e85c19b8facb4a29579507b1ec757fa83bf8a0b5ae8e9996b0223367f8" HandleID="k8s-pod-network.4c1dc3e85c19b8facb4a29579507b1ec757fa83bf8a0b5ae8e9996b0223367f8" Workload="ci--3510.3.7--n--8226148b53-k8s-coredns--7c65d6cfc9--sxm46-eth0" May 17 01:37:06.174728 env[2382]: 2025-05-17 01:37:06.164 [INFO][8897] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 01:37:06.174728 env[2382]: 2025-05-17 01:37:06.164 [INFO][8897] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 01:37:06.174728 env[2382]: 2025-05-17 01:37:06.171 [WARNING][8897] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4c1dc3e85c19b8facb4a29579507b1ec757fa83bf8a0b5ae8e9996b0223367f8" HandleID="k8s-pod-network.4c1dc3e85c19b8facb4a29579507b1ec757fa83bf8a0b5ae8e9996b0223367f8" Workload="ci--3510.3.7--n--8226148b53-k8s-coredns--7c65d6cfc9--sxm46-eth0" May 17 01:37:06.174728 env[2382]: 2025-05-17 01:37:06.171 [INFO][8897] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4c1dc3e85c19b8facb4a29579507b1ec757fa83bf8a0b5ae8e9996b0223367f8" HandleID="k8s-pod-network.4c1dc3e85c19b8facb4a29579507b1ec757fa83bf8a0b5ae8e9996b0223367f8" Workload="ci--3510.3.7--n--8226148b53-k8s-coredns--7c65d6cfc9--sxm46-eth0" May 17 01:37:06.174728 env[2382]: 2025-05-17 01:37:06.172 [INFO][8897] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 01:37:06.174728 env[2382]: 2025-05-17 01:37:06.173 [INFO][8876] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="4c1dc3e85c19b8facb4a29579507b1ec757fa83bf8a0b5ae8e9996b0223367f8" May 17 01:37:06.175029 env[2382]: time="2025-05-17T01:37:06.174772076Z" level=info msg="TearDown network for sandbox \"4c1dc3e85c19b8facb4a29579507b1ec757fa83bf8a0b5ae8e9996b0223367f8\" successfully" May 17 01:37:06.176338 env[2382]: time="2025-05-17T01:37:06.176315395Z" level=info msg="RemovePodSandbox \"4c1dc3e85c19b8facb4a29579507b1ec757fa83bf8a0b5ae8e9996b0223367f8\" returns successfully" May 17 01:37:06.176656 env[2382]: time="2025-05-17T01:37:06.176631195Z" level=info msg="StopPodSandbox for \"95dbf81c53e15240c7ad43018302a5719885f82435672b3ee5102448965fec01\"" May 17 01:37:06.235675 env[2382]: 2025-05-17 01:37:06.206 [WARNING][8926] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="95dbf81c53e15240c7ad43018302a5719885f82435672b3ee5102448965fec01" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.7--n--8226148b53-k8s-calico--apiserver--889cbc855--v2bvs-eth0", GenerateName:"calico-apiserver-889cbc855-", Namespace:"calico-apiserver", SelfLink:"", UID:"2f14c134-cd19-42e5-bbc4-b6bd9f685215", ResourceVersion:"1059", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 1, 36, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"889cbc855", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.7-n-8226148b53", ContainerID:"5047bd5c28fef562a41c623ffa68f4cda9c8f2db2a43050c26b999adccf06de9", Pod:"calico-apiserver-889cbc855-v2bvs", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.63.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali3f821384a61", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 01:37:06.235675 env[2382]: 2025-05-17 01:37:06.206 [INFO][8926] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="95dbf81c53e15240c7ad43018302a5719885f82435672b3ee5102448965fec01" May 17 01:37:06.235675 env[2382]: 2025-05-17 01:37:06.206 [INFO][8926] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no 
netns name, ignoring. ContainerID="95dbf81c53e15240c7ad43018302a5719885f82435672b3ee5102448965fec01" iface="eth0" netns="" May 17 01:37:06.235675 env[2382]: 2025-05-17 01:37:06.206 [INFO][8926] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="95dbf81c53e15240c7ad43018302a5719885f82435672b3ee5102448965fec01" May 17 01:37:06.235675 env[2382]: 2025-05-17 01:37:06.206 [INFO][8926] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="95dbf81c53e15240c7ad43018302a5719885f82435672b3ee5102448965fec01" May 17 01:37:06.235675 env[2382]: 2025-05-17 01:37:06.224 [INFO][8948] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="95dbf81c53e15240c7ad43018302a5719885f82435672b3ee5102448965fec01" HandleID="k8s-pod-network.95dbf81c53e15240c7ad43018302a5719885f82435672b3ee5102448965fec01" Workload="ci--3510.3.7--n--8226148b53-k8s-calico--apiserver--889cbc855--v2bvs-eth0" May 17 01:37:06.235675 env[2382]: 2025-05-17 01:37:06.224 [INFO][8948] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 01:37:06.235675 env[2382]: 2025-05-17 01:37:06.224 [INFO][8948] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 01:37:06.235675 env[2382]: 2025-05-17 01:37:06.232 [WARNING][8948] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="95dbf81c53e15240c7ad43018302a5719885f82435672b3ee5102448965fec01" HandleID="k8s-pod-network.95dbf81c53e15240c7ad43018302a5719885f82435672b3ee5102448965fec01" Workload="ci--3510.3.7--n--8226148b53-k8s-calico--apiserver--889cbc855--v2bvs-eth0" May 17 01:37:06.235675 env[2382]: 2025-05-17 01:37:06.232 [INFO][8948] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="95dbf81c53e15240c7ad43018302a5719885f82435672b3ee5102448965fec01" HandleID="k8s-pod-network.95dbf81c53e15240c7ad43018302a5719885f82435672b3ee5102448965fec01" Workload="ci--3510.3.7--n--8226148b53-k8s-calico--apiserver--889cbc855--v2bvs-eth0" May 17 01:37:06.235675 env[2382]: 2025-05-17 01:37:06.233 [INFO][8948] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 01:37:06.235675 env[2382]: 2025-05-17 01:37:06.234 [INFO][8926] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="95dbf81c53e15240c7ad43018302a5719885f82435672b3ee5102448965fec01" May 17 01:37:06.235961 env[2382]: time="2025-05-17T01:37:06.235698208Z" level=info msg="TearDown network for sandbox \"95dbf81c53e15240c7ad43018302a5719885f82435672b3ee5102448965fec01\" successfully" May 17 01:37:06.235961 env[2382]: time="2025-05-17T01:37:06.235724688Z" level=info msg="StopPodSandbox for \"95dbf81c53e15240c7ad43018302a5719885f82435672b3ee5102448965fec01\" returns successfully" May 17 01:37:06.236031 env[2382]: time="2025-05-17T01:37:06.236005688Z" level=info msg="RemovePodSandbox for \"95dbf81c53e15240c7ad43018302a5719885f82435672b3ee5102448965fec01\"" May 17 01:37:06.236071 env[2382]: time="2025-05-17T01:37:06.236043208Z" level=info msg="Forcibly stopping sandbox \"95dbf81c53e15240c7ad43018302a5719885f82435672b3ee5102448965fec01\"" May 17 01:37:06.294308 env[2382]: 2025-05-17 01:37:06.266 [WARNING][8976] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="95dbf81c53e15240c7ad43018302a5719885f82435672b3ee5102448965fec01" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.7--n--8226148b53-k8s-calico--apiserver--889cbc855--v2bvs-eth0", GenerateName:"calico-apiserver-889cbc855-", Namespace:"calico-apiserver", SelfLink:"", UID:"2f14c134-cd19-42e5-bbc4-b6bd9f685215", ResourceVersion:"1059", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 1, 36, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"889cbc855", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.7-n-8226148b53", ContainerID:"5047bd5c28fef562a41c623ffa68f4cda9c8f2db2a43050c26b999adccf06de9", Pod:"calico-apiserver-889cbc855-v2bvs", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.63.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali3f821384a61", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 01:37:06.294308 env[2382]: 2025-05-17 01:37:06.266 [INFO][8976] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="95dbf81c53e15240c7ad43018302a5719885f82435672b3ee5102448965fec01" May 17 01:37:06.294308 env[2382]: 2025-05-17 01:37:06.266 [INFO][8976] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no 
netns name, ignoring. ContainerID="95dbf81c53e15240c7ad43018302a5719885f82435672b3ee5102448965fec01" iface="eth0" netns="" May 17 01:37:06.294308 env[2382]: 2025-05-17 01:37:06.266 [INFO][8976] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="95dbf81c53e15240c7ad43018302a5719885f82435672b3ee5102448965fec01" May 17 01:37:06.294308 env[2382]: 2025-05-17 01:37:06.266 [INFO][8976] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="95dbf81c53e15240c7ad43018302a5719885f82435672b3ee5102448965fec01" May 17 01:37:06.294308 env[2382]: 2025-05-17 01:37:06.283 [INFO][8995] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="95dbf81c53e15240c7ad43018302a5719885f82435672b3ee5102448965fec01" HandleID="k8s-pod-network.95dbf81c53e15240c7ad43018302a5719885f82435672b3ee5102448965fec01" Workload="ci--3510.3.7--n--8226148b53-k8s-calico--apiserver--889cbc855--v2bvs-eth0" May 17 01:37:06.294308 env[2382]: 2025-05-17 01:37:06.283 [INFO][8995] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 01:37:06.294308 env[2382]: 2025-05-17 01:37:06.283 [INFO][8995] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 01:37:06.294308 env[2382]: 2025-05-17 01:37:06.290 [WARNING][8995] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="95dbf81c53e15240c7ad43018302a5719885f82435672b3ee5102448965fec01" HandleID="k8s-pod-network.95dbf81c53e15240c7ad43018302a5719885f82435672b3ee5102448965fec01" Workload="ci--3510.3.7--n--8226148b53-k8s-calico--apiserver--889cbc855--v2bvs-eth0" May 17 01:37:06.294308 env[2382]: 2025-05-17 01:37:06.290 [INFO][8995] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="95dbf81c53e15240c7ad43018302a5719885f82435672b3ee5102448965fec01" HandleID="k8s-pod-network.95dbf81c53e15240c7ad43018302a5719885f82435672b3ee5102448965fec01" Workload="ci--3510.3.7--n--8226148b53-k8s-calico--apiserver--889cbc855--v2bvs-eth0" May 17 01:37:06.294308 env[2382]: 2025-05-17 01:37:06.291 [INFO][8995] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 01:37:06.294308 env[2382]: 2025-05-17 01:37:06.292 [INFO][8976] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="95dbf81c53e15240c7ad43018302a5719885f82435672b3ee5102448965fec01" May 17 01:37:06.294588 env[2382]: time="2025-05-17T01:37:06.294347262Z" level=info msg="TearDown network for sandbox \"95dbf81c53e15240c7ad43018302a5719885f82435672b3ee5102448965fec01\" successfully" May 17 01:37:06.295908 env[2382]: time="2025-05-17T01:37:06.295884741Z" level=info msg="RemovePodSandbox \"95dbf81c53e15240c7ad43018302a5719885f82435672b3ee5102448965fec01\" returns successfully" May 17 01:37:06.296259 env[2382]: time="2025-05-17T01:37:06.296236541Z" level=info msg="StopPodSandbox for \"9436b8bd3da051194c642769449b03cc2ee0d34218c534e468283cf060b40c92\"" May 17 01:37:06.357132 env[2382]: 2025-05-17 01:37:06.327 [WARNING][9024] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="9436b8bd3da051194c642769449b03cc2ee0d34218c534e468283cf060b40c92" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.7--n--8226148b53-k8s-goldmane--8f77d7b6c--czxt2-eth0", GenerateName:"goldmane-8f77d7b6c-", Namespace:"calico-system", SelfLink:"", UID:"c4371095-cd92-4ea8-9564-c86ac0de3064", ResourceVersion:"1056", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 1, 36, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"8f77d7b6c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.7-n-8226148b53", ContainerID:"25a2d084b605452e7f2b021cece860eebcf56fa0a307bf93e5e95eafea9a74c7", Pod:"goldmane-8f77d7b6c-czxt2", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.63.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali2092006e175", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 01:37:06.357132 env[2382]: 2025-05-17 01:37:06.327 [INFO][9024] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="9436b8bd3da051194c642769449b03cc2ee0d34218c534e468283cf060b40c92" May 17 01:37:06.357132 env[2382]: 2025-05-17 01:37:06.327 [INFO][9024] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="9436b8bd3da051194c642769449b03cc2ee0d34218c534e468283cf060b40c92" iface="eth0" netns="" May 17 01:37:06.357132 env[2382]: 2025-05-17 01:37:06.327 [INFO][9024] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="9436b8bd3da051194c642769449b03cc2ee0d34218c534e468283cf060b40c92" May 17 01:37:06.357132 env[2382]: 2025-05-17 01:37:06.327 [INFO][9024] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9436b8bd3da051194c642769449b03cc2ee0d34218c534e468283cf060b40c92" May 17 01:37:06.357132 env[2382]: 2025-05-17 01:37:06.345 [INFO][9044] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9436b8bd3da051194c642769449b03cc2ee0d34218c534e468283cf060b40c92" HandleID="k8s-pod-network.9436b8bd3da051194c642769449b03cc2ee0d34218c534e468283cf060b40c92" Workload="ci--3510.3.7--n--8226148b53-k8s-goldmane--8f77d7b6c--czxt2-eth0" May 17 01:37:06.357132 env[2382]: 2025-05-17 01:37:06.345 [INFO][9044] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 01:37:06.357132 env[2382]: 2025-05-17 01:37:06.345 [INFO][9044] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 01:37:06.357132 env[2382]: 2025-05-17 01:37:06.353 [WARNING][9044] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="9436b8bd3da051194c642769449b03cc2ee0d34218c534e468283cf060b40c92" HandleID="k8s-pod-network.9436b8bd3da051194c642769449b03cc2ee0d34218c534e468283cf060b40c92" Workload="ci--3510.3.7--n--8226148b53-k8s-goldmane--8f77d7b6c--czxt2-eth0" May 17 01:37:06.357132 env[2382]: 2025-05-17 01:37:06.353 [INFO][9044] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9436b8bd3da051194c642769449b03cc2ee0d34218c534e468283cf060b40c92" HandleID="k8s-pod-network.9436b8bd3da051194c642769449b03cc2ee0d34218c534e468283cf060b40c92" Workload="ci--3510.3.7--n--8226148b53-k8s-goldmane--8f77d7b6c--czxt2-eth0" May 17 01:37:06.357132 env[2382]: 2025-05-17 01:37:06.354 [INFO][9044] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 01:37:06.357132 env[2382]: 2025-05-17 01:37:06.355 [INFO][9024] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="9436b8bd3da051194c642769449b03cc2ee0d34218c534e468283cf060b40c92" May 17 01:37:06.357574 env[2382]: time="2025-05-17T01:37:06.357149953Z" level=info msg="TearDown network for sandbox \"9436b8bd3da051194c642769449b03cc2ee0d34218c534e468283cf060b40c92\" successfully" May 17 01:37:06.357574 env[2382]: time="2025-05-17T01:37:06.357180593Z" level=info msg="StopPodSandbox for \"9436b8bd3da051194c642769449b03cc2ee0d34218c534e468283cf060b40c92\" returns successfully" May 17 01:37:06.357574 env[2382]: time="2025-05-17T01:37:06.357506313Z" level=info msg="RemovePodSandbox for \"9436b8bd3da051194c642769449b03cc2ee0d34218c534e468283cf060b40c92\"" May 17 01:37:06.357574 env[2382]: time="2025-05-17T01:37:06.357538513Z" level=info msg="Forcibly stopping sandbox \"9436b8bd3da051194c642769449b03cc2ee0d34218c534e468283cf060b40c92\"" May 17 01:37:06.417451 env[2382]: 2025-05-17 01:37:06.388 [WARNING][9073] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="9436b8bd3da051194c642769449b03cc2ee0d34218c534e468283cf060b40c92" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.7--n--8226148b53-k8s-goldmane--8f77d7b6c--czxt2-eth0", GenerateName:"goldmane-8f77d7b6c-", Namespace:"calico-system", SelfLink:"", UID:"c4371095-cd92-4ea8-9564-c86ac0de3064", ResourceVersion:"1056", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 1, 36, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"8f77d7b6c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.7-n-8226148b53", ContainerID:"25a2d084b605452e7f2b021cece860eebcf56fa0a307bf93e5e95eafea9a74c7", Pod:"goldmane-8f77d7b6c-czxt2", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.63.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali2092006e175", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 01:37:06.417451 env[2382]: 2025-05-17 01:37:06.388 [INFO][9073] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="9436b8bd3da051194c642769449b03cc2ee0d34218c534e468283cf060b40c92" May 17 01:37:06.417451 env[2382]: 2025-05-17 01:37:06.388 [INFO][9073] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="9436b8bd3da051194c642769449b03cc2ee0d34218c534e468283cf060b40c92" iface="eth0" netns="" May 17 01:37:06.417451 env[2382]: 2025-05-17 01:37:06.388 [INFO][9073] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="9436b8bd3da051194c642769449b03cc2ee0d34218c534e468283cf060b40c92" May 17 01:37:06.417451 env[2382]: 2025-05-17 01:37:06.388 [INFO][9073] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9436b8bd3da051194c642769449b03cc2ee0d34218c534e468283cf060b40c92" May 17 01:37:06.417451 env[2382]: 2025-05-17 01:37:06.406 [INFO][9096] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9436b8bd3da051194c642769449b03cc2ee0d34218c534e468283cf060b40c92" HandleID="k8s-pod-network.9436b8bd3da051194c642769449b03cc2ee0d34218c534e468283cf060b40c92" Workload="ci--3510.3.7--n--8226148b53-k8s-goldmane--8f77d7b6c--czxt2-eth0" May 17 01:37:06.417451 env[2382]: 2025-05-17 01:37:06.406 [INFO][9096] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 01:37:06.417451 env[2382]: 2025-05-17 01:37:06.406 [INFO][9096] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 01:37:06.417451 env[2382]: 2025-05-17 01:37:06.413 [WARNING][9096] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="9436b8bd3da051194c642769449b03cc2ee0d34218c534e468283cf060b40c92" HandleID="k8s-pod-network.9436b8bd3da051194c642769449b03cc2ee0d34218c534e468283cf060b40c92" Workload="ci--3510.3.7--n--8226148b53-k8s-goldmane--8f77d7b6c--czxt2-eth0" May 17 01:37:06.417451 env[2382]: 2025-05-17 01:37:06.413 [INFO][9096] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9436b8bd3da051194c642769449b03cc2ee0d34218c534e468283cf060b40c92" HandleID="k8s-pod-network.9436b8bd3da051194c642769449b03cc2ee0d34218c534e468283cf060b40c92" Workload="ci--3510.3.7--n--8226148b53-k8s-goldmane--8f77d7b6c--czxt2-eth0" May 17 01:37:06.417451 env[2382]: 2025-05-17 01:37:06.414 [INFO][9096] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 01:37:06.417451 env[2382]: 2025-05-17 01:37:06.416 [INFO][9073] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="9436b8bd3da051194c642769449b03cc2ee0d34218c534e468283cf060b40c92" May 17 01:37:06.417837 env[2382]: time="2025-05-17T01:37:06.417471406Z" level=info msg="TearDown network for sandbox \"9436b8bd3da051194c642769449b03cc2ee0d34218c534e468283cf060b40c92\" successfully" May 17 01:37:06.419049 env[2382]: time="2025-05-17T01:37:06.419024485Z" level=info msg="RemovePodSandbox \"9436b8bd3da051194c642769449b03cc2ee0d34218c534e468283cf060b40c92\" returns successfully" May 17 01:37:06.419348 env[2382]: time="2025-05-17T01:37:06.419328445Z" level=info msg="StopPodSandbox for \"19df9716bf6905a955a54fe5b5c367cc75070f9bf8995578b6d4130b576a1bde\"" May 17 01:37:06.477825 env[2382]: 2025-05-17 01:37:06.449 [WARNING][9128] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="19df9716bf6905a955a54fe5b5c367cc75070f9bf8995578b6d4130b576a1bde" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.7--n--8226148b53-k8s-calico--kube--controllers--54b699d664--blxw6-eth0", GenerateName:"calico-kube-controllers-54b699d664-", Namespace:"calico-system", SelfLink:"", UID:"c4bca88a-aef7-4116-a834-44caa9bfe63e", ResourceVersion:"1067", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 1, 36, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"54b699d664", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.7-n-8226148b53", ContainerID:"3583b33bcc0002854bf43c727e1eb36679e83a9ee3cb97fbad33205fa7364eab", Pod:"calico-kube-controllers-54b699d664-blxw6", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.63.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali0092f3aaa1e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 01:37:06.477825 env[2382]: 2025-05-17 01:37:06.449 [INFO][9128] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="19df9716bf6905a955a54fe5b5c367cc75070f9bf8995578b6d4130b576a1bde" May 17 01:37:06.477825 env[2382]: 2025-05-17 01:37:06.449 [INFO][9128] cni-plugin/dataplane_linux.go 555: 
CleanUpNamespace called with no netns name, ignoring. ContainerID="19df9716bf6905a955a54fe5b5c367cc75070f9bf8995578b6d4130b576a1bde" iface="eth0" netns="" May 17 01:37:06.477825 env[2382]: 2025-05-17 01:37:06.449 [INFO][9128] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="19df9716bf6905a955a54fe5b5c367cc75070f9bf8995578b6d4130b576a1bde" May 17 01:37:06.477825 env[2382]: 2025-05-17 01:37:06.449 [INFO][9128] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="19df9716bf6905a955a54fe5b5c367cc75070f9bf8995578b6d4130b576a1bde" May 17 01:37:06.477825 env[2382]: 2025-05-17 01:37:06.466 [INFO][9151] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="19df9716bf6905a955a54fe5b5c367cc75070f9bf8995578b6d4130b576a1bde" HandleID="k8s-pod-network.19df9716bf6905a955a54fe5b5c367cc75070f9bf8995578b6d4130b576a1bde" Workload="ci--3510.3.7--n--8226148b53-k8s-calico--kube--controllers--54b699d664--blxw6-eth0" May 17 01:37:06.477825 env[2382]: 2025-05-17 01:37:06.467 [INFO][9151] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 01:37:06.477825 env[2382]: 2025-05-17 01:37:06.467 [INFO][9151] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 01:37:06.477825 env[2382]: 2025-05-17 01:37:06.474 [WARNING][9151] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="19df9716bf6905a955a54fe5b5c367cc75070f9bf8995578b6d4130b576a1bde" HandleID="k8s-pod-network.19df9716bf6905a955a54fe5b5c367cc75070f9bf8995578b6d4130b576a1bde" Workload="ci--3510.3.7--n--8226148b53-k8s-calico--kube--controllers--54b699d664--blxw6-eth0" May 17 01:37:06.477825 env[2382]: 2025-05-17 01:37:06.474 [INFO][9151] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="19df9716bf6905a955a54fe5b5c367cc75070f9bf8995578b6d4130b576a1bde" HandleID="k8s-pod-network.19df9716bf6905a955a54fe5b5c367cc75070f9bf8995578b6d4130b576a1bde" Workload="ci--3510.3.7--n--8226148b53-k8s-calico--kube--controllers--54b699d664--blxw6-eth0" May 17 01:37:06.477825 env[2382]: 2025-05-17 01:37:06.475 [INFO][9151] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 01:37:06.477825 env[2382]: 2025-05-17 01:37:06.476 [INFO][9128] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="19df9716bf6905a955a54fe5b5c367cc75070f9bf8995578b6d4130b576a1bde" May 17 01:37:06.478271 env[2382]: time="2025-05-17T01:37:06.477831819Z" level=info msg="TearDown network for sandbox \"19df9716bf6905a955a54fe5b5c367cc75070f9bf8995578b6d4130b576a1bde\" successfully" May 17 01:37:06.478271 env[2382]: time="2025-05-17T01:37:06.477856299Z" level=info msg="StopPodSandbox for \"19df9716bf6905a955a54fe5b5c367cc75070f9bf8995578b6d4130b576a1bde\" returns successfully" May 17 01:37:06.478271 env[2382]: time="2025-05-17T01:37:06.478154019Z" level=info msg="RemovePodSandbox for \"19df9716bf6905a955a54fe5b5c367cc75070f9bf8995578b6d4130b576a1bde\"" May 17 01:37:06.478271 env[2382]: time="2025-05-17T01:37:06.478179939Z" level=info msg="Forcibly stopping sandbox \"19df9716bf6905a955a54fe5b5c367cc75070f9bf8995578b6d4130b576a1bde\"" May 17 01:37:06.537562 env[2382]: 2025-05-17 01:37:06.508 [WARNING][9180] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="19df9716bf6905a955a54fe5b5c367cc75070f9bf8995578b6d4130b576a1bde" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.7--n--8226148b53-k8s-calico--kube--controllers--54b699d664--blxw6-eth0", GenerateName:"calico-kube-controllers-54b699d664-", Namespace:"calico-system", SelfLink:"", UID:"c4bca88a-aef7-4116-a834-44caa9bfe63e", ResourceVersion:"1067", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 1, 36, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"54b699d664", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.7-n-8226148b53", ContainerID:"3583b33bcc0002854bf43c727e1eb36679e83a9ee3cb97fbad33205fa7364eab", Pod:"calico-kube-controllers-54b699d664-blxw6", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.63.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali0092f3aaa1e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 01:37:06.537562 env[2382]: 2025-05-17 01:37:06.509 [INFO][9180] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="19df9716bf6905a955a54fe5b5c367cc75070f9bf8995578b6d4130b576a1bde" May 17 01:37:06.537562 env[2382]: 2025-05-17 01:37:06.509 [INFO][9180] cni-plugin/dataplane_linux.go 555: 
CleanUpNamespace called with no netns name, ignoring. ContainerID="19df9716bf6905a955a54fe5b5c367cc75070f9bf8995578b6d4130b576a1bde" iface="eth0" netns="" May 17 01:37:06.537562 env[2382]: 2025-05-17 01:37:06.509 [INFO][9180] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="19df9716bf6905a955a54fe5b5c367cc75070f9bf8995578b6d4130b576a1bde" May 17 01:37:06.537562 env[2382]: 2025-05-17 01:37:06.509 [INFO][9180] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="19df9716bf6905a955a54fe5b5c367cc75070f9bf8995578b6d4130b576a1bde" May 17 01:37:06.537562 env[2382]: 2025-05-17 01:37:06.526 [INFO][9196] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="19df9716bf6905a955a54fe5b5c367cc75070f9bf8995578b6d4130b576a1bde" HandleID="k8s-pod-network.19df9716bf6905a955a54fe5b5c367cc75070f9bf8995578b6d4130b576a1bde" Workload="ci--3510.3.7--n--8226148b53-k8s-calico--kube--controllers--54b699d664--blxw6-eth0" May 17 01:37:06.537562 env[2382]: 2025-05-17 01:37:06.526 [INFO][9196] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 01:37:06.537562 env[2382]: 2025-05-17 01:37:06.526 [INFO][9196] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 01:37:06.537562 env[2382]: 2025-05-17 01:37:06.534 [WARNING][9196] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="19df9716bf6905a955a54fe5b5c367cc75070f9bf8995578b6d4130b576a1bde" HandleID="k8s-pod-network.19df9716bf6905a955a54fe5b5c367cc75070f9bf8995578b6d4130b576a1bde" Workload="ci--3510.3.7--n--8226148b53-k8s-calico--kube--controllers--54b699d664--blxw6-eth0" May 17 01:37:06.537562 env[2382]: 2025-05-17 01:37:06.534 [INFO][9196] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="19df9716bf6905a955a54fe5b5c367cc75070f9bf8995578b6d4130b576a1bde" HandleID="k8s-pod-network.19df9716bf6905a955a54fe5b5c367cc75070f9bf8995578b6d4130b576a1bde" Workload="ci--3510.3.7--n--8226148b53-k8s-calico--kube--controllers--54b699d664--blxw6-eth0" May 17 01:37:06.537562 env[2382]: 2025-05-17 01:37:06.535 [INFO][9196] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 01:37:06.537562 env[2382]: 2025-05-17 01:37:06.536 [INFO][9180] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="19df9716bf6905a955a54fe5b5c367cc75070f9bf8995578b6d4130b576a1bde" May 17 01:37:06.537970 env[2382]: time="2025-05-17T01:37:06.537596392Z" level=info msg="TearDown network for sandbox \"19df9716bf6905a955a54fe5b5c367cc75070f9bf8995578b6d4130b576a1bde\" successfully" May 17 01:37:06.539209 env[2382]: time="2025-05-17T01:37:06.539183831Z" level=info msg="RemovePodSandbox \"19df9716bf6905a955a54fe5b5c367cc75070f9bf8995578b6d4130b576a1bde\" returns successfully" May 17 01:37:06.539577 env[2382]: time="2025-05-17T01:37:06.539558311Z" level=info msg="StopPodSandbox for \"11b52f82144de40182a6e5987d2cc226fef23efb714c59b1f3b99bd7d4cd317d\"" May 17 01:37:06.596559 env[2382]: 2025-05-17 01:37:06.568 [WARNING][9226] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="11b52f82144de40182a6e5987d2cc226fef23efb714c59b1f3b99bd7d4cd317d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.7--n--8226148b53-k8s-csi--node--driver--jxv7j-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"865bbbf9-e470-43da-b4e3-f1b9c7f9d88b", ResourceVersion:"1024", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 1, 36, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"68bf44dd5", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.7-n-8226148b53", ContainerID:"73b6d6e1d01e85a9f19aa211f76cb5900e9ef28247604ecfb7f8a330e7c689bf", Pod:"csi-node-driver-jxv7j", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.63.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calif27f47d5103", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 01:37:06.596559 env[2382]: 2025-05-17 01:37:06.569 [INFO][9226] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="11b52f82144de40182a6e5987d2cc226fef23efb714c59b1f3b99bd7d4cd317d" May 17 01:37:06.596559 env[2382]: 2025-05-17 01:37:06.569 [INFO][9226] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns 
name, ignoring. ContainerID="11b52f82144de40182a6e5987d2cc226fef23efb714c59b1f3b99bd7d4cd317d" iface="eth0" netns="" May 17 01:37:06.596559 env[2382]: 2025-05-17 01:37:06.569 [INFO][9226] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="11b52f82144de40182a6e5987d2cc226fef23efb714c59b1f3b99bd7d4cd317d" May 17 01:37:06.596559 env[2382]: 2025-05-17 01:37:06.569 [INFO][9226] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="11b52f82144de40182a6e5987d2cc226fef23efb714c59b1f3b99bd7d4cd317d" May 17 01:37:06.596559 env[2382]: 2025-05-17 01:37:06.585 [INFO][9239] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="11b52f82144de40182a6e5987d2cc226fef23efb714c59b1f3b99bd7d4cd317d" HandleID="k8s-pod-network.11b52f82144de40182a6e5987d2cc226fef23efb714c59b1f3b99bd7d4cd317d" Workload="ci--3510.3.7--n--8226148b53-k8s-csi--node--driver--jxv7j-eth0" May 17 01:37:06.596559 env[2382]: 2025-05-17 01:37:06.585 [INFO][9239] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 01:37:06.596559 env[2382]: 2025-05-17 01:37:06.586 [INFO][9239] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 01:37:06.596559 env[2382]: 2025-05-17 01:37:06.593 [WARNING][9239] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="11b52f82144de40182a6e5987d2cc226fef23efb714c59b1f3b99bd7d4cd317d" HandleID="k8s-pod-network.11b52f82144de40182a6e5987d2cc226fef23efb714c59b1f3b99bd7d4cd317d" Workload="ci--3510.3.7--n--8226148b53-k8s-csi--node--driver--jxv7j-eth0" May 17 01:37:06.596559 env[2382]: 2025-05-17 01:37:06.593 [INFO][9239] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="11b52f82144de40182a6e5987d2cc226fef23efb714c59b1f3b99bd7d4cd317d" HandleID="k8s-pod-network.11b52f82144de40182a6e5987d2cc226fef23efb714c59b1f3b99bd7d4cd317d" Workload="ci--3510.3.7--n--8226148b53-k8s-csi--node--driver--jxv7j-eth0" May 17 01:37:06.596559 env[2382]: 2025-05-17 01:37:06.594 [INFO][9239] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 01:37:06.596559 env[2382]: 2025-05-17 01:37:06.595 [INFO][9226] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="11b52f82144de40182a6e5987d2cc226fef23efb714c59b1f3b99bd7d4cd317d" May 17 01:37:06.597045 env[2382]: time="2025-05-17T01:37:06.596589485Z" level=info msg="TearDown network for sandbox \"11b52f82144de40182a6e5987d2cc226fef23efb714c59b1f3b99bd7d4cd317d\" successfully" May 17 01:37:06.597045 env[2382]: time="2025-05-17T01:37:06.596622965Z" level=info msg="StopPodSandbox for \"11b52f82144de40182a6e5987d2cc226fef23efb714c59b1f3b99bd7d4cd317d\" returns successfully" May 17 01:37:06.597045 env[2382]: time="2025-05-17T01:37:06.596942445Z" level=info msg="RemovePodSandbox for \"11b52f82144de40182a6e5987d2cc226fef23efb714c59b1f3b99bd7d4cd317d\"" May 17 01:37:06.597045 env[2382]: time="2025-05-17T01:37:06.596976685Z" level=info msg="Forcibly stopping sandbox \"11b52f82144de40182a6e5987d2cc226fef23efb714c59b1f3b99bd7d4cd317d\"" May 17 01:37:06.654689 env[2382]: 2025-05-17 01:37:06.626 [WARNING][9267] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="11b52f82144de40182a6e5987d2cc226fef23efb714c59b1f3b99bd7d4cd317d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.7--n--8226148b53-k8s-csi--node--driver--jxv7j-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"865bbbf9-e470-43da-b4e3-f1b9c7f9d88b", ResourceVersion:"1024", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 1, 36, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"68bf44dd5", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.7-n-8226148b53", ContainerID:"73b6d6e1d01e85a9f19aa211f76cb5900e9ef28247604ecfb7f8a330e7c689bf", Pod:"csi-node-driver-jxv7j", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.63.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calif27f47d5103", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 01:37:06.654689 env[2382]: 2025-05-17 01:37:06.626 [INFO][9267] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="11b52f82144de40182a6e5987d2cc226fef23efb714c59b1f3b99bd7d4cd317d" May 17 01:37:06.654689 env[2382]: 2025-05-17 01:37:06.626 [INFO][9267] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns 
name, ignoring. ContainerID="11b52f82144de40182a6e5987d2cc226fef23efb714c59b1f3b99bd7d4cd317d" iface="eth0" netns="" May 17 01:37:06.654689 env[2382]: 2025-05-17 01:37:06.626 [INFO][9267] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="11b52f82144de40182a6e5987d2cc226fef23efb714c59b1f3b99bd7d4cd317d" May 17 01:37:06.654689 env[2382]: 2025-05-17 01:37:06.626 [INFO][9267] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="11b52f82144de40182a6e5987d2cc226fef23efb714c59b1f3b99bd7d4cd317d" May 17 01:37:06.654689 env[2382]: 2025-05-17 01:37:06.643 [INFO][9284] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="11b52f82144de40182a6e5987d2cc226fef23efb714c59b1f3b99bd7d4cd317d" HandleID="k8s-pod-network.11b52f82144de40182a6e5987d2cc226fef23efb714c59b1f3b99bd7d4cd317d" Workload="ci--3510.3.7--n--8226148b53-k8s-csi--node--driver--jxv7j-eth0" May 17 01:37:06.654689 env[2382]: 2025-05-17 01:37:06.644 [INFO][9284] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 01:37:06.654689 env[2382]: 2025-05-17 01:37:06.644 [INFO][9284] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 01:37:06.654689 env[2382]: 2025-05-17 01:37:06.651 [WARNING][9284] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="11b52f82144de40182a6e5987d2cc226fef23efb714c59b1f3b99bd7d4cd317d" HandleID="k8s-pod-network.11b52f82144de40182a6e5987d2cc226fef23efb714c59b1f3b99bd7d4cd317d" Workload="ci--3510.3.7--n--8226148b53-k8s-csi--node--driver--jxv7j-eth0" May 17 01:37:06.654689 env[2382]: 2025-05-17 01:37:06.651 [INFO][9284] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="11b52f82144de40182a6e5987d2cc226fef23efb714c59b1f3b99bd7d4cd317d" HandleID="k8s-pod-network.11b52f82144de40182a6e5987d2cc226fef23efb714c59b1f3b99bd7d4cd317d" Workload="ci--3510.3.7--n--8226148b53-k8s-csi--node--driver--jxv7j-eth0" May 17 01:37:06.654689 env[2382]: 2025-05-17 01:37:06.652 [INFO][9284] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 01:37:06.654689 env[2382]: 2025-05-17 01:37:06.653 [INFO][9267] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="11b52f82144de40182a6e5987d2cc226fef23efb714c59b1f3b99bd7d4cd317d" May 17 01:37:06.655064 env[2382]: time="2025-05-17T01:37:06.654709859Z" level=info msg="TearDown network for sandbox \"11b52f82144de40182a6e5987d2cc226fef23efb714c59b1f3b99bd7d4cd317d\" successfully" May 17 01:37:06.656314 env[2382]: time="2025-05-17T01:37:06.656290658Z" level=info msg="RemovePodSandbox \"11b52f82144de40182a6e5987d2cc226fef23efb714c59b1f3b99bd7d4cd317d\" returns successfully" May 17 01:37:06.656647 env[2382]: time="2025-05-17T01:37:06.656623818Z" level=info msg="StopPodSandbox for \"28f231b52b1155e4f6bec2ac6458f7ca37f5a74c8a127964684b77a03a3697c3\"" May 17 01:37:06.717028 env[2382]: 2025-05-17 01:37:06.686 [WARNING][9314] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="28f231b52b1155e4f6bec2ac6458f7ca37f5a74c8a127964684b77a03a3697c3" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.7--n--8226148b53-k8s-coredns--7c65d6cfc9--nf2f9-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"a4fc78cc-d2a7-4712-9751-e0c08da51294", ResourceVersion:"997", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 1, 36, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.7-n-8226148b53", ContainerID:"08c1c610361b869b2a269551c8433b57d6ff063a1ea7a0a88ddb37002769a555", Pod:"coredns-7c65d6cfc9-nf2f9", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.63.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali1d845f7869a", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 01:37:06.717028 env[2382]: 2025-05-17 01:37:06.686 
[INFO][9314] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="28f231b52b1155e4f6bec2ac6458f7ca37f5a74c8a127964684b77a03a3697c3" May 17 01:37:06.717028 env[2382]: 2025-05-17 01:37:06.686 [INFO][9314] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="28f231b52b1155e4f6bec2ac6458f7ca37f5a74c8a127964684b77a03a3697c3" iface="eth0" netns="" May 17 01:37:06.717028 env[2382]: 2025-05-17 01:37:06.686 [INFO][9314] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="28f231b52b1155e4f6bec2ac6458f7ca37f5a74c8a127964684b77a03a3697c3" May 17 01:37:06.717028 env[2382]: 2025-05-17 01:37:06.686 [INFO][9314] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="28f231b52b1155e4f6bec2ac6458f7ca37f5a74c8a127964684b77a03a3697c3" May 17 01:37:06.717028 env[2382]: 2025-05-17 01:37:06.703 [INFO][9333] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="28f231b52b1155e4f6bec2ac6458f7ca37f5a74c8a127964684b77a03a3697c3" HandleID="k8s-pod-network.28f231b52b1155e4f6bec2ac6458f7ca37f5a74c8a127964684b77a03a3697c3" Workload="ci--3510.3.7--n--8226148b53-k8s-coredns--7c65d6cfc9--nf2f9-eth0" May 17 01:37:06.717028 env[2382]: 2025-05-17 01:37:06.703 [INFO][9333] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 01:37:06.717028 env[2382]: 2025-05-17 01:37:06.703 [INFO][9333] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 01:37:06.717028 env[2382]: 2025-05-17 01:37:06.710 [WARNING][9333] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="28f231b52b1155e4f6bec2ac6458f7ca37f5a74c8a127964684b77a03a3697c3" HandleID="k8s-pod-network.28f231b52b1155e4f6bec2ac6458f7ca37f5a74c8a127964684b77a03a3697c3" Workload="ci--3510.3.7--n--8226148b53-k8s-coredns--7c65d6cfc9--nf2f9-eth0" May 17 01:37:06.717028 env[2382]: 2025-05-17 01:37:06.710 [INFO][9333] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="28f231b52b1155e4f6bec2ac6458f7ca37f5a74c8a127964684b77a03a3697c3" HandleID="k8s-pod-network.28f231b52b1155e4f6bec2ac6458f7ca37f5a74c8a127964684b77a03a3697c3" Workload="ci--3510.3.7--n--8226148b53-k8s-coredns--7c65d6cfc9--nf2f9-eth0" May 17 01:37:06.717028 env[2382]: 2025-05-17 01:37:06.714 [INFO][9333] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 01:37:06.717028 env[2382]: 2025-05-17 01:37:06.715 [INFO][9314] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="28f231b52b1155e4f6bec2ac6458f7ca37f5a74c8a127964684b77a03a3697c3" May 17 01:37:06.717028 env[2382]: time="2025-05-17T01:37:06.716944431Z" level=info msg="TearDown network for sandbox \"28f231b52b1155e4f6bec2ac6458f7ca37f5a74c8a127964684b77a03a3697c3\" successfully" May 17 01:37:06.717028 env[2382]: time="2025-05-17T01:37:06.716971711Z" level=info msg="StopPodSandbox for \"28f231b52b1155e4f6bec2ac6458f7ca37f5a74c8a127964684b77a03a3697c3\" returns successfully" May 17 01:37:06.717438 env[2382]: time="2025-05-17T01:37:06.717293151Z" level=info msg="RemovePodSandbox for \"28f231b52b1155e4f6bec2ac6458f7ca37f5a74c8a127964684b77a03a3697c3\"" May 17 01:37:06.717438 env[2382]: time="2025-05-17T01:37:06.717329591Z" level=info msg="Forcibly stopping sandbox \"28f231b52b1155e4f6bec2ac6458f7ca37f5a74c8a127964684b77a03a3697c3\"" May 17 01:37:06.777187 env[2382]: 2025-05-17 01:37:06.748 [WARNING][9363] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="28f231b52b1155e4f6bec2ac6458f7ca37f5a74c8a127964684b77a03a3697c3" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3510.3.7--n--8226148b53-k8s-coredns--7c65d6cfc9--nf2f9-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"a4fc78cc-d2a7-4712-9751-e0c08da51294", ResourceVersion:"997", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 1, 36, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3510.3.7-n-8226148b53", ContainerID:"08c1c610361b869b2a269551c8433b57d6ff063a1ea7a0a88ddb37002769a555", Pod:"coredns-7c65d6cfc9-nf2f9", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.63.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali1d845f7869a", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 01:37:06.777187 env[2382]: 2025-05-17 01:37:06.748 
[INFO][9363] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="28f231b52b1155e4f6bec2ac6458f7ca37f5a74c8a127964684b77a03a3697c3" May 17 01:37:06.777187 env[2382]: 2025-05-17 01:37:06.748 [INFO][9363] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="28f231b52b1155e4f6bec2ac6458f7ca37f5a74c8a127964684b77a03a3697c3" iface="eth0" netns="" May 17 01:37:06.777187 env[2382]: 2025-05-17 01:37:06.748 [INFO][9363] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="28f231b52b1155e4f6bec2ac6458f7ca37f5a74c8a127964684b77a03a3697c3" May 17 01:37:06.777187 env[2382]: 2025-05-17 01:37:06.748 [INFO][9363] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="28f231b52b1155e4f6bec2ac6458f7ca37f5a74c8a127964684b77a03a3697c3" May 17 01:37:06.777187 env[2382]: 2025-05-17 01:37:06.765 [INFO][9385] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="28f231b52b1155e4f6bec2ac6458f7ca37f5a74c8a127964684b77a03a3697c3" HandleID="k8s-pod-network.28f231b52b1155e4f6bec2ac6458f7ca37f5a74c8a127964684b77a03a3697c3" Workload="ci--3510.3.7--n--8226148b53-k8s-coredns--7c65d6cfc9--nf2f9-eth0" May 17 01:37:06.777187 env[2382]: 2025-05-17 01:37:06.765 [INFO][9385] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 01:37:06.777187 env[2382]: 2025-05-17 01:37:06.765 [INFO][9385] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 01:37:06.777187 env[2382]: 2025-05-17 01:37:06.773 [WARNING][9385] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="28f231b52b1155e4f6bec2ac6458f7ca37f5a74c8a127964684b77a03a3697c3" HandleID="k8s-pod-network.28f231b52b1155e4f6bec2ac6458f7ca37f5a74c8a127964684b77a03a3697c3" Workload="ci--3510.3.7--n--8226148b53-k8s-coredns--7c65d6cfc9--nf2f9-eth0" May 17 01:37:06.777187 env[2382]: 2025-05-17 01:37:06.773 [INFO][9385] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="28f231b52b1155e4f6bec2ac6458f7ca37f5a74c8a127964684b77a03a3697c3" HandleID="k8s-pod-network.28f231b52b1155e4f6bec2ac6458f7ca37f5a74c8a127964684b77a03a3697c3" Workload="ci--3510.3.7--n--8226148b53-k8s-coredns--7c65d6cfc9--nf2f9-eth0" May 17 01:37:06.777187 env[2382]: 2025-05-17 01:37:06.774 [INFO][9385] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 01:37:06.777187 env[2382]: 2025-05-17 01:37:06.775 [INFO][9363] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="28f231b52b1155e4f6bec2ac6458f7ca37f5a74c8a127964684b77a03a3697c3" May 17 01:37:06.777582 env[2382]: time="2025-05-17T01:37:06.777201404Z" level=info msg="TearDown network for sandbox \"28f231b52b1155e4f6bec2ac6458f7ca37f5a74c8a127964684b77a03a3697c3\" successfully" May 17 01:37:06.778787 env[2382]: time="2025-05-17T01:37:06.778764163Z" level=info msg="RemovePodSandbox \"28f231b52b1155e4f6bec2ac6458f7ca37f5a74c8a127964684b77a03a3697c3\" returns successfully" May 17 01:37:15.046593 kubelet[3578]: I0517 01:37:15.046553 3578 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 17 01:37:15.069000 audit[9446]: NETFILTER_CFG table=filter:117 family=2 entries=12 op=nft_register_rule pid=9446 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 17 01:37:15.069000 audit[9446]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=4504 a0=3 a1=ffffca6f4240 a2=0 a3=1 items=0 ppid=3808 pid=9446 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) May 17 01:37:15.154382 kernel: audit: type=1325 audit(1747445835.069:397): table=filter:117 family=2 entries=12 op=nft_register_rule pid=9446 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 17 01:37:15.154565 kernel: audit: type=1300 audit(1747445835.069:397): arch=c00000b7 syscall=211 success=yes exit=4504 a0=3 a1=ffffca6f4240 a2=0 a3=1 items=0 ppid=3808 pid=9446 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 01:37:15.154585 kernel: audit: type=1327 audit(1747445835.069:397): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 17 01:37:15.069000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 17 01:37:15.182000 audit[9446]: NETFILTER_CFG table=nat:118 family=2 entries=34 op=nft_register_chain pid=9446 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 17 01:37:15.211863 kernel: audit: type=1325 audit(1747445835.182:398): table=nat:118 family=2 entries=34 op=nft_register_chain pid=9446 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 17 01:37:15.211916 kernel: audit: type=1300 audit(1747445835.182:398): arch=c00000b7 syscall=211 success=yes exit=11236 a0=3 a1=ffffca6f4240 a2=0 a3=1 items=0 ppid=3808 pid=9446 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 01:37:15.182000 audit[9446]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=11236 a0=3 a1=ffffca6f4240 a2=0 a3=1 items=0 ppid=3808 pid=9446 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" 
exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 01:37:15.182000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 17 01:37:15.294817 kernel: audit: type=1327 audit(1747445835.182:398): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 17 01:37:15.816518 kubelet[3578]: E0517 01:37:15.816473 3578 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\"\"" pod="calico-system/goldmane-8f77d7b6c-czxt2" podUID="c4371095-cd92-4ea8-9564-c86ac0de3064" May 17 01:37:18.816637 env[2382]: time="2025-05-17T01:37:18.816599128Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\"" May 17 01:37:18.958981 env[2382]: time="2025-05-17T01:37:18.958915819Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" host=ghcr.io May 17 01:37:18.959205 env[2382]: time="2025-05-17T01:37:18.959180739Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" May 17 01:37:18.959370 kubelet[3578]: E0517 01:37:18.959338 3578 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 17 01:37:18.959607 
kubelet[3578]: E0517 01:37:18.959381 3578 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 17 01:37:18.959607 kubelet[3578]: E0517 01:37:18.959476 3578 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.0,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:98d2fa92e9aa4ed787c6df8c04195565,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-nnbsj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
whisker-7cfb685fbd-76hs6_calico-system(34828fa1-51fa-4503-8a6b-b736774334d2): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" logger="UnhandledError" May 17 01:37:18.961284 env[2382]: time="2025-05-17T01:37:18.961266258Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\"" May 17 01:37:19.114918 env[2382]: time="2025-05-17T01:37:19.114784348Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" host=ghcr.io May 17 01:37:19.115067 env[2382]: time="2025-05-17T01:37:19.114998348Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" May 17 01:37:19.115193 kubelet[3578]: E0517 01:37:19.115149 3578 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 17 01:37:19.115338 kubelet[3578]: E0517 01:37:19.115320 3578 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" 
image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 17 01:37:19.115512 kubelet[3578]: E0517 01:37:19.115477 3578 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nnbsj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in 
pod whisker-7cfb685fbd-76hs6_calico-system(34828fa1-51fa-4503-8a6b-b736774334d2): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" logger="UnhandledError" May 17 01:37:19.116803 kubelet[3578]: E0517 01:37:19.116751 3578 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden\"]" pod="calico-system/whisker-7cfb685fbd-76hs6" podUID="34828fa1-51fa-4503-8a6b-b736774334d2" May 17 01:37:27.817157 env[2382]: time="2025-05-17T01:37:27.817116751Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\"" May 17 01:37:27.944470 env[2382]: time="2025-05-17T01:37:27.944399789Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" host=ghcr.io May 17 01:37:27.944832 env[2382]: time="2025-05-17T01:37:27.944803837Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" May 17 
01:37:27.945075 kubelet[3578]: E0517 01:37:27.945040 3578 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 17 01:37:27.945394 kubelet[3578]: E0517 01:37:27.945374 3578 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 17 01:37:27.945584 kubelet[3578]: E0517 01:37:27.945542 3578 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/
tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pnmns,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-8f77d7b6c-czxt2_calico-system(c4371095-cd92-4ea8-9564-c86ac0de3064): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" logger="UnhandledError" May 17 01:37:27.946816 kubelet[3578]: E0517 01:37:27.946789 
3578 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden\"" pod="calico-system/goldmane-8f77d7b6c-czxt2" podUID="c4371095-cd92-4ea8-9564-c86ac0de3064" May 17 01:37:32.816915 kubelet[3578]: E0517 01:37:32.816870 3578 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\"\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\"\"]" pod="calico-system/whisker-7cfb685fbd-76hs6" podUID="34828fa1-51fa-4503-8a6b-b736774334d2" May 17 01:37:40.817008 kubelet[3578]: E0517 01:37:40.816970 3578 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\"\"" pod="calico-system/goldmane-8f77d7b6c-czxt2" podUID="c4371095-cd92-4ea8-9564-c86ac0de3064" May 17 01:37:46.817297 kubelet[3578]: E0517 01:37:46.817257 3578 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\"\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\"\"]" pod="calico-system/whisker-7cfb685fbd-76hs6" podUID="34828fa1-51fa-4503-8a6b-b736774334d2" May 17 01:37:54.816408 kubelet[3578]: E0517 01:37:54.816372 3578 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\"\"" pod="calico-system/goldmane-8f77d7b6c-czxt2" podUID="c4371095-cd92-4ea8-9564-c86ac0de3064" May 17 01:38:00.816307 env[2382]: time="2025-05-17T01:38:00.816260470Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\"" May 17 01:38:00.970030 env[2382]: time="2025-05-17T01:38:00.969976680Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" host=ghcr.io May 17 01:38:00.970275 env[2382]: time="2025-05-17T01:38:00.970252483Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" May 17 01:38:00.970405 kubelet[3578]: E0517 01:38:00.970367 3578 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 17 01:38:00.970622 kubelet[3578]: E0517 01:38:00.970420 3578 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 17 01:38:00.970622 kubelet[3578]: E0517 01:38:00.970538 3578 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.0,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:98d2fa92e9aa4ed787c6df8c04195565,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-nnbsj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-7cfb685fbd-76hs6_calico-system(34828fa1-51fa-4503-8a6b-b736774334d2): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" logger="UnhandledError" May 17 01:38:00.972176 env[2382]: time="2025-05-17T01:38:00.972155859Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\"" May 17 01:38:01.158010 
env[2382]: time="2025-05-17T01:38:01.157962870Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" host=ghcr.io May 17 01:38:01.158248 env[2382]: time="2025-05-17T01:38:01.158223152Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" May 17 01:38:01.158408 kubelet[3578]: E0517 01:38:01.158370 3578 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 17 01:38:01.158443 kubelet[3578]: E0517 01:38:01.158421 3578 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 17 01:38:01.158617 kubelet[3578]: E0517 01:38:01.158563 3578 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nnbsj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-7cfb685fbd-76hs6_calico-system(34828fa1-51fa-4503-8a6b-b736774334d2): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": 
failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" logger="UnhandledError" May 17 01:38:01.159763 kubelet[3578]: E0517 01:38:01.159732 3578 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden\"]" pod="calico-system/whisker-7cfb685fbd-76hs6" podUID="34828fa1-51fa-4503-8a6b-b736774334d2" May 17 01:38:06.816160 kubelet[3578]: E0517 01:38:06.816123 3578 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\"\"" pod="calico-system/goldmane-8f77d7b6c-czxt2" podUID="c4371095-cd92-4ea8-9564-c86ac0de3064" May 17 01:38:16.816853 kubelet[3578]: E0517 01:38:16.816817 3578 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\"\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\"\"]" pod="calico-system/whisker-7cfb685fbd-76hs6" podUID="34828fa1-51fa-4503-8a6b-b736774334d2" May 17 01:38:17.816799 env[2382]: time="2025-05-17T01:38:17.816751900Z" 
level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\"" May 17 01:38:18.181920 env[2382]: time="2025-05-17T01:38:18.181867902Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" host=ghcr.io May 17 01:38:18.182174 env[2382]: time="2025-05-17T01:38:18.182151783Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" May 17 01:38:18.182287 kubelet[3578]: E0517 01:38:18.182260 3578 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 17 01:38:18.182507 kubelet[3578]: E0517 01:38:18.182297 3578 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 17 01:38:18.182507 kubelet[3578]: E0517 01:38:18.182400 3578 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pnmns,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-8f77d7b6c-czxt2_calico-system(c4371095-cd92-4ea8-9564-c86ac0de3064): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" logger="UnhandledError" May 17 01:38:18.183514 kubelet[3578]: E0517 01:38:18.183491 3578 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden\"" pod="calico-system/goldmane-8f77d7b6c-czxt2" podUID="c4371095-cd92-4ea8-9564-c86ac0de3064" May 17 01:38:28.816706 kubelet[3578]: E0517 01:38:28.816538 3578 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\"\", failed to 
\"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\"\"]" pod="calico-system/whisker-7cfb685fbd-76hs6" podUID="34828fa1-51fa-4503-8a6b-b736774334d2" May 17 01:38:31.816873 kubelet[3578]: E0517 01:38:31.816827 3578 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\"\"" pod="calico-system/goldmane-8f77d7b6c-czxt2" podUID="c4371095-cd92-4ea8-9564-c86ac0de3064" May 17 01:38:40.817117 kubelet[3578]: E0517 01:38:40.817072 3578 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\"\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\"\"]" pod="calico-system/whisker-7cfb685fbd-76hs6" podUID="34828fa1-51fa-4503-8a6b-b736774334d2" May 17 01:38:42.816691 kubelet[3578]: E0517 01:38:42.816660 3578 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\"\"" pod="calico-system/goldmane-8f77d7b6c-czxt2" podUID="c4371095-cd92-4ea8-9564-c86ac0de3064" May 17 01:38:52.816689 kubelet[3578]: E0517 01:38:52.816650 3578 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\"\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\"\"]" pod="calico-system/whisker-7cfb685fbd-76hs6" podUID="34828fa1-51fa-4503-8a6b-b736774334d2" May 17 
01:38:56.816977 kubelet[3578]: E0517 01:38:56.816925 3578 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\"\"" pod="calico-system/goldmane-8f77d7b6c-czxt2" podUID="c4371095-cd92-4ea8-9564-c86ac0de3064" May 17 01:39:07.817098 kubelet[3578]: E0517 01:39:07.817060 3578 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\"\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\"\"]" pod="calico-system/whisker-7cfb685fbd-76hs6" podUID="34828fa1-51fa-4503-8a6b-b736774334d2" May 17 01:39:10.816547 kubelet[3578]: E0517 01:39:10.816514 3578 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\"\"" pod="calico-system/goldmane-8f77d7b6c-czxt2" podUID="c4371095-cd92-4ea8-9564-c86ac0de3064" May 17 01:39:21.817326 env[2382]: time="2025-05-17T01:39:21.817282963Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\"" May 17 01:39:21.983533 env[2382]: time="2025-05-17T01:39:21.983478467Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" host=ghcr.io May 17 01:39:21.983935 env[2382]: time="2025-05-17T01:39:21.983903828Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" May 17 01:39:21.984181 
kubelet[3578]: E0517 01:39:21.984144 3578 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 17 01:39:21.984533 kubelet[3578]: E0517 01:39:21.984512 3578 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 17 01:39:21.984710 kubelet[3578]: E0517 01:39:21.984677 3578 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.0,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:98d2fa92e9aa4ed787c6df8c04195565,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-nnbsj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootF
ilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-7cfb685fbd-76hs6_calico-system(34828fa1-51fa-4503-8a6b-b736774334d2): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" logger="UnhandledError" May 17 01:39:21.986597 env[2382]: time="2025-05-17T01:39:21.986574556Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\"" May 17 01:39:22.137579 env[2382]: time="2025-05-17T01:39:22.137474331Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" host=ghcr.io May 17 01:39:22.141286 env[2382]: time="2025-05-17T01:39:22.141244063Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" May 17 01:39:22.141464 kubelet[3578]: E0517 01:39:22.141431 3578 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" 
image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 17 01:39:22.141505 kubelet[3578]: E0517 01:39:22.141476 3578 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 17 01:39:22.141648 kubelet[3578]: E0517 01:39:22.141593 3578 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nnbsj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*tru
e,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-7cfb685fbd-76hs6_calico-system(34828fa1-51fa-4503-8a6b-b736774334d2): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" logger="UnhandledError" May 17 01:39:22.142789 kubelet[3578]: E0517 01:39:22.142761 3578 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden\"]" pod="calico-system/whisker-7cfb685fbd-76hs6" podUID="34828fa1-51fa-4503-8a6b-b736774334d2" May 17 01:39:22.816371 kubelet[3578]: E0517 01:39:22.816346 3578 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\"\"" pod="calico-system/goldmane-8f77d7b6c-czxt2" 
podUID="c4371095-cd92-4ea8-9564-c86ac0de3064" May 17 01:39:35.817268 kubelet[3578]: E0517 01:39:35.817226 3578 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\"\"" pod="calico-system/goldmane-8f77d7b6c-czxt2" podUID="c4371095-cd92-4ea8-9564-c86ac0de3064" May 17 01:39:37.817145 kubelet[3578]: E0517 01:39:37.817107 3578 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\"\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\"\"]" pod="calico-system/whisker-7cfb685fbd-76hs6" podUID="34828fa1-51fa-4503-8a6b-b736774334d2" May 17 01:39:49.817932 env[2382]: time="2025-05-17T01:39:49.817883323Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\"" May 17 01:39:49.964286 env[2382]: time="2025-05-17T01:39:49.964232030Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" host=ghcr.io May 17 01:39:49.964705 env[2382]: time="2025-05-17T01:39:49.964673513Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" May 17 01:39:49.964933 kubelet[3578]: E0517 01:39:49.964897 3578 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: 
failed to fetch anonymous token: unexpected status: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 17 01:39:49.965270 kubelet[3578]: E0517 01:39:49.965249 3578 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 17 01:39:49.965475 kubelet[3578]: E0517 01:39:49.965435 3578 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pnmns,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/
serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-8f77d7b6c-czxt2_calico-system(c4371095-cd92-4ea8-9564-c86ac0de3064): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" logger="UnhandledError" May 17 01:39:49.966716 kubelet[3578]: E0517 01:39:49.966689 3578 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to 
fetch anonymous token: unexpected status: 403 Forbidden\"" pod="calico-system/goldmane-8f77d7b6c-czxt2" podUID="c4371095-cd92-4ea8-9564-c86ac0de3064" May 17 01:39:50.816854 kubelet[3578]: E0517 01:39:50.816826 3578 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\"\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\"\"]" pod="calico-system/whisker-7cfb685fbd-76hs6" podUID="34828fa1-51fa-4503-8a6b-b736774334d2" May 17 01:40:03.816708 kubelet[3578]: E0517 01:40:03.816659 3578 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\"\"" pod="calico-system/goldmane-8f77d7b6c-czxt2" podUID="c4371095-cd92-4ea8-9564-c86ac0de3064" May 17 01:40:04.816559 kubelet[3578]: E0517 01:40:04.816525 3578 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\"\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\"\"]" pod="calico-system/whisker-7cfb685fbd-76hs6" podUID="34828fa1-51fa-4503-8a6b-b736774334d2" May 17 01:40:16.815993 kubelet[3578]: E0517 01:40:16.815959 3578 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\"\"" pod="calico-system/goldmane-8f77d7b6c-czxt2" podUID="c4371095-cd92-4ea8-9564-c86ac0de3064" May 17 01:40:17.816553 kubelet[3578]: E0517 01:40:17.816518 3578 pod_workers.go:1301] "Error syncing 
pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\"\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\"\"]" pod="calico-system/whisker-7cfb685fbd-76hs6" podUID="34828fa1-51fa-4503-8a6b-b736774334d2" May 17 01:40:29.816517 kubelet[3578]: E0517 01:40:29.816478 3578 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\"\"" pod="calico-system/goldmane-8f77d7b6c-czxt2" podUID="c4371095-cd92-4ea8-9564-c86ac0de3064" May 17 01:40:30.816862 kubelet[3578]: E0517 01:40:30.816832 3578 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\"\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\"\"]" pod="calico-system/whisker-7cfb685fbd-76hs6" podUID="34828fa1-51fa-4503-8a6b-b736774334d2" May 17 01:40:43.816718 kubelet[3578]: E0517 01:40:43.816665 3578 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\"\"" pod="calico-system/goldmane-8f77d7b6c-czxt2" podUID="c4371095-cd92-4ea8-9564-c86ac0de3064" May 17 01:40:45.817520 kubelet[3578]: E0517 01:40:45.817485 3578 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\"\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off 
pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\"\"]" pod="calico-system/whisker-7cfb685fbd-76hs6" podUID="34828fa1-51fa-4503-8a6b-b736774334d2" May 17 01:40:58.816952 kubelet[3578]: E0517 01:40:58.816909 3578 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\"\"" pod="calico-system/goldmane-8f77d7b6c-czxt2" podUID="c4371095-cd92-4ea8-9564-c86ac0de3064" May 17 01:41:00.816459 kubelet[3578]: E0517 01:41:00.816426 3578 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\"\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\"\"]" pod="calico-system/whisker-7cfb685fbd-76hs6" podUID="34828fa1-51fa-4503-8a6b-b736774334d2" May 17 01:41:11.816057 kubelet[3578]: E0517 01:41:11.816019 3578 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\"\"" pod="calico-system/goldmane-8f77d7b6c-czxt2" podUID="c4371095-cd92-4ea8-9564-c86ac0de3064" May 17 01:41:15.819693 kubelet[3578]: E0517 01:41:15.819644 3578 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\"\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\"\"]" pod="calico-system/whisker-7cfb685fbd-76hs6" podUID="34828fa1-51fa-4503-8a6b-b736774334d2" May 17 01:41:23.816598 kubelet[3578]: E0517 01:41:23.816560 3578 pod_workers.go:1301] 
"Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\"\"" pod="calico-system/goldmane-8f77d7b6c-czxt2" podUID="c4371095-cd92-4ea8-9564-c86ac0de3064" May 17 01:41:29.816622 kubelet[3578]: E0517 01:41:29.816549 3578 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\"\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\"\"]" pod="calico-system/whisker-7cfb685fbd-76hs6" podUID="34828fa1-51fa-4503-8a6b-b736774334d2" May 17 01:41:38.816178 kubelet[3578]: E0517 01:41:38.816136 3578 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\"\"" pod="calico-system/goldmane-8f77d7b6c-czxt2" podUID="c4371095-cd92-4ea8-9564-c86ac0de3064" May 17 01:41:40.817256 kubelet[3578]: E0517 01:41:40.817211 3578 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\"\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\"\"]" pod="calico-system/whisker-7cfb685fbd-76hs6" podUID="34828fa1-51fa-4503-8a6b-b736774334d2" May 17 01:41:50.816239 kubelet[3578]: E0517 01:41:50.816193 3578 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\"\"" pod="calico-system/goldmane-8f77d7b6c-czxt2" 
podUID="c4371095-cd92-4ea8-9564-c86ac0de3064" May 17 01:41:55.817723 kubelet[3578]: E0517 01:41:55.817652 3578 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\"\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\"\"]" pod="calico-system/whisker-7cfb685fbd-76hs6" podUID="34828fa1-51fa-4503-8a6b-b736774334d2" May 17 01:42:01.817672 kubelet[3578]: E0517 01:42:01.817609 3578 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\"\"" pod="calico-system/goldmane-8f77d7b6c-czxt2" podUID="c4371095-cd92-4ea8-9564-c86ac0de3064" May 17 01:42:07.816923 env[2382]: time="2025-05-17T01:42:07.816883711Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\"" May 17 01:42:08.091970 env[2382]: time="2025-05-17T01:42:08.091865080Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" host=ghcr.io May 17 01:42:08.092165 env[2382]: time="2025-05-17T01:42:08.092138681Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" May 17 01:42:08.092327 kubelet[3578]: E0517 01:42:08.092284 3578 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to 
fetch anonymous token: unexpected status: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 17 01:42:08.092564 kubelet[3578]: E0517 01:42:08.092345 3578 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 17 01:42:08.092564 kubelet[3578]: E0517 01:42:08.092467 3578 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.0,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:98d2fa92e9aa4ed787c6df8c04195565,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-nnbsj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]Volum
eDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-7cfb685fbd-76hs6_calico-system(34828fa1-51fa-4503-8a6b-b736774334d2): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" logger="UnhandledError" May 17 01:42:08.094105 env[2382]: time="2025-05-17T01:42:08.094085968Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\"" May 17 01:42:08.229547 env[2382]: time="2025-05-17T01:42:08.229493785Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" host=ghcr.io May 17 01:42:08.229772 env[2382]: time="2025-05-17T01:42:08.229749146Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" May 17 01:42:08.229925 kubelet[3578]: E0517 01:42:08.229900 3578 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 17 01:42:08.229958 kubelet[3578]: E0517 01:42:08.229934 3578 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to 
authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 17 01:42:08.230030 kubelet[3578]: E0517 01:42:08.230000 3578 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nnbsj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:n
il,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-7cfb685fbd-76hs6_calico-system(34828fa1-51fa-4503-8a6b-b736774334d2): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" logger="UnhandledError" May 17 01:42:08.231149 kubelet[3578]: E0517 01:42:08.231126 3578 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden\"]" pod="calico-system/whisker-7cfb685fbd-76hs6" podUID="34828fa1-51fa-4503-8a6b-b736774334d2" May 17 01:42:16.817000 kubelet[3578]: E0517 01:42:16.816953 3578 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\"\"" pod="calico-system/goldmane-8f77d7b6c-czxt2" podUID="c4371095-cd92-4ea8-9564-c86ac0de3064" May 17 01:42:23.816854 kubelet[3578]: E0517 01:42:23.816807 3578 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\"\", failed to \"StartContainer\" for \"whisker-backend\" with 
ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\"\"]" pod="calico-system/whisker-7cfb685fbd-76hs6" podUID="34828fa1-51fa-4503-8a6b-b736774334d2" May 17 01:42:29.817345 kubelet[3578]: E0517 01:42:29.817304 3578 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\"\"" pod="calico-system/goldmane-8f77d7b6c-czxt2" podUID="c4371095-cd92-4ea8-9564-c86ac0de3064" May 17 01:42:37.816587 kubelet[3578]: E0517 01:42:37.816539 3578 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\"\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\"\"]" pod="calico-system/whisker-7cfb685fbd-76hs6" podUID="34828fa1-51fa-4503-8a6b-b736774334d2" May 17 01:42:44.816878 env[2382]: time="2025-05-17T01:42:44.816833050Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\"" May 17 01:42:44.995003 env[2382]: time="2025-05-17T01:42:44.994935630Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" host=ghcr.io May 17 01:42:44.995368 env[2382]: time="2025-05-17T01:42:44.995332872Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" May 17 01:42:44.995648 kubelet[3578]: E0517 01:42:44.995588 3578 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack 
image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 17 01:42:44.996065 kubelet[3578]: E0517 01:42:44.996041 3578 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 17 01:42:44.996278 kubelet[3578]: E0517 01:42:44.996235 3578 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPr
opagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pnmns,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-8f77d7b6c-czxt2_calico-system(c4371095-cd92-4ea8-9564-c86ac0de3064): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" logger="UnhandledError" May 17 01:42:44.997532 kubelet[3578]: E0517 01:42:44.997504 3578 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden\"" pod="calico-system/goldmane-8f77d7b6c-czxt2" podUID="c4371095-cd92-4ea8-9564-c86ac0de3064" May 17 01:42:50.817009 kubelet[3578]: E0517 01:42:50.816956 3578 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\"\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\"\"]" pod="calico-system/whisker-7cfb685fbd-76hs6" podUID="34828fa1-51fa-4503-8a6b-b736774334d2" May 17 01:42:59.816478 kubelet[3578]: E0517 01:42:59.816433 3578 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\"\"" pod="calico-system/goldmane-8f77d7b6c-czxt2" podUID="c4371095-cd92-4ea8-9564-c86ac0de3064" May 17 01:43:02.817165 kubelet[3578]: E0517 01:43:02.817118 3578 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\"\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\"\"]" pod="calico-system/whisker-7cfb685fbd-76hs6" podUID="34828fa1-51fa-4503-8a6b-b736774334d2" May 17 01:43:13.816397 kubelet[3578]: E0517 01:43:13.816358 3578 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\"\"" 
pod="calico-system/goldmane-8f77d7b6c-czxt2" podUID="c4371095-cd92-4ea8-9564-c86ac0de3064" May 17 01:43:16.817040 kubelet[3578]: E0517 01:43:16.816992 3578 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\"\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\"\"]" pod="calico-system/whisker-7cfb685fbd-76hs6" podUID="34828fa1-51fa-4503-8a6b-b736774334d2" May 17 01:43:27.816075 kubelet[3578]: E0517 01:43:27.816039 3578 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\"\"" pod="calico-system/goldmane-8f77d7b6c-czxt2" podUID="c4371095-cd92-4ea8-9564-c86ac0de3064" May 17 01:43:31.817085 kubelet[3578]: E0517 01:43:31.817038 3578 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\"\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\"\"]" pod="calico-system/whisker-7cfb685fbd-76hs6" podUID="34828fa1-51fa-4503-8a6b-b736774334d2" May 17 01:43:38.816780 kubelet[3578]: E0517 01:43:38.816739 3578 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\"\"" pod="calico-system/goldmane-8f77d7b6c-czxt2" podUID="c4371095-cd92-4ea8-9564-c86ac0de3064" May 17 01:43:46.816714 kubelet[3578]: E0517 01:43:46.816673 3578 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for 
\"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\"\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\"\"]" pod="calico-system/whisker-7cfb685fbd-76hs6" podUID="34828fa1-51fa-4503-8a6b-b736774334d2" May 17 01:43:49.816454 kubelet[3578]: E0517 01:43:49.816403 3578 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\"\"" pod="calico-system/goldmane-8f77d7b6c-czxt2" podUID="c4371095-cd92-4ea8-9564-c86ac0de3064" May 17 01:44:00.817010 kubelet[3578]: E0517 01:44:00.816967 3578 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\"\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\"\"]" pod="calico-system/whisker-7cfb685fbd-76hs6" podUID="34828fa1-51fa-4503-8a6b-b736774334d2" May 17 01:44:03.816833 kubelet[3578]: E0517 01:44:03.816801 3578 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\"\"" pod="calico-system/goldmane-8f77d7b6c-czxt2" podUID="c4371095-cd92-4ea8-9564-c86ac0de3064" May 17 01:44:13.817021 kubelet[3578]: E0517 01:44:13.816970 3578 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\"\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image 
\\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\"\"]" pod="calico-system/whisker-7cfb685fbd-76hs6" podUID="34828fa1-51fa-4503-8a6b-b736774334d2" May 17 01:44:18.816897 kubelet[3578]: E0517 01:44:18.816849 3578 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\"\"" pod="calico-system/goldmane-8f77d7b6c-czxt2" podUID="c4371095-cd92-4ea8-9564-c86ac0de3064" May 17 01:44:24.816445 kubelet[3578]: E0517 01:44:24.816398 3578 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\"\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\"\"]" pod="calico-system/whisker-7cfb685fbd-76hs6" podUID="34828fa1-51fa-4503-8a6b-b736774334d2" May 17 01:44:33.816424 kubelet[3578]: E0517 01:44:33.816224 3578 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\"\"" pod="calico-system/goldmane-8f77d7b6c-czxt2" podUID="c4371095-cd92-4ea8-9564-c86ac0de3064" May 17 01:44:37.816688 kubelet[3578]: E0517 01:44:37.816646 3578 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\"\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\"\"]" pod="calico-system/whisker-7cfb685fbd-76hs6" podUID="34828fa1-51fa-4503-8a6b-b736774334d2" May 17 01:44:48.816248 kubelet[3578]: E0517 01:44:48.816207 3578 pod_workers.go:1301] "Error syncing 
pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\"\"" pod="calico-system/goldmane-8f77d7b6c-czxt2" podUID="c4371095-cd92-4ea8-9564-c86ac0de3064" May 17 01:44:51.817251 kubelet[3578]: E0517 01:44:51.817205 3578 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\"\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\"\"]" pod="calico-system/whisker-7cfb685fbd-76hs6" podUID="34828fa1-51fa-4503-8a6b-b736774334d2" May 17 01:45:03.816734 kubelet[3578]: E0517 01:45:03.816696 3578 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\"\"" pod="calico-system/goldmane-8f77d7b6c-czxt2" podUID="c4371095-cd92-4ea8-9564-c86ac0de3064" May 17 01:45:04.816273 kubelet[3578]: E0517 01:45:04.816242 3578 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\"\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\"\"]" pod="calico-system/whisker-7cfb685fbd-76hs6" podUID="34828fa1-51fa-4503-8a6b-b736774334d2" May 17 01:45:18.816659 kubelet[3578]: E0517 01:45:18.816611 3578 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\"\"" pod="calico-system/goldmane-8f77d7b6c-czxt2" 
podUID="c4371095-cd92-4ea8-9564-c86ac0de3064" May 17 01:45:18.817084 kubelet[3578]: E0517 01:45:18.816849 3578 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\"\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\"\"]" pod="calico-system/whisker-7cfb685fbd-76hs6" podUID="34828fa1-51fa-4503-8a6b-b736774334d2" May 17 01:45:33.816162 kubelet[3578]: E0517 01:45:33.816123 3578 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\"\"" pod="calico-system/goldmane-8f77d7b6c-czxt2" podUID="c4371095-cd92-4ea8-9564-c86ac0de3064" May 17 01:45:33.816756 kubelet[3578]: E0517 01:45:33.816408 3578 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\"\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\"\"]" pod="calico-system/whisker-7cfb685fbd-76hs6" podUID="34828fa1-51fa-4503-8a6b-b736774334d2" May 17 01:45:38.755180 update_engine[2372]: I0517 01:45:38.755137 2372 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs May 17 01:45:38.755180 update_engine[2372]: I0517 01:45:38.755178 2372 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs May 17 01:45:38.755910 update_engine[2372]: I0517 01:45:38.755899 2372 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs May 17 01:45:38.756227 update_engine[2372]: I0517 01:45:38.756217 2372 
omaha_request_params.cc:62] Current group set to lts May 17 01:45:38.756350 update_engine[2372]: I0517 01:45:38.756342 2372 update_attempter.cc:499] Already updated boot flags. Skipping. May 17 01:45:38.756350 update_engine[2372]: I0517 01:45:38.756348 2372 update_attempter.cc:643] Scheduling an action processor start. May 17 01:45:38.756394 update_engine[2372]: I0517 01:45:38.756363 2372 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction May 17 01:45:38.756394 update_engine[2372]: I0517 01:45:38.756385 2372 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs May 17 01:45:38.756919 update_engine[2372]: I0517 01:45:38.756906 2372 omaha_request_action.cc:270] Posting an Omaha request to disabled May 17 01:45:38.756919 update_engine[2372]: I0517 01:45:38.756915 2372 omaha_request_action.cc:271] Request: May 17 01:45:38.756919 update_engine[2372]: May 17 01:45:38.756919 update_engine[2372]: May 17 01:45:38.756919 update_engine[2372]: May 17 01:45:38.756919 update_engine[2372]: May 17 01:45:38.756919 update_engine[2372]: May 17 01:45:38.756919 update_engine[2372]: May 17 01:45:38.756919 update_engine[2372]: May 17 01:45:38.756919 update_engine[2372]: May 17 01:45:38.756919 update_engine[2372]: I0517 01:45:38.756919 2372 libcurl_http_fetcher.cc:47] Starting/Resuming transfer May 17 01:45:38.757477 locksmithd[2419]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 May 17 01:45:38.757732 update_engine[2372]: I0517 01:45:38.757719 2372 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP May 17 01:45:38.757807 update_engine[2372]: E0517 01:45:38.757799 2372 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled May 17 01:45:38.757863 update_engine[2372]: I0517 01:45:38.757851 2372 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 May 17 01:45:47.817065 kubelet[3578]: E0517 01:45:47.817020 3578 pod_workers.go:1301] 
"Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\"\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\"\"]" pod="calico-system/whisker-7cfb685fbd-76hs6" podUID="34828fa1-51fa-4503-8a6b-b736774334d2" May 17 01:45:48.664851 update_engine[2372]: I0517 01:45:48.664809 2372 libcurl_http_fetcher.cc:47] Starting/Resuming transfer May 17 01:45:48.665209 update_engine[2372]: I0517 01:45:48.665060 2372 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP May 17 01:45:48.665209 update_engine[2372]: E0517 01:45:48.665136 2372 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled May 17 01:45:48.665209 update_engine[2372]: I0517 01:45:48.665190 2372 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 May 17 01:45:48.818081 kubelet[3578]: E0517 01:45:48.818049 3578 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\"\"" pod="calico-system/goldmane-8f77d7b6c-czxt2" podUID="c4371095-cd92-4ea8-9564-c86ac0de3064" May 17 01:45:58.665512 update_engine[2372]: I0517 01:45:58.665458 2372 libcurl_http_fetcher.cc:47] Starting/Resuming transfer May 17 01:45:58.666035 update_engine[2372]: I0517 01:45:58.665692 2372 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP May 17 01:45:58.666035 update_engine[2372]: E0517 01:45:58.665781 2372 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled May 17 01:45:58.666035 update_engine[2372]: I0517 01:45:58.665851 2372 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 May 17 01:45:58.816680 kubelet[3578]: E0517 01:45:58.816640 3578 pod_workers.go:1301] "Error syncing pod, 
skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\"\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\"\"]" pod="calico-system/whisker-7cfb685fbd-76hs6" podUID="34828fa1-51fa-4503-8a6b-b736774334d2" May 17 01:46:03.816773 kubelet[3578]: E0517 01:46:03.816729 3578 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\"\"" pod="calico-system/goldmane-8f77d7b6c-czxt2" podUID="c4371095-cd92-4ea8-9564-c86ac0de3064" May 17 01:46:08.664952 update_engine[2372]: I0517 01:46:08.664907 2372 libcurl_http_fetcher.cc:47] Starting/Resuming transfer May 17 01:46:08.665339 update_engine[2372]: I0517 01:46:08.665146 2372 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP May 17 01:46:08.665339 update_engine[2372]: E0517 01:46:08.665225 2372 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled May 17 01:46:08.665339 update_engine[2372]: I0517 01:46:08.665271 2372 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded May 17 01:46:08.665339 update_engine[2372]: I0517 01:46:08.665277 2372 omaha_request_action.cc:621] Omaha request response: May 17 01:46:08.665339 update_engine[2372]: E0517 01:46:08.665338 2372 omaha_request_action.cc:640] Omaha request network transfer failed. May 17 01:46:08.665461 update_engine[2372]: I0517 01:46:08.665349 2372 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. 
May 17 01:46:08.665461 update_engine[2372]: I0517 01:46:08.665351 2372 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction May 17 01:46:08.665461 update_engine[2372]: I0517 01:46:08.665354 2372 update_attempter.cc:306] Processing Done. May 17 01:46:08.665461 update_engine[2372]: E0517 01:46:08.665366 2372 update_attempter.cc:619] Update failed. May 17 01:46:08.665461 update_engine[2372]: I0517 01:46:08.665369 2372 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse May 17 01:46:08.665461 update_engine[2372]: I0517 01:46:08.665372 2372 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) May 17 01:46:08.665461 update_engine[2372]: I0517 01:46:08.665375 2372 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. May 17 01:46:08.665461 update_engine[2372]: I0517 01:46:08.665433 2372 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction May 17 01:46:08.665461 update_engine[2372]: I0517 01:46:08.665449 2372 omaha_request_action.cc:270] Posting an Omaha request to disabled May 17 01:46:08.665461 update_engine[2372]: I0517 01:46:08.665452 2372 omaha_request_action.cc:271] Request: May 17 01:46:08.665461 update_engine[2372]: May 17 01:46:08.665461 update_engine[2372]: May 17 01:46:08.665461 update_engine[2372]: May 17 01:46:08.665461 update_engine[2372]: May 17 01:46:08.665461 update_engine[2372]: May 17 01:46:08.665461 update_engine[2372]: May 17 01:46:08.665461 update_engine[2372]: I0517 01:46:08.665455 2372 libcurl_http_fetcher.cc:47] Starting/Resuming transfer May 17 01:46:08.665799 update_engine[2372]: I0517 01:46:08.665540 2372 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP May 17 01:46:08.665799 update_engine[2372]: E0517 01:46:08.665585 2372 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled May 17 01:46:08.665799 update_engine[2372]: 
I0517 01:46:08.665625 2372 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded May 17 01:46:08.665799 update_engine[2372]: I0517 01:46:08.665629 2372 omaha_request_action.cc:621] Omaha request response: May 17 01:46:08.665799 update_engine[2372]: I0517 01:46:08.665632 2372 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction May 17 01:46:08.665799 update_engine[2372]: I0517 01:46:08.665635 2372 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction May 17 01:46:08.665799 update_engine[2372]: I0517 01:46:08.665637 2372 update_attempter.cc:306] Processing Done. May 17 01:46:08.665799 update_engine[2372]: I0517 01:46:08.665640 2372 update_attempter.cc:310] Error event sent. May 17 01:46:08.665799 update_engine[2372]: I0517 01:46:08.665648 2372 update_check_scheduler.cc:74] Next update check in 40m46s May 17 01:46:08.665997 locksmithd[2419]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0 May 17 01:46:08.665997 locksmithd[2419]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0 May 17 01:46:09.817147 kubelet[3578]: E0517 01:46:09.817097 3578 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\"\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\"\"]" pod="calico-system/whisker-7cfb685fbd-76hs6" podUID="34828fa1-51fa-4503-8a6b-b736774334d2" May 17 01:46:16.816947 kubelet[3578]: E0517 01:46:16.816900 3578 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image 
\\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\"\"" pod="calico-system/goldmane-8f77d7b6c-czxt2" podUID="c4371095-cd92-4ea8-9564-c86ac0de3064" May 17 01:46:20.816441 kubelet[3578]: E0517 01:46:20.816385 3578 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\"\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\"\"]" pod="calico-system/whisker-7cfb685fbd-76hs6" podUID="34828fa1-51fa-4503-8a6b-b736774334d2" May 17 01:46:31.816967 kubelet[3578]: E0517 01:46:31.816926 3578 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\"\"" pod="calico-system/goldmane-8f77d7b6c-czxt2" podUID="c4371095-cd92-4ea8-9564-c86ac0de3064" May 17 01:46:33.816685 kubelet[3578]: E0517 01:46:33.816651 3578 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\"\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\"\"]" pod="calico-system/whisker-7cfb685fbd-76hs6" podUID="34828fa1-51fa-4503-8a6b-b736774334d2" May 17 01:46:41.075584 systemd[1]: Started sshd@7-86.109.9.158:22-147.75.109.163:44308.service. May 17 01:46:41.073000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-86.109.9.158:22-147.75.109.163:44308 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 17 01:46:41.118867 kernel: audit: type=1130 audit(1747446401.073:399): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-86.109.9.158:22-147.75.109.163:44308 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 01:46:41.348000 audit[11051]: USER_ACCT pid=11051 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' May 17 01:46:41.351116 sshd[11051]: Accepted publickey for core from 147.75.109.163 port 44308 ssh2: RSA SHA256:fZ9RKMPoNtsGsuPDx2ZfBlD6iQ7iXI7TVtnVuOXHzRg May 17 01:46:41.353130 sshd[11051]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 01:46:41.358681 systemd-logind[2371]: New session 10 of user core. May 17 01:46:41.359717 systemd[1]: Started session-10.scope. 
May 17 01:46:41.350000 audit[11051]: CRED_ACQ pid=11051 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' May 17 01:46:41.454862 kernel: audit: type=1101 audit(1747446401.348:400): pid=11051 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' May 17 01:46:41.454978 kernel: audit: type=1103 audit(1747446401.350:401): pid=11051 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' May 17 01:46:41.455027 kernel: audit: type=1006 audit(1747446401.350:402): pid=11051 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=10 res=1 May 17 01:46:41.350000 audit[11051]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffe6ba03d0 a2=3 a3=1 items=0 ppid=1 pid=11051 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 01:46:41.535532 kernel: audit: type=1300 audit(1747446401.350:402): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffe6ba03d0 a2=3 a3=1 items=0 ppid=1 pid=11051 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 01:46:41.535561 kernel: audit: type=1327 audit(1747446401.350:402): proctitle=737368643A20636F7265205B707269765D May 17 01:46:41.350000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D May 17 
01:46:41.360000 audit[11051]: USER_START pid=11051 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' May 17 01:46:41.613196 kernel: audit: type=1105 audit(1747446401.360:403): pid=11051 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' May 17 01:46:41.613245 kernel: audit: type=1103 audit(1747446401.363:404): pid=11054 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' May 17 01:46:41.363000 audit[11054]: CRED_ACQ pid=11054 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' May 17 01:46:41.619998 sshd[11051]: pam_unix(sshd:session): session closed for user core May 17 01:46:41.625477 systemd[1]: sshd@7-86.109.9.158:22-147.75.109.163:44308.service: Deactivated successfully. May 17 01:46:41.626859 systemd-logind[2371]: Session 10 logged out. Waiting for processes to exit. May 17 01:46:41.626941 systemd[1]: session-10.scope: Deactivated successfully. May 17 01:46:41.627483 systemd-logind[2371]: Removed session 10. 
May 17 01:46:41.619000 audit[11051]: USER_END pid=11051 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' May 17 01:46:41.713932 kernel: audit: type=1106 audit(1747446401.619:405): pid=11051 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' May 17 01:46:41.714009 kernel: audit: type=1104 audit(1747446401.619:406): pid=11051 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' May 17 01:46:41.619000 audit[11051]: CRED_DISP pid=11051 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' May 17 01:46:41.623000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-86.109.9.158:22-147.75.109.163:44308 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 01:46:46.659887 systemd[1]: Started sshd@8-86.109.9.158:22-147.75.109.163:44310.service. May 17 01:46:46.658000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-86.109.9.158:22-147.75.109.163:44310 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 17 01:46:46.671711 kernel: kauditd_printk_skb: 1 callbacks suppressed May 17 01:46:46.671969 kernel: audit: type=1130 audit(1747446406.658:408): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-86.109.9.158:22-147.75.109.163:44310 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 01:46:46.816216 kubelet[3578]: E0517 01:46:46.816182 3578 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\"\"" pod="calico-system/goldmane-8f77d7b6c-czxt2" podUID="c4371095-cd92-4ea8-9564-c86ac0de3064" May 17 01:46:46.816524 kubelet[3578]: E0517 01:46:46.816500 3578 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\"\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\"\"]" pod="calico-system/whisker-7cfb685fbd-76hs6" podUID="34828fa1-51fa-4503-8a6b-b736774334d2" May 17 01:46:46.931000 audit[11084]: USER_ACCT pid=11084 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' May 17 01:46:46.933624 sshd[11084]: Accepted publickey for core from 147.75.109.163 port 44310 ssh2: RSA SHA256:fZ9RKMPoNtsGsuPDx2ZfBlD6iQ7iXI7TVtnVuOXHzRg May 17 01:46:46.935575 sshd[11084]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 01:46:46.940692 systemd-logind[2371]: New session 11 of user core. May 17 01:46:46.941811 systemd[1]: Started session-11.scope. 
May 17 01:46:46.933000 audit[11084]: CRED_ACQ pid=11084 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' May 17 01:46:47.038141 kernel: audit: type=1101 audit(1747446406.931:409): pid=11084 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' May 17 01:46:47.038191 kernel: audit: type=1103 audit(1747446406.933:410): pid=11084 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' May 17 01:46:47.038281 kernel: audit: type=1006 audit(1747446406.933:411): pid=11084 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=11 res=1 May 17 01:46:46.933000 audit[11084]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffc2252ba0 a2=3 a3=1 items=0 ppid=1 pid=11084 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 01:46:47.119047 kernel: audit: type=1300 audit(1747446406.933:411): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffc2252ba0 a2=3 a3=1 items=0 ppid=1 pid=11084 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 01:46:46.933000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D May 17 01:46:47.140779 kernel: audit: type=1327 audit(1747446406.933:411): proctitle=737368643A20636F7265205B707269765D May 17 
01:46:46.942000 audit[11084]: USER_START pid=11084 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' May 17 01:46:47.196622 kernel: audit: type=1105 audit(1747446406.942:412): pid=11084 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' May 17 01:46:47.196733 kernel: audit: type=1103 audit(1747446406.945:413): pid=11087 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' May 17 01:46:46.945000 audit[11087]: CRED_ACQ pid=11087 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' May 17 01:46:47.201625 sshd[11084]: pam_unix(sshd:session): session closed for user core May 17 01:46:47.206989 systemd[1]: sshd@8-86.109.9.158:22-147.75.109.163:44310.service: Deactivated successfully. May 17 01:46:47.208312 systemd-logind[2371]: Session 11 logged out. Waiting for processes to exit. May 17 01:46:47.208378 systemd[1]: session-11.scope: Deactivated successfully. May 17 01:46:47.208962 systemd-logind[2371]: Removed session 11. 
May 17 01:46:47.200000 audit[11084]: USER_END pid=11084 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' May 17 01:46:47.297290 kernel: audit: type=1106 audit(1747446407.200:414): pid=11084 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' May 17 01:46:47.297331 kernel: audit: type=1104 audit(1747446407.200:415): pid=11084 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' May 17 01:46:47.200000 audit[11084]: CRED_DISP pid=11084 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' May 17 01:46:47.205000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-86.109.9.158:22-147.75.109.163:44310 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 01:46:52.236342 systemd[1]: Started sshd@9-86.109.9.158:22-147.75.109.163:56608.service. May 17 01:46:52.234000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-86.109.9.158:22-147.75.109.163:56608 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 17 01:46:52.248103 kernel: kauditd_printk_skb: 1 callbacks suppressed May 17 01:46:52.248392 kernel: audit: type=1130 audit(1747446412.234:417): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-86.109.9.158:22-147.75.109.163:56608 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 01:46:52.479000 audit[11122]: USER_ACCT pid=11122 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' May 17 01:46:52.481903 sshd[11122]: Accepted publickey for core from 147.75.109.163 port 56608 ssh2: RSA SHA256:fZ9RKMPoNtsGsuPDx2ZfBlD6iQ7iXI7TVtnVuOXHzRg May 17 01:46:52.483769 sshd[11122]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 01:46:52.488712 systemd-logind[2371]: New session 12 of user core. May 17 01:46:52.489698 systemd[1]: Started session-12.scope. 
May 17 01:46:52.481000 audit[11122]: CRED_ACQ pid=11122 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' May 17 01:46:52.586132 kernel: audit: type=1101 audit(1747446412.479:418): pid=11122 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' May 17 01:46:52.586216 kernel: audit: type=1103 audit(1747446412.481:419): pid=11122 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' May 17 01:46:52.586260 kernel: audit: type=1006 audit(1747446412.481:420): pid=11122 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=12 res=1 May 17 01:46:52.481000 audit[11122]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=fffff2c7b320 a2=3 a3=1 items=0 ppid=1 pid=11122 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 01:46:52.667955 kernel: audit: type=1300 audit(1747446412.481:420): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=fffff2c7b320 a2=3 a3=1 items=0 ppid=1 pid=11122 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 01:46:52.668025 kernel: audit: type=1327 audit(1747446412.481:420): proctitle=737368643A20636F7265205B707269765D May 17 01:46:52.481000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D May 17 
01:46:52.490000 audit[11122]: USER_START pid=11122 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' May 17 01:46:52.718377 sshd[11122]: pam_unix(sshd:session): session closed for user core May 17 01:46:52.720587 systemd[1]: sshd@9-86.109.9.158:22-147.75.109.163:56608.service: Deactivated successfully. May 17 01:46:52.721486 systemd-logind[2371]: Session 12 logged out. Waiting for processes to exit. May 17 01:46:52.721555 systemd[1]: session-12.scope: Deactivated successfully. May 17 01:46:52.722057 systemd-logind[2371]: Removed session 12. May 17 01:46:52.745475 kernel: audit: type=1105 audit(1747446412.490:421): pid=11122 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' May 17 01:46:52.745584 kernel: audit: type=1103 audit(1747446412.492:422): pid=11124 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' May 17 01:46:52.492000 audit[11124]: CRED_ACQ pid=11124 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' May 17 01:46:52.755557 systemd[1]: Started sshd@10-86.109.9.158:22-147.75.109.163:56624.service. 
May 17 01:46:52.717000 audit[11122]: USER_END pid=11122 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' May 17 01:46:52.845527 kernel: audit: type=1106 audit(1747446412.717:423): pid=11122 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' May 17 01:46:52.845588 kernel: audit: type=1104 audit(1747446412.717:424): pid=11122 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' May 17 01:46:52.717000 audit[11122]: CRED_DISP pid=11122 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' May 17 01:46:52.718000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-86.109.9.158:22-147.75.109.163:56608 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 01:46:52.754000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-86.109.9.158:22-147.75.109.163:56624 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 17 01:46:52.997000 audit[11157]: USER_ACCT pid=11157 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' May 17 01:46:52.999730 sshd[11157]: Accepted publickey for core from 147.75.109.163 port 56624 ssh2: RSA SHA256:fZ9RKMPoNtsGsuPDx2ZfBlD6iQ7iXI7TVtnVuOXHzRg May 17 01:46:52.998000 audit[11157]: CRED_ACQ pid=11157 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' May 17 01:46:52.998000 audit[11157]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffc34d9290 a2=3 a3=1 items=0 ppid=1 pid=11157 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 01:46:52.998000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D May 17 01:46:53.000706 sshd[11157]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 01:46:53.003402 systemd-logind[2371]: New session 13 of user core. May 17 01:46:53.004228 systemd[1]: Started session-13.scope. 
May 17 01:46:53.004000 audit[11157]: USER_START pid=11157 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' May 17 01:46:53.005000 audit[11161]: CRED_ACQ pid=11161 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' May 17 01:46:53.258157 sshd[11157]: pam_unix(sshd:session): session closed for user core May 17 01:46:53.256000 audit[11157]: USER_END pid=11157 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' May 17 01:46:53.257000 audit[11157]: CRED_DISP pid=11157 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' May 17 01:46:53.260297 systemd[1]: sshd@10-86.109.9.158:22-147.75.109.163:56624.service: Deactivated successfully. May 17 01:46:53.258000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-86.109.9.158:22-147.75.109.163:56624 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 01:46:53.261174 systemd-logind[2371]: Session 13 logged out. Waiting for processes to exit. May 17 01:46:53.261244 systemd[1]: session-13.scope: Deactivated successfully. May 17 01:46:53.261811 systemd-logind[2371]: Removed session 13. 
May 17 01:46:53.308186 systemd[1]: Started sshd@11-86.109.9.158:22-147.75.109.163:56626.service. May 17 01:46:53.306000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-86.109.9.158:22-147.75.109.163:56626 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 01:46:53.605000 audit[11192]: USER_ACCT pid=11192 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' May 17 01:46:53.607988 sshd[11192]: Accepted publickey for core from 147.75.109.163 port 56626 ssh2: RSA SHA256:fZ9RKMPoNtsGsuPDx2ZfBlD6iQ7iXI7TVtnVuOXHzRg May 17 01:46:53.606000 audit[11192]: CRED_ACQ pid=11192 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' May 17 01:46:53.606000 audit[11192]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffd3160440 a2=3 a3=1 items=0 ppid=1 pid=11192 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 01:46:53.606000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D May 17 01:46:53.608883 sshd[11192]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 01:46:53.611524 systemd-logind[2371]: New session 14 of user core. May 17 01:46:53.612336 systemd[1]: Started session-14.scope. 
May 17 01:46:53.612000 audit[11192]: USER_START pid=11192 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' May 17 01:46:53.613000 audit[11195]: CRED_ACQ pid=11195 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' May 17 01:46:53.882030 sshd[11192]: pam_unix(sshd:session): session closed for user core May 17 01:46:53.880000 audit[11192]: USER_END pid=11192 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' May 17 01:46:53.880000 audit[11192]: CRED_DISP pid=11192 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' May 17 01:46:53.884044 systemd[1]: sshd@11-86.109.9.158:22-147.75.109.163:56626.service: Deactivated successfully. May 17 01:46:53.882000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-86.109.9.158:22-147.75.109.163:56626 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 01:46:53.884902 systemd-logind[2371]: Session 14 logged out. Waiting for processes to exit. May 17 01:46:53.884979 systemd[1]: session-14.scope: Deactivated successfully. May 17 01:46:53.885532 systemd-logind[2371]: Removed session 14. 
May 17 01:46:58.816868 kubelet[3578]: E0517 01:46:58.816825 3578 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\"\"" pod="calico-system/goldmane-8f77d7b6c-czxt2" podUID="c4371095-cd92-4ea8-9564-c86ac0de3064" May 17 01:46:58.918328 systemd[1]: Started sshd@12-86.109.9.158:22-147.75.109.163:39266.service. May 17 01:46:58.917000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-86.109.9.158:22-147.75.109.163:39266 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 01:46:58.930189 kernel: kauditd_printk_skb: 23 callbacks suppressed May 17 01:46:58.930387 kernel: audit: type=1130 audit(1747446418.917:444): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-86.109.9.158:22-147.75.109.163:39266 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 01:46:59.189000 audit[11231]: USER_ACCT pid=11231 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' May 17 01:46:59.191011 sshd[11231]: Accepted publickey for core from 147.75.109.163 port 39266 ssh2: RSA SHA256:fZ9RKMPoNtsGsuPDx2ZfBlD6iQ7iXI7TVtnVuOXHzRg May 17 01:46:59.192949 sshd[11231]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 01:46:59.198104 systemd-logind[2371]: New session 15 of user core. May 17 01:46:59.199230 systemd[1]: Started session-15.scope. 
May 17 01:46:59.191000 audit[11231]: CRED_ACQ pid=11231 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' May 17 01:46:59.295227 kernel: audit: type=1101 audit(1747446419.189:445): pid=11231 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' May 17 01:46:59.295289 kernel: audit: type=1103 audit(1747446419.191:446): pid=11231 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' May 17 01:46:59.295341 kernel: audit: type=1006 audit(1747446419.191:447): pid=11231 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=15 res=1 May 17 01:46:59.191000 audit[11231]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffd81ab2c0 a2=3 a3=1 items=0 ppid=1 pid=11231 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 01:46:59.376245 kernel: audit: type=1300 audit(1747446419.191:447): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffd81ab2c0 a2=3 a3=1 items=0 ppid=1 pid=11231 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 01:46:59.376314 kernel: audit: type=1327 audit(1747446419.191:447): proctitle=737368643A20636F7265205B707269765D May 17 01:46:59.191000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D May 17 
01:46:59.201000 audit[11231]: USER_START pid=11231 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' May 17 01:46:59.450585 sshd[11231]: pam_unix(sshd:session): session closed for user core May 17 01:46:59.452779 systemd[1]: sshd@12-86.109.9.158:22-147.75.109.163:39266.service: Deactivated successfully. May 17 01:46:59.453665 systemd-logind[2371]: Session 15 logged out. Waiting for processes to exit. May 17 01:46:59.453738 systemd[1]: session-15.scope: Deactivated successfully. May 17 01:46:59.453905 kernel: audit: type=1105 audit(1747446419.201:448): pid=11231 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' May 17 01:46:59.453947 kernel: audit: type=1103 audit(1747446419.204:449): pid=11234 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' May 17 01:46:59.204000 audit[11234]: CRED_ACQ pid=11234 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' May 17 01:46:59.454549 systemd-logind[2371]: Removed session 15. 
May 17 01:46:59.450000 audit[11231]: USER_END pid=11231 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' May 17 01:46:59.553599 kernel: audit: type=1106 audit(1747446419.450:450): pid=11231 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' May 17 01:46:59.553664 kernel: audit: type=1104 audit(1747446419.450:451): pid=11231 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' May 17 01:46:59.450000 audit[11231]: CRED_DISP pid=11231 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' May 17 01:46:59.452000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-86.109.9.158:22-147.75.109.163:39266 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 17 01:47:01.817023 kubelet[3578]: E0517 01:47:01.816977 3578 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\"\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\"\"]" pod="calico-system/whisker-7cfb685fbd-76hs6" podUID="34828fa1-51fa-4503-8a6b-b736774334d2" May 17 01:47:04.486120 systemd[1]: Started sshd@13-86.109.9.158:22-147.75.109.163:39278.service. May 17 01:47:04.485000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-86.109.9.158:22-147.75.109.163:39278 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 01:47:04.497872 kernel: kauditd_printk_skb: 1 callbacks suppressed May 17 01:47:04.498081 kernel: audit: type=1130 audit(1747446424.485:453): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-86.109.9.158:22-147.75.109.163:39278 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 01:47:04.737000 audit[11286]: USER_ACCT pid=11286 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' May 17 01:47:04.738517 sshd[11286]: Accepted publickey for core from 147.75.109.163 port 39278 ssh2: RSA SHA256:fZ9RKMPoNtsGsuPDx2ZfBlD6iQ7iXI7TVtnVuOXHzRg May 17 01:47:04.740347 sshd[11286]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 01:47:04.745629 systemd-logind[2371]: New session 16 of user core. May 17 01:47:04.746604 systemd[1]: Started session-16.scope. 
May 17 01:47:04.739000 audit[11286]: CRED_ACQ pid=11286 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' May 17 01:47:04.842747 kernel: audit: type=1101 audit(1747446424.737:454): pid=11286 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' May 17 01:47:04.842792 kernel: audit: type=1103 audit(1747446424.739:455): pid=11286 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' May 17 01:47:04.842853 kernel: audit: type=1006 audit(1747446424.739:456): pid=11286 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=16 res=1 May 17 01:47:04.739000 audit[11286]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffd4ed88e0 a2=3 a3=1 items=0 ppid=1 pid=11286 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 01:47:04.924556 kernel: audit: type=1300 audit(1747446424.739:456): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffd4ed88e0 a2=3 a3=1 items=0 ppid=1 pid=11286 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 01:47:04.924606 kernel: audit: type=1327 audit(1747446424.739:456): proctitle=737368643A20636F7265205B707269765D May 17 01:47:04.739000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D May 17 
01:47:04.748000 audit[11286]: USER_START pid=11286 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' May 17 01:47:04.977965 sshd[11286]: pam_unix(sshd:session): session closed for user core May 17 01:47:04.980226 systemd[1]: sshd@13-86.109.9.158:22-147.75.109.163:39278.service: Deactivated successfully. May 17 01:47:04.981133 systemd-logind[2371]: Session 16 logged out. Waiting for processes to exit. May 17 01:47:04.981206 systemd[1]: session-16.scope: Deactivated successfully. May 17 01:47:04.982000 systemd-logind[2371]: Removed session 16. May 17 01:47:05.001788 kernel: audit: type=1105 audit(1747446424.748:457): pid=11286 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' May 17 01:47:05.001839 kernel: audit: type=1103 audit(1747446424.751:458): pid=11289 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' May 17 01:47:04.751000 audit[11289]: CRED_ACQ pid=11289 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' May 17 01:47:04.977000 audit[11286]: USER_END pid=11286 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" 
exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' May 17 01:47:05.101353 kernel: audit: type=1106 audit(1747446424.977:459): pid=11286 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' May 17 01:47:05.101404 kernel: audit: type=1104 audit(1747446424.977:460): pid=11286 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' May 17 01:47:04.977000 audit[11286]: CRED_DISP pid=11286 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' May 17 01:47:04.979000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-86.109.9.158:22-147.75.109.163:39278 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 01:47:10.029925 systemd[1]: Started sshd@14-86.109.9.158:22-147.75.109.163:41530.service. May 17 01:47:10.029000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-86.109.9.158:22-147.75.109.163:41530 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 17 01:47:10.041737 kernel: kauditd_printk_skb: 1 callbacks suppressed May 17 01:47:10.041966 kernel: audit: type=1130 audit(1747446430.029:462): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-86.109.9.158:22-147.75.109.163:41530 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 01:47:10.333000 audit[11361]: USER_ACCT pid=11361 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' May 17 01:47:10.334913 sshd[11361]: Accepted publickey for core from 147.75.109.163 port 41530 ssh2: RSA SHA256:fZ9RKMPoNtsGsuPDx2ZfBlD6iQ7iXI7TVtnVuOXHzRg May 17 01:47:10.336750 sshd[11361]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 01:47:10.341952 systemd-logind[2371]: New session 17 of user core. May 17 01:47:10.342963 systemd[1]: Started session-17.scope. 
May 17 01:47:10.335000 audit[11361]: CRED_ACQ pid=11361 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' May 17 01:47:10.439095 kernel: audit: type=1101 audit(1747446430.333:463): pid=11361 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' May 17 01:47:10.439172 kernel: audit: type=1103 audit(1747446430.335:464): pid=11361 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' May 17 01:47:10.439241 kernel: audit: type=1006 audit(1747446430.335:465): pid=11361 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=17 res=1 May 17 01:47:10.335000 audit[11361]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffc590b900 a2=3 a3=1 items=0 ppid=1 pid=11361 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 01:47:10.520042 kernel: audit: type=1300 audit(1747446430.335:465): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffc590b900 a2=3 a3=1 items=0 ppid=1 pid=11361 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 01:47:10.520181 kernel: audit: type=1327 audit(1747446430.335:465): proctitle=737368643A20636F7265205B707269765D May 17 01:47:10.335000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D May 17 
01:47:10.345000 audit[11361]: USER_START pid=11361 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' May 17 01:47:10.597695 kernel: audit: type=1105 audit(1747446430.345:466): pid=11361 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' May 17 01:47:10.597766 kernel: audit: type=1103 audit(1747446430.347:467): pid=11364 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' May 17 01:47:10.347000 audit[11364]: CRED_ACQ pid=11364 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' May 17 01:47:10.612156 sshd[11361]: pam_unix(sshd:session): session closed for user core May 17 01:47:10.617389 systemd[1]: sshd@14-86.109.9.158:22-147.75.109.163:41530.service: Deactivated successfully. May 17 01:47:10.618305 systemd-logind[2371]: Session 17 logged out. Waiting for processes to exit. May 17 01:47:10.618382 systemd[1]: session-17.scope: Deactivated successfully. May 17 01:47:10.619162 systemd-logind[2371]: Removed session 17. 
May 17 01:47:10.612000 audit[11361]: USER_END pid=11361 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' May 17 01:47:10.697993 kernel: audit: type=1106 audit(1747446430.612:468): pid=11361 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' May 17 01:47:10.698073 kernel: audit: type=1104 audit(1747446430.612:469): pid=11361 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' May 17 01:47:10.612000 audit[11361]: CRED_DISP pid=11361 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' May 17 01:47:10.616000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-86.109.9.158:22-147.75.109.163:41530 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 17 01:47:11.816818 kubelet[3578]: E0517 01:47:11.816782 3578 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\"\"" pod="calico-system/goldmane-8f77d7b6c-czxt2" podUID="c4371095-cd92-4ea8-9564-c86ac0de3064" May 17 01:47:15.645928 systemd[1]: Started sshd@15-86.109.9.158:22-147.75.109.163:41536.service. May 17 01:47:15.645000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-86.109.9.158:22-147.75.109.163:41536 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 01:47:15.657723 kernel: kauditd_printk_skb: 1 callbacks suppressed May 17 01:47:15.657962 kernel: audit: type=1130 audit(1747446435.645:471): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-86.109.9.158:22-147.75.109.163:41536 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 01:47:15.920000 audit[11401]: USER_ACCT pid=11401 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' May 17 01:47:15.921947 sshd[11401]: Accepted publickey for core from 147.75.109.163 port 41536 ssh2: RSA SHA256:fZ9RKMPoNtsGsuPDx2ZfBlD6iQ7iXI7TVtnVuOXHzRg May 17 01:47:15.923716 sshd[11401]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 01:47:15.928973 systemd-logind[2371]: New session 18 of user core. May 17 01:47:15.929956 systemd[1]: Started session-18.scope. 
May 17 01:47:15.922000 audit[11401]: CRED_ACQ pid=11401 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' May 17 01:47:16.026115 kernel: audit: type=1101 audit(1747446435.920:472): pid=11401 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' May 17 01:47:16.026191 kernel: audit: type=1103 audit(1747446435.922:473): pid=11401 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' May 17 01:47:16.026243 kernel: audit: type=1006 audit(1747446435.922:474): pid=11401 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=18 res=1 May 17 01:47:15.922000 audit[11401]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=fffff5e93340 a2=3 a3=1 items=0 ppid=1 pid=11401 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 01:47:16.107092 kernel: audit: type=1300 audit(1747446435.922:474): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=fffff5e93340 a2=3 a3=1 items=0 ppid=1 pid=11401 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 01:47:16.107159 kernel: audit: type=1327 audit(1747446435.922:474): proctitle=737368643A20636F7265205B707269765D May 17 01:47:15.922000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D May 17 
01:47:15.932000 audit[11401]: USER_START pid=11401 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' May 17 01:47:16.180728 sshd[11401]: pam_unix(sshd:session): session closed for user core May 17 01:47:16.182918 systemd[1]: sshd@15-86.109.9.158:22-147.75.109.163:41536.service: Deactivated successfully. May 17 01:47:16.183809 systemd-logind[2371]: Session 18 logged out. Waiting for processes to exit. May 17 01:47:16.183891 systemd[1]: session-18.scope: Deactivated successfully. May 17 01:47:16.184626 kernel: audit: type=1105 audit(1747446435.932:475): pid=11401 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' May 17 01:47:16.184652 kernel: audit: type=1103 audit(1747446435.934:476): pid=11404 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' May 17 01:47:15.934000 audit[11404]: CRED_ACQ pid=11404 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' May 17 01:47:16.184662 systemd-logind[2371]: Removed session 18. 
May 17 01:47:16.180000 audit[11401]: USER_END pid=11401 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' May 17 01:47:16.232369 systemd[1]: Started sshd@16-86.109.9.158:22-147.75.109.163:41544.service. May 17 01:47:16.284969 kernel: audit: type=1106 audit(1747446436.180:477): pid=11401 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' May 17 01:47:16.285070 kernel: audit: type=1104 audit(1747446436.180:478): pid=11401 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' May 17 01:47:16.180000 audit[11401]: CRED_DISP pid=11401 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' May 17 01:47:16.182000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-86.109.9.158:22-147.75.109.163:41536 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 01:47:16.231000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-86.109.9.158:22-147.75.109.163:41544 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 17 01:47:16.532000 audit[11434]: USER_ACCT pid=11434 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' May 17 01:47:16.533750 sshd[11434]: Accepted publickey for core from 147.75.109.163 port 41544 ssh2: RSA SHA256:fZ9RKMPoNtsGsuPDx2ZfBlD6iQ7iXI7TVtnVuOXHzRg May 17 01:47:16.533000 audit[11434]: CRED_ACQ pid=11434 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' May 17 01:47:16.533000 audit[11434]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffc7094840 a2=3 a3=1 items=0 ppid=1 pid=11434 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 01:47:16.533000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D May 17 01:47:16.534749 sshd[11434]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 01:47:16.537753 systemd-logind[2371]: New session 19 of user core. May 17 01:47:16.538621 systemd[1]: Started session-19.scope. 
May 17 01:47:16.541000 audit[11434]: USER_START pid=11434 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' May 17 01:47:16.542000 audit[11438]: CRED_ACQ pid=11438 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' May 17 01:47:16.817172 env[2382]: time="2025-05-17T01:47:16.817076899Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\"" May 17 01:47:16.833125 sshd[11434]: pam_unix(sshd:session): session closed for user core May 17 01:47:16.832000 audit[11434]: USER_END pid=11434 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' May 17 01:47:16.833000 audit[11434]: CRED_DISP pid=11434 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' May 17 01:47:16.835291 systemd[1]: sshd@16-86.109.9.158:22-147.75.109.163:41544.service: Deactivated successfully. May 17 01:47:16.834000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-86.109.9.158:22-147.75.109.163:41544 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 01:47:16.836192 systemd-logind[2371]: Session 19 logged out. Waiting for processes to exit. 
May 17 01:47:16.836255 systemd[1]: session-19.scope: Deactivated successfully. May 17 01:47:16.837037 systemd-logind[2371]: Removed session 19. May 17 01:47:16.863848 systemd[1]: Started sshd@17-86.109.9.158:22-147.75.109.163:41550.service. May 17 01:47:16.863000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-86.109.9.158:22-147.75.109.163:41550 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 01:47:16.940533 env[2382]: time="2025-05-17T01:47:16.940483056Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" host=ghcr.io May 17 01:47:16.940778 env[2382]: time="2025-05-17T01:47:16.940753978Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" May 17 01:47:16.940931 kubelet[3578]: E0517 01:47:16.940884 3578 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 17 01:47:16.941185 kubelet[3578]: E0517 01:47:16.940940 3578 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 17 01:47:16.941185 kubelet[3578]: E0517 
01:47:16.941030 3578 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.0,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:98d2fa92e9aa4ed787c6df8c04195565,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-nnbsj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-7cfb685fbd-76hs6_calico-system(34828fa1-51fa-4503-8a6b-b736774334d2): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" logger="UnhandledError" May 17 01:47:16.943360 env[2382]: time="2025-05-17T01:47:16.943326593Z" level=info 
msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\"" May 17 01:47:17.102620 env[2382]: time="2025-05-17T01:47:17.102492796Z" level=info msg="trying next host" error="failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" host=ghcr.io May 17 01:47:17.102823 env[2382]: time="2025-05-17T01:47:17.102797638Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\" failed" error="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" May 17 01:47:17.102988 kubelet[3578]: E0517 01:47:17.102956 3578 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 17 01:47:17.103048 kubelet[3578]: E0517 01:47:17.102997 3578 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 17 01:47:17.103127 kubelet[3578]: E0517 01:47:17.103094 3578 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nnbsj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-7cfb685fbd-76hs6_calico-system(34828fa1-51fa-4503-8a6b-b736774334d2): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": 
failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden" logger="UnhandledError" May 17 01:47:17.104267 kubelet[3578]: E0517 01:47:17.104237 3578 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status: 403 Forbidden\"]" pod="calico-system/whisker-7cfb685fbd-76hs6" podUID="34828fa1-51fa-4503-8a6b-b736774334d2" May 17 01:47:17.107000 audit[11464]: USER_ACCT pid=11464 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' May 17 01:47:17.108631 sshd[11464]: Accepted publickey for core from 147.75.109.163 port 41550 ssh2: RSA SHA256:fZ9RKMPoNtsGsuPDx2ZfBlD6iQ7iXI7TVtnVuOXHzRg May 17 01:47:17.108000 audit[11464]: CRED_ACQ pid=11464 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' May 17 01:47:17.108000 audit[11464]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=fffffcda4e60 a2=3 a3=1 items=0 ppid=1 pid=11464 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 
egid=0 sgid=0 fsgid=0 tty=(none) ses=20 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 01:47:17.108000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D May 17 01:47:17.109773 sshd[11464]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 01:47:17.112689 systemd-logind[2371]: New session 20 of user core. May 17 01:47:17.113559 systemd[1]: Started session-20.scope. May 17 01:47:17.115000 audit[11464]: USER_START pid=11464 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' May 17 01:47:17.117000 audit[11467]: CRED_ACQ pid=11467 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' May 17 01:47:18.745000 audit[11556]: NETFILTER_CFG table=filter:119 family=2 entries=24 op=nft_register_rule pid=11556 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 17 01:47:18.745000 audit[11556]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=13432 a0=3 a1=fffff6c9d9a0 a2=0 a3=1 items=0 ppid=3808 pid=11556 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 01:47:18.745000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 17 01:47:18.756000 audit[11556]: NETFILTER_CFG table=nat:120 family=2 entries=22 op=nft_register_rule pid=11556 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 17 01:47:18.756000 audit[11556]: SYSCALL arch=c00000b7 syscall=211 
success=yes exit=6540 a0=3 a1=fffff6c9d9a0 a2=0 a3=1 items=0 ppid=3808 pid=11556 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 01:47:18.756000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 17 01:47:18.776000 audit[11558]: NETFILTER_CFG table=filter:121 family=2 entries=36 op=nft_register_rule pid=11558 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 17 01:47:18.776000 audit[11558]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=13432 a0=3 a1=fffffb614150 a2=0 a3=1 items=0 ppid=3808 pid=11558 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 01:47:18.776000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 17 01:47:18.789020 sshd[11464]: pam_unix(sshd:session): session closed for user core May 17 01:47:18.788000 audit[11464]: USER_END pid=11464 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' May 17 01:47:18.788000 audit[11464]: CRED_DISP pid=11464 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' May 17 01:47:18.791185 systemd[1]: sshd@17-86.109.9.158:22-147.75.109.163:41550.service: Deactivated successfully. 
May 17 01:47:18.790000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-86.109.9.158:22-147.75.109.163:41550 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 01:47:18.792063 systemd-logind[2371]: Session 20 logged out. Waiting for processes to exit. May 17 01:47:18.792135 systemd[1]: session-20.scope: Deactivated successfully. May 17 01:47:18.792685 systemd-logind[2371]: Removed session 20. May 17 01:47:18.795000 audit[11558]: NETFILTER_CFG table=nat:122 family=2 entries=22 op=nft_register_rule pid=11558 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 17 01:47:18.795000 audit[11558]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=6540 a0=3 a1=fffffb614150 a2=0 a3=1 items=0 ppid=3808 pid=11558 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 01:47:18.795000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 17 01:47:18.839200 systemd[1]: Started sshd@18-86.109.9.158:22-147.75.109.163:50250.service. May 17 01:47:18.838000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-86.109.9.158:22-147.75.109.163:50250 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 17 01:47:19.141000 audit[11561]: USER_ACCT pid=11561 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' May 17 01:47:19.142303 sshd[11561]: Accepted publickey for core from 147.75.109.163 port 50250 ssh2: RSA SHA256:fZ9RKMPoNtsGsuPDx2ZfBlD6iQ7iXI7TVtnVuOXHzRg May 17 01:47:19.142000 audit[11561]: CRED_ACQ pid=11561 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' May 17 01:47:19.142000 audit[11561]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=fffffdcd0ad0 a2=3 a3=1 items=0 ppid=1 pid=11561 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=21 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 01:47:19.142000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D May 17 01:47:19.143317 sshd[11561]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 01:47:19.146127 systemd-logind[2371]: New session 21 of user core. May 17 01:47:19.146999 systemd[1]: Started session-21.scope. 
May 17 01:47:19.148000 audit[11561]: USER_START pid=11561 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' May 17 01:47:19.149000 audit[11564]: CRED_ACQ pid=11564 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' May 17 01:47:19.501300 sshd[11561]: pam_unix(sshd:session): session closed for user core May 17 01:47:19.501000 audit[11561]: USER_END pid=11561 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' May 17 01:47:19.501000 audit[11561]: CRED_DISP pid=11561 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' May 17 01:47:19.503498 systemd[1]: sshd@18-86.109.9.158:22-147.75.109.163:50250.service: Deactivated successfully. May 17 01:47:19.502000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-86.109.9.158:22-147.75.109.163:50250 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 01:47:19.504381 systemd-logind[2371]: Session 21 logged out. Waiting for processes to exit. May 17 01:47:19.504452 systemd[1]: session-21.scope: Deactivated successfully. May 17 01:47:19.504928 systemd-logind[2371]: Removed session 21. 
May 17 01:47:19.546921 systemd[1]: Started sshd@19-86.109.9.158:22-147.75.109.163:50256.service. May 17 01:47:19.546000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-86.109.9.158:22-147.75.109.163:50256 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 01:47:19.848000 audit[11609]: USER_ACCT pid=11609 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' May 17 01:47:19.849665 sshd[11609]: Accepted publickey for core from 147.75.109.163 port 50256 ssh2: RSA SHA256:fZ9RKMPoNtsGsuPDx2ZfBlD6iQ7iXI7TVtnVuOXHzRg May 17 01:47:19.849000 audit[11609]: CRED_ACQ pid=11609 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' May 17 01:47:19.849000 audit[11609]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffe3d6e960 a2=3 a3=1 items=0 ppid=1 pid=11609 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 01:47:19.849000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D May 17 01:47:19.850656 sshd[11609]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 01:47:19.853495 systemd-logind[2371]: New session 22 of user core. May 17 01:47:19.854379 systemd[1]: Started session-22.scope. 
May 17 01:47:19.855000 audit[11609]: USER_START pid=11609 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' May 17 01:47:19.857000 audit[11612]: CRED_ACQ pid=11612 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' May 17 01:47:20.121598 sshd[11609]: pam_unix(sshd:session): session closed for user core May 17 01:47:20.121000 audit[11609]: USER_END pid=11609 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' May 17 01:47:20.121000 audit[11609]: CRED_DISP pid=11609 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' May 17 01:47:20.123716 systemd[1]: sshd@19-86.109.9.158:22-147.75.109.163:50256.service: Deactivated successfully. May 17 01:47:20.123000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-86.109.9.158:22-147.75.109.163:50256 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 01:47:20.124590 systemd-logind[2371]: Session 22 logged out. Waiting for processes to exit. May 17 01:47:20.124656 systemd[1]: session-22.scope: Deactivated successfully. May 17 01:47:20.125185 systemd-logind[2371]: Removed session 22. 
May 17 01:47:22.816864 kubelet[3578]: E0517 01:47:22.816823 3578 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\"\"" pod="calico-system/goldmane-8f77d7b6c-czxt2" podUID="c4371095-cd92-4ea8-9564-c86ac0de3064" May 17 01:47:23.986000 audit[11647]: NETFILTER_CFG table=filter:123 family=2 entries=24 op=nft_register_rule pid=11647 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 17 01:47:23.998669 kernel: kauditd_printk_skb: 57 callbacks suppressed May 17 01:47:23.998846 kernel: audit: type=1325 audit(1747446443.986:520): table=filter:123 family=2 entries=24 op=nft_register_rule pid=11647 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 17 01:47:23.986000 audit[11647]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=4504 a0=3 a1=ffffcb613410 a2=0 a3=1 items=0 ppid=3808 pid=11647 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 01:47:24.083919 kernel: audit: type=1300 audit(1747446443.986:520): arch=c00000b7 syscall=211 success=yes exit=4504 a0=3 a1=ffffcb613410 a2=0 a3=1 items=0 ppid=3808 pid=11647 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 01:47:24.084001 kernel: audit: type=1327 audit(1747446443.986:520): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 17 01:47:23.986000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 17 01:47:24.113000 audit[11647]: NETFILTER_CFG table=nat:124 family=2 entries=106 op=nft_register_chain 
pid=11647 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 17 01:47:24.113000 audit[11647]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=49452 a0=3 a1=ffffcb613410 a2=0 a3=1 items=0 ppid=3808 pid=11647 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 01:47:24.199834 kernel: audit: type=1325 audit(1747446444.113:521): table=nat:124 family=2 entries=106 op=nft_register_chain pid=11647 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" May 17 01:47:24.199861 kernel: audit: type=1300 audit(1747446444.113:521): arch=c00000b7 syscall=211 success=yes exit=49452 a0=3 a1=ffffcb613410 a2=0 a3=1 items=0 ppid=3808 pid=11647 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 01:47:24.199902 kernel: audit: type=1327 audit(1747446444.113:521): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 17 01:47:24.113000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 May 17 01:47:25.157753 systemd[1]: Started sshd@20-86.109.9.158:22-147.75.109.163:50260.service. May 17 01:47:25.157000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-86.109.9.158:22-147.75.109.163:50260 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 01:47:25.201866 kernel: audit: type=1130 audit(1747446445.157:522): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-86.109.9.158:22-147.75.109.163:50260 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success'
May 17 01:47:25.429000 audit[11650]: USER_ACCT pid=11650 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
May 17 01:47:25.431278 sshd[11650]: Accepted publickey for core from 147.75.109.163 port 50260 ssh2: RSA SHA256:fZ9RKMPoNtsGsuPDx2ZfBlD6iQ7iXI7TVtnVuOXHzRg
May 17 01:47:25.433077 sshd[11650]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 17 01:47:25.438132 systemd-logind[2371]: New session 23 of user core.
May 17 01:47:25.439108 systemd[1]: Started session-23.scope.
May 17 01:47:25.431000 audit[11650]: CRED_ACQ pid=11650 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
May 17 01:47:25.535132 kernel: audit: type=1101 audit(1747446445.429:523): pid=11650 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
May 17 01:47:25.535177 kernel: audit: type=1103 audit(1747446445.431:524): pid=11650 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
May 17 01:47:25.563407 kernel: audit: type=1006 audit(1747446445.431:525): pid=11650 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=23 res=1
May 17 01:47:25.431000 audit[11650]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=fffffe5f8d90 a2=3 a3=1 items=0 ppid=1 pid=11650 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
May 17 01:47:25.431000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D
May 17 01:47:25.440000 audit[11650]: USER_START pid=11650 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
May 17 01:47:25.443000 audit[11653]: CRED_ACQ pid=11653 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
May 17 01:47:25.685910 sshd[11650]: pam_unix(sshd:session): session closed for user core
May 17 01:47:25.685000 audit[11650]: USER_END pid=11650 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
May 17 01:47:25.685000 audit[11650]: CRED_DISP pid=11650 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
May 17 01:47:25.687934 systemd[1]: sshd@20-86.109.9.158:22-147.75.109.163:50260.service: Deactivated successfully.
May 17 01:47:25.687000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-86.109.9.158:22-147.75.109.163:50260 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 01:47:25.688793 systemd-logind[2371]: Session 23 logged out. Waiting for processes to exit.
May 17 01:47:25.688871 systemd[1]: session-23.scope: Deactivated successfully.
May 17 01:47:25.689352 systemd-logind[2371]: Removed session 23.
May 17 01:47:30.725632 systemd[1]: Started sshd@21-86.109.9.158:22-147.75.109.163:53740.service.
May 17 01:47:30.724000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-86.109.9.158:22-147.75.109.163:53740 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 01:47:30.737372 kernel: kauditd_printk_skb: 7 callbacks suppressed
May 17 01:47:30.737551 kernel: audit: type=1130 audit(1747446450.724:531): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-86.109.9.158:22-147.75.109.163:53740 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 01:47:30.994000 audit[11686]: USER_ACCT pid=11686 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
May 17 01:47:30.995869 sshd[11686]: Accepted publickey for core from 147.75.109.163 port 53740 ssh2: RSA SHA256:fZ9RKMPoNtsGsuPDx2ZfBlD6iQ7iXI7TVtnVuOXHzRg
May 17 01:47:30.997714 sshd[11686]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 17 01:47:31.002998 systemd-logind[2371]: New session 24 of user core.
May 17 01:47:31.003986 systemd[1]: Started session-24.scope.
May 17 01:47:30.996000 audit[11686]: CRED_ACQ pid=11686 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
May 17 01:47:31.099693 kernel: audit: type=1101 audit(1747446450.994:532): pid=11686 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
May 17 01:47:31.099804 kernel: audit: type=1103 audit(1747446450.996:533): pid=11686 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
May 17 01:47:31.128449 kernel: audit: type=1006 audit(1747446450.996:534): pid=11686 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=24 res=1
May 17 01:47:31.128580 kernel: audit: type=1300 audit(1747446450.996:534): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=fffff78326c0 a2=3 a3=1 items=0 ppid=1 pid=11686 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
May 17 01:47:30.996000 audit[11686]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=fffff78326c0 a2=3 a3=1 items=0 ppid=1 pid=11686 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
May 17 01:47:30.996000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D
May 17 01:47:31.202984 kernel: audit: type=1327 audit(1747446450.996:534): proctitle=737368643A20636F7265205B707269765D
May 17 01:47:31.203112 kernel: audit: type=1105 audit(1747446451.005:535): pid=11686 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
May 17 01:47:31.005000 audit[11686]: USER_START pid=11686 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
May 17 01:47:31.254251 sshd[11686]: pam_unix(sshd:session): session closed for user core
May 17 01:47:31.256581 systemd[1]: sshd@21-86.109.9.158:22-147.75.109.163:53740.service: Deactivated successfully.
May 17 01:47:31.257481 systemd-logind[2371]: Session 24 logged out. Waiting for processes to exit.
May 17 01:47:31.257549 systemd[1]: session-24.scope: Deactivated successfully.
May 17 01:47:31.258081 systemd-logind[2371]: Removed session 24.
May 17 01:47:31.008000 audit[11689]: CRED_ACQ pid=11689 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
May 17 01:47:31.302570 kernel: audit: type=1103 audit(1747446451.008:536): pid=11689 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
May 17 01:47:31.302659 kernel: audit: type=1106 audit(1747446451.254:537): pid=11686 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
May 17 01:47:31.254000 audit[11686]: USER_END pid=11686 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
May 17 01:47:31.254000 audit[11686]: CRED_DISP pid=11686 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
May 17 01:47:31.402036 kernel: audit: type=1104 audit(1747446451.254:538): pid=11686 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
May 17 01:47:31.255000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-86.109.9.158:22-147.75.109.163:53740 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 01:47:31.818992 kubelet[3578]: E0517 01:47:31.818911 3578 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\"\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\"\"]" pod="calico-system/whisker-7cfb685fbd-76hs6" podUID="34828fa1-51fa-4503-8a6b-b736774334d2"
May 17 01:47:36.304447 systemd[1]: Started sshd@22-86.109.9.158:22-147.75.109.163:53742.service.
May 17 01:47:36.303000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-86.109.9.158:22-147.75.109.163:53742 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 01:47:36.316223 kernel: kauditd_printk_skb: 1 callbacks suppressed
May 17 01:47:36.316437 kernel: audit: type=1130 audit(1747446456.303:540): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-86.109.9.158:22-147.75.109.163:53742 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 01:47:36.604000 audit[11772]: USER_ACCT pid=11772 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
May 17 01:47:36.606271 sshd[11772]: Accepted publickey for core from 147.75.109.163 port 53742 ssh2: RSA SHA256:fZ9RKMPoNtsGsuPDx2ZfBlD6iQ7iXI7TVtnVuOXHzRg
May 17 01:47:36.608105 sshd[11772]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 17 01:47:36.613264 systemd-logind[2371]: New session 25 of user core.
May 17 01:47:36.614262 systemd[1]: Started session-25.scope.
May 17 01:47:36.606000 audit[11772]: CRED_ACQ pid=11772 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
May 17 01:47:36.710397 kernel: audit: type=1101 audit(1747446456.604:541): pid=11772 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
May 17 01:47:36.710445 kernel: audit: type=1103 audit(1747446456.606:542): pid=11772 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
May 17 01:47:36.710487 kernel: audit: type=1006 audit(1747446456.606:543): pid=11772 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=25 res=1
May 17 01:47:36.606000 audit[11772]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=fffff92241d0 a2=3 a3=1 items=0 ppid=1 pid=11772 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
May 17 01:47:36.791263 kernel: audit: type=1300 audit(1747446456.606:543): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=fffff92241d0 a2=3 a3=1 items=0 ppid=1 pid=11772 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
May 17 01:47:36.791309 kernel: audit: type=1327 audit(1747446456.606:543): proctitle=737368643A20636F7265205B707269765D
May 17 01:47:36.606000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D
May 17 01:47:36.615000 audit[11772]: USER_START pid=11772 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
May 17 01:47:36.816468 kubelet[3578]: E0517 01:47:36.816442 3578 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\"\"" pod="calico-system/goldmane-8f77d7b6c-czxt2" podUID="c4371095-cd92-4ea8-9564-c86ac0de3064"
May 17 01:47:36.868845 kernel: audit: type=1105 audit(1747446456.615:544): pid=11772 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
May 17 01:47:36.868908 kernel: audit: type=1103 audit(1747446456.618:545): pid=11776 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
May 17 01:47:36.618000 audit[11776]: CRED_ACQ pid=11776 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
May 17 01:47:36.877454 sshd[11772]: pam_unix(sshd:session): session closed for user core
May 17 01:47:36.882614 systemd[1]: sshd@22-86.109.9.158:22-147.75.109.163:53742.service: Deactivated successfully.
May 17 01:47:36.883917 systemd-logind[2371]: Session 25 logged out. Waiting for processes to exit.
May 17 01:47:36.883988 systemd[1]: session-25.scope: Deactivated successfully.
May 17 01:47:36.884554 systemd-logind[2371]: Removed session 25.
May 17 01:47:36.877000 audit[11772]: USER_END pid=11772 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
May 17 01:47:36.969424 kernel: audit: type=1106 audit(1747446456.877:546): pid=11772 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
May 17 01:47:36.969491 kernel: audit: type=1104 audit(1747446456.877:547): pid=11772 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
May 17 01:47:36.877000 audit[11772]: CRED_DISP pid=11772 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
May 17 01:47:36.882000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-86.109.9.158:22-147.75.109.163:53742 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 01:47:41.911104 systemd[1]: Started sshd@23-86.109.9.158:22-147.75.109.163:37446.service.
May 17 01:47:41.910000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-86.109.9.158:22-147.75.109.163:37446 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 01:47:41.922891 kernel: kauditd_printk_skb: 1 callbacks suppressed
May 17 01:47:41.923105 kernel: audit: type=1130 audit(1747446461.910:549): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-86.109.9.158:22-147.75.109.163:37446 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 01:47:42.179000 audit[11832]: USER_ACCT pid=11832 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
May 17 01:47:42.180854 sshd[11832]: Accepted publickey for core from 147.75.109.163 port 37446 ssh2: RSA SHA256:fZ9RKMPoNtsGsuPDx2ZfBlD6iQ7iXI7TVtnVuOXHzRg
May 17 01:47:42.182704 sshd[11832]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 17 01:47:42.187801 systemd-logind[2371]: New session 26 of user core.
May 17 01:47:42.188876 systemd[1]: Started session-26.scope.
May 17 01:47:42.181000 audit[11832]: CRED_ACQ pid=11832 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
May 17 01:47:42.285054 kernel: audit: type=1101 audit(1747446462.179:550): pid=11832 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
May 17 01:47:42.285121 kernel: audit: type=1103 audit(1747446462.181:551): pid=11832 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
May 17 01:47:42.285195 kernel: audit: type=1006 audit(1747446462.181:552): pid=11832 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=26 res=1
May 17 01:47:42.181000 audit[11832]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffd37ead00 a2=3 a3=1 items=0 ppid=1 pid=11832 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=26 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
May 17 01:47:42.366099 kernel: audit: type=1300 audit(1747446462.181:552): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffd37ead00 a2=3 a3=1 items=0 ppid=1 pid=11832 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=26 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
May 17 01:47:42.366184 kernel: audit: type=1327 audit(1747446462.181:552): proctitle=737368643A20636F7265205B707269765D
May 17 01:47:42.181000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D
May 17 01:47:42.190000 audit[11832]: USER_START pid=11832 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
May 17 01:47:42.441791 sshd[11832]: pam_unix(sshd:session): session closed for user core
May 17 01:47:42.444020 systemd[1]: sshd@23-86.109.9.158:22-147.75.109.163:37446.service: Deactivated successfully.
May 17 01:47:42.444244 kernel: audit: type=1105 audit(1747446462.190:553): pid=11832 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
May 17 01:47:42.444328 kernel: audit: type=1103 audit(1747446462.193:554): pid=11835 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
May 17 01:47:42.193000 audit[11835]: CRED_ACQ pid=11835 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
May 17 01:47:42.445282 systemd-logind[2371]: Session 26 logged out. Waiting for processes to exit.
May 17 01:47:42.445370 systemd[1]: session-26.scope: Deactivated successfully.
May 17 01:47:42.445946 systemd-logind[2371]: Removed session 26.
May 17 01:47:42.441000 audit[11832]: USER_END pid=11832 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
May 17 01:47:42.543982 kernel: audit: type=1106 audit(1747446462.441:555): pid=11832 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
May 17 01:47:42.544026 kernel: audit: type=1104 audit(1747446462.441:556): pid=11832 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
May 17 01:47:42.441000 audit[11832]: CRED_DISP pid=11832 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
May 17 01:47:42.443000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-86.109.9.158:22-147.75.109.163:37446 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 01:47:43.817649 kubelet[3578]: E0517 01:47:43.817601 3578 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\"\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\"\"]" pod="calico-system/whisker-7cfb685fbd-76hs6" podUID="34828fa1-51fa-4503-8a6b-b736774334d2"