Aug 13 01:20:40.644641 kernel: Linux version 5.15.189-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Tue Aug 12 23:01:50 -00 2025 Aug 13 01:20:40.644655 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=vmware flatcar.autologin verity.usrhash=8f8aacd9fbcdd713563d390e899e90bedf5577e4b1b261b4e57687d87edd6b57 Aug 13 01:20:40.644662 kernel: Disabled fast string operations Aug 13 01:20:40.644666 kernel: BIOS-provided physical RAM map: Aug 13 01:20:40.644669 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ebff] usable Aug 13 01:20:40.644673 kernel: BIOS-e820: [mem 0x000000000009ec00-0x000000000009ffff] reserved Aug 13 01:20:40.644679 kernel: BIOS-e820: [mem 0x00000000000dc000-0x00000000000fffff] reserved Aug 13 01:20:40.644684 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007fedffff] usable Aug 13 01:20:40.644688 kernel: BIOS-e820: [mem 0x000000007fee0000-0x000000007fefefff] ACPI data Aug 13 01:20:40.644692 kernel: BIOS-e820: [mem 0x000000007feff000-0x000000007fefffff] ACPI NVS Aug 13 01:20:40.644696 kernel: BIOS-e820: [mem 0x000000007ff00000-0x000000007fffffff] usable Aug 13 01:20:40.644700 kernel: BIOS-e820: [mem 0x00000000f0000000-0x00000000f7ffffff] reserved Aug 13 01:20:40.644704 kernel: BIOS-e820: [mem 0x00000000fec00000-0x00000000fec0ffff] reserved Aug 13 01:20:40.644708 kernel: BIOS-e820: [mem 0x00000000fee00000-0x00000000fee00fff] reserved Aug 13 01:20:40.644715 kernel: BIOS-e820: [mem 0x00000000fffe0000-0x00000000ffffffff] reserved Aug 13 01:20:40.644719 kernel: NX (Execute Disable) protection: active Aug 13 01:20:40.644724 kernel: SMBIOS 2.7 present. Aug 13 01:20:40.644728 kernel: DMI: VMware, Inc. 
VMware Virtual Platform/440BX Desktop Reference Platform, BIOS 6.00 05/28/2020 Aug 13 01:20:40.644733 kernel: vmware: hypercall mode: 0x00 Aug 13 01:20:40.644737 kernel: Hypervisor detected: VMware Aug 13 01:20:40.644743 kernel: vmware: TSC freq read from hypervisor : 3408.000 MHz Aug 13 01:20:40.644747 kernel: vmware: Host bus clock speed read from hypervisor : 66000000 Hz Aug 13 01:20:40.644752 kernel: vmware: using clock offset of 2608538003 ns Aug 13 01:20:40.644756 kernel: tsc: Detected 3408.000 MHz processor Aug 13 01:20:40.644761 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Aug 13 01:20:40.644766 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Aug 13 01:20:40.644771 kernel: last_pfn = 0x80000 max_arch_pfn = 0x400000000 Aug 13 01:20:40.644776 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Aug 13 01:20:40.644780 kernel: total RAM covered: 3072M Aug 13 01:20:40.644785 kernel: Found optimal setting for mtrr clean up Aug 13 01:20:40.644791 kernel: gran_size: 64K chunk_size: 64K num_reg: 2 lose cover RAM: 0G Aug 13 01:20:40.644796 kernel: Using GB pages for direct mapping Aug 13 01:20:40.644800 kernel: ACPI: Early table checksum verification disabled Aug 13 01:20:40.644832 kernel: ACPI: RSDP 0x00000000000F6A00 000024 (v02 PTLTD ) Aug 13 01:20:40.644837 kernel: ACPI: XSDT 0x000000007FEE965B 00005C (v01 INTEL 440BX 06040000 VMW 01324272) Aug 13 01:20:40.644842 kernel: ACPI: FACP 0x000000007FEFEE73 0000F4 (v04 INTEL 440BX 06040000 PTL 000F4240) Aug 13 01:20:40.644847 kernel: ACPI: DSDT 0x000000007FEEAD55 01411E (v01 PTLTD Custom 06040000 MSFT 03000001) Aug 13 01:20:40.644852 kernel: ACPI: FACS 0x000000007FEFFFC0 000040 Aug 13 01:20:40.644870 kernel: ACPI: FACS 0x000000007FEFFFC0 000040 Aug 13 01:20:40.644876 kernel: ACPI: BOOT 0x000000007FEEAD2D 000028 (v01 PTLTD $SBFTBL$ 06040000 LTP 00000001) Aug 13 01:20:40.644883 kernel: ACPI: APIC 0x000000007FEEA5EB 000742 (v01 PTLTD ? APIC 06040000 LTP 00000000) Aug 13 01:20:40.644896 kernel: ACPI: MCFG 0x000000007FEEA5AF 00003C (v01 PTLTD $PCITBL$ 06040000 LTP 00000001) Aug 13 01:20:40.644910 kernel: ACPI: SRAT 0x000000007FEE9757 0008A8 (v02 VMWARE MEMPLUG 06040000 VMW 00000001) Aug 13 01:20:40.644921 kernel: ACPI: HPET 0x000000007FEE971F 000038 (v01 VMWARE VMW HPET 06040000 VMW 00000001) Aug 13 01:20:40.644936 kernel: ACPI: WAET 0x000000007FEE96F7 000028 (v01 VMWARE VMW WAET 06040000 VMW 00000001) Aug 13 01:20:40.644951 kernel: ACPI: Reserving FACP table memory at [mem 0x7fefee73-0x7fefef66] Aug 13 01:20:40.644962 kernel: ACPI: Reserving DSDT table memory at [mem 0x7feead55-0x7fefee72] Aug 13 01:20:40.644973 kernel: ACPI: Reserving FACS table memory at [mem 0x7fefffc0-0x7fefffff] Aug 13 01:20:40.644983 kernel: ACPI: Reserving FACS table memory at [mem 0x7fefffc0-0x7fefffff] Aug 13 01:20:40.644995 kernel: ACPI: Reserving BOOT table memory at [mem 0x7feead2d-0x7feead54] Aug 13 01:20:40.645007 kernel: ACPI: Reserving APIC table memory at [mem 0x7feea5eb-0x7feead2c] Aug 13 01:20:40.645012 kernel: ACPI: Reserving MCFG table memory at [mem 0x7feea5af-0x7feea5ea] Aug 13 01:20:40.645017 kernel: ACPI: Reserving SRAT table memory at [mem 0x7fee9757-0x7fee9ffe] Aug 13 01:20:40.645024 kernel: ACPI: Reserving HPET table memory at [mem 0x7fee971f-0x7fee9756] Aug 13 01:20:40.645029 kernel: ACPI: Reserving WAET table memory at [mem 0x7fee96f7-0x7fee971e] Aug 13 01:20:40.645034 kernel: system APIC only can use physical flat Aug 13 01:20:40.645040 kernel: Setting APIC routing to physical flat. 
Aug 13 01:20:40.645045 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0 Aug 13 01:20:40.645050 kernel: SRAT: PXM 0 -> APIC 0x02 -> Node 0 Aug 13 01:20:40.645055 kernel: SRAT: PXM 0 -> APIC 0x04 -> Node 0 Aug 13 01:20:40.645060 kernel: SRAT: PXM 0 -> APIC 0x06 -> Node 0 Aug 13 01:20:40.645065 kernel: SRAT: PXM 0 -> APIC 0x08 -> Node 0 Aug 13 01:20:40.645070 kernel: SRAT: PXM 0 -> APIC 0x0a -> Node 0 Aug 13 01:20:40.645075 kernel: SRAT: PXM 0 -> APIC 0x0c -> Node 0 Aug 13 01:20:40.645080 kernel: SRAT: PXM 0 -> APIC 0x0e -> Node 0 Aug 13 01:20:40.645085 kernel: SRAT: PXM 0 -> APIC 0x10 -> Node 0 Aug 13 01:20:40.645090 kernel: SRAT: PXM 0 -> APIC 0x12 -> Node 0 Aug 13 01:20:40.645095 kernel: SRAT: PXM 0 -> APIC 0x14 -> Node 0 Aug 13 01:20:40.645100 kernel: SRAT: PXM 0 -> APIC 0x16 -> Node 0 Aug 13 01:20:40.645105 kernel: SRAT: PXM 0 -> APIC 0x18 -> Node 0 Aug 13 01:20:40.645110 kernel: SRAT: PXM 0 -> APIC 0x1a -> Node 0 Aug 13 01:20:40.645115 kernel: SRAT: PXM 0 -> APIC 0x1c -> Node 0 Aug 13 01:20:40.645121 kernel: SRAT: PXM 0 -> APIC 0x1e -> Node 0 Aug 13 01:20:40.645126 kernel: SRAT: PXM 0 -> APIC 0x20 -> Node 0 Aug 13 01:20:40.645131 kernel: SRAT: PXM 0 -> APIC 0x22 -> Node 0 Aug 13 01:20:40.645136 kernel: SRAT: PXM 0 -> APIC 0x24 -> Node 0 Aug 13 01:20:40.645140 kernel: SRAT: PXM 0 -> APIC 0x26 -> Node 0 Aug 13 01:20:40.645145 kernel: SRAT: PXM 0 -> APIC 0x28 -> Node 0 Aug 13 01:20:40.645150 kernel: SRAT: PXM 0 -> APIC 0x2a -> Node 0 Aug 13 01:20:40.645155 kernel: SRAT: PXM 0 -> APIC 0x2c -> Node 0 Aug 13 01:20:40.645160 kernel: SRAT: PXM 0 -> APIC 0x2e -> Node 0 Aug 13 01:20:40.645165 kernel: SRAT: PXM 0 -> APIC 0x30 -> Node 0 Aug 13 01:20:40.645171 kernel: SRAT: PXM 0 -> APIC 0x32 -> Node 0 Aug 13 01:20:40.645176 kernel: SRAT: PXM 0 -> APIC 0x34 -> Node 0 Aug 13 01:20:40.645181 kernel: SRAT: PXM 0 -> APIC 0x36 -> Node 0 Aug 13 01:20:40.645185 kernel: SRAT: PXM 0 -> APIC 0x38 -> Node 0 Aug 13 01:20:40.645190 kernel: SRAT: PXM 0 -> APIC 0x3a -> Node 0 Aug 13 01:20:40.645195 kernel: SRAT: PXM 0 -> APIC 0x3c -> Node 0 Aug 13 01:20:40.645200 kernel: SRAT: PXM 0 -> APIC 0x3e -> Node 0 Aug 13 01:20:40.645205 kernel: SRAT: PXM 0 -> APIC 0x40 -> Node 0 Aug 13 01:20:40.645210 kernel: SRAT: PXM 0 -> APIC 0x42 -> Node 0 Aug 13 01:20:40.645215 kernel: SRAT: PXM 0 -> APIC 0x44 -> Node 0 Aug 13 01:20:40.645221 kernel: SRAT: PXM 0 -> APIC 0x46 -> Node 0 Aug 13 01:20:40.645226 kernel: SRAT: PXM 0 -> APIC 0x48 -> Node 0 Aug 13 01:20:40.645231 kernel: SRAT: PXM 0 -> APIC 0x4a -> Node 0 Aug 13 01:20:40.645235 kernel: SRAT: PXM 0 -> APIC 0x4c -> Node 0 Aug 13 01:20:40.645240 kernel: SRAT: PXM 0 -> APIC 0x4e -> Node 0 Aug 13 01:20:40.645245 kernel: SRAT: PXM 0 -> APIC 0x50 -> Node 0 Aug 13 01:20:40.645250 kernel: SRAT: PXM 0 -> APIC 0x52 -> Node 0 Aug 13 01:20:40.645255 kernel: SRAT: PXM 0 -> APIC 0x54 -> Node 0 Aug 13 01:20:40.645260 kernel: SRAT: PXM 0 -> APIC 0x56 -> Node 0 Aug 13 01:20:40.645265 kernel: SRAT: PXM 0 -> APIC 0x58 -> Node 0 Aug 13 01:20:40.645271 kernel: SRAT: PXM 0 -> APIC 0x5a -> Node 0 Aug 13 01:20:40.645276 kernel: SRAT: PXM 0 -> APIC 0x5c -> Node 0 Aug 13 01:20:40.645280 kernel: SRAT: PXM 0 -> APIC 0x5e -> Node 0 Aug 13 01:20:40.645290 kernel: SRAT: PXM 0 -> APIC 0x60 -> Node 0 Aug 13 01:20:40.645303 kernel: SRAT: PXM 0 -> APIC 0x62 -> Node 0 Aug 13 01:20:40.645312 kernel: SRAT: PXM 0 -> APIC 0x64 -> Node 0 Aug 13 01:20:40.645317 kernel: SRAT: PXM 0 -> APIC 0x66 -> Node 0 Aug 13 01:20:40.645329 kernel: SRAT: PXM 0 -> APIC 0x68 -> Node 0 Aug 13 01:20:40.645334 kernel: SRAT: PXM 0 -> APIC 0x6a 
-> Node 0 Aug 13 01:20:40.645341 kernel: SRAT: PXM 0 -> APIC 0x6c -> Node 0 Aug 13 01:20:40.645346 kernel: SRAT: PXM 0 -> APIC 0x6e -> Node 0 Aug 13 01:20:40.645351 kernel: SRAT: PXM 0 -> APIC 0x70 -> Node 0 Aug 13 01:20:40.645355 kernel: SRAT: PXM 0 -> APIC 0x72 -> Node 0 Aug 13 01:20:40.645360 kernel: SRAT: PXM 0 -> APIC 0x74 -> Node 0 Aug 13 01:20:40.645366 kernel: SRAT: PXM 0 -> APIC 0x76 -> Node 0 Aug 13 01:20:40.645372 kernel: SRAT: PXM 0 -> APIC 0x78 -> Node 0 Aug 13 01:20:40.645379 kernel: SRAT: PXM 0 -> APIC 0x7a -> Node 0 Aug 13 01:20:40.645386 kernel: SRAT: PXM 0 -> APIC 0x7c -> Node 0 Aug 13 01:20:40.645391 kernel: SRAT: PXM 0 -> APIC 0x7e -> Node 0 Aug 13 01:20:40.645396 kernel: SRAT: PXM 0 -> APIC 0x80 -> Node 0 Aug 13 01:20:40.645401 kernel: SRAT: PXM 0 -> APIC 0x82 -> Node 0 Aug 13 01:20:40.645411 kernel: SRAT: PXM 0 -> APIC 0x84 -> Node 0 Aug 13 01:20:40.649378 kernel: SRAT: PXM 0 -> APIC 0x86 -> Node 0 Aug 13 01:20:40.649389 kernel: SRAT: PXM 0 -> APIC 0x88 -> Node 0 Aug 13 01:20:40.649398 kernel: SRAT: PXM 0 -> APIC 0x8a -> Node 0 Aug 13 01:20:40.649422 kernel: SRAT: PXM 0 -> APIC 0x8c -> Node 0 Aug 13 01:20:40.649431 kernel: SRAT: PXM 0 -> APIC 0x8e -> Node 0 Aug 13 01:20:40.649438 kernel: SRAT: PXM 0 -> APIC 0x90 -> Node 0 Aug 13 01:20:40.649449 kernel: SRAT: PXM 0 -> APIC 0x92 -> Node 0 Aug 13 01:20:40.649457 kernel: SRAT: PXM 0 -> APIC 0x94 -> Node 0 Aug 13 01:20:40.649463 kernel: SRAT: PXM 0 -> APIC 0x96 -> Node 0 Aug 13 01:20:40.649473 kernel: SRAT: PXM 0 -> APIC 0x98 -> Node 0 Aug 13 01:20:40.649479 kernel: SRAT: PXM 0 -> APIC 0x9a -> Node 0 Aug 13 01:20:40.649484 kernel: SRAT: PXM 0 -> APIC 0x9c -> Node 0 Aug 13 01:20:40.649489 kernel: SRAT: PXM 0 -> APIC 0x9e -> Node 0 Aug 13 01:20:40.649495 kernel: SRAT: PXM 0 -> APIC 0xa0 -> Node 0 Aug 13 01:20:40.649500 kernel: SRAT: PXM 0 -> APIC 0xa2 -> Node 0 Aug 13 01:20:40.649507 kernel: SRAT: PXM 0 -> APIC 0xa4 -> Node 0 Aug 13 01:20:40.649512 kernel: SRAT: PXM 0 -> APIC 0xa6 -> Node 0 Aug 13 01:20:40.649517 kernel: SRAT: PXM 0 -> APIC 0xa8 -> Node 0 Aug 13 01:20:40.649523 kernel: SRAT: PXM 0 -> APIC 0xaa -> Node 0 Aug 13 01:20:40.649528 kernel: SRAT: PXM 0 -> APIC 0xac -> Node 0 Aug 13 01:20:40.649533 kernel: SRAT: PXM 0 -> APIC 0xae -> Node 0 Aug 13 01:20:40.649539 kernel: SRAT: PXM 0 -> APIC 0xb0 -> Node 0 Aug 13 01:20:40.649544 kernel: SRAT: PXM 0 -> APIC 0xb2 -> Node 0 Aug 13 01:20:40.649549 kernel: SRAT: PXM 0 -> APIC 0xb4 -> Node 0 Aug 13 01:20:40.649554 kernel: SRAT: PXM 0 -> APIC 0xb6 -> Node 0 Aug 13 01:20:40.649561 kernel: SRAT: PXM 0 -> APIC 0xb8 -> Node 0 Aug 13 01:20:40.649566 kernel: SRAT: PXM 0 -> APIC 0xba -> Node 0 Aug 13 01:20:40.649571 kernel: SRAT: PXM 0 -> APIC 0xbc -> Node 0 Aug 13 01:20:40.649579 kernel: SRAT: PXM 0 -> APIC 0xbe -> Node 0 Aug 13 01:20:40.649586 kernel: SRAT: PXM 0 -> APIC 0xc0 -> Node 0 Aug 13 01:20:40.649591 kernel: SRAT: PXM 0 -> APIC 0xc2 -> Node 0 Aug 13 01:20:40.649596 kernel: SRAT: PXM 0 -> APIC 0xc4 -> Node 0 Aug 13 01:20:40.649602 kernel: SRAT: PXM 0 -> APIC 0xc6 -> Node 0 Aug 13 01:20:40.649607 kernel: SRAT: PXM 0 -> APIC 0xc8 -> Node 0 Aug 13 01:20:40.649613 kernel: SRAT: PXM 0 -> APIC 0xca -> Node 0 Aug 13 01:20:40.649619 kernel: SRAT: PXM 0 -> APIC 0xcc -> Node 0 Aug 13 01:20:40.649625 kernel: SRAT: PXM 0 -> APIC 0xce -> Node 0 Aug 13 01:20:40.649630 kernel: SRAT: PXM 0 -> APIC 0xd0 -> Node 0 Aug 13 01:20:40.649635 kernel: SRAT: PXM 0 -> APIC 0xd2 -> Node 0 Aug 13 01:20:40.649640 kernel: SRAT: PXM 0 -> APIC 0xd4 -> Node 0 Aug 13 01:20:40.649646 kernel: SRAT: PXM 0 -> 
APIC 0xd6 -> Node 0 Aug 13 01:20:40.649651 kernel: SRAT: PXM 0 -> APIC 0xd8 -> Node 0 Aug 13 01:20:40.649656 kernel: SRAT: PXM 0 -> APIC 0xda -> Node 0 Aug 13 01:20:40.649662 kernel: SRAT: PXM 0 -> APIC 0xdc -> Node 0 Aug 13 01:20:40.649667 kernel: SRAT: PXM 0 -> APIC 0xde -> Node 0 Aug 13 01:20:40.649673 kernel: SRAT: PXM 0 -> APIC 0xe0 -> Node 0 Aug 13 01:20:40.649679 kernel: SRAT: PXM 0 -> APIC 0xe2 -> Node 0 Aug 13 01:20:40.649684 kernel: SRAT: PXM 0 -> APIC 0xe4 -> Node 0 Aug 13 01:20:40.649689 kernel: SRAT: PXM 0 -> APIC 0xe6 -> Node 0 Aug 13 01:20:40.649695 kernel: SRAT: PXM 0 -> APIC 0xe8 -> Node 0 Aug 13 01:20:40.649700 kernel: SRAT: PXM 0 -> APIC 0xea -> Node 0 Aug 13 01:20:40.649705 kernel: SRAT: PXM 0 -> APIC 0xec -> Node 0 Aug 13 01:20:40.649711 kernel: SRAT: PXM 0 -> APIC 0xee -> Node 0 Aug 13 01:20:40.649716 kernel: SRAT: PXM 0 -> APIC 0xf0 -> Node 0 Aug 13 01:20:40.649722 kernel: SRAT: PXM 0 -> APIC 0xf2 -> Node 0 Aug 13 01:20:40.649727 kernel: SRAT: PXM 0 -> APIC 0xf4 -> Node 0 Aug 13 01:20:40.649733 kernel: SRAT: PXM 0 -> APIC 0xf6 -> Node 0 Aug 13 01:20:40.649738 kernel: SRAT: PXM 0 -> APIC 0xf8 -> Node 0 Aug 13 01:20:40.649743 kernel: SRAT: PXM 0 -> APIC 0xfa -> Node 0 Aug 13 01:20:40.649749 kernel: SRAT: PXM 0 -> APIC 0xfc -> Node 0 Aug 13 01:20:40.649754 kernel: SRAT: PXM 0 -> APIC 0xfe -> Node 0 Aug 13 01:20:40.649759 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff] Aug 13 01:20:40.649765 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff] Aug 13 01:20:40.649770 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000-0xbfffffff] hotplug Aug 13 01:20:40.649777 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x7fffffff] -> [mem 0x00000000-0x7fffffff] Aug 13 01:20:40.649782 kernel: NODE_DATA(0) allocated [mem 0x7fffa000-0x7fffffff] Aug 13 01:20:40.649788 kernel: Zone ranges: Aug 13 01:20:40.649794 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Aug 13 01:20:40.649799 kernel: DMA32 [mem 0x0000000001000000-0x000000007fffffff] Aug 13 01:20:40.649804 kernel: Normal empty Aug 13 01:20:40.649810 kernel: Movable zone start for each node Aug 13 01:20:40.649815 kernel: Early memory node ranges Aug 13 01:20:40.649821 kernel: node 0: [mem 0x0000000000001000-0x000000000009dfff] Aug 13 01:20:40.649827 kernel: node 0: [mem 0x0000000000100000-0x000000007fedffff] Aug 13 01:20:40.649833 kernel: node 0: [mem 0x000000007ff00000-0x000000007fffffff] Aug 13 01:20:40.649838 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007fffffff] Aug 13 01:20:40.649844 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Aug 13 01:20:40.649849 kernel: On node 0, zone DMA: 98 pages in unavailable ranges Aug 13 01:20:40.649854 kernel: On node 0, zone DMA32: 32 pages in unavailable ranges Aug 13 01:20:40.649860 kernel: ACPI: PM-Timer IO Port: 0x1008 Aug 13 01:20:40.649865 kernel: system APIC only can use physical flat Aug 13 01:20:40.649871 kernel: ACPI: LAPIC_NMI (acpi_id[0x00] high edge lint[0x1]) Aug 13 01:20:40.649876 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] high edge lint[0x1]) Aug 13 01:20:40.649882 kernel: ACPI: LAPIC_NMI (acpi_id[0x02] high edge lint[0x1]) Aug 13 01:20:40.649888 kernel: ACPI: LAPIC_NMI (acpi_id[0x03] high edge lint[0x1]) Aug 13 01:20:40.649893 kernel: ACPI: LAPIC_NMI (acpi_id[0x04] high edge lint[0x1]) Aug 13 01:20:40.649899 kernel: ACPI: LAPIC_NMI (acpi_id[0x05] high edge lint[0x1]) Aug 13 01:20:40.649904 kernel: ACPI: LAPIC_NMI (acpi_id[0x06] high edge lint[0x1]) Aug 13 01:20:40.649909 kernel: ACPI: LAPIC_NMI (acpi_id[0x07] 
high edge lint[0x1]) Aug 13 01:20:40.649915 kernel: ACPI: LAPIC_NMI (acpi_id[0x08] high edge lint[0x1]) Aug 13 01:20:40.649920 kernel: ACPI: LAPIC_NMI (acpi_id[0x09] high edge lint[0x1]) Aug 13 01:20:40.649925 kernel: ACPI: LAPIC_NMI (acpi_id[0x0a] high edge lint[0x1]) Aug 13 01:20:40.649932 kernel: ACPI: LAPIC_NMI (acpi_id[0x0b] high edge lint[0x1]) Aug 13 01:20:40.649937 kernel: ACPI: LAPIC_NMI (acpi_id[0x0c] high edge lint[0x1]) Aug 13 01:20:40.649942 kernel: ACPI: LAPIC_NMI (acpi_id[0x0d] high edge lint[0x1]) Aug 13 01:20:40.649948 kernel: ACPI: LAPIC_NMI (acpi_id[0x0e] high edge lint[0x1]) Aug 13 01:20:40.649953 kernel: ACPI: LAPIC_NMI (acpi_id[0x0f] high edge lint[0x1]) Aug 13 01:20:40.649959 kernel: ACPI: LAPIC_NMI (acpi_id[0x10] high edge lint[0x1]) Aug 13 01:20:40.649964 kernel: ACPI: LAPIC_NMI (acpi_id[0x11] high edge lint[0x1]) Aug 13 01:20:40.649969 kernel: ACPI: LAPIC_NMI (acpi_id[0x12] high edge lint[0x1]) Aug 13 01:20:40.649975 kernel: ACPI: LAPIC_NMI (acpi_id[0x13] high edge lint[0x1]) Aug 13 01:20:40.649980 kernel: ACPI: LAPIC_NMI (acpi_id[0x14] high edge lint[0x1]) Aug 13 01:20:40.649986 kernel: ACPI: LAPIC_NMI (acpi_id[0x15] high edge lint[0x1]) Aug 13 01:20:40.649992 kernel: ACPI: LAPIC_NMI (acpi_id[0x16] high edge lint[0x1]) Aug 13 01:20:40.649997 kernel: ACPI: LAPIC_NMI (acpi_id[0x17] high edge lint[0x1]) Aug 13 01:20:40.650002 kernel: ACPI: LAPIC_NMI (acpi_id[0x18] high edge lint[0x1]) Aug 13 01:20:40.650008 kernel: ACPI: LAPIC_NMI (acpi_id[0x19] high edge lint[0x1]) Aug 13 01:20:40.650013 kernel: ACPI: LAPIC_NMI (acpi_id[0x1a] high edge lint[0x1]) Aug 13 01:20:40.650018 kernel: ACPI: LAPIC_NMI (acpi_id[0x1b] high edge lint[0x1]) Aug 13 01:20:40.650024 kernel: ACPI: LAPIC_NMI (acpi_id[0x1c] high edge lint[0x1]) Aug 13 01:20:40.650029 kernel: ACPI: LAPIC_NMI (acpi_id[0x1d] high edge lint[0x1]) Aug 13 01:20:40.650035 kernel: ACPI: LAPIC_NMI (acpi_id[0x1e] high edge lint[0x1]) Aug 13 01:20:40.650040 kernel: ACPI: LAPIC_NMI (acpi_id[0x1f] high edge lint[0x1]) Aug 13 01:20:40.650046 kernel: ACPI: LAPIC_NMI (acpi_id[0x20] high edge lint[0x1]) Aug 13 01:20:40.650051 kernel: ACPI: LAPIC_NMI (acpi_id[0x21] high edge lint[0x1]) Aug 13 01:20:40.650056 kernel: ACPI: LAPIC_NMI (acpi_id[0x22] high edge lint[0x1]) Aug 13 01:20:40.650062 kernel: ACPI: LAPIC_NMI (acpi_id[0x23] high edge lint[0x1]) Aug 13 01:20:40.650067 kernel: ACPI: LAPIC_NMI (acpi_id[0x24] high edge lint[0x1]) Aug 13 01:20:40.650072 kernel: ACPI: LAPIC_NMI (acpi_id[0x25] high edge lint[0x1]) Aug 13 01:20:40.650078 kernel: ACPI: LAPIC_NMI (acpi_id[0x26] high edge lint[0x1]) Aug 13 01:20:40.650084 kernel: ACPI: LAPIC_NMI (acpi_id[0x27] high edge lint[0x1]) Aug 13 01:20:40.650089 kernel: ACPI: LAPIC_NMI (acpi_id[0x28] high edge lint[0x1]) Aug 13 01:20:40.650095 kernel: ACPI: LAPIC_NMI (acpi_id[0x29] high edge lint[0x1]) Aug 13 01:20:40.650100 kernel: ACPI: LAPIC_NMI (acpi_id[0x2a] high edge lint[0x1]) Aug 13 01:20:40.650105 kernel: ACPI: LAPIC_NMI (acpi_id[0x2b] high edge lint[0x1]) Aug 13 01:20:40.650111 kernel: ACPI: LAPIC_NMI (acpi_id[0x2c] high edge lint[0x1]) Aug 13 01:20:40.650116 kernel: ACPI: LAPIC_NMI (acpi_id[0x2d] high edge lint[0x1]) Aug 13 01:20:40.650121 kernel: ACPI: LAPIC_NMI (acpi_id[0x2e] high edge lint[0x1]) Aug 13 01:20:40.650127 kernel: ACPI: LAPIC_NMI (acpi_id[0x2f] high edge lint[0x1]) Aug 13 01:20:40.650132 kernel: ACPI: LAPIC_NMI (acpi_id[0x30] high edge lint[0x1]) Aug 13 01:20:40.650138 kernel: ACPI: LAPIC_NMI (acpi_id[0x31] high edge lint[0x1]) Aug 13 01:20:40.650143 kernel: ACPI: LAPIC_NMI 
(acpi_id[0x32] high edge lint[0x1]) Aug 13 01:20:40.650149 kernel: ACPI: LAPIC_NMI (acpi_id[0x33] high edge lint[0x1]) Aug 13 01:20:40.650154 kernel: ACPI: LAPIC_NMI (acpi_id[0x34] high edge lint[0x1]) Aug 13 01:20:40.650160 kernel: ACPI: LAPIC_NMI (acpi_id[0x35] high edge lint[0x1]) Aug 13 01:20:40.650165 kernel: ACPI: LAPIC_NMI (acpi_id[0x36] high edge lint[0x1]) Aug 13 01:20:40.650170 kernel: ACPI: LAPIC_NMI (acpi_id[0x37] high edge lint[0x1]) Aug 13 01:20:40.650176 kernel: ACPI: LAPIC_NMI (acpi_id[0x38] high edge lint[0x1]) Aug 13 01:20:40.650181 kernel: ACPI: LAPIC_NMI (acpi_id[0x39] high edge lint[0x1]) Aug 13 01:20:40.650187 kernel: ACPI: LAPIC_NMI (acpi_id[0x3a] high edge lint[0x1]) Aug 13 01:20:40.650192 kernel: ACPI: LAPIC_NMI (acpi_id[0x3b] high edge lint[0x1]) Aug 13 01:20:40.650198 kernel: ACPI: LAPIC_NMI (acpi_id[0x3c] high edge lint[0x1]) Aug 13 01:20:40.650203 kernel: ACPI: LAPIC_NMI (acpi_id[0x3d] high edge lint[0x1]) Aug 13 01:20:40.650208 kernel: ACPI: LAPIC_NMI (acpi_id[0x3e] high edge lint[0x1]) Aug 13 01:20:40.650214 kernel: ACPI: LAPIC_NMI (acpi_id[0x3f] high edge lint[0x1]) Aug 13 01:20:40.650219 kernel: ACPI: LAPIC_NMI (acpi_id[0x40] high edge lint[0x1]) Aug 13 01:20:40.650224 kernel: ACPI: LAPIC_NMI (acpi_id[0x41] high edge lint[0x1]) Aug 13 01:20:40.650230 kernel: ACPI: LAPIC_NMI (acpi_id[0x42] high edge lint[0x1]) Aug 13 01:20:40.650235 kernel: ACPI: LAPIC_NMI (acpi_id[0x43] high edge lint[0x1]) Aug 13 01:20:40.650242 kernel: ACPI: LAPIC_NMI (acpi_id[0x44] high edge lint[0x1]) Aug 13 01:20:40.650247 kernel: ACPI: LAPIC_NMI (acpi_id[0x45] high edge lint[0x1]) Aug 13 01:20:40.650252 kernel: ACPI: LAPIC_NMI (acpi_id[0x46] high edge lint[0x1]) Aug 13 01:20:40.650258 kernel: ACPI: LAPIC_NMI (acpi_id[0x47] high edge lint[0x1]) Aug 13 01:20:40.650263 kernel: ACPI: LAPIC_NMI (acpi_id[0x48] high edge lint[0x1]) Aug 13 01:20:40.650268 kernel: ACPI: LAPIC_NMI (acpi_id[0x49] high edge lint[0x1]) Aug 13 01:20:40.650274 kernel: ACPI: LAPIC_NMI (acpi_id[0x4a] high edge lint[0x1]) Aug 13 01:20:40.650280 kernel: ACPI: LAPIC_NMI (acpi_id[0x4b] high edge lint[0x1]) Aug 13 01:20:40.650285 kernel: ACPI: LAPIC_NMI (acpi_id[0x4c] high edge lint[0x1]) Aug 13 01:20:40.650291 kernel: ACPI: LAPIC_NMI (acpi_id[0x4d] high edge lint[0x1]) Aug 13 01:20:40.650296 kernel: ACPI: LAPIC_NMI (acpi_id[0x4e] high edge lint[0x1]) Aug 13 01:20:40.650302 kernel: ACPI: LAPIC_NMI (acpi_id[0x4f] high edge lint[0x1]) Aug 13 01:20:40.650307 kernel: ACPI: LAPIC_NMI (acpi_id[0x50] high edge lint[0x1]) Aug 13 01:20:40.650312 kernel: ACPI: LAPIC_NMI (acpi_id[0x51] high edge lint[0x1]) Aug 13 01:20:40.650318 kernel: ACPI: LAPIC_NMI (acpi_id[0x52] high edge lint[0x1]) Aug 13 01:20:40.650323 kernel: ACPI: LAPIC_NMI (acpi_id[0x53] high edge lint[0x1]) Aug 13 01:20:40.650328 kernel: ACPI: LAPIC_NMI (acpi_id[0x54] high edge lint[0x1]) Aug 13 01:20:40.650333 kernel: ACPI: LAPIC_NMI (acpi_id[0x55] high edge lint[0x1]) Aug 13 01:20:40.650339 kernel: ACPI: LAPIC_NMI (acpi_id[0x56] high edge lint[0x1]) Aug 13 01:20:40.650345 kernel: ACPI: LAPIC_NMI (acpi_id[0x57] high edge lint[0x1]) Aug 13 01:20:40.650350 kernel: ACPI: LAPIC_NMI (acpi_id[0x58] high edge lint[0x1]) Aug 13 01:20:40.650356 kernel: ACPI: LAPIC_NMI (acpi_id[0x59] high edge lint[0x1]) Aug 13 01:20:40.650361 kernel: ACPI: LAPIC_NMI (acpi_id[0x5a] high edge lint[0x1]) Aug 13 01:20:40.650366 kernel: ACPI: LAPIC_NMI (acpi_id[0x5b] high edge lint[0x1]) Aug 13 01:20:40.650372 kernel: ACPI: LAPIC_NMI (acpi_id[0x5c] high edge lint[0x1]) Aug 13 01:20:40.650377 kernel: 
ACPI: LAPIC_NMI (acpi_id[0x5d] high edge lint[0x1]) Aug 13 01:20:40.650383 kernel: ACPI: LAPIC_NMI (acpi_id[0x5e] high edge lint[0x1]) Aug 13 01:20:40.650388 kernel: ACPI: LAPIC_NMI (acpi_id[0x5f] high edge lint[0x1]) Aug 13 01:20:40.650395 kernel: ACPI: LAPIC_NMI (acpi_id[0x60] high edge lint[0x1]) Aug 13 01:20:40.650400 kernel: ACPI: LAPIC_NMI (acpi_id[0x61] high edge lint[0x1]) Aug 13 01:20:40.650405 kernel: ACPI: LAPIC_NMI (acpi_id[0x62] high edge lint[0x1]) Aug 13 01:20:40.650410 kernel: ACPI: LAPIC_NMI (acpi_id[0x63] high edge lint[0x1]) Aug 13 01:20:40.650438 kernel: ACPI: LAPIC_NMI (acpi_id[0x64] high edge lint[0x1]) Aug 13 01:20:40.650444 kernel: ACPI: LAPIC_NMI (acpi_id[0x65] high edge lint[0x1]) Aug 13 01:20:40.650450 kernel: ACPI: LAPIC_NMI (acpi_id[0x66] high edge lint[0x1]) Aug 13 01:20:40.650455 kernel: ACPI: LAPIC_NMI (acpi_id[0x67] high edge lint[0x1]) Aug 13 01:20:40.650460 kernel: ACPI: LAPIC_NMI (acpi_id[0x68] high edge lint[0x1]) Aug 13 01:20:40.650468 kernel: ACPI: LAPIC_NMI (acpi_id[0x69] high edge lint[0x1]) Aug 13 01:20:40.650473 kernel: ACPI: LAPIC_NMI (acpi_id[0x6a] high edge lint[0x1]) Aug 13 01:20:40.650478 kernel: ACPI: LAPIC_NMI (acpi_id[0x6b] high edge lint[0x1]) Aug 13 01:20:40.650484 kernel: ACPI: LAPIC_NMI (acpi_id[0x6c] high edge lint[0x1]) Aug 13 01:20:40.650489 kernel: ACPI: LAPIC_NMI (acpi_id[0x6d] high edge lint[0x1]) Aug 13 01:20:40.650495 kernel: ACPI: LAPIC_NMI (acpi_id[0x6e] high edge lint[0x1]) Aug 13 01:20:40.650500 kernel: ACPI: LAPIC_NMI (acpi_id[0x6f] high edge lint[0x1]) Aug 13 01:20:40.650505 kernel: ACPI: LAPIC_NMI (acpi_id[0x70] high edge lint[0x1]) Aug 13 01:20:40.650511 kernel: ACPI: LAPIC_NMI (acpi_id[0x71] high edge lint[0x1]) Aug 13 01:20:40.650516 kernel: ACPI: LAPIC_NMI (acpi_id[0x72] high edge lint[0x1]) Aug 13 01:20:40.650522 kernel: ACPI: LAPIC_NMI (acpi_id[0x73] high edge lint[0x1]) Aug 13 01:20:40.650528 kernel: ACPI: LAPIC_NMI (acpi_id[0x74] high edge lint[0x1]) Aug 13 01:20:40.650533 kernel: ACPI: LAPIC_NMI (acpi_id[0x75] high edge lint[0x1]) Aug 13 01:20:40.650538 kernel: ACPI: LAPIC_NMI (acpi_id[0x76] high edge lint[0x1]) Aug 13 01:20:40.650544 kernel: ACPI: LAPIC_NMI (acpi_id[0x77] high edge lint[0x1]) Aug 13 01:20:40.650549 kernel: ACPI: LAPIC_NMI (acpi_id[0x78] high edge lint[0x1]) Aug 13 01:20:40.650554 kernel: ACPI: LAPIC_NMI (acpi_id[0x79] high edge lint[0x1]) Aug 13 01:20:40.650560 kernel: ACPI: LAPIC_NMI (acpi_id[0x7a] high edge lint[0x1]) Aug 13 01:20:40.650565 kernel: ACPI: LAPIC_NMI (acpi_id[0x7b] high edge lint[0x1]) Aug 13 01:20:40.650571 kernel: ACPI: LAPIC_NMI (acpi_id[0x7c] high edge lint[0x1]) Aug 13 01:20:40.650577 kernel: ACPI: LAPIC_NMI (acpi_id[0x7d] high edge lint[0x1]) Aug 13 01:20:40.650582 kernel: ACPI: LAPIC_NMI (acpi_id[0x7e] high edge lint[0x1]) Aug 13 01:20:40.650587 kernel: ACPI: LAPIC_NMI (acpi_id[0x7f] high edge lint[0x1]) Aug 13 01:20:40.650593 kernel: IOAPIC[0]: apic_id 1, version 17, address 0xfec00000, GSI 0-23 Aug 13 01:20:40.650598 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 high edge) Aug 13 01:20:40.650603 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Aug 13 01:20:40.650609 kernel: ACPI: HPET id: 0x8086af01 base: 0xfed00000 Aug 13 01:20:40.650614 kernel: TSC deadline timer available Aug 13 01:20:40.650620 kernel: smpboot: Allowing 128 CPUs, 126 hotplug CPUs Aug 13 01:20:40.650626 kernel: [mem 0x80000000-0xefffffff] available for PCI devices Aug 13 01:20:40.650631 kernel: Booting paravirtualized kernel on VMware hypervisor Aug 13 01:20:40.650637 
kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Aug 13 01:20:40.650642 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:128 nr_node_ids:1 Aug 13 01:20:40.650648 kernel: percpu: Embedded 56 pages/cpu s188696 r8192 d32488 u262144 Aug 13 01:20:40.650653 kernel: pcpu-alloc: s188696 r8192 d32488 u262144 alloc=1*2097152 Aug 13 01:20:40.650659 kernel: pcpu-alloc: [0] 000 001 002 003 004 005 006 007 Aug 13 01:20:40.650665 kernel: pcpu-alloc: [0] 008 009 010 011 012 013 014 015 Aug 13 01:20:40.650671 kernel: pcpu-alloc: [0] 016 017 018 019 020 021 022 023 Aug 13 01:20:40.650676 kernel: pcpu-alloc: [0] 024 025 026 027 028 029 030 031 Aug 13 01:20:40.650681 kernel: pcpu-alloc: [0] 032 033 034 035 036 037 038 039 Aug 13 01:20:40.650686 kernel: pcpu-alloc: [0] 040 041 042 043 044 045 046 047 Aug 13 01:20:40.650692 kernel: pcpu-alloc: [0] 048 049 050 051 052 053 054 055 Aug 13 01:20:40.650704 kernel: pcpu-alloc: [0] 056 057 058 059 060 061 062 063 Aug 13 01:20:40.650711 kernel: pcpu-alloc: [0] 064 065 066 067 068 069 070 071 Aug 13 01:20:40.650717 kernel: pcpu-alloc: [0] 072 073 074 075 076 077 078 079 Aug 13 01:20:40.650722 kernel: pcpu-alloc: [0] 080 081 082 083 084 085 086 087 Aug 13 01:20:40.650729 kernel: pcpu-alloc: [0] 088 089 090 091 092 093 094 095 Aug 13 01:20:40.650735 kernel: pcpu-alloc: [0] 096 097 098 099 100 101 102 103 Aug 13 01:20:40.650740 kernel: pcpu-alloc: [0] 104 105 106 107 108 109 110 111 Aug 13 01:20:40.650746 kernel: pcpu-alloc: [0] 112 113 114 115 116 117 118 119 Aug 13 01:20:40.650752 kernel: pcpu-alloc: [0] 120 121 122 123 124 125 126 127 Aug 13 01:20:40.650757 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515808 Aug 13 01:20:40.650763 kernel: Policy zone: DMA32 Aug 13 01:20:40.650770 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=vmware flatcar.autologin verity.usrhash=8f8aacd9fbcdd713563d390e899e90bedf5577e4b1b261b4e57687d87edd6b57 Aug 13 01:20:40.650777 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. 
Aug 13 01:20:40.650782 kernel: printk: log_buf_len individual max cpu contribution: 4096 bytes Aug 13 01:20:40.650788 kernel: printk: log_buf_len total cpu_extra contributions: 520192 bytes Aug 13 01:20:40.650794 kernel: printk: log_buf_len min size: 262144 bytes Aug 13 01:20:40.650800 kernel: printk: log_buf_len: 1048576 bytes Aug 13 01:20:40.650805 kernel: printk: early log buf free: 239728(91%) Aug 13 01:20:40.650812 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Aug 13 01:20:40.650817 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear) Aug 13 01:20:40.650823 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Aug 13 01:20:40.650831 kernel: Memory: 1940392K/2096628K available (12295K kernel code, 2276K rwdata, 13732K rodata, 47488K init, 4092K bss, 155976K reserved, 0K cma-reserved) Aug 13 01:20:40.650836 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=128, Nodes=1 Aug 13 01:20:40.650842 kernel: ftrace: allocating 34608 entries in 136 pages Aug 13 01:20:40.650848 kernel: ftrace: allocated 136 pages with 2 groups Aug 13 01:20:40.650855 kernel: rcu: Hierarchical RCU implementation. Aug 13 01:20:40.650861 kernel: rcu: RCU event tracing is enabled. Aug 13 01:20:40.650868 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=128. Aug 13 01:20:40.650874 kernel: Rude variant of Tasks RCU enabled. Aug 13 01:20:40.650880 kernel: Tracing variant of Tasks RCU enabled. Aug 13 01:20:40.650886 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Aug 13 01:20:40.650892 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=128 Aug 13 01:20:40.650898 kernel: NR_IRQS: 33024, nr_irqs: 1448, preallocated irqs: 16 Aug 13 01:20:40.650903 kernel: random: crng init done Aug 13 01:20:40.650909 kernel: Console: colour VGA+ 80x25 Aug 13 01:20:40.650916 kernel: printk: console [tty0] enabled Aug 13 01:20:40.650923 kernel: printk: console [ttyS0] enabled Aug 13 01:20:40.650935 kernel: ACPI: Core revision 20210730 Aug 13 01:20:40.650942 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 133484882848 ns Aug 13 01:20:40.650953 kernel: APIC: Switch to symmetric I/O mode setup Aug 13 01:20:40.650959 kernel: x2apic enabled Aug 13 01:20:40.650967 kernel: Switched APIC routing to physical x2apic. Aug 13 01:20:40.650973 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Aug 13 01:20:40.650979 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x311fd3cd494, max_idle_ns: 440795223879 ns Aug 13 01:20:40.650985 kernel: Calibrating delay loop (skipped) preset value.. 6816.00 BogoMIPS (lpj=3408000) Aug 13 01:20:40.650992 kernel: Disabled fast string operations Aug 13 01:20:40.650998 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8 Aug 13 01:20:40.651004 kernel: Last level dTLB entries: 4KB 64, 2MB 32, 4MB 32, 1GB 4 Aug 13 01:20:40.651010 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Aug 13 01:20:40.651016 kernel: Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks! 
Aug 13 01:20:40.651022 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on vm exit Aug 13 01:20:40.651028 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall Aug 13 01:20:40.651034 kernel: Spectre V2 : Mitigation: Enhanced / Automatic IBRS Aug 13 01:20:40.651040 kernel: Spectre V2 : Spectre v2 / PBRSB-eIBRS: Retire a single CALL on VMEXIT Aug 13 01:20:40.651047 kernel: RETBleed: Mitigation: Enhanced IBRS Aug 13 01:20:40.651053 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Aug 13 01:20:40.651059 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl and seccomp Aug 13 01:20:40.651065 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Aug 13 01:20:40.651073 kernel: SRBDS: Unknown: Dependent on hypervisor status Aug 13 01:20:40.651079 kernel: GDS: Unknown: Dependent on hypervisor status Aug 13 01:20:40.651085 kernel: ITS: Mitigation: Aligned branch/return thunks Aug 13 01:20:40.651091 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Aug 13 01:20:40.651097 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Aug 13 01:20:40.651103 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Aug 13 01:20:40.651109 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Aug 13 01:20:40.651115 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format. Aug 13 01:20:40.651121 kernel: Freeing SMP alternatives memory: 32K Aug 13 01:20:40.651127 kernel: pid_max: default: 131072 minimum: 1024 Aug 13 01:20:40.651133 kernel: LSM: Security Framework initializing Aug 13 01:20:40.651139 kernel: SELinux: Initializing. Aug 13 01:20:40.651144 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Aug 13 01:20:40.651150 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Aug 13 01:20:40.651157 kernel: smpboot: CPU0: Intel(R) Xeon(R) E-2278G CPU @ 3.40GHz (family: 0x6, model: 0x9e, stepping: 0xd) Aug 13 01:20:40.651163 kernel: Performance Events: Skylake events, core PMU driver. Aug 13 01:20:40.651169 kernel: core: CPUID marked event: 'cpu cycles' unavailable Aug 13 01:20:40.651175 kernel: core: CPUID marked event: 'instructions' unavailable Aug 13 01:20:40.651180 kernel: core: CPUID marked event: 'bus cycles' unavailable Aug 13 01:20:40.651186 kernel: core: CPUID marked event: 'cache references' unavailable Aug 13 01:20:40.651192 kernel: core: CPUID marked event: 'cache misses' unavailable Aug 13 01:20:40.651197 kernel: core: CPUID marked event: 'branch instructions' unavailable Aug 13 01:20:40.651204 kernel: core: CPUID marked event: 'branch misses' unavailable Aug 13 01:20:40.651210 kernel: ... version: 1 Aug 13 01:20:40.651215 kernel: ... bit width: 48 Aug 13 01:20:40.651221 kernel: ... generic registers: 4 Aug 13 01:20:40.651227 kernel: ... value mask: 0000ffffffffffff Aug 13 01:20:40.651233 kernel: ... max period: 000000007fffffff Aug 13 01:20:40.651239 kernel: ... fixed-purpose events: 0 Aug 13 01:20:40.651244 kernel: ... event mask: 000000000000000f Aug 13 01:20:40.651250 kernel: signal: max sigframe size: 1776 Aug 13 01:20:40.651257 kernel: rcu: Hierarchical SRCU implementation. Aug 13 01:20:40.651263 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Aug 13 01:20:40.651268 kernel: smp: Bringing up secondary CPUs ... Aug 13 01:20:40.651274 kernel: x86: Booting SMP configuration: Aug 13 01:20:40.651280 kernel: .... 
node #0, CPUs: #1 Aug 13 01:20:40.651286 kernel: Disabled fast string operations Aug 13 01:20:40.651292 kernel: smpboot: CPU 1 Converting physical 2 to logical package 1 Aug 13 01:20:40.651298 kernel: smpboot: CPU 1 Converting physical 0 to logical die 1 Aug 13 01:20:40.651304 kernel: smp: Brought up 1 node, 2 CPUs Aug 13 01:20:40.651310 kernel: smpboot: Max logical packages: 128 Aug 13 01:20:40.651316 kernel: smpboot: Total of 2 processors activated (13632.00 BogoMIPS) Aug 13 01:20:40.651322 kernel: devtmpfs: initialized Aug 13 01:20:40.651328 kernel: x86/mm: Memory block size: 128MB Aug 13 01:20:40.651334 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x7feff000-0x7fefffff] (4096 bytes) Aug 13 01:20:40.651340 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Aug 13 01:20:40.651346 kernel: futex hash table entries: 32768 (order: 9, 2097152 bytes, linear) Aug 13 01:20:40.651352 kernel: pinctrl core: initialized pinctrl subsystem Aug 13 01:20:40.651358 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Aug 13 01:20:40.651363 kernel: audit: initializing netlink subsys (disabled) Aug 13 01:20:40.651370 kernel: audit: type=2000 audit(1755048039.083:1): state=initialized audit_enabled=0 res=1 Aug 13 01:20:40.651376 kernel: thermal_sys: Registered thermal governor 'step_wise' Aug 13 01:20:40.651382 kernel: thermal_sys: Registered thermal governor 'user_space' Aug 13 01:20:40.651387 kernel: cpuidle: using governor menu Aug 13 01:20:40.651393 kernel: Simple Boot Flag at 0x36 set to 0x80 Aug 13 01:20:40.651399 kernel: ACPI: bus type PCI registered Aug 13 01:20:40.651405 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Aug 13 01:20:40.651411 kernel: dca service started, version 1.12.1 Aug 13 01:20:40.653812 kernel: PCI: MMCONFIG for domain 0000 [bus 00-7f] at [mem 0xf0000000-0xf7ffffff] (base 0xf0000000) Aug 13 01:20:40.653823 kernel: PCI: MMCONFIG at [mem 0xf0000000-0xf7ffffff] reserved in E820 Aug 13 01:20:40.653829 kernel: PCI: Using configuration type 1 for base access Aug 13 01:20:40.653835 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Aug 13 01:20:40.653841 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages Aug 13 01:20:40.653847 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages Aug 13 01:20:40.653853 kernel: ACPI: Added _OSI(Module Device) Aug 13 01:20:40.653859 kernel: ACPI: Added _OSI(Processor Device) Aug 13 01:20:40.653865 kernel: ACPI: Added _OSI(Processor Aggregator Device) Aug 13 01:20:40.653870 kernel: ACPI: Added _OSI(Linux-Dell-Video) Aug 13 01:20:40.653877 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio) Aug 13 01:20:40.653883 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics) Aug 13 01:20:40.653889 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Aug 13 01:20:40.653895 kernel: ACPI: [Firmware Bug]: BIOS _OSI(Linux) query ignored Aug 13 01:20:40.653901 kernel: ACPI: Interpreter enabled Aug 13 01:20:40.653907 kernel: ACPI: PM: (supports S0 S1 S5) Aug 13 01:20:40.653912 kernel: ACPI: Using IOAPIC for interrupt routing Aug 13 01:20:40.653918 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Aug 13 01:20:40.653924 kernel: ACPI: Enabled 4 GPEs in block 00 to 0F Aug 13 01:20:40.653931 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-7f]) Aug 13 01:20:40.654002 kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Aug 13 01:20:40.654052 kernel: acpi PNP0A03:00: _OSC: platform does not support [AER LTR] Aug 13 01:20:40.654098 kernel: acpi PNP0A03:00: _OSC: OS now controls [PCIeHotplug PME PCIeCapability] Aug 13 01:20:40.654105 kernel: PCI host bridge to bus 0000:00 Aug 13 01:20:40.654153 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Aug 13 01:20:40.654195 kernel: pci_bus 0000:00: root bus resource [mem 0x000cc000-0x000dbfff window] Aug 13 01:20:40.654238 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Aug 13 01:20:40.654278 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Aug 13 01:20:40.654319 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xfeff window] Aug 13 01:20:40.654359 kernel: pci_bus 0000:00: root bus resource [bus 00-7f] Aug 13 01:20:40.654413 kernel: pci 0000:00:00.0: [8086:7190] type 00 class 0x060000 Aug 13 01:20:40.654487 kernel: pci 0000:00:01.0: [8086:7191] type 01 class 0x060400 Aug 13 01:20:40.654542 kernel: pci 0000:00:07.0: [8086:7110] type 00 class 0x060100 Aug 13 01:20:40.654595 kernel: pci 0000:00:07.1: [8086:7111] type 00 class 0x01018a Aug 13 01:20:40.654643 kernel: pci 0000:00:07.1: reg 0x20: [io 0x1060-0x106f] Aug 13 01:20:40.654690 kernel: pci 0000:00:07.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7] Aug 13 01:20:40.654736 kernel: pci 0000:00:07.1: legacy IDE quirk: reg 0x14: [io 0x03f6] Aug 13 01:20:40.654782 kernel: pci 0000:00:07.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177] Aug 13 01:20:40.654881 kernel: pci 0000:00:07.1: legacy IDE quirk: reg 0x1c: [io 0x0376] Aug 13 01:20:40.654933 kernel: pci 0000:00:07.3: [8086:7113] type 00 class 0x068000 Aug 13 01:20:40.654979 kernel: pci 0000:00:07.3: quirk: [io 0x1000-0x103f] claimed by PIIX4 ACPI Aug 13 01:20:40.655025 kernel: pci 0000:00:07.3: quirk: [io 0x1040-0x104f] claimed by PIIX4 SMB Aug 13 01:20:40.655074 kernel: pci 0000:00:07.7: [15ad:0740] type 00 class 0x088000 Aug 13 01:20:40.655121 kernel: pci 0000:00:07.7: reg 0x10: [io 0x1080-0x10bf] Aug 13 01:20:40.655167 kernel: pci 0000:00:07.7: reg 0x14: [mem 0xfebfe000-0xfebfffff 64bit] Aug 13 01:20:40.655218 kernel: pci 0000:00:0f.0: 
[15ad:0405] type 00 class 0x030000 Aug 13 01:20:40.655265 kernel: pci 0000:00:0f.0: reg 0x10: [io 0x1070-0x107f] Aug 13 01:20:40.655310 kernel: pci 0000:00:0f.0: reg 0x14: [mem 0xe8000000-0xefffffff pref] Aug 13 01:20:40.655356 kernel: pci 0000:00:0f.0: reg 0x18: [mem 0xfe000000-0xfe7fffff] Aug 13 01:20:40.655402 kernel: pci 0000:00:0f.0: reg 0x30: [mem 0x00000000-0x00007fff pref] Aug 13 01:20:40.655455 kernel: pci 0000:00:0f.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Aug 13 01:20:40.655505 kernel: pci 0000:00:11.0: [15ad:0790] type 01 class 0x060401 Aug 13 01:20:40.655559 kernel: pci 0000:00:15.0: [15ad:07a0] type 01 class 0x060400 Aug 13 01:20:40.655605 kernel: pci 0000:00:15.0: PME# supported from D0 D3hot D3cold Aug 13 01:20:40.655655 kernel: pci 0000:00:15.1: [15ad:07a0] type 01 class 0x060400 Aug 13 01:20:40.655704 kernel: pci 0000:00:15.1: PME# supported from D0 D3hot D3cold Aug 13 01:20:40.655754 kernel: pci 0000:00:15.2: [15ad:07a0] type 01 class 0x060400 Aug 13 01:20:40.655804 kernel: pci 0000:00:15.2: PME# supported from D0 D3hot D3cold Aug 13 01:20:40.655859 kernel: pci 0000:00:15.3: [15ad:07a0] type 01 class 0x060400 Aug 13 01:20:40.655906 kernel: pci 0000:00:15.3: PME# supported from D0 D3hot D3cold Aug 13 01:20:40.655956 kernel: pci 0000:00:15.4: [15ad:07a0] type 01 class 0x060400 Aug 13 01:20:40.656002 kernel: pci 0000:00:15.4: PME# supported from D0 D3hot D3cold Aug 13 01:20:40.656051 kernel: pci 0000:00:15.5: [15ad:07a0] type 01 class 0x060400 Aug 13 01:20:40.656097 kernel: pci 0000:00:15.5: PME# supported from D0 D3hot D3cold Aug 13 01:20:40.656149 kernel: pci 0000:00:15.6: [15ad:07a0] type 01 class 0x060400 Aug 13 01:20:40.656195 kernel: pci 0000:00:15.6: PME# supported from D0 D3hot D3cold Aug 13 01:20:40.656245 kernel: pci 0000:00:15.7: [15ad:07a0] type 01 class 0x060400 Aug 13 01:20:40.656291 kernel: pci 0000:00:15.7: PME# supported from D0 D3hot D3cold Aug 13 01:20:40.656341 kernel: pci 0000:00:16.0: [15ad:07a0] type 01 class 0x060400 Aug 13 01:20:40.656387 kernel: pci 0000:00:16.0: PME# supported from D0 D3hot D3cold Aug 13 01:20:40.656448 kernel: pci 0000:00:16.1: [15ad:07a0] type 01 class 0x060400 Aug 13 01:20:40.656495 kernel: pci 0000:00:16.1: PME# supported from D0 D3hot D3cold Aug 13 01:20:40.656545 kernel: pci 0000:00:16.2: [15ad:07a0] type 01 class 0x060400 Aug 13 01:20:40.656591 kernel: pci 0000:00:16.2: PME# supported from D0 D3hot D3cold Aug 13 01:20:40.656642 kernel: pci 0000:00:16.3: [15ad:07a0] type 01 class 0x060400 Aug 13 01:20:40.656689 kernel: pci 0000:00:16.3: PME# supported from D0 D3hot D3cold Aug 13 01:20:40.656741 kernel: pci 0000:00:16.4: [15ad:07a0] type 01 class 0x060400 Aug 13 01:20:40.656787 kernel: pci 0000:00:16.4: PME# supported from D0 D3hot D3cold Aug 13 01:20:40.656837 kernel: pci 0000:00:16.5: [15ad:07a0] type 01 class 0x060400 Aug 13 01:20:40.656883 kernel: pci 0000:00:16.5: PME# supported from D0 D3hot D3cold Aug 13 01:20:40.656932 kernel: pci 0000:00:16.6: [15ad:07a0] type 01 class 0x060400 Aug 13 01:20:40.656978 kernel: pci 0000:00:16.6: PME# supported from D0 D3hot D3cold Aug 13 01:20:40.657031 kernel: pci 0000:00:16.7: [15ad:07a0] type 01 class 0x060400 Aug 13 01:20:40.657077 kernel: pci 0000:00:16.7: PME# supported from D0 D3hot D3cold Aug 13 01:20:40.657126 kernel: pci 0000:00:17.0: [15ad:07a0] type 01 class 0x060400 Aug 13 01:20:40.657172 kernel: pci 0000:00:17.0: PME# supported from D0 D3hot D3cold Aug 13 01:20:40.657221 kernel: pci 0000:00:17.1: [15ad:07a0] type 01 class 0x060400 Aug 13 
01:20:40.657266 kernel: pci 0000:00:17.1: PME# supported from D0 D3hot D3cold Aug 13 01:20:40.657318 kernel: pci 0000:00:17.2: [15ad:07a0] type 01 class 0x060400 Aug 13 01:20:40.657365 kernel: pci 0000:00:17.2: PME# supported from D0 D3hot D3cold Aug 13 01:20:40.658732 kernel: pci 0000:00:17.3: [15ad:07a0] type 01 class 0x060400 Aug 13 01:20:40.658813 kernel: pci 0000:00:17.3: PME# supported from D0 D3hot D3cold Aug 13 01:20:40.658883 kernel: pci 0000:00:17.4: [15ad:07a0] type 01 class 0x060400 Aug 13 01:20:40.658931 kernel: pci 0000:00:17.4: PME# supported from D0 D3hot D3cold Aug 13 01:20:40.658984 kernel: pci 0000:00:17.5: [15ad:07a0] type 01 class 0x060400 Aug 13 01:20:40.659031 kernel: pci 0000:00:17.5: PME# supported from D0 D3hot D3cold Aug 13 01:20:40.659080 kernel: pci 0000:00:17.6: [15ad:07a0] type 01 class 0x060400 Aug 13 01:20:40.659127 kernel: pci 0000:00:17.6: PME# supported from D0 D3hot D3cold Aug 13 01:20:40.659178 kernel: pci 0000:00:17.7: [15ad:07a0] type 01 class 0x060400 Aug 13 01:20:40.659224 kernel: pci 0000:00:17.7: PME# supported from D0 D3hot D3cold Aug 13 01:20:40.659274 kernel: pci 0000:00:18.0: [15ad:07a0] type 01 class 0x060400 Aug 13 01:20:40.659322 kernel: pci 0000:00:18.0: PME# supported from D0 D3hot D3cold Aug 13 01:20:40.659372 kernel: pci 0000:00:18.1: [15ad:07a0] type 01 class 0x060400 Aug 13 01:20:40.659429 kernel: pci 0000:00:18.1: PME# supported from D0 D3hot D3cold Aug 13 01:20:40.659483 kernel: pci 0000:00:18.2: [15ad:07a0] type 01 class 0x060400 Aug 13 01:20:40.659530 kernel: pci 0000:00:18.2: PME# supported from D0 D3hot D3cold Aug 13 01:20:40.659580 kernel: pci 0000:00:18.3: [15ad:07a0] type 01 class 0x060400 Aug 13 01:20:40.659629 kernel: pci 0000:00:18.3: PME# supported from D0 D3hot D3cold Aug 13 01:20:40.659681 kernel: pci 0000:00:18.4: [15ad:07a0] type 01 class 0x060400 Aug 13 01:20:40.659728 kernel: pci 0000:00:18.4: PME# supported from D0 D3hot D3cold Aug 13 01:20:40.659778 kernel: pci 0000:00:18.5: [15ad:07a0] type 01 class 0x060400 Aug 13 01:20:40.659825 kernel: pci 0000:00:18.5: PME# supported from D0 D3hot D3cold Aug 13 01:20:40.659874 kernel: pci 0000:00:18.6: [15ad:07a0] type 01 class 0x060400 Aug 13 01:20:40.659922 kernel: pci 0000:00:18.6: PME# supported from D0 D3hot D3cold Aug 13 01:20:40.659972 kernel: pci 0000:00:18.7: [15ad:07a0] type 01 class 0x060400 Aug 13 01:20:40.660019 kernel: pci 0000:00:18.7: PME# supported from D0 D3hot D3cold Aug 13 01:20:40.660068 kernel: pci_bus 0000:01: extended config space not accessible Aug 13 01:20:40.660116 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] Aug 13 01:20:40.660164 kernel: pci_bus 0000:02: extended config space not accessible Aug 13 01:20:40.660175 kernel: acpiphp: Slot [32] registered Aug 13 01:20:40.660181 kernel: acpiphp: Slot [33] registered Aug 13 01:20:40.660187 kernel: acpiphp: Slot [34] registered Aug 13 01:20:40.660193 kernel: acpiphp: Slot [35] registered Aug 13 01:20:40.660199 kernel: acpiphp: Slot [36] registered Aug 13 01:20:40.660205 kernel: acpiphp: Slot [37] registered Aug 13 01:20:40.660211 kernel: acpiphp: Slot [38] registered Aug 13 01:20:40.660217 kernel: acpiphp: Slot [39] registered Aug 13 01:20:40.660222 kernel: acpiphp: Slot [40] registered Aug 13 01:20:40.660228 kernel: acpiphp: Slot [41] registered Aug 13 01:20:40.660235 kernel: acpiphp: Slot [42] registered Aug 13 01:20:40.660241 kernel: acpiphp: Slot [43] registered Aug 13 01:20:40.660246 kernel: acpiphp: Slot [44] registered Aug 13 01:20:40.660252 kernel: acpiphp: Slot [45] registered Aug 13 
01:20:40.660258 kernel: acpiphp: Slot [46] registered Aug 13 01:20:40.660264 kernel: acpiphp: Slot [47] registered Aug 13 01:20:40.660269 kernel: acpiphp: Slot [48] registered Aug 13 01:20:40.660275 kernel: acpiphp: Slot [49] registered Aug 13 01:20:40.660281 kernel: acpiphp: Slot [50] registered Aug 13 01:20:40.660288 kernel: acpiphp: Slot [51] registered Aug 13 01:20:40.660294 kernel: acpiphp: Slot [52] registered Aug 13 01:20:40.660299 kernel: acpiphp: Slot [53] registered Aug 13 01:20:40.660305 kernel: acpiphp: Slot [54] registered Aug 13 01:20:40.660311 kernel: acpiphp: Slot [55] registered Aug 13 01:20:40.660317 kernel: acpiphp: Slot [56] registered Aug 13 01:20:40.660322 kernel: acpiphp: Slot [57] registered Aug 13 01:20:40.660328 kernel: acpiphp: Slot [58] registered Aug 13 01:20:40.660334 kernel: acpiphp: Slot [59] registered Aug 13 01:20:40.660340 kernel: acpiphp: Slot [60] registered Aug 13 01:20:40.660346 kernel: acpiphp: Slot [61] registered Aug 13 01:20:40.660352 kernel: acpiphp: Slot [62] registered Aug 13 01:20:40.660358 kernel: acpiphp: Slot [63] registered Aug 13 01:20:40.660404 kernel: pci 0000:00:11.0: PCI bridge to [bus 02] (subtractive decode) Aug 13 01:20:40.660458 kernel: pci 0000:00:11.0: bridge window [io 0x2000-0x3fff] Aug 13 01:20:40.660503 kernel: pci 0000:00:11.0: bridge window [mem 0xfd600000-0xfdffffff] Aug 13 01:20:40.660549 kernel: pci 0000:00:11.0: bridge window [mem 0xe7b00000-0xe7ffffff 64bit pref] Aug 13 01:20:40.660594 kernel: pci 0000:00:11.0: bridge window [mem 0x000a0000-0x000bffff window] (subtractive decode) Aug 13 01:20:40.660643 kernel: pci 0000:00:11.0: bridge window [mem 0x000cc000-0x000dbfff window] (subtractive decode) Aug 13 01:20:40.660689 kernel: pci 0000:00:11.0: bridge window [mem 0xc0000000-0xfebfffff window] (subtractive decode) Aug 13 01:20:40.660734 kernel: pci 0000:00:11.0: bridge window [io 0x0000-0x0cf7 window] (subtractive decode) Aug 13 01:20:40.660780 kernel: pci 0000:00:11.0: bridge window [io 0x0d00-0xfeff window] (subtractive decode) Aug 13 01:20:40.660842 kernel: pci 0000:03:00.0: [15ad:07c0] type 00 class 0x010700 Aug 13 01:20:40.660891 kernel: pci 0000:03:00.0: reg 0x10: [io 0x4000-0x4007] Aug 13 01:20:40.660939 kernel: pci 0000:03:00.0: reg 0x14: [mem 0xfd5f8000-0xfd5fffff 64bit] Aug 13 01:20:40.660988 kernel: pci 0000:03:00.0: reg 0x30: [mem 0x00000000-0x0000ffff pref] Aug 13 01:20:40.661035 kernel: pci 0000:03:00.0: PME# supported from D0 D3hot D3cold Aug 13 01:20:40.661082 kernel: pci 0000:03:00.0: disabling ASPM on pre-1.1 PCIe device. 
You can enable it with 'pcie_aspm=force' Aug 13 01:20:40.661129 kernel: pci 0000:00:15.0: PCI bridge to [bus 03] Aug 13 01:20:40.661175 kernel: pci 0000:00:15.0: bridge window [io 0x4000-0x4fff] Aug 13 01:20:40.661222 kernel: pci 0000:00:15.0: bridge window [mem 0xfd500000-0xfd5fffff] Aug 13 01:20:40.661268 kernel: pci 0000:00:15.1: PCI bridge to [bus 04] Aug 13 01:20:40.661315 kernel: pci 0000:00:15.1: bridge window [io 0x8000-0x8fff] Aug 13 01:20:40.661363 kernel: pci 0000:00:15.1: bridge window [mem 0xfd100000-0xfd1fffff] Aug 13 01:20:40.661409 kernel: pci 0000:00:15.1: bridge window [mem 0xe7800000-0xe78fffff 64bit pref] Aug 13 01:20:40.661463 kernel: pci 0000:00:15.2: PCI bridge to [bus 05] Aug 13 01:20:40.661510 kernel: pci 0000:00:15.2: bridge window [io 0xc000-0xcfff] Aug 13 01:20:40.661556 kernel: pci 0000:00:15.2: bridge window [mem 0xfcd00000-0xfcdfffff] Aug 13 01:20:40.661602 kernel: pci 0000:00:15.2: bridge window [mem 0xe7400000-0xe74fffff 64bit pref] Aug 13 01:20:40.661648 kernel: pci 0000:00:15.3: PCI bridge to [bus 06] Aug 13 01:20:40.661697 kernel: pci 0000:00:15.3: bridge window [mem 0xfc900000-0xfc9fffff] Aug 13 01:20:40.661743 kernel: pci 0000:00:15.3: bridge window [mem 0xe7000000-0xe70fffff 64bit pref] Aug 13 01:20:40.661788 kernel: pci 0000:00:15.4: PCI bridge to [bus 07] Aug 13 01:20:40.661834 kernel: pci 0000:00:15.4: bridge window [mem 0xfc500000-0xfc5fffff] Aug 13 01:20:40.661879 kernel: pci 0000:00:15.4: bridge window [mem 0xe6c00000-0xe6cfffff 64bit pref] Aug 13 01:20:40.661928 kernel: pci 0000:00:15.5: PCI bridge to [bus 08] Aug 13 01:20:40.661975 kernel: pci 0000:00:15.5: bridge window [mem 0xfc100000-0xfc1fffff] Aug 13 01:20:40.662021 kernel: pci 0000:00:15.5: bridge window [mem 0xe6800000-0xe68fffff 64bit pref] Aug 13 01:20:40.662067 kernel: pci 0000:00:15.6: PCI bridge to [bus 09] Aug 13 01:20:40.662113 kernel: pci 0000:00:15.6: bridge window [mem 0xfbd00000-0xfbdfffff] Aug 13 01:20:40.662158 kernel: pci 0000:00:15.6: bridge window [mem 0xe6400000-0xe64fffff 64bit pref] Aug 13 01:20:40.662205 kernel: pci 0000:00:15.7: PCI bridge to [bus 0a] Aug 13 01:20:40.662252 kernel: pci 0000:00:15.7: bridge window [mem 0xfb900000-0xfb9fffff] Aug 13 01:20:40.662299 kernel: pci 0000:00:15.7: bridge window [mem 0xe6000000-0xe60fffff 64bit pref] Aug 13 01:20:40.662351 kernel: pci 0000:0b:00.0: [15ad:07b0] type 00 class 0x020000 Aug 13 01:20:40.662400 kernel: pci 0000:0b:00.0: reg 0x10: [mem 0xfd4fc000-0xfd4fcfff] Aug 13 01:20:40.662619 kernel: pci 0000:0b:00.0: reg 0x14: [mem 0xfd4fd000-0xfd4fdfff] Aug 13 01:20:40.662671 kernel: pci 0000:0b:00.0: reg 0x18: [mem 0xfd4fe000-0xfd4fffff] Aug 13 01:20:40.662718 kernel: pci 0000:0b:00.0: reg 0x1c: [io 0x5000-0x500f] Aug 13 01:20:40.662766 kernel: pci 0000:0b:00.0: reg 0x30: [mem 0x00000000-0x0000ffff pref] Aug 13 01:20:40.662822 kernel: pci 0000:0b:00.0: supports D1 D2 Aug 13 01:20:40.662870 kernel: pci 0000:0b:00.0: PME# supported from D0 D1 D2 D3hot D3cold Aug 13 01:20:40.662918 kernel: pci 0000:0b:00.0: disabling ASPM on pre-1.1 PCIe device. 
You can enable it with 'pcie_aspm=force' Aug 13 01:20:40.662964 kernel: pci 0000:00:16.0: PCI bridge to [bus 0b] Aug 13 01:20:40.663010 kernel: pci 0000:00:16.0: bridge window [io 0x5000-0x5fff] Aug 13 01:20:40.663056 kernel: pci 0000:00:16.0: bridge window [mem 0xfd400000-0xfd4fffff] Aug 13 01:20:40.663102 kernel: pci 0000:00:16.1: PCI bridge to [bus 0c] Aug 13 01:20:40.663148 kernel: pci 0000:00:16.1: bridge window [io 0x9000-0x9fff] Aug 13 01:20:40.663197 kernel: pci 0000:00:16.1: bridge window [mem 0xfd000000-0xfd0fffff] Aug 13 01:20:40.663243 kernel: pci 0000:00:16.1: bridge window [mem 0xe7700000-0xe77fffff 64bit pref] Aug 13 01:20:40.663289 kernel: pci 0000:00:16.2: PCI bridge to [bus 0d] Aug 13 01:20:40.663336 kernel: pci 0000:00:16.2: bridge window [io 0xd000-0xdfff] Aug 13 01:20:40.663381 kernel: pci 0000:00:16.2: bridge window [mem 0xfcc00000-0xfccfffff] Aug 13 01:20:40.663438 kernel: pci 0000:00:16.2: bridge window [mem 0xe7300000-0xe73fffff 64bit pref] Aug 13 01:20:40.663486 kernel: pci 0000:00:16.3: PCI bridge to [bus 0e] Aug 13 01:20:40.663534 kernel: pci 0000:00:16.3: bridge window [mem 0xfc800000-0xfc8fffff] Aug 13 01:20:40.663580 kernel: pci 0000:00:16.3: bridge window [mem 0xe6f00000-0xe6ffffff 64bit pref] Aug 13 01:20:40.663627 kernel: pci 0000:00:16.4: PCI bridge to [bus 0f] Aug 13 01:20:40.663674 kernel: pci 0000:00:16.4: bridge window [mem 0xfc400000-0xfc4fffff] Aug 13 01:20:40.663719 kernel: pci 0000:00:16.4: bridge window [mem 0xe6b00000-0xe6bfffff 64bit pref] Aug 13 01:20:40.663766 kernel: pci 0000:00:16.5: PCI bridge to [bus 10] Aug 13 01:20:40.665938 kernel: pci 0000:00:16.5: bridge window [mem 0xfc000000-0xfc0fffff] Aug 13 01:20:40.665991 kernel: pci 0000:00:16.5: bridge window [mem 0xe6700000-0xe67fffff 64bit pref] Aug 13 01:20:40.666042 kernel: pci 0000:00:16.6: PCI bridge to [bus 11] Aug 13 01:20:40.666089 kernel: pci 0000:00:16.6: bridge window [mem 0xfbc00000-0xfbcfffff] Aug 13 01:20:40.666135 kernel: pci 0000:00:16.6: bridge window [mem 0xe6300000-0xe63fffff 64bit pref] Aug 13 01:20:40.666183 kernel: pci 0000:00:16.7: PCI bridge to [bus 12] Aug 13 01:20:40.666229 kernel: pci 0000:00:16.7: bridge window [mem 0xfb800000-0xfb8fffff] Aug 13 01:20:40.666275 kernel: pci 0000:00:16.7: bridge window [mem 0xe5f00000-0xe5ffffff 64bit pref] Aug 13 01:20:40.666321 kernel: pci 0000:00:17.0: PCI bridge to [bus 13] Aug 13 01:20:40.666367 kernel: pci 0000:00:17.0: bridge window [io 0x6000-0x6fff] Aug 13 01:20:40.666420 kernel: pci 0000:00:17.0: bridge window [mem 0xfd300000-0xfd3fffff] Aug 13 01:20:40.666468 kernel: pci 0000:00:17.0: bridge window [mem 0xe7a00000-0xe7afffff 64bit pref] Aug 13 01:20:40.666516 kernel: pci 0000:00:17.1: PCI bridge to [bus 14] Aug 13 01:20:40.666561 kernel: pci 0000:00:17.1: bridge window [io 0xa000-0xafff] Aug 13 01:20:40.666607 kernel: pci 0000:00:17.1: bridge window [mem 0xfcf00000-0xfcffffff] Aug 13 01:20:40.666652 kernel: pci 0000:00:17.1: bridge window [mem 0xe7600000-0xe76fffff 64bit pref] Aug 13 01:20:40.666699 kernel: pci 0000:00:17.2: PCI bridge to [bus 15] Aug 13 01:20:40.666745 kernel: pci 0000:00:17.2: bridge window [io 0xe000-0xefff] Aug 13 01:20:40.666794 kernel: pci 0000:00:17.2: bridge window [mem 0xfcb00000-0xfcbfffff] Aug 13 01:20:40.666840 kernel: pci 0000:00:17.2: bridge window [mem 0xe7200000-0xe72fffff 64bit pref] Aug 13 01:20:40.666887 kernel: pci 0000:00:17.3: PCI bridge to [bus 16] Aug 13 01:20:40.666932 kernel: pci 0000:00:17.3: bridge window [mem 0xfc700000-0xfc7fffff] Aug 13 01:20:40.666979 kernel: pci 
0000:00:17.3: bridge window [mem 0xe6e00000-0xe6efffff 64bit pref] Aug 13 01:20:40.667026 kernel: pci 0000:00:17.4: PCI bridge to [bus 17] Aug 13 01:20:40.667072 kernel: pci 0000:00:17.4: bridge window [mem 0xfc300000-0xfc3fffff] Aug 13 01:20:40.667120 kernel: pci 0000:00:17.4: bridge window [mem 0xe6a00000-0xe6afffff 64bit pref] Aug 13 01:20:40.667166 kernel: pci 0000:00:17.5: PCI bridge to [bus 18] Aug 13 01:20:40.667212 kernel: pci 0000:00:17.5: bridge window [mem 0xfbf00000-0xfbffffff] Aug 13 01:20:40.667258 kernel: pci 0000:00:17.5: bridge window [mem 0xe6600000-0xe66fffff 64bit pref] Aug 13 01:20:40.667304 kernel: pci 0000:00:17.6: PCI bridge to [bus 19] Aug 13 01:20:40.667349 kernel: pci 0000:00:17.6: bridge window [mem 0xfbb00000-0xfbbfffff] Aug 13 01:20:40.667394 kernel: pci 0000:00:17.6: bridge window [mem 0xe6200000-0xe62fffff 64bit pref] Aug 13 01:20:40.667447 kernel: pci 0000:00:17.7: PCI bridge to [bus 1a] Aug 13 01:20:40.667495 kernel: pci 0000:00:17.7: bridge window [mem 0xfb700000-0xfb7fffff] Aug 13 01:20:40.667541 kernel: pci 0000:00:17.7: bridge window [mem 0xe5e00000-0xe5efffff 64bit pref] Aug 13 01:20:40.667587 kernel: pci 0000:00:18.0: PCI bridge to [bus 1b] Aug 13 01:20:40.667632 kernel: pci 0000:00:18.0: bridge window [io 0x7000-0x7fff] Aug 13 01:20:40.667677 kernel: pci 0000:00:18.0: bridge window [mem 0xfd200000-0xfd2fffff] Aug 13 01:20:40.667723 kernel: pci 0000:00:18.0: bridge window [mem 0xe7900000-0xe79fffff 64bit pref] Aug 13 01:20:40.667770 kernel: pci 0000:00:18.1: PCI bridge to [bus 1c] Aug 13 01:20:40.667826 kernel: pci 0000:00:18.1: bridge window [io 0xb000-0xbfff] Aug 13 01:20:40.667876 kernel: pci 0000:00:18.1: bridge window [mem 0xfce00000-0xfcefffff] Aug 13 01:20:40.668014 kernel: pci 0000:00:18.1: bridge window [mem 0xe7500000-0xe75fffff 64bit pref] Aug 13 01:20:40.668063 kernel: pci 0000:00:18.2: PCI bridge to [bus 1d] Aug 13 01:20:40.668131 kernel: pci 0000:00:18.2: bridge window [mem 0xfca00000-0xfcafffff] Aug 13 01:20:40.668386 kernel: pci 0000:00:18.2: bridge window [mem 0xe7100000-0xe71fffff 64bit pref] Aug 13 01:20:40.668482 kernel: pci 0000:00:18.3: PCI bridge to [bus 1e] Aug 13 01:20:40.668532 kernel: pci 0000:00:18.3: bridge window [mem 0xfc600000-0xfc6fffff] Aug 13 01:20:40.668581 kernel: pci 0000:00:18.3: bridge window [mem 0xe6d00000-0xe6dfffff 64bit pref] Aug 13 01:20:40.668631 kernel: pci 0000:00:18.4: PCI bridge to [bus 1f] Aug 13 01:20:40.668677 kernel: pci 0000:00:18.4: bridge window [mem 0xfc200000-0xfc2fffff] Aug 13 01:20:40.668722 kernel: pci 0000:00:18.4: bridge window [mem 0xe6900000-0xe69fffff 64bit pref] Aug 13 01:20:40.668768 kernel: pci 0000:00:18.5: PCI bridge to [bus 20] Aug 13 01:20:40.668814 kernel: pci 0000:00:18.5: bridge window [mem 0xfbe00000-0xfbefffff] Aug 13 01:20:40.668860 kernel: pci 0000:00:18.5: bridge window [mem 0xe6500000-0xe65fffff 64bit pref] Aug 13 01:20:40.668906 kernel: pci 0000:00:18.6: PCI bridge to [bus 21] Aug 13 01:20:40.668951 kernel: pci 0000:00:18.6: bridge window [mem 0xfba00000-0xfbafffff] Aug 13 01:20:40.668999 kernel: pci 0000:00:18.6: bridge window [mem 0xe6100000-0xe61fffff 64bit pref] Aug 13 01:20:40.669045 kernel: pci 0000:00:18.7: PCI bridge to [bus 22] Aug 13 01:20:40.669091 kernel: pci 0000:00:18.7: bridge window [mem 0xfb600000-0xfb6fffff] Aug 13 01:20:40.669136 kernel: pci 0000:00:18.7: bridge window [mem 0xe5d00000-0xe5dfffff 64bit pref] Aug 13 01:20:40.669144 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 9 Aug 13 01:20:40.669150 kernel: ACPI: PCI: Interrupt link 
LNKB configured for IRQ 0 Aug 13 01:20:40.669157 kernel: ACPI: PCI: Interrupt link LNKB disabled Aug 13 01:20:40.669162 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Aug 13 01:20:40.669168 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 10 Aug 13 01:20:40.669176 kernel: iommu: Default domain type: Translated Aug 13 01:20:40.669182 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Aug 13 01:20:40.669226 kernel: pci 0000:00:0f.0: vgaarb: setting as boot VGA device Aug 13 01:20:40.669272 kernel: pci 0000:00:0f.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Aug 13 01:20:40.669317 kernel: pci 0000:00:0f.0: vgaarb: bridge control possible Aug 13 01:20:40.669325 kernel: vgaarb: loaded Aug 13 01:20:40.669331 kernel: pps_core: LinuxPPS API ver. 1 registered Aug 13 01:20:40.669338 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Aug 13 01:20:40.669345 kernel: PTP clock support registered Aug 13 01:20:40.669351 kernel: PCI: Using ACPI for IRQ routing Aug 13 01:20:40.669357 kernel: PCI: pci_cache_line_size set to 64 bytes Aug 13 01:20:40.669363 kernel: e820: reserve RAM buffer [mem 0x0009ec00-0x0009ffff] Aug 13 01:20:40.669369 kernel: e820: reserve RAM buffer [mem 0x7fee0000-0x7fffffff] Aug 13 01:20:40.669375 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 Aug 13 01:20:40.669545 kernel: hpet0: 16 comparators, 64-bit 14.318180 MHz counter Aug 13 01:20:40.669553 kernel: clocksource: Switched to clocksource tsc-early Aug 13 01:20:40.669559 kernel: VFS: Disk quotas dquot_6.6.0 Aug 13 01:20:40.669566 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Aug 13 01:20:40.669573 kernel: pnp: PnP ACPI init Aug 13 01:20:40.669628 kernel: system 00:00: [io 0x1000-0x103f] has been reserved Aug 13 01:20:40.669673 kernel: system 00:00: [io 0x1040-0x104f] has been reserved Aug 13 01:20:40.669990 kernel: system 00:00: [io 0x0cf0-0x0cf1] has been reserved Aug 13 01:20:40.670042 kernel: system 00:04: [mem 0xfed00000-0xfed003ff] has been reserved Aug 13 01:20:40.670090 kernel: pnp 00:06: [dma 2] Aug 13 01:20:40.670138 kernel: system 00:07: [io 0xfce0-0xfcff] has been reserved Aug 13 01:20:40.670181 kernel: system 00:07: [mem 0xf0000000-0xf7ffffff] has been reserved Aug 13 01:20:40.670223 kernel: system 00:07: [mem 0xfe800000-0xfe9fffff] has been reserved Aug 13 01:20:40.670231 kernel: pnp: PnP ACPI: found 8 devices Aug 13 01:20:40.670238 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Aug 13 01:20:40.670244 kernel: NET: Registered PF_INET protocol family Aug 13 01:20:40.670250 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear) Aug 13 01:20:40.670256 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear) Aug 13 01:20:40.670264 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Aug 13 01:20:40.670270 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear) Aug 13 01:20:40.670276 kernel: TCP bind hash table entries: 16384 (order: 6, 262144 bytes, linear) Aug 13 01:20:40.670282 kernel: TCP: Hash tables configured (established 16384 bind 16384) Aug 13 01:20:40.670288 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear) Aug 13 01:20:40.670294 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear) Aug 13 01:20:40.670300 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Aug 13 
01:20:40.670306 kernel: NET: Registered PF_XDP protocol family Aug 13 01:20:40.670355 kernel: pci 0000:00:15.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 03] add_size 200000 add_align 100000 Aug 13 01:20:40.670407 kernel: pci 0000:00:15.3: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000 Aug 13 01:20:40.670471 kernel: pci 0000:00:15.4: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000 Aug 13 01:20:40.670520 kernel: pci 0000:00:15.5: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000 Aug 13 01:20:40.670567 kernel: pci 0000:00:15.6: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000 Aug 13 01:20:40.670760 kernel: pci 0000:00:15.7: bridge window [io 0x1000-0x0fff] to [bus 0a] add_size 1000 Aug 13 01:20:40.670825 kernel: pci 0000:00:16.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 0b] add_size 200000 add_align 100000 Aug 13 01:20:40.670874 kernel: pci 0000:00:16.3: bridge window [io 0x1000-0x0fff] to [bus 0e] add_size 1000 Aug 13 01:20:40.670921 kernel: pci 0000:00:16.4: bridge window [io 0x1000-0x0fff] to [bus 0f] add_size 1000 Aug 13 01:20:40.670968 kernel: pci 0000:00:16.5: bridge window [io 0x1000-0x0fff] to [bus 10] add_size 1000 Aug 13 01:20:40.671014 kernel: pci 0000:00:16.6: bridge window [io 0x1000-0x0fff] to [bus 11] add_size 1000 Aug 13 01:20:40.671061 kernel: pci 0000:00:16.7: bridge window [io 0x1000-0x0fff] to [bus 12] add_size 1000 Aug 13 01:20:40.671110 kernel: pci 0000:00:17.3: bridge window [io 0x1000-0x0fff] to [bus 16] add_size 1000 Aug 13 01:20:40.671157 kernel: pci 0000:00:17.4: bridge window [io 0x1000-0x0fff] to [bus 17] add_size 1000 Aug 13 01:20:40.671203 kernel: pci 0000:00:17.5: bridge window [io 0x1000-0x0fff] to [bus 18] add_size 1000 Aug 13 01:20:40.671249 kernel: pci 0000:00:17.6: bridge window [io 0x1000-0x0fff] to [bus 19] add_size 1000 Aug 13 01:20:40.671295 kernel: pci 0000:00:17.7: bridge window [io 0x1000-0x0fff] to [bus 1a] add_size 1000 Aug 13 01:20:40.671341 kernel: pci 0000:00:18.2: bridge window [io 0x1000-0x0fff] to [bus 1d] add_size 1000 Aug 13 01:20:40.671389 kernel: pci 0000:00:18.3: bridge window [io 0x1000-0x0fff] to [bus 1e] add_size 1000 Aug 13 01:20:40.671518 kernel: pci 0000:00:18.4: bridge window [io 0x1000-0x0fff] to [bus 1f] add_size 1000 Aug 13 01:20:40.671566 kernel: pci 0000:00:18.5: bridge window [io 0x1000-0x0fff] to [bus 20] add_size 1000 Aug 13 01:20:40.671612 kernel: pci 0000:00:18.6: bridge window [io 0x1000-0x0fff] to [bus 21] add_size 1000 Aug 13 01:20:40.671658 kernel: pci 0000:00:18.7: bridge window [io 0x1000-0x0fff] to [bus 22] add_size 1000 Aug 13 01:20:40.671983 kernel: pci 0000:00:15.0: BAR 15: assigned [mem 0xc0000000-0xc01fffff 64bit pref] Aug 13 01:20:40.672038 kernel: pci 0000:00:16.0: BAR 15: assigned [mem 0xc0200000-0xc03fffff 64bit pref] Aug 13 01:20:40.672086 kernel: pci 0000:00:15.3: BAR 13: no space for [io size 0x1000] Aug 13 01:20:40.676039 kernel: pci 0000:00:15.3: BAR 13: failed to assign [io size 0x1000] Aug 13 01:20:40.676100 kernel: pci 0000:00:15.4: BAR 13: no space for [io size 0x1000] Aug 13 01:20:40.676151 kernel: pci 0000:00:15.4: BAR 13: failed to assign [io size 0x1000] Aug 13 01:20:40.676199 kernel: pci 0000:00:15.5: BAR 13: no space for [io size 0x1000] Aug 13 01:20:40.676245 kernel: pci 0000:00:15.5: BAR 13: failed to assign [io size 0x1000] Aug 13 01:20:40.676292 kernel: pci 0000:00:15.6: BAR 13: no space for [io size 0x1000] Aug 13 01:20:40.676342 kernel: pci 0000:00:15.6: BAR 13: failed to assign [io size 0x1000] Aug 13 
01:20:40.676388 kernel: pci 0000:00:15.7: BAR 13: no space for [io size 0x1000] Aug 13 01:20:40.676442 kernel: pci 0000:00:15.7: BAR 13: failed to assign [io size 0x1000] Aug 13 01:20:40.676489 kernel: pci 0000:00:16.3: BAR 13: no space for [io size 0x1000] Aug 13 01:20:40.676533 kernel: pci 0000:00:16.3: BAR 13: failed to assign [io size 0x1000] Aug 13 01:20:40.676580 kernel: pci 0000:00:16.4: BAR 13: no space for [io size 0x1000] Aug 13 01:20:40.676625 kernel: pci 0000:00:16.4: BAR 13: failed to assign [io size 0x1000] Aug 13 01:20:40.676671 kernel: pci 0000:00:16.5: BAR 13: no space for [io size 0x1000] Aug 13 01:20:40.676719 kernel: pci 0000:00:16.5: BAR 13: failed to assign [io size 0x1000] Aug 13 01:20:40.676765 kernel: pci 0000:00:16.6: BAR 13: no space for [io size 0x1000] Aug 13 01:20:40.676811 kernel: pci 0000:00:16.6: BAR 13: failed to assign [io size 0x1000] Aug 13 01:20:40.676857 kernel: pci 0000:00:16.7: BAR 13: no space for [io size 0x1000] Aug 13 01:20:40.676902 kernel: pci 0000:00:16.7: BAR 13: failed to assign [io size 0x1000] Aug 13 01:20:40.676948 kernel: pci 0000:00:17.3: BAR 13: no space for [io size 0x1000] Aug 13 01:20:40.676993 kernel: pci 0000:00:17.3: BAR 13: failed to assign [io size 0x1000] Aug 13 01:20:40.677039 kernel: pci 0000:00:17.4: BAR 13: no space for [io size 0x1000] Aug 13 01:20:40.677087 kernel: pci 0000:00:17.4: BAR 13: failed to assign [io size 0x1000] Aug 13 01:20:40.677132 kernel: pci 0000:00:17.5: BAR 13: no space for [io size 0x1000] Aug 13 01:20:40.677178 kernel: pci 0000:00:17.5: BAR 13: failed to assign [io size 0x1000] Aug 13 01:20:40.677223 kernel: pci 0000:00:17.6: BAR 13: no space for [io size 0x1000] Aug 13 01:20:40.677269 kernel: pci 0000:00:17.6: BAR 13: failed to assign [io size 0x1000] Aug 13 01:20:40.677314 kernel: pci 0000:00:17.7: BAR 13: no space for [io size 0x1000] Aug 13 01:20:40.677360 kernel: pci 0000:00:17.7: BAR 13: failed to assign [io size 0x1000] Aug 13 01:20:40.677406 kernel: pci 0000:00:18.2: BAR 13: no space for [io size 0x1000] Aug 13 01:20:40.677467 kernel: pci 0000:00:18.2: BAR 13: failed to assign [io size 0x1000] Aug 13 01:20:40.677514 kernel: pci 0000:00:18.3: BAR 13: no space for [io size 0x1000] Aug 13 01:20:40.677559 kernel: pci 0000:00:18.3: BAR 13: failed to assign [io size 0x1000] Aug 13 01:20:40.677606 kernel: pci 0000:00:18.4: BAR 13: no space for [io size 0x1000] Aug 13 01:20:40.677651 kernel: pci 0000:00:18.4: BAR 13: failed to assign [io size 0x1000] Aug 13 01:20:40.677697 kernel: pci 0000:00:18.5: BAR 13: no space for [io size 0x1000] Aug 13 01:20:40.677743 kernel: pci 0000:00:18.5: BAR 13: failed to assign [io size 0x1000] Aug 13 01:20:40.677789 kernel: pci 0000:00:18.6: BAR 13: no space for [io size 0x1000] Aug 13 01:20:40.677837 kernel: pci 0000:00:18.6: BAR 13: failed to assign [io size 0x1000] Aug 13 01:20:40.677883 kernel: pci 0000:00:18.7: BAR 13: no space for [io size 0x1000] Aug 13 01:20:40.677928 kernel: pci 0000:00:18.7: BAR 13: failed to assign [io size 0x1000] Aug 13 01:20:40.677975 kernel: pci 0000:00:18.7: BAR 13: no space for [io size 0x1000] Aug 13 01:20:40.678020 kernel: pci 0000:00:18.7: BAR 13: failed to assign [io size 0x1000] Aug 13 01:20:40.678066 kernel: pci 0000:00:18.6: BAR 13: no space for [io size 0x1000] Aug 13 01:20:40.678111 kernel: pci 0000:00:18.6: BAR 13: failed to assign [io size 0x1000] Aug 13 01:20:40.678157 kernel: pci 0000:00:18.5: BAR 13: no space for [io size 0x1000] Aug 13 01:20:40.678202 kernel: pci 0000:00:18.5: BAR 13: failed to assign [io size 0x1000] 
Aug 13 01:20:40.678250 kernel: pci 0000:00:18.4: BAR 13: no space for [io size 0x1000] Aug 13 01:20:40.678296 kernel: pci 0000:00:18.4: BAR 13: failed to assign [io size 0x1000] Aug 13 01:20:40.678341 kernel: pci 0000:00:18.3: BAR 13: no space for [io size 0x1000] Aug 13 01:20:40.678388 kernel: pci 0000:00:18.3: BAR 13: failed to assign [io size 0x1000] Aug 13 01:20:40.678507 kernel: pci 0000:00:18.2: BAR 13: no space for [io size 0x1000] Aug 13 01:20:40.678556 kernel: pci 0000:00:18.2: BAR 13: failed to assign [io size 0x1000] Aug 13 01:20:40.678602 kernel: pci 0000:00:17.7: BAR 13: no space for [io size 0x1000] Aug 13 01:20:40.678647 kernel: pci 0000:00:17.7: BAR 13: failed to assign [io size 0x1000] Aug 13 01:20:40.678693 kernel: pci 0000:00:17.6: BAR 13: no space for [io size 0x1000] Aug 13 01:20:40.678738 kernel: pci 0000:00:17.6: BAR 13: failed to assign [io size 0x1000] Aug 13 01:20:40.678791 kernel: pci 0000:00:17.5: BAR 13: no space for [io size 0x1000] Aug 13 01:20:40.678837 kernel: pci 0000:00:17.5: BAR 13: failed to assign [io size 0x1000] Aug 13 01:20:40.678882 kernel: pci 0000:00:17.4: BAR 13: no space for [io size 0x1000] Aug 13 01:20:40.678928 kernel: pci 0000:00:17.4: BAR 13: failed to assign [io size 0x1000] Aug 13 01:20:40.678974 kernel: pci 0000:00:17.3: BAR 13: no space for [io size 0x1000] Aug 13 01:20:40.679019 kernel: pci 0000:00:17.3: BAR 13: failed to assign [io size 0x1000] Aug 13 01:20:40.679065 kernel: pci 0000:00:16.7: BAR 13: no space for [io size 0x1000] Aug 13 01:20:40.679111 kernel: pci 0000:00:16.7: BAR 13: failed to assign [io size 0x1000] Aug 13 01:20:40.679156 kernel: pci 0000:00:16.6: BAR 13: no space for [io size 0x1000] Aug 13 01:20:40.679204 kernel: pci 0000:00:16.6: BAR 13: failed to assign [io size 0x1000] Aug 13 01:20:40.679429 kernel: pci 0000:00:16.5: BAR 13: no space for [io size 0x1000] Aug 13 01:20:40.679478 kernel: pci 0000:00:16.5: BAR 13: failed to assign [io size 0x1000] Aug 13 01:20:40.679525 kernel: pci 0000:00:16.4: BAR 13: no space for [io size 0x1000] Aug 13 01:20:40.679571 kernel: pci 0000:00:16.4: BAR 13: failed to assign [io size 0x1000] Aug 13 01:20:40.679889 kernel: pci 0000:00:16.3: BAR 13: no space for [io size 0x1000] Aug 13 01:20:40.679943 kernel: pci 0000:00:16.3: BAR 13: failed to assign [io size 0x1000] Aug 13 01:20:40.679991 kernel: pci 0000:00:15.7: BAR 13: no space for [io size 0x1000] Aug 13 01:20:40.680038 kernel: pci 0000:00:15.7: BAR 13: failed to assign [io size 0x1000] Aug 13 01:20:40.680084 kernel: pci 0000:00:15.6: BAR 13: no space for [io size 0x1000] Aug 13 01:20:40.680133 kernel: pci 0000:00:15.6: BAR 13: failed to assign [io size 0x1000] Aug 13 01:20:40.680179 kernel: pci 0000:00:15.5: BAR 13: no space for [io size 0x1000] Aug 13 01:20:40.680225 kernel: pci 0000:00:15.5: BAR 13: failed to assign [io size 0x1000] Aug 13 01:20:40.680270 kernel: pci 0000:00:15.4: BAR 13: no space for [io size 0x1000] Aug 13 01:20:40.680315 kernel: pci 0000:00:15.4: BAR 13: failed to assign [io size 0x1000] Aug 13 01:20:40.680362 kernel: pci 0000:00:15.3: BAR 13: no space for [io size 0x1000] Aug 13 01:20:40.680407 kernel: pci 0000:00:15.3: BAR 13: failed to assign [io size 0x1000] Aug 13 01:20:40.680499 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] Aug 13 01:20:40.680545 kernel: pci 0000:00:11.0: PCI bridge to [bus 02] Aug 13 01:20:40.680593 kernel: pci 0000:00:11.0: bridge window [io 0x2000-0x3fff] Aug 13 01:20:40.680787 kernel: pci 0000:00:11.0: bridge window [mem 0xfd600000-0xfdffffff] Aug 13 01:20:40.680836 kernel: 
pci 0000:00:11.0: bridge window [mem 0xe7b00000-0xe7ffffff 64bit pref] Aug 13 01:20:40.680887 kernel: pci 0000:03:00.0: BAR 6: assigned [mem 0xfd500000-0xfd50ffff pref] Aug 13 01:20:40.680934 kernel: pci 0000:00:15.0: PCI bridge to [bus 03] Aug 13 01:20:40.680980 kernel: pci 0000:00:15.0: bridge window [io 0x4000-0x4fff] Aug 13 01:20:40.681026 kernel: pci 0000:00:15.0: bridge window [mem 0xfd500000-0xfd5fffff] Aug 13 01:20:40.681072 kernel: pci 0000:00:15.0: bridge window [mem 0xc0000000-0xc01fffff 64bit pref] Aug 13 01:20:40.681121 kernel: pci 0000:00:15.1: PCI bridge to [bus 04] Aug 13 01:20:40.681167 kernel: pci 0000:00:15.1: bridge window [io 0x8000-0x8fff] Aug 13 01:20:40.681212 kernel: pci 0000:00:15.1: bridge window [mem 0xfd100000-0xfd1fffff] Aug 13 01:20:40.681258 kernel: pci 0000:00:15.1: bridge window [mem 0xe7800000-0xe78fffff 64bit pref] Aug 13 01:20:40.681305 kernel: pci 0000:00:15.2: PCI bridge to [bus 05] Aug 13 01:20:40.681350 kernel: pci 0000:00:15.2: bridge window [io 0xc000-0xcfff] Aug 13 01:20:40.681397 kernel: pci 0000:00:15.2: bridge window [mem 0xfcd00000-0xfcdfffff] Aug 13 01:20:40.681455 kernel: pci 0000:00:15.2: bridge window [mem 0xe7400000-0xe74fffff 64bit pref] Aug 13 01:20:40.681502 kernel: pci 0000:00:15.3: PCI bridge to [bus 06] Aug 13 01:20:40.681551 kernel: pci 0000:00:15.3: bridge window [mem 0xfc900000-0xfc9fffff] Aug 13 01:20:40.681596 kernel: pci 0000:00:15.3: bridge window [mem 0xe7000000-0xe70fffff 64bit pref] Aug 13 01:20:40.681642 kernel: pci 0000:00:15.4: PCI bridge to [bus 07] Aug 13 01:20:40.681896 kernel: pci 0000:00:15.4: bridge window [mem 0xfc500000-0xfc5fffff] Aug 13 01:20:40.681947 kernel: pci 0000:00:15.4: bridge window [mem 0xe6c00000-0xe6cfffff 64bit pref] Aug 13 01:20:40.681998 kernel: pci 0000:00:15.5: PCI bridge to [bus 08] Aug 13 01:20:40.682055 kernel: pci 0000:00:15.5: bridge window [mem 0xfc100000-0xfc1fffff] Aug 13 01:20:40.682104 kernel: pci 0000:00:15.5: bridge window [mem 0xe6800000-0xe68fffff 64bit pref] Aug 13 01:20:40.682150 kernel: pci 0000:00:15.6: PCI bridge to [bus 09] Aug 13 01:20:40.682196 kernel: pci 0000:00:15.6: bridge window [mem 0xfbd00000-0xfbdfffff] Aug 13 01:20:40.682243 kernel: pci 0000:00:15.6: bridge window [mem 0xe6400000-0xe64fffff 64bit pref] Aug 13 01:20:40.682288 kernel: pci 0000:00:15.7: PCI bridge to [bus 0a] Aug 13 01:20:40.682334 kernel: pci 0000:00:15.7: bridge window [mem 0xfb900000-0xfb9fffff] Aug 13 01:20:40.682379 kernel: pci 0000:00:15.7: bridge window [mem 0xe6000000-0xe60fffff 64bit pref] Aug 13 01:20:40.682690 kernel: pci 0000:0b:00.0: BAR 6: assigned [mem 0xfd400000-0xfd40ffff pref] Aug 13 01:20:40.682748 kernel: pci 0000:00:16.0: PCI bridge to [bus 0b] Aug 13 01:20:40.682806 kernel: pci 0000:00:16.0: bridge window [io 0x5000-0x5fff] Aug 13 01:20:40.682856 kernel: pci 0000:00:16.0: bridge window [mem 0xfd400000-0xfd4fffff] Aug 13 01:20:40.682903 kernel: pci 0000:00:16.0: bridge window [mem 0xc0200000-0xc03fffff 64bit pref] Aug 13 01:20:40.682949 kernel: pci 0000:00:16.1: PCI bridge to [bus 0c] Aug 13 01:20:40.682996 kernel: pci 0000:00:16.1: bridge window [io 0x9000-0x9fff] Aug 13 01:20:40.683042 kernel: pci 0000:00:16.1: bridge window [mem 0xfd000000-0xfd0fffff] Aug 13 01:20:40.683087 kernel: pci 0000:00:16.1: bridge window [mem 0xe7700000-0xe77fffff 64bit pref] Aug 13 01:20:40.683133 kernel: pci 0000:00:16.2: PCI bridge to [bus 0d] Aug 13 01:20:40.683180 kernel: pci 0000:00:16.2: bridge window [io 0xd000-0xdfff] Aug 13 01:20:40.683228 kernel: pci 0000:00:16.2: bridge window [mem 
0xfcc00000-0xfccfffff] Aug 13 01:20:40.683274 kernel: pci 0000:00:16.2: bridge window [mem 0xe7300000-0xe73fffff 64bit pref] Aug 13 01:20:40.683320 kernel: pci 0000:00:16.3: PCI bridge to [bus 0e] Aug 13 01:20:40.683367 kernel: pci 0000:00:16.3: bridge window [mem 0xfc800000-0xfc8fffff] Aug 13 01:20:40.683412 kernel: pci 0000:00:16.3: bridge window [mem 0xe6f00000-0xe6ffffff 64bit pref] Aug 13 01:20:40.683473 kernel: pci 0000:00:16.4: PCI bridge to [bus 0f] Aug 13 01:20:40.683520 kernel: pci 0000:00:16.4: bridge window [mem 0xfc400000-0xfc4fffff] Aug 13 01:20:40.683567 kernel: pci 0000:00:16.4: bridge window [mem 0xe6b00000-0xe6bfffff 64bit pref] Aug 13 01:20:40.683613 kernel: pci 0000:00:16.5: PCI bridge to [bus 10] Aug 13 01:20:40.683661 kernel: pci 0000:00:16.5: bridge window [mem 0xfc000000-0xfc0fffff] Aug 13 01:20:40.683708 kernel: pci 0000:00:16.5: bridge window [mem 0xe6700000-0xe67fffff 64bit pref] Aug 13 01:20:40.683754 kernel: pci 0000:00:16.6: PCI bridge to [bus 11] Aug 13 01:20:40.683800 kernel: pci 0000:00:16.6: bridge window [mem 0xfbc00000-0xfbcfffff] Aug 13 01:20:40.683845 kernel: pci 0000:00:16.6: bridge window [mem 0xe6300000-0xe63fffff 64bit pref] Aug 13 01:20:40.683892 kernel: pci 0000:00:16.7: PCI bridge to [bus 12] Aug 13 01:20:40.683937 kernel: pci 0000:00:16.7: bridge window [mem 0xfb800000-0xfb8fffff] Aug 13 01:20:40.683984 kernel: pci 0000:00:16.7: bridge window [mem 0xe5f00000-0xe5ffffff 64bit pref] Aug 13 01:20:40.684030 kernel: pci 0000:00:17.0: PCI bridge to [bus 13] Aug 13 01:20:40.684077 kernel: pci 0000:00:17.0: bridge window [io 0x6000-0x6fff] Aug 13 01:20:40.684125 kernel: pci 0000:00:17.0: bridge window [mem 0xfd300000-0xfd3fffff] Aug 13 01:20:40.684170 kernel: pci 0000:00:17.0: bridge window [mem 0xe7a00000-0xe7afffff 64bit pref] Aug 13 01:20:40.684216 kernel: pci 0000:00:17.1: PCI bridge to [bus 14] Aug 13 01:20:40.684263 kernel: pci 0000:00:17.1: bridge window [io 0xa000-0xafff] Aug 13 01:20:40.684308 kernel: pci 0000:00:17.1: bridge window [mem 0xfcf00000-0xfcffffff] Aug 13 01:20:40.684355 kernel: pci 0000:00:17.1: bridge window [mem 0xe7600000-0xe76fffff 64bit pref] Aug 13 01:20:40.684401 kernel: pci 0000:00:17.2: PCI bridge to [bus 15] Aug 13 01:20:40.684454 kernel: pci 0000:00:17.2: bridge window [io 0xe000-0xefff] Aug 13 01:20:40.684501 kernel: pci 0000:00:17.2: bridge window [mem 0xfcb00000-0xfcbfffff] Aug 13 01:20:40.684548 kernel: pci 0000:00:17.2: bridge window [mem 0xe7200000-0xe72fffff 64bit pref] Aug 13 01:20:40.684594 kernel: pci 0000:00:17.3: PCI bridge to [bus 16] Aug 13 01:20:40.684640 kernel: pci 0000:00:17.3: bridge window [mem 0xfc700000-0xfc7fffff] Aug 13 01:20:40.684685 kernel: pci 0000:00:17.3: bridge window [mem 0xe6e00000-0xe6efffff 64bit pref] Aug 13 01:20:40.684731 kernel: pci 0000:00:17.4: PCI bridge to [bus 17] Aug 13 01:20:40.684776 kernel: pci 0000:00:17.4: bridge window [mem 0xfc300000-0xfc3fffff] Aug 13 01:20:40.684865 kernel: pci 0000:00:17.4: bridge window [mem 0xe6a00000-0xe6afffff 64bit pref] Aug 13 01:20:40.684926 kernel: pci 0000:00:17.5: PCI bridge to [bus 18] Aug 13 01:20:40.684971 kernel: pci 0000:00:17.5: bridge window [mem 0xfbf00000-0xfbffffff] Aug 13 01:20:40.685017 kernel: pci 0000:00:17.5: bridge window [mem 0xe6600000-0xe66fffff 64bit pref] Aug 13 01:20:40.685065 kernel: pci 0000:00:17.6: PCI bridge to [bus 19] Aug 13 01:20:40.685112 kernel: pci 0000:00:17.6: bridge window [mem 0xfbb00000-0xfbbfffff] Aug 13 01:20:40.685158 kernel: pci 0000:00:17.6: bridge window [mem 0xe6200000-0xe62fffff 64bit pref] Aug 
13 01:20:40.685204 kernel: pci 0000:00:17.7: PCI bridge to [bus 1a] Aug 13 01:20:40.685250 kernel: pci 0000:00:17.7: bridge window [mem 0xfb700000-0xfb7fffff] Aug 13 01:20:40.685296 kernel: pci 0000:00:17.7: bridge window [mem 0xe5e00000-0xe5efffff 64bit pref] Aug 13 01:20:40.685341 kernel: pci 0000:00:18.0: PCI bridge to [bus 1b] Aug 13 01:20:40.685388 kernel: pci 0000:00:18.0: bridge window [io 0x7000-0x7fff] Aug 13 01:20:40.685439 kernel: pci 0000:00:18.0: bridge window [mem 0xfd200000-0xfd2fffff] Aug 13 01:20:40.685488 kernel: pci 0000:00:18.0: bridge window [mem 0xe7900000-0xe79fffff 64bit pref] Aug 13 01:20:40.685814 kernel: pci 0000:00:18.1: PCI bridge to [bus 1c] Aug 13 01:20:40.685869 kernel: pci 0000:00:18.1: bridge window [io 0xb000-0xbfff] Aug 13 01:20:40.686129 kernel: pci 0000:00:18.1: bridge window [mem 0xfce00000-0xfcefffff] Aug 13 01:20:40.686184 kernel: pci 0000:00:18.1: bridge window [mem 0xe7500000-0xe75fffff 64bit pref] Aug 13 01:20:40.686232 kernel: pci 0000:00:18.2: PCI bridge to [bus 1d] Aug 13 01:20:40.686282 kernel: pci 0000:00:18.2: bridge window [mem 0xfca00000-0xfcafffff] Aug 13 01:20:40.686330 kernel: pci 0000:00:18.2: bridge window [mem 0xe7100000-0xe71fffff 64bit pref] Aug 13 01:20:40.686377 kernel: pci 0000:00:18.3: PCI bridge to [bus 1e] Aug 13 01:20:40.686431 kernel: pci 0000:00:18.3: bridge window [mem 0xfc600000-0xfc6fffff] Aug 13 01:20:40.686481 kernel: pci 0000:00:18.3: bridge window [mem 0xe6d00000-0xe6dfffff 64bit pref] Aug 13 01:20:40.686584 kernel: pci 0000:00:18.4: PCI bridge to [bus 1f] Aug 13 01:20:40.686649 kernel: pci 0000:00:18.4: bridge window [mem 0xfc200000-0xfc2fffff] Aug 13 01:20:40.686697 kernel: pci 0000:00:18.4: bridge window [mem 0xe6900000-0xe69fffff 64bit pref] Aug 13 01:20:40.686744 kernel: pci 0000:00:18.5: PCI bridge to [bus 20] Aug 13 01:20:40.686791 kernel: pci 0000:00:18.5: bridge window [mem 0xfbe00000-0xfbefffff] Aug 13 01:20:40.686836 kernel: pci 0000:00:18.5: bridge window [mem 0xe6500000-0xe65fffff 64bit pref] Aug 13 01:20:40.686883 kernel: pci 0000:00:18.6: PCI bridge to [bus 21] Aug 13 01:20:40.686929 kernel: pci 0000:00:18.6: bridge window [mem 0xfba00000-0xfbafffff] Aug 13 01:20:40.686978 kernel: pci 0000:00:18.6: bridge window [mem 0xe6100000-0xe61fffff 64bit pref] Aug 13 01:20:40.687024 kernel: pci 0000:00:18.7: PCI bridge to [bus 22] Aug 13 01:20:40.687069 kernel: pci 0000:00:18.7: bridge window [mem 0xfb600000-0xfb6fffff] Aug 13 01:20:40.687115 kernel: pci 0000:00:18.7: bridge window [mem 0xe5d00000-0xe5dfffff 64bit pref] Aug 13 01:20:40.687160 kernel: pci_bus 0000:00: resource 4 [mem 0x000a0000-0x000bffff window] Aug 13 01:20:40.687202 kernel: pci_bus 0000:00: resource 5 [mem 0x000cc000-0x000dbfff window] Aug 13 01:20:40.687243 kernel: pci_bus 0000:00: resource 6 [mem 0xc0000000-0xfebfffff window] Aug 13 01:20:40.687347 kernel: pci_bus 0000:00: resource 7 [io 0x0000-0x0cf7 window] Aug 13 01:20:40.687394 kernel: pci_bus 0000:00: resource 8 [io 0x0d00-0xfeff window] Aug 13 01:20:40.687448 kernel: pci_bus 0000:02: resource 0 [io 0x2000-0x3fff] Aug 13 01:20:40.687493 kernel: pci_bus 0000:02: resource 1 [mem 0xfd600000-0xfdffffff] Aug 13 01:20:40.687535 kernel: pci_bus 0000:02: resource 2 [mem 0xe7b00000-0xe7ffffff 64bit pref] Aug 13 01:20:40.687577 kernel: pci_bus 0000:02: resource 4 [mem 0x000a0000-0x000bffff window] Aug 13 01:20:40.687620 kernel: pci_bus 0000:02: resource 5 [mem 0x000cc000-0x000dbfff window] Aug 13 01:20:40.687663 kernel: pci_bus 0000:02: resource 6 [mem 0xc0000000-0xfebfffff window] Aug 13 
01:20:40.687705 kernel: pci_bus 0000:02: resource 7 [io 0x0000-0x0cf7 window] Aug 13 01:20:40.687751 kernel: pci_bus 0000:02: resource 8 [io 0x0d00-0xfeff window] Aug 13 01:20:40.687798 kernel: pci_bus 0000:03: resource 0 [io 0x4000-0x4fff] Aug 13 01:20:40.687841 kernel: pci_bus 0000:03: resource 1 [mem 0xfd500000-0xfd5fffff] Aug 13 01:20:40.687883 kernel: pci_bus 0000:03: resource 2 [mem 0xc0000000-0xc01fffff 64bit pref] Aug 13 01:20:40.687932 kernel: pci_bus 0000:04: resource 0 [io 0x8000-0x8fff] Aug 13 01:20:40.687975 kernel: pci_bus 0000:04: resource 1 [mem 0xfd100000-0xfd1fffff] Aug 13 01:20:40.688019 kernel: pci_bus 0000:04: resource 2 [mem 0xe7800000-0xe78fffff 64bit pref] Aug 13 01:20:40.688068 kernel: pci_bus 0000:05: resource 0 [io 0xc000-0xcfff] Aug 13 01:20:40.688112 kernel: pci_bus 0000:05: resource 1 [mem 0xfcd00000-0xfcdfffff] Aug 13 01:20:40.688154 kernel: pci_bus 0000:05: resource 2 [mem 0xe7400000-0xe74fffff 64bit pref] Aug 13 01:20:40.688200 kernel: pci_bus 0000:06: resource 1 [mem 0xfc900000-0xfc9fffff] Aug 13 01:20:40.688243 kernel: pci_bus 0000:06: resource 2 [mem 0xe7000000-0xe70fffff 64bit pref] Aug 13 01:20:40.688514 kernel: pci_bus 0000:07: resource 1 [mem 0xfc500000-0xfc5fffff] Aug 13 01:20:40.688572 kernel: pci_bus 0000:07: resource 2 [mem 0xe6c00000-0xe6cfffff 64bit pref] Aug 13 01:20:40.688627 kernel: pci_bus 0000:08: resource 1 [mem 0xfc100000-0xfc1fffff] Aug 13 01:20:40.688695 kernel: pci_bus 0000:08: resource 2 [mem 0xe6800000-0xe68fffff 64bit pref] Aug 13 01:20:40.688949 kernel: pci_bus 0000:09: resource 1 [mem 0xfbd00000-0xfbdfffff] Aug 13 01:20:40.688999 kernel: pci_bus 0000:09: resource 2 [mem 0xe6400000-0xe64fffff 64bit pref] Aug 13 01:20:40.689049 kernel: pci_bus 0000:0a: resource 1 [mem 0xfb900000-0xfb9fffff] Aug 13 01:20:40.689093 kernel: pci_bus 0000:0a: resource 2 [mem 0xe6000000-0xe60fffff 64bit pref] Aug 13 01:20:40.689144 kernel: pci_bus 0000:0b: resource 0 [io 0x5000-0x5fff] Aug 13 01:20:40.689187 kernel: pci_bus 0000:0b: resource 1 [mem 0xfd400000-0xfd4fffff] Aug 13 01:20:40.689229 kernel: pci_bus 0000:0b: resource 2 [mem 0xc0200000-0xc03fffff 64bit pref] Aug 13 01:20:40.689276 kernel: pci_bus 0000:0c: resource 0 [io 0x9000-0x9fff] Aug 13 01:20:40.689319 kernel: pci_bus 0000:0c: resource 1 [mem 0xfd000000-0xfd0fffff] Aug 13 01:20:40.689365 kernel: pci_bus 0000:0c: resource 2 [mem 0xe7700000-0xe77fffff 64bit pref] Aug 13 01:20:40.689491 kernel: pci_bus 0000:0d: resource 0 [io 0xd000-0xdfff] Aug 13 01:20:40.689545 kernel: pci_bus 0000:0d: resource 1 [mem 0xfcc00000-0xfccfffff] Aug 13 01:20:40.689589 kernel: pci_bus 0000:0d: resource 2 [mem 0xe7300000-0xe73fffff 64bit pref] Aug 13 01:20:40.689636 kernel: pci_bus 0000:0e: resource 1 [mem 0xfc800000-0xfc8fffff] Aug 13 01:20:40.689679 kernel: pci_bus 0000:0e: resource 2 [mem 0xe6f00000-0xe6ffffff 64bit pref] Aug 13 01:20:40.689726 kernel: pci_bus 0000:0f: resource 1 [mem 0xfc400000-0xfc4fffff] Aug 13 01:20:40.689772 kernel: pci_bus 0000:0f: resource 2 [mem 0xe6b00000-0xe6bfffff 64bit pref] Aug 13 01:20:40.689821 kernel: pci_bus 0000:10: resource 1 [mem 0xfc000000-0xfc0fffff] Aug 13 01:20:40.689864 kernel: pci_bus 0000:10: resource 2 [mem 0xe6700000-0xe67fffff 64bit pref] Aug 13 01:20:40.689911 kernel: pci_bus 0000:11: resource 1 [mem 0xfbc00000-0xfbcfffff] Aug 13 01:20:40.689954 kernel: pci_bus 0000:11: resource 2 [mem 0xe6300000-0xe63fffff 64bit pref] Aug 13 01:20:40.690000 kernel: pci_bus 0000:12: resource 1 [mem 0xfb800000-0xfb8fffff] Aug 13 01:20:40.690045 kernel: pci_bus 0000:12: resource 2 
[mem 0xe5f00000-0xe5ffffff 64bit pref] Aug 13 01:20:40.690091 kernel: pci_bus 0000:13: resource 0 [io 0x6000-0x6fff] Aug 13 01:20:40.690134 kernel: pci_bus 0000:13: resource 1 [mem 0xfd300000-0xfd3fffff] Aug 13 01:20:40.690176 kernel: pci_bus 0000:13: resource 2 [mem 0xe7a00000-0xe7afffff 64bit pref] Aug 13 01:20:40.690223 kernel: pci_bus 0000:14: resource 0 [io 0xa000-0xafff] Aug 13 01:20:40.690266 kernel: pci_bus 0000:14: resource 1 [mem 0xfcf00000-0xfcffffff] Aug 13 01:20:40.690308 kernel: pci_bus 0000:14: resource 2 [mem 0xe7600000-0xe76fffff 64bit pref] Aug 13 01:20:40.690355 kernel: pci_bus 0000:15: resource 0 [io 0xe000-0xefff] Aug 13 01:20:40.690398 kernel: pci_bus 0000:15: resource 1 [mem 0xfcb00000-0xfcbfffff] Aug 13 01:20:40.690468 kernel: pci_bus 0000:15: resource 2 [mem 0xe7200000-0xe72fffff 64bit pref] Aug 13 01:20:40.690726 kernel: pci_bus 0000:16: resource 1 [mem 0xfc700000-0xfc7fffff] Aug 13 01:20:40.690774 kernel: pci_bus 0000:16: resource 2 [mem 0xe6e00000-0xe6efffff 64bit pref] Aug 13 01:20:40.690829 kernel: pci_bus 0000:17: resource 1 [mem 0xfc300000-0xfc3fffff] Aug 13 01:20:40.690895 kernel: pci_bus 0000:17: resource 2 [mem 0xe6a00000-0xe6afffff 64bit pref] Aug 13 01:20:40.691151 kernel: pci_bus 0000:18: resource 1 [mem 0xfbf00000-0xfbffffff] Aug 13 01:20:40.691200 kernel: pci_bus 0000:18: resource 2 [mem 0xe6600000-0xe66fffff 64bit pref] Aug 13 01:20:40.691557 kernel: pci_bus 0000:19: resource 1 [mem 0xfbb00000-0xfbbfffff] Aug 13 01:20:40.691608 kernel: pci_bus 0000:19: resource 2 [mem 0xe6200000-0xe62fffff 64bit pref] Aug 13 01:20:40.691656 kernel: pci_bus 0000:1a: resource 1 [mem 0xfb700000-0xfb7fffff] Aug 13 01:20:40.691700 kernel: pci_bus 0000:1a: resource 2 [mem 0xe5e00000-0xe5efffff 64bit pref] Aug 13 01:20:40.691750 kernel: pci_bus 0000:1b: resource 0 [io 0x7000-0x7fff] Aug 13 01:20:40.691793 kernel: pci_bus 0000:1b: resource 1 [mem 0xfd200000-0xfd2fffff] Aug 13 01:20:40.691836 kernel: pci_bus 0000:1b: resource 2 [mem 0xe7900000-0xe79fffff 64bit pref] Aug 13 01:20:40.691882 kernel: pci_bus 0000:1c: resource 0 [io 0xb000-0xbfff] Aug 13 01:20:40.691925 kernel: pci_bus 0000:1c: resource 1 [mem 0xfce00000-0xfcefffff] Aug 13 01:20:40.691967 kernel: pci_bus 0000:1c: resource 2 [mem 0xe7500000-0xe75fffff 64bit pref] Aug 13 01:20:40.692018 kernel: pci_bus 0000:1d: resource 1 [mem 0xfca00000-0xfcafffff] Aug 13 01:20:40.692061 kernel: pci_bus 0000:1d: resource 2 [mem 0xe7100000-0xe71fffff 64bit pref] Aug 13 01:20:40.692109 kernel: pci_bus 0000:1e: resource 1 [mem 0xfc600000-0xfc6fffff] Aug 13 01:20:40.692152 kernel: pci_bus 0000:1e: resource 2 [mem 0xe6d00000-0xe6dfffff 64bit pref] Aug 13 01:20:40.692198 kernel: pci_bus 0000:1f: resource 1 [mem 0xfc200000-0xfc2fffff] Aug 13 01:20:40.692242 kernel: pci_bus 0000:1f: resource 2 [mem 0xe6900000-0xe69fffff 64bit pref] Aug 13 01:20:40.692291 kernel: pci_bus 0000:20: resource 1 [mem 0xfbe00000-0xfbefffff] Aug 13 01:20:40.692335 kernel: pci_bus 0000:20: resource 2 [mem 0xe6500000-0xe65fffff 64bit pref] Aug 13 01:20:40.692384 kernel: pci_bus 0000:21: resource 1 [mem 0xfba00000-0xfbafffff] Aug 13 01:20:40.692667 kernel: pci_bus 0000:21: resource 2 [mem 0xe6100000-0xe61fffff 64bit pref] Aug 13 01:20:40.692720 kernel: pci_bus 0000:22: resource 1 [mem 0xfb600000-0xfb6fffff] Aug 13 01:20:40.692766 kernel: pci_bus 0000:22: resource 2 [mem 0xe5d00000-0xe5dfffff 64bit pref] Aug 13 01:20:40.692838 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Aug 13 01:20:40.692850 kernel: PCI: CLS 32 bytes, default 64 Aug 13 
01:20:40.692857 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Aug 13 01:20:40.692864 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x311fd3cd494, max_idle_ns: 440795223879 ns Aug 13 01:20:40.692871 kernel: clocksource: Switched to clocksource tsc Aug 13 01:20:40.692877 kernel: Initialise system trusted keyrings Aug 13 01:20:40.693073 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Aug 13 01:20:40.693083 kernel: Key type asymmetric registered Aug 13 01:20:40.693092 kernel: Asymmetric key parser 'x509' registered Aug 13 01:20:40.693098 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Aug 13 01:20:40.693105 kernel: io scheduler mq-deadline registered Aug 13 01:20:40.693111 kernel: io scheduler kyber registered Aug 13 01:20:40.693117 kernel: io scheduler bfq registered Aug 13 01:20:40.693174 kernel: pcieport 0000:00:15.0: PME: Signaling with IRQ 24 Aug 13 01:20:40.693224 kernel: pcieport 0000:00:15.0: pciehp: Slot #160 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Aug 13 01:20:40.693277 kernel: pcieport 0000:00:15.1: PME: Signaling with IRQ 25 Aug 13 01:20:40.693324 kernel: pcieport 0000:00:15.1: pciehp: Slot #161 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Aug 13 01:20:40.693373 kernel: pcieport 0000:00:15.2: PME: Signaling with IRQ 26 Aug 13 01:20:40.693430 kernel: pcieport 0000:00:15.2: pciehp: Slot #162 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Aug 13 01:20:40.693487 kernel: pcieport 0000:00:15.3: PME: Signaling with IRQ 27 Aug 13 01:20:40.693534 kernel: pcieport 0000:00:15.3: pciehp: Slot #163 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Aug 13 01:20:40.693580 kernel: pcieport 0000:00:15.4: PME: Signaling with IRQ 28 Aug 13 01:20:40.693626 kernel: pcieport 0000:00:15.4: pciehp: Slot #164 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Aug 13 01:20:40.693675 kernel: pcieport 0000:00:15.5: PME: Signaling with IRQ 29 Aug 13 01:20:40.693722 kernel: pcieport 0000:00:15.5: pciehp: Slot #165 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Aug 13 01:20:40.693769 kernel: pcieport 0000:00:15.6: PME: Signaling with IRQ 30 Aug 13 01:20:40.693815 kernel: pcieport 0000:00:15.6: pciehp: Slot #166 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Aug 13 01:20:40.693862 kernel: pcieport 0000:00:15.7: PME: Signaling with IRQ 31 Aug 13 01:20:40.693910 kernel: pcieport 0000:00:15.7: pciehp: Slot #167 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Aug 13 01:20:40.693956 kernel: pcieport 0000:00:16.0: PME: Signaling with IRQ 32 Aug 13 01:20:40.694003 kernel: pcieport 0000:00:16.0: pciehp: Slot #192 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Aug 13 01:20:40.694049 kernel: pcieport 0000:00:16.1: PME: Signaling with IRQ 33 Aug 13 01:20:40.694095 kernel: pcieport 0000:00:16.1: pciehp: Slot #193 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Aug 13 01:20:40.694142 kernel: pcieport 0000:00:16.2: PME: Signaling with IRQ 34 Aug 13 01:20:40.694188 kernel: pcieport 0000:00:16.2: pciehp: 
Slot #194 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Aug 13 01:20:40.694236 kernel: pcieport 0000:00:16.3: PME: Signaling with IRQ 35 Aug 13 01:20:40.694283 kernel: pcieport 0000:00:16.3: pciehp: Slot #195 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Aug 13 01:20:40.694329 kernel: pcieport 0000:00:16.4: PME: Signaling with IRQ 36 Aug 13 01:20:40.694374 kernel: pcieport 0000:00:16.4: pciehp: Slot #196 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Aug 13 01:20:40.694692 kernel: pcieport 0000:00:16.5: PME: Signaling with IRQ 37 Aug 13 01:20:40.694752 kernel: pcieport 0000:00:16.5: pciehp: Slot #197 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Aug 13 01:20:40.694802 kernel: pcieport 0000:00:16.6: PME: Signaling with IRQ 38 Aug 13 01:20:40.694895 kernel: pcieport 0000:00:16.6: pciehp: Slot #198 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Aug 13 01:20:40.695160 kernel: pcieport 0000:00:16.7: PME: Signaling with IRQ 39 Aug 13 01:20:40.695221 kernel: pcieport 0000:00:16.7: pciehp: Slot #199 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Aug 13 01:20:40.695271 kernel: pcieport 0000:00:17.0: PME: Signaling with IRQ 40 Aug 13 01:20:40.695320 kernel: pcieport 0000:00:17.0: pciehp: Slot #224 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Aug 13 01:20:40.695367 kernel: pcieport 0000:00:17.1: PME: Signaling with IRQ 41 Aug 13 01:20:40.695420 kernel: pcieport 0000:00:17.1: pciehp: Slot #225 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Aug 13 01:20:40.695475 kernel: pcieport 0000:00:17.2: PME: Signaling with IRQ 42 Aug 13 01:20:40.695521 kernel: pcieport 0000:00:17.2: pciehp: Slot #226 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Aug 13 01:20:40.695568 kernel: pcieport 0000:00:17.3: PME: Signaling with IRQ 43 Aug 13 01:20:40.695616 kernel: pcieport 0000:00:17.3: pciehp: Slot #227 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Aug 13 01:20:40.695665 kernel: pcieport 0000:00:17.4: PME: Signaling with IRQ 44 Aug 13 01:20:40.695712 kernel: pcieport 0000:00:17.4: pciehp: Slot #228 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Aug 13 01:20:40.695986 kernel: pcieport 0000:00:17.5: PME: Signaling with IRQ 45 Aug 13 01:20:40.696037 kernel: pcieport 0000:00:17.5: pciehp: Slot #229 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Aug 13 01:20:40.696087 kernel: pcieport 0000:00:17.6: PME: Signaling with IRQ 46 Aug 13 01:20:40.696383 kernel: pcieport 0000:00:17.6: pciehp: Slot #230 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Aug 13 01:20:40.696479 kernel: pcieport 0000:00:17.7: PME: Signaling with IRQ 47 Aug 13 01:20:40.696531 kernel: pcieport 0000:00:17.7: pciehp: Slot #231 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Aug 13 01:20:40.696581 kernel: pcieport 0000:00:18.0: PME: Signaling with IRQ 48 Aug 13 01:20:40.696888 kernel: pcieport 0000:00:18.0: pciehp: Slot #256 
AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Aug 13 01:20:40.696945 kernel: pcieport 0000:00:18.1: PME: Signaling with IRQ 49 Aug 13 01:20:40.696997 kernel: pcieport 0000:00:18.1: pciehp: Slot #257 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Aug 13 01:20:40.697045 kernel: pcieport 0000:00:18.2: PME: Signaling with IRQ 50 Aug 13 01:20:40.697093 kernel: pcieport 0000:00:18.2: pciehp: Slot #258 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Aug 13 01:20:40.697139 kernel: pcieport 0000:00:18.3: PME: Signaling with IRQ 51 Aug 13 01:20:40.697404 kernel: pcieport 0000:00:18.3: pciehp: Slot #259 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Aug 13 01:20:40.697466 kernel: pcieport 0000:00:18.4: PME: Signaling with IRQ 52 Aug 13 01:20:40.697518 kernel: pcieport 0000:00:18.4: pciehp: Slot #260 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Aug 13 01:20:40.697566 kernel: pcieport 0000:00:18.5: PME: Signaling with IRQ 53 Aug 13 01:20:40.697619 kernel: pcieport 0000:00:18.5: pciehp: Slot #261 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Aug 13 01:20:40.697667 kernel: pcieport 0000:00:18.6: PME: Signaling with IRQ 54 Aug 13 01:20:40.697714 kernel: pcieport 0000:00:18.6: pciehp: Slot #262 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Aug 13 01:20:40.697762 kernel: pcieport 0000:00:18.7: PME: Signaling with IRQ 55 Aug 13 01:20:40.697808 kernel: pcieport 0000:00:18.7: pciehp: Slot #263 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Aug 13 01:20:40.697817 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Aug 13 01:20:40.697824 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Aug 13 01:20:40.697830 kernel: 00:05: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Aug 13 01:20:40.697837 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBC,PNP0f13:MOUS] at 0x60,0x64 irq 1,12 Aug 13 01:20:40.697843 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Aug 13 01:20:40.697852 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Aug 13 01:20:40.697901 kernel: rtc_cmos 00:01: registered as rtc0 Aug 13 01:20:40.697945 kernel: rtc_cmos 00:01: setting system clock to 2025-08-13T01:20:40 UTC (1755048040) Aug 13 01:20:40.697987 kernel: rtc_cmos 00:01: alarms up to one month, y3k, 114 bytes nvram Aug 13 01:20:40.697995 kernel: intel_pstate: CPU model not supported Aug 13 01:20:40.698002 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Aug 13 01:20:40.698008 kernel: NET: Registered PF_INET6 protocol family Aug 13 01:20:40.698015 kernel: Segment Routing with IPv6 Aug 13 01:20:40.698022 kernel: In-situ OAM (IOAM) with IPv6 Aug 13 01:20:40.698028 kernel: NET: Registered PF_PACKET protocol family Aug 13 01:20:40.698035 kernel: Key type dns_resolver registered Aug 13 01:20:40.698041 kernel: IPI shorthand broadcast: enabled Aug 13 01:20:40.698047 kernel: sched_clock: Marking stable (834224804, 212726567)->(1106154872, -59203501) Aug 13 01:20:40.698054 kernel: registered taskstats version 1 Aug 13 01:20:40.698060 kernel: Loading compiled-in X.509 certificates Aug 13 01:20:40.698066 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key 
for 5.15.189-flatcar: 1d5a64b5798e654719a8bd91d683e7e9894bd433' Aug 13 01:20:40.698072 kernel: Key type .fscrypt registered Aug 13 01:20:40.698079 kernel: Key type fscrypt-provisioning registered Aug 13 01:20:40.698085 kernel: ima: No TPM chip found, activating TPM-bypass! Aug 13 01:20:40.698092 kernel: ima: Allocated hash algorithm: sha1 Aug 13 01:20:40.698098 kernel: ima: No architecture policies found Aug 13 01:20:40.698105 kernel: clk: Disabling unused clocks Aug 13 01:20:40.698111 kernel: Freeing unused kernel image (initmem) memory: 47488K Aug 13 01:20:40.698117 kernel: Write protecting the kernel read-only data: 28672k Aug 13 01:20:40.698123 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K Aug 13 01:20:40.698130 kernel: Freeing unused kernel image (rodata/data gap) memory: 604K Aug 13 01:20:40.698137 kernel: Run /init as init process Aug 13 01:20:40.698144 kernel: with arguments: Aug 13 01:20:40.698150 kernel: /init Aug 13 01:20:40.698156 kernel: with environment: Aug 13 01:20:40.698162 kernel: HOME=/ Aug 13 01:20:40.698168 kernel: TERM=linux Aug 13 01:20:40.698174 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Aug 13 01:20:40.698182 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Aug 13 01:20:40.698190 systemd[1]: Detected virtualization vmware. Aug 13 01:20:40.698197 systemd[1]: Detected architecture x86-64. Aug 13 01:20:40.698204 systemd[1]: Running in initrd. Aug 13 01:20:40.698211 systemd[1]: No hostname configured, using default hostname. Aug 13 01:20:40.698217 systemd[1]: Hostname set to . Aug 13 01:20:40.698224 systemd[1]: Initializing machine ID from random generator. Aug 13 01:20:40.698230 systemd[1]: Queued start job for default target initrd.target. Aug 13 01:20:40.698236 systemd[1]: Started systemd-ask-password-console.path. Aug 13 01:20:40.698243 systemd[1]: Reached target cryptsetup.target. Aug 13 01:20:40.698250 systemd[1]: Reached target paths.target. Aug 13 01:20:40.698256 systemd[1]: Reached target slices.target. Aug 13 01:20:40.698262 systemd[1]: Reached target swap.target. Aug 13 01:20:40.698268 systemd[1]: Reached target timers.target. Aug 13 01:20:40.698275 systemd[1]: Listening on iscsid.socket. Aug 13 01:20:40.698281 systemd[1]: Listening on iscsiuio.socket. Aug 13 01:20:40.698287 systemd[1]: Listening on systemd-journald-audit.socket. Aug 13 01:20:40.698295 systemd[1]: Listening on systemd-journald-dev-log.socket. Aug 13 01:20:40.698301 systemd[1]: Listening on systemd-journald.socket. Aug 13 01:20:40.698307 systemd[1]: Listening on systemd-networkd.socket. Aug 13 01:20:40.698314 systemd[1]: Listening on systemd-udevd-control.socket. Aug 13 01:20:40.698320 systemd[1]: Listening on systemd-udevd-kernel.socket. Aug 13 01:20:40.698326 systemd[1]: Reached target sockets.target. Aug 13 01:20:40.698333 systemd[1]: Starting kmod-static-nodes.service... Aug 13 01:20:40.698339 systemd[1]: Finished network-cleanup.service. Aug 13 01:20:40.698345 systemd[1]: Starting systemd-fsck-usr.service... Aug 13 01:20:40.698353 systemd[1]: Starting systemd-journald.service... Aug 13 01:20:40.698359 systemd[1]: Starting systemd-modules-load.service... Aug 13 01:20:40.698366 systemd[1]: Starting systemd-resolved.service... 
Aug 13 01:20:40.698372 systemd[1]: Starting systemd-vconsole-setup.service... Aug 13 01:20:40.698378 systemd[1]: Finished kmod-static-nodes.service. Aug 13 01:20:40.698385 systemd[1]: Finished systemd-fsck-usr.service. Aug 13 01:20:40.698391 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Aug 13 01:20:40.698397 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Aug 13 01:20:40.698404 kernel: audit: type=1130 audit(1755048040.644:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:20:40.698411 systemd[1]: Finished systemd-vconsole-setup.service. Aug 13 01:20:40.698425 kernel: audit: type=1130 audit(1755048040.650:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:20:40.698432 systemd[1]: Starting dracut-cmdline-ask.service... Aug 13 01:20:40.698439 systemd[1]: Started systemd-resolved.service. Aug 13 01:20:40.698445 systemd[1]: Reached target nss-lookup.target. Aug 13 01:20:40.698451 kernel: audit: type=1130 audit(1755048040.663:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:20:40.698458 systemd[1]: Finished dracut-cmdline-ask.service. Aug 13 01:20:40.698464 kernel: audit: type=1130 audit(1755048040.671:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:20:40.698472 systemd[1]: Starting dracut-cmdline.service... Aug 13 01:20:40.698479 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Aug 13 01:20:40.698485 kernel: Bridge firewalling registered Aug 13 01:20:40.698494 systemd-journald[216]: Journal started Aug 13 01:20:40.698525 systemd-journald[216]: Runtime Journal (/run/log/journal/ad440663e8ff4bcbb49c2aaaeb3b5d6a) is 4.8M, max 38.8M, 34.0M free. Aug 13 01:20:40.699577 systemd[1]: Started systemd-journald.service. Aug 13 01:20:40.699592 kernel: audit: type=1130 audit(1755048040.697:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:20:40.644000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:20:40.650000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:20:40.663000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:20:40.671000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Aug 13 01:20:40.697000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:20:40.645574 systemd-modules-load[217]: Inserted module 'overlay' Aug 13 01:20:40.660469 systemd-resolved[218]: Positive Trust Anchors: Aug 13 01:20:40.660474 systemd-resolved[218]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Aug 13 01:20:40.660493 systemd-resolved[218]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Aug 13 01:20:40.662073 systemd-resolved[218]: Defaulting to hostname 'linux'. Aug 13 01:20:40.695759 systemd-modules-load[217]: Inserted module 'br_netfilter' Aug 13 01:20:40.704399 dracut-cmdline[232]: dracut-dracut-053 Aug 13 01:20:40.704399 dracut-cmdline[232]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LA Aug 13 01:20:40.704399 dracut-cmdline[232]: BEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=vmware flatcar.autologin verity.usrhash=8f8aacd9fbcdd713563d390e899e90bedf5577e4b1b261b4e57687d87edd6b57 Aug 13 01:20:40.716433 kernel: SCSI subsystem initialized Aug 13 01:20:40.722112 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Aug 13 01:20:40.722128 kernel: device-mapper: uevent: version 1.0.3 Aug 13 01:20:40.722136 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com Aug 13 01:20:40.724099 systemd-modules-load[217]: Inserted module 'dm_multipath' Aug 13 01:20:40.723000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:20:40.724539 systemd[1]: Finished systemd-modules-load.service. Aug 13 01:20:40.727214 systemd[1]: Starting systemd-sysctl.service... Aug 13 01:20:40.727426 kernel: audit: type=1130 audit(1755048040.723:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:20:40.730501 systemd[1]: Finished systemd-sysctl.service. Aug 13 01:20:40.729000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:20:40.733427 kernel: audit: type=1130 audit(1755048040.729:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:20:40.740435 kernel: Loading iSCSI transport class v2.0-870. 
Aug 13 01:20:40.750427 kernel: iscsi: registered transport (tcp) Aug 13 01:20:40.766430 kernel: iscsi: registered transport (qla4xxx) Aug 13 01:20:40.766445 kernel: QLogic iSCSI HBA Driver Aug 13 01:20:40.781783 systemd[1]: Finished dracut-cmdline.service. Aug 13 01:20:40.780000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:20:40.782330 systemd[1]: Starting dracut-pre-udev.service... Aug 13 01:20:40.785480 kernel: audit: type=1130 audit(1755048040.780:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:20:40.818462 kernel: raid6: avx2x4 gen() 51216 MB/s Aug 13 01:20:40.835457 kernel: raid6: avx2x4 xor() 23091 MB/s Aug 13 01:20:40.852462 kernel: raid6: avx2x2 gen() 56405 MB/s Aug 13 01:20:40.869452 kernel: raid6: avx2x2 xor() 32874 MB/s Aug 13 01:20:40.886452 kernel: raid6: avx2x1 gen() 45881 MB/s Aug 13 01:20:40.903425 kernel: raid6: avx2x1 xor() 28667 MB/s Aug 13 01:20:40.920455 kernel: raid6: sse2x4 gen() 21338 MB/s Aug 13 01:20:40.937456 kernel: raid6: sse2x4 xor() 12480 MB/s Aug 13 01:20:40.954459 kernel: raid6: sse2x2 gen() 22788 MB/s Aug 13 01:20:40.971459 kernel: raid6: sse2x2 xor() 14018 MB/s Aug 13 01:20:40.988456 kernel: raid6: sse2x1 gen() 19023 MB/s Aug 13 01:20:41.005623 kernel: raid6: sse2x1 xor() 9270 MB/s Aug 13 01:20:41.005640 kernel: raid6: using algorithm avx2x2 gen() 56405 MB/s Aug 13 01:20:41.005650 kernel: raid6: .... xor() 32874 MB/s, rmw enabled Aug 13 01:20:41.006771 kernel: raid6: using avx2x2 recovery algorithm Aug 13 01:20:41.015426 kernel: xor: automatically using best checksumming function avx Aug 13 01:20:41.073431 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no Aug 13 01:20:41.077327 systemd[1]: Finished dracut-pre-udev.service. Aug 13 01:20:41.076000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:20:41.078000 audit: BPF prog-id=7 op=LOAD Aug 13 01:20:41.078000 audit: BPF prog-id=8 op=LOAD Aug 13 01:20:41.080429 kernel: audit: type=1130 audit(1755048041.076:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:20:41.080148 systemd[1]: Starting systemd-udevd.service... Aug 13 01:20:41.087681 systemd-udevd[415]: Using default interface naming scheme 'v252'. Aug 13 01:20:41.090226 systemd[1]: Started systemd-udevd.service. Aug 13 01:20:41.089000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:20:41.092353 systemd[1]: Starting dracut-pre-trigger.service... Aug 13 01:20:41.098572 dracut-pre-trigger[421]: rd.md=0: removing MD RAID activation Aug 13 01:20:41.111000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:20:41.113277 systemd[1]: Finished dracut-pre-trigger.service. Aug 13 01:20:41.113752 systemd[1]: Starting systemd-udev-trigger.service... 
Aug 13 01:20:41.172419 systemd[1]: Finished systemd-udev-trigger.service. Aug 13 01:20:41.171000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:20:41.228427 kernel: VMware PVSCSI driver - version 1.0.7.0-k Aug 13 01:20:41.245424 kernel: VMware vmxnet3 virtual NIC driver - version 1.6.0.0-k-NAPI Aug 13 01:20:41.252379 kernel: vmw_pvscsi: using 64bit dma Aug 13 01:20:41.252397 kernel: vmw_pvscsi: max_id: 16 Aug 13 01:20:41.252405 kernel: vmxnet3 0000:0b:00.0: # of Tx queues : 2, # of Rx queues : 2 Aug 13 01:20:41.272430 kernel: vmw_pvscsi: setting ring_pages to 8 Aug 13 01:20:41.272446 kernel: cryptd: max_cpu_qlen set to 1000 Aug 13 01:20:41.272457 kernel: AVX2 version of gcm_enc/dec engaged. Aug 13 01:20:41.272464 kernel: AES CTR mode by8 optimization enabled Aug 13 01:20:41.272471 kernel: vmw_pvscsi: enabling reqCallThreshold Aug 13 01:20:41.272478 kernel: vmw_pvscsi: driver-based request coalescing enabled Aug 13 01:20:41.272486 kernel: vmw_pvscsi: using MSI-X Aug 13 01:20:41.272493 kernel: scsi host0: VMware PVSCSI storage adapter rev 2, req/cmp/msg rings: 8/8/1 pages, cmd_per_lun=254 Aug 13 01:20:41.272563 kernel: vmxnet3 0000:0b:00.0 eth0: NIC Link is Up 10000 Mbps Aug 13 01:20:41.272633 kernel: vmw_pvscsi 0000:03:00.0: VMware PVSCSI rev 2 host #0 Aug 13 01:20:41.272696 kernel: scsi 0:0:0:0: Direct-Access VMware Virtual disk 2.0 PQ: 0 ANSI: 6 Aug 13 01:20:41.274462 kernel: vmxnet3 0000:0b:00.0 ens192: renamed from eth0 Aug 13 01:20:41.281425 kernel: libata version 3.00 loaded. Aug 13 01:20:41.283425 kernel: ata_piix 0000:00:07.1: version 2.13 Aug 13 01:20:41.296448 kernel: scsi host1: ata_piix Aug 13 01:20:41.296515 kernel: sd 0:0:0:0: [sda] 17805312 512-byte logical blocks: (9.12 GB/8.49 GiB) Aug 13 01:20:41.298575 kernel: sd 0:0:0:0: [sda] Write Protect is off Aug 13 01:20:41.298641 kernel: sd 0:0:0:0: [sda] Mode Sense: 31 00 00 00 Aug 13 01:20:41.298704 kernel: sd 0:0:0:0: [sda] Cache data unavailable Aug 13 01:20:41.298763 kernel: sd 0:0:0:0: [sda] Assuming drive cache: write through Aug 13 01:20:41.298822 kernel: scsi host2: ata_piix Aug 13 01:20:41.298880 kernel: ata1: PATA max UDMA/33 cmd 0x1f0 ctl 0x3f6 bmdma 0x1060 irq 14 Aug 13 01:20:41.298888 kernel: ata2: PATA max UDMA/33 cmd 0x170 ctl 0x376 bmdma 0x1068 irq 15 Aug 13 01:20:41.298897 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Aug 13 01:20:41.298904 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Aug 13 01:20:41.470442 kernel: ata2.00: ATAPI: VMware Virtual IDE CDROM Drive, 00000001, max UDMA/33 Aug 13 01:20:41.474471 kernel: scsi 2:0:0:0: CD-ROM NECVMWar VMware IDE CDR10 1.00 PQ: 0 ANSI: 5 Aug 13 01:20:41.500431 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/sda6 scanned by (udev-worker) (474) Aug 13 01:20:41.504198 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. Aug 13 01:20:41.506411 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. Aug 13 01:20:41.507662 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 1x/1x writer dvd-ram cd/rw xa/form2 cdda tray Aug 13 01:20:41.527401 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Aug 13 01:20:41.527412 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Aug 13 01:20:41.510237 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Aug 13 01:20:41.514228 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. 
Aug 13 01:20:41.514331 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. Aug 13 01:20:41.514850 systemd[1]: Starting disk-uuid.service... Aug 13 01:20:41.540434 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Aug 13 01:20:41.544426 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Aug 13 01:20:42.546442 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Aug 13 01:20:42.546484 disk-uuid[548]: The operation has completed successfully. Aug 13 01:20:42.581043 systemd[1]: disk-uuid.service: Deactivated successfully. Aug 13 01:20:42.581323 systemd[1]: Finished disk-uuid.service. Aug 13 01:20:42.580000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:20:42.580000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:20:42.582072 systemd[1]: Starting verity-setup.service... Aug 13 01:20:42.591432 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Aug 13 01:20:42.630772 systemd[1]: Found device dev-mapper-usr.device. Aug 13 01:20:42.631335 systemd[1]: Mounting sysusr-usr.mount... Aug 13 01:20:42.632009 systemd[1]: Finished verity-setup.service. Aug 13 01:20:42.630000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:20:42.688133 systemd[1]: Mounted sysusr-usr.mount. Aug 13 01:20:42.688426 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Aug 13 01:20:42.688663 systemd[1]: Starting afterburn-network-kargs.service... Aug 13 01:20:42.689068 systemd[1]: Starting ignition-setup.service... Aug 13 01:20:42.704779 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Aug 13 01:20:42.704800 kernel: BTRFS info (device sda6): using free space tree Aug 13 01:20:42.704809 kernel: BTRFS info (device sda6): has skinny extents Aug 13 01:20:42.709423 kernel: BTRFS info (device sda6): enabling ssd optimizations Aug 13 01:20:42.715031 systemd[1]: mnt-oem.mount: Deactivated successfully. Aug 13 01:20:42.720124 systemd[1]: Finished ignition-setup.service. Aug 13 01:20:42.718000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:20:42.720639 systemd[1]: Starting ignition-fetch-offline.service... Aug 13 01:20:42.767428 systemd[1]: Finished afterburn-network-kargs.service. Aug 13 01:20:42.766000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=afterburn-network-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:20:42.767934 systemd[1]: Starting parse-ip-for-networkd.service... Aug 13 01:20:42.813271 systemd[1]: Finished parse-ip-for-networkd.service. Aug 13 01:20:42.812000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:20:42.812000 audit: BPF prog-id=9 op=LOAD Aug 13 01:20:42.814180 systemd[1]: Starting systemd-networkd.service... 
Aug 13 01:20:42.828000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:20:42.829046 systemd-networkd[732]: lo: Link UP Aug 13 01:20:42.829049 systemd-networkd[732]: lo: Gained carrier Aug 13 01:20:42.829331 systemd-networkd[732]: Enumeration completed Aug 13 01:20:42.829379 systemd[1]: Started systemd-networkd.service. Aug 13 01:20:42.829518 systemd[1]: Reached target network.target. Aug 13 01:20:42.829961 systemd[1]: Starting iscsiuio.service... Aug 13 01:20:42.830124 systemd-networkd[732]: ens192: Configuring with /etc/systemd/network/10-dracut-cmdline-99.network. Aug 13 01:20:42.834574 kernel: vmxnet3 0000:0b:00.0 ens192: intr type 3, mode 0, 3 vectors allocated Aug 13 01:20:42.834686 kernel: vmxnet3 0000:0b:00.0 ens192: NIC Link is Up 10000 Mbps Aug 13 01:20:42.832000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:20:42.833617 systemd[1]: Started iscsiuio.service. Aug 13 01:20:42.837479 iscsid[738]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Aug 13 01:20:42.837479 iscsid[738]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log Aug 13 01:20:42.837479 iscsid[738]: into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a string with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier]. Aug 13 01:20:42.837479 iscsid[738]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Aug 13 01:20:42.837479 iscsid[738]: If using hardware iscsi like qla4xxx this message can be ignored. Aug 13 01:20:42.837479 iscsid[738]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Aug 13 01:20:42.837479 iscsid[738]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Aug 13 01:20:42.836000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:20:42.834213 systemd[1]: Starting iscsid.service... Aug 13 01:20:42.835326 systemd-networkd[732]: ens192: Link UP Aug 13 01:20:42.835328 systemd-networkd[732]: ens192: Gained carrier Aug 13 01:20:42.837382 systemd[1]: Started iscsid.service. Aug 13 01:20:42.838590 systemd[1]: Starting dracut-initqueue.service... Aug 13 01:20:42.845295 systemd[1]: Finished dracut-initqueue.service. Aug 13 01:20:42.843000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:20:42.845435 systemd[1]: Reached target remote-fs-pre.target. Aug 13 01:20:42.845540 systemd[1]: Reached target remote-cryptsetup.target. Aug 13 01:20:42.845693 systemd[1]: Reached target remote-fs.target. Aug 13 01:20:42.846218 systemd[1]: Starting dracut-pre-mount.service... Aug 13 01:20:42.851314 systemd[1]: Finished dracut-pre-mount.service.
Aug 13 01:20:42.850000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:20:42.853982 ignition[604]: Ignition 2.14.0 Aug 13 01:20:42.853988 ignition[604]: Stage: fetch-offline Aug 13 01:20:42.854015 ignition[604]: reading system config file "/usr/lib/ignition/base.d/base.ign" Aug 13 01:20:42.854030 ignition[604]: parsing config with SHA512: bd85a898f7da4744ff98e02742aa4854e1ceea8026a4e95cb6fb599b39b54cff0db353847df13d3c55ae196a9dc5d648977228d55e5da3ea20cd600fa7cec8ed Aug 13 01:20:42.862135 ignition[604]: no config dir at "/usr/lib/ignition/base.platform.d/vmware" Aug 13 01:20:42.862217 ignition[604]: parsed url from cmdline: "" Aug 13 01:20:42.862219 ignition[604]: no config URL provided Aug 13 01:20:42.862224 ignition[604]: reading system config file "/usr/lib/ignition/user.ign" Aug 13 01:20:42.862228 ignition[604]: no config at "/usr/lib/ignition/user.ign" Aug 13 01:20:42.862656 ignition[604]: config successfully fetched Aug 13 01:20:42.862675 ignition[604]: parsing config with SHA512: 16f48d05c7caa1510b56a8b5c0bc599873d6f2503a2135c389088a97957f95fa885e76e8a30854afba85c3140a4db4c90417dcffce8daf2e3da63613a64d346f Aug 13 01:20:42.867691 unknown[604]: fetched base config from "system" Aug 13 01:20:42.867859 unknown[604]: fetched user config from "vmware" Aug 13 01:20:42.868280 ignition[604]: fetch-offline: fetch-offline passed Aug 13 01:20:42.868442 ignition[604]: Ignition finished successfully Aug 13 01:20:42.869007 systemd[1]: Finished ignition-fetch-offline.service. Aug 13 01:20:42.869156 systemd[1]: ignition-fetch.service was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Aug 13 01:20:42.867000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:20:42.869559 systemd[1]: Starting ignition-kargs.service... Aug 13 01:20:42.874522 ignition[752]: Ignition 2.14.0 Aug 13 01:20:42.874530 ignition[752]: Stage: kargs Aug 13 01:20:42.874584 ignition[752]: reading system config file "/usr/lib/ignition/base.d/base.ign" Aug 13 01:20:42.874594 ignition[752]: parsing config with SHA512: bd85a898f7da4744ff98e02742aa4854e1ceea8026a4e95cb6fb599b39b54cff0db353847df13d3c55ae196a9dc5d648977228d55e5da3ea20cd600fa7cec8ed Aug 13 01:20:42.875793 ignition[752]: no config dir at "/usr/lib/ignition/base.platform.d/vmware" Aug 13 01:20:42.877227 ignition[752]: kargs: kargs passed Aug 13 01:20:42.877251 ignition[752]: Ignition finished successfully Aug 13 01:20:42.878097 systemd[1]: Finished ignition-kargs.service. Aug 13 01:20:42.876000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:20:42.878644 systemd[1]: Starting ignition-disks.service... 
Aug 13 01:20:42.882574 ignition[758]: Ignition 2.14.0 Aug 13 01:20:42.882762 ignition[758]: Stage: disks Aug 13 01:20:42.882962 ignition[758]: reading system config file "/usr/lib/ignition/base.d/base.ign" Aug 13 01:20:42.883109 ignition[758]: parsing config with SHA512: bd85a898f7da4744ff98e02742aa4854e1ceea8026a4e95cb6fb599b39b54cff0db353847df13d3c55ae196a9dc5d648977228d55e5da3ea20cd600fa7cec8ed Aug 13 01:20:42.884323 ignition[758]: no config dir at "/usr/lib/ignition/base.platform.d/vmware" Aug 13 01:20:42.885961 ignition[758]: disks: disks passed Aug 13 01:20:42.886100 ignition[758]: Ignition finished successfully Aug 13 01:20:42.886609 systemd[1]: Finished ignition-disks.service. Aug 13 01:20:42.886768 systemd[1]: Reached target initrd-root-device.target. Aug 13 01:20:42.885000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:20:42.886910 systemd[1]: Reached target local-fs-pre.target. Aug 13 01:20:42.887066 systemd[1]: Reached target local-fs.target. Aug 13 01:20:42.887221 systemd[1]: Reached target sysinit.target. Aug 13 01:20:42.887372 systemd[1]: Reached target basic.target. Aug 13 01:20:42.887959 systemd[1]: Starting systemd-fsck-root.service... Aug 13 01:20:42.898998 systemd-fsck[766]: ROOT: clean, 629/1628000 files, 124064/1617920 blocks Aug 13 01:20:42.898000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:20:42.900197 systemd[1]: Finished systemd-fsck-root.service. Aug 13 01:20:42.900683 systemd[1]: Mounting sysroot.mount... Aug 13 01:20:42.910019 systemd[1]: Mounted sysroot.mount. Aug 13 01:20:42.910296 systemd[1]: Reached target initrd-root-fs.target. Aug 13 01:20:42.910447 kernel: EXT4-fs (sda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Aug 13 01:20:42.911330 systemd[1]: Mounting sysroot-usr.mount... Aug 13 01:20:42.911964 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met. Aug 13 01:20:42.912186 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Aug 13 01:20:42.912460 systemd[1]: Reached target ignition-diskful.target. Aug 13 01:20:42.913306 systemd[1]: Mounted sysroot-usr.mount. Aug 13 01:20:42.913982 systemd[1]: Starting initrd-setup-root.service... Aug 13 01:20:42.916914 initrd-setup-root[776]: cut: /sysroot/etc/passwd: No such file or directory Aug 13 01:20:42.920923 initrd-setup-root[784]: cut: /sysroot/etc/group: No such file or directory Aug 13 01:20:42.923073 initrd-setup-root[792]: cut: /sysroot/etc/shadow: No such file or directory Aug 13 01:20:42.925217 initrd-setup-root[800]: cut: /sysroot/etc/gshadow: No such file or directory Aug 13 01:20:42.949477 systemd[1]: Finished initrd-setup-root.service. Aug 13 01:20:42.948000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:20:42.949972 systemd[1]: Starting ignition-mount.service... Aug 13 01:20:42.950380 systemd[1]: Starting sysroot-boot.service... Aug 13 01:20:42.953852 bash[817]: umount: /sysroot/usr/share/oem: not mounted. 
Aug 13 01:20:42.958662 ignition[818]: INFO : Ignition 2.14.0 Aug 13 01:20:42.958851 ignition[818]: INFO : Stage: mount Aug 13 01:20:42.959017 ignition[818]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Aug 13 01:20:42.959163 ignition[818]: DEBUG : parsing config with SHA512: bd85a898f7da4744ff98e02742aa4854e1ceea8026a4e95cb6fb599b39b54cff0db353847df13d3c55ae196a9dc5d648977228d55e5da3ea20cd600fa7cec8ed Aug 13 01:20:42.960570 ignition[818]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/vmware" Aug 13 01:20:42.962108 ignition[818]: INFO : mount: mount passed Aug 13 01:20:42.962216 ignition[818]: INFO : Ignition finished successfully Aug 13 01:20:42.962971 systemd[1]: Finished ignition-mount.service. Aug 13 01:20:42.961000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:20:42.968784 systemd[1]: Finished sysroot-boot.service. Aug 13 01:20:42.967000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:20:43.646913 systemd[1]: Mounting sysroot-usr-share-oem.mount... Aug 13 01:20:43.656548 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by mount (827) Aug 13 01:20:43.656573 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Aug 13 01:20:43.656585 kernel: BTRFS info (device sda6): using free space tree Aug 13 01:20:43.658437 kernel: BTRFS info (device sda6): has skinny extents Aug 13 01:20:43.661428 kernel: BTRFS info (device sda6): enabling ssd optimizations Aug 13 01:20:43.662279 systemd[1]: Mounted sysroot-usr-share-oem.mount. Aug 13 01:20:43.662899 systemd[1]: Starting ignition-files.service... 
Aug 13 01:20:43.672760 ignition[847]: INFO : Ignition 2.14.0 Aug 13 01:20:43.673009 ignition[847]: INFO : Stage: files Aug 13 01:20:43.673176 ignition[847]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Aug 13 01:20:43.673322 ignition[847]: DEBUG : parsing config with SHA512: bd85a898f7da4744ff98e02742aa4854e1ceea8026a4e95cb6fb599b39b54cff0db353847df13d3c55ae196a9dc5d648977228d55e5da3ea20cd600fa7cec8ed Aug 13 01:20:43.674904 ignition[847]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/vmware" Aug 13 01:20:43.677061 ignition[847]: DEBUG : files: compiled without relabeling support, skipping Aug 13 01:20:43.677579 ignition[847]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Aug 13 01:20:43.677724 ignition[847]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Aug 13 01:20:43.679776 ignition[847]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Aug 13 01:20:43.680001 ignition[847]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Aug 13 01:20:43.680728 unknown[847]: wrote ssh authorized keys file for user: core Aug 13 01:20:43.680909 ignition[847]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Aug 13 01:20:43.681310 ignition[847]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1" Aug 13 01:20:43.681472 ignition[847]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" Aug 13 01:20:43.681472 ignition[847]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Aug 13 01:20:43.681472 ignition[847]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Aug 13 01:20:43.723271 ignition[847]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Aug 13 01:20:43.895872 ignition[847]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Aug 13 01:20:43.896215 ignition[847]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Aug 13 01:20:43.896660 ignition[847]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Aug 13 01:20:43.896907 ignition[847]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Aug 13 01:20:43.897245 ignition[847]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Aug 13 01:20:43.897502 ignition[847]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Aug 13 01:20:43.897823 ignition[847]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Aug 13 01:20:43.898071 ignition[847]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Aug 13 01:20:43.898384 ignition[847]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Aug 13 01:20:43.898867 ignition[847]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file 
"/sysroot/etc/flatcar/update.conf" Aug 13 01:20:43.899180 ignition[847]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Aug 13 01:20:43.899464 ignition[847]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Aug 13 01:20:43.899807 ignition[847]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Aug 13 01:20:43.900288 ignition[847]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/etc/systemd/system/vmtoolsd.service" Aug 13 01:20:43.900590 ignition[847]: INFO : files: createFilesystemsFiles: createFiles: op(b): oem config not found in "/usr/share/oem", looking on oem partition Aug 13 01:20:43.906817 ignition[847]: INFO : files: createFilesystemsFiles: createFiles: op(b): op(c): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3971723701" Aug 13 01:20:43.907098 ignition[847]: CRITICAL : files: createFilesystemsFiles: createFiles: op(b): op(c): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3971723701": device or resource busy Aug 13 01:20:43.907376 ignition[847]: ERROR : files: createFilesystemsFiles: createFiles: op(b): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem3971723701", trying btrfs: device or resource busy Aug 13 01:20:43.907678 ignition[847]: INFO : files: createFilesystemsFiles: createFiles: op(b): op(d): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3971723701" Aug 13 01:20:43.910834 ignition[847]: INFO : files: createFilesystemsFiles: createFiles: op(b): op(d): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3971723701" Aug 13 01:20:43.911530 ignition[847]: INFO : files: createFilesystemsFiles: createFiles: op(b): op(e): [started] unmounting "/mnt/oem3971723701" Aug 13 01:20:43.912492 systemd[1]: mnt-oem3971723701.mount: Deactivated successfully. 
Aug 13 01:20:43.913184 ignition[847]: INFO : files: createFilesystemsFiles: createFiles: op(b): op(e): [finished] unmounting "/mnt/oem3971723701" Aug 13 01:20:43.913413 ignition[847]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/etc/systemd/system/vmtoolsd.service" Aug 13 01:20:43.913658 ignition[847]: INFO : files: createFilesystemsFiles: createFiles: op(f): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Aug 13 01:20:43.913949 ignition[847]: INFO : files: createFilesystemsFiles: createFiles: op(f): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-x86-64.raw: attempt #1 Aug 13 01:20:44.387658 ignition[847]: INFO : files: createFilesystemsFiles: createFiles: op(f): GET result: OK Aug 13 01:20:44.471782 systemd-networkd[732]: ens192: Gained IPv6LL Aug 13 01:20:44.648290 ignition[847]: INFO : files: createFilesystemsFiles: createFiles: op(f): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Aug 13 01:20:44.648626 ignition[847]: INFO : files: createFilesystemsFiles: createFiles: op(10): [started] writing file "/sysroot/etc/systemd/network/00-vmware.network" Aug 13 01:20:44.648626 ignition[847]: INFO : files: createFilesystemsFiles: createFiles: op(10): [finished] writing file "/sysroot/etc/systemd/network/00-vmware.network" Aug 13 01:20:44.648626 ignition[847]: INFO : files: op(11): [started] processing unit "vmtoolsd.service" Aug 13 01:20:44.648626 ignition[847]: INFO : files: op(11): [finished] processing unit "vmtoolsd.service" Aug 13 01:20:44.648626 ignition[847]: INFO : files: op(12): [started] processing unit "containerd.service" Aug 13 01:20:44.649791 ignition[847]: INFO : files: op(12): op(13): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Aug 13 01:20:44.649791 ignition[847]: INFO : files: op(12): op(13): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Aug 13 01:20:44.649791 ignition[847]: INFO : files: op(12): [finished] processing unit "containerd.service" Aug 13 01:20:44.649791 ignition[847]: INFO : files: op(14): [started] processing unit "prepare-helm.service" Aug 13 01:20:44.649791 ignition[847]: INFO : files: op(14): op(15): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Aug 13 01:20:44.649791 ignition[847]: INFO : files: op(14): op(15): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Aug 13 01:20:44.649791 ignition[847]: INFO : files: op(14): [finished] processing unit "prepare-helm.service" Aug 13 01:20:44.649791 ignition[847]: INFO : files: op(16): [started] processing unit "coreos-metadata.service" Aug 13 01:20:44.649791 ignition[847]: INFO : files: op(16): op(17): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Aug 13 01:20:44.649791 ignition[847]: INFO : files: op(16): op(17): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Aug 13 01:20:44.649791 ignition[847]: INFO : files: op(16): [finished] processing unit "coreos-metadata.service" Aug 13 01:20:44.649791 ignition[847]: INFO : files: op(18): [started] setting preset to enabled for "vmtoolsd.service" Aug 13 01:20:44.649791 ignition[847]: INFO : files: op(18): [finished] setting preset to 
enabled for "vmtoolsd.service" Aug 13 01:20:44.649791 ignition[847]: INFO : files: op(19): [started] setting preset to enabled for "prepare-helm.service" Aug 13 01:20:44.649791 ignition[847]: INFO : files: op(19): [finished] setting preset to enabled for "prepare-helm.service" Aug 13 01:20:44.649791 ignition[847]: INFO : files: op(1a): [started] setting preset to disabled for "coreos-metadata.service" Aug 13 01:20:44.649791 ignition[847]: INFO : files: op(1a): op(1b): [started] removing enablement symlink(s) for "coreos-metadata.service" Aug 13 01:20:44.701451 ignition[847]: INFO : files: op(1a): op(1b): [finished] removing enablement symlink(s) for "coreos-metadata.service" Aug 13 01:20:44.701737 ignition[847]: INFO : files: op(1a): [finished] setting preset to disabled for "coreos-metadata.service" Aug 13 01:20:44.701737 ignition[847]: INFO : files: createResultFile: createFiles: op(1c): [started] writing file "/sysroot/etc/.ignition-result.json" Aug 13 01:20:44.701737 ignition[847]: INFO : files: createResultFile: createFiles: op(1c): [finished] writing file "/sysroot/etc/.ignition-result.json" Aug 13 01:20:44.701737 ignition[847]: INFO : files: files passed Aug 13 01:20:44.701737 ignition[847]: INFO : Ignition finished successfully Aug 13 01:20:44.700000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:20:44.704000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:20:44.704000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:20:44.702237 systemd[1]: Finished ignition-files.service. Aug 13 01:20:44.703579 systemd[1]: Starting initrd-setup-root-after-ignition.service... Aug 13 01:20:44.703702 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). Aug 13 01:20:44.708025 initrd-setup-root-after-ignition[873]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Aug 13 01:20:44.706000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:20:44.704204 systemd[1]: Starting ignition-quench.service... Aug 13 01:20:44.706160 systemd[1]: ignition-quench.service: Deactivated successfully. Aug 13 01:20:44.706204 systemd[1]: Finished ignition-quench.service. Aug 13 01:20:44.708112 systemd[1]: Finished initrd-setup-root-after-ignition.service. Aug 13 01:20:44.708248 systemd[1]: Reached target ignition-complete.target. Aug 13 01:20:44.708673 systemd[1]: Starting initrd-parse-etc.service... Aug 13 01:20:44.714000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:20:44.714000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Aug 13 01:20:44.716063 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Aug 13 01:20:44.716108 systemd[1]: Finished initrd-parse-etc.service. Aug 13 01:20:44.716252 systemd[1]: Reached target initrd-fs.target. Aug 13 01:20:44.716338 systemd[1]: Reached target initrd.target. Aug 13 01:20:44.716448 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. Aug 13 01:20:44.716842 systemd[1]: Starting dracut-pre-pivot.service... Aug 13 01:20:44.721000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:20:44.722968 systemd[1]: Finished dracut-pre-pivot.service. Aug 13 01:20:44.723394 systemd[1]: Starting initrd-cleanup.service... Aug 13 01:20:44.728522 systemd[1]: Stopped target nss-lookup.target. Aug 13 01:20:44.728781 systemd[1]: Stopped target remote-cryptsetup.target. Aug 13 01:20:44.729048 systemd[1]: Stopped target timers.target. Aug 13 01:20:44.729289 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Aug 13 01:20:44.729514 systemd[1]: Stopped dracut-pre-pivot.service. Aug 13 01:20:44.728000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:20:44.729855 systemd[1]: Stopped target initrd.target. Aug 13 01:20:44.730100 systemd[1]: Stopped target basic.target. Aug 13 01:20:44.730339 systemd[1]: Stopped target ignition-complete.target. Aug 13 01:20:44.730617 systemd[1]: Stopped target ignition-diskful.target. Aug 13 01:20:44.730868 systemd[1]: Stopped target initrd-root-device.target. Aug 13 01:20:44.731125 systemd[1]: Stopped target remote-fs.target. Aug 13 01:20:44.731365 systemd[1]: Stopped target remote-fs-pre.target. Aug 13 01:20:44.731627 systemd[1]: Stopped target sysinit.target. Aug 13 01:20:44.731874 systemd[1]: Stopped target local-fs.target. Aug 13 01:20:44.732114 systemd[1]: Stopped target local-fs-pre.target. Aug 13 01:20:44.732361 systemd[1]: Stopped target swap.target. Aug 13 01:20:44.732585 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Aug 13 01:20:44.732774 systemd[1]: Stopped dracut-pre-mount.service. Aug 13 01:20:44.731000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:20:44.733109 systemd[1]: Stopped target cryptsetup.target. Aug 13 01:20:44.733332 systemd[1]: dracut-initqueue.service: Deactivated successfully. Aug 13 01:20:44.733556 systemd[1]: Stopped dracut-initqueue.service. Aug 13 01:20:44.732000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:20:44.733855 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Aug 13 01:20:44.734048 systemd[1]: Stopped ignition-fetch-offline.service. Aug 13 01:20:44.732000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:20:44.734372 systemd[1]: Stopped target paths.target. Aug 13 01:20:44.734606 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. 
Aug 13 01:20:44.736584 systemd[1]: Stopped systemd-ask-password-console.path. Aug 13 01:20:44.736762 systemd[1]: Stopped target slices.target. Aug 13 01:20:44.736951 systemd[1]: Stopped target sockets.target. Aug 13 01:20:44.737102 systemd[1]: iscsid.socket: Deactivated successfully. Aug 13 01:20:44.737142 systemd[1]: Closed iscsid.socket. Aug 13 01:20:44.737343 systemd[1]: iscsiuio.socket: Deactivated successfully. Aug 13 01:20:44.737381 systemd[1]: Closed iscsiuio.socket. Aug 13 01:20:44.737599 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Aug 13 01:20:44.737654 systemd[1]: Stopped initrd-setup-root-after-ignition.service. Aug 13 01:20:44.736000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:20:44.737887 systemd[1]: ignition-files.service: Deactivated successfully. Aug 13 01:20:44.737939 systemd[1]: Stopped ignition-files.service. Aug 13 01:20:44.736000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:20:44.737000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:20:44.738000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:20:44.742000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:20:44.742000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:20:44.738521 systemd[1]: Stopping ignition-mount.service... Aug 13 01:20:44.744600 ignition[886]: INFO : Ignition 2.14.0 Aug 13 01:20:44.744600 ignition[886]: INFO : Stage: umount Aug 13 01:20:44.744600 ignition[886]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Aug 13 01:20:44.744600 ignition[886]: DEBUG : parsing config with SHA512: bd85a898f7da4744ff98e02742aa4854e1ceea8026a4e95cb6fb599b39b54cff0db353847df13d3c55ae196a9dc5d648977228d55e5da3ea20cd600fa7cec8ed Aug 13 01:20:44.744600 ignition[886]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/vmware" Aug 13 01:20:44.738988 systemd[1]: Stopping sysroot-boot.service... Aug 13 01:20:44.739095 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Aug 13 01:20:44.739157 systemd[1]: Stopped systemd-udev-trigger.service. Aug 13 01:20:44.739337 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Aug 13 01:20:44.739389 systemd[1]: Stopped dracut-pre-trigger.service. Aug 13 01:20:44.741685 systemd[1]: initrd-cleanup.service: Deactivated successfully. Aug 13 01:20:44.741730 systemd[1]: Finished initrd-cleanup.service. Aug 13 01:20:44.747457 ignition[886]: INFO : umount: umount passed Aug 13 01:20:44.747457 ignition[886]: INFO : Ignition finished successfully Aug 13 01:20:44.748078 systemd[1]: ignition-mount.service: Deactivated successfully. 
Aug 13 01:20:44.748127 systemd[1]: Stopped ignition-mount.service. Aug 13 01:20:44.749000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:20:44.751354 systemd[1]: sysroot-boot.mount: Deactivated successfully. Aug 13 01:20:44.751796 systemd[1]: Stopped target network.target. Aug 13 01:20:44.751994 systemd[1]: ignition-disks.service: Deactivated successfully. Aug 13 01:20:44.752138 systemd[1]: Stopped ignition-disks.service. Aug 13 01:20:44.750000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:20:44.752381 systemd[1]: ignition-kargs.service: Deactivated successfully. Aug 13 01:20:44.752531 systemd[1]: Stopped ignition-kargs.service. Aug 13 01:20:44.751000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:20:44.752770 systemd[1]: ignition-setup.service: Deactivated successfully. Aug 13 01:20:44.752948 systemd[1]: Stopped ignition-setup.service. Aug 13 01:20:44.751000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:20:44.753490 systemd[1]: Stopping systemd-networkd.service... Aug 13 01:20:44.753769 systemd[1]: Stopping systemd-resolved.service... Aug 13 01:20:44.756238 systemd[1]: systemd-networkd.service: Deactivated successfully. Aug 13 01:20:44.756425 systemd[1]: Stopped systemd-networkd.service. Aug 13 01:20:44.755000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:20:44.756733 systemd[1]: systemd-networkd.socket: Deactivated successfully. Aug 13 01:20:44.757070 systemd[1]: Closed systemd-networkd.socket. Aug 13 01:20:44.757632 systemd[1]: Stopping network-cleanup.service... Aug 13 01:20:44.758619 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Aug 13 01:20:44.758777 systemd[1]: Stopped parse-ip-for-networkd.service. Aug 13 01:20:44.757000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:20:44.759043 systemd[1]: afterburn-network-kargs.service: Deactivated successfully. Aug 13 01:20:44.759195 systemd[1]: Stopped afterburn-network-kargs.service. Aug 13 01:20:44.757000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=afterburn-network-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:20:44.759463 systemd[1]: systemd-sysctl.service: Deactivated successfully. Aug 13 01:20:44.759607 systemd[1]: Stopped systemd-sysctl.service. Aug 13 01:20:44.758000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:20:44.759869 systemd[1]: systemd-modules-load.service: Deactivated successfully. 
Aug 13 01:20:44.760017 systemd[1]: Stopped systemd-modules-load.service. Aug 13 01:20:44.758000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:20:44.760306 systemd[1]: Stopping systemd-udevd.service... Aug 13 01:20:44.760000 audit: BPF prog-id=9 op=UNLOAD Aug 13 01:20:44.762180 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Aug 13 01:20:44.761000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:20:44.762459 systemd[1]: systemd-resolved.service: Deactivated successfully. Aug 13 01:20:44.762509 systemd[1]: Stopped systemd-resolved.service. Aug 13 01:20:44.763000 audit: BPF prog-id=6 op=UNLOAD Aug 13 01:20:44.764860 systemd[1]: network-cleanup.service: Deactivated successfully. Aug 13 01:20:44.764914 systemd[1]: Stopped network-cleanup.service. Aug 13 01:20:44.764000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:20:44.765872 systemd[1]: systemd-udevd.service: Deactivated successfully. Aug 13 01:20:44.765939 systemd[1]: Stopped systemd-udevd.service. Aug 13 01:20:44.764000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:20:44.766276 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Aug 13 01:20:44.766300 systemd[1]: Closed systemd-udevd-control.socket. Aug 13 01:20:44.766521 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Aug 13 01:20:44.766538 systemd[1]: Closed systemd-udevd-kernel.socket. Aug 13 01:20:44.765000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:20:44.766671 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Aug 13 01:20:44.765000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:20:44.766692 systemd[1]: Stopped dracut-pre-udev.service. Aug 13 01:20:44.765000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:20:44.766862 systemd[1]: dracut-cmdline.service: Deactivated successfully. Aug 13 01:20:44.766881 systemd[1]: Stopped dracut-cmdline.service. Aug 13 01:20:44.767018 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Aug 13 01:20:44.767036 systemd[1]: Stopped dracut-cmdline-ask.service. Aug 13 01:20:44.767513 systemd[1]: Starting initrd-udevadm-cleanup-db.service... Aug 13 01:20:44.766000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:20:44.767697 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. 
Aug 13 01:20:44.767725 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service. Aug 13 01:20:44.766000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:20:44.767940 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Aug 13 01:20:44.766000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:20:44.767961 systemd[1]: Stopped kmod-static-nodes.service. Aug 13 01:20:44.768162 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Aug 13 01:20:44.768181 systemd[1]: Stopped systemd-vconsole-setup.service. Aug 13 01:20:44.769127 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Aug 13 01:20:44.772301 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Aug 13 01:20:44.772347 systemd[1]: Finished initrd-udevadm-cleanup-db.service. Aug 13 01:20:44.771000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:20:44.771000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:20:44.807611 systemd[1]: sysroot-boot.service: Deactivated successfully. Aug 13 01:20:44.807688 systemd[1]: Stopped sysroot-boot.service. Aug 13 01:20:44.811943 kernel: kauditd_printk_skb: 63 callbacks suppressed Aug 13 01:20:44.811961 kernel: audit: type=1131 audit(1755048044.806:74): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:20:44.806000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:20:44.807954 systemd[1]: Reached target initrd-switch-root.target. Aug 13 01:20:44.812022 systemd[1]: initrd-setup-root.service: Deactivated successfully. Aug 13 01:20:44.815834 kernel: audit: type=1131 audit(1755048044.810:75): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:20:44.810000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:20:44.812053 systemd[1]: Stopped initrd-setup-root.service. Aug 13 01:20:44.812756 systemd[1]: Starting initrd-switch-root.service... Aug 13 01:20:44.830681 systemd[1]: Switching root. 
Aug 13 01:20:44.831000 audit: BPF prog-id=5 op=UNLOAD Aug 13 01:20:44.832000 audit: BPF prog-id=4 op=UNLOAD Aug 13 01:20:44.835156 kernel: audit: type=1334 audit(1755048044.831:76): prog-id=5 op=UNLOAD Aug 13 01:20:44.835184 kernel: audit: type=1334 audit(1755048044.832:77): prog-id=4 op=UNLOAD Aug 13 01:20:44.833000 audit: BPF prog-id=3 op=UNLOAD Aug 13 01:20:44.833000 audit: BPF prog-id=8 op=UNLOAD Aug 13 01:20:44.838469 kernel: audit: type=1334 audit(1755048044.833:78): prog-id=3 op=UNLOAD Aug 13 01:20:44.838488 kernel: audit: type=1334 audit(1755048044.833:79): prog-id=8 op=UNLOAD Aug 13 01:20:44.838499 kernel: audit: type=1334 audit(1755048044.833:80): prog-id=7 op=UNLOAD Aug 13 01:20:44.833000 audit: BPF prog-id=7 op=UNLOAD Aug 13 01:20:44.855514 iscsid[738]: iscsid shutting down. Aug 13 01:20:44.855677 systemd-journald[216]: Journal stopped Aug 13 01:20:47.028253 systemd-journald[216]: Received SIGTERM from PID 1 (n/a). Aug 13 01:20:47.028272 kernel: audit: type=1335 audit(1755048044.854:81): pid=216 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=kernel comm="systemd-journal" exe=2F7573722F6C69622F73797374656D642F73797374656D642D6A6F75726E616C64202864656C6574656429 nl-mcgrp=1 op=disconnect res=1 Aug 13 01:20:47.028281 kernel: SELinux: Class mctp_socket not defined in policy. Aug 13 01:20:47.028287 kernel: SELinux: Class anon_inode not defined in policy. Aug 13 01:20:47.028292 kernel: SELinux: the above unknown classes and permissions will be allowed Aug 13 01:20:47.028299 kernel: SELinux: policy capability network_peer_controls=1 Aug 13 01:20:47.028305 kernel: SELinux: policy capability open_perms=1 Aug 13 01:20:47.028311 kernel: SELinux: policy capability extended_socket_class=1 Aug 13 01:20:47.028317 kernel: SELinux: policy capability always_check_network=0 Aug 13 01:20:47.028322 kernel: SELinux: policy capability cgroup_seclabel=1 Aug 13 01:20:47.028328 kernel: SELinux: policy capability nnp_nosuid_transition=1 Aug 13 01:20:47.028333 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Aug 13 01:20:47.028340 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Aug 13 01:20:47.029749 kernel: audit: type=1403 audit(1755048045.220:82): auid=4294967295 ses=4294967295 lsm=selinux res=1 Aug 13 01:20:47.029761 systemd[1]: Successfully loaded SELinux policy in 40.785ms. Aug 13 01:20:47.029770 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 6.651ms. Aug 13 01:20:47.029780 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Aug 13 01:20:47.029788 systemd[1]: Detected virtualization vmware. Aug 13 01:20:47.029796 systemd[1]: Detected architecture x86-64. Aug 13 01:20:47.029802 systemd[1]: Detected first boot. Aug 13 01:20:47.029809 systemd[1]: Initializing machine ID from random generator. Aug 13 01:20:47.029816 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). 
Aug 13 01:20:47.029823 kernel: audit: type=1400 audit(1755048045.359:83): avc: denied { associate } for pid=939 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023" Aug 13 01:20:47.029831 systemd[1]: Populated /etc with preset unit settings. Aug 13 01:20:47.029837 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Aug 13 01:20:47.029845 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Aug 13 01:20:47.029852 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Aug 13 01:20:47.029859 systemd[1]: Queued start job for default target multi-user.target. Aug 13 01:20:47.029866 systemd[1]: Unnecessary job was removed for dev-sda6.device. Aug 13 01:20:47.029873 systemd[1]: Created slice system-addon\x2dconfig.slice. Aug 13 01:20:47.029881 systemd[1]: Created slice system-addon\x2drun.slice. Aug 13 01:20:47.029888 systemd[1]: Created slice system-getty.slice. Aug 13 01:20:47.029894 systemd[1]: Created slice system-modprobe.slice. Aug 13 01:20:47.029901 systemd[1]: Created slice system-serial\x2dgetty.slice. Aug 13 01:20:47.029907 systemd[1]: Created slice system-system\x2dcloudinit.slice. Aug 13 01:20:47.029914 systemd[1]: Created slice system-systemd\x2dfsck.slice. Aug 13 01:20:47.029920 systemd[1]: Created slice user.slice. Aug 13 01:20:47.029928 systemd[1]: Started systemd-ask-password-console.path. Aug 13 01:20:47.029934 systemd[1]: Started systemd-ask-password-wall.path. Aug 13 01:20:47.029941 systemd[1]: Set up automount boot.automount. Aug 13 01:20:47.029949 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. Aug 13 01:20:47.029956 systemd[1]: Reached target integritysetup.target. Aug 13 01:20:47.029963 systemd[1]: Reached target remote-cryptsetup.target. Aug 13 01:20:47.029970 systemd[1]: Reached target remote-fs.target. Aug 13 01:20:47.029976 systemd[1]: Reached target slices.target. Aug 13 01:20:47.029983 systemd[1]: Reached target swap.target. Aug 13 01:20:47.029989 systemd[1]: Reached target torcx.target. Aug 13 01:20:47.029997 systemd[1]: Reached target veritysetup.target. Aug 13 01:20:47.030004 systemd[1]: Listening on systemd-coredump.socket. Aug 13 01:20:47.030011 systemd[1]: Listening on systemd-initctl.socket. Aug 13 01:20:47.030018 systemd[1]: Listening on systemd-journald-audit.socket. Aug 13 01:20:47.030025 systemd[1]: Listening on systemd-journald-dev-log.socket. Aug 13 01:20:47.030031 systemd[1]: Listening on systemd-journald.socket. Aug 13 01:20:47.030038 systemd[1]: Listening on systemd-networkd.socket. Aug 13 01:20:47.030046 systemd[1]: Listening on systemd-udevd-control.socket. Aug 13 01:20:47.030053 systemd[1]: Listening on systemd-udevd-kernel.socket. Aug 13 01:20:47.030060 systemd[1]: Listening on systemd-userdbd.socket. Aug 13 01:20:47.030067 systemd[1]: Mounting dev-hugepages.mount... Aug 13 01:20:47.030074 systemd[1]: Mounting dev-mqueue.mount... Aug 13 01:20:47.030081 systemd[1]: Mounting media.mount... 
Aug 13 01:20:47.030089 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 13 01:20:47.030095 systemd[1]: Mounting sys-kernel-debug.mount... Aug 13 01:20:47.030102 systemd[1]: Mounting sys-kernel-tracing.mount... Aug 13 01:20:47.030376 systemd[1]: Mounting tmp.mount... Aug 13 01:20:47.030386 systemd[1]: Starting flatcar-tmpfiles.service... Aug 13 01:20:47.030393 systemd[1]: Starting ignition-delete-config.service... Aug 13 01:20:47.030399 systemd[1]: Starting kmod-static-nodes.service... Aug 13 01:20:47.030406 systemd[1]: Starting modprobe@configfs.service... Aug 13 01:20:47.030423 systemd[1]: Starting modprobe@dm_mod.service... Aug 13 01:20:47.030431 systemd[1]: Starting modprobe@drm.service... Aug 13 01:20:47.030439 systemd[1]: Starting modprobe@efi_pstore.service... Aug 13 01:20:47.030445 systemd[1]: Starting modprobe@fuse.service... Aug 13 01:20:47.030452 systemd[1]: Starting modprobe@loop.service... Aug 13 01:20:47.030459 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Aug 13 01:20:47.030467 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. Aug 13 01:20:47.030474 systemd[1]: (This warning is only shown for the first unit using IP firewalling.) Aug 13 01:20:47.030480 systemd[1]: Starting systemd-journald.service... Aug 13 01:20:47.030489 kernel: fuse: init (API version 7.34) Aug 13 01:20:47.030496 systemd[1]: Starting systemd-modules-load.service... Aug 13 01:20:47.030505 systemd[1]: Starting systemd-network-generator.service... Aug 13 01:20:47.030515 systemd[1]: Starting systemd-remount-fs.service... Aug 13 01:20:47.030522 systemd[1]: Starting systemd-udev-trigger.service... Aug 13 01:20:47.030529 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 13 01:20:47.030536 systemd[1]: Mounted dev-hugepages.mount. Aug 13 01:20:47.030543 systemd[1]: Mounted dev-mqueue.mount. Aug 13 01:20:47.030552 systemd[1]: Mounted media.mount. Aug 13 01:20:47.030559 systemd[1]: Mounted sys-kernel-debug.mount. Aug 13 01:20:47.030566 systemd[1]: Mounted sys-kernel-tracing.mount. Aug 13 01:20:47.030573 systemd[1]: Mounted tmp.mount. Aug 13 01:20:47.030579 systemd[1]: Finished kmod-static-nodes.service. Aug 13 01:20:47.030586 systemd[1]: modprobe@configfs.service: Deactivated successfully. Aug 13 01:20:47.030594 systemd[1]: Finished modprobe@configfs.service. Aug 13 01:20:47.030601 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Aug 13 01:20:47.030608 systemd[1]: Finished modprobe@dm_mod.service. Aug 13 01:20:47.030616 systemd[1]: modprobe@drm.service: Deactivated successfully. Aug 13 01:20:47.030623 systemd[1]: Finished modprobe@drm.service. Aug 13 01:20:47.030632 systemd-journald[1050]: Journal started Aug 13 01:20:47.030660 systemd-journald[1050]: Runtime Journal (/run/log/journal/2746d1df7fe94cd6ae7b718f9d7404ee) is 4.8M, max 38.8M, 34.0M free. 
Aug 13 01:20:46.941000 audit[1]: AVC avc: denied { audit_read } for pid=1 comm="systemd" capability=37 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Aug 13 01:20:46.941000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1 Aug 13 01:20:47.022000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Aug 13 01:20:47.022000 audit[1050]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=5 a1=7ffde1fd1560 a2=4000 a3=7ffde1fd15fc items=0 ppid=1 pid=1050 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:20:47.022000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Aug 13 01:20:47.024000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:20:47.025000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:20:47.025000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:20:47.027000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:20:47.027000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:20:47.029000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:20:47.029000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:20:47.031536 jq[1024]: true Aug 13 01:20:47.031974 systemd[1]: Started systemd-journald.service. Aug 13 01:20:47.030000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:20:47.032820 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Aug 13 01:20:47.034000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Aug 13 01:20:47.034000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:20:47.035744 systemd[1]: Finished modprobe@efi_pstore.service. Aug 13 01:20:47.036000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:20:47.036000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:20:47.037923 systemd[1]: modprobe@fuse.service: Deactivated successfully. Aug 13 01:20:47.038009 systemd[1]: Finished modprobe@fuse.service. Aug 13 01:20:47.036000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:20:47.038286 systemd[1]: Finished systemd-modules-load.service. Aug 13 01:20:47.037000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:20:47.038528 systemd[1]: Finished systemd-network-generator.service. Aug 13 01:20:47.037000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:20:47.038754 systemd[1]: Finished systemd-remount-fs.service. Aug 13 01:20:47.039047 systemd[1]: Reached target network-pre.target. Aug 13 01:20:47.039884 systemd[1]: Mounting sys-fs-fuse-connections.mount... Aug 13 01:20:47.043647 jq[1062]: true Aug 13 01:20:47.040658 systemd[1]: Mounting sys-kernel-config.mount... Aug 13 01:20:47.040767 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Aug 13 01:20:47.047744 systemd[1]: Starting systemd-hwdb-update.service... Aug 13 01:20:47.049077 systemd[1]: Starting systemd-journal-flush.service... Aug 13 01:20:47.049196 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Aug 13 01:20:47.060890 systemd-journald[1050]: Time spent on flushing to /var/log/journal/2746d1df7fe94cd6ae7b718f9d7404ee is 67.129ms for 1923 entries. Aug 13 01:20:47.060890 systemd-journald[1050]: System Journal (/var/log/journal/2746d1df7fe94cd6ae7b718f9d7404ee) is 8.0M, max 584.8M, 576.8M free. Aug 13 01:20:47.163782 systemd-journald[1050]: Received client request to flush runtime journal. Aug 13 01:20:47.163825 kernel: loop: module loaded Aug 13 01:20:47.060000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:20:47.063000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Aug 13 01:20:47.073000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:20:47.077000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:20:47.077000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:20:47.107000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:20:47.150000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:20:47.151000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:20:47.049919 systemd[1]: Starting systemd-random-seed.service... Aug 13 01:20:47.050963 systemd[1]: Starting systemd-sysctl.service... Aug 13 01:20:47.052910 systemd[1]: Mounted sys-fs-fuse-connections.mount. Aug 13 01:20:47.053045 systemd[1]: Mounted sys-kernel-config.mount. Aug 13 01:20:47.061374 systemd[1]: Finished flatcar-tmpfiles.service. Aug 13 01:20:47.062864 systemd[1]: Starting systemd-sysusers.service... Aug 13 01:20:47.064973 systemd[1]: Finished systemd-random-seed.service. Aug 13 01:20:47.163000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:20:47.065110 systemd[1]: Reached target first-boot-complete.target. Aug 13 01:20:47.074833 systemd[1]: Finished systemd-sysctl.service. Aug 13 01:20:47.078566 systemd[1]: modprobe@loop.service: Deactivated successfully. Aug 13 01:20:47.078658 systemd[1]: Finished modprobe@loop.service. Aug 13 01:20:47.078820 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Aug 13 01:20:47.108582 systemd[1]: Finished systemd-sysusers.service. Aug 13 01:20:47.109649 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Aug 13 01:20:47.151299 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Aug 13 01:20:47.165519 udevadm[1107]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Aug 13 01:20:47.152866 systemd[1]: Finished systemd-udev-trigger.service. Aug 13 01:20:47.153801 systemd[1]: Starting systemd-udev-settle.service... Aug 13 01:20:47.165098 systemd[1]: Finished systemd-journal-flush.service. 
Aug 13 01:20:47.192151 ignition[1083]: Ignition 2.14.0 Aug 13 01:20:47.192725 ignition[1083]: deleting config from guestinfo properties Aug 13 01:20:47.196325 ignition[1083]: Successfully deleted config Aug 13 01:20:47.195000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ignition-delete-config comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:20:47.196938 systemd[1]: Finished ignition-delete-config.service. Aug 13 01:20:47.489847 systemd[1]: Finished systemd-hwdb-update.service. Aug 13 01:20:47.488000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:20:47.491100 systemd[1]: Starting systemd-udevd.service... Aug 13 01:20:47.505675 systemd-udevd[1111]: Using default interface naming scheme 'v252'. Aug 13 01:20:47.524252 systemd[1]: Started systemd-udevd.service. Aug 13 01:20:47.522000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:20:47.525339 systemd[1]: Starting systemd-networkd.service... Aug 13 01:20:47.532055 systemd[1]: Starting systemd-userdbd.service... Aug 13 01:20:47.553589 systemd[1]: Started systemd-userdbd.service. Aug 13 01:20:47.552000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:20:47.557844 systemd[1]: Found device dev-ttyS0.device. Aug 13 01:20:47.598434 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Aug 13 01:20:47.603440 kernel: ACPI: button: Power Button [PWRF] Aug 13 01:20:47.609022 systemd-networkd[1112]: lo: Link UP Aug 13 01:20:47.608000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:20:47.609027 systemd-networkd[1112]: lo: Gained carrier Aug 13 01:20:47.609307 systemd-networkd[1112]: Enumeration completed Aug 13 01:20:47.609366 systemd-networkd[1112]: ens192: Configuring with /etc/systemd/network/00-vmware.network. Aug 13 01:20:47.609381 systemd[1]: Started systemd-networkd.service. Aug 13 01:20:47.612475 kernel: vmxnet3 0000:0b:00.0 ens192: intr type 3, mode 0, 3 vectors allocated Aug 13 01:20:47.612589 kernel: vmxnet3 0000:0b:00.0 ens192: NIC Link is Up 10000 Mbps Aug 13 01:20:47.613896 systemd-networkd[1112]: ens192: Link UP Aug 13 01:20:47.614027 systemd-networkd[1112]: ens192: Gained carrier Aug 13 01:20:47.614426 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): ens192: link becomes ready Aug 13 01:20:47.661445 kernel: vmw_vmci 0000:00:07.7: Found VMCI PCI device at 0x11080, irq 16 Aug 13 01:20:47.666291 kernel: vmw_vmci 0000:00:07.7: Using capabilities 0xc Aug 13 01:20:47.664457 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. 
Aug 13 01:20:47.667422 kernel: Guest personality initialized and is active Aug 13 01:20:47.672438 kernel: VMCI host device registered (name=vmci, major=10, minor=125) Aug 13 01:20:47.672467 kernel: Initialized host personality Aug 13 01:20:47.661000 audit[1123]: AVC avc: denied { confidentiality } for pid=1123 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Aug 13 01:20:47.661000 audit[1123]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=557c49b984f0 a1=338ac a2=7f4be4c4abc5 a3=5 items=110 ppid=1111 pid=1123 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:20:47.661000 audit: CWD cwd="/" Aug 13 01:20:47.661000 audit: PATH item=0 name=(null) inode=45 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:20:47.661000 audit: PATH item=1 name=(null) inode=23448 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:20:47.661000 audit: PATH item=2 name=(null) inode=23448 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:20:47.661000 audit: PATH item=3 name=(null) inode=23449 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:20:47.661000 audit: PATH item=4 name=(null) inode=23448 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:20:47.661000 audit: PATH item=5 name=(null) inode=23450 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:20:47.661000 audit: PATH item=6 name=(null) inode=23448 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:20:47.661000 audit: PATH item=7 name=(null) inode=23451 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:20:47.661000 audit: PATH item=8 name=(null) inode=23451 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:20:47.661000 audit: PATH item=9 name=(null) inode=23452 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:20:47.661000 audit: PATH item=10 name=(null) inode=23451 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:20:47.661000 audit: PATH item=11 name=(null) inode=23453 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:20:47.661000 audit: PATH item=12 name=(null) inode=23451 dev=00:0b mode=040750 
ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:20:47.661000 audit: PATH item=13 name=(null) inode=23454 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:20:47.661000 audit: PATH item=14 name=(null) inode=23451 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:20:47.661000 audit: PATH item=15 name=(null) inode=23455 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:20:47.661000 audit: PATH item=16 name=(null) inode=23451 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:20:47.661000 audit: PATH item=17 name=(null) inode=23456 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:20:47.661000 audit: PATH item=18 name=(null) inode=23448 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:20:47.661000 audit: PATH item=19 name=(null) inode=23457 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:20:47.661000 audit: PATH item=20 name=(null) inode=23457 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:20:47.661000 audit: PATH item=21 name=(null) inode=23458 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:20:47.661000 audit: PATH item=22 name=(null) inode=23457 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:20:47.661000 audit: PATH item=23 name=(null) inode=23459 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:20:47.661000 audit: PATH item=24 name=(null) inode=23457 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:20:47.661000 audit: PATH item=25 name=(null) inode=23460 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:20:47.661000 audit: PATH item=26 name=(null) inode=23457 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:20:47.661000 audit: PATH item=27 name=(null) inode=23461 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:20:47.661000 audit: PATH item=28 name=(null) inode=23457 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT 
cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:20:47.661000 audit: PATH item=29 name=(null) inode=23462 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:20:47.661000 audit: PATH item=30 name=(null) inode=23448 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:20:47.661000 audit: PATH item=31 name=(null) inode=23463 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:20:47.661000 audit: PATH item=32 name=(null) inode=23463 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:20:47.661000 audit: PATH item=33 name=(null) inode=23464 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:20:47.661000 audit: PATH item=34 name=(null) inode=23463 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:20:47.661000 audit: PATH item=35 name=(null) inode=23465 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:20:47.661000 audit: PATH item=36 name=(null) inode=23463 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:20:47.661000 audit: PATH item=37 name=(null) inode=23466 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:20:47.661000 audit: PATH item=38 name=(null) inode=23463 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:20:47.661000 audit: PATH item=39 name=(null) inode=23467 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:20:47.661000 audit: PATH item=40 name=(null) inode=23463 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:20:47.661000 audit: PATH item=41 name=(null) inode=23468 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:20:47.661000 audit: PATH item=42 name=(null) inode=23448 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:20:47.661000 audit: PATH item=43 name=(null) inode=23469 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:20:47.661000 audit: PATH item=44 name=(null) inode=23469 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:20:47.661000 
audit: PATH item=45 name=(null) inode=23470 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:20:47.661000 audit: PATH item=46 name=(null) inode=23469 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:20:47.661000 audit: PATH item=47 name=(null) inode=23471 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:20:47.661000 audit: PATH item=48 name=(null) inode=23469 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:20:47.661000 audit: PATH item=49 name=(null) inode=23472 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:20:47.661000 audit: PATH item=50 name=(null) inode=23469 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:20:47.661000 audit: PATH item=51 name=(null) inode=23473 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:20:47.661000 audit: PATH item=52 name=(null) inode=23469 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:20:47.661000 audit: PATH item=53 name=(null) inode=23474 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:20:47.661000 audit: PATH item=54 name=(null) inode=45 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:20:47.661000 audit: PATH item=55 name=(null) inode=23475 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:20:47.661000 audit: PATH item=56 name=(null) inode=23475 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:20:47.661000 audit: PATH item=57 name=(null) inode=23476 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:20:47.661000 audit: PATH item=58 name=(null) inode=23475 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:20:47.661000 audit: PATH item=59 name=(null) inode=23477 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:20:47.661000 audit: PATH item=60 name=(null) inode=23475 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:20:47.661000 audit: PATH item=61 name=(null) inode=23478 dev=00:0b mode=040750 ouid=0 
ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:20:47.661000 audit: PATH item=62 name=(null) inode=23478 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:20:47.661000 audit: PATH item=63 name=(null) inode=23479 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:20:47.661000 audit: PATH item=64 name=(null) inode=23478 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:20:47.661000 audit: PATH item=65 name=(null) inode=23480 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:20:47.661000 audit: PATH item=66 name=(null) inode=23478 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:20:47.661000 audit: PATH item=67 name=(null) inode=23481 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:20:47.661000 audit: PATH item=68 name=(null) inode=23478 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:20:47.661000 audit: PATH item=69 name=(null) inode=23482 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:20:47.661000 audit: PATH item=70 name=(null) inode=23478 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:20:47.661000 audit: PATH item=71 name=(null) inode=23483 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:20:47.661000 audit: PATH item=72 name=(null) inode=23475 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:20:47.661000 audit: PATH item=73 name=(null) inode=23484 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:20:47.661000 audit: PATH item=74 name=(null) inode=23484 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:20:47.661000 audit: PATH item=75 name=(null) inode=23485 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:20:47.661000 audit: PATH item=76 name=(null) inode=23484 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:20:47.661000 audit: PATH item=77 name=(null) inode=23486 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE 
cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:20:47.661000 audit: PATH item=78 name=(null) inode=23484 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:20:47.661000 audit: PATH item=79 name=(null) inode=23487 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:20:47.661000 audit: PATH item=80 name=(null) inode=23484 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:20:47.661000 audit: PATH item=81 name=(null) inode=23488 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:20:47.661000 audit: PATH item=82 name=(null) inode=23484 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:20:47.661000 audit: PATH item=83 name=(null) inode=23489 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:20:47.661000 audit: PATH item=84 name=(null) inode=23475 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:20:47.661000 audit: PATH item=85 name=(null) inode=23490 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:20:47.661000 audit: PATH item=86 name=(null) inode=23490 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:20:47.661000 audit: PATH item=87 name=(null) inode=23491 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:20:47.661000 audit: PATH item=88 name=(null) inode=23490 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:20:47.661000 audit: PATH item=89 name=(null) inode=23492 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:20:47.661000 audit: PATH item=90 name=(null) inode=23490 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:20:47.661000 audit: PATH item=91 name=(null) inode=23493 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:20:47.661000 audit: PATH item=92 name=(null) inode=23490 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:20:47.661000 audit: PATH item=93 name=(null) inode=23494 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:20:47.661000 
audit: PATH item=94 name=(null) inode=23490 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:20:47.661000 audit: PATH item=95 name=(null) inode=23495 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:20:47.661000 audit: PATH item=96 name=(null) inode=23475 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:20:47.661000 audit: PATH item=97 name=(null) inode=23496 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:20:47.661000 audit: PATH item=98 name=(null) inode=23496 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:20:47.661000 audit: PATH item=99 name=(null) inode=23497 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:20:47.661000 audit: PATH item=100 name=(null) inode=23496 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:20:47.661000 audit: PATH item=101 name=(null) inode=23498 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:20:47.661000 audit: PATH item=102 name=(null) inode=23496 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:20:47.661000 audit: PATH item=103 name=(null) inode=23499 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:20:47.661000 audit: PATH item=104 name=(null) inode=23496 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:20:47.661000 audit: PATH item=105 name=(null) inode=23500 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:20:47.661000 audit: PATH item=106 name=(null) inode=23496 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:20:47.661000 audit: PATH item=107 name=(null) inode=23501 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:20:47.661000 audit: PATH item=108 name=(null) inode=1 dev=00:07 mode=040700 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:20:47.661000 audit: PATH item=109 name=(null) inode=23502 dev=00:07 mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:20:47.661000 audit: PROCTITLE proctitle="(udev-worker)" Aug 13 01:20:47.686425 
kernel: piix4_smbus 0000:00:07.3: SMBus Host Controller not enabled! Aug 13 01:20:47.701426 kernel: input: ImPS/2 Generic Wheel Mouse as /devices/platform/i8042/serio1/input/input3 Aug 13 01:20:47.719441 kernel: mousedev: PS/2 mouse device common for all mice Aug 13 01:20:47.720171 (udev-worker)[1113]: id: Truncating stdout of 'dmi_memory_id' up to 16384 byte. Aug 13 01:20:47.738779 systemd[1]: Finished systemd-udev-settle.service. Aug 13 01:20:47.737000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:20:47.739768 systemd[1]: Starting lvm2-activation-early.service... Aug 13 01:20:47.769381 lvm[1145]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Aug 13 01:20:47.793978 systemd[1]: Finished lvm2-activation-early.service. Aug 13 01:20:47.792000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:20:47.794155 systemd[1]: Reached target cryptsetup.target. Aug 13 01:20:47.795207 systemd[1]: Starting lvm2-activation.service... Aug 13 01:20:47.798268 lvm[1147]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Aug 13 01:20:47.820919 systemd[1]: Finished lvm2-activation.service. Aug 13 01:20:47.819000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:20:47.821074 systemd[1]: Reached target local-fs-pre.target. Aug 13 01:20:47.821177 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Aug 13 01:20:47.821193 systemd[1]: Reached target local-fs.target. Aug 13 01:20:47.821290 systemd[1]: Reached target machines.target. Aug 13 01:20:47.822281 systemd[1]: Starting ldconfig.service... Aug 13 01:20:47.822821 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Aug 13 01:20:47.822852 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Aug 13 01:20:47.823666 systemd[1]: Starting systemd-boot-update.service... Aug 13 01:20:47.824324 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Aug 13 01:20:47.825279 systemd[1]: Starting systemd-machine-id-commit.service... Aug 13 01:20:47.826305 systemd[1]: Starting systemd-sysext.service... Aug 13 01:20:47.829950 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1150 (bootctl) Aug 13 01:20:47.830627 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Aug 13 01:20:47.840272 systemd[1]: Unmounting usr-share-oem.mount... Aug 13 01:20:47.842464 systemd[1]: usr-share-oem.mount: Deactivated successfully. Aug 13 01:20:47.842582 systemd[1]: Unmounted usr-share-oem.mount. Aug 13 01:20:47.848000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Aug 13 01:20:47.849955 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Aug 13 01:20:47.856430 kernel: loop0: detected capacity change from 0 to 221472 Aug 13 01:20:48.129619 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Aug 13 01:20:48.130605 systemd[1]: Finished systemd-machine-id-commit.service. Aug 13 01:20:48.129000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:20:48.147429 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Aug 13 01:20:48.197891 systemd-fsck[1163]: fsck.fat 4.2 (2021-01-31) Aug 13 01:20:48.197891 systemd-fsck[1163]: /dev/sda1: 789 files, 119324/258078 clusters Aug 13 01:20:48.201453 kernel: loop1: detected capacity change from 0 to 221472 Aug 13 01:20:48.200000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:20:48.201745 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Aug 13 01:20:48.203062 systemd[1]: Mounting boot.mount... Aug 13 01:20:48.215022 systemd[1]: Mounted boot.mount. Aug 13 01:20:48.226467 systemd[1]: Finished systemd-boot-update.service. Aug 13 01:20:48.225000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:20:48.231521 (sd-sysext)[1167]: Using extensions 'kubernetes'. Aug 13 01:20:48.232359 (sd-sysext)[1167]: Merged extensions into '/usr'. Aug 13 01:20:48.241292 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 13 01:20:48.242227 systemd[1]: Mounting usr-share-oem.mount... Aug 13 01:20:48.242982 systemd[1]: Starting modprobe@dm_mod.service... Aug 13 01:20:48.243842 systemd[1]: Starting modprobe@efi_pstore.service... Aug 13 01:20:48.244851 systemd[1]: Starting modprobe@loop.service... Aug 13 01:20:48.245003 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Aug 13 01:20:48.245077 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Aug 13 01:20:48.245145 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 13 01:20:48.245602 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Aug 13 01:20:48.244000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:20:48.244000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:20:48.245678 systemd[1]: Finished modprobe@efi_pstore.service. Aug 13 01:20:48.246025 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). 
Aug 13 01:20:48.248153 systemd[1]: Mounted usr-share-oem.mount. Aug 13 01:20:48.248456 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Aug 13 01:20:48.248532 systemd[1]: Finished modprobe@dm_mod.service. Aug 13 01:20:48.247000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:20:48.247000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:20:48.248902 systemd[1]: modprobe@loop.service: Deactivated successfully. Aug 13 01:20:48.247000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:20:48.247000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:20:48.249000 systemd[1]: Finished modprobe@loop.service. Aug 13 01:20:48.249262 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Aug 13 01:20:48.248000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:20:48.249828 systemd[1]: Finished systemd-sysext.service. Aug 13 01:20:48.252997 systemd[1]: Starting ensure-sysext.service... Aug 13 01:20:48.255290 systemd[1]: Starting systemd-tmpfiles-setup.service... Aug 13 01:20:48.262556 systemd[1]: Reloading. Aug 13 01:20:48.265331 systemd-tmpfiles[1186]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Aug 13 01:20:48.266237 systemd-tmpfiles[1186]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Aug 13 01:20:48.268026 systemd-tmpfiles[1186]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Aug 13 01:20:48.308284 /usr/lib/systemd/system-generators/torcx-generator[1205]: time="2025-08-13T01:20:48Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]" Aug 13 01:20:48.308299 /usr/lib/systemd/system-generators/torcx-generator[1205]: time="2025-08-13T01:20:48Z" level=info msg="torcx already run" Aug 13 01:20:48.375127 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Aug 13 01:20:48.375136 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Aug 13 01:20:48.387647 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Aug 13 01:20:48.421556 systemd[1]: Finished systemd-tmpfiles-setup.service. 
Aug 13 01:20:48.420000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:20:48.423164 systemd[1]: Starting audit-rules.service... Aug 13 01:20:48.424101 systemd[1]: Starting clean-ca-certificates.service... Aug 13 01:20:48.425088 systemd[1]: Starting systemd-journal-catalog-update.service... Aug 13 01:20:48.426157 systemd[1]: Starting systemd-resolved.service... Aug 13 01:20:48.427447 systemd[1]: Starting systemd-timesyncd.service... Aug 13 01:20:48.428572 systemd[1]: Starting systemd-update-utmp.service... Aug 13 01:20:48.436573 systemd[1]: Finished clean-ca-certificates.service. Aug 13 01:20:48.435000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:20:48.436806 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Aug 13 01:20:48.440907 systemd[1]: Starting modprobe@dm_mod.service... Aug 13 01:20:48.443803 systemd[1]: Starting modprobe@efi_pstore.service... Aug 13 01:20:48.444603 systemd[1]: Starting modprobe@loop.service... Aug 13 01:20:48.444722 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Aug 13 01:20:48.444813 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Aug 13 01:20:48.444915 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Aug 13 01:20:48.445341 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Aug 13 01:20:48.445441 systemd[1]: Finished modprobe@efi_pstore.service. Aug 13 01:20:48.444000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:20:48.444000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:20:48.447630 systemd[1]: modprobe@loop.service: Deactivated successfully. Aug 13 01:20:48.447706 systemd[1]: Finished modprobe@loop.service. Aug 13 01:20:48.446000 audit[1279]: SYSTEM_BOOT pid=1279 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Aug 13 01:20:48.447000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:20:48.447000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Aug 13 01:20:48.448814 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Aug 13 01:20:48.448000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:20:48.449000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:20:48.450155 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Aug 13 01:20:48.450243 systemd[1]: Finished modprobe@dm_mod.service. Aug 13 01:20:48.450538 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Aug 13 01:20:48.456760 systemd[1]: Finished systemd-update-utmp.service. Aug 13 01:20:48.455000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:20:48.458958 systemd[1]: Starting modprobe@dm_mod.service... Aug 13 01:20:48.459816 systemd[1]: Starting modprobe@efi_pstore.service... Aug 13 01:20:48.460564 systemd[1]: Starting modprobe@loop.service... Aug 13 01:20:48.460677 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Aug 13 01:20:48.460743 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Aug 13 01:20:48.460822 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Aug 13 01:20:48.462000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:20:48.462000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:20:48.463935 systemd[1]: modprobe@loop.service: Deactivated successfully. Aug 13 01:20:48.464019 systemd[1]: Finished modprobe@loop.service. Aug 13 01:20:48.464641 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Aug 13 01:20:48.464716 systemd[1]: Finished modprobe@dm_mod.service. Aug 13 01:20:48.463000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:20:48.463000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:20:48.465038 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Aug 13 01:20:48.465121 systemd[1]: Finished modprobe@efi_pstore.service. 
Aug 13 01:20:48.463000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:20:48.463000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:20:48.465333 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Aug 13 01:20:48.465385 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Aug 13 01:20:48.468383 systemd[1]: Starting modprobe@dm_mod.service... Aug 13 01:20:48.469191 systemd[1]: Starting modprobe@drm.service... Aug 13 01:20:48.470084 systemd[1]: Starting modprobe@efi_pstore.service... Aug 13 01:20:48.471936 systemd[1]: Starting modprobe@loop.service... Aug 13 01:20:48.472076 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Aug 13 01:20:48.472142 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Aug 13 01:20:48.473048 systemd[1]: Starting systemd-networkd-wait-online.service... Aug 13 01:20:48.473212 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Aug 13 01:20:48.475606 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Aug 13 01:20:48.475722 systemd[1]: Finished modprobe@dm_mod.service. Aug 13 01:20:48.474000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:20:48.474000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:20:48.477000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:20:48.477000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:20:48.478967 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Aug 13 01:20:48.479045 systemd[1]: Finished modprobe@efi_pstore.service. Aug 13 01:20:48.479257 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Aug 13 01:20:48.482000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:20:48.483370 systemd[1]: Finished ensure-sysext.service. Aug 13 01:20:48.486188 ldconfig[1149]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. 
Aug 13 01:20:48.489045 systemd[1]: modprobe@drm.service: Deactivated successfully. Aug 13 01:20:48.489126 systemd[1]: Finished modprobe@drm.service. Aug 13 01:20:48.487000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:20:48.487000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:20:48.489407 systemd[1]: modprobe@loop.service: Deactivated successfully. Aug 13 01:20:48.489499 systemd[1]: Finished modprobe@loop.service. Aug 13 01:20:48.488000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:20:48.488000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:20:48.489631 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Aug 13 01:20:48.490665 systemd[1]: Finished ldconfig.service. Aug 13 01:20:48.489000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:20:48.493724 systemd[1]: Finished systemd-journal-catalog-update.service. Aug 13 01:20:48.492000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:20:48.494795 systemd[1]: Starting systemd-update-done.service... Aug 13 01:20:48.504511 systemd[1]: Finished systemd-update-done.service. Aug 13 01:20:48.507000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-done comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:20:48.520000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Aug 13 01:20:48.520000 audit[1325]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffd747bf990 a2=420 a3=0 items=0 ppid=1273 pid=1325 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:20:48.520000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Aug 13 01:20:48.521901 augenrules[1325]: No rules Aug 13 01:20:48.521959 systemd[1]: Finished audit-rules.service. Aug 13 01:20:48.525872 systemd[1]: Started systemd-timesyncd.service. Aug 13 01:20:48.526036 systemd[1]: Reached target time-set.target. Aug 13 01:20:48.536117 systemd-resolved[1277]: Positive Trust Anchors: Aug 13 01:20:48.536125 systemd-resolved[1277]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Aug 13 01:20:48.536143 systemd-resolved[1277]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Aug 13 01:20:48.554428 systemd-resolved[1277]: Defaulting to hostname 'linux'. Aug 13 01:20:48.555436 systemd[1]: Started systemd-resolved.service. Aug 13 01:20:48.555565 systemd[1]: Reached target network.target. Aug 13 01:20:48.555655 systemd[1]: Reached target nss-lookup.target. Aug 13 01:20:48.555745 systemd[1]: Reached target sysinit.target. Aug 13 01:20:48.555876 systemd[1]: Started motdgen.path. Aug 13 01:20:48.555973 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Aug 13 01:22:18.390374 systemd-timesyncd[1278]: Contacted time server 23.186.168.131:123 (0.flatcar.pool.ntp.org). Aug 13 01:22:18.390424 systemd[1]: Started logrotate.timer. Aug 13 01:22:18.390547 systemd[1]: Started mdadm.timer. Aug 13 01:22:18.390621 systemd-timesyncd[1278]: Initial clock synchronization to Wed 2025-08-13 01:22:18.390325 UTC. Aug 13 01:22:18.390627 systemd[1]: Started systemd-tmpfiles-clean.timer. Aug 13 01:22:18.390722 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Aug 13 01:22:18.390740 systemd[1]: Reached target paths.target. Aug 13 01:22:18.390818 systemd[1]: Reached target timers.target. Aug 13 01:22:18.391038 systemd[1]: Listening on dbus.socket. Aug 13 01:22:18.391722 systemd-resolved[1277]: Clock change detected. Flushing caches. Aug 13 01:22:18.391936 systemd[1]: Starting docker.socket... Aug 13 01:22:18.392832 systemd[1]: Listening on sshd.socket. Aug 13 01:22:18.392966 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Aug 13 01:22:18.393164 systemd[1]: Listening on docker.socket. Aug 13 01:22:18.393318 systemd[1]: Reached target sockets.target. Aug 13 01:22:18.393402 systemd[1]: Reached target basic.target. Aug 13 01:22:18.393551 systemd[1]: System is tainted: cgroupsv1 Aug 13 01:22:18.393577 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Aug 13 01:22:18.393591 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Aug 13 01:22:18.394330 systemd[1]: Starting containerd.service... Aug 13 01:22:18.395187 systemd[1]: Starting dbus.service... Aug 13 01:22:18.396457 systemd[1]: Starting enable-oem-cloudinit.service... Aug 13 01:22:18.397383 systemd[1]: Starting extend-filesystems.service... Aug 13 01:22:18.405596 jq[1337]: false Aug 13 01:22:18.397506 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Aug 13 01:22:18.398248 systemd[1]: Starting motdgen.service... Aug 13 01:22:18.399421 systemd[1]: Starting prepare-helm.service... Aug 13 01:22:18.400278 systemd[1]: Starting ssh-key-proc-cmdline.service... 
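Editor's note: the wall clock jumps by roughly ninety seconds between the last 01:20:48 entry and the first 01:22:18 entry above. systemd-timesyncd steps the clock on its first synchronization against 0.flatcar.pool.ntp.org, which is also why systemd-resolved reports "Clock change detected. Flushing caches." A small illustrative sketch (Python, timestamps copied from the two adjacent entries) that computes the size of the step:

    from datetime import datetime

    # Timestamps copied from the two adjacent log entries around the clock step.
    before = datetime.fromisoformat("2025-08-13T01:20:48.555973")  # last entry on the unsynchronized clock
    after = datetime.fromisoformat("2025-08-13T01:22:18.390374")   # first entry after NTP synchronization
    print(f"clock stepped forward by {(after - before).total_seconds():.3f} s")  # ~89.834 s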
Aug 13 01:22:18.402427 systemd[1]: Starting sshd-keygen.service... Aug 13 01:22:18.404765 systemd[1]: Starting systemd-logind.service... Aug 13 01:22:18.405729 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Aug 13 01:22:18.405766 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Aug 13 01:22:18.406545 systemd[1]: Starting update-engine.service... Aug 13 01:22:18.408559 systemd[1]: Starting update-ssh-keys-after-ignition.service... Aug 13 01:22:18.416685 systemd[1]: Starting vmtoolsd.service... Aug 13 01:22:18.417702 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Aug 13 01:22:18.417832 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. Aug 13 01:22:18.418475 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Aug 13 01:22:18.418608 systemd[1]: Finished ssh-key-proc-cmdline.service. Aug 13 01:22:18.421445 jq[1351]: true Aug 13 01:22:18.427029 jq[1361]: true Aug 13 01:22:18.437108 extend-filesystems[1339]: Found loop1 Aug 13 01:22:18.437483 extend-filesystems[1339]: Found sda Aug 13 01:22:18.437625 extend-filesystems[1339]: Found sda1 Aug 13 01:22:18.437786 extend-filesystems[1339]: Found sda2 Aug 13 01:22:18.437923 extend-filesystems[1339]: Found sda3 Aug 13 01:22:18.438056 extend-filesystems[1339]: Found usr Aug 13 01:22:18.438223 extend-filesystems[1339]: Found sda4 Aug 13 01:22:18.438369 extend-filesystems[1339]: Found sda6 Aug 13 01:22:18.438617 extend-filesystems[1339]: Found sda7 Aug 13 01:22:18.439858 extend-filesystems[1339]: Found sda9 Aug 13 01:22:18.440002 extend-filesystems[1339]: Checking size of /dev/sda9 Aug 13 01:22:18.443358 systemd[1]: Started vmtoolsd.service. Aug 13 01:22:18.460753 tar[1356]: linux-amd64/helm Aug 13 01:22:18.460921 bash[1378]: Updated "/home/core/.ssh/authorized_keys" Aug 13 01:22:18.457463 systemd[1]: motdgen.service: Deactivated successfully. Aug 13 01:22:18.457601 systemd[1]: Finished motdgen.service. Aug 13 01:22:18.469828 systemd[1]: Finished update-ssh-keys-after-ignition.service. Aug 13 01:22:18.479698 dbus-daemon[1336]: [system] SELinux support is enabled Aug 13 01:22:18.479791 systemd[1]: Started dbus.service. Aug 13 01:22:18.481053 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Aug 13 01:22:18.481090 systemd[1]: Reached target system-config.target. Aug 13 01:22:18.481232 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Aug 13 01:22:18.481242 systemd[1]: Reached target user-config.target. Aug 13 01:22:18.491877 extend-filesystems[1339]: Old size kept for /dev/sda9 Aug 13 01:22:18.492200 extend-filesystems[1339]: Found sr0 Aug 13 01:22:18.499472 systemd[1]: extend-filesystems.service: Deactivated successfully. Aug 13 01:22:18.499604 systemd[1]: Finished extend-filesystems.service. Aug 13 01:22:18.517904 kernel: NET: Registered PF_VSOCK protocol family Aug 13 01:22:18.518337 update_engine[1349]: I0813 01:22:18.517858 1349 main.cc:92] Flatcar Update Engine starting Aug 13 01:22:18.521985 systemd[1]: Started update-engine.service. Aug 13 01:22:18.523252 systemd[1]: Started locksmithd.service. 
Aug 13 01:22:18.523555 update_engine[1349]: I0813 01:22:18.523481 1349 update_check_scheduler.cc:74] Next update check in 6m40s Aug 13 01:22:18.542939 systemd-logind[1346]: Watching system buttons on /dev/input/event1 (Power Button) Aug 13 01:22:18.542957 systemd-logind[1346]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Aug 13 01:22:18.543067 systemd-logind[1346]: New seat seat0. Aug 13 01:22:18.544285 systemd[1]: Started systemd-logind.service. Aug 13 01:22:18.545505 env[1382]: time="2025-08-13T01:22:18.545319877Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Aug 13 01:22:18.567127 env[1382]: time="2025-08-13T01:22:18.567104352Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Aug 13 01:22:18.567404 env[1382]: time="2025-08-13T01:22:18.567392953Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Aug 13 01:22:18.568659 env[1382]: time="2025-08-13T01:22:18.568124990Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.189-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Aug 13 01:22:18.568659 env[1382]: time="2025-08-13T01:22:18.568141166Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Aug 13 01:22:18.568659 env[1382]: time="2025-08-13T01:22:18.568254949Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Aug 13 01:22:18.568659 env[1382]: time="2025-08-13T01:22:18.568265373Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Aug 13 01:22:18.568659 env[1382]: time="2025-08-13T01:22:18.568272932Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Aug 13 01:22:18.568659 env[1382]: time="2025-08-13T01:22:18.568278370Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Aug 13 01:22:18.568659 env[1382]: time="2025-08-13T01:22:18.568320603Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Aug 13 01:22:18.568659 env[1382]: time="2025-08-13T01:22:18.568444072Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Aug 13 01:22:18.568659 env[1382]: time="2025-08-13T01:22:18.568529825Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Aug 13 01:22:18.568659 env[1382]: time="2025-08-13T01:22:18.568540255Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." 
type=io.containerd.metadata.v1 Aug 13 01:22:18.568863 env[1382]: time="2025-08-13T01:22:18.568569679Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Aug 13 01:22:18.568863 env[1382]: time="2025-08-13T01:22:18.568577770Z" level=info msg="metadata content store policy set" policy=shared Aug 13 01:22:18.573991 env[1382]: time="2025-08-13T01:22:18.573850619Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Aug 13 01:22:18.573991 env[1382]: time="2025-08-13T01:22:18.573867708Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Aug 13 01:22:18.573991 env[1382]: time="2025-08-13T01:22:18.573875677Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Aug 13 01:22:18.573991 env[1382]: time="2025-08-13T01:22:18.573892322Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Aug 13 01:22:18.573991 env[1382]: time="2025-08-13T01:22:18.573901059Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Aug 13 01:22:18.573991 env[1382]: time="2025-08-13T01:22:18.573908743Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Aug 13 01:22:18.573991 env[1382]: time="2025-08-13T01:22:18.573915926Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Aug 13 01:22:18.573991 env[1382]: time="2025-08-13T01:22:18.573923230Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Aug 13 01:22:18.575808 env[1382]: time="2025-08-13T01:22:18.573934348Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 Aug 13 01:22:18.575808 env[1382]: time="2025-08-13T01:22:18.574181174Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Aug 13 01:22:18.575808 env[1382]: time="2025-08-13T01:22:18.574189674Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Aug 13 01:22:18.575808 env[1382]: time="2025-08-13T01:22:18.574197082Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Aug 13 01:22:18.575808 env[1382]: time="2025-08-13T01:22:18.574253517Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Aug 13 01:22:18.575808 env[1382]: time="2025-08-13T01:22:18.574300850Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Aug 13 01:22:18.575808 env[1382]: time="2025-08-13T01:22:18.574472124Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Aug 13 01:22:18.575808 env[1382]: time="2025-08-13T01:22:18.574488453Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Aug 13 01:22:18.575808 env[1382]: time="2025-08-13T01:22:18.574496724Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Aug 13 01:22:18.575808 env[1382]: time="2025-08-13T01:22:18.574527268Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." 
type=io.containerd.grpc.v1 Aug 13 01:22:18.575808 env[1382]: time="2025-08-13T01:22:18.574536908Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Aug 13 01:22:18.575808 env[1382]: time="2025-08-13T01:22:18.574548373Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Aug 13 01:22:18.575808 env[1382]: time="2025-08-13T01:22:18.574557332Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Aug 13 01:22:18.575808 env[1382]: time="2025-08-13T01:22:18.574564172Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Aug 13 01:22:18.575500 systemd[1]: Started containerd.service. Aug 13 01:22:18.576095 env[1382]: time="2025-08-13T01:22:18.574570662Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Aug 13 01:22:18.576095 env[1382]: time="2025-08-13T01:22:18.574578591Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Aug 13 01:22:18.576095 env[1382]: time="2025-08-13T01:22:18.574585146Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Aug 13 01:22:18.576095 env[1382]: time="2025-08-13T01:22:18.574593196Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Aug 13 01:22:18.576095 env[1382]: time="2025-08-13T01:22:18.574673585Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Aug 13 01:22:18.576095 env[1382]: time="2025-08-13T01:22:18.574682702Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Aug 13 01:22:18.576095 env[1382]: time="2025-08-13T01:22:18.574698390Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Aug 13 01:22:18.576095 env[1382]: time="2025-08-13T01:22:18.574705808Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Aug 13 01:22:18.576095 env[1382]: time="2025-08-13T01:22:18.574713852Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Aug 13 01:22:18.576095 env[1382]: time="2025-08-13T01:22:18.574719460Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Aug 13 01:22:18.576095 env[1382]: time="2025-08-13T01:22:18.574730598Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Aug 13 01:22:18.576095 env[1382]: time="2025-08-13T01:22:18.574751596Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Aug 13 01:22:18.578361 env[1382]: time="2025-08-13T01:22:18.574865619Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Aug 13 01:22:18.578361 env[1382]: time="2025-08-13T01:22:18.574897180Z" level=info msg="Connect containerd service" Aug 13 01:22:18.578361 env[1382]: time="2025-08-13T01:22:18.574919864Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Aug 13 01:22:18.578361 env[1382]: time="2025-08-13T01:22:18.575242236Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Aug 13 01:22:18.578361 env[1382]: time="2025-08-13T01:22:18.575388021Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Aug 13 01:22:18.578361 env[1382]: time="2025-08-13T01:22:18.575416746Z" level=info msg=serving... 
address=/run/containerd/containerd.sock Aug 13 01:22:18.578361 env[1382]: time="2025-08-13T01:22:18.575445095Z" level=info msg="containerd successfully booted in 0.044278s" Aug 13 01:22:18.584843 env[1382]: time="2025-08-13T01:22:18.584817045Z" level=info msg="Start subscribing containerd event" Aug 13 01:22:18.584913 env[1382]: time="2025-08-13T01:22:18.584902915Z" level=info msg="Start recovering state" Aug 13 01:22:18.584994 env[1382]: time="2025-08-13T01:22:18.584985262Z" level=info msg="Start event monitor" Aug 13 01:22:18.585039 env[1382]: time="2025-08-13T01:22:18.585029647Z" level=info msg="Start snapshots syncer" Aug 13 01:22:18.585083 env[1382]: time="2025-08-13T01:22:18.585073633Z" level=info msg="Start cni network conf syncer for default" Aug 13 01:22:18.585124 env[1382]: time="2025-08-13T01:22:18.585114890Z" level=info msg="Start streaming server" Aug 13 01:22:18.657889 systemd-networkd[1112]: ens192: Gained IPv6LL Aug 13 01:22:18.659153 systemd[1]: Finished systemd-networkd-wait-online.service. Aug 13 01:22:18.659435 systemd[1]: Reached target network-online.target. Aug 13 01:22:18.660754 systemd[1]: Starting kubelet.service... Aug 13 01:22:18.848796 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 13 01:22:18.848840 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 13 01:22:18.899433 locksmithd[1405]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Aug 13 01:22:18.980441 tar[1356]: linux-amd64/LICENSE Aug 13 01:22:18.980441 tar[1356]: linux-amd64/README.md Aug 13 01:22:18.983083 systemd[1]: Finished prepare-helm.service. Aug 13 01:22:19.389558 sshd_keygen[1368]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Aug 13 01:22:19.403006 systemd[1]: Finished sshd-keygen.service. Aug 13 01:22:19.404204 systemd[1]: Starting issuegen.service... Aug 13 01:22:19.407628 systemd[1]: issuegen.service: Deactivated successfully. Aug 13 01:22:19.407756 systemd[1]: Finished issuegen.service. Aug 13 01:22:19.408856 systemd[1]: Starting systemd-user-sessions.service... Aug 13 01:22:19.413064 systemd[1]: Finished systemd-user-sessions.service. Aug 13 01:22:19.413905 systemd[1]: Started getty@tty1.service. Aug 13 01:22:19.414657 systemd[1]: Started serial-getty@ttyS0.service. Aug 13 01:22:19.414860 systemd[1]: Reached target getty.target. Aug 13 01:22:19.806130 systemd[1]: Started kubelet.service. Aug 13 01:22:19.806522 systemd[1]: Reached target multi-user.target. Aug 13 01:22:19.807714 systemd[1]: Starting systemd-update-utmp-runlevel.service... Aug 13 01:22:19.813797 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Aug 13 01:22:19.813919 systemd[1]: Finished systemd-update-utmp-runlevel.service. Aug 13 01:22:19.816489 systemd[1]: Startup finished in 5.497s (kernel) + 4.812s (userspace) = 10.309s. Aug 13 01:22:19.836656 login[1483]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Aug 13 01:22:19.838023 login[1484]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Aug 13 01:22:19.844964 systemd[1]: Created slice user-500.slice. Aug 13 01:22:19.845594 systemd[1]: Starting user-runtime-dir@500.service... Aug 13 01:22:19.849071 systemd-logind[1346]: New session 1 of user core. Aug 13 01:22:19.851338 systemd-logind[1346]: New session 2 of user core. Aug 13 01:22:19.854208 systemd[1]: Finished user-runtime-dir@500.service. 
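Editor's note: containerd logs its entire CRI plugin configuration as a single flattened struct dump (the "Start cri plugin with config {...}" entry above). A quick way to pull individual settings back out of that one-line dump is a small regex pass; the sketch below is illustrative only and works on an abridged copy of the dump, with "..." standing for fields elided here:

    import re

    # Abridged copy of the one-line CRI config dump logged by containerd above;
    # "..." marks fields elided in this sketch, the values shown are as logged.
    cri_dump = ("Start cri plugin with config {PluginConfig:{ContainerdConfig:"
                "{Snapshotter:overlayfs DefaultRuntimeName:runc ... "
                "SandboxImage:registry.k8s.io/pause:3.6 ... SystemdCgroup:false ...}}")

    for key in ("Snapshotter", "DefaultRuntimeName", "SandboxImage"):
        match = re.search(rf"{key}:([^ \]}}]+)", cri_dump)
        print(key, "=", match.group(1) if match else "<not found>")
    # Snapshotter = overlayfs
    # DefaultRuntimeName = runc
    # SandboxImage = registry.k8s.io/pause:3.6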
Aug 13 01:22:19.854917 systemd[1]: Starting user@500.service... Aug 13 01:22:19.857016 (systemd)[1495]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Aug 13 01:22:19.905708 systemd[1495]: Queued start job for default target default.target. Aug 13 01:22:19.905860 systemd[1495]: Reached target paths.target. Aug 13 01:22:19.905872 systemd[1495]: Reached target sockets.target. Aug 13 01:22:19.905879 systemd[1495]: Reached target timers.target. Aug 13 01:22:19.905895 systemd[1495]: Reached target basic.target. Aug 13 01:22:19.905928 systemd[1495]: Reached target default.target. Aug 13 01:22:19.905946 systemd[1495]: Startup finished in 45ms. Aug 13 01:22:19.905968 systemd[1]: Started user@500.service. Aug 13 01:22:19.906600 systemd[1]: Started session-1.scope. Aug 13 01:22:19.906974 systemd[1]: Started session-2.scope. Aug 13 01:22:20.352922 kubelet[1490]: E0813 01:22:20.352891 1490 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Aug 13 01:22:20.354270 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Aug 13 01:22:20.354380 systemd[1]: kubelet.service: Failed with result 'exit-code'. Aug 13 01:22:30.567741 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Aug 13 01:22:30.567895 systemd[1]: Stopped kubelet.service. Aug 13 01:22:30.569218 systemd[1]: Starting kubelet.service... Aug 13 01:22:30.790102 systemd[1]: Started kubelet.service. Aug 13 01:22:30.833892 kubelet[1531]: E0813 01:22:30.833829 1531 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Aug 13 01:22:30.836372 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Aug 13 01:22:30.836482 systemd[1]: kubelet.service: Failed with result 'exit-code'. Aug 13 01:22:41.067675 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Aug 13 01:22:41.067844 systemd[1]: Stopped kubelet.service. Aug 13 01:22:41.069126 systemd[1]: Starting kubelet.service... Aug 13 01:22:41.340498 systemd[1]: Started kubelet.service. Aug 13 01:22:41.404500 kubelet[1546]: E0813 01:22:41.404470 1546 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Aug 13 01:22:41.405506 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Aug 13 01:22:41.405590 systemd[1]: kubelet.service: Failed with result 'exit-code'. Aug 13 01:22:48.581337 systemd[1]: Created slice system-sshd.slice. Aug 13 01:22:48.582252 systemd[1]: Started sshd@0-139.178.70.100:22-139.178.68.195:47170.service. 
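Editor's note: the kubelet failure above, and the scheduled restarts at 01:22:30 and 01:22:41 that fail the same way, all trace back to one missing file, /var/lib/kubelet/config.yaml. On a node like this the file is normally written later during bootstrap (for example by kubeadm), so until then the unit keeps exiting with status 1 and systemd keeps restarting it. A minimal, purely illustrative sketch of the check behind the error message:

    from pathlib import Path

    # Path named in the kubelet error above; absent until the node is bootstrapped.
    config = Path("/var/lib/kubelet/config.yaml")
    if not config.is_file():
        print(f'failed to load Kubelet config file "{config}": no such file or directory')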
Aug 13 01:22:48.626581 sshd[1553]: Accepted publickey for core from 139.178.68.195 port 47170 ssh2: RSA SHA256:u64cNhEl5z4ZpRddmMSh52OngMoVu/4Kwwey+GxmIa8 Aug 13 01:22:48.627436 sshd[1553]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 01:22:48.631010 systemd[1]: Started session-3.scope. Aug 13 01:22:48.631847 systemd-logind[1346]: New session 3 of user core. Aug 13 01:22:48.679715 systemd[1]: Started sshd@1-139.178.70.100:22-139.178.68.195:47178.service. Aug 13 01:22:48.712835 sshd[1558]: Accepted publickey for core from 139.178.68.195 port 47178 ssh2: RSA SHA256:u64cNhEl5z4ZpRddmMSh52OngMoVu/4Kwwey+GxmIa8 Aug 13 01:22:48.713515 sshd[1558]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 01:22:48.715967 systemd-logind[1346]: New session 4 of user core. Aug 13 01:22:48.716177 systemd[1]: Started session-4.scope. Aug 13 01:22:48.768347 sshd[1558]: pam_unix(sshd:session): session closed for user core Aug 13 01:22:48.768566 systemd[1]: Started sshd@2-139.178.70.100:22-139.178.68.195:47182.service. Aug 13 01:22:48.771394 systemd[1]: sshd@1-139.178.70.100:22-139.178.68.195:47178.service: Deactivated successfully. Aug 13 01:22:48.772356 systemd[1]: session-4.scope: Deactivated successfully. Aug 13 01:22:48.772724 systemd-logind[1346]: Session 4 logged out. Waiting for processes to exit. Aug 13 01:22:48.773614 systemd-logind[1346]: Removed session 4. Aug 13 01:22:48.801431 sshd[1563]: Accepted publickey for core from 139.178.68.195 port 47182 ssh2: RSA SHA256:u64cNhEl5z4ZpRddmMSh52OngMoVu/4Kwwey+GxmIa8 Aug 13 01:22:48.802453 sshd[1563]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 01:22:48.805748 systemd[1]: Started session-5.scope. Aug 13 01:22:48.805968 systemd-logind[1346]: New session 5 of user core. Aug 13 01:22:48.854275 sshd[1563]: pam_unix(sshd:session): session closed for user core Aug 13 01:22:48.856110 systemd[1]: Started sshd@3-139.178.70.100:22-139.178.68.195:47196.service. Aug 13 01:22:48.859812 systemd[1]: sshd@2-139.178.70.100:22-139.178.68.195:47182.service: Deactivated successfully. Aug 13 01:22:48.861386 systemd[1]: session-5.scope: Deactivated successfully. Aug 13 01:22:48.861772 systemd-logind[1346]: Session 5 logged out. Waiting for processes to exit. Aug 13 01:22:48.862767 systemd-logind[1346]: Removed session 5. Aug 13 01:22:48.890331 sshd[1570]: Accepted publickey for core from 139.178.68.195 port 47196 ssh2: RSA SHA256:u64cNhEl5z4ZpRddmMSh52OngMoVu/4Kwwey+GxmIa8 Aug 13 01:22:48.891351 sshd[1570]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 01:22:48.894586 systemd[1]: Started session-6.scope. Aug 13 01:22:48.894924 systemd-logind[1346]: New session 6 of user core. Aug 13 01:22:48.946477 sshd[1570]: pam_unix(sshd:session): session closed for user core Aug 13 01:22:48.948286 systemd[1]: Started sshd@4-139.178.70.100:22-139.178.68.195:47212.service. Aug 13 01:22:48.950031 systemd[1]: sshd@3-139.178.70.100:22-139.178.68.195:47196.service: Deactivated successfully. Aug 13 01:22:48.950973 systemd[1]: session-6.scope: Deactivated successfully. Aug 13 01:22:48.951336 systemd-logind[1346]: Session 6 logged out. Waiting for processes to exit. Aug 13 01:22:48.952258 systemd-logind[1346]: Removed session 6. 
Aug 13 01:22:48.981180 sshd[1577]: Accepted publickey for core from 139.178.68.195 port 47212 ssh2: RSA SHA256:u64cNhEl5z4ZpRddmMSh52OngMoVu/4Kwwey+GxmIa8 Aug 13 01:22:48.982208 sshd[1577]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 01:22:48.985390 systemd[1]: Started session-7.scope. Aug 13 01:22:48.985727 systemd-logind[1346]: New session 7 of user core. Aug 13 01:22:49.046876 sudo[1583]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Aug 13 01:22:49.047051 sudo[1583]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Aug 13 01:22:49.053761 dbus-daemon[1336]: Ѝ})\xaeU: received setenforce notice (enforcing=-992737760) Aug 13 01:22:49.054144 sudo[1583]: pam_unix(sudo:session): session closed for user root Aug 13 01:22:49.055837 sshd[1577]: pam_unix(sshd:session): session closed for user core Aug 13 01:22:49.057652 systemd[1]: Started sshd@5-139.178.70.100:22-139.178.68.195:47218.service. Aug 13 01:22:49.059615 systemd-logind[1346]: Session 7 logged out. Waiting for processes to exit. Aug 13 01:22:49.059704 systemd[1]: sshd@4-139.178.70.100:22-139.178.68.195:47212.service: Deactivated successfully. Aug 13 01:22:49.060219 systemd[1]: session-7.scope: Deactivated successfully. Aug 13 01:22:49.060589 systemd-logind[1346]: Removed session 7. Aug 13 01:22:49.092482 sshd[1585]: Accepted publickey for core from 139.178.68.195 port 47218 ssh2: RSA SHA256:u64cNhEl5z4ZpRddmMSh52OngMoVu/4Kwwey+GxmIa8 Aug 13 01:22:49.093510 sshd[1585]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 01:22:49.096556 systemd-logind[1346]: New session 8 of user core. Aug 13 01:22:49.096876 systemd[1]: Started session-8.scope. Aug 13 01:22:49.147160 sudo[1592]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Aug 13 01:22:49.147323 sudo[1592]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Aug 13 01:22:49.149669 sudo[1592]: pam_unix(sudo:session): session closed for user root Aug 13 01:22:49.153048 sudo[1591]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Aug 13 01:22:49.153404 sudo[1591]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Aug 13 01:22:49.160187 systemd[1]: Stopping audit-rules.service... Aug 13 01:22:49.161718 kernel: kauditd_printk_skb: 201 callbacks suppressed Aug 13 01:22:49.161766 kernel: audit: type=1305 audit(1755048169.160:161): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Aug 13 01:22:49.160000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Aug 13 01:22:49.161931 auditctl[1595]: No rules Aug 13 01:22:49.162226 systemd[1]: audit-rules.service: Deactivated successfully. Aug 13 01:22:49.162371 systemd[1]: Stopped audit-rules.service. Aug 13 01:22:49.163604 systemd[1]: Starting audit-rules.service... 
Aug 13 01:22:49.160000 audit[1595]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffd1263c9a0 a2=420 a3=0 items=0 ppid=1 pid=1595 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:22:49.169273 kernel: audit: type=1300 audit(1755048169.160:161): arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffd1263c9a0 a2=420 a3=0 items=0 ppid=1 pid=1595 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:22:49.160000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D44 Aug 13 01:22:49.170776 kernel: audit: type=1327 audit(1755048169.160:161): proctitle=2F7362696E2F617564697463746C002D44 Aug 13 01:22:49.161000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:22:49.173908 kernel: audit: type=1131 audit(1755048169.161:162): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:22:49.180114 augenrules[1613]: No rules Aug 13 01:22:49.180504 systemd[1]: Finished audit-rules.service. Aug 13 01:22:49.179000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:22:49.180887 sudo[1591]: pam_unix(sudo:session): session closed for user root Aug 13 01:22:49.183462 systemd[1]: Started sshd@6-139.178.70.100:22-139.178.68.195:47234.service. Aug 13 01:22:49.184332 kernel: audit: type=1130 audit(1755048169.179:163): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:22:49.185798 sshd[1585]: pam_unix(sshd:session): session closed for user core Aug 13 01:22:49.180000 audit[1591]: USER_END pid=1591 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Aug 13 01:22:49.188698 kernel: audit: type=1106 audit(1755048169.180:164): pid=1591 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Aug 13 01:22:49.188728 kernel: audit: type=1104 audit(1755048169.180:165): pid=1591 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Aug 13 01:22:49.180000 audit[1591]: CRED_DISP pid=1591 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Aug 13 01:22:49.180000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-139.178.70.100:22-139.178.68.195:47234 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:22:49.194074 kernel: audit: type=1130 audit(1755048169.180:166): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-139.178.70.100:22-139.178.68.195:47234 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:22:49.193000 audit[1585]: USER_END pid=1585 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 01:22:49.194822 systemd[1]: sshd@5-139.178.70.100:22-139.178.68.195:47218.service: Deactivated successfully. Aug 13 01:22:49.195197 systemd[1]: session-8.scope: Deactivated successfully. Aug 13 01:22:49.195850 systemd-logind[1346]: Session 8 logged out. Waiting for processes to exit. Aug 13 01:22:49.196337 systemd-logind[1346]: Removed session 8. Aug 13 01:22:49.198090 kernel: audit: type=1106 audit(1755048169.193:167): pid=1585 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 01:22:49.193000 audit[1585]: CRED_DISP pid=1585 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 01:22:49.200935 kernel: audit: type=1104 audit(1755048169.193:168): pid=1585 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 01:22:49.193000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-139.178.70.100:22-139.178.68.195:47218 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Aug 13 01:22:49.218849 sshd[1618]: Accepted publickey for core from 139.178.68.195 port 47234 ssh2: RSA SHA256:u64cNhEl5z4ZpRddmMSh52OngMoVu/4Kwwey+GxmIa8 Aug 13 01:22:49.218000 audit[1618]: USER_ACCT pid=1618 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 01:22:49.218000 audit[1618]: CRED_ACQ pid=1618 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 01:22:49.219000 audit[1618]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd7146e2b0 a2=3 a3=0 items=0 ppid=1 pid=1618 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=9 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:22:49.219000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Aug 13 01:22:49.220053 sshd[1618]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 01:22:49.222943 systemd[1]: Started session-9.scope. Aug 13 01:22:49.223662 systemd-logind[1346]: New session 9 of user core. Aug 13 01:22:49.226000 audit[1618]: USER_START pid=1618 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 01:22:49.226000 audit[1623]: CRED_ACQ pid=1623 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 01:22:49.272000 audit[1624]: USER_ACCT pid=1624 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Aug 13 01:22:49.273631 sudo[1624]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Aug 13 01:22:49.273826 sudo[1624]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Aug 13 01:22:49.273000 audit[1624]: CRED_REFR pid=1624 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Aug 13 01:22:49.274000 audit[1624]: USER_START pid=1624 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Aug 13 01:22:49.290588 systemd[1]: Starting docker.service... 
Aug 13 01:22:49.317980 env[1634]: time="2025-08-13T01:22:49.317956385Z" level=info msg="Starting up" Aug 13 01:22:49.319170 env[1634]: time="2025-08-13T01:22:49.319155218Z" level=info msg="parsed scheme: \"unix\"" module=grpc Aug 13 01:22:49.319170 env[1634]: time="2025-08-13T01:22:49.319167623Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Aug 13 01:22:49.319217 env[1634]: time="2025-08-13T01:22:49.319178932Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Aug 13 01:22:49.319217 env[1634]: time="2025-08-13T01:22:49.319184706Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Aug 13 01:22:49.320028 env[1634]: time="2025-08-13T01:22:49.320015785Z" level=info msg="parsed scheme: \"unix\"" module=grpc Aug 13 01:22:49.320028 env[1634]: time="2025-08-13T01:22:49.320026190Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Aug 13 01:22:49.320076 env[1634]: time="2025-08-13T01:22:49.320033500Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Aug 13 01:22:49.320076 env[1634]: time="2025-08-13T01:22:49.320037814Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Aug 13 01:22:49.333600 env[1634]: time="2025-08-13T01:22:49.333582221Z" level=warning msg="Your kernel does not support cgroup blkio weight" Aug 13 01:22:49.333600 env[1634]: time="2025-08-13T01:22:49.333592876Z" level=warning msg="Your kernel does not support cgroup blkio weight_device" Aug 13 01:22:49.333694 env[1634]: time="2025-08-13T01:22:49.333676545Z" level=info msg="Loading containers: start." 
Aug 13 01:22:49.375000 audit[1664]: NETFILTER_CFG table=nat:2 family=2 entries=2 op=nft_register_chain pid=1664 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 01:22:49.375000 audit[1664]: SYSCALL arch=c000003e syscall=46 success=yes exit=116 a0=3 a1=7ffce2157040 a2=0 a3=7ffce215702c items=0 ppid=1634 pid=1664 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:22:49.375000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Aug 13 01:22:49.377000 audit[1666]: NETFILTER_CFG table=filter:3 family=2 entries=2 op=nft_register_chain pid=1666 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 01:22:49.377000 audit[1666]: SYSCALL arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7ffdcbd6cdd0 a2=0 a3=7ffdcbd6cdbc items=0 ppid=1634 pid=1666 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:22:49.377000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Aug 13 01:22:49.378000 audit[1668]: NETFILTER_CFG table=filter:4 family=2 entries=1 op=nft_register_chain pid=1668 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 01:22:49.378000 audit[1668]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffefbd25800 a2=0 a3=7ffefbd257ec items=0 ppid=1634 pid=1668 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:22:49.378000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Aug 13 01:22:49.379000 audit[1670]: NETFILTER_CFG table=filter:5 family=2 entries=1 op=nft_register_chain pid=1670 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 01:22:49.379000 audit[1670]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7ffd2bc5b2b0 a2=0 a3=7ffd2bc5b29c items=0 ppid=1634 pid=1670 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:22:49.379000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Aug 13 01:22:49.380000 audit[1672]: NETFILTER_CFG table=filter:6 family=2 entries=1 op=nft_register_rule pid=1672 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 01:22:49.380000 audit[1672]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffe0a98acb0 a2=0 a3=7ffe0a98ac9c items=0 ppid=1634 pid=1672 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:22:49.380000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D49534F4C4154494F4E2D53544147452D31002D6A0052455455524E Aug 13 01:22:49.393000 audit[1677]: NETFILTER_CFG table=filter:7 family=2 entries=1 op=nft_register_rule pid=1677 subj=system_u:system_r:kernel_t:s0 
comm="iptables" Aug 13 01:22:49.393000 audit[1677]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffc5784d470 a2=0 a3=7ffc5784d45c items=0 ppid=1634 pid=1677 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:22:49.393000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D49534F4C4154494F4E2D53544147452D32002D6A0052455455524E Aug 13 01:22:49.396000 audit[1680]: NETFILTER_CFG table=filter:8 family=2 entries=1 op=nft_register_chain pid=1680 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 01:22:49.396000 audit[1680]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7fffd5359dc0 a2=0 a3=7fffd5359dac items=0 ppid=1634 pid=1680 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:22:49.396000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Aug 13 01:22:49.398000 audit[1682]: NETFILTER_CFG table=filter:9 family=2 entries=1 op=nft_register_rule pid=1682 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 01:22:49.398000 audit[1682]: SYSCALL arch=c000003e syscall=46 success=yes exit=212 a0=3 a1=7ffe0f865660 a2=0 a3=7ffe0f86564c items=0 ppid=1634 pid=1682 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:22:49.398000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Aug 13 01:22:49.400000 audit[1684]: NETFILTER_CFG table=filter:10 family=2 entries=2 op=nft_register_chain pid=1684 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 01:22:49.400000 audit[1684]: SYSCALL arch=c000003e syscall=46 success=yes exit=308 a0=3 a1=7ffd1a733360 a2=0 a3=7ffd1a73334c items=0 ppid=1634 pid=1684 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:22:49.400000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Aug 13 01:22:49.403000 audit[1688]: NETFILTER_CFG table=filter:11 family=2 entries=1 op=nft_unregister_rule pid=1688 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 01:22:49.403000 audit[1688]: SYSCALL arch=c000003e syscall=46 success=yes exit=216 a0=3 a1=7ffec65591e0 a2=0 a3=7ffec65591cc items=0 ppid=1634 pid=1688 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:22:49.403000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4400464F5257415244002D6A00444F434B45522D55534552 Aug 13 01:22:49.407000 audit[1689]: NETFILTER_CFG table=filter:12 family=2 entries=1 op=nft_register_rule pid=1689 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 01:22:49.407000 audit[1689]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7ffe95bcd4c0 a2=0 a3=7ffe95bcd4ac items=0 ppid=1634 
pid=1689 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:22:49.407000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Aug 13 01:22:49.416703 kernel: Initializing XFRM netlink socket Aug 13 01:22:49.439757 env[1634]: time="2025-08-13T01:22:49.439738651Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address" Aug 13 01:22:49.452000 audit[1697]: NETFILTER_CFG table=nat:13 family=2 entries=2 op=nft_register_chain pid=1697 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 01:22:49.452000 audit[1697]: SYSCALL arch=c000003e syscall=46 success=yes exit=492 a0=3 a1=7fff5d25e7b0 a2=0 a3=7fff5d25e79c items=0 ppid=1634 pid=1697 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:22:49.452000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4900504F5354524F5554494E47002D73003137322E31372E302E302F31360000002D6F00646F636B657230002D6A004D415351554552414445 Aug 13 01:22:49.459000 audit[1700]: NETFILTER_CFG table=nat:14 family=2 entries=1 op=nft_register_rule pid=1700 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 01:22:49.459000 audit[1700]: SYSCALL arch=c000003e syscall=46 success=yes exit=288 a0=3 a1=7fff359fcc40 a2=0 a3=7fff359fcc2c items=0 ppid=1634 pid=1700 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:22:49.459000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4900444F434B4552002D6900646F636B657230002D6A0052455455524E Aug 13 01:22:49.461000 audit[1703]: NETFILTER_CFG table=filter:15 family=2 entries=1 op=nft_register_rule pid=1703 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 01:22:49.461000 audit[1703]: SYSCALL arch=c000003e syscall=46 success=yes exit=376 a0=3 a1=7ffe5ada6930 a2=0 a3=7ffe5ada691c items=0 ppid=1634 pid=1703 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:22:49.461000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6900646F636B657230002D6F00646F636B657230002D6A00414343455054 Aug 13 01:22:49.462000 audit[1705]: NETFILTER_CFG table=filter:16 family=2 entries=1 op=nft_register_rule pid=1705 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 01:22:49.462000 audit[1705]: SYSCALL arch=c000003e syscall=46 success=yes exit=376 a0=3 a1=7ffd2f2670f0 a2=0 a3=7ffd2f2670dc items=0 ppid=1634 pid=1705 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:22:49.462000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6900646F636B6572300000002D6F00646F636B657230002D6A00414343455054 Aug 13 01:22:49.463000 audit[1707]: NETFILTER_CFG 
table=nat:17 family=2 entries=2 op=nft_register_chain pid=1707 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 01:22:49.463000 audit[1707]: SYSCALL arch=c000003e syscall=46 success=yes exit=356 a0=3 a1=7ffe47016ef0 a2=0 a3=7ffe47016edc items=0 ppid=1634 pid=1707 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:22:49.463000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Aug 13 01:22:49.465000 audit[1709]: NETFILTER_CFG table=nat:18 family=2 entries=2 op=nft_register_chain pid=1709 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 01:22:49.465000 audit[1709]: SYSCALL arch=c000003e syscall=46 success=yes exit=444 a0=3 a1=7ffecd3ad3d0 a2=0 a3=7ffecd3ad3bc items=0 ppid=1634 pid=1709 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:22:49.465000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003132372E302E302E302F38 Aug 13 01:22:49.466000 audit[1711]: NETFILTER_CFG table=filter:19 family=2 entries=1 op=nft_register_rule pid=1711 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 01:22:49.466000 audit[1711]: SYSCALL arch=c000003e syscall=46 success=yes exit=304 a0=3 a1=7ffe897903c0 a2=0 a3=7ffe897903ac items=0 ppid=1634 pid=1711 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:22:49.466000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6F00646F636B657230002D6A00444F434B4552 Aug 13 01:22:49.471000 audit[1714]: NETFILTER_CFG table=filter:20 family=2 entries=1 op=nft_register_rule pid=1714 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 01:22:49.471000 audit[1714]: SYSCALL arch=c000003e syscall=46 success=yes exit=508 a0=3 a1=7ffd0c35b150 a2=0 a3=7ffd0c35b13c items=0 ppid=1634 pid=1714 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:22:49.471000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6F00646F636B657230002D6D00636F6E6E747261636B002D2D637473746174650052454C415445442C45535441424C4953484544002D6A00414343455054 Aug 13 01:22:49.472000 audit[1716]: NETFILTER_CFG table=filter:21 family=2 entries=1 op=nft_register_rule pid=1716 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 01:22:49.472000 audit[1716]: SYSCALL arch=c000003e syscall=46 success=yes exit=240 a0=3 a1=7ffdf32855b0 a2=0 a3=7ffdf328559c items=0 ppid=1634 pid=1716 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:22:49.472000 audit: PROCTITLE 
proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Aug 13 01:22:49.474000 audit[1718]: NETFILTER_CFG table=filter:22 family=2 entries=1 op=nft_register_rule pid=1718 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 01:22:49.474000 audit[1718]: SYSCALL arch=c000003e syscall=46 success=yes exit=428 a0=3 a1=7fffa9b32d10 a2=0 a3=7fffa9b32cfc items=0 ppid=1634 pid=1718 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:22:49.474000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D31002D6900646F636B6572300000002D6F00646F636B657230002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D32 Aug 13 01:22:49.475000 audit[1720]: NETFILTER_CFG table=filter:23 family=2 entries=1 op=nft_register_rule pid=1720 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 01:22:49.475000 audit[1720]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffd733a3a50 a2=0 a3=7ffd733a3a3c items=0 ppid=1634 pid=1720 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:22:49.475000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D32002D6F00646F636B657230002D6A0044524F50 Aug 13 01:22:49.476356 systemd-networkd[1112]: docker0: Link UP Aug 13 01:22:49.479000 audit[1724]: NETFILTER_CFG table=filter:24 family=2 entries=1 op=nft_unregister_rule pid=1724 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 01:22:49.479000 audit[1724]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffeb61a5dc0 a2=0 a3=7ffeb61a5dac items=0 ppid=1634 pid=1724 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:22:49.479000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4400464F5257415244002D6A00444F434B45522D55534552 Aug 13 01:22:49.484000 audit[1725]: NETFILTER_CFG table=filter:25 family=2 entries=1 op=nft_register_rule pid=1725 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 01:22:49.484000 audit[1725]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7ffe3b250320 a2=0 a3=7ffe3b25030c items=0 ppid=1634 pid=1725 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:22:49.484000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Aug 13 01:22:49.485547 env[1634]: time="2025-08-13T01:22:49.485516524Z" level=info msg="Loading containers: done." 
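The NETFILTER_CFG / SYSCALL / PROCTITLE triples above record each iptables call the Docker daemon makes while it builds its DOCKER, DOCKER-USER and DOCKER-ISOLATION-STAGE chains. The PROCTITLE field is just the invoked command line, hex-encoded with NUL-separated arguments, so it can be turned back into readable text. A minimal Python sketch, using a value copied from one of the records above:

# Decode an audit PROCTITLE field: hex-encoded argv, arguments separated by NUL bytes.
# The sample value is copied from one of the Docker iptables records above.
proctitle = ("2F7573722F7362696E2F69707461626C6573002D2D77616974002D41"
             "00444F434B45522D49534F4C4154494F4E2D53544147452D32002D6A0052455455524E")
argv = bytes.fromhex(proctitle).split(b"\x00")
print(" ".join(arg.decode("utf-8", "replace") for arg in argv))
# Prints: /usr/sbin/iptables --wait -A DOCKER-ISOLATION-STAGE-2 -j RETURN

(ausearch -i performs the same decoding when these records are read back through auditd's own tooling.)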
Aug 13 01:22:49.495000 env[1634]: time="2025-08-13T01:22:49.494983738Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Aug 13 01:22:49.495168 env[1634]: time="2025-08-13T01:22:49.495156982Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23 Aug 13 01:22:49.495253 env[1634]: time="2025-08-13T01:22:49.495245402Z" level=info msg="Daemon has completed initialization" Aug 13 01:22:49.500000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=docker comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:22:49.501043 systemd[1]: Started docker.service. Aug 13 01:22:49.505227 env[1634]: time="2025-08-13T01:22:49.505207262Z" level=info msg="API listen on /run/docker.sock" Aug 13 01:22:50.241683 env[1382]: time="2025-08-13T01:22:50.241653767Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.11\"" Aug 13 01:22:50.915292 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1773778050.mount: Deactivated successfully. Aug 13 01:22:51.567000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:22:51.567000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:22:51.567568 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Aug 13 01:22:51.567699 systemd[1]: Stopped kubelet.service. Aug 13 01:22:51.568844 systemd[1]: Starting kubelet.service... Aug 13 01:22:51.626953 systemd[1]: Started kubelet.service. Aug 13 01:22:51.626000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:22:51.757013 kubelet[1766]: E0813 01:22:51.756985 1766 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Aug 13 01:22:51.757975 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Aug 13 01:22:51.758072 systemd[1]: kubelet.service: Failed with result 'exit-code'. Aug 13 01:22:51.757000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=failed' Aug 13 01:22:51.982395 env[1382]: time="2025-08-13T01:22:51.982099364Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.31.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 01:22:51.983498 env[1382]: time="2025-08-13T01:22:51.983480221Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ea7fa3cfabed1b85e7de8e0a02356b6dcb7708442d6e4600d68abaebe1e9b1fc,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 01:22:51.984436 env[1382]: time="2025-08-13T01:22:51.984424833Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.31.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 01:22:51.984884 env[1382]: time="2025-08-13T01:22:51.984870678Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.11\" returns image reference \"sha256:ea7fa3cfabed1b85e7de8e0a02356b6dcb7708442d6e4600d68abaebe1e9b1fc\"" Aug 13 01:22:51.985212 env[1382]: time="2025-08-13T01:22:51.985199676Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.11\"" Aug 13 01:22:51.985528 env[1382]: time="2025-08-13T01:22:51.985513146Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:a3d1c4440817725a1b503a7ccce94f3dce2b208ebf257b405dc2d97817df3dde,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 01:22:53.405231 env[1382]: time="2025-08-13T01:22:53.405190801Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.31.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 01:22:53.418547 env[1382]: time="2025-08-13T01:22:53.418525250Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:c057eceea4b436b01f9ce394734cfb06f13b2a3688c3983270e99743370b6051,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 01:22:53.419553 env[1382]: time="2025-08-13T01:22:53.419532224Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.31.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 01:22:53.420916 env[1382]: time="2025-08-13T01:22:53.420897500Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:0f19de157f3d251f5ddeb6e9d026895bc55cb02592874b326fa345c57e5e2848,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 01:22:53.421509 env[1382]: time="2025-08-13T01:22:53.421487120Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.11\" returns image reference \"sha256:c057eceea4b436b01f9ce394734cfb06f13b2a3688c3983270e99743370b6051\"" Aug 13 01:22:53.421857 env[1382]: time="2025-08-13T01:22:53.421840504Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.11\"" Aug 13 01:22:54.634818 env[1382]: time="2025-08-13T01:22:54.634780450Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.31.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 01:22:54.647358 env[1382]: time="2025-08-13T01:22:54.647331006Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:64e6a0b453108c87da0bb61473b35fd54078119a09edc56a4c8cb31602437c58,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 01:22:54.658338 env[1382]: 
time="2025-08-13T01:22:54.658317928Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.31.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 01:22:54.663565 env[1382]: time="2025-08-13T01:22:54.663544101Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:1a9b59b3bfa6c1f1911f6f865a795620c461d079e413061bb71981cadd67f39d,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 01:22:54.664168 env[1382]: time="2025-08-13T01:22:54.664146915Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.11\" returns image reference \"sha256:64e6a0b453108c87da0bb61473b35fd54078119a09edc56a4c8cb31602437c58\"" Aug 13 01:22:54.664578 env[1382]: time="2025-08-13T01:22:54.664559874Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.11\"" Aug 13 01:22:55.575643 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3984903861.mount: Deactivated successfully. Aug 13 01:22:55.980497 env[1382]: time="2025-08-13T01:22:55.980276309Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.31.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 01:22:55.981126 env[1382]: time="2025-08-13T01:22:55.981110329Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:0cec28fd5c3c446ec52e2886ddea38bf7f7e17755aa5d0095d50d3df5914a8fd,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 01:22:55.981731 env[1382]: time="2025-08-13T01:22:55.981717045Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.31.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 01:22:55.982429 env[1382]: time="2025-08-13T01:22:55.982414650Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:a31da847792c5e7e92e91b78da1ad21d693e4b2b48d0e9f4610c8764dc2a5d79,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 01:22:55.982760 env[1382]: time="2025-08-13T01:22:55.982744669Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.11\" returns image reference \"sha256:0cec28fd5c3c446ec52e2886ddea38bf7f7e17755aa5d0095d50d3df5914a8fd\"" Aug 13 01:22:55.983154 env[1382]: time="2025-08-13T01:22:55.983139757Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Aug 13 01:22:56.510828 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1136772197.mount: Deactivated successfully. 
Aug 13 01:22:57.442022 env[1382]: time="2025-08-13T01:22:57.441986437Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.11.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 01:22:57.459337 env[1382]: time="2025-08-13T01:22:57.459310606Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 01:22:57.467362 env[1382]: time="2025-08-13T01:22:57.467338967Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.11.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 01:22:57.474743 env[1382]: time="2025-08-13T01:22:57.474720867Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 01:22:57.475364 env[1382]: time="2025-08-13T01:22:57.475340184Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Aug 13 01:22:57.476420 env[1382]: time="2025-08-13T01:22:57.476387550Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Aug 13 01:22:58.016365 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount835385110.mount: Deactivated successfully. Aug 13 01:22:58.018653 env[1382]: time="2025-08-13T01:22:58.018634347Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 01:22:58.019542 env[1382]: time="2025-08-13T01:22:58.019527805Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 01:22:58.020423 env[1382]: time="2025-08-13T01:22:58.020408222Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 01:22:58.023796 env[1382]: time="2025-08-13T01:22:58.023783954Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 01:22:58.024550 env[1382]: time="2025-08-13T01:22:58.024526842Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Aug 13 01:22:58.024980 env[1382]: time="2025-08-13T01:22:58.024964261Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Aug 13 01:22:58.518535 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3200705690.mount: Deactivated successfully. 
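Each image pull above finishes with a containerd message of the form PullImage "<name>" returns image reference "sha256:...". A short sketch for collecting those name-to-reference pairs from a text dump of this journal; the journal.txt path is illustrative, and the pattern tolerates the \" escaping used inside the msg="..." fields:

import re

LOG_PATH = "journal.txt"  # illustrative: any text dump of the log above works

PULL_RE = re.compile(
    r'PullImage \\?"(?P<image>[^"\\]+)\\?" returns image reference '
    r'\\?"(?P<ref>sha256:[0-9a-f]+)'
)

with open(LOG_PATH) as fh:
    for line in fh:
        m = PULL_RE.search(line)
        if m:
            print(m.group("image"), "->", m.group("ref"))
# e.g. registry.k8s.io/kube-proxy:v1.31.11 -> sha256:0cec28fd5c3c...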
Aug 13 01:23:00.281478 env[1382]: time="2025-08-13T01:23:00.281446995Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.15-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 01:23:00.282228 env[1382]: time="2025-08-13T01:23:00.282213823Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 01:23:00.283190 env[1382]: time="2025-08-13T01:23:00.283177383Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.15-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 01:23:00.284172 env[1382]: time="2025-08-13T01:23:00.284158313Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 01:23:00.284780 env[1382]: time="2025-08-13T01:23:00.284764997Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\"" Aug 13 01:23:01.817567 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Aug 13 01:23:01.817680 systemd[1]: Stopped kubelet.service. Aug 13 01:23:01.817000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:23:01.818753 systemd[1]: Starting kubelet.service... Aug 13 01:23:01.821031 kernel: kauditd_printk_skb: 88 callbacks suppressed Aug 13 01:23:01.821091 kernel: audit: type=1130 audit(1755048181.817:207): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:23:01.817000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:23:01.823998 kernel: audit: type=1131 audit(1755048181.817:208): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:23:02.101848 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Aug 13 01:23:02.101891 systemd[1]: kubelet.service: Failed with result 'signal'. Aug 13 01:23:02.101000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Aug 13 01:23:02.102212 systemd[1]: Stopped kubelet.service. Aug 13 01:23:02.103913 systemd[1]: Starting kubelet.service... Aug 13 01:23:02.105704 kernel: audit: type=1130 audit(1755048182.101:209): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Aug 13 01:23:02.119366 systemd[1]: Reloading. 
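kubelet.service is still cycling at this point: the attempt at 01:22:51 exited because /var/lib/kubelet/config.yaml did not exist yet (it is normally written by a later bootstrap step), and systemd schedules another restart, bumping the counter to 4. The spacing between attempts can be read straight off the timestamps; a tiny sketch, with the year taken from the env[...] lines since the console timestamps omit it:

from datetime import datetime

# "restart counter is at 3" and "restart counter is at 4" timestamps copied from above.
t3 = datetime(2025, 8, 13, 1, 22, 51, 567568)
t4 = datetime(2025, 8, 13, 1, 23, 1, 817567)
print((t4 - t3).total_seconds())  # ~10.25 s between restart attempts

which is consistent with a unit-level restart delay of roughly ten seconds.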
Aug 13 01:23:02.178308 /usr/lib/systemd/system-generators/torcx-generator[1822]: time="2025-08-13T01:23:02Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]" Aug 13 01:23:02.178530 /usr/lib/systemd/system-generators/torcx-generator[1822]: time="2025-08-13T01:23:02Z" level=info msg="torcx already run" Aug 13 01:23:02.236955 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Aug 13 01:23:02.236966 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Aug 13 01:23:02.249531 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Aug 13 01:23:02.368848 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Aug 13 01:23:02.368945 systemd[1]: kubelet.service: Failed with result 'signal'. Aug 13 01:23:02.369244 systemd[1]: Stopped kubelet.service. Aug 13 01:23:02.368000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Aug 13 01:23:02.373096 systemd[1]: Starting kubelet.service... Aug 13 01:23:02.373703 kernel: audit: type=1130 audit(1755048182.368:210): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Aug 13 01:23:02.918000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:23:02.919173 systemd[1]: Started kubelet.service. Aug 13 01:23:02.923529 kernel: audit: type=1130 audit(1755048182.918:211): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:23:03.001395 kubelet[1897]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Aug 13 01:23:03.001614 kubelet[1897]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Aug 13 01:23:03.001659 kubelet[1897]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Aug 13 01:23:03.001756 kubelet[1897]: I0813 01:23:03.001738 1897 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Aug 13 01:23:03.371631 kubelet[1897]: I0813 01:23:03.371617 1897 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Aug 13 01:23:03.371722 kubelet[1897]: I0813 01:23:03.371714 1897 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Aug 13 01:23:03.371892 kubelet[1897]: I0813 01:23:03.371885 1897 server.go:934] "Client rotation is on, will bootstrap in background" Aug 13 01:23:03.387461 kubelet[1897]: I0813 01:23:03.387453 1897 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Aug 13 01:23:03.399865 kubelet[1897]: E0813 01:23:03.399846 1897 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://139.178.70.100:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 139.178.70.100:6443: connect: connection refused" logger="UnhandledError" Aug 13 01:23:03.400551 kubelet[1897]: E0813 01:23:03.400540 1897 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Aug 13 01:23:03.400600 kubelet[1897]: I0813 01:23:03.400592 1897 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Aug 13 01:23:03.404132 kubelet[1897]: I0813 01:23:03.404123 1897 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Aug 13 01:23:03.404887 kubelet[1897]: I0813 01:23:03.404878 1897 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Aug 13 01:23:03.405025 kubelet[1897]: I0813 01:23:03.405011 1897 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Aug 13 01:23:03.405192 kubelet[1897]: I0813 01:23:03.405077 1897 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1} Aug 13 01:23:03.405291 kubelet[1897]: I0813 01:23:03.405283 1897 topology_manager.go:138] "Creating topology manager with none policy" Aug 13 01:23:03.405334 kubelet[1897]: I0813 01:23:03.405328 1897 container_manager_linux.go:300] "Creating device plugin manager" Aug 13 01:23:03.405422 kubelet[1897]: I0813 01:23:03.405415 1897 state_mem.go:36] "Initialized new in-memory state store" Aug 13 01:23:03.410122 kubelet[1897]: W0813 01:23:03.410092 1897 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://139.178.70.100:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 139.178.70.100:6443: connect: connection refused Aug 13 01:23:03.410158 kubelet[1897]: E0813 01:23:03.410127 1897 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://139.178.70.100:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 139.178.70.100:6443: connect: connection refused" logger="UnhandledError" Aug 13 01:23:03.410378 kubelet[1897]: I0813 01:23:03.410370 1897 kubelet.go:408] "Attempting to sync node with API server" Aug 13 01:23:03.410427 kubelet[1897]: I0813 01:23:03.410420 1897 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Aug 13 01:23:03.410486 kubelet[1897]: I0813 01:23:03.410480 1897 kubelet.go:314] "Adding apiserver pod source" Aug 13 01:23:03.411975 
kubelet[1897]: I0813 01:23:03.411967 1897 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Aug 13 01:23:03.415289 kubelet[1897]: W0813 01:23:03.415270 1897 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://139.178.70.100:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 139.178.70.100:6443: connect: connection refused Aug 13 01:23:03.415349 kubelet[1897]: E0813 01:23:03.415338 1897 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://139.178.70.100:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 139.178.70.100:6443: connect: connection refused" logger="UnhandledError" Aug 13 01:23:03.417037 kubelet[1897]: I0813 01:23:03.417029 1897 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Aug 13 01:23:03.417319 kubelet[1897]: I0813 01:23:03.417313 1897 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Aug 13 01:23:03.417862 kubelet[1897]: W0813 01:23:03.417854 1897 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Aug 13 01:23:03.420178 kubelet[1897]: I0813 01:23:03.420171 1897 server.go:1274] "Started kubelet" Aug 13 01:23:03.420304 kubelet[1897]: I0813 01:23:03.420291 1897 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Aug 13 01:23:03.420886 kubelet[1897]: I0813 01:23:03.420878 1897 server.go:449] "Adding debug handlers to kubelet server" Aug 13 01:23:03.423666 kubelet[1897]: I0813 01:23:03.423653 1897 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Aug 13 01:23:03.423836 kubelet[1897]: I0813 01:23:03.423828 1897 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Aug 13 01:23:03.424000 audit[1897]: AVC avc: denied { mac_admin } for pid=1897 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:23:03.425353 kubelet[1897]: I0813 01:23:03.425340 1897 kubelet.go:1430] "Unprivileged containerized plugins might not work, could not set selinux context on plugin registration dir" path="/var/lib/kubelet/plugins_registry" err="setxattr /var/lib/kubelet/plugins_registry: invalid argument" Aug 13 01:23:03.425414 kubelet[1897]: I0813 01:23:03.425404 1897 kubelet.go:1434] "Unprivileged containerized plugins might not work, could not set selinux context on plugins dir" path="/var/lib/kubelet/plugins" err="setxattr /var/lib/kubelet/plugins: invalid argument" Aug 13 01:23:03.425486 kubelet[1897]: I0813 01:23:03.425479 1897 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Aug 13 01:23:03.427698 kernel: audit: type=1400 audit(1755048183.424:212): avc: denied { mac_admin } for pid=1897 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:23:03.427736 kernel: audit: type=1401 audit(1755048183.424:212): op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Aug 13 01:23:03.424000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Aug 13 01:23:03.424000 audit[1897]: SYSCALL 
arch=c000003e syscall=188 success=no exit=-22 a0=c00070ff20 a1=c000ac48a0 a2=c00070fef0 a3=25 items=0 ppid=1 pid=1897 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:23:03.429629 kubelet[1897]: E0813 01:23:03.423940 1897 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://139.178.70.100:6443/api/v1/namespaces/default/events\": dial tcp 139.178.70.100:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.185b2f063073fd4f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-08-13 01:23:03.420157263 +0000 UTC m=+0.495545039,LastTimestamp:2025-08-13 01:23:03.420157263 +0000 UTC m=+0.495545039,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Aug 13 01:23:03.429761 kubelet[1897]: I0813 01:23:03.429754 1897 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Aug 13 01:23:03.431411 kubelet[1897]: I0813 01:23:03.431403 1897 volume_manager.go:289] "Starting Kubelet Volume Manager" Aug 13 01:23:03.431502 kubelet[1897]: I0813 01:23:03.431494 1897 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Aug 13 01:23:03.431565 kubelet[1897]: I0813 01:23:03.431559 1897 reconciler.go:26] "Reconciler: start to sync state" Aug 13 01:23:03.431754 kubelet[1897]: W0813 01:23:03.431738 1897 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://139.178.70.100:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 139.178.70.100:6443: connect: connection refused Aug 13 01:23:03.431812 kubelet[1897]: E0813 01:23:03.431801 1897 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://139.178.70.100:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 139.178.70.100:6443: connect: connection refused" logger="UnhandledError" Aug 13 01:23:03.431939 kubelet[1897]: E0813 01:23:03.431930 1897 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Aug 13 01:23:03.432013 kubelet[1897]: E0813 01:23:03.432002 1897 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.100:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 139.178.70.100:6443: connect: connection refused" interval="200ms" Aug 13 01:23:03.434242 kernel: audit: type=1300 audit(1755048183.424:212): arch=c000003e syscall=188 success=no exit=-22 a0=c00070ff20 a1=c000ac48a0 a2=c00070fef0 a3=25 items=0 ppid=1 pid=1897 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:23:03.434275 kernel: audit: type=1327 audit(1755048183.424:212): 
proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Aug 13 01:23:03.424000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Aug 13 01:23:03.424000 audit[1897]: AVC avc: denied { mac_admin } for pid=1897 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:23:03.438912 kernel: audit: type=1400 audit(1755048183.424:213): avc: denied { mac_admin } for pid=1897 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:23:03.439617 kubelet[1897]: I0813 01:23:03.439603 1897 factory.go:221] Registration of the systemd container factory successfully Aug 13 01:23:03.439666 kubelet[1897]: I0813 01:23:03.439653 1897 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Aug 13 01:23:03.424000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Aug 13 01:23:03.424000 audit[1897]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c0006fde40 a1=c000ac48b8 a2=c00070ffb0 a3=25 items=0 ppid=1 pid=1897 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:23:03.424000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Aug 13 01:23:03.428000 audit[1909]: NETFILTER_CFG table=mangle:26 family=2 entries=2 op=nft_register_chain pid=1909 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 01:23:03.428000 audit[1909]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffec140f650 a2=0 a3=7ffec140f63c items=0 ppid=1897 pid=1909 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:23:03.428000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Aug 13 01:23:03.434000 audit[1910]: NETFILTER_CFG table=filter:27 family=2 entries=1 op=nft_register_chain pid=1910 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 01:23:03.434000 audit[1910]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe5e144420 a2=0 a3=7ffe5e14440c items=0 ppid=1897 pid=1910 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:23:03.434000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Aug 
13 01:23:03.435000 audit[1912]: NETFILTER_CFG table=filter:28 family=2 entries=2 op=nft_register_chain pid=1912 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 01:23:03.435000 audit[1912]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7fff2bc74950 a2=0 a3=7fff2bc7493c items=0 ppid=1897 pid=1912 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:23:03.435000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Aug 13 01:23:03.436000 audit[1914]: NETFILTER_CFG table=filter:29 family=2 entries=2 op=nft_register_chain pid=1914 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 01:23:03.436000 audit[1914]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffe0b868df0 a2=0 a3=7ffe0b868ddc items=0 ppid=1897 pid=1914 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:23:03.436000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Aug 13 01:23:03.440778 kubelet[1897]: I0813 01:23:03.440672 1897 factory.go:221] Registration of the containerd container factory successfully Aug 13 01:23:03.450000 audit[1918]: NETFILTER_CFG table=filter:30 family=2 entries=1 op=nft_register_rule pid=1918 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 01:23:03.450000 audit[1918]: SYSCALL arch=c000003e syscall=46 success=yes exit=924 a0=3 a1=7ffc3df887d0 a2=0 a3=7ffc3df887bc items=0 ppid=1897 pid=1918 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:23:03.450000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E7400626C6F636B20696E636F6D696E67206C6F63616C6E657420636F6E6E656374696F6E73002D2D647374003132372E302E302E302F38 Aug 13 01:23:03.451146 kubelet[1897]: I0813 01:23:03.451130 1897 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Aug 13 01:23:03.450000 audit[1919]: NETFILTER_CFG table=mangle:31 family=10 entries=2 op=nft_register_chain pid=1919 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Aug 13 01:23:03.450000 audit[1919]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffea7bbe2e0 a2=0 a3=7ffea7bbe2cc items=0 ppid=1897 pid=1919 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:23:03.450000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Aug 13 01:23:03.452291 kubelet[1897]: I0813 01:23:03.452283 1897 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Aug 13 01:23:03.452342 kubelet[1897]: I0813 01:23:03.452335 1897 status_manager.go:217] "Starting to sync pod status with apiserver" Aug 13 01:23:03.452390 kubelet[1897]: I0813 01:23:03.452383 1897 kubelet.go:2321] "Starting kubelet main sync loop" Aug 13 01:23:03.452461 kubelet[1897]: E0813 01:23:03.452444 1897 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Aug 13 01:23:03.452000 audit[1920]: NETFILTER_CFG table=mangle:32 family=2 entries=1 op=nft_register_chain pid=1920 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 01:23:03.452000 audit[1920]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffc3100e030 a2=0 a3=7ffc3100e01c items=0 ppid=1897 pid=1920 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:23:03.452000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Aug 13 01:23:03.453000 audit[1922]: NETFILTER_CFG table=nat:33 family=2 entries=1 op=nft_register_chain pid=1922 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 01:23:03.453000 audit[1922]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff2507fd40 a2=0 a3=7fff2507fd2c items=0 ppid=1897 pid=1922 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:23:03.453000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Aug 13 01:23:03.453000 audit[1923]: NETFILTER_CFG table=filter:34 family=2 entries=1 op=nft_register_chain pid=1923 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 01:23:03.453000 audit[1923]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffce78ca780 a2=0 a3=7ffce78ca76c items=0 ppid=1897 pid=1923 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:23:03.453000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Aug 13 01:23:03.454000 audit[1924]: NETFILTER_CFG table=mangle:35 family=10 entries=1 op=nft_register_chain pid=1924 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Aug 13 01:23:03.454000 audit[1924]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffc983d6d40 a2=0 a3=7ffc983d6d2c items=0 ppid=1897 pid=1924 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:23:03.454000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Aug 13 01:23:03.454000 audit[1925]: NETFILTER_CFG table=nat:36 family=10 entries=2 op=nft_register_chain pid=1925 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Aug 13 01:23:03.454000 audit[1925]: SYSCALL arch=c000003e syscall=46 success=yes exit=128 a0=3 a1=7ffe0bd2e760 a2=0 a3=7ffe0bd2e74c items=0 ppid=1897 pid=1925 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:23:03.454000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Aug 13 01:23:03.455000 audit[1926]: NETFILTER_CFG table=filter:37 family=10 entries=2 op=nft_register_chain pid=1926 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Aug 13 01:23:03.455000 audit[1926]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffd13fdd850 a2=0 a3=7ffd13fdd83c items=0 ppid=1897 pid=1926 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:23:03.455000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Aug 13 01:23:03.456608 kubelet[1897]: W0813 01:23:03.456586 1897 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://139.178.70.100:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 139.178.70.100:6443: connect: connection refused Aug 13 01:23:03.456668 kubelet[1897]: E0813 01:23:03.456656 1897 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://139.178.70.100:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 139.178.70.100:6443: connect: connection refused" logger="UnhandledError" Aug 13 01:23:03.457312 kubelet[1897]: I0813 01:23:03.457299 1897 cpu_manager.go:214] "Starting CPU manager" policy="none" Aug 13 01:23:03.457312 kubelet[1897]: I0813 01:23:03.457307 1897 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Aug 13 01:23:03.457364 kubelet[1897]: I0813 01:23:03.457316 1897 state_mem.go:36] "Initialized new in-memory state store" Aug 13 01:23:03.458412 kubelet[1897]: I0813 01:23:03.458399 1897 policy_none.go:49] "None policy: Start" Aug 13 01:23:03.458657 kubelet[1897]: I0813 01:23:03.458646 1897 memory_manager.go:170] "Starting memorymanager" policy="None" Aug 13 01:23:03.458657 kubelet[1897]: I0813 01:23:03.458658 1897 state_mem.go:35] "Initializing new in-memory state store" Aug 13 01:23:03.461235 kubelet[1897]: I0813 01:23:03.461222 1897 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Aug 13 01:23:03.460000 audit[1897]: AVC avc: denied { mac_admin } for pid=1897 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:23:03.460000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Aug 13 01:23:03.460000 audit[1897]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c000a5f2f0 a1=c000ac5c68 a2=c000a5f2c0 a3=25 items=0 ppid=1 pid=1897 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:23:03.460000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Aug 13 01:23:03.461369 kubelet[1897]: I0813 01:23:03.461262 1897 server.go:88] "Unprivileged containerized plugins might not work. Could not set selinux context on socket dir" path="/var/lib/kubelet/device-plugins/" err="setxattr /var/lib/kubelet/device-plugins/: invalid argument" Aug 13 01:23:03.461369 kubelet[1897]: I0813 01:23:03.461310 1897 eviction_manager.go:189] "Eviction manager: starting control loop" Aug 13 01:23:03.461369 kubelet[1897]: I0813 01:23:03.461315 1897 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Aug 13 01:23:03.462096 kubelet[1897]: I0813 01:23:03.462085 1897 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Aug 13 01:23:03.462605 kubelet[1897]: E0813 01:23:03.462593 1897 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Aug 13 01:23:03.522164 update_engine[1349]: I0813 01:23:03.521888 1349 update_attempter.cc:509] Updating boot flags... Aug 13 01:23:03.562915 kubelet[1897]: I0813 01:23:03.562746 1897 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Aug 13 01:23:03.563164 kubelet[1897]: E0813 01:23:03.563152 1897 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://139.178.70.100:6443/api/v1/nodes\": dial tcp 139.178.70.100:6443: connect: connection refused" node="localhost" Aug 13 01:23:03.632354 kubelet[1897]: E0813 01:23:03.632298 1897 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.100:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 139.178.70.100:6443: connect: connection refused" interval="400ms" Aug 13 01:23:03.633607 kubelet[1897]: I0813 01:23:03.633595 1897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/407c569889bb86d746b0274843003fd0-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"407c569889bb86d746b0274843003fd0\") " pod="kube-system/kube-controller-manager-localhost" Aug 13 01:23:03.633813 kubelet[1897]: I0813 01:23:03.633768 1897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ebe30c8a4dd92f0dac806d440a262264-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"ebe30c8a4dd92f0dac806d440a262264\") " pod="kube-system/kube-apiserver-localhost" Aug 13 01:23:03.633960 kubelet[1897]: I0813 01:23:03.633949 1897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/407c569889bb86d746b0274843003fd0-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"407c569889bb86d746b0274843003fd0\") " pod="kube-system/kube-controller-manager-localhost" Aug 13 01:23:03.634102 kubelet[1897]: I0813 01:23:03.634091 1897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/407c569889bb86d746b0274843003fd0-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: 
\"407c569889bb86d746b0274843003fd0\") " pod="kube-system/kube-controller-manager-localhost" Aug 13 01:23:03.634207 kubelet[1897]: I0813 01:23:03.634195 1897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/27e4a50e94f48ec00f6bd509cb48ed05-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"27e4a50e94f48ec00f6bd509cb48ed05\") " pod="kube-system/kube-scheduler-localhost" Aug 13 01:23:03.634341 kubelet[1897]: I0813 01:23:03.634330 1897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ebe30c8a4dd92f0dac806d440a262264-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"ebe30c8a4dd92f0dac806d440a262264\") " pod="kube-system/kube-apiserver-localhost" Aug 13 01:23:03.634516 kubelet[1897]: I0813 01:23:03.634474 1897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ebe30c8a4dd92f0dac806d440a262264-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"ebe30c8a4dd92f0dac806d440a262264\") " pod="kube-system/kube-apiserver-localhost" Aug 13 01:23:03.634662 kubelet[1897]: I0813 01:23:03.634614 1897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/407c569889bb86d746b0274843003fd0-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"407c569889bb86d746b0274843003fd0\") " pod="kube-system/kube-controller-manager-localhost" Aug 13 01:23:03.634871 kubelet[1897]: I0813 01:23:03.634820 1897 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/407c569889bb86d746b0274843003fd0-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"407c569889bb86d746b0274843003fd0\") " pod="kube-system/kube-controller-manager-localhost" Aug 13 01:23:03.764973 kubelet[1897]: I0813 01:23:03.764950 1897 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Aug 13 01:23:03.765186 kubelet[1897]: E0813 01:23:03.765165 1897 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://139.178.70.100:6443/api/v1/nodes\": dial tcp 139.178.70.100:6443: connect: connection refused" node="localhost" Aug 13 01:23:03.863013 env[1382]: time="2025-08-13T01:23:03.862817144Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:407c569889bb86d746b0274843003fd0,Namespace:kube-system,Attempt:0,}" Aug 13 01:23:03.866193 env[1382]: time="2025-08-13T01:23:03.866168557Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:27e4a50e94f48ec00f6bd509cb48ed05,Namespace:kube-system,Attempt:0,}" Aug 13 01:23:03.867042 env[1382]: time="2025-08-13T01:23:03.867016198Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:ebe30c8a4dd92f0dac806d440a262264,Namespace:kube-system,Attempt:0,}" Aug 13 01:23:04.033893 kubelet[1897]: E0813 01:23:04.033763 1897 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.100:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 139.178.70.100:6443: connect: connection refused" interval="800ms" Aug 13 01:23:04.167080 
kubelet[1897]: I0813 01:23:04.167061 1897 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Aug 13 01:23:04.167422 kubelet[1897]: E0813 01:23:04.167406 1897 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://139.178.70.100:6443/api/v1/nodes\": dial tcp 139.178.70.100:6443: connect: connection refused" node="localhost" Aug 13 01:23:04.223022 kubelet[1897]: W0813 01:23:04.222983 1897 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://139.178.70.100:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 139.178.70.100:6443: connect: connection refused Aug 13 01:23:04.223164 kubelet[1897]: E0813 01:23:04.223148 1897 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://139.178.70.100:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 139.178.70.100:6443: connect: connection refused" logger="UnhandledError" Aug 13 01:23:04.291164 kubelet[1897]: W0813 01:23:04.291117 1897 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://139.178.70.100:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 139.178.70.100:6443: connect: connection refused Aug 13 01:23:04.291255 kubelet[1897]: E0813 01:23:04.291241 1897 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://139.178.70.100:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 139.178.70.100:6443: connect: connection refused" logger="UnhandledError" Aug 13 01:23:04.425155 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount78804579.mount: Deactivated successfully. 
Aug 13 01:23:04.426875 env[1382]: time="2025-08-13T01:23:04.426858443Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 01:23:04.427991 env[1382]: time="2025-08-13T01:23:04.427972994Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 01:23:04.428765 env[1382]: time="2025-08-13T01:23:04.428738380Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 01:23:04.429831 env[1382]: time="2025-08-13T01:23:04.429820878Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 01:23:04.431415 env[1382]: time="2025-08-13T01:23:04.431403483Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 01:23:04.432074 env[1382]: time="2025-08-13T01:23:04.432058226Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 01:23:04.433537 env[1382]: time="2025-08-13T01:23:04.433523427Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 01:23:04.433914 env[1382]: time="2025-08-13T01:23:04.433900897Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 01:23:04.436039 env[1382]: time="2025-08-13T01:23:04.436028477Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 01:23:04.436420 env[1382]: time="2025-08-13T01:23:04.436409405Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 01:23:04.437628 env[1382]: time="2025-08-13T01:23:04.437616152Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 01:23:04.438103 env[1382]: time="2025-08-13T01:23:04.438092811Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 01:23:04.451835 env[1382]: time="2025-08-13T01:23:04.447257921Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 01:23:04.451835 env[1382]: time="2025-08-13T01:23:04.447279455Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 01:23:04.451835 env[1382]: time="2025-08-13T01:23:04.447285919Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 01:23:04.451835 env[1382]: time="2025-08-13T01:23:04.447373463Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/f7749a1f938b4e71c6e6efce2bde24ce0fb2f13d93814cbdb9e9cab4a8d6f0b6 pid=1950 runtime=io.containerd.runc.v2 Aug 13 01:23:04.451835 env[1382]: time="2025-08-13T01:23:04.448628461Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 01:23:04.451835 env[1382]: time="2025-08-13T01:23:04.448644277Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 01:23:04.451835 env[1382]: time="2025-08-13T01:23:04.448650400Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 01:23:04.452824 env[1382]: time="2025-08-13T01:23:04.452637858Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/c4032819a557d506a56c3ad7c2dffab12bb1c6d666d153be607b4d9402657034 pid=1967 runtime=io.containerd.runc.v2 Aug 13 01:23:04.455565 env[1382]: time="2025-08-13T01:23:04.455529487Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 01:23:04.455607 env[1382]: time="2025-08-13T01:23:04.455572321Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 01:23:04.455607 env[1382]: time="2025-08-13T01:23:04.455586804Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 01:23:04.455675 env[1382]: time="2025-08-13T01:23:04.455658426Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/2ff5d8c3c525e5c0c27a9cdf4516b14db871c0647ce86d2c322fbdfbf51de5b0 pid=1988 runtime=io.containerd.runc.v2 Aug 13 01:23:04.474296 kubelet[1897]: W0813 01:23:04.474241 1897 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://139.178.70.100:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 139.178.70.100:6443: connect: connection refused Aug 13 01:23:04.474296 kubelet[1897]: E0813 01:23:04.474280 1897 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://139.178.70.100:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 139.178.70.100:6443: connect: connection refused" logger="UnhandledError" Aug 13 01:23:04.511398 env[1382]: time="2025-08-13T01:23:04.511373904Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:ebe30c8a4dd92f0dac806d440a262264,Namespace:kube-system,Attempt:0,} returns sandbox id \"2ff5d8c3c525e5c0c27a9cdf4516b14db871c0647ce86d2c322fbdfbf51de5b0\"" Aug 13 01:23:04.511676 env[1382]: time="2025-08-13T01:23:04.511604870Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:407c569889bb86d746b0274843003fd0,Namespace:kube-system,Attempt:0,} returns sandbox id \"f7749a1f938b4e71c6e6efce2bde24ce0fb2f13d93814cbdb9e9cab4a8d6f0b6\"" Aug 13 01:23:04.513381 env[1382]: time="2025-08-13T01:23:04.513367840Z" level=info msg="CreateContainer within sandbox \"2ff5d8c3c525e5c0c27a9cdf4516b14db871c0647ce86d2c322fbdfbf51de5b0\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Aug 13 01:23:04.513721 env[1382]: time="2025-08-13T01:23:04.513704539Z" level=info msg="CreateContainer within sandbox \"f7749a1f938b4e71c6e6efce2bde24ce0fb2f13d93814cbdb9e9cab4a8d6f0b6\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Aug 13 01:23:04.527768 env[1382]: time="2025-08-13T01:23:04.527744543Z" level=info msg="CreateContainer within sandbox \"2ff5d8c3c525e5c0c27a9cdf4516b14db871c0647ce86d2c322fbdfbf51de5b0\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"dd5794c5fc5543e0ddd747a60415655921e1db67a8e34330dad44035b007b2c2\"" Aug 13 01:23:04.528119 env[1382]: time="2025-08-13T01:23:04.528100019Z" level=info msg="StartContainer for \"dd5794c5fc5543e0ddd747a60415655921e1db67a8e34330dad44035b007b2c2\"" Aug 13 01:23:04.531606 env[1382]: time="2025-08-13T01:23:04.531582990Z" level=info msg="CreateContainer within sandbox \"f7749a1f938b4e71c6e6efce2bde24ce0fb2f13d93814cbdb9e9cab4a8d6f0b6\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"ac52f3ee2c9d1abed3e0ce92e3cba8676facf3c426a2868ac4b8393edab3067d\"" Aug 13 01:23:04.531804 env[1382]: time="2025-08-13T01:23:04.531786177Z" level=info msg="StartContainer for \"ac52f3ee2c9d1abed3e0ce92e3cba8676facf3c426a2868ac4b8393edab3067d\"" Aug 13 01:23:04.538796 env[1382]: time="2025-08-13T01:23:04.538775750Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:27e4a50e94f48ec00f6bd509cb48ed05,Namespace:kube-system,Attempt:0,} returns sandbox id 
\"c4032819a557d506a56c3ad7c2dffab12bb1c6d666d153be607b4d9402657034\"" Aug 13 01:23:04.540563 env[1382]: time="2025-08-13T01:23:04.540538915Z" level=info msg="CreateContainer within sandbox \"c4032819a557d506a56c3ad7c2dffab12bb1c6d666d153be607b4d9402657034\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Aug 13 01:23:04.547312 env[1382]: time="2025-08-13T01:23:04.547259228Z" level=info msg="CreateContainer within sandbox \"c4032819a557d506a56c3ad7c2dffab12bb1c6d666d153be607b4d9402657034\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"d173090ba91323da618ac083a86b4b5e4cd6c1e054af33251c6a599d960968f9\"" Aug 13 01:23:04.548129 env[1382]: time="2025-08-13T01:23:04.548112128Z" level=info msg="StartContainer for \"d173090ba91323da618ac083a86b4b5e4cd6c1e054af33251c6a599d960968f9\"" Aug 13 01:23:04.607710 env[1382]: time="2025-08-13T01:23:04.605984899Z" level=info msg="StartContainer for \"dd5794c5fc5543e0ddd747a60415655921e1db67a8e34330dad44035b007b2c2\" returns successfully" Aug 13 01:23:04.612838 env[1382]: time="2025-08-13T01:23:04.612813197Z" level=info msg="StartContainer for \"d173090ba91323da618ac083a86b4b5e4cd6c1e054af33251c6a599d960968f9\" returns successfully" Aug 13 01:23:04.620670 env[1382]: time="2025-08-13T01:23:04.620631306Z" level=info msg="StartContainer for \"ac52f3ee2c9d1abed3e0ce92e3cba8676facf3c426a2868ac4b8393edab3067d\" returns successfully" Aug 13 01:23:04.822913 kubelet[1897]: W0813 01:23:04.822880 1897 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://139.178.70.100:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 139.178.70.100:6443: connect: connection refused Aug 13 01:23:04.822995 kubelet[1897]: E0813 01:23:04.822921 1897 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://139.178.70.100:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 139.178.70.100:6443: connect: connection refused" logger="UnhandledError" Aug 13 01:23:04.834392 kubelet[1897]: E0813 01:23:04.834376 1897 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.100:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 139.178.70.100:6443: connect: connection refused" interval="1.6s" Aug 13 01:23:04.969007 kubelet[1897]: I0813 01:23:04.968993 1897 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Aug 13 01:23:04.969203 kubelet[1897]: E0813 01:23:04.969190 1897 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://139.178.70.100:6443/api/v1/nodes\": dial tcp 139.178.70.100:6443: connect: connection refused" node="localhost" Aug 13 01:23:06.215576 kubelet[1897]: E0813 01:23:06.215547 1897 csi_plugin.go:305] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "localhost" not found Aug 13 01:23:06.420958 kubelet[1897]: I0813 01:23:06.420932 1897 apiserver.go:52] "Watching apiserver" Aug 13 01:23:06.432540 kubelet[1897]: I0813 01:23:06.432526 1897 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Aug 13 01:23:06.436819 kubelet[1897]: E0813 01:23:06.436804 1897 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" 
node="localhost" Aug 13 01:23:06.557006 kubelet[1897]: E0813 01:23:06.556930 1897 csi_plugin.go:305] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "localhost" not found Aug 13 01:23:06.570838 kubelet[1897]: I0813 01:23:06.570821 1897 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Aug 13 01:23:06.577479 kubelet[1897]: I0813 01:23:06.577458 1897 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Aug 13 01:23:06.577479 kubelet[1897]: E0813 01:23:06.577477 1897 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Aug 13 01:23:06.681992 kubelet[1897]: E0813 01:23:06.681973 1897 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Aug 13 01:23:07.484365 systemd[1]: Reloading. Aug 13 01:23:07.538906 /usr/lib/systemd/system-generators/torcx-generator[2208]: time="2025-08-13T01:23:07Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.8 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.8 /var/lib/torcx/store]" Aug 13 01:23:07.539144 /usr/lib/systemd/system-generators/torcx-generator[2208]: time="2025-08-13T01:23:07Z" level=info msg="torcx already run" Aug 13 01:23:07.599923 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Aug 13 01:23:07.599938 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Aug 13 01:23:07.612149 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Aug 13 01:23:07.668203 kubelet[1897]: I0813 01:23:07.668187 1897 dynamic_cafile_content.go:174] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Aug 13 01:23:07.668397 systemd[1]: Stopping kubelet.service... Aug 13 01:23:07.689093 systemd[1]: kubelet.service: Deactivated successfully. Aug 13 01:23:07.689360 systemd[1]: Stopped kubelet.service. Aug 13 01:23:07.688000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:23:07.690371 kernel: kauditd_printk_skb: 43 callbacks suppressed Aug 13 01:23:07.690421 kernel: audit: type=1131 audit(1755048187.688:227): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:23:07.691182 systemd[1]: Starting kubelet.service... Aug 13 01:23:08.697000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:23:08.697550 systemd[1]: Started kubelet.service. 
Aug 13 01:23:08.700707 kernel: audit: type=1130 audit(1755048188.697:228): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:23:08.794766 kubelet[2284]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Aug 13 01:23:08.795110 kubelet[2284]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Aug 13 01:23:08.795110 kubelet[2284]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Aug 13 01:23:08.795173 kubelet[2284]: I0813 01:23:08.795149 2284 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Aug 13 01:23:08.800021 kubelet[2284]: I0813 01:23:08.800007 2284 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Aug 13 01:23:08.800021 kubelet[2284]: I0813 01:23:08.800019 2284 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Aug 13 01:23:08.800143 kubelet[2284]: I0813 01:23:08.800132 2284 server.go:934] "Client rotation is on, will bootstrap in background" Aug 13 01:23:08.801066 kubelet[2284]: I0813 01:23:08.800804 2284 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Aug 13 01:23:08.830589 kubelet[2284]: I0813 01:23:08.830568 2284 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Aug 13 01:23:08.835825 kubelet[2284]: E0813 01:23:08.835812 2284 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Aug 13 01:23:08.835881 kubelet[2284]: I0813 01:23:08.835873 2284 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Aug 13 01:23:08.848399 kubelet[2284]: I0813 01:23:08.848264 2284 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Aug 13 01:23:08.855534 kubelet[2284]: I0813 01:23:08.855519 2284 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Aug 13 01:23:08.855618 kubelet[2284]: I0813 01:23:08.855600 2284 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Aug 13 01:23:08.855945 kubelet[2284]: I0813 01:23:08.855618 2284 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1} Aug 13 01:23:08.855945 kubelet[2284]: I0813 01:23:08.855807 2284 topology_manager.go:138] "Creating topology manager with none policy" Aug 13 01:23:08.855945 kubelet[2284]: I0813 01:23:08.855818 2284 container_manager_linux.go:300] "Creating device plugin manager" Aug 13 01:23:08.855945 kubelet[2284]: I0813 01:23:08.855847 2284 state_mem.go:36] "Initialized new in-memory state store" Aug 13 01:23:08.855945 kubelet[2284]: I0813 01:23:08.855902 2284 kubelet.go:408] "Attempting to sync node with API server" Aug 13 01:23:08.856114 kubelet[2284]: I0813 01:23:08.855911 2284 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Aug 13 01:23:08.856114 kubelet[2284]: I0813 01:23:08.855936 2284 kubelet.go:314] "Adding apiserver pod source" Aug 13 01:23:08.856114 kubelet[2284]: I0813 01:23:08.855951 2284 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Aug 13 01:23:08.871679 kubelet[2284]: I0813 01:23:08.871671 2284 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Aug 13 01:23:08.871961 kubelet[2284]: I0813 01:23:08.871953 2284 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Aug 13 01:23:08.872232 kubelet[2284]: I0813 01:23:08.872225 2284 server.go:1274] "Started kubelet" Aug 13 01:23:08.908065 kubelet[2284]: I0813 01:23:08.908044 2284 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Aug 13 01:23:08.920930 kubelet[2284]: I0813 
01:23:08.920913 2284 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Aug 13 01:23:08.921194 kubelet[2284]: I0813 01:23:08.921186 2284 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Aug 13 01:23:08.940113 kubelet[2284]: E0813 01:23:08.940103 2284 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Aug 13 01:23:08.941000 audit[2284]: AVC avc: denied { mac_admin } for pid=2284 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:23:08.942140 kubelet[2284]: I0813 01:23:08.942120 2284 kubelet.go:1430] "Unprivileged containerized plugins might not work, could not set selinux context on plugin registration dir" path="/var/lib/kubelet/plugins_registry" err="setxattr /var/lib/kubelet/plugins_registry: invalid argument" Aug 13 01:23:08.942210 kubelet[2284]: I0813 01:23:08.942196 2284 kubelet.go:1434] "Unprivileged containerized plugins might not work, could not set selinux context on plugins dir" path="/var/lib/kubelet/plugins" err="setxattr /var/lib/kubelet/plugins: invalid argument" Aug 13 01:23:08.942271 kubelet[2284]: I0813 01:23:08.942264 2284 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Aug 13 01:23:08.941000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Aug 13 01:23:08.945912 kernel: audit: type=1400 audit(1755048188.941:229): avc: denied { mac_admin } for pid=2284 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:23:08.945946 kernel: audit: type=1401 audit(1755048188.941:229): op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Aug 13 01:23:08.941000 audit[2284]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c000bdfbc0 a1=c000ba1170 a2=c000bdfb90 a3=25 items=0 ppid=1 pid=2284 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:23:08.949769 kernel: audit: type=1300 audit(1755048188.941:229): arch=c000003e syscall=188 success=no exit=-22 a0=c000bdfbc0 a1=c000ba1170 a2=c000bdfb90 a3=25 items=0 ppid=1 pid=2284 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:23:08.941000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Aug 13 01:23:08.953729 kernel: audit: type=1327 audit(1755048188.941:229): proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Aug 13 01:23:08.941000 audit[2284]: AVC avc: denied { mac_admin } for pid=2284 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:23:08.954546 
kubelet[2284]: I0813 01:23:08.954535 2284 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Aug 13 01:23:08.957155 kernel: audit: type=1400 audit(1755048188.941:230): avc: denied { mac_admin } for pid=2284 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:23:08.957184 kernel: audit: type=1401 audit(1755048188.941:230): op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Aug 13 01:23:08.941000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Aug 13 01:23:08.957307 kubelet[2284]: I0813 01:23:08.957300 2284 volume_manager.go:289] "Starting Kubelet Volume Manager" Aug 13 01:23:08.957933 kernel: audit: type=1300 audit(1755048188.941:230): arch=c000003e syscall=188 success=no exit=-22 a0=c000be8ec0 a1=c000ba1188 a2=c000bdfc50 a3=25 items=0 ppid=1 pid=2284 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:23:08.941000 audit[2284]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c000be8ec0 a1=c000ba1188 a2=c000bdfc50 a3=25 items=0 ppid=1 pid=2284 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:23:08.941000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Aug 13 01:23:08.961949 kubelet[2284]: I0813 01:23:08.961940 2284 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Aug 13 01:23:08.962068 kubelet[2284]: I0813 01:23:08.962061 2284 reconciler.go:26] "Reconciler: start to sync state" Aug 13 01:23:08.964714 kernel: audit: type=1327 audit(1755048188.941:230): proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Aug 13 01:23:08.966881 kubelet[2284]: I0813 01:23:08.966196 2284 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Aug 13 01:23:08.968243 kubelet[2284]: I0813 01:23:08.968229 2284 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Aug 13 01:23:08.968278 kubelet[2284]: I0813 01:23:08.968245 2284 status_manager.go:217] "Starting to sync pod status with apiserver" Aug 13 01:23:08.968278 kubelet[2284]: I0813 01:23:08.968256 2284 kubelet.go:2321] "Starting kubelet main sync loop" Aug 13 01:23:08.968320 kubelet[2284]: E0813 01:23:08.968281 2284 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Aug 13 01:23:08.989657 kubelet[2284]: I0813 01:23:08.989393 2284 server.go:449] "Adding debug handlers to kubelet server" Aug 13 01:23:08.989912 kubelet[2284]: I0813 01:23:08.989899 2284 factory.go:221] Registration of the systemd container factory successfully Aug 13 01:23:08.989967 kubelet[2284]: I0813 01:23:08.989952 2284 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Aug 13 01:23:08.992457 kubelet[2284]: I0813 01:23:08.991995 2284 factory.go:221] Registration of the containerd container factory successfully Aug 13 01:23:09.050530 kubelet[2284]: I0813 01:23:09.050510 2284 cpu_manager.go:214] "Starting CPU manager" policy="none" Aug 13 01:23:09.050530 kubelet[2284]: I0813 01:23:09.050523 2284 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Aug 13 01:23:09.050636 kubelet[2284]: I0813 01:23:09.050545 2284 state_mem.go:36] "Initialized new in-memory state store" Aug 13 01:23:09.050658 kubelet[2284]: I0813 01:23:09.050635 2284 state_mem.go:88] "Updated default CPUSet" cpuSet="" Aug 13 01:23:09.050658 kubelet[2284]: I0813 01:23:09.050642 2284 state_mem.go:96] "Updated CPUSet assignments" assignments={} Aug 13 01:23:09.050658 kubelet[2284]: I0813 01:23:09.050654 2284 policy_none.go:49] "None policy: Start" Aug 13 01:23:09.051299 kubelet[2284]: I0813 01:23:09.051023 2284 memory_manager.go:170] "Starting memorymanager" policy="None" Aug 13 01:23:09.051299 kubelet[2284]: I0813 01:23:09.051046 2284 state_mem.go:35] "Initializing new in-memory state store" Aug 13 01:23:09.051299 kubelet[2284]: I0813 01:23:09.051126 2284 state_mem.go:75] "Updated machine memory state" Aug 13 01:23:09.051000 audit[2284]: AVC avc: denied { mac_admin } for pid=2284 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:23:09.051000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Aug 13 01:23:09.051000 audit[2284]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c000ae1770 a1=c000ae6540 a2=c000ae1740 a3=25 items=0 ppid=1 pid=2284 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:23:09.051000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Aug 13 01:23:09.052342 kubelet[2284]: I0813 01:23:09.051760 2284 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Aug 13 01:23:09.052342 kubelet[2284]: I0813 01:23:09.051799 2284 server.go:88] "Unprivileged containerized plugins might not work. 
Could not set selinux context on socket dir" path="/var/lib/kubelet/device-plugins/" err="setxattr /var/lib/kubelet/device-plugins/: invalid argument" Aug 13 01:23:09.052342 kubelet[2284]: I0813 01:23:09.051880 2284 eviction_manager.go:189] "Eviction manager: starting control loop" Aug 13 01:23:09.052342 kubelet[2284]: I0813 01:23:09.051886 2284 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Aug 13 01:23:09.053388 kubelet[2284]: I0813 01:23:09.053197 2284 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Aug 13 01:23:09.163748 kubelet[2284]: I0813 01:23:09.163723 2284 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Aug 13 01:23:09.168265 kubelet[2284]: I0813 01:23:09.168207 2284 kubelet_node_status.go:111] "Node was previously registered" node="localhost" Aug 13 01:23:09.168265 kubelet[2284]: I0813 01:23:09.168249 2284 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Aug 13 01:23:09.263713 kubelet[2284]: I0813 01:23:09.263646 2284 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/407c569889bb86d746b0274843003fd0-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"407c569889bb86d746b0274843003fd0\") " pod="kube-system/kube-controller-manager-localhost" Aug 13 01:23:09.263713 kubelet[2284]: I0813 01:23:09.263669 2284 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/407c569889bb86d746b0274843003fd0-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"407c569889bb86d746b0274843003fd0\") " pod="kube-system/kube-controller-manager-localhost" Aug 13 01:23:09.263713 kubelet[2284]: I0813 01:23:09.263682 2284 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/407c569889bb86d746b0274843003fd0-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"407c569889bb86d746b0274843003fd0\") " pod="kube-system/kube-controller-manager-localhost" Aug 13 01:23:09.263713 kubelet[2284]: I0813 01:23:09.263704 2284 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ebe30c8a4dd92f0dac806d440a262264-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"ebe30c8a4dd92f0dac806d440a262264\") " pod="kube-system/kube-apiserver-localhost" Aug 13 01:23:09.263857 kubelet[2284]: I0813 01:23:09.263724 2284 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ebe30c8a4dd92f0dac806d440a262264-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"ebe30c8a4dd92f0dac806d440a262264\") " pod="kube-system/kube-apiserver-localhost" Aug 13 01:23:09.263857 kubelet[2284]: I0813 01:23:09.263735 2284 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/407c569889bb86d746b0274843003fd0-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"407c569889bb86d746b0274843003fd0\") " pod="kube-system/kube-controller-manager-localhost" Aug 13 01:23:09.263857 kubelet[2284]: I0813 01:23:09.263745 2284 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/407c569889bb86d746b0274843003fd0-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"407c569889bb86d746b0274843003fd0\") " pod="kube-system/kube-controller-manager-localhost" Aug 13 01:23:09.263857 kubelet[2284]: I0813 01:23:09.263754 2284 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/27e4a50e94f48ec00f6bd509cb48ed05-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"27e4a50e94f48ec00f6bd509cb48ed05\") " pod="kube-system/kube-scheduler-localhost" Aug 13 01:23:09.263857 kubelet[2284]: I0813 01:23:09.263762 2284 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ebe30c8a4dd92f0dac806d440a262264-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"ebe30c8a4dd92f0dac806d440a262264\") " pod="kube-system/kube-apiserver-localhost" Aug 13 01:23:09.858928 kubelet[2284]: I0813 01:23:09.858909 2284 apiserver.go:52] "Watching apiserver" Aug 13 01:23:09.862817 kubelet[2284]: I0813 01:23:09.862803 2284 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Aug 13 01:23:10.032221 kubelet[2284]: E0813 01:23:10.032198 2284 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Aug 13 01:23:10.032532 kubelet[2284]: E0813 01:23:10.032512 2284 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Aug 13 01:23:10.043071 kubelet[2284]: I0813 01:23:10.043029 2284 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.04301991 podStartE2EDuration="1.04301991s" podCreationTimestamp="2025-08-13 01:23:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 01:23:10.039874629 +0000 UTC m=+1.325095497" watchObservedRunningTime="2025-08-13 01:23:10.04301991 +0000 UTC m=+1.328240783" Aug 13 01:23:10.046127 kubelet[2284]: I0813 01:23:10.046107 2284 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.046097917 podStartE2EDuration="1.046097917s" podCreationTimestamp="2025-08-13 01:23:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 01:23:10.043001234 +0000 UTC m=+1.328222101" watchObservedRunningTime="2025-08-13 01:23:10.046097917 +0000 UTC m=+1.331318778" Aug 13 01:23:13.850639 kubelet[2284]: I0813 01:23:13.850618 2284 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Aug 13 01:23:13.851224 env[1382]: time="2025-08-13T01:23:13.851158807Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Aug 13 01:23:13.851507 kubelet[2284]: I0813 01:23:13.851496 2284 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Aug 13 01:23:14.829762 kubelet[2284]: I0813 01:23:14.829716 2284 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=5.829702687 podStartE2EDuration="5.829702687s" podCreationTimestamp="2025-08-13 01:23:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 01:23:10.046369314 +0000 UTC m=+1.331590173" watchObservedRunningTime="2025-08-13 01:23:14.829702687 +0000 UTC m=+6.114923546" Aug 13 01:23:14.904412 kubelet[2284]: I0813 01:23:14.904386 2284 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/588a7f93-d873-4345-a33a-6efb8209fe01-lib-modules\") pod \"kube-proxy-8wrpr\" (UID: \"588a7f93-d873-4345-a33a-6efb8209fe01\") " pod="kube-system/kube-proxy-8wrpr" Aug 13 01:23:14.904412 kubelet[2284]: I0813 01:23:14.904411 2284 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dc64m\" (UniqueName: \"kubernetes.io/projected/588a7f93-d873-4345-a33a-6efb8209fe01-kube-api-access-dc64m\") pod \"kube-proxy-8wrpr\" (UID: \"588a7f93-d873-4345-a33a-6efb8209fe01\") " pod="kube-system/kube-proxy-8wrpr" Aug 13 01:23:14.904735 kubelet[2284]: I0813 01:23:14.904457 2284 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/588a7f93-d873-4345-a33a-6efb8209fe01-kube-proxy\") pod \"kube-proxy-8wrpr\" (UID: \"588a7f93-d873-4345-a33a-6efb8209fe01\") " pod="kube-system/kube-proxy-8wrpr" Aug 13 01:23:14.904735 kubelet[2284]: I0813 01:23:14.904475 2284 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/588a7f93-d873-4345-a33a-6efb8209fe01-xtables-lock\") pod \"kube-proxy-8wrpr\" (UID: \"588a7f93-d873-4345-a33a-6efb8209fe01\") " pod="kube-system/kube-proxy-8wrpr" Aug 13 01:23:15.004971 kubelet[2284]: I0813 01:23:15.004941 2284 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cdvsq\" (UniqueName: \"kubernetes.io/projected/3c7e10d3-200f-4a55-acf3-8eb506bd85e8-kube-api-access-cdvsq\") pod \"tigera-operator-5bf8dfcb4-cjx9m\" (UID: \"3c7e10d3-200f-4a55-acf3-8eb506bd85e8\") " pod="tigera-operator/tigera-operator-5bf8dfcb4-cjx9m" Aug 13 01:23:15.004971 kubelet[2284]: I0813 01:23:15.004982 2284 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/3c7e10d3-200f-4a55-acf3-8eb506bd85e8-var-lib-calico\") pod \"tigera-operator-5bf8dfcb4-cjx9m\" (UID: \"3c7e10d3-200f-4a55-acf3-8eb506bd85e8\") " pod="tigera-operator/tigera-operator-5bf8dfcb4-cjx9m" Aug 13 01:23:15.009364 kubelet[2284]: I0813 01:23:15.009344 2284 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. 
Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Aug 13 01:23:15.133494 env[1382]: time="2025-08-13T01:23:15.133427765Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-8wrpr,Uid:588a7f93-d873-4345-a33a-6efb8209fe01,Namespace:kube-system,Attempt:0,}" Aug 13 01:23:15.150949 env[1382]: time="2025-08-13T01:23:15.150908238Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 01:23:15.151063 env[1382]: time="2025-08-13T01:23:15.150936385Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 01:23:15.151063 env[1382]: time="2025-08-13T01:23:15.150944686Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 01:23:15.151163 env[1382]: time="2025-08-13T01:23:15.151085442Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/866472c9b6119087e839f5674a4bc55618239786f39a017026c6dc822ea184e5 pid=2333 runtime=io.containerd.runc.v2 Aug 13 01:23:15.182057 env[1382]: time="2025-08-13T01:23:15.182031439Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-8wrpr,Uid:588a7f93-d873-4345-a33a-6efb8209fe01,Namespace:kube-system,Attempt:0,} returns sandbox id \"866472c9b6119087e839f5674a4bc55618239786f39a017026c6dc822ea184e5\"" Aug 13 01:23:15.184642 env[1382]: time="2025-08-13T01:23:15.184376987Z" level=info msg="CreateContainer within sandbox \"866472c9b6119087e839f5674a4bc55618239786f39a017026c6dc822ea184e5\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Aug 13 01:23:15.190012 env[1382]: time="2025-08-13T01:23:15.189993664Z" level=info msg="CreateContainer within sandbox \"866472c9b6119087e839f5674a4bc55618239786f39a017026c6dc822ea184e5\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"02c4b7df1c1e46ac83fee019c228a4d51e3fe366c7cf79d7145d04c54a59968b\"" Aug 13 01:23:15.190920 env[1382]: time="2025-08-13T01:23:15.190837962Z" level=info msg="StartContainer for \"02c4b7df1c1e46ac83fee019c228a4d51e3fe366c7cf79d7145d04c54a59968b\"" Aug 13 01:23:15.223671 env[1382]: time="2025-08-13T01:23:15.223642642Z" level=info msg="StartContainer for \"02c4b7df1c1e46ac83fee019c228a4d51e3fe366c7cf79d7145d04c54a59968b\" returns successfully" Aug 13 01:23:15.250150 env[1382]: time="2025-08-13T01:23:15.250125319Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5bf8dfcb4-cjx9m,Uid:3c7e10d3-200f-4a55-acf3-8eb506bd85e8,Namespace:tigera-operator,Attempt:0,}" Aug 13 01:23:15.258438 env[1382]: time="2025-08-13T01:23:15.258397320Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 01:23:15.258509 env[1382]: time="2025-08-13T01:23:15.258453874Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 01:23:15.258509 env[1382]: time="2025-08-13T01:23:15.258484290Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 01:23:15.258750 env[1382]: time="2025-08-13T01:23:15.258651935Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/44199e0cb3ce0289337c8167b45ebe66ab7c82327a369fc79a8e50508f749206 pid=2407 runtime=io.containerd.runc.v2 Aug 13 01:23:15.299623 env[1382]: time="2025-08-13T01:23:15.299596850Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5bf8dfcb4-cjx9m,Uid:3c7e10d3-200f-4a55-acf3-8eb506bd85e8,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"44199e0cb3ce0289337c8167b45ebe66ab7c82327a369fc79a8e50508f749206\"" Aug 13 01:23:15.300948 env[1382]: time="2025-08-13T01:23:15.300662523Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\"" Aug 13 01:23:15.623875 kernel: kauditd_printk_skb: 4 callbacks suppressed Aug 13 01:23:15.623968 kernel: audit: type=1325 audit(1755048195.619:232): table=mangle:38 family=2 entries=1 op=nft_register_chain pid=2477 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 01:23:15.619000 audit[2477]: NETFILTER_CFG table=mangle:38 family=2 entries=1 op=nft_register_chain pid=2477 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 01:23:15.628962 kernel: audit: type=1300 audit(1755048195.619:232): arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fff832e1da0 a2=0 a3=7fff832e1d8c items=0 ppid=2387 pid=2477 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:23:15.619000 audit[2477]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fff832e1da0 a2=0 a3=7fff832e1d8c items=0 ppid=2387 pid=2477 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:23:15.619000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Aug 13 01:23:15.631415 kernel: audit: type=1327 audit(1755048195.619:232): proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Aug 13 01:23:15.619000 audit[2478]: NETFILTER_CFG table=nat:39 family=2 entries=1 op=nft_register_chain pid=2478 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 01:23:15.633560 kernel: audit: type=1325 audit(1755048195.619:233): table=nat:39 family=2 entries=1 op=nft_register_chain pid=2478 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 01:23:15.619000 audit[2478]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffce6c051c0 a2=0 a3=7ffce6c051ac items=0 ppid=2387 pid=2478 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:23:15.638704 kernel: audit: type=1300 audit(1755048195.619:233): arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffce6c051c0 a2=0 a3=7ffce6c051ac items=0 ppid=2387 pid=2478 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:23:15.619000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Aug 13 01:23:15.621000 audit[2479]: NETFILTER_CFG table=filter:40 family=2 entries=1 op=nft_register_chain pid=2479 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 01:23:15.642637 kernel: audit: type=1327 audit(1755048195.619:233): proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Aug 13 01:23:15.642670 kernel: audit: type=1325 audit(1755048195.621:234): table=filter:40 family=2 entries=1 op=nft_register_chain pid=2479 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 01:23:15.621000 audit[2479]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffea3f03920 a2=0 a3=7ffea3f0390c items=0 ppid=2387 pid=2479 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:23:15.621000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Aug 13 01:23:15.648305 kernel: audit: type=1300 audit(1755048195.621:234): arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffea3f03920 a2=0 a3=7ffea3f0390c items=0 ppid=2387 pid=2479 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:23:15.648334 kernel: audit: type=1327 audit(1755048195.621:234): proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Aug 13 01:23:15.648350 kernel: audit: type=1325 audit(1755048195.621:235): table=mangle:41 family=10 entries=1 op=nft_register_chain pid=2480 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Aug 13 01:23:15.621000 audit[2480]: NETFILTER_CFG table=mangle:41 family=10 entries=1 op=nft_register_chain pid=2480 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Aug 13 01:23:15.621000 audit[2480]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffce222fc40 a2=0 a3=7ffce222fc2c items=0 ppid=2387 pid=2480 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:23:15.621000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Aug 13 01:23:15.623000 audit[2481]: NETFILTER_CFG table=nat:42 family=10 entries=1 op=nft_register_chain pid=2481 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Aug 13 01:23:15.623000 audit[2481]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe30d59970 a2=0 a3=7ffe30d5995c items=0 ppid=2387 pid=2481 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:23:15.623000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Aug 13 01:23:15.623000 audit[2482]: NETFILTER_CFG table=filter:43 family=10 entries=1 op=nft_register_chain pid=2482 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Aug 13 01:23:15.623000 audit[2482]: SYSCALL arch=c000003e syscall=46 success=yes 
exit=104 a0=3 a1=7ffd43f085c0 a2=0 a3=7ffd43f085ac items=0 ppid=2387 pid=2482 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:23:15.623000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Aug 13 01:23:15.730000 audit[2483]: NETFILTER_CFG table=filter:44 family=2 entries=1 op=nft_register_chain pid=2483 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 01:23:15.730000 audit[2483]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7ffd81b547f0 a2=0 a3=7ffd81b547dc items=0 ppid=2387 pid=2483 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:23:15.730000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Aug 13 01:23:15.733000 audit[2485]: NETFILTER_CFG table=filter:45 family=2 entries=1 op=nft_register_rule pid=2485 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 01:23:15.733000 audit[2485]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7ffc38d89010 a2=0 a3=7ffc38d88ffc items=0 ppid=2387 pid=2485 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:23:15.733000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276696365 Aug 13 01:23:15.736000 audit[2488]: NETFILTER_CFG table=filter:46 family=2 entries=1 op=nft_register_rule pid=2488 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 01:23:15.736000 audit[2488]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7ffcaef1d530 a2=0 a3=7ffcaef1d51c items=0 ppid=2387 pid=2488 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:23:15.736000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669 Aug 13 01:23:15.737000 audit[2489]: NETFILTER_CFG table=filter:47 family=2 entries=1 op=nft_register_chain pid=2489 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 01:23:15.737000 audit[2489]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe7445de00 a2=0 a3=7ffe7445ddec items=0 ppid=2387 pid=2489 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:23:15.737000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Aug 13 01:23:15.739000 audit[2491]: NETFILTER_CFG table=filter:48 family=2 entries=1 op=nft_register_rule 
pid=2491 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 01:23:15.739000 audit[2491]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffe3463d8c0 a2=0 a3=7ffe3463d8ac items=0 ppid=2387 pid=2491 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:23:15.739000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Aug 13 01:23:15.740000 audit[2492]: NETFILTER_CFG table=filter:49 family=2 entries=1 op=nft_register_chain pid=2492 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 01:23:15.740000 audit[2492]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd97e23510 a2=0 a3=7ffd97e234fc items=0 ppid=2387 pid=2492 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:23:15.740000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Aug 13 01:23:15.742000 audit[2494]: NETFILTER_CFG table=filter:50 family=2 entries=1 op=nft_register_rule pid=2494 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 01:23:15.742000 audit[2494]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffc391c57a0 a2=0 a3=7ffc391c578c items=0 ppid=2387 pid=2494 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:23:15.742000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Aug 13 01:23:15.745000 audit[2497]: NETFILTER_CFG table=filter:51 family=2 entries=1 op=nft_register_rule pid=2497 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 01:23:15.745000 audit[2497]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffe2bf3a260 a2=0 a3=7ffe2bf3a24c items=0 ppid=2387 pid=2497 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:23:15.745000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D53 Aug 13 01:23:15.746000 audit[2498]: NETFILTER_CFG table=filter:52 family=2 entries=1 op=nft_register_chain pid=2498 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 01:23:15.746000 audit[2498]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff5f40c6d0 a2=0 a3=7fff5f40c6bc items=0 ppid=2387 pid=2498 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 
01:23:15.746000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Aug 13 01:23:15.748000 audit[2500]: NETFILTER_CFG table=filter:53 family=2 entries=1 op=nft_register_rule pid=2500 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 01:23:15.748000 audit[2500]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffd26ebabf0 a2=0 a3=7ffd26ebabdc items=0 ppid=2387 pid=2500 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:23:15.748000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Aug 13 01:23:15.749000 audit[2501]: NETFILTER_CFG table=filter:54 family=2 entries=1 op=nft_register_chain pid=2501 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 01:23:15.749000 audit[2501]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffc8d93f150 a2=0 a3=7ffc8d93f13c items=0 ppid=2387 pid=2501 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:23:15.749000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Aug 13 01:23:15.751000 audit[2503]: NETFILTER_CFG table=filter:55 family=2 entries=1 op=nft_register_rule pid=2503 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 01:23:15.751000 audit[2503]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffdbf3f4e80 a2=0 a3=7ffdbf3f4e6c items=0 ppid=2387 pid=2503 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:23:15.751000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Aug 13 01:23:15.754000 audit[2506]: NETFILTER_CFG table=filter:56 family=2 entries=1 op=nft_register_rule pid=2506 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 01:23:15.754000 audit[2506]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffc6ddd1cc0 a2=0 a3=7ffc6ddd1cac items=0 ppid=2387 pid=2506 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:23:15.754000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Aug 13 01:23:15.757000 audit[2509]: NETFILTER_CFG table=filter:57 family=2 entries=1 op=nft_register_rule pid=2509 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 01:23:15.757000 audit[2509]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffe0da068d0 a2=0 
a3=7ffe0da068bc items=0 ppid=2387 pid=2509 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:23:15.757000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Aug 13 01:23:15.757000 audit[2510]: NETFILTER_CFG table=nat:58 family=2 entries=1 op=nft_register_chain pid=2510 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 01:23:15.757000 audit[2510]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffd45e9b8d0 a2=0 a3=7ffd45e9b8bc items=0 ppid=2387 pid=2510 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:23:15.757000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Aug 13 01:23:15.759000 audit[2512]: NETFILTER_CFG table=nat:59 family=2 entries=1 op=nft_register_rule pid=2512 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 01:23:15.759000 audit[2512]: SYSCALL arch=c000003e syscall=46 success=yes exit=524 a0=3 a1=7ffcb4b76620 a2=0 a3=7ffcb4b7660c items=0 ppid=2387 pid=2512 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:23:15.759000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Aug 13 01:23:15.762000 audit[2515]: NETFILTER_CFG table=nat:60 family=2 entries=1 op=nft_register_rule pid=2515 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 01:23:15.762000 audit[2515]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffea20c6040 a2=0 a3=7ffea20c602c items=0 ppid=2387 pid=2515 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:23:15.762000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Aug 13 01:23:15.762000 audit[2516]: NETFILTER_CFG table=nat:61 family=2 entries=1 op=nft_register_chain pid=2516 subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 01:23:15.762000 audit[2516]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff0d60aff0 a2=0 a3=7fff0d60afdc items=0 ppid=2387 pid=2516 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:23:15.762000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Aug 13 01:23:15.764000 audit[2518]: NETFILTER_CFG table=nat:62 family=2 entries=1 op=nft_register_rule pid=2518 
subj=system_u:system_r:kernel_t:s0 comm="iptables" Aug 13 01:23:15.764000 audit[2518]: SYSCALL arch=c000003e syscall=46 success=yes exit=532 a0=3 a1=7ffc34250f70 a2=0 a3=7ffc34250f5c items=0 ppid=2387 pid=2518 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:23:15.764000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Aug 13 01:23:15.782000 audit[2524]: NETFILTER_CFG table=filter:63 family=2 entries=8 op=nft_register_rule pid=2524 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 01:23:15.782000 audit[2524]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffec4c5b080 a2=0 a3=7ffec4c5b06c items=0 ppid=2387 pid=2524 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:23:15.782000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 01:23:15.789000 audit[2524]: NETFILTER_CFG table=nat:64 family=2 entries=14 op=nft_register_chain pid=2524 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 01:23:15.789000 audit[2524]: SYSCALL arch=c000003e syscall=46 success=yes exit=5508 a0=3 a1=7ffec4c5b080 a2=0 a3=7ffec4c5b06c items=0 ppid=2387 pid=2524 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:23:15.789000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 01:23:15.790000 audit[2529]: NETFILTER_CFG table=filter:65 family=10 entries=1 op=nft_register_chain pid=2529 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Aug 13 01:23:15.790000 audit[2529]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7fff71e21050 a2=0 a3=7fff71e2103c items=0 ppid=2387 pid=2529 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:23:15.790000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Aug 13 01:23:15.791000 audit[2531]: NETFILTER_CFG table=filter:66 family=10 entries=2 op=nft_register_chain pid=2531 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Aug 13 01:23:15.791000 audit[2531]: SYSCALL arch=c000003e syscall=46 success=yes exit=836 a0=3 a1=7ffd33e1da00 a2=0 a3=7ffd33e1d9ec items=0 ppid=2387 pid=2531 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:23:15.791000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C6520736572766963 Aug 13 01:23:15.793000 audit[2534]: NETFILTER_CFG table=filter:67 family=10 entries=2 op=nft_register_chain pid=2534 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Aug 13 01:23:15.793000 audit[2534]: SYSCALL arch=c000003e syscall=46 success=yes exit=836 a0=3 a1=7ffe8d0843c0 a2=0 a3=7ffe8d0843ac items=0 ppid=2387 pid=2534 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:23:15.793000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276 Aug 13 01:23:15.794000 audit[2535]: NETFILTER_CFG table=filter:68 family=10 entries=1 op=nft_register_chain pid=2535 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Aug 13 01:23:15.794000 audit[2535]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe0bfc5700 a2=0 a3=7ffe0bfc56ec items=0 ppid=2387 pid=2535 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:23:15.794000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Aug 13 01:23:15.795000 audit[2537]: NETFILTER_CFG table=filter:69 family=10 entries=1 op=nft_register_rule pid=2537 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Aug 13 01:23:15.795000 audit[2537]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffffba38350 a2=0 a3=7ffffba3833c items=0 ppid=2387 pid=2537 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:23:15.795000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Aug 13 01:23:15.796000 audit[2538]: NETFILTER_CFG table=filter:70 family=10 entries=1 op=nft_register_chain pid=2538 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Aug 13 01:23:15.796000 audit[2538]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffed9e67180 a2=0 a3=7ffed9e6716c items=0 ppid=2387 pid=2538 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:23:15.796000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Aug 13 01:23:15.797000 audit[2540]: NETFILTER_CFG table=filter:71 family=10 entries=1 op=nft_register_rule pid=2540 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Aug 13 01:23:15.797000 audit[2540]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffc57066b60 a2=0 
a3=7ffc57066b4c items=0 ppid=2387 pid=2540 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:23:15.797000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B554245 Aug 13 01:23:15.799000 audit[2543]: NETFILTER_CFG table=filter:72 family=10 entries=2 op=nft_register_chain pid=2543 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Aug 13 01:23:15.799000 audit[2543]: SYSCALL arch=c000003e syscall=46 success=yes exit=828 a0=3 a1=7ffc76237430 a2=0 a3=7ffc7623741c items=0 ppid=2387 pid=2543 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:23:15.799000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Aug 13 01:23:15.800000 audit[2544]: NETFILTER_CFG table=filter:73 family=10 entries=1 op=nft_register_chain pid=2544 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Aug 13 01:23:15.800000 audit[2544]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fffdb5631c0 a2=0 a3=7fffdb5631ac items=0 ppid=2387 pid=2544 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:23:15.800000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Aug 13 01:23:15.801000 audit[2546]: NETFILTER_CFG table=filter:74 family=10 entries=1 op=nft_register_rule pid=2546 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Aug 13 01:23:15.801000 audit[2546]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffc0aeb80d0 a2=0 a3=7ffc0aeb80bc items=0 ppid=2387 pid=2546 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:23:15.801000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Aug 13 01:23:15.802000 audit[2547]: NETFILTER_CFG table=filter:75 family=10 entries=1 op=nft_register_chain pid=2547 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Aug 13 01:23:15.802000 audit[2547]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fffb8951580 a2=0 a3=7fffb895156c items=0 ppid=2387 pid=2547 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:23:15.802000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Aug 13 01:23:15.804000 
audit[2549]: NETFILTER_CFG table=filter:76 family=10 entries=1 op=nft_register_rule pid=2549 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Aug 13 01:23:15.804000 audit[2549]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffc4dc60350 a2=0 a3=7ffc4dc6033c items=0 ppid=2387 pid=2549 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:23:15.804000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Aug 13 01:23:15.807000 audit[2553]: NETFILTER_CFG table=filter:77 family=10 entries=1 op=nft_register_rule pid=2553 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Aug 13 01:23:15.807000 audit[2553]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffedf8796d0 a2=0 a3=7ffedf8796bc items=0 ppid=2387 pid=2553 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:23:15.807000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Aug 13 01:23:15.809000 audit[2556]: NETFILTER_CFG table=filter:78 family=10 entries=1 op=nft_register_rule pid=2556 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Aug 13 01:23:15.809000 audit[2556]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffddf2a6980 a2=0 a3=7ffddf2a696c items=0 ppid=2387 pid=2556 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:23:15.809000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C Aug 13 01:23:15.810000 audit[2557]: NETFILTER_CFG table=nat:79 family=10 entries=1 op=nft_register_chain pid=2557 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Aug 13 01:23:15.810000 audit[2557]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7fffd44c5bf0 a2=0 a3=7fffd44c5bdc items=0 ppid=2387 pid=2557 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:23:15.810000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Aug 13 01:23:15.811000 audit[2559]: NETFILTER_CFG table=nat:80 family=10 entries=2 op=nft_register_chain pid=2559 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Aug 13 01:23:15.811000 audit[2559]: SYSCALL arch=c000003e syscall=46 success=yes exit=600 a0=3 a1=7ffd8ed60a70 a2=0 a3=7ffd8ed60a5c items=0 ppid=2387 pid=2559 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:23:15.811000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Aug 13 01:23:15.813000 audit[2562]: NETFILTER_CFG table=nat:81 family=10 entries=2 op=nft_register_chain pid=2562 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Aug 13 01:23:15.813000 audit[2562]: SYSCALL arch=c000003e syscall=46 success=yes exit=608 a0=3 a1=7fff0d2b5100 a2=0 a3=7fff0d2b50ec items=0 ppid=2387 pid=2562 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:23:15.813000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Aug 13 01:23:15.814000 audit[2563]: NETFILTER_CFG table=nat:82 family=10 entries=1 op=nft_register_chain pid=2563 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Aug 13 01:23:15.814000 audit[2563]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffcab425170 a2=0 a3=7ffcab42515c items=0 ppid=2387 pid=2563 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:23:15.814000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Aug 13 01:23:15.815000 audit[2565]: NETFILTER_CFG table=nat:83 family=10 entries=2 op=nft_register_chain pid=2565 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Aug 13 01:23:15.815000 audit[2565]: SYSCALL arch=c000003e syscall=46 success=yes exit=612 a0=3 a1=7ffc7720b830 a2=0 a3=7ffc7720b81c items=0 ppid=2387 pid=2565 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:23:15.815000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Aug 13 01:23:15.816000 audit[2566]: NETFILTER_CFG table=filter:84 family=10 entries=1 op=nft_register_chain pid=2566 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Aug 13 01:23:15.816000 audit[2566]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fffcd62f300 a2=0 a3=7fffcd62f2ec items=0 ppid=2387 pid=2566 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:23:15.816000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Aug 13 01:23:15.817000 audit[2568]: NETFILTER_CFG table=filter:85 family=10 entries=1 op=nft_register_rule pid=2568 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Aug 13 01:23:15.817000 audit[2568]: SYSCALL arch=c000003e syscall=46 success=yes 
exit=228 a0=3 a1=7ffff5879e50 a2=0 a3=7ffff5879e3c items=0 ppid=2387 pid=2568 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:23:15.817000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Aug 13 01:23:15.819000 audit[2571]: NETFILTER_CFG table=filter:86 family=10 entries=1 op=nft_register_rule pid=2571 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Aug 13 01:23:15.819000 audit[2571]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffdc62b5eb0 a2=0 a3=7ffdc62b5e9c items=0 ppid=2387 pid=2571 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:23:15.819000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Aug 13 01:23:15.821000 audit[2573]: NETFILTER_CFG table=filter:87 family=10 entries=3 op=nft_register_rule pid=2573 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Aug 13 01:23:15.821000 audit[2573]: SYSCALL arch=c000003e syscall=46 success=yes exit=2088 a0=3 a1=7ffe9d3f02e0 a2=0 a3=7ffe9d3f02cc items=0 ppid=2387 pid=2573 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:23:15.821000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 01:23:15.822000 audit[2573]: NETFILTER_CFG table=nat:88 family=10 entries=7 op=nft_register_chain pid=2573 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Aug 13 01:23:15.822000 audit[2573]: SYSCALL arch=c000003e syscall=46 success=yes exit=2056 a0=3 a1=7ffe9d3f02e0 a2=0 a3=7ffe9d3f02cc items=0 ppid=2387 pid=2573 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:23:15.822000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 01:23:16.020546 systemd[1]: run-containerd-runc-k8s.io-866472c9b6119087e839f5674a4bc55618239786f39a017026c6dc822ea184e5-runc.WtcXch.mount: Deactivated successfully. Aug 13 01:23:16.044290 kubelet[2284]: I0813 01:23:16.044255 2284 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-8wrpr" podStartSLOduration=2.044234072 podStartE2EDuration="2.044234072s" podCreationTimestamp="2025-08-13 01:23:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 01:23:16.044055576 +0000 UTC m=+7.329276444" watchObservedRunningTime="2025-08-13 01:23:16.044234072 +0000 UTC m=+7.329454935" Aug 13 01:23:16.813730 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1799378614.mount: Deactivated successfully. 
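The hex `proctitle=` fields in the audit records above are just the NUL-separated argv of the xtables commands that kube-proxy spawned, hex-encoded by the audit subsystem because they contain non-printable bytes; `ausearch -i` would normally render them as plain text. As a convenience, here is a minimal standalone sketch (illustrative only, not part of anything running on this host) that decodes one of the values recorded above back into the original command line:

```python
# Illustrative helper (assumption: standalone script, not taken from this system):
# decode the hex-encoded proctitle= field of an audit PROCTITLE record into argv.
# The recorded value is the raw process title, i.e. argv joined with NUL bytes.

def decode_proctitle(hex_value: str) -> list[str]:
    raw = bytes.fromhex(hex_value)
    # Arguments are NUL-separated; drop any empty trailing fields.
    return [part.decode("utf-8", "replace") for part in raw.split(b"\x00") if part]

if __name__ == "__main__":
    # Value copied verbatim from one of the PROCTITLE records above.
    sample = (
        "6970367461626C6573002D770035002D5700313030303030"
        "002D4E004B5542452D50524F58592D43414E415259002D740066696C746572"
    )
    print(" ".join(decode_proctitle(sample)))
    # Prints: ip6tables -w 5 -W 100000 -N KUBE-PROXY-CANARY -t filter
```

Decoded this way, the burst of records between 01:23:15.62 and 01:23:15.82 is kube-proxy (ppid 2387) invoking `/usr/sbin/xtables-nft-multi` to create and populate its KUBE-SERVICES, KUBE-NODEPORTS, KUBE-FORWARD, KUBE-PROXY-FIREWALL and KUBE-POSTROUTING chains in both the IPv4 (family=2) and IPv6 (family=10) filter and nat tables, followed by `iptables-restore`/`ip6tables-restore -w 5 -W 100000 --noflush --counters` runs that load the full rule sets.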
Aug 13 01:23:17.332071 env[1382]: time="2025-08-13T01:23:17.332036179Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/tigera/operator:v1.38.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 01:23:17.332868 env[1382]: time="2025-08-13T01:23:17.332853415Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 01:23:17.333844 env[1382]: time="2025-08-13T01:23:17.333830316Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/tigera/operator:v1.38.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 01:23:17.334613 env[1382]: time="2025-08-13T01:23:17.334598468Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 01:23:17.334942 env[1382]: time="2025-08-13T01:23:17.334926386Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\" returns image reference \"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\"" Aug 13 01:23:17.337357 env[1382]: time="2025-08-13T01:23:17.337142678Z" level=info msg="CreateContainer within sandbox \"44199e0cb3ce0289337c8167b45ebe66ab7c82327a369fc79a8e50508f749206\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Aug 13 01:23:17.345229 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3331233300.mount: Deactivated successfully. Aug 13 01:23:17.354897 env[1382]: time="2025-08-13T01:23:17.354881081Z" level=info msg="CreateContainer within sandbox \"44199e0cb3ce0289337c8167b45ebe66ab7c82327a369fc79a8e50508f749206\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"5b82c212e505846ec70e0b428a29af1e1d0d96e3e3bef0233f323604b1c327fb\"" Aug 13 01:23:17.355860 env[1382]: time="2025-08-13T01:23:17.355779328Z" level=info msg="StartContainer for \"5b82c212e505846ec70e0b428a29af1e1d0d96e3e3bef0233f323604b1c327fb\"" Aug 13 01:23:17.405237 env[1382]: time="2025-08-13T01:23:17.405205749Z" level=info msg="StartContainer for \"5b82c212e505846ec70e0b428a29af1e1d0d96e3e3bef0233f323604b1c327fb\" returns successfully" Aug 13 01:23:19.740864 kubelet[2284]: I0813 01:23:19.740831 2284 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-5bf8dfcb4-cjx9m" podStartSLOduration=3.705556708 podStartE2EDuration="5.740821332s" podCreationTimestamp="2025-08-13 01:23:14 +0000 UTC" firstStartedPulling="2025-08-13 01:23:15.300256311 +0000 UTC m=+6.585477166" lastFinishedPulling="2025-08-13 01:23:17.335520931 +0000 UTC m=+8.620741790" observedRunningTime="2025-08-13 01:23:18.049742686 +0000 UTC m=+9.334963562" watchObservedRunningTime="2025-08-13 01:23:19.740821332 +0000 UTC m=+11.026042194" Aug 13 01:23:22.608316 sudo[1624]: pam_unix(sudo:session): session closed for user root Aug 13 01:23:22.612254 kernel: kauditd_printk_skb: 143 callbacks suppressed Aug 13 01:23:22.612291 kernel: audit: type=1106 audit(1755048202.606:283): pid=1624 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Aug 13 01:23:22.606000 audit[1624]: USER_END pid=1624 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Aug 13 01:23:22.611000 audit[1624]: CRED_DISP pid=1624 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Aug 13 01:23:22.615079 kernel: audit: type=1104 audit(1755048202.611:284): pid=1624 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Aug 13 01:23:22.622655 sshd[1618]: pam_unix(sshd:session): session closed for user core Aug 13 01:23:22.623000 audit[1618]: USER_END pid=1618 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 01:23:22.623000 audit[1618]: CRED_DISP pid=1618 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 01:23:22.632465 kernel: audit: type=1106 audit(1755048202.623:285): pid=1618 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 01:23:22.632504 kernel: audit: type=1104 audit(1755048202.623:286): pid=1618 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 01:23:22.634746 systemd[1]: sshd@6-139.178.70.100:22-139.178.68.195:47234.service: Deactivated successfully. Aug 13 01:23:22.635665 systemd[1]: session-9.scope: Deactivated successfully. Aug 13 01:23:22.635898 systemd-logind[1346]: Session 9 logged out. Waiting for processes to exit. Aug 13 01:23:22.636777 systemd-logind[1346]: Removed session 9. Aug 13 01:23:22.633000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-139.178.70.100:22-139.178.68.195:47234 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:23:22.642717 kernel: audit: type=1131 audit(1755048202.633:287): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-139.178.70.100:22-139.178.68.195:47234 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Aug 13 01:23:23.344000 audit[2657]: NETFILTER_CFG table=filter:89 family=2 entries=15 op=nft_register_rule pid=2657 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 01:23:23.344000 audit[2657]: SYSCALL arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7fffb2651c10 a2=0 a3=7fffb2651bfc items=0 ppid=2387 pid=2657 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:23:23.352193 kernel: audit: type=1325 audit(1755048203.344:288): table=filter:89 family=2 entries=15 op=nft_register_rule pid=2657 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 01:23:23.352222 kernel: audit: type=1300 audit(1755048203.344:288): arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7fffb2651c10 a2=0 a3=7fffb2651bfc items=0 ppid=2387 pid=2657 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:23:23.344000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 01:23:23.354615 kernel: audit: type=1327 audit(1755048203.344:288): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 01:23:23.350000 audit[2657]: NETFILTER_CFG table=nat:90 family=2 entries=12 op=nft_register_rule pid=2657 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 01:23:23.356602 kernel: audit: type=1325 audit(1755048203.350:289): table=nat:90 family=2 entries=12 op=nft_register_rule pid=2657 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 01:23:23.350000 audit[2657]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7fffb2651c10 a2=0 a3=0 items=0 ppid=2387 pid=2657 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:23:23.366713 kernel: audit: type=1300 audit(1755048203.350:289): arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7fffb2651c10 a2=0 a3=0 items=0 ppid=2387 pid=2657 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:23:23.350000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 01:23:23.371000 audit[2659]: NETFILTER_CFG table=filter:91 family=2 entries=16 op=nft_register_rule pid=2659 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 01:23:23.371000 audit[2659]: SYSCALL arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7fff7712a0f0 a2=0 a3=7fff7712a0dc items=0 ppid=2387 pid=2659 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:23:23.371000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 01:23:23.377000 audit[2659]: NETFILTER_CFG table=nat:92 family=2 entries=12 op=nft_register_rule pid=2659 
subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 01:23:23.377000 audit[2659]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7fff7712a0f0 a2=0 a3=0 items=0 ppid=2387 pid=2659 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:23:23.377000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 01:23:26.272000 audit[2661]: NETFILTER_CFG table=filter:93 family=2 entries=17 op=nft_register_rule pid=2661 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 01:23:26.272000 audit[2661]: SYSCALL arch=c000003e syscall=46 success=yes exit=6736 a0=3 a1=7ffd724cedd0 a2=0 a3=7ffd724cedbc items=0 ppid=2387 pid=2661 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:23:26.272000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 01:23:26.277000 audit[2661]: NETFILTER_CFG table=nat:94 family=2 entries=12 op=nft_register_rule pid=2661 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 01:23:26.277000 audit[2661]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffd724cedd0 a2=0 a3=0 items=0 ppid=2387 pid=2661 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:23:26.277000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 01:23:26.311000 audit[2663]: NETFILTER_CFG table=filter:95 family=2 entries=18 op=nft_register_rule pid=2663 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 01:23:26.311000 audit[2663]: SYSCALL arch=c000003e syscall=46 success=yes exit=6736 a0=3 a1=7ffc805b9cd0 a2=0 a3=7ffc805b9cbc items=0 ppid=2387 pid=2663 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:23:26.311000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 01:23:26.315000 audit[2663]: NETFILTER_CFG table=nat:96 family=2 entries=12 op=nft_register_rule pid=2663 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 01:23:26.315000 audit[2663]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffc805b9cd0 a2=0 a3=0 items=0 ppid=2387 pid=2663 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:23:26.315000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 01:23:26.369532 kubelet[2284]: I0813 01:23:26.369500 2284 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/56f41b33-8ee9-4a7c-8c63-170de93f6bf5-tigera-ca-bundle\") pod \"calico-typha-56bb99746-wz6fw\" (UID: \"56f41b33-8ee9-4a7c-8c63-170de93f6bf5\") " pod="calico-system/calico-typha-56bb99746-wz6fw" Aug 13 01:23:26.369532 kubelet[2284]: I0813 01:23:26.369531 2284 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/56f41b33-8ee9-4a7c-8c63-170de93f6bf5-typha-certs\") pod \"calico-typha-56bb99746-wz6fw\" (UID: \"56f41b33-8ee9-4a7c-8c63-170de93f6bf5\") " pod="calico-system/calico-typha-56bb99746-wz6fw" Aug 13 01:23:26.369810 kubelet[2284]: I0813 01:23:26.369546 2284 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8c2vd\" (UniqueName: \"kubernetes.io/projected/56f41b33-8ee9-4a7c-8c63-170de93f6bf5-kube-api-access-8c2vd\") pod \"calico-typha-56bb99746-wz6fw\" (UID: \"56f41b33-8ee9-4a7c-8c63-170de93f6bf5\") " pod="calico-system/calico-typha-56bb99746-wz6fw" Aug 13 01:23:26.603399 env[1382]: time="2025-08-13T01:23:26.603372807Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-56bb99746-wz6fw,Uid:56f41b33-8ee9-4a7c-8c63-170de93f6bf5,Namespace:calico-system,Attempt:0,}" Aug 13 01:23:26.639258 env[1382]: time="2025-08-13T01:23:26.639197552Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 01:23:26.640321 env[1382]: time="2025-08-13T01:23:26.640300615Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 01:23:26.640419 env[1382]: time="2025-08-13T01:23:26.640397400Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 01:23:26.642353 env[1382]: time="2025-08-13T01:23:26.642317583Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/86c62af35dc2adcd09d3ce9e2d1ad5ab8758e417e6b448b729379c0282fdb9ee pid=2673 runtime=io.containerd.runc.v2 Aug 13 01:23:26.671727 kubelet[2284]: I0813 01:23:26.671448 2284 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/817ac15d-d3b1-4be9-88ec-ca96aa4bc373-cni-net-dir\") pod \"calico-node-r9dzz\" (UID: \"817ac15d-d3b1-4be9-88ec-ca96aa4bc373\") " pod="calico-system/calico-node-r9dzz" Aug 13 01:23:26.671727 kubelet[2284]: I0813 01:23:26.671476 2284 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/817ac15d-d3b1-4be9-88ec-ca96aa4bc373-cni-log-dir\") pod \"calico-node-r9dzz\" (UID: \"817ac15d-d3b1-4be9-88ec-ca96aa4bc373\") " pod="calico-system/calico-node-r9dzz" Aug 13 01:23:26.671727 kubelet[2284]: I0813 01:23:26.671493 2284 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/817ac15d-d3b1-4be9-88ec-ca96aa4bc373-policysync\") pod \"calico-node-r9dzz\" (UID: \"817ac15d-d3b1-4be9-88ec-ca96aa4bc373\") " pod="calico-system/calico-node-r9dzz" Aug 13 01:23:26.671727 kubelet[2284]: I0813 01:23:26.671518 2284 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/817ac15d-d3b1-4be9-88ec-ca96aa4bc373-node-certs\") pod \"calico-node-r9dzz\" (UID: \"817ac15d-d3b1-4be9-88ec-ca96aa4bc373\") " pod="calico-system/calico-node-r9dzz" Aug 13 01:23:26.671727 kubelet[2284]: I0813 01:23:26.671534 2284 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/817ac15d-d3b1-4be9-88ec-ca96aa4bc373-var-run-calico\") pod \"calico-node-r9dzz\" (UID: \"817ac15d-d3b1-4be9-88ec-ca96aa4bc373\") " pod="calico-system/calico-node-r9dzz" Aug 13 01:23:26.671929 kubelet[2284]: I0813 01:23:26.671547 2284 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/817ac15d-d3b1-4be9-88ec-ca96aa4bc373-xtables-lock\") pod \"calico-node-r9dzz\" (UID: \"817ac15d-d3b1-4be9-88ec-ca96aa4bc373\") " pod="calico-system/calico-node-r9dzz" Aug 13 01:23:26.671929 kubelet[2284]: I0813 01:23:26.671563 2284 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gwtzb\" (UniqueName: \"kubernetes.io/projected/817ac15d-d3b1-4be9-88ec-ca96aa4bc373-kube-api-access-gwtzb\") pod \"calico-node-r9dzz\" (UID: \"817ac15d-d3b1-4be9-88ec-ca96aa4bc373\") " pod="calico-system/calico-node-r9dzz" Aug 13 01:23:26.671929 kubelet[2284]: I0813 01:23:26.671575 2284 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/817ac15d-d3b1-4be9-88ec-ca96aa4bc373-tigera-ca-bundle\") pod \"calico-node-r9dzz\" (UID: \"817ac15d-d3b1-4be9-88ec-ca96aa4bc373\") " pod="calico-system/calico-node-r9dzz" Aug 13 01:23:26.671929 kubelet[2284]: I0813 01:23:26.671586 2284 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/817ac15d-d3b1-4be9-88ec-ca96aa4bc373-var-lib-calico\") pod \"calico-node-r9dzz\" (UID: \"817ac15d-d3b1-4be9-88ec-ca96aa4bc373\") " pod="calico-system/calico-node-r9dzz" Aug 13 01:23:26.671929 kubelet[2284]: I0813 01:23:26.671601 2284 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/817ac15d-d3b1-4be9-88ec-ca96aa4bc373-flexvol-driver-host\") pod \"calico-node-r9dzz\" (UID: \"817ac15d-d3b1-4be9-88ec-ca96aa4bc373\") " pod="calico-system/calico-node-r9dzz" Aug 13 01:23:26.672045 kubelet[2284]: I0813 01:23:26.671613 2284 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/817ac15d-d3b1-4be9-88ec-ca96aa4bc373-cni-bin-dir\") pod \"calico-node-r9dzz\" (UID: \"817ac15d-d3b1-4be9-88ec-ca96aa4bc373\") " pod="calico-system/calico-node-r9dzz" Aug 13 01:23:26.672045 kubelet[2284]: I0813 01:23:26.671622 2284 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/817ac15d-d3b1-4be9-88ec-ca96aa4bc373-lib-modules\") pod \"calico-node-r9dzz\" (UID: \"817ac15d-d3b1-4be9-88ec-ca96aa4bc373\") " pod="calico-system/calico-node-r9dzz" Aug 13 01:23:26.721350 env[1382]: time="2025-08-13T01:23:26.721327864Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-56bb99746-wz6fw,Uid:56f41b33-8ee9-4a7c-8c63-170de93f6bf5,Namespace:calico-system,Attempt:0,} returns sandbox id \"86c62af35dc2adcd09d3ce9e2d1ad5ab8758e417e6b448b729379c0282fdb9ee\"" Aug 13 01:23:26.722416 env[1382]: time="2025-08-13T01:23:26.722404371Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\"" Aug 13 01:23:26.774018 kubelet[2284]: E0813 01:23:26.773996 2284 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:23:26.774018 kubelet[2284]: W0813 01:23:26.774010 2284 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:23:26.774018 kubelet[2284]: E0813 01:23:26.774023 2284 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:23:26.776175 kubelet[2284]: E0813 01:23:26.776043 2284 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:23:26.776175 kubelet[2284]: W0813 01:23:26.776053 2284 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:23:26.776175 kubelet[2284]: E0813 01:23:26.776068 2284 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 01:23:26.776255 kubelet[2284]: E0813 01:23:26.776191 2284 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:23:26.776255 kubelet[2284]: W0813 01:23:26.776197 2284 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:23:26.776255 kubelet[2284]: E0813 01:23:26.776210 2284 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:23:26.776359 kubelet[2284]: E0813 01:23:26.776350 2284 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:23:26.776359 kubelet[2284]: W0813 01:23:26.776357 2284 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:23:26.776420 kubelet[2284]: E0813 01:23:26.776365 2284 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:23:26.776479 kubelet[2284]: E0813 01:23:26.776471 2284 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:23:26.776479 kubelet[2284]: W0813 01:23:26.776477 2284 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:23:26.776540 kubelet[2284]: E0813 01:23:26.776485 2284 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:23:26.776616 kubelet[2284]: E0813 01:23:26.776605 2284 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:23:26.776616 kubelet[2284]: W0813 01:23:26.776613 2284 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:23:26.776670 kubelet[2284]: E0813 01:23:26.776621 2284 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:23:26.776900 kubelet[2284]: E0813 01:23:26.776731 2284 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:23:26.776900 kubelet[2284]: W0813 01:23:26.776736 2284 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:23:26.776900 kubelet[2284]: E0813 01:23:26.776743 2284 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 01:23:26.776900 kubelet[2284]: E0813 01:23:26.776833 2284 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:23:26.776900 kubelet[2284]: W0813 01:23:26.776856 2284 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:23:26.776900 kubelet[2284]: E0813 01:23:26.776875 2284 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:23:26.777016 kubelet[2284]: E0813 01:23:26.776983 2284 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:23:26.777016 kubelet[2284]: W0813 01:23:26.776988 2284 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:23:26.777016 kubelet[2284]: E0813 01:23:26.777007 2284 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:23:26.777107 kubelet[2284]: E0813 01:23:26.777099 2284 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:23:26.777107 kubelet[2284]: W0813 01:23:26.777105 2284 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:23:26.777164 kubelet[2284]: E0813 01:23:26.777112 2284 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:23:26.777237 kubelet[2284]: E0813 01:23:26.777207 2284 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:23:26.777237 kubelet[2284]: W0813 01:23:26.777214 2284 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:23:26.777294 kubelet[2284]: E0813 01:23:26.777246 2284 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:23:26.777356 kubelet[2284]: E0813 01:23:26.777320 2284 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:23:26.777356 kubelet[2284]: W0813 01:23:26.777328 2284 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:23:26.777408 kubelet[2284]: E0813 01:23:26.777367 2284 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 01:23:26.777431 kubelet[2284]: E0813 01:23:26.777427 2284 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:23:26.777460 kubelet[2284]: W0813 01:23:26.777432 2284 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:23:26.777510 kubelet[2284]: E0813 01:23:26.777494 2284 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:23:26.777545 kubelet[2284]: E0813 01:23:26.777518 2284 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:23:26.777545 kubelet[2284]: W0813 01:23:26.777524 2284 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:23:26.777545 kubelet[2284]: E0813 01:23:26.777531 2284 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:23:26.777724 kubelet[2284]: E0813 01:23:26.777647 2284 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:23:26.777724 kubelet[2284]: W0813 01:23:26.777653 2284 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:23:26.777724 kubelet[2284]: E0813 01:23:26.777660 2284 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:23:26.777857 kubelet[2284]: E0813 01:23:26.777756 2284 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:23:26.777857 kubelet[2284]: W0813 01:23:26.777855 2284 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:23:26.777908 kubelet[2284]: E0813 01:23:26.777861 2284 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:23:26.778711 kubelet[2284]: E0813 01:23:26.778109 2284 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:23:26.778711 kubelet[2284]: W0813 01:23:26.778119 2284 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:23:26.778711 kubelet[2284]: E0813 01:23:26.778125 2284 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 01:23:26.778711 kubelet[2284]: E0813 01:23:26.778208 2284 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:23:26.778711 kubelet[2284]: W0813 01:23:26.778213 2284 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:23:26.778711 kubelet[2284]: E0813 01:23:26.778221 2284 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:23:26.778711 kubelet[2284]: E0813 01:23:26.778294 2284 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:23:26.778711 kubelet[2284]: W0813 01:23:26.778299 2284 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:23:26.778711 kubelet[2284]: E0813 01:23:26.778303 2284 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:23:26.778711 kubelet[2284]: E0813 01:23:26.778384 2284 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:23:26.778915 kubelet[2284]: W0813 01:23:26.778388 2284 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:23:26.778915 kubelet[2284]: E0813 01:23:26.778393 2284 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:23:26.778915 kubelet[2284]: E0813 01:23:26.778474 2284 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:23:26.778915 kubelet[2284]: W0813 01:23:26.778478 2284 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:23:26.778915 kubelet[2284]: E0813 01:23:26.778483 2284 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:23:26.779659 kubelet[2284]: E0813 01:23:26.779652 2284 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:23:26.779747 kubelet[2284]: W0813 01:23:26.779738 2284 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:23:26.779798 kubelet[2284]: E0813 01:23:26.779789 2284 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 01:23:26.950357 kubelet[2284]: E0813 01:23:26.950294 2284 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7kfxn" podUID="24914c97-a643-4e2a-b954-9959ef2f43e1" Aug 13 01:23:26.956789 env[1382]: time="2025-08-13T01:23:26.956767506Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-r9dzz,Uid:817ac15d-d3b1-4be9-88ec-ca96aa4bc373,Namespace:calico-system,Attempt:0,}" Aug 13 01:23:26.968922 kubelet[2284]: E0813 01:23:26.968911 2284 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:23:26.968994 kubelet[2284]: W0813 01:23:26.968983 2284 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:23:26.969049 kubelet[2284]: E0813 01:23:26.969040 2284 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:23:26.969199 kubelet[2284]: E0813 01:23:26.969193 2284 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:23:26.969248 kubelet[2284]: W0813 01:23:26.969239 2284 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:23:26.969295 kubelet[2284]: E0813 01:23:26.969287 2284 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:23:26.969476 kubelet[2284]: E0813 01:23:26.969468 2284 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:23:26.969726 kubelet[2284]: W0813 01:23:26.969717 2284 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:23:26.969773 kubelet[2284]: E0813 01:23:26.969765 2284 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:23:26.969924 kubelet[2284]: E0813 01:23:26.969917 2284 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:23:26.969974 kubelet[2284]: W0813 01:23:26.969966 2284 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:23:26.970021 kubelet[2284]: E0813 01:23:26.970013 2284 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 01:23:26.970154 kubelet[2284]: E0813 01:23:26.970148 2284 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:23:26.970200 kubelet[2284]: W0813 01:23:26.970191 2284 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:23:26.970247 kubelet[2284]: E0813 01:23:26.970238 2284 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:23:26.970371 kubelet[2284]: E0813 01:23:26.970365 2284 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:23:26.970418 kubelet[2284]: W0813 01:23:26.970410 2284 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:23:26.970463 kubelet[2284]: E0813 01:23:26.970456 2284 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:23:26.970573 kubelet[2284]: E0813 01:23:26.970567 2284 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:23:26.970618 kubelet[2284]: W0813 01:23:26.970610 2284 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:23:26.970667 kubelet[2284]: E0813 01:23:26.970660 2284 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:23:26.970786 kubelet[2284]: E0813 01:23:26.970781 2284 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:23:26.970834 kubelet[2284]: W0813 01:23:26.970826 2284 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:23:26.970892 kubelet[2284]: E0813 01:23:26.970871 2284 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:23:26.971009 kubelet[2284]: E0813 01:23:26.971003 2284 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:23:26.971053 kubelet[2284]: W0813 01:23:26.971045 2284 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:23:26.971096 kubelet[2284]: E0813 01:23:26.971089 2284 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 01:23:26.971228 kubelet[2284]: E0813 01:23:26.971222 2284 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:23:26.971273 kubelet[2284]: W0813 01:23:26.971265 2284 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:23:26.971316 kubelet[2284]: E0813 01:23:26.971308 2284 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:23:26.971453 kubelet[2284]: E0813 01:23:26.971447 2284 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:23:26.971500 kubelet[2284]: W0813 01:23:26.971491 2284 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:23:26.971548 kubelet[2284]: E0813 01:23:26.971540 2284 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:23:26.971682 kubelet[2284]: E0813 01:23:26.971676 2284 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:23:26.971748 kubelet[2284]: W0813 01:23:26.971739 2284 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:23:26.971795 kubelet[2284]: E0813 01:23:26.971787 2284 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:23:26.971998 kubelet[2284]: E0813 01:23:26.971991 2284 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:23:26.972045 kubelet[2284]: W0813 01:23:26.972037 2284 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:23:26.972091 kubelet[2284]: E0813 01:23:26.972083 2284 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:23:26.972220 kubelet[2284]: E0813 01:23:26.972214 2284 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:23:26.972267 kubelet[2284]: W0813 01:23:26.972259 2284 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:23:26.972310 kubelet[2284]: E0813 01:23:26.972302 2284 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 01:23:26.972435 kubelet[2284]: E0813 01:23:26.972429 2284 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:23:26.972477 kubelet[2284]: W0813 01:23:26.972470 2284 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:23:26.972523 kubelet[2284]: E0813 01:23:26.972515 2284 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:23:26.972646 kubelet[2284]: E0813 01:23:26.972641 2284 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:23:26.972702 kubelet[2284]: W0813 01:23:26.972685 2284 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:23:26.972775 kubelet[2284]: E0813 01:23:26.972744 2284 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:23:26.972959 kubelet[2284]: E0813 01:23:26.972953 2284 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:23:26.973007 kubelet[2284]: W0813 01:23:26.972999 2284 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:23:26.973054 kubelet[2284]: E0813 01:23:26.973046 2284 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:23:26.973178 kubelet[2284]: E0813 01:23:26.973173 2284 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:23:26.973225 kubelet[2284]: W0813 01:23:26.973218 2284 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:23:26.973269 kubelet[2284]: E0813 01:23:26.973261 2284 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:23:26.973392 kubelet[2284]: E0813 01:23:26.973386 2284 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:23:26.973437 kubelet[2284]: W0813 01:23:26.973430 2284 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:23:26.973480 kubelet[2284]: E0813 01:23:26.973473 2284 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 01:23:26.973598 kubelet[2284]: E0813 01:23:26.973593 2284 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:23:26.973645 kubelet[2284]: W0813 01:23:26.973637 2284 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:23:26.973741 kubelet[2284]: E0813 01:23:26.973683 2284 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:23:26.973914 kubelet[2284]: E0813 01:23:26.973907 2284 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:23:26.973959 kubelet[2284]: W0813 01:23:26.973951 2284 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:23:26.974006 kubelet[2284]: E0813 01:23:26.973999 2284 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:23:26.974064 kubelet[2284]: I0813 01:23:26.974055 2284 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/24914c97-a643-4e2a-b954-9959ef2f43e1-socket-dir\") pod \"csi-node-driver-7kfxn\" (UID: \"24914c97-a643-4e2a-b954-9959ef2f43e1\") " pod="calico-system/csi-node-driver-7kfxn" Aug 13 01:23:26.974192 kubelet[2284]: E0813 01:23:26.974186 2284 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:23:26.974238 kubelet[2284]: W0813 01:23:26.974230 2284 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:23:26.974289 kubelet[2284]: E0813 01:23:26.974281 2284 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:23:26.977182 kubelet[2284]: I0813 01:23:26.974339 2284 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/24914c97-a643-4e2a-b954-9959ef2f43e1-varrun\") pod \"csi-node-driver-7kfxn\" (UID: \"24914c97-a643-4e2a-b954-9959ef2f43e1\") " pod="calico-system/csi-node-driver-7kfxn" Aug 13 01:23:26.977182 kubelet[2284]: E0813 01:23:26.974439 2284 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:23:26.977182 kubelet[2284]: W0813 01:23:26.974446 2284 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:23:26.977182 kubelet[2284]: E0813 01:23:26.974457 2284 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 01:23:26.977182 kubelet[2284]: E0813 01:23:26.974556 2284 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:23:26.977182 kubelet[2284]: W0813 01:23:26.974561 2284 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:23:26.977182 kubelet[2284]: E0813 01:23:26.974570 2284 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:23:26.977182 kubelet[2284]: E0813 01:23:26.974667 2284 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:23:26.977182 kubelet[2284]: W0813 01:23:26.974672 2284 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:23:26.979373 kubelet[2284]: E0813 01:23:26.974681 2284 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:23:26.979373 kubelet[2284]: I0813 01:23:26.974708 2284 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/24914c97-a643-4e2a-b954-9959ef2f43e1-registration-dir\") pod \"csi-node-driver-7kfxn\" (UID: \"24914c97-a643-4e2a-b954-9959ef2f43e1\") " pod="calico-system/csi-node-driver-7kfxn" Aug 13 01:23:26.979373 kubelet[2284]: E0813 01:23:26.974766 2284 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:23:26.979373 kubelet[2284]: W0813 01:23:26.974772 2284 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:23:26.979373 kubelet[2284]: E0813 01:23:26.974782 2284 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:23:26.979373 kubelet[2284]: E0813 01:23:26.974873 2284 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:23:26.979373 kubelet[2284]: W0813 01:23:26.974879 2284 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:23:26.979373 kubelet[2284]: E0813 01:23:26.974886 2284 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 01:23:26.979517 kubelet[2284]: I0813 01:23:26.974895 2284 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/24914c97-a643-4e2a-b954-9959ef2f43e1-kubelet-dir\") pod \"csi-node-driver-7kfxn\" (UID: \"24914c97-a643-4e2a-b954-9959ef2f43e1\") " pod="calico-system/csi-node-driver-7kfxn" Aug 13 01:23:26.979517 kubelet[2284]: E0813 01:23:26.975077 2284 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:23:26.979517 kubelet[2284]: W0813 01:23:26.975083 2284 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:23:26.979517 kubelet[2284]: E0813 01:23:26.975092 2284 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:23:26.979517 kubelet[2284]: E0813 01:23:26.975185 2284 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:23:26.979517 kubelet[2284]: W0813 01:23:26.975190 2284 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:23:26.979517 kubelet[2284]: E0813 01:23:26.975195 2284 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:23:26.979517 kubelet[2284]: E0813 01:23:26.975290 2284 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:23:26.979517 kubelet[2284]: W0813 01:23:26.975296 2284 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:23:26.979669 kubelet[2284]: E0813 01:23:26.975305 2284 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:23:26.979669 kubelet[2284]: E0813 01:23:26.975394 2284 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:23:26.979669 kubelet[2284]: W0813 01:23:26.975399 2284 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:23:26.979669 kubelet[2284]: E0813 01:23:26.975404 2284 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 01:23:26.979669 kubelet[2284]: E0813 01:23:26.975495 2284 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:23:26.979669 kubelet[2284]: W0813 01:23:26.975500 2284 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:23:26.979669 kubelet[2284]: E0813 01:23:26.975504 2284 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:23:26.979669 kubelet[2284]: E0813 01:23:26.975592 2284 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:23:26.979669 kubelet[2284]: W0813 01:23:26.975597 2284 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:23:26.979669 kubelet[2284]: E0813 01:23:26.975602 2284 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:23:26.979857 kubelet[2284]: I0813 01:23:26.975615 2284 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bg6cn\" (UniqueName: \"kubernetes.io/projected/24914c97-a643-4e2a-b954-9959ef2f43e1-kube-api-access-bg6cn\") pod \"csi-node-driver-7kfxn\" (UID: \"24914c97-a643-4e2a-b954-9959ef2f43e1\") " pod="calico-system/csi-node-driver-7kfxn" Aug 13 01:23:26.979857 kubelet[2284]: E0813 01:23:26.975733 2284 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:23:26.979857 kubelet[2284]: W0813 01:23:26.975739 2284 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:23:26.979857 kubelet[2284]: E0813 01:23:26.975744 2284 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:23:26.979857 kubelet[2284]: E0813 01:23:26.975828 2284 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:23:26.979857 kubelet[2284]: W0813 01:23:26.975833 2284 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:23:26.979857 kubelet[2284]: E0813 01:23:26.975838 2284 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:23:26.986912 env[1382]: time="2025-08-13T01:23:26.986863317Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 01:23:26.986912 env[1382]: time="2025-08-13T01:23:26.986897128Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 01:23:26.987015 env[1382]: time="2025-08-13T01:23:26.986991913Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 01:23:26.987728 env[1382]: time="2025-08-13T01:23:26.987192385Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/d9a707e69009da17be3710545f1088f68fc8513c2d89dd919d29ac5bf2c75412 pid=2784 runtime=io.containerd.runc.v2 Aug 13 01:23:27.026276 env[1382]: time="2025-08-13T01:23:27.026237785Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-r9dzz,Uid:817ac15d-d3b1-4be9-88ec-ca96aa4bc373,Namespace:calico-system,Attempt:0,} returns sandbox id \"d9a707e69009da17be3710545f1088f68fc8513c2d89dd919d29ac5bf2c75412\"" Aug 13 01:23:27.077631 kubelet[2284]: E0813 01:23:27.077523 2284 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:23:27.077631 kubelet[2284]: W0813 01:23:27.077546 2284 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:23:27.077631 kubelet[2284]: E0813 01:23:27.077562 2284 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:23:27.077877 kubelet[2284]: E0813 01:23:27.077834 2284 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:23:27.077877 kubelet[2284]: W0813 01:23:27.077841 2284 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:23:27.077877 kubelet[2284]: E0813 01:23:27.077851 2284 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:23:27.077983 kubelet[2284]: E0813 01:23:27.077964 2284 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:23:27.077983 kubelet[2284]: W0813 01:23:27.077974 2284 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:23:27.078044 kubelet[2284]: E0813 01:23:27.077991 2284 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:23:27.078094 kubelet[2284]: E0813 01:23:27.078085 2284 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:23:27.078094 kubelet[2284]: W0813 01:23:27.078092 2284 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:23:27.078151 kubelet[2284]: E0813 01:23:27.078100 2284 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 01:23:27.078196 kubelet[2284]: E0813 01:23:27.078185 2284 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:23:27.078230 kubelet[2284]: W0813 01:23:27.078198 2284 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:23:27.078230 kubelet[2284]: E0813 01:23:27.078204 2284 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:23:27.078330 kubelet[2284]: E0813 01:23:27.078319 2284 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:23:27.078330 kubelet[2284]: W0813 01:23:27.078326 2284 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:23:27.078330 kubelet[2284]: E0813 01:23:27.078334 2284 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:23:27.078445 kubelet[2284]: E0813 01:23:27.078435 2284 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:23:27.078445 kubelet[2284]: W0813 01:23:27.078442 2284 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:23:27.078500 kubelet[2284]: E0813 01:23:27.078450 2284 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:23:27.078561 kubelet[2284]: E0813 01:23:27.078551 2284 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:23:27.078597 kubelet[2284]: W0813 01:23:27.078566 2284 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:23:27.078597 kubelet[2284]: E0813 01:23:27.078576 2284 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:23:27.078683 kubelet[2284]: E0813 01:23:27.078671 2284 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:23:27.078683 kubelet[2284]: W0813 01:23:27.078683 2284 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:23:27.078752 kubelet[2284]: E0813 01:23:27.078704 2284 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 01:23:27.078822 kubelet[2284]: E0813 01:23:27.078812 2284 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:23:27.078822 kubelet[2284]: W0813 01:23:27.078819 2284 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:23:27.078886 kubelet[2284]: E0813 01:23:27.078831 2284 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:23:27.078917 kubelet[2284]: E0813 01:23:27.078914 2284 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:23:27.078941 kubelet[2284]: W0813 01:23:27.078919 2284 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:23:27.078941 kubelet[2284]: E0813 01:23:27.078924 2284 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:23:27.079104 kubelet[2284]: E0813 01:23:27.079094 2284 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:23:27.079104 kubelet[2284]: W0813 01:23:27.079102 2284 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:23:27.079171 kubelet[2284]: E0813 01:23:27.079152 2284 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:23:27.079230 kubelet[2284]: E0813 01:23:27.079221 2284 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:23:27.079230 kubelet[2284]: W0813 01:23:27.079228 2284 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:23:27.079309 kubelet[2284]: E0813 01:23:27.079274 2284 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:23:27.079334 kubelet[2284]: E0813 01:23:27.079329 2284 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:23:27.079358 kubelet[2284]: W0813 01:23:27.079333 2284 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:23:27.079358 kubelet[2284]: E0813 01:23:27.079341 2284 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 01:23:27.079536 kubelet[2284]: E0813 01:23:27.079419 2284 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:23:27.079536 kubelet[2284]: W0813 01:23:27.079426 2284 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:23:27.079536 kubelet[2284]: E0813 01:23:27.079438 2284 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:23:27.079536 kubelet[2284]: E0813 01:23:27.079519 2284 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:23:27.079536 kubelet[2284]: W0813 01:23:27.079524 2284 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:23:27.079536 kubelet[2284]: E0813 01:23:27.079533 2284 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:23:27.080635 kubelet[2284]: E0813 01:23:27.079795 2284 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:23:27.080635 kubelet[2284]: W0813 01:23:27.079810 2284 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:23:27.080635 kubelet[2284]: E0813 01:23:27.079825 2284 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:23:27.080635 kubelet[2284]: E0813 01:23:27.079920 2284 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:23:27.080635 kubelet[2284]: W0813 01:23:27.079925 2284 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:23:27.080635 kubelet[2284]: E0813 01:23:27.079931 2284 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:23:27.080635 kubelet[2284]: E0813 01:23:27.080005 2284 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:23:27.080635 kubelet[2284]: W0813 01:23:27.080010 2284 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:23:27.080635 kubelet[2284]: E0813 01:23:27.080015 2284 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 01:23:27.080635 kubelet[2284]: E0813 01:23:27.080110 2284 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:23:27.080924 kubelet[2284]: W0813 01:23:27.080115 2284 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:23:27.080924 kubelet[2284]: E0813 01:23:27.080120 2284 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:23:27.080924 kubelet[2284]: E0813 01:23:27.080336 2284 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:23:27.080924 kubelet[2284]: W0813 01:23:27.080341 2284 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:23:27.080924 kubelet[2284]: E0813 01:23:27.080347 2284 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:23:27.080924 kubelet[2284]: E0813 01:23:27.080428 2284 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:23:27.080924 kubelet[2284]: W0813 01:23:27.080432 2284 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:23:27.080924 kubelet[2284]: E0813 01:23:27.080437 2284 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:23:27.080924 kubelet[2284]: E0813 01:23:27.080510 2284 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:23:27.080924 kubelet[2284]: W0813 01:23:27.080514 2284 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:23:27.081119 kubelet[2284]: E0813 01:23:27.080519 2284 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:23:27.081119 kubelet[2284]: E0813 01:23:27.080612 2284 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:23:27.081119 kubelet[2284]: W0813 01:23:27.080617 2284 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:23:27.081119 kubelet[2284]: E0813 01:23:27.080622 2284 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 01:23:27.083876 kubelet[2284]: E0813 01:23:27.083860 2284 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:23:27.083876 kubelet[2284]: W0813 01:23:27.083872 2284 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:23:27.083960 kubelet[2284]: E0813 01:23:27.083882 2284 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:23:27.092775 kubelet[2284]: E0813 01:23:27.092763 2284 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:23:27.092860 kubelet[2284]: W0813 01:23:27.092850 2284 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:23:27.092916 kubelet[2284]: E0813 01:23:27.092905 2284 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:23:27.328000 audit[2844]: NETFILTER_CFG table=filter:97 family=2 entries=20 op=nft_register_rule pid=2844 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 01:23:27.328000 audit[2844]: SYSCALL arch=c000003e syscall=46 success=yes exit=8224 a0=3 a1=7fff645769b0 a2=0 a3=7fff6457699c items=0 ppid=2387 pid=2844 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:23:27.328000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 01:23:27.332000 audit[2844]: NETFILTER_CFG table=nat:98 family=2 entries=12 op=nft_register_rule pid=2844 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 01:23:27.332000 audit[2844]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7fff645769b0 a2=0 a3=0 items=0 ppid=2387 pid=2844 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:23:27.332000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 01:23:27.476995 systemd[1]: run-containerd-runc-k8s.io-86c62af35dc2adcd09d3ce9e2d1ad5ab8758e417e6b448b729379c0282fdb9ee-runc.Ixgt0Z.mount: Deactivated successfully. Aug 13 01:23:27.969109 kubelet[2284]: E0813 01:23:27.968782 2284 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7kfxn" podUID="24914c97-a643-4e2a-b954-9959ef2f43e1" Aug 13 01:23:28.194096 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3762789747.mount: Deactivated successfully. 
Aug 13 01:23:29.273591 env[1382]: time="2025-08-13T01:23:29.273561487Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/typha:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 01:23:29.274667 env[1382]: time="2025-08-13T01:23:29.274649415Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 01:23:29.275423 env[1382]: time="2025-08-13T01:23:29.275410609Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/typha:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 01:23:29.276197 env[1382]: time="2025-08-13T01:23:29.276185427Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 01:23:29.276601 env[1382]: time="2025-08-13T01:23:29.276586657Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\" returns image reference \"sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54\"" Aug 13 01:23:29.279096 env[1382]: time="2025-08-13T01:23:29.279068198Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\"" Aug 13 01:23:29.287891 env[1382]: time="2025-08-13T01:23:29.287850413Z" level=info msg="CreateContainer within sandbox \"86c62af35dc2adcd09d3ce9e2d1ad5ab8758e417e6b448b729379c0282fdb9ee\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Aug 13 01:23:29.294402 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3035536404.mount: Deactivated successfully. 
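A successful pull surfaces in the containerd log as a series of ImageCreate/ImageUpdate events, one each for the tag (ghcr.io/flatcar/calico/typha:v3.30.2), the image config digest (sha256:b3baa6...), and the repo digest (typha@sha256:da29d7...), capped by a "PullImage ... returns image reference" message whose value is the image ID the kubelet runs the container from. A rough, purely illustrative sketch of pulling that tag-to-ID mapping out of log text like the lines above (PULL_RE and pulled_images are made-up names for this example):

    import re

    # Matches msg="PullImage \"<tag>\" returns image reference \"sha256:<id>\"" in containerd logs.
    PULL_RE = re.compile(
        r'PullImage \\"(?P<tag>[^"\\]+)\\" returns image reference \\"(?P<image_id>sha256:[0-9a-f]+)\\"'
    )

    def pulled_images(log_text: str) -> dict:
        return {m.group("tag"): m.group("image_id") for m in PULL_RE.finditer(log_text)}

Fed the line above, this yields {'ghcr.io/flatcar/calico/typha:v3.30.2': 'sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54'}.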
Aug 13 01:23:29.296601 env[1382]: time="2025-08-13T01:23:29.296581115Z" level=info msg="CreateContainer within sandbox \"86c62af35dc2adcd09d3ce9e2d1ad5ab8758e417e6b448b729379c0282fdb9ee\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"c5c431bab06416d4be39f0e8cc81d9102be1ee48420f50f4fffb22b2b3f29bf0\"" Aug 13 01:23:29.297899 env[1382]: time="2025-08-13T01:23:29.297884170Z" level=info msg="StartContainer for \"c5c431bab06416d4be39f0e8cc81d9102be1ee48420f50f4fffb22b2b3f29bf0\"" Aug 13 01:23:29.349581 env[1382]: time="2025-08-13T01:23:29.349555133Z" level=info msg="StartContainer for \"c5c431bab06416d4be39f0e8cc81d9102be1ee48420f50f4fffb22b2b3f29bf0\" returns successfully" Aug 13 01:23:29.969383 kubelet[2284]: E0813 01:23:29.969346 2284 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7kfxn" podUID="24914c97-a643-4e2a-b954-9959ef2f43e1" Aug 13 01:23:30.094302 kubelet[2284]: E0813 01:23:30.094277 2284 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:23:30.094302 kubelet[2284]: W0813 01:23:30.094298 2284 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:23:30.094451 kubelet[2284]: E0813 01:23:30.094329 2284 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:23:30.094496 kubelet[2284]: E0813 01:23:30.094483 2284 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:23:30.094496 kubelet[2284]: W0813 01:23:30.094492 2284 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:23:30.094558 kubelet[2284]: E0813 01:23:30.094501 2284 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:23:30.094598 kubelet[2284]: E0813 01:23:30.094586 2284 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:23:30.094598 kubelet[2284]: W0813 01:23:30.094595 2284 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:23:30.094654 kubelet[2284]: E0813 01:23:30.094600 2284 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 01:23:30.094727 kubelet[2284]: E0813 01:23:30.094685 2284 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:23:30.094727 kubelet[2284]: W0813 01:23:30.094724 2284 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:23:30.094788 kubelet[2284]: E0813 01:23:30.094732 2284 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:23:30.094835 kubelet[2284]: E0813 01:23:30.094826 2284 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:23:30.094835 kubelet[2284]: W0813 01:23:30.094833 2284 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:23:30.094891 kubelet[2284]: E0813 01:23:30.094838 2284 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:23:30.094940 kubelet[2284]: E0813 01:23:30.094930 2284 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:23:30.094940 kubelet[2284]: W0813 01:23:30.094938 2284 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:23:30.094997 kubelet[2284]: E0813 01:23:30.094945 2284 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:23:30.095030 kubelet[2284]: E0813 01:23:30.095020 2284 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:23:30.095030 kubelet[2284]: W0813 01:23:30.095026 2284 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:23:30.095086 kubelet[2284]: E0813 01:23:30.095032 2284 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:23:30.095124 kubelet[2284]: E0813 01:23:30.095113 2284 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:23:30.095124 kubelet[2284]: W0813 01:23:30.095121 2284 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:23:30.095182 kubelet[2284]: E0813 01:23:30.095126 2284 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 01:23:30.095232 kubelet[2284]: E0813 01:23:30.095223 2284 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:23:30.095232 kubelet[2284]: W0813 01:23:30.095230 2284 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:23:30.095288 kubelet[2284]: E0813 01:23:30.095236 2284 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:23:30.095325 kubelet[2284]: E0813 01:23:30.095313 2284 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:23:30.095325 kubelet[2284]: W0813 01:23:30.095321 2284 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:23:30.095382 kubelet[2284]: E0813 01:23:30.095326 2284 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:23:30.095420 kubelet[2284]: E0813 01:23:30.095409 2284 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:23:30.095420 kubelet[2284]: W0813 01:23:30.095417 2284 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:23:30.095475 kubelet[2284]: E0813 01:23:30.095421 2284 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:23:30.095509 kubelet[2284]: E0813 01:23:30.095498 2284 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:23:30.095509 kubelet[2284]: W0813 01:23:30.095506 2284 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:23:30.095566 kubelet[2284]: E0813 01:23:30.095512 2284 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:23:30.095606 kubelet[2284]: E0813 01:23:30.095594 2284 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:23:30.095606 kubelet[2284]: W0813 01:23:30.095604 2284 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:23:30.095661 kubelet[2284]: E0813 01:23:30.095610 2284 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 01:23:30.095739 kubelet[2284]: E0813 01:23:30.095727 2284 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:23:30.095739 kubelet[2284]: W0813 01:23:30.095738 2284 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:23:30.095805 kubelet[2284]: E0813 01:23:30.095745 2284 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:23:30.095866 kubelet[2284]: E0813 01:23:30.095854 2284 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:23:30.095898 kubelet[2284]: W0813 01:23:30.095871 2284 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:23:30.095898 kubelet[2284]: E0813 01:23:30.095882 2284 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:23:30.098290 kubelet[2284]: E0813 01:23:30.098135 2284 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:23:30.098290 kubelet[2284]: W0813 01:23:30.098143 2284 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:23:30.098290 kubelet[2284]: E0813 01:23:30.098151 2284 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:23:30.098290 kubelet[2284]: E0813 01:23:30.098253 2284 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:23:30.098290 kubelet[2284]: W0813 01:23:30.098258 2284 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:23:30.098290 kubelet[2284]: E0813 01:23:30.098268 2284 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:23:30.098433 kubelet[2284]: E0813 01:23:30.098358 2284 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:23:30.098433 kubelet[2284]: W0813 01:23:30.098364 2284 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:23:30.098433 kubelet[2284]: E0813 01:23:30.098375 2284 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 01:23:30.098512 kubelet[2284]: E0813 01:23:30.098465 2284 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:23:30.098512 kubelet[2284]: W0813 01:23:30.098470 2284 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:23:30.098512 kubelet[2284]: E0813 01:23:30.098475 2284 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:23:30.098569 kubelet[2284]: E0813 01:23:30.098554 2284 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:23:30.098569 kubelet[2284]: W0813 01:23:30.098559 2284 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:23:30.098569 kubelet[2284]: E0813 01:23:30.098564 2284 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:23:30.098665 kubelet[2284]: E0813 01:23:30.098653 2284 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:23:30.098665 kubelet[2284]: W0813 01:23:30.098662 2284 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:23:30.098734 kubelet[2284]: E0813 01:23:30.098669 2284 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:23:30.098882 kubelet[2284]: E0813 01:23:30.098841 2284 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:23:30.098882 kubelet[2284]: W0813 01:23:30.098847 2284 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:23:30.098882 kubelet[2284]: E0813 01:23:30.098857 2284 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:23:30.098960 kubelet[2284]: E0813 01:23:30.098938 2284 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:23:30.098960 kubelet[2284]: W0813 01:23:30.098943 2284 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:23:30.098960 kubelet[2284]: E0813 01:23:30.098952 2284 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 01:23:30.099052 kubelet[2284]: E0813 01:23:30.099041 2284 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:23:30.099088 kubelet[2284]: W0813 01:23:30.099050 2284 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:23:30.099088 kubelet[2284]: E0813 01:23:30.099062 2284 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:23:30.099175 kubelet[2284]: E0813 01:23:30.099161 2284 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:23:30.099175 kubelet[2284]: W0813 01:23:30.099173 2284 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:23:30.099234 kubelet[2284]: E0813 01:23:30.099180 2284 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:23:30.099299 kubelet[2284]: E0813 01:23:30.099287 2284 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:23:30.099299 kubelet[2284]: W0813 01:23:30.099297 2284 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:23:30.099355 kubelet[2284]: E0813 01:23:30.099309 2284 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:23:30.099500 kubelet[2284]: E0813 01:23:30.099468 2284 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:23:30.099500 kubelet[2284]: W0813 01:23:30.099474 2284 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:23:30.099500 kubelet[2284]: E0813 01:23:30.099481 2284 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:23:30.099576 kubelet[2284]: E0813 01:23:30.099566 2284 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:23:30.099576 kubelet[2284]: W0813 01:23:30.099573 2284 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:23:30.099628 kubelet[2284]: E0813 01:23:30.099582 2284 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 01:23:30.099734 kubelet[2284]: E0813 01:23:30.099686 2284 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:23:30.099734 kubelet[2284]: W0813 01:23:30.099730 2284 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:23:30.099803 kubelet[2284]: E0813 01:23:30.099740 2284 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:23:30.099850 kubelet[2284]: E0813 01:23:30.099837 2284 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:23:30.099850 kubelet[2284]: W0813 01:23:30.099848 2284 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:23:30.099911 kubelet[2284]: E0813 01:23:30.099855 2284 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:23:30.099951 kubelet[2284]: E0813 01:23:30.099939 2284 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:23:30.099951 kubelet[2284]: W0813 01:23:30.099948 2284 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:23:30.100008 kubelet[2284]: E0813 01:23:30.099953 2284 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:23:30.100056 kubelet[2284]: E0813 01:23:30.100047 2284 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:23:30.100056 kubelet[2284]: W0813 01:23:30.100054 2284 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:23:30.100109 kubelet[2284]: E0813 01:23:30.100059 2284 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:23:30.100242 kubelet[2284]: E0813 01:23:30.100232 2284 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:23:30.100275 kubelet[2284]: W0813 01:23:30.100242 2284 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:23:30.100275 kubelet[2284]: E0813 01:23:30.100249 2284 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 01:23:30.744891 env[1382]: time="2025-08-13T01:23:30.744859188Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 01:23:30.749588 env[1382]: time="2025-08-13T01:23:30.749564812Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 01:23:30.753719 env[1382]: time="2025-08-13T01:23:30.753400560Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 01:23:30.760832 env[1382]: time="2025-08-13T01:23:30.760811175Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 01:23:30.761138 env[1382]: time="2025-08-13T01:23:30.761119004Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" returns image reference \"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\"" Aug 13 01:23:30.763516 env[1382]: time="2025-08-13T01:23:30.763496915Z" level=info msg="CreateContainer within sandbox \"d9a707e69009da17be3710545f1088f68fc8513c2d89dd919d29ac5bf2c75412\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Aug 13 01:23:30.797971 env[1382]: time="2025-08-13T01:23:30.797939830Z" level=info msg="CreateContainer within sandbox \"d9a707e69009da17be3710545f1088f68fc8513c2d89dd919d29ac5bf2c75412\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"f9faaaad6c14205155109688d8b46eed64f6103b5c19999ecef498c2619b8cfa\"" Aug 13 01:23:30.798361 env[1382]: time="2025-08-13T01:23:30.798344033Z" level=info msg="StartContainer for \"f9faaaad6c14205155109688d8b46eed64f6103b5c19999ecef498c2619b8cfa\"" Aug 13 01:23:30.822186 systemd[1]: run-containerd-runc-k8s.io-f9faaaad6c14205155109688d8b46eed64f6103b5c19999ecef498c2619b8cfa-runc.1pq0ym.mount: Deactivated successfully. 
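The flexvol-driver init container whose runc mount appears just above comes from the pod2daemon-flexvol image pulled a few lines earlier; its job is to drop the uds binary into /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/, which is exactly the executable the kubelet's repeated plugin probes could not find. Because the probe gets no output at all from the missing driver, unmarshalling the empty string as JSON produces the "unexpected end of JSON input" noise above; once installed, a FlexVolume driver is expected to answer init with a small JSON status object on stdout. A minimal, purely illustrative driver sketch (a real driver also implements mount, unmount and the rest):

    #!/usr/bin/env python3
    # Sketch of a FlexVolume driver entry point: answer "init" with a JSON status.
    import json
    import sys

    def main() -> int:
        cmd = sys.argv[1] if len(sys.argv) > 1 else ""
        if cmd == "init":
            # The kubelet parses this JSON; an empty reply is what triggers the
            # "unexpected end of JSON input" errors seen in the log.
            print(json.dumps({"status": "Success", "capabilities": {"attach": False}}))
            return 0
        print(json.dumps({"status": "Not supported", "message": "unhandled command: " + cmd}))
        return 1

    if __name__ == "__main__":
        sys.exit(main())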
Aug 13 01:23:30.840601 env[1382]: time="2025-08-13T01:23:30.840567739Z" level=info msg="StartContainer for \"f9faaaad6c14205155109688d8b46eed64f6103b5c19999ecef498c2619b8cfa\" returns successfully" Aug 13 01:23:31.059666 kubelet[2284]: I0813 01:23:31.059592 2284 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Aug 13 01:23:31.079005 kubelet[2284]: I0813 01:23:31.078968 2284 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-56bb99746-wz6fw" podStartSLOduration=2.523961551 podStartE2EDuration="5.078956128s" podCreationTimestamp="2025-08-13 01:23:26 +0000 UTC" firstStartedPulling="2025-08-13 01:23:26.722185826 +0000 UTC m=+18.007406681" lastFinishedPulling="2025-08-13 01:23:29.277180403 +0000 UTC m=+20.562401258" observedRunningTime="2025-08-13 01:23:30.063900629 +0000 UTC m=+21.349121491" watchObservedRunningTime="2025-08-13 01:23:31.078956128 +0000 UTC m=+22.364176997" Aug 13 01:23:31.249139 env[1382]: time="2025-08-13T01:23:31.249103319Z" level=info msg="shim disconnected" id=f9faaaad6c14205155109688d8b46eed64f6103b5c19999ecef498c2619b8cfa Aug 13 01:23:31.249139 env[1382]: time="2025-08-13T01:23:31.249136647Z" level=warning msg="cleaning up after shim disconnected" id=f9faaaad6c14205155109688d8b46eed64f6103b5c19999ecef498c2619b8cfa namespace=k8s.io Aug 13 01:23:31.249139 env[1382]: time="2025-08-13T01:23:31.249145031Z" level=info msg="cleaning up dead shim" Aug 13 01:23:31.255106 env[1382]: time="2025-08-13T01:23:31.255077394Z" level=warning msg="cleanup warnings time=\"2025-08-13T01:23:31Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2981 runtime=io.containerd.runc.v2\n" Aug 13 01:23:31.282150 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f9faaaad6c14205155109688d8b46eed64f6103b5c19999ecef498c2619b8cfa-rootfs.mount: Deactivated successfully. 
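The pod_startup_latency_tracker line above is internally consistent: podStartE2EDuration equals watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration is that same span minus the time spent pulling images (lastFinishedPulling minus firstStartedPulling). A quick check of the arithmetic, using the wall-clock seconds from the line (all within 01:23):

    # Reconcile the calico-typha startup durations reported by the kubelet above.
    pod_created        = 26.000000000   # podCreationTimestamp   2025-08-13 01:23:26
    first_started_pull = 26.722185826   # firstStartedPulling
    last_finished_pull = 29.277180403   # lastFinishedPulling
    watch_observed_run = 31.078956128   # watchObservedRunningTime

    e2e     = watch_observed_run - pod_created          # 5.078956128 s = podStartE2EDuration
    pulling = last_finished_pull - first_started_pull   # 2.554994577 s spent pulling the typha image
    slo     = e2e - pulling                             # 2.523961551 s = podStartSLOduration
    print(f"e2e={e2e:.9f}s pulling={pulling:.9f}s slo={slo:.9f}s")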
Aug 13 01:23:31.968553 kubelet[2284]: E0813 01:23:31.968526 2284 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7kfxn" podUID="24914c97-a643-4e2a-b954-9959ef2f43e1" Aug 13 01:23:32.061979 env[1382]: time="2025-08-13T01:23:32.061704962Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\"" Aug 13 01:23:33.969800 kubelet[2284]: E0813 01:23:33.969368 2284 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7kfxn" podUID="24914c97-a643-4e2a-b954-9959ef2f43e1" Aug 13 01:23:35.749376 env[1382]: time="2025-08-13T01:23:35.749343099Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/cni:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 01:23:35.758579 env[1382]: time="2025-08-13T01:23:35.758560845Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 01:23:35.759886 env[1382]: time="2025-08-13T01:23:35.759770845Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/cni:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 01:23:35.760974 env[1382]: time="2025-08-13T01:23:35.760958658Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 01:23:35.761517 env[1382]: time="2025-08-13T01:23:35.761497885Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\" returns image reference \"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\"" Aug 13 01:23:35.764928 env[1382]: time="2025-08-13T01:23:35.764901734Z" level=info msg="CreateContainer within sandbox \"d9a707e69009da17be3710545f1088f68fc8513c2d89dd919d29ac5bf2c75412\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Aug 13 01:23:35.773207 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount837363954.mount: Deactivated successfully. 
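The "cni plugin not initialized" status that keeps failing the csi-node-driver pod above persists because containerd has not yet found any network configuration on disk; the install-cni container created here is the piece of Calico that writes the CNI config (and the calico-kubeconfig whose write event shows up a few lines below) into /etc/cni/net.d. A small sketch of the readiness condition containerd is effectively waiting on, assuming the default config directory and an illustrative helper name:

    from pathlib import Path

    # containerd reports NetworkPluginNotReady until a CNI network config appears here.
    CNI_CONF_DIR = Path("/etc/cni/net.d")

    def cni_configured(conf_dir: Path = CNI_CONF_DIR) -> bool:
        if not conf_dir.is_dir():
            return False
        return any(p.suffix in (".conf", ".conflist", ".json") for p in conf_dir.iterdir())

    if __name__ == "__main__":
        print("CNI config present:", cni_configured())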
Aug 13 01:23:35.776064 env[1382]: time="2025-08-13T01:23:35.776024214Z" level=info msg="CreateContainer within sandbox \"d9a707e69009da17be3710545f1088f68fc8513c2d89dd919d29ac5bf2c75412\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"e322c5e2c55854d1dcdad166821bb0e76254027265b9490d62d202d1e5f0abfa\"" Aug 13 01:23:35.778557 env[1382]: time="2025-08-13T01:23:35.778540045Z" level=info msg="StartContainer for \"e322c5e2c55854d1dcdad166821bb0e76254027265b9490d62d202d1e5f0abfa\"" Aug 13 01:23:35.818792 env[1382]: time="2025-08-13T01:23:35.818763106Z" level=info msg="StartContainer for \"e322c5e2c55854d1dcdad166821bb0e76254027265b9490d62d202d1e5f0abfa\" returns successfully" Aug 13 01:23:35.969133 kubelet[2284]: E0813 01:23:35.969107 2284 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7kfxn" podUID="24914c97-a643-4e2a-b954-9959ef2f43e1" Aug 13 01:23:36.938665 env[1382]: time="2025-08-13T01:23:36.938366632Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/calico-kubeconfig\": WRITE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Aug 13 01:23:36.952856 kubelet[2284]: I0813 01:23:36.950522 2284 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Aug 13 01:23:36.957552 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e322c5e2c55854d1dcdad166821bb0e76254027265b9490d62d202d1e5f0abfa-rootfs.mount: Deactivated successfully. Aug 13 01:23:36.962351 env[1382]: time="2025-08-13T01:23:36.962310425Z" level=info msg="shim disconnected" id=e322c5e2c55854d1dcdad166821bb0e76254027265b9490d62d202d1e5f0abfa Aug 13 01:23:36.962423 env[1382]: time="2025-08-13T01:23:36.962351287Z" level=warning msg="cleaning up after shim disconnected" id=e322c5e2c55854d1dcdad166821bb0e76254027265b9490d62d202d1e5f0abfa namespace=k8s.io Aug 13 01:23:36.962423 env[1382]: time="2025-08-13T01:23:36.962357360Z" level=info msg="cleaning up dead shim" Aug 13 01:23:36.969712 env[1382]: time="2025-08-13T01:23:36.969389643Z" level=warning msg="cleanup warnings time=\"2025-08-13T01:23:36Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3042 runtime=io.containerd.runc.v2\n" Aug 13 01:23:37.064392 kubelet[2284]: I0813 01:23:37.064373 2284 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/a621f640-c17c-4f08-9759-f53f10bbc599-calico-apiserver-certs\") pod \"calico-apiserver-67b95cb99-jl8kx\" (UID: \"a621f640-c17c-4f08-9759-f53f10bbc599\") " pod="calico-apiserver/calico-apiserver-67b95cb99-jl8kx" Aug 13 01:23:37.064707 kubelet[2284]: I0813 01:23:37.064695 2284 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hdvn6\" (UniqueName: \"kubernetes.io/projected/4c32448d-bd5d-4dff-a3e1-988c6198e659-kube-api-access-hdvn6\") pod \"calico-kube-controllers-65f7979575-9d4tp\" (UID: \"4c32448d-bd5d-4dff-a3e1-988c6198e659\") " pod="calico-system/calico-kube-controllers-65f7979575-9d4tp" Aug 13 01:23:37.064769 kubelet[2284]: I0813 01:23:37.064759 2284 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" 
(UniqueName: \"kubernetes.io/configmap/12930a0e-5b90-4155-97e2-3a62414b20c0-goldmane-ca-bundle\") pod \"goldmane-58fd7646b9-d66sc\" (UID: \"12930a0e-5b90-4155-97e2-3a62414b20c0\") " pod="calico-system/goldmane-58fd7646b9-d66sc" Aug 13 01:23:37.064827 kubelet[2284]: I0813 01:23:37.064817 2284 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a40d20da-5325-4fa5-9cd5-94a58f7ee4b0-config-volume\") pod \"coredns-7c65d6cfc9-gg2v2\" (UID: \"a40d20da-5325-4fa5-9cd5-94a58f7ee4b0\") " pod="kube-system/coredns-7c65d6cfc9-gg2v2" Aug 13 01:23:37.064883 kubelet[2284]: I0813 01:23:37.064872 2284 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0aa13eb1-cd4b-4065-b579-c9678960e68d-whisker-ca-bundle\") pod \"whisker-5658fcd68c-wsmfs\" (UID: \"0aa13eb1-cd4b-4065-b579-c9678960e68d\") " pod="calico-system/whisker-5658fcd68c-wsmfs" Aug 13 01:23:37.064940 kubelet[2284]: I0813 01:23:37.064930 2284 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wccw6\" (UniqueName: \"kubernetes.io/projected/0aa13eb1-cd4b-4065-b579-c9678960e68d-kube-api-access-wccw6\") pod \"whisker-5658fcd68c-wsmfs\" (UID: \"0aa13eb1-cd4b-4065-b579-c9678960e68d\") " pod="calico-system/whisker-5658fcd68c-wsmfs" Aug 13 01:23:37.064997 kubelet[2284]: I0813 01:23:37.064988 2284 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w28ct\" (UniqueName: \"kubernetes.io/projected/0722f301-b3c1-4eb6-ad2d-cc09409b96c2-kube-api-access-w28ct\") pod \"coredns-7c65d6cfc9-t8vb4\" (UID: \"0722f301-b3c1-4eb6-ad2d-cc09409b96c2\") " pod="kube-system/coredns-7c65d6cfc9-t8vb4" Aug 13 01:23:37.065052 kubelet[2284]: I0813 01:23:37.065044 2284 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lhhhj\" (UniqueName: \"kubernetes.io/projected/a621f640-c17c-4f08-9759-f53f10bbc599-kube-api-access-lhhhj\") pod \"calico-apiserver-67b95cb99-jl8kx\" (UID: \"a621f640-c17c-4f08-9759-f53f10bbc599\") " pod="calico-apiserver/calico-apiserver-67b95cb99-jl8kx" Aug 13 01:23:37.065108 kubelet[2284]: I0813 01:23:37.065099 2284 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/12930a0e-5b90-4155-97e2-3a62414b20c0-goldmane-key-pair\") pod \"goldmane-58fd7646b9-d66sc\" (UID: \"12930a0e-5b90-4155-97e2-3a62414b20c0\") " pod="calico-system/goldmane-58fd7646b9-d66sc" Aug 13 01:23:37.065424 kubelet[2284]: I0813 01:23:37.065200 2284 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0722f301-b3c1-4eb6-ad2d-cc09409b96c2-config-volume\") pod \"coredns-7c65d6cfc9-t8vb4\" (UID: \"0722f301-b3c1-4eb6-ad2d-cc09409b96c2\") " pod="kube-system/coredns-7c65d6cfc9-t8vb4" Aug 13 01:23:37.065424 kubelet[2284]: I0813 01:23:37.065214 2284 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/eeef8dd3-2401-4787-8ede-caf069e52bbf-calico-apiserver-certs\") pod \"calico-apiserver-67b95cb99-9ggd9\" (UID: \"eeef8dd3-2401-4787-8ede-caf069e52bbf\") " pod="calico-apiserver/calico-apiserver-67b95cb99-9ggd9" Aug 13 01:23:37.065424 
kubelet[2284]: I0813 01:23:37.065223 2284 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-87pqx\" (UniqueName: \"kubernetes.io/projected/eeef8dd3-2401-4787-8ede-caf069e52bbf-kube-api-access-87pqx\") pod \"calico-apiserver-67b95cb99-9ggd9\" (UID: \"eeef8dd3-2401-4787-8ede-caf069e52bbf\") " pod="calico-apiserver/calico-apiserver-67b95cb99-9ggd9" Aug 13 01:23:37.065424 kubelet[2284]: I0813 01:23:37.065235 2284 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4c32448d-bd5d-4dff-a3e1-988c6198e659-tigera-ca-bundle\") pod \"calico-kube-controllers-65f7979575-9d4tp\" (UID: \"4c32448d-bd5d-4dff-a3e1-988c6198e659\") " pod="calico-system/calico-kube-controllers-65f7979575-9d4tp" Aug 13 01:23:37.065424 kubelet[2284]: I0813 01:23:37.065245 2284 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/12930a0e-5b90-4155-97e2-3a62414b20c0-config\") pod \"goldmane-58fd7646b9-d66sc\" (UID: \"12930a0e-5b90-4155-97e2-3a62414b20c0\") " pod="calico-system/goldmane-58fd7646b9-d66sc" Aug 13 01:23:37.065529 kubelet[2284]: I0813 01:23:37.065255 2284 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x99rj\" (UniqueName: \"kubernetes.io/projected/12930a0e-5b90-4155-97e2-3a62414b20c0-kube-api-access-x99rj\") pod \"goldmane-58fd7646b9-d66sc\" (UID: \"12930a0e-5b90-4155-97e2-3a62414b20c0\") " pod="calico-system/goldmane-58fd7646b9-d66sc" Aug 13 01:23:37.065529 kubelet[2284]: I0813 01:23:37.065266 2284 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wkpbz\" (UniqueName: \"kubernetes.io/projected/a40d20da-5325-4fa5-9cd5-94a58f7ee4b0-kube-api-access-wkpbz\") pod \"coredns-7c65d6cfc9-gg2v2\" (UID: \"a40d20da-5325-4fa5-9cd5-94a58f7ee4b0\") " pod="kube-system/coredns-7c65d6cfc9-gg2v2" Aug 13 01:23:37.065529 kubelet[2284]: I0813 01:23:37.065276 2284 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/0aa13eb1-cd4b-4065-b579-c9678960e68d-whisker-backend-key-pair\") pod \"whisker-5658fcd68c-wsmfs\" (UID: \"0aa13eb1-cd4b-4065-b579-c9678960e68d\") " pod="calico-system/whisker-5658fcd68c-wsmfs" Aug 13 01:23:37.079643 env[1382]: time="2025-08-13T01:23:37.079449411Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\"" Aug 13 01:23:37.352806 env[1382]: time="2025-08-13T01:23:37.352777019Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-65f7979575-9d4tp,Uid:4c32448d-bd5d-4dff-a3e1-988c6198e659,Namespace:calico-system,Attempt:0,}" Aug 13 01:23:37.363125 env[1382]: time="2025-08-13T01:23:37.363007084Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-t8vb4,Uid:0722f301-b3c1-4eb6-ad2d-cc09409b96c2,Namespace:kube-system,Attempt:0,}" Aug 13 01:23:37.368440 env[1382]: time="2025-08-13T01:23:37.368421373Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5658fcd68c-wsmfs,Uid:0aa13eb1-cd4b-4065-b579-c9678960e68d,Namespace:calico-system,Attempt:0,}" Aug 13 01:23:37.370925 env[1382]: time="2025-08-13T01:23:37.370910875Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-67b95cb99-jl8kx,Uid:a621f640-c17c-4f08-9759-f53f10bbc599,Namespace:calico-apiserver,Attempt:0,}" Aug 13 01:23:37.371109 env[1382]: time="2025-08-13T01:23:37.371096486Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-d66sc,Uid:12930a0e-5b90-4155-97e2-3a62414b20c0,Namespace:calico-system,Attempt:0,}" Aug 13 01:23:37.373195 env[1382]: time="2025-08-13T01:23:37.373182183Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-67b95cb99-9ggd9,Uid:eeef8dd3-2401-4787-8ede-caf069e52bbf,Namespace:calico-apiserver,Attempt:0,}" Aug 13 01:23:37.374469 env[1382]: time="2025-08-13T01:23:37.374451757Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-gg2v2,Uid:a40d20da-5325-4fa5-9cd5-94a58f7ee4b0,Namespace:kube-system,Attempt:0,}" Aug 13 01:23:37.614651 env[1382]: time="2025-08-13T01:23:37.614400294Z" level=error msg="Failed to destroy network for sandbox \"ec778eecec78b93364ad0bd0af3905ceef67f70cfa4f7449aae0ae38ac4c6d4d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:23:37.614857 env[1382]: time="2025-08-13T01:23:37.614837815Z" level=error msg="encountered an error cleaning up failed sandbox \"ec778eecec78b93364ad0bd0af3905ceef67f70cfa4f7449aae0ae38ac4c6d4d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:23:37.614897 env[1382]: time="2025-08-13T01:23:37.614868482Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-gg2v2,Uid:a40d20da-5325-4fa5-9cd5-94a58f7ee4b0,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"ec778eecec78b93364ad0bd0af3905ceef67f70cfa4f7449aae0ae38ac4c6d4d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:23:37.618444 kubelet[2284]: E0813 01:23:37.618407 2284 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ec778eecec78b93364ad0bd0af3905ceef67f70cfa4f7449aae0ae38ac4c6d4d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:23:37.619360 kubelet[2284]: E0813 01:23:37.619033 2284 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ec778eecec78b93364ad0bd0af3905ceef67f70cfa4f7449aae0ae38ac4c6d4d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-gg2v2" Aug 13 01:23:37.620534 kubelet[2284]: E0813 01:23:37.620514 2284 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ec778eecec78b93364ad0bd0af3905ceef67f70cfa4f7449aae0ae38ac4c6d4d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running 
and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-gg2v2" Aug 13 01:23:37.620582 kubelet[2284]: E0813 01:23:37.620565 2284 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-gg2v2_kube-system(a40d20da-5325-4fa5-9cd5-94a58f7ee4b0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-gg2v2_kube-system(a40d20da-5325-4fa5-9cd5-94a58f7ee4b0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ec778eecec78b93364ad0bd0af3905ceef67f70cfa4f7449aae0ae38ac4c6d4d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-gg2v2" podUID="a40d20da-5325-4fa5-9cd5-94a58f7ee4b0" Aug 13 01:23:37.631749 env[1382]: time="2025-08-13T01:23:37.631713999Z" level=error msg="Failed to destroy network for sandbox \"eb6facd1ea36e49bed274d2a1b8af0c7943bd47c3d9bb2b8764f4f035897e36c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:23:37.631958 env[1382]: time="2025-08-13T01:23:37.631937842Z" level=error msg="encountered an error cleaning up failed sandbox \"eb6facd1ea36e49bed274d2a1b8af0c7943bd47c3d9bb2b8764f4f035897e36c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:23:37.631998 env[1382]: time="2025-08-13T01:23:37.631968566Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-d66sc,Uid:12930a0e-5b90-4155-97e2-3a62414b20c0,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"eb6facd1ea36e49bed274d2a1b8af0c7943bd47c3d9bb2b8764f4f035897e36c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:23:37.632098 kubelet[2284]: E0813 01:23:37.632077 2284 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"eb6facd1ea36e49bed274d2a1b8af0c7943bd47c3d9bb2b8764f4f035897e36c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:23:37.632150 kubelet[2284]: E0813 01:23:37.632111 2284 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"eb6facd1ea36e49bed274d2a1b8af0c7943bd47c3d9bb2b8764f4f035897e36c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-58fd7646b9-d66sc" Aug 13 01:23:37.632150 kubelet[2284]: E0813 01:23:37.632127 2284 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"eb6facd1ea36e49bed274d2a1b8af0c7943bd47c3d9bb2b8764f4f035897e36c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-58fd7646b9-d66sc" Aug 13 01:23:37.632209 kubelet[2284]: E0813 01:23:37.632153 2284 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-58fd7646b9-d66sc_calico-system(12930a0e-5b90-4155-97e2-3a62414b20c0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-58fd7646b9-d66sc_calico-system(12930a0e-5b90-4155-97e2-3a62414b20c0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"eb6facd1ea36e49bed274d2a1b8af0c7943bd47c3d9bb2b8764f4f035897e36c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-58fd7646b9-d66sc" podUID="12930a0e-5b90-4155-97e2-3a62414b20c0" Aug 13 01:23:37.645285 env[1382]: time="2025-08-13T01:23:37.645245094Z" level=error msg="Failed to destroy network for sandbox \"0230e598b99dff8ef34ad8ea74efd4bf7a1290c404420b6cd14bb34d6634b7b5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:23:37.645491 env[1382]: time="2025-08-13T01:23:37.645472261Z" level=error msg="encountered an error cleaning up failed sandbox \"0230e598b99dff8ef34ad8ea74efd4bf7a1290c404420b6cd14bb34d6634b7b5\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:23:37.645537 env[1382]: time="2025-08-13T01:23:37.645499535Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-67b95cb99-jl8kx,Uid:a621f640-c17c-4f08-9759-f53f10bbc599,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"0230e598b99dff8ef34ad8ea74efd4bf7a1290c404420b6cd14bb34d6634b7b5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:23:37.645644 kubelet[2284]: E0813 01:23:37.645622 2284 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0230e598b99dff8ef34ad8ea74efd4bf7a1290c404420b6cd14bb34d6634b7b5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:23:37.645697 kubelet[2284]: E0813 01:23:37.645658 2284 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0230e598b99dff8ef34ad8ea74efd4bf7a1290c404420b6cd14bb34d6634b7b5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-67b95cb99-jl8kx" Aug 13 01:23:37.645697 kubelet[2284]: E0813 01:23:37.645673 2284 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0230e598b99dff8ef34ad8ea74efd4bf7a1290c404420b6cd14bb34d6634b7b5\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-67b95cb99-jl8kx" Aug 13 01:23:37.645752 kubelet[2284]: E0813 01:23:37.645704 2284 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-67b95cb99-jl8kx_calico-apiserver(a621f640-c17c-4f08-9759-f53f10bbc599)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-67b95cb99-jl8kx_calico-apiserver(a621f640-c17c-4f08-9759-f53f10bbc599)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0230e598b99dff8ef34ad8ea74efd4bf7a1290c404420b6cd14bb34d6634b7b5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-67b95cb99-jl8kx" podUID="a621f640-c17c-4f08-9759-f53f10bbc599" Aug 13 01:23:37.654797 env[1382]: time="2025-08-13T01:23:37.654771428Z" level=error msg="Failed to destroy network for sandbox \"50dca8a6a1dd25344f1d6346513145e38d8d4ca5c36fcd5592acb126d39e99a8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:23:37.655104 env[1382]: time="2025-08-13T01:23:37.655087021Z" level=error msg="encountered an error cleaning up failed sandbox \"50dca8a6a1dd25344f1d6346513145e38d8d4ca5c36fcd5592acb126d39e99a8\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:23:37.655183 env[1382]: time="2025-08-13T01:23:37.655164341Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-65f7979575-9d4tp,Uid:4c32448d-bd5d-4dff-a3e1-988c6198e659,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"50dca8a6a1dd25344f1d6346513145e38d8d4ca5c36fcd5592acb126d39e99a8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:23:37.655368 kubelet[2284]: E0813 01:23:37.655347 2284 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"50dca8a6a1dd25344f1d6346513145e38d8d4ca5c36fcd5592acb126d39e99a8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:23:37.655416 kubelet[2284]: E0813 01:23:37.655379 2284 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"50dca8a6a1dd25344f1d6346513145e38d8d4ca5c36fcd5592acb126d39e99a8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-65f7979575-9d4tp" Aug 13 01:23:37.655416 kubelet[2284]: E0813 01:23:37.655391 2284 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"50dca8a6a1dd25344f1d6346513145e38d8d4ca5c36fcd5592acb126d39e99a8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-65f7979575-9d4tp" Aug 13 01:23:37.655461 kubelet[2284]: E0813 01:23:37.655422 2284 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-65f7979575-9d4tp_calico-system(4c32448d-bd5d-4dff-a3e1-988c6198e659)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-65f7979575-9d4tp_calico-system(4c32448d-bd5d-4dff-a3e1-988c6198e659)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"50dca8a6a1dd25344f1d6346513145e38d8d4ca5c36fcd5592acb126d39e99a8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-65f7979575-9d4tp" podUID="4c32448d-bd5d-4dff-a3e1-988c6198e659" Aug 13 01:23:37.662856 env[1382]: time="2025-08-13T01:23:37.662827215Z" level=error msg="Failed to destroy network for sandbox \"3e4d21963abef3b06d333673cf344f4d1c10ebd22dfb2f388d29d696d7742f1a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:23:37.663056 env[1382]: time="2025-08-13T01:23:37.663038356Z" level=error msg="encountered an error cleaning up failed sandbox \"3e4d21963abef3b06d333673cf344f4d1c10ebd22dfb2f388d29d696d7742f1a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:23:37.663115 env[1382]: time="2025-08-13T01:23:37.663067001Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-67b95cb99-9ggd9,Uid:eeef8dd3-2401-4787-8ede-caf069e52bbf,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"3e4d21963abef3b06d333673cf344f4d1c10ebd22dfb2f388d29d696d7742f1a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:23:37.663208 kubelet[2284]: E0813 01:23:37.663191 2284 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3e4d21963abef3b06d333673cf344f4d1c10ebd22dfb2f388d29d696d7742f1a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:23:37.663250 kubelet[2284]: E0813 01:23:37.663221 2284 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3e4d21963abef3b06d333673cf344f4d1c10ebd22dfb2f388d29d696d7742f1a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-67b95cb99-9ggd9" Aug 13 01:23:37.663250 kubelet[2284]: E0813 01:23:37.663234 2284 
kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3e4d21963abef3b06d333673cf344f4d1c10ebd22dfb2f388d29d696d7742f1a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-67b95cb99-9ggd9" Aug 13 01:23:37.663324 kubelet[2284]: E0813 01:23:37.663256 2284 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-67b95cb99-9ggd9_calico-apiserver(eeef8dd3-2401-4787-8ede-caf069e52bbf)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-67b95cb99-9ggd9_calico-apiserver(eeef8dd3-2401-4787-8ede-caf069e52bbf)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3e4d21963abef3b06d333673cf344f4d1c10ebd22dfb2f388d29d696d7742f1a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-67b95cb99-9ggd9" podUID="eeef8dd3-2401-4787-8ede-caf069e52bbf" Aug 13 01:23:37.665658 env[1382]: time="2025-08-13T01:23:37.665633762Z" level=error msg="Failed to destroy network for sandbox \"3f132e079d007598727838887ce3a31b3e56bd0ebf31b216e203c57698a55153\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:23:37.665863 env[1382]: time="2025-08-13T01:23:37.665845051Z" level=error msg="encountered an error cleaning up failed sandbox \"3f132e079d007598727838887ce3a31b3e56bd0ebf31b216e203c57698a55153\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:23:37.665903 env[1382]: time="2025-08-13T01:23:37.665873939Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5658fcd68c-wsmfs,Uid:0aa13eb1-cd4b-4065-b579-c9678960e68d,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"3f132e079d007598727838887ce3a31b3e56bd0ebf31b216e203c57698a55153\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:23:37.665964 kubelet[2284]: E0813 01:23:37.665948 2284 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3f132e079d007598727838887ce3a31b3e56bd0ebf31b216e203c57698a55153\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:23:37.665999 kubelet[2284]: E0813 01:23:37.665970 2284 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3f132e079d007598727838887ce3a31b3e56bd0ebf31b216e203c57698a55153\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-system/whisker-5658fcd68c-wsmfs" Aug 13 01:23:37.665999 kubelet[2284]: E0813 01:23:37.665981 2284 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3f132e079d007598727838887ce3a31b3e56bd0ebf31b216e203c57698a55153\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-5658fcd68c-wsmfs" Aug 13 01:23:37.666283 kubelet[2284]: E0813 01:23:37.666003 2284 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-5658fcd68c-wsmfs_calico-system(0aa13eb1-cd4b-4065-b579-c9678960e68d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-5658fcd68c-wsmfs_calico-system(0aa13eb1-cd4b-4065-b579-c9678960e68d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3f132e079d007598727838887ce3a31b3e56bd0ebf31b216e203c57698a55153\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-5658fcd68c-wsmfs" podUID="0aa13eb1-cd4b-4065-b579-c9678960e68d" Aug 13 01:23:37.673016 env[1382]: time="2025-08-13T01:23:37.672987516Z" level=error msg="Failed to destroy network for sandbox \"3c9d512f270b3f8067a730dbdae6e9e3fe324ed604bf5cfe32a26fa9ed47e917\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:23:37.673390 env[1382]: time="2025-08-13T01:23:37.673193945Z" level=error msg="encountered an error cleaning up failed sandbox \"3c9d512f270b3f8067a730dbdae6e9e3fe324ed604bf5cfe32a26fa9ed47e917\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:23:37.673390 env[1382]: time="2025-08-13T01:23:37.673220690Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-t8vb4,Uid:0722f301-b3c1-4eb6-ad2d-cc09409b96c2,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"3c9d512f270b3f8067a730dbdae6e9e3fe324ed604bf5cfe32a26fa9ed47e917\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:23:37.673445 kubelet[2284]: E0813 01:23:37.673308 2284 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3c9d512f270b3f8067a730dbdae6e9e3fe324ed604bf5cfe32a26fa9ed47e917\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:23:37.673445 kubelet[2284]: E0813 01:23:37.673334 2284 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3c9d512f270b3f8067a730dbdae6e9e3fe324ed604bf5cfe32a26fa9ed47e917\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running 
and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-t8vb4" Aug 13 01:23:37.673445 kubelet[2284]: E0813 01:23:37.673348 2284 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3c9d512f270b3f8067a730dbdae6e9e3fe324ed604bf5cfe32a26fa9ed47e917\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-t8vb4" Aug 13 01:23:37.675658 kubelet[2284]: E0813 01:23:37.673370 2284 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-t8vb4_kube-system(0722f301-b3c1-4eb6-ad2d-cc09409b96c2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-t8vb4_kube-system(0722f301-b3c1-4eb6-ad2d-cc09409b96c2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3c9d512f270b3f8067a730dbdae6e9e3fe324ed604bf5cfe32a26fa9ed47e917\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-t8vb4" podUID="0722f301-b3c1-4eb6-ad2d-cc09409b96c2" Aug 13 01:23:37.971269 env[1382]: time="2025-08-13T01:23:37.971202869Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-7kfxn,Uid:24914c97-a643-4e2a-b954-9959ef2f43e1,Namespace:calico-system,Attempt:0,}" Aug 13 01:23:38.005078 env[1382]: time="2025-08-13T01:23:38.005043815Z" level=error msg="Failed to destroy network for sandbox \"1e99645540af86f2760c41d1b2555e989b46d6986be734d4feef0a6970fe113b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:23:38.007784 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-1e99645540af86f2760c41d1b2555e989b46d6986be734d4feef0a6970fe113b-shm.mount: Deactivated successfully. 
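[Editor's note] Every sandbox failure above reports the same root cause: the Calico CNI plugin stats /var/lib/calico/nodename and finds nothing, because calico-node has not yet started and written that file. A minimal, illustrative Python sketch of the same readiness check follows; the path is taken from the error text above, while the function name and the exit-code convention are mine for illustration only.

# Sketch of the readiness check described by the repeated CNI error:
# calico-node publishes its node name to /var/lib/calico/nodename once it is up,
# and pod networking cannot be set up until that file exists on the host.
import os
import sys

NODENAME_FILE = "/var/lib/calico/nodename"  # path quoted in the error above

def calico_node_ready(path: str = NODENAME_FILE) -> bool:
    """Return True if calico-node appears to have published its node name."""
    try:
        with open(path, "r", encoding="utf-8") as f:
            name = f.read().strip()
    except FileNotFoundError:
        print(f"{path}: no such file or directory "
              "(calico-node not running, or /var/lib/calico/ not mounted)")
        return False
    print(f"calico-node looks ready on node {name!r}")
    return True

if __name__ == "__main__":
    sys.exit(0 if calico_node_ready() else 1)

Once calico-node starts (see the StartContainer entries further down), the file appears and subsequent sandbox creations succeed, which is consistent with the IPAM activity at the end of this section.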
Aug 13 01:23:38.008735 env[1382]: time="2025-08-13T01:23:38.008711388Z" level=error msg="encountered an error cleaning up failed sandbox \"1e99645540af86f2760c41d1b2555e989b46d6986be734d4feef0a6970fe113b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:23:38.009719 env[1382]: time="2025-08-13T01:23:38.008744148Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-7kfxn,Uid:24914c97-a643-4e2a-b954-9959ef2f43e1,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"1e99645540af86f2760c41d1b2555e989b46d6986be734d4feef0a6970fe113b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:23:38.010399 kubelet[2284]: E0813 01:23:38.008969 2284 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1e99645540af86f2760c41d1b2555e989b46d6986be734d4feef0a6970fe113b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:23:38.010399 kubelet[2284]: E0813 01:23:38.009015 2284 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1e99645540af86f2760c41d1b2555e989b46d6986be734d4feef0a6970fe113b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-7kfxn" Aug 13 01:23:38.010399 kubelet[2284]: E0813 01:23:38.009027 2284 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1e99645540af86f2760c41d1b2555e989b46d6986be734d4feef0a6970fe113b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-7kfxn" Aug 13 01:23:38.010483 kubelet[2284]: E0813 01:23:38.009058 2284 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-7kfxn_calico-system(24914c97-a643-4e2a-b954-9959ef2f43e1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-7kfxn_calico-system(24914c97-a643-4e2a-b954-9959ef2f43e1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1e99645540af86f2760c41d1b2555e989b46d6986be734d4feef0a6970fe113b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-7kfxn" podUID="24914c97-a643-4e2a-b954-9959ef2f43e1" Aug 13 01:23:38.078958 kubelet[2284]: I0813 01:23:38.078678 2284 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1e99645540af86f2760c41d1b2555e989b46d6986be734d4feef0a6970fe113b" Aug 13 01:23:38.080215 env[1382]: time="2025-08-13T01:23:38.080195218Z" level=info msg="StopPodSandbox for 
\"1e99645540af86f2760c41d1b2555e989b46d6986be734d4feef0a6970fe113b\"" Aug 13 01:23:38.080912 kubelet[2284]: I0813 01:23:38.080875 2284 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="eb6facd1ea36e49bed274d2a1b8af0c7943bd47c3d9bb2b8764f4f035897e36c" Aug 13 01:23:38.082001 env[1382]: time="2025-08-13T01:23:38.081918638Z" level=info msg="StopPodSandbox for \"eb6facd1ea36e49bed274d2a1b8af0c7943bd47c3d9bb2b8764f4f035897e36c\"" Aug 13 01:23:38.082221 kubelet[2284]: I0813 01:23:38.082211 2284 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3f132e079d007598727838887ce3a31b3e56bd0ebf31b216e203c57698a55153" Aug 13 01:23:38.082980 env[1382]: time="2025-08-13T01:23:38.082965846Z" level=info msg="StopPodSandbox for \"3f132e079d007598727838887ce3a31b3e56bd0ebf31b216e203c57698a55153\"" Aug 13 01:23:38.084044 kubelet[2284]: I0813 01:23:38.083661 2284 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3c9d512f270b3f8067a730dbdae6e9e3fe324ed604bf5cfe32a26fa9ed47e917" Aug 13 01:23:38.084964 env[1382]: time="2025-08-13T01:23:38.084943599Z" level=info msg="StopPodSandbox for \"3c9d512f270b3f8067a730dbdae6e9e3fe324ed604bf5cfe32a26fa9ed47e917\"" Aug 13 01:23:38.086533 kubelet[2284]: I0813 01:23:38.086518 2284 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0230e598b99dff8ef34ad8ea74efd4bf7a1290c404420b6cd14bb34d6634b7b5" Aug 13 01:23:38.087293 env[1382]: time="2025-08-13T01:23:38.086885338Z" level=info msg="StopPodSandbox for \"0230e598b99dff8ef34ad8ea74efd4bf7a1290c404420b6cd14bb34d6634b7b5\"" Aug 13 01:23:38.088776 kubelet[2284]: I0813 01:23:38.088334 2284 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3e4d21963abef3b06d333673cf344f4d1c10ebd22dfb2f388d29d696d7742f1a" Aug 13 01:23:38.088823 env[1382]: time="2025-08-13T01:23:38.088638257Z" level=info msg="StopPodSandbox for \"3e4d21963abef3b06d333673cf344f4d1c10ebd22dfb2f388d29d696d7742f1a\"" Aug 13 01:23:38.089627 kubelet[2284]: I0813 01:23:38.089393 2284 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ec778eecec78b93364ad0bd0af3905ceef67f70cfa4f7449aae0ae38ac4c6d4d" Aug 13 01:23:38.090128 env[1382]: time="2025-08-13T01:23:38.089893259Z" level=info msg="StopPodSandbox for \"ec778eecec78b93364ad0bd0af3905ceef67f70cfa4f7449aae0ae38ac4c6d4d\"" Aug 13 01:23:38.091071 kubelet[2284]: I0813 01:23:38.090836 2284 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="50dca8a6a1dd25344f1d6346513145e38d8d4ca5c36fcd5592acb126d39e99a8" Aug 13 01:23:38.091615 env[1382]: time="2025-08-13T01:23:38.091329000Z" level=info msg="StopPodSandbox for \"50dca8a6a1dd25344f1d6346513145e38d8d4ca5c36fcd5592acb126d39e99a8\"" Aug 13 01:23:38.120869 env[1382]: time="2025-08-13T01:23:38.120836950Z" level=error msg="StopPodSandbox for \"1e99645540af86f2760c41d1b2555e989b46d6986be734d4feef0a6970fe113b\" failed" error="failed to destroy network for sandbox \"1e99645540af86f2760c41d1b2555e989b46d6986be734d4feef0a6970fe113b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:23:38.121127 kubelet[2284]: E0813 01:23:38.121070 2284 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox 
\"1e99645540af86f2760c41d1b2555e989b46d6986be734d4feef0a6970fe113b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="1e99645540af86f2760c41d1b2555e989b46d6986be734d4feef0a6970fe113b" Aug 13 01:23:38.121231 env[1382]: time="2025-08-13T01:23:38.121214646Z" level=error msg="StopPodSandbox for \"eb6facd1ea36e49bed274d2a1b8af0c7943bd47c3d9bb2b8764f4f035897e36c\" failed" error="failed to destroy network for sandbox \"eb6facd1ea36e49bed274d2a1b8af0c7943bd47c3d9bb2b8764f4f035897e36c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:23:38.121337 kubelet[2284]: E0813 01:23:38.121319 2284 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"eb6facd1ea36e49bed274d2a1b8af0c7943bd47c3d9bb2b8764f4f035897e36c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="eb6facd1ea36e49bed274d2a1b8af0c7943bd47c3d9bb2b8764f4f035897e36c" Aug 13 01:23:38.122195 kubelet[2284]: E0813 01:23:38.122097 2284 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"1e99645540af86f2760c41d1b2555e989b46d6986be734d4feef0a6970fe113b"} Aug 13 01:23:38.122195 kubelet[2284]: E0813 01:23:38.122163 2284 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"24914c97-a643-4e2a-b954-9959ef2f43e1\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1e99645540af86f2760c41d1b2555e989b46d6986be734d4feef0a6970fe113b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Aug 13 01:23:38.122195 kubelet[2284]: E0813 01:23:38.122180 2284 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"eb6facd1ea36e49bed274d2a1b8af0c7943bd47c3d9bb2b8764f4f035897e36c"} Aug 13 01:23:38.122650 kubelet[2284]: E0813 01:23:38.122205 2284 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"12930a0e-5b90-4155-97e2-3a62414b20c0\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"eb6facd1ea36e49bed274d2a1b8af0c7943bd47c3d9bb2b8764f4f035897e36c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Aug 13 01:23:38.122650 kubelet[2284]: E0813 01:23:38.122218 2284 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"12930a0e-5b90-4155-97e2-3a62414b20c0\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"eb6facd1ea36e49bed274d2a1b8af0c7943bd47c3d9bb2b8764f4f035897e36c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-58fd7646b9-d66sc" 
podUID="12930a0e-5b90-4155-97e2-3a62414b20c0" Aug 13 01:23:38.122650 kubelet[2284]: E0813 01:23:38.122180 2284 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"24914c97-a643-4e2a-b954-9959ef2f43e1\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1e99645540af86f2760c41d1b2555e989b46d6986be734d4feef0a6970fe113b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-7kfxn" podUID="24914c97-a643-4e2a-b954-9959ef2f43e1" Aug 13 01:23:38.138786 env[1382]: time="2025-08-13T01:23:38.138749475Z" level=error msg="StopPodSandbox for \"3f132e079d007598727838887ce3a31b3e56bd0ebf31b216e203c57698a55153\" failed" error="failed to destroy network for sandbox \"3f132e079d007598727838887ce3a31b3e56bd0ebf31b216e203c57698a55153\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:23:38.139090 kubelet[2284]: E0813 01:23:38.138992 2284 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"3f132e079d007598727838887ce3a31b3e56bd0ebf31b216e203c57698a55153\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="3f132e079d007598727838887ce3a31b3e56bd0ebf31b216e203c57698a55153" Aug 13 01:23:38.139090 kubelet[2284]: E0813 01:23:38.139025 2284 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"3f132e079d007598727838887ce3a31b3e56bd0ebf31b216e203c57698a55153"} Aug 13 01:23:38.139090 kubelet[2284]: E0813 01:23:38.139049 2284 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"0aa13eb1-cd4b-4065-b579-c9678960e68d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3f132e079d007598727838887ce3a31b3e56bd0ebf31b216e203c57698a55153\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Aug 13 01:23:38.139090 kubelet[2284]: E0813 01:23:38.139065 2284 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"0aa13eb1-cd4b-4065-b579-c9678960e68d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3f132e079d007598727838887ce3a31b3e56bd0ebf31b216e203c57698a55153\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-5658fcd68c-wsmfs" podUID="0aa13eb1-cd4b-4065-b579-c9678960e68d" Aug 13 01:23:38.149452 env[1382]: time="2025-08-13T01:23:38.149415782Z" level=error msg="StopPodSandbox for \"3e4d21963abef3b06d333673cf344f4d1c10ebd22dfb2f388d29d696d7742f1a\" failed" error="failed to destroy network for sandbox \"3e4d21963abef3b06d333673cf344f4d1c10ebd22dfb2f388d29d696d7742f1a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/" Aug 13 01:23:38.149750 kubelet[2284]: E0813 01:23:38.149629 2284 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"3e4d21963abef3b06d333673cf344f4d1c10ebd22dfb2f388d29d696d7742f1a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="3e4d21963abef3b06d333673cf344f4d1c10ebd22dfb2f388d29d696d7742f1a" Aug 13 01:23:38.149750 kubelet[2284]: E0813 01:23:38.149673 2284 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"3e4d21963abef3b06d333673cf344f4d1c10ebd22dfb2f388d29d696d7742f1a"} Aug 13 01:23:38.149750 kubelet[2284]: E0813 01:23:38.149706 2284 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"eeef8dd3-2401-4787-8ede-caf069e52bbf\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3e4d21963abef3b06d333673cf344f4d1c10ebd22dfb2f388d29d696d7742f1a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Aug 13 01:23:38.149750 kubelet[2284]: E0813 01:23:38.149720 2284 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"eeef8dd3-2401-4787-8ede-caf069e52bbf\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3e4d21963abef3b06d333673cf344f4d1c10ebd22dfb2f388d29d696d7742f1a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-67b95cb99-9ggd9" podUID="eeef8dd3-2401-4787-8ede-caf069e52bbf" Aug 13 01:23:38.154825 env[1382]: time="2025-08-13T01:23:38.154782949Z" level=error msg="StopPodSandbox for \"3c9d512f270b3f8067a730dbdae6e9e3fe324ed604bf5cfe32a26fa9ed47e917\" failed" error="failed to destroy network for sandbox \"3c9d512f270b3f8067a730dbdae6e9e3fe324ed604bf5cfe32a26fa9ed47e917\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:23:38.155006 kubelet[2284]: E0813 01:23:38.154938 2284 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"3c9d512f270b3f8067a730dbdae6e9e3fe324ed604bf5cfe32a26fa9ed47e917\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="3c9d512f270b3f8067a730dbdae6e9e3fe324ed604bf5cfe32a26fa9ed47e917" Aug 13 01:23:38.155006 kubelet[2284]: E0813 01:23:38.154959 2284 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"3c9d512f270b3f8067a730dbdae6e9e3fe324ed604bf5cfe32a26fa9ed47e917"} Aug 13 01:23:38.155006 kubelet[2284]: E0813 01:23:38.154977 2284 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"0722f301-b3c1-4eb6-ad2d-cc09409b96c2\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for 
sandbox \\\"3c9d512f270b3f8067a730dbdae6e9e3fe324ed604bf5cfe32a26fa9ed47e917\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Aug 13 01:23:38.155006 kubelet[2284]: E0813 01:23:38.154989 2284 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"0722f301-b3c1-4eb6-ad2d-cc09409b96c2\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3c9d512f270b3f8067a730dbdae6e9e3fe324ed604bf5cfe32a26fa9ed47e917\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-t8vb4" podUID="0722f301-b3c1-4eb6-ad2d-cc09409b96c2" Aug 13 01:23:38.158378 env[1382]: time="2025-08-13T01:23:38.158322484Z" level=error msg="StopPodSandbox for \"0230e598b99dff8ef34ad8ea74efd4bf7a1290c404420b6cd14bb34d6634b7b5\" failed" error="failed to destroy network for sandbox \"0230e598b99dff8ef34ad8ea74efd4bf7a1290c404420b6cd14bb34d6634b7b5\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:23:38.158574 kubelet[2284]: E0813 01:23:38.158472 2284 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"0230e598b99dff8ef34ad8ea74efd4bf7a1290c404420b6cd14bb34d6634b7b5\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="0230e598b99dff8ef34ad8ea74efd4bf7a1290c404420b6cd14bb34d6634b7b5" Aug 13 01:23:38.158574 kubelet[2284]: E0813 01:23:38.158510 2284 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"0230e598b99dff8ef34ad8ea74efd4bf7a1290c404420b6cd14bb34d6634b7b5"} Aug 13 01:23:38.158574 kubelet[2284]: E0813 01:23:38.158527 2284 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"a621f640-c17c-4f08-9759-f53f10bbc599\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0230e598b99dff8ef34ad8ea74efd4bf7a1290c404420b6cd14bb34d6634b7b5\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Aug 13 01:23:38.158574 kubelet[2284]: E0813 01:23:38.158553 2284 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"a621f640-c17c-4f08-9759-f53f10bbc599\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0230e598b99dff8ef34ad8ea74efd4bf7a1290c404420b6cd14bb34d6634b7b5\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-67b95cb99-jl8kx" podUID="a621f640-c17c-4f08-9759-f53f10bbc599" Aug 13 01:23:38.165267 env[1382]: time="2025-08-13T01:23:38.165235200Z" level=error msg="StopPodSandbox for \"50dca8a6a1dd25344f1d6346513145e38d8d4ca5c36fcd5592acb126d39e99a8\" failed" 
error="failed to destroy network for sandbox \"50dca8a6a1dd25344f1d6346513145e38d8d4ca5c36fcd5592acb126d39e99a8\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:23:38.165447 kubelet[2284]: E0813 01:23:38.165377 2284 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"50dca8a6a1dd25344f1d6346513145e38d8d4ca5c36fcd5592acb126d39e99a8\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="50dca8a6a1dd25344f1d6346513145e38d8d4ca5c36fcd5592acb126d39e99a8" Aug 13 01:23:38.165447 kubelet[2284]: E0813 01:23:38.165399 2284 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"50dca8a6a1dd25344f1d6346513145e38d8d4ca5c36fcd5592acb126d39e99a8"} Aug 13 01:23:38.165447 kubelet[2284]: E0813 01:23:38.165419 2284 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"4c32448d-bd5d-4dff-a3e1-988c6198e659\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"50dca8a6a1dd25344f1d6346513145e38d8d4ca5c36fcd5592acb126d39e99a8\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Aug 13 01:23:38.165447 kubelet[2284]: E0813 01:23:38.165430 2284 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"4c32448d-bd5d-4dff-a3e1-988c6198e659\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"50dca8a6a1dd25344f1d6346513145e38d8d4ca5c36fcd5592acb126d39e99a8\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-65f7979575-9d4tp" podUID="4c32448d-bd5d-4dff-a3e1-988c6198e659" Aug 13 01:23:38.165669 env[1382]: time="2025-08-13T01:23:38.165650423Z" level=error msg="StopPodSandbox for \"ec778eecec78b93364ad0bd0af3905ceef67f70cfa4f7449aae0ae38ac4c6d4d\" failed" error="failed to destroy network for sandbox \"ec778eecec78b93364ad0bd0af3905ceef67f70cfa4f7449aae0ae38ac4c6d4d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:23:38.165807 kubelet[2284]: E0813 01:23:38.165750 2284 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"ec778eecec78b93364ad0bd0af3905ceef67f70cfa4f7449aae0ae38ac4c6d4d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="ec778eecec78b93364ad0bd0af3905ceef67f70cfa4f7449aae0ae38ac4c6d4d" Aug 13 01:23:38.165807 kubelet[2284]: E0813 01:23:38.165765 2284 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"ec778eecec78b93364ad0bd0af3905ceef67f70cfa4f7449aae0ae38ac4c6d4d"} Aug 13 01:23:38.165807 kubelet[2284]: E0813 
01:23:38.165778 2284 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"a40d20da-5325-4fa5-9cd5-94a58f7ee4b0\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ec778eecec78b93364ad0bd0af3905ceef67f70cfa4f7449aae0ae38ac4c6d4d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Aug 13 01:23:38.165807 kubelet[2284]: E0813 01:23:38.165789 2284 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"a40d20da-5325-4fa5-9cd5-94a58f7ee4b0\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ec778eecec78b93364ad0bd0af3905ceef67f70cfa4f7449aae0ae38ac4c6d4d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-gg2v2" podUID="a40d20da-5325-4fa5-9cd5-94a58f7ee4b0" Aug 13 01:23:45.324734 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2519498664.mount: Deactivated successfully. Aug 13 01:23:45.381237 env[1382]: time="2025-08-13T01:23:45.381206635Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 01:23:45.382527 env[1382]: time="2025-08-13T01:23:45.382510414Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 01:23:45.383423 env[1382]: time="2025-08-13T01:23:45.383408760Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/node:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 01:23:45.384337 env[1382]: time="2025-08-13T01:23:45.384321558Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 01:23:45.384609 env[1382]: time="2025-08-13T01:23:45.384590313Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\" returns image reference \"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\"" Aug 13 01:23:45.431115 env[1382]: time="2025-08-13T01:23:45.431085088Z" level=info msg="CreateContainer within sandbox \"d9a707e69009da17be3710545f1088f68fc8513c2d89dd919d29ac5bf2c75412\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Aug 13 01:23:45.441067 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1175084324.mount: Deactivated successfully. 
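[Editor's note] The containerd entries above all follow the same key=value shape (time="...", level=..., msg="...", and sometimes error="..."). The snippet below is a small parsing sketch for exactly that layout, based only on the quoting visible in these lines; the regex and function name are illustrative, not a general logfmt implementation.

# Illustrative parser for the containerd env[...] payloads seen in this log.
import re

FIELD_RE = re.compile(r'(\w+)=(?:"((?:[^"\\]|\\.)*)"|(\S+))')

def parse_containerd_line(line: str) -> dict:
    """Extract key=value fields from one containerd log payload."""
    fields = {}
    for key, quoted, bare in FIELD_RE.findall(line):
        value = quoted if quoted else bare
        fields[key] = value.replace('\\"', '"')  # unescape embedded quotes
    return fields

# Example taken from the PullImage entry above.
example = ('time="2025-08-13T01:23:45.384590313Z" level=info '
           'msg="PullImage \\"ghcr.io/flatcar/calico/node:v3.30.2\\" returns image '
           'reference \\"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\\""')
print(parse_containerd_line(example))

Run against the lines above, this yields dictionaries with time, level, msg, and error keys, which makes it easier to group the repeated sandbox failures by sandbox ID or pod.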
Aug 13 01:23:45.444651 env[1382]: time="2025-08-13T01:23:45.442144149Z" level=info msg="CreateContainer within sandbox \"d9a707e69009da17be3710545f1088f68fc8513c2d89dd919d29ac5bf2c75412\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"c2fb6e3a6f053a836d1da16768973f18403dcfdb533e65f376cbca3a21b03384\"" Aug 13 01:23:45.446593 env[1382]: time="2025-08-13T01:23:45.445974611Z" level=info msg="StartContainer for \"c2fb6e3a6f053a836d1da16768973f18403dcfdb533e65f376cbca3a21b03384\"" Aug 13 01:23:45.487021 env[1382]: time="2025-08-13T01:23:45.486991754Z" level=info msg="StartContainer for \"c2fb6e3a6f053a836d1da16768973f18403dcfdb533e65f376cbca3a21b03384\" returns successfully" Aug 13 01:23:45.907910 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Aug 13 01:23:45.908754 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Aug 13 01:23:46.158662 kubelet[2284]: I0813 01:23:46.148800 2284 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-r9dzz" podStartSLOduration=1.782658069 podStartE2EDuration="20.140945817s" podCreationTimestamp="2025-08-13 01:23:26 +0000 UTC" firstStartedPulling="2025-08-13 01:23:27.026957326 +0000 UTC m=+18.312178184" lastFinishedPulling="2025-08-13 01:23:45.385245072 +0000 UTC m=+36.670465932" observedRunningTime="2025-08-13 01:23:46.140158697 +0000 UTC m=+37.425379569" watchObservedRunningTime="2025-08-13 01:23:46.140945817 +0000 UTC m=+37.426166680" Aug 13 01:23:46.184641 env[1382]: time="2025-08-13T01:23:46.184437385Z" level=info msg="StopPodSandbox for \"3f132e079d007598727838887ce3a31b3e56bd0ebf31b216e203c57698a55153\"" Aug 13 01:23:46.556545 env[1382]: 2025-08-13 01:23:46.264 [INFO][3476] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="3f132e079d007598727838887ce3a31b3e56bd0ebf31b216e203c57698a55153" Aug 13 01:23:46.556545 env[1382]: 2025-08-13 01:23:46.265 [INFO][3476] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="3f132e079d007598727838887ce3a31b3e56bd0ebf31b216e203c57698a55153" iface="eth0" netns="/var/run/netns/cni-924431b5-6b5b-aaeb-f3f9-843491f8a927" Aug 13 01:23:46.556545 env[1382]: 2025-08-13 01:23:46.265 [INFO][3476] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="3f132e079d007598727838887ce3a31b3e56bd0ebf31b216e203c57698a55153" iface="eth0" netns="/var/run/netns/cni-924431b5-6b5b-aaeb-f3f9-843491f8a927" Aug 13 01:23:46.556545 env[1382]: 2025-08-13 01:23:46.266 [INFO][3476] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="3f132e079d007598727838887ce3a31b3e56bd0ebf31b216e203c57698a55153" iface="eth0" netns="/var/run/netns/cni-924431b5-6b5b-aaeb-f3f9-843491f8a927" Aug 13 01:23:46.556545 env[1382]: 2025-08-13 01:23:46.266 [INFO][3476] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="3f132e079d007598727838887ce3a31b3e56bd0ebf31b216e203c57698a55153" Aug 13 01:23:46.556545 env[1382]: 2025-08-13 01:23:46.266 [INFO][3476] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3f132e079d007598727838887ce3a31b3e56bd0ebf31b216e203c57698a55153" Aug 13 01:23:46.556545 env[1382]: 2025-08-13 01:23:46.531 [INFO][3483] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3f132e079d007598727838887ce3a31b3e56bd0ebf31b216e203c57698a55153" HandleID="k8s-pod-network.3f132e079d007598727838887ce3a31b3e56bd0ebf31b216e203c57698a55153" Workload="localhost-k8s-whisker--5658fcd68c--wsmfs-eth0" Aug 13 01:23:46.556545 env[1382]: 2025-08-13 01:23:46.539 [INFO][3483] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 01:23:46.556545 env[1382]: 2025-08-13 01:23:46.539 [INFO][3483] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 01:23:46.556545 env[1382]: 2025-08-13 01:23:46.550 [WARNING][3483] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="3f132e079d007598727838887ce3a31b3e56bd0ebf31b216e203c57698a55153" HandleID="k8s-pod-network.3f132e079d007598727838887ce3a31b3e56bd0ebf31b216e203c57698a55153" Workload="localhost-k8s-whisker--5658fcd68c--wsmfs-eth0" Aug 13 01:23:46.556545 env[1382]: 2025-08-13 01:23:46.550 [INFO][3483] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3f132e079d007598727838887ce3a31b3e56bd0ebf31b216e203c57698a55153" HandleID="k8s-pod-network.3f132e079d007598727838887ce3a31b3e56bd0ebf31b216e203c57698a55153" Workload="localhost-k8s-whisker--5658fcd68c--wsmfs-eth0" Aug 13 01:23:46.556545 env[1382]: 2025-08-13 01:23:46.550 [INFO][3483] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 01:23:46.556545 env[1382]: 2025-08-13 01:23:46.552 [INFO][3476] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="3f132e079d007598727838887ce3a31b3e56bd0ebf31b216e203c57698a55153" Aug 13 01:23:46.556545 env[1382]: time="2025-08-13T01:23:46.555611225Z" level=info msg="TearDown network for sandbox \"3f132e079d007598727838887ce3a31b3e56bd0ebf31b216e203c57698a55153\" successfully" Aug 13 01:23:46.556545 env[1382]: time="2025-08-13T01:23:46.555633475Z" level=info msg="StopPodSandbox for \"3f132e079d007598727838887ce3a31b3e56bd0ebf31b216e203c57698a55153\" returns successfully" Aug 13 01:23:46.555559 systemd[1]: run-netns-cni\x2d924431b5\x2d6b5b\x2daaeb\x2df3f9\x2d843491f8a927.mount: Deactivated successfully. 
Aug 13 01:23:46.635251 kubelet[2284]: I0813 01:23:46.635229 2284 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/0aa13eb1-cd4b-4065-b579-c9678960e68d-whisker-backend-key-pair\") pod \"0aa13eb1-cd4b-4065-b579-c9678960e68d\" (UID: \"0aa13eb1-cd4b-4065-b579-c9678960e68d\") " Aug 13 01:23:46.635409 kubelet[2284]: I0813 01:23:46.635399 2284 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wccw6\" (UniqueName: \"kubernetes.io/projected/0aa13eb1-cd4b-4065-b579-c9678960e68d-kube-api-access-wccw6\") pod \"0aa13eb1-cd4b-4065-b579-c9678960e68d\" (UID: \"0aa13eb1-cd4b-4065-b579-c9678960e68d\") " Aug 13 01:23:46.635480 kubelet[2284]: I0813 01:23:46.635470 2284 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0aa13eb1-cd4b-4065-b579-c9678960e68d-whisker-ca-bundle\") pod \"0aa13eb1-cd4b-4065-b579-c9678960e68d\" (UID: \"0aa13eb1-cd4b-4065-b579-c9678960e68d\") " Aug 13 01:23:46.647990 kubelet[2284]: I0813 01:23:46.641354 2284 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0aa13eb1-cd4b-4065-b579-c9678960e68d-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "0aa13eb1-cd4b-4065-b579-c9678960e68d" (UID: "0aa13eb1-cd4b-4065-b579-c9678960e68d"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Aug 13 01:23:46.653531 systemd[1]: var-lib-kubelet-pods-0aa13eb1\x2dcd4b\x2d4065\x2db579\x2dc9678960e68d-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dwccw6.mount: Deactivated successfully. Aug 13 01:23:46.653638 systemd[1]: var-lib-kubelet-pods-0aa13eb1\x2dcd4b\x2d4065\x2db579\x2dc9678960e68d-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Aug 13 01:23:46.655790 kubelet[2284]: I0813 01:23:46.655767 2284 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0aa13eb1-cd4b-4065-b579-c9678960e68d-kube-api-access-wccw6" (OuterVolumeSpecName: "kube-api-access-wccw6") pod "0aa13eb1-cd4b-4065-b579-c9678960e68d" (UID: "0aa13eb1-cd4b-4065-b579-c9678960e68d"). InnerVolumeSpecName "kube-api-access-wccw6". PluginName "kubernetes.io/projected", VolumeGidValue "" Aug 13 01:23:46.656457 kubelet[2284]: I0813 01:23:46.655812 2284 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0aa13eb1-cd4b-4065-b579-c9678960e68d-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "0aa13eb1-cd4b-4065-b579-c9678960e68d" (UID: "0aa13eb1-cd4b-4065-b579-c9678960e68d"). InnerVolumeSpecName "whisker-backend-key-pair". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Aug 13 01:23:46.736456 kubelet[2284]: I0813 01:23:46.736420 2284 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wccw6\" (UniqueName: \"kubernetes.io/projected/0aa13eb1-cd4b-4065-b579-c9678960e68d-kube-api-access-wccw6\") on node \"localhost\" DevicePath \"\"" Aug 13 01:23:46.736456 kubelet[2284]: I0813 01:23:46.736453 2284 reconciler_common.go:293] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0aa13eb1-cd4b-4065-b579-c9678960e68d-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Aug 13 01:23:46.736456 kubelet[2284]: I0813 01:23:46.736462 2284 reconciler_common.go:293] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/0aa13eb1-cd4b-4065-b579-c9678960e68d-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Aug 13 01:23:47.126540 kubelet[2284]: I0813 01:23:47.125632 2284 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Aug 13 01:23:47.242505 kubelet[2284]: I0813 01:23:47.242478 2284 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b4bnb\" (UniqueName: \"kubernetes.io/projected/cecb21ff-c219-493a-b2c9-8ebaedbbe331-kube-api-access-b4bnb\") pod \"whisker-5f9975b665-sq7zc\" (UID: \"cecb21ff-c219-493a-b2c9-8ebaedbbe331\") " pod="calico-system/whisker-5f9975b665-sq7zc" Aug 13 01:23:47.242826 kubelet[2284]: I0813 01:23:47.242815 2284 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/cecb21ff-c219-493a-b2c9-8ebaedbbe331-whisker-backend-key-pair\") pod \"whisker-5f9975b665-sq7zc\" (UID: \"cecb21ff-c219-493a-b2c9-8ebaedbbe331\") " pod="calico-system/whisker-5f9975b665-sq7zc" Aug 13 01:23:47.242953 kubelet[2284]: I0813 01:23:47.242933 2284 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cecb21ff-c219-493a-b2c9-8ebaedbbe331-whisker-ca-bundle\") pod \"whisker-5f9975b665-sq7zc\" (UID: \"cecb21ff-c219-493a-b2c9-8ebaedbbe331\") " pod="calico-system/whisker-5f9975b665-sq7zc" Aug 13 01:23:47.423313 kernel: kauditd_printk_skb: 25 callbacks suppressed Aug 13 01:23:47.427098 kernel: audit: type=1400 audit(1755048227.418:298): avc: denied { write } for pid=3559 comm="tee" name="fd" dev="proc" ino=36866 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Aug 13 01:23:47.428052 kernel: audit: type=1300 audit(1755048227.418:298): arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffe08b0a7ed a2=241 a3=1b6 items=1 ppid=3514 pid=3559 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:23:47.418000 audit[3559]: AVC avc: denied { write } for pid=3559 comm="tee" name="fd" dev="proc" ino=36866 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Aug 13 01:23:47.438151 kernel: audit: type=1307 audit(1755048227.418:298): cwd="/etc/service/enabled/felix/log" Aug 13 01:23:47.438192 kernel: audit: type=1302 audit(1755048227.418:298): item=0 name="/dev/fd/63" inode=35837 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 
cap_fver=0 cap_frootid=0 Aug 13 01:23:47.418000 audit[3559]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffe08b0a7ed a2=241 a3=1b6 items=1 ppid=3514 pid=3559 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:23:47.444182 kernel: audit: type=1327 audit(1755048227.418:298): proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Aug 13 01:23:47.418000 audit: CWD cwd="/etc/service/enabled/felix/log" Aug 13 01:23:47.418000 audit: PATH item=0 name="/dev/fd/63" inode=35837 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:23:47.418000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Aug 13 01:23:47.422000 audit[3564]: AVC avc: denied { write } for pid=3564 comm="tee" name="fd" dev="proc" ino=36877 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Aug 13 01:23:47.422000 audit[3564]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffd57cc17ee a2=241 a3=1b6 items=1 ppid=3519 pid=3564 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:23:47.453149 kernel: audit: type=1400 audit(1755048227.422:299): avc: denied { write } for pid=3564 comm="tee" name="fd" dev="proc" ino=36877 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Aug 13 01:23:47.453603 kernel: audit: type=1300 audit(1755048227.422:299): arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffd57cc17ee a2=241 a3=1b6 items=1 ppid=3519 pid=3564 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:23:47.422000 audit: CWD cwd="/etc/service/enabled/bird/log" Aug 13 01:23:47.422000 audit: PATH item=0 name="/dev/fd/63" inode=36874 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:23:47.459208 kernel: audit: type=1307 audit(1755048227.422:299): cwd="/etc/service/enabled/bird/log" Aug 13 01:23:47.459240 kernel: audit: type=1302 audit(1755048227.422:299): item=0 name="/dev/fd/63" inode=36874 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:23:47.459257 kernel: audit: type=1327 audit(1755048227.422:299): proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Aug 13 01:23:47.422000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Aug 13 01:23:47.426000 audit[3567]: AVC avc: denied { write } for pid=3567 comm="tee" name="fd" dev="proc" ino=36881 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir 
permissive=0 Aug 13 01:23:47.426000 audit[3567]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7fff5db2a7ef a2=241 a3=1b6 items=1 ppid=3512 pid=3567 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:23:47.426000 audit: CWD cwd="/etc/service/enabled/cni/log" Aug 13 01:23:47.426000 audit: PATH item=0 name="/dev/fd/63" inode=36232 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:23:47.426000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Aug 13 01:23:47.431000 audit[3571]: AVC avc: denied { write } for pid=3571 comm="tee" name="fd" dev="proc" ino=36892 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Aug 13 01:23:47.431000 audit[3571]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffd4c7467dd a2=241 a3=1b6 items=1 ppid=3522 pid=3571 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:23:47.431000 audit: CWD cwd="/etc/service/enabled/allocate-tunnel-addrs/log" Aug 13 01:23:47.431000 audit: PATH item=0 name="/dev/fd/63" inode=36887 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:23:47.431000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Aug 13 01:23:47.434000 audit[3577]: AVC avc: denied { write } for pid=3577 comm="tee" name="fd" dev="proc" ino=36898 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Aug 13 01:23:47.434000 audit[3577]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffc259f27de a2=241 a3=1b6 items=1 ppid=3517 pid=3577 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:23:47.434000 audit: CWD cwd="/etc/service/enabled/node-status-reporter/log" Aug 13 01:23:47.434000 audit: PATH item=0 name="/dev/fd/63" inode=36233 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:23:47.434000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Aug 13 01:23:47.434000 audit[3576]: AVC avc: denied { write } for pid=3576 comm="tee" name="fd" dev="proc" ino=36902 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Aug 13 01:23:47.434000 audit[3576]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7fffcd20f7ed a2=241 a3=1b6 items=1 ppid=3515 pid=3576 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:23:47.434000 audit: CWD 
cwd="/etc/service/enabled/bird6/log" Aug 13 01:23:47.434000 audit: PATH item=0 name="/dev/fd/63" inode=36234 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:23:47.434000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Aug 13 01:23:47.439000 audit[3580]: AVC avc: denied { write } for pid=3580 comm="tee" name="fd" dev="proc" ino=36910 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Aug 13 01:23:47.439000 audit[3580]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffeaf1e97ed a2=241 a3=1b6 items=1 ppid=3521 pid=3580 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:23:47.439000 audit: CWD cwd="/etc/service/enabled/confd/log" Aug 13 01:23:47.439000 audit: PATH item=0 name="/dev/fd/63" inode=36235 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Aug 13 01:23:47.439000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Aug 13 01:23:47.468140 env[1382]: time="2025-08-13T01:23:47.468114969Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5f9975b665-sq7zc,Uid:cecb21ff-c219-493a-b2c9-8ebaedbbe331,Namespace:calico-system,Attempt:0,}" Aug 13 01:23:47.792220 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Aug 13 01:23:47.792303 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali1eb6e3cdb16: link becomes ready Aug 13 01:23:47.794274 systemd-networkd[1112]: cali1eb6e3cdb16: Link UP Aug 13 01:23:47.794384 systemd-networkd[1112]: cali1eb6e3cdb16: Gained carrier Aug 13 01:23:47.805590 env[1382]: 2025-08-13 01:23:47.518 [INFO][3588] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Aug 13 01:23:47.805590 env[1382]: 2025-08-13 01:23:47.525 [INFO][3588] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--5f9975b665--sq7zc-eth0 whisker-5f9975b665- calico-system cecb21ff-c219-493a-b2c9-8ebaedbbe331 872 0 2025-08-13 01:23:47 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:5f9975b665 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-5f9975b665-sq7zc eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali1eb6e3cdb16 [] [] }} ContainerID="bdeff75a0e72e9003fc01edf36908e3a9a8b17e604a397f659196bdefe097421" Namespace="calico-system" Pod="whisker-5f9975b665-sq7zc" WorkloadEndpoint="localhost-k8s-whisker--5f9975b665--sq7zc-" Aug 13 01:23:47.805590 env[1382]: 2025-08-13 01:23:47.525 [INFO][3588] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="bdeff75a0e72e9003fc01edf36908e3a9a8b17e604a397f659196bdefe097421" Namespace="calico-system" Pod="whisker-5f9975b665-sq7zc" WorkloadEndpoint="localhost-k8s-whisker--5f9975b665--sq7zc-eth0" Aug 13 01:23:47.805590 env[1382]: 2025-08-13 01:23:47.615 [INFO][3603] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="bdeff75a0e72e9003fc01edf36908e3a9a8b17e604a397f659196bdefe097421" HandleID="k8s-pod-network.bdeff75a0e72e9003fc01edf36908e3a9a8b17e604a397f659196bdefe097421" Workload="localhost-k8s-whisker--5f9975b665--sq7zc-eth0" Aug 13 01:23:47.805590 env[1382]: 2025-08-13 01:23:47.621 [INFO][3603] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="bdeff75a0e72e9003fc01edf36908e3a9a8b17e604a397f659196bdefe097421" HandleID="k8s-pod-network.bdeff75a0e72e9003fc01edf36908e3a9a8b17e604a397f659196bdefe097421" Workload="localhost-k8s-whisker--5f9975b665--sq7zc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5020), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-5f9975b665-sq7zc", "timestamp":"2025-08-13 01:23:47.615274365 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 13 01:23:47.805590 env[1382]: 2025-08-13 01:23:47.621 [INFO][3603] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 01:23:47.805590 env[1382]: 2025-08-13 01:23:47.621 [INFO][3603] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 01:23:47.805590 env[1382]: 2025-08-13 01:23:47.621 [INFO][3603] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Aug 13 01:23:47.805590 env[1382]: 2025-08-13 01:23:47.671 [INFO][3603] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.bdeff75a0e72e9003fc01edf36908e3a9a8b17e604a397f659196bdefe097421" host="localhost" Aug 13 01:23:47.805590 env[1382]: 2025-08-13 01:23:47.765 [INFO][3603] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Aug 13 01:23:47.805590 env[1382]: 2025-08-13 01:23:47.768 [INFO][3603] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Aug 13 01:23:47.805590 env[1382]: 2025-08-13 01:23:47.769 [INFO][3603] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Aug 13 01:23:47.805590 env[1382]: 2025-08-13 01:23:47.770 [INFO][3603] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Aug 13 01:23:47.805590 env[1382]: 2025-08-13 01:23:47.770 [INFO][3603] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.bdeff75a0e72e9003fc01edf36908e3a9a8b17e604a397f659196bdefe097421" host="localhost" Aug 13 01:23:47.805590 env[1382]: 2025-08-13 01:23:47.771 [INFO][3603] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.bdeff75a0e72e9003fc01edf36908e3a9a8b17e604a397f659196bdefe097421 Aug 13 01:23:47.805590 env[1382]: 2025-08-13 01:23:47.772 [INFO][3603] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.bdeff75a0e72e9003fc01edf36908e3a9a8b17e604a397f659196bdefe097421" host="localhost" Aug 13 01:23:47.805590 env[1382]: 2025-08-13 01:23:47.775 [INFO][3603] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.bdeff75a0e72e9003fc01edf36908e3a9a8b17e604a397f659196bdefe097421" host="localhost" Aug 13 01:23:47.805590 env[1382]: 2025-08-13 01:23:47.775 [INFO][3603] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.bdeff75a0e72e9003fc01edf36908e3a9a8b17e604a397f659196bdefe097421" host="localhost" Aug 13 01:23:47.805590 
env[1382]: 2025-08-13 01:23:47.775 [INFO][3603] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 01:23:47.805590 env[1382]: 2025-08-13 01:23:47.775 [INFO][3603] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="bdeff75a0e72e9003fc01edf36908e3a9a8b17e604a397f659196bdefe097421" HandleID="k8s-pod-network.bdeff75a0e72e9003fc01edf36908e3a9a8b17e604a397f659196bdefe097421" Workload="localhost-k8s-whisker--5f9975b665--sq7zc-eth0" Aug 13 01:23:47.808051 env[1382]: 2025-08-13 01:23:47.778 [INFO][3588] cni-plugin/k8s.go 418: Populated endpoint ContainerID="bdeff75a0e72e9003fc01edf36908e3a9a8b17e604a397f659196bdefe097421" Namespace="calico-system" Pod="whisker-5f9975b665-sq7zc" WorkloadEndpoint="localhost-k8s-whisker--5f9975b665--sq7zc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--5f9975b665--sq7zc-eth0", GenerateName:"whisker-5f9975b665-", Namespace:"calico-system", SelfLink:"", UID:"cecb21ff-c219-493a-b2c9-8ebaedbbe331", ResourceVersion:"872", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 1, 23, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"5f9975b665", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-5f9975b665-sq7zc", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali1eb6e3cdb16", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 01:23:47.808051 env[1382]: 2025-08-13 01:23:47.778 [INFO][3588] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="bdeff75a0e72e9003fc01edf36908e3a9a8b17e604a397f659196bdefe097421" Namespace="calico-system" Pod="whisker-5f9975b665-sq7zc" WorkloadEndpoint="localhost-k8s-whisker--5f9975b665--sq7zc-eth0" Aug 13 01:23:47.808051 env[1382]: 2025-08-13 01:23:47.778 [INFO][3588] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1eb6e3cdb16 ContainerID="bdeff75a0e72e9003fc01edf36908e3a9a8b17e604a397f659196bdefe097421" Namespace="calico-system" Pod="whisker-5f9975b665-sq7zc" WorkloadEndpoint="localhost-k8s-whisker--5f9975b665--sq7zc-eth0" Aug 13 01:23:47.808051 env[1382]: 2025-08-13 01:23:47.793 [INFO][3588] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="bdeff75a0e72e9003fc01edf36908e3a9a8b17e604a397f659196bdefe097421" Namespace="calico-system" Pod="whisker-5f9975b665-sq7zc" WorkloadEndpoint="localhost-k8s-whisker--5f9975b665--sq7zc-eth0" Aug 13 01:23:47.808051 env[1382]: 2025-08-13 01:23:47.793 [INFO][3588] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="bdeff75a0e72e9003fc01edf36908e3a9a8b17e604a397f659196bdefe097421" Namespace="calico-system" Pod="whisker-5f9975b665-sq7zc" 
WorkloadEndpoint="localhost-k8s-whisker--5f9975b665--sq7zc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--5f9975b665--sq7zc-eth0", GenerateName:"whisker-5f9975b665-", Namespace:"calico-system", SelfLink:"", UID:"cecb21ff-c219-493a-b2c9-8ebaedbbe331", ResourceVersion:"872", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 1, 23, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"5f9975b665", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"bdeff75a0e72e9003fc01edf36908e3a9a8b17e604a397f659196bdefe097421", Pod:"whisker-5f9975b665-sq7zc", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali1eb6e3cdb16", MAC:"fa:2c:a2:2c:ca:14", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 01:23:47.808051 env[1382]: 2025-08-13 01:23:47.802 [INFO][3588] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="bdeff75a0e72e9003fc01edf36908e3a9a8b17e604a397f659196bdefe097421" Namespace="calico-system" Pod="whisker-5f9975b665-sq7zc" WorkloadEndpoint="localhost-k8s-whisker--5f9975b665--sq7zc-eth0" Aug 13 01:23:47.812355 env[1382]: time="2025-08-13T01:23:47.812297871Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 01:23:47.812432 env[1382]: time="2025-08-13T01:23:47.812418415Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 01:23:47.812502 env[1382]: time="2025-08-13T01:23:47.812481984Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 01:23:47.816047 env[1382]: time="2025-08-13T01:23:47.812630623Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/bdeff75a0e72e9003fc01edf36908e3a9a8b17e604a397f659196bdefe097421 pid=3625 runtime=io.containerd.runc.v2 Aug 13 01:23:47.835949 systemd-resolved[1277]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Aug 13 01:23:47.863389 env[1382]: time="2025-08-13T01:23:47.863362831Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5f9975b665-sq7zc,Uid:cecb21ff-c219-493a-b2c9-8ebaedbbe331,Namespace:calico-system,Attempt:0,} returns sandbox id \"bdeff75a0e72e9003fc01edf36908e3a9a8b17e604a397f659196bdefe097421\"" Aug 13 01:23:47.890461 env[1382]: time="2025-08-13T01:23:47.890441155Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\"" Aug 13 01:23:48.969661 env[1382]: time="2025-08-13T01:23:48.969585030Z" level=info msg="StopPodSandbox for \"3c9d512f270b3f8067a730dbdae6e9e3fe324ed604bf5cfe32a26fa9ed47e917\"" Aug 13 01:23:48.971747 env[1382]: time="2025-08-13T01:23:48.970599476Z" level=info msg="StopPodSandbox for \"50dca8a6a1dd25344f1d6346513145e38d8d4ca5c36fcd5592acb126d39e99a8\"" Aug 13 01:23:48.972662 kubelet[2284]: I0813 01:23:48.972634 2284 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0aa13eb1-cd4b-4065-b579-c9678960e68d" path="/var/lib/kubelet/pods/0aa13eb1-cd4b-4065-b579-c9678960e68d/volumes" Aug 13 01:23:49.029321 env[1382]: 2025-08-13 01:23:49.004 [INFO][3698] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="50dca8a6a1dd25344f1d6346513145e38d8d4ca5c36fcd5592acb126d39e99a8" Aug 13 01:23:49.029321 env[1382]: 2025-08-13 01:23:49.004 [INFO][3698] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="50dca8a6a1dd25344f1d6346513145e38d8d4ca5c36fcd5592acb126d39e99a8" iface="eth0" netns="/var/run/netns/cni-3aa08805-ca17-7f47-f72f-37bb7dc87207" Aug 13 01:23:49.029321 env[1382]: 2025-08-13 01:23:49.004 [INFO][3698] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="50dca8a6a1dd25344f1d6346513145e38d8d4ca5c36fcd5592acb126d39e99a8" iface="eth0" netns="/var/run/netns/cni-3aa08805-ca17-7f47-f72f-37bb7dc87207" Aug 13 01:23:49.029321 env[1382]: 2025-08-13 01:23:49.004 [INFO][3698] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="50dca8a6a1dd25344f1d6346513145e38d8d4ca5c36fcd5592acb126d39e99a8" iface="eth0" netns="/var/run/netns/cni-3aa08805-ca17-7f47-f72f-37bb7dc87207" Aug 13 01:23:49.029321 env[1382]: 2025-08-13 01:23:49.004 [INFO][3698] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="50dca8a6a1dd25344f1d6346513145e38d8d4ca5c36fcd5592acb126d39e99a8" Aug 13 01:23:49.029321 env[1382]: 2025-08-13 01:23:49.004 [INFO][3698] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="50dca8a6a1dd25344f1d6346513145e38d8d4ca5c36fcd5592acb126d39e99a8" Aug 13 01:23:49.029321 env[1382]: 2025-08-13 01:23:49.023 [INFO][3711] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="50dca8a6a1dd25344f1d6346513145e38d8d4ca5c36fcd5592acb126d39e99a8" HandleID="k8s-pod-network.50dca8a6a1dd25344f1d6346513145e38d8d4ca5c36fcd5592acb126d39e99a8" Workload="localhost-k8s-calico--kube--controllers--65f7979575--9d4tp-eth0" Aug 13 01:23:49.029321 env[1382]: 2025-08-13 01:23:49.023 [INFO][3711] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 01:23:49.029321 env[1382]: 2025-08-13 01:23:49.023 [INFO][3711] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 01:23:49.029321 env[1382]: 2025-08-13 01:23:49.026 [WARNING][3711] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="50dca8a6a1dd25344f1d6346513145e38d8d4ca5c36fcd5592acb126d39e99a8" HandleID="k8s-pod-network.50dca8a6a1dd25344f1d6346513145e38d8d4ca5c36fcd5592acb126d39e99a8" Workload="localhost-k8s-calico--kube--controllers--65f7979575--9d4tp-eth0" Aug 13 01:23:49.029321 env[1382]: 2025-08-13 01:23:49.026 [INFO][3711] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="50dca8a6a1dd25344f1d6346513145e38d8d4ca5c36fcd5592acb126d39e99a8" HandleID="k8s-pod-network.50dca8a6a1dd25344f1d6346513145e38d8d4ca5c36fcd5592acb126d39e99a8" Workload="localhost-k8s-calico--kube--controllers--65f7979575--9d4tp-eth0" Aug 13 01:23:49.029321 env[1382]: 2025-08-13 01:23:49.027 [INFO][3711] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 01:23:49.029321 env[1382]: 2025-08-13 01:23:49.028 [INFO][3698] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="50dca8a6a1dd25344f1d6346513145e38d8d4ca5c36fcd5592acb126d39e99a8" Aug 13 01:23:49.030984 systemd[1]: run-netns-cni\x2d3aa08805\x2dca17\x2d7f47\x2df72f\x2d37bb7dc87207.mount: Deactivated successfully. Aug 13 01:23:49.032409 env[1382]: time="2025-08-13T01:23:49.032383915Z" level=info msg="TearDown network for sandbox \"50dca8a6a1dd25344f1d6346513145e38d8d4ca5c36fcd5592acb126d39e99a8\" successfully" Aug 13 01:23:49.032450 env[1382]: time="2025-08-13T01:23:49.032409385Z" level=info msg="StopPodSandbox for \"50dca8a6a1dd25344f1d6346513145e38d8d4ca5c36fcd5592acb126d39e99a8\" returns successfully" Aug 13 01:23:49.032876 env[1382]: time="2025-08-13T01:23:49.032856897Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-65f7979575-9d4tp,Uid:4c32448d-bd5d-4dff-a3e1-988c6198e659,Namespace:calico-system,Attempt:1,}" Aug 13 01:23:49.049149 env[1382]: 2025-08-13 01:23:49.015 [INFO][3703] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="3c9d512f270b3f8067a730dbdae6e9e3fe324ed604bf5cfe32a26fa9ed47e917" Aug 13 01:23:49.049149 env[1382]: 2025-08-13 01:23:49.015 [INFO][3703] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="3c9d512f270b3f8067a730dbdae6e9e3fe324ed604bf5cfe32a26fa9ed47e917" iface="eth0" netns="/var/run/netns/cni-82591c5d-4355-fde2-51f5-6fdc555cba6d" Aug 13 01:23:49.049149 env[1382]: 2025-08-13 01:23:49.016 [INFO][3703] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="3c9d512f270b3f8067a730dbdae6e9e3fe324ed604bf5cfe32a26fa9ed47e917" iface="eth0" netns="/var/run/netns/cni-82591c5d-4355-fde2-51f5-6fdc555cba6d" Aug 13 01:23:49.049149 env[1382]: 2025-08-13 01:23:49.016 [INFO][3703] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="3c9d512f270b3f8067a730dbdae6e9e3fe324ed604bf5cfe32a26fa9ed47e917" iface="eth0" netns="/var/run/netns/cni-82591c5d-4355-fde2-51f5-6fdc555cba6d" Aug 13 01:23:49.049149 env[1382]: 2025-08-13 01:23:49.016 [INFO][3703] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="3c9d512f270b3f8067a730dbdae6e9e3fe324ed604bf5cfe32a26fa9ed47e917" Aug 13 01:23:49.049149 env[1382]: 2025-08-13 01:23:49.016 [INFO][3703] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3c9d512f270b3f8067a730dbdae6e9e3fe324ed604bf5cfe32a26fa9ed47e917" Aug 13 01:23:49.049149 env[1382]: 2025-08-13 01:23:49.036 [INFO][3716] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3c9d512f270b3f8067a730dbdae6e9e3fe324ed604bf5cfe32a26fa9ed47e917" HandleID="k8s-pod-network.3c9d512f270b3f8067a730dbdae6e9e3fe324ed604bf5cfe32a26fa9ed47e917" Workload="localhost-k8s-coredns--7c65d6cfc9--t8vb4-eth0" Aug 13 01:23:49.049149 env[1382]: 2025-08-13 01:23:49.036 [INFO][3716] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 01:23:49.049149 env[1382]: 2025-08-13 01:23:49.036 [INFO][3716] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 01:23:49.049149 env[1382]: 2025-08-13 01:23:49.046 [WARNING][3716] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="3c9d512f270b3f8067a730dbdae6e9e3fe324ed604bf5cfe32a26fa9ed47e917" HandleID="k8s-pod-network.3c9d512f270b3f8067a730dbdae6e9e3fe324ed604bf5cfe32a26fa9ed47e917" Workload="localhost-k8s-coredns--7c65d6cfc9--t8vb4-eth0" Aug 13 01:23:49.049149 env[1382]: 2025-08-13 01:23:49.046 [INFO][3716] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3c9d512f270b3f8067a730dbdae6e9e3fe324ed604bf5cfe32a26fa9ed47e917" HandleID="k8s-pod-network.3c9d512f270b3f8067a730dbdae6e9e3fe324ed604bf5cfe32a26fa9ed47e917" Workload="localhost-k8s-coredns--7c65d6cfc9--t8vb4-eth0" Aug 13 01:23:49.049149 env[1382]: 2025-08-13 01:23:49.046 [INFO][3716] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 01:23:49.049149 env[1382]: 2025-08-13 01:23:49.048 [INFO][3703] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="3c9d512f270b3f8067a730dbdae6e9e3fe324ed604bf5cfe32a26fa9ed47e917" Aug 13 01:23:49.050886 systemd[1]: run-netns-cni\x2d82591c5d\x2d4355\x2dfde2\x2d51f5\x2d6fdc555cba6d.mount: Deactivated successfully. 
Aug 13 01:23:49.051751 env[1382]: time="2025-08-13T01:23:49.051729970Z" level=info msg="TearDown network for sandbox \"3c9d512f270b3f8067a730dbdae6e9e3fe324ed604bf5cfe32a26fa9ed47e917\" successfully" Aug 13 01:23:49.051809 env[1382]: time="2025-08-13T01:23:49.051797049Z" level=info msg="StopPodSandbox for \"3c9d512f270b3f8067a730dbdae6e9e3fe324ed604bf5cfe32a26fa9ed47e917\" returns successfully" Aug 13 01:23:49.052211 env[1382]: time="2025-08-13T01:23:49.052197908Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-t8vb4,Uid:0722f301-b3c1-4eb6-ad2d-cc09409b96c2,Namespace:kube-system,Attempt:1,}" Aug 13 01:23:49.126575 systemd-networkd[1112]: cali213d7d89350: Link UP Aug 13 01:23:49.129701 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Aug 13 01:23:49.129753 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali213d7d89350: link becomes ready Aug 13 01:23:49.129840 systemd-networkd[1112]: cali213d7d89350: Gained carrier Aug 13 01:23:49.143168 env[1382]: 2025-08-13 01:23:49.065 [INFO][3725] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Aug 13 01:23:49.143168 env[1382]: 2025-08-13 01:23:49.073 [INFO][3725] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--65f7979575--9d4tp-eth0 calico-kube-controllers-65f7979575- calico-system 4c32448d-bd5d-4dff-a3e1-988c6198e659 883 0 2025-08-13 01:23:27 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:65f7979575 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-65f7979575-9d4tp eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali213d7d89350 [] [] }} ContainerID="5659649167bb37ef941fc054e308df32359c580bfe6b0b5d04ca7ac03d368ce5" Namespace="calico-system" Pod="calico-kube-controllers-65f7979575-9d4tp" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--65f7979575--9d4tp-" Aug 13 01:23:49.143168 env[1382]: 2025-08-13 01:23:49.073 [INFO][3725] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="5659649167bb37ef941fc054e308df32359c580bfe6b0b5d04ca7ac03d368ce5" Namespace="calico-system" Pod="calico-kube-controllers-65f7979575-9d4tp" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--65f7979575--9d4tp-eth0" Aug 13 01:23:49.143168 env[1382]: 2025-08-13 01:23:49.101 [INFO][3747] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5659649167bb37ef941fc054e308df32359c580bfe6b0b5d04ca7ac03d368ce5" HandleID="k8s-pod-network.5659649167bb37ef941fc054e308df32359c580bfe6b0b5d04ca7ac03d368ce5" Workload="localhost-k8s-calico--kube--controllers--65f7979575--9d4tp-eth0" Aug 13 01:23:49.143168 env[1382]: 2025-08-13 01:23:49.101 [INFO][3747] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="5659649167bb37ef941fc054e308df32359c580bfe6b0b5d04ca7ac03d368ce5" HandleID="k8s-pod-network.5659649167bb37ef941fc054e308df32359c580bfe6b0b5d04ca7ac03d368ce5" Workload="localhost-k8s-calico--kube--controllers--65f7979575--9d4tp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d51d0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-65f7979575-9d4tp", "timestamp":"2025-08-13 01:23:49.101685179 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, 
IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 13 01:23:49.143168 env[1382]: 2025-08-13 01:23:49.102 [INFO][3747] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 01:23:49.143168 env[1382]: 2025-08-13 01:23:49.102 [INFO][3747] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 01:23:49.143168 env[1382]: 2025-08-13 01:23:49.102 [INFO][3747] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Aug 13 01:23:49.143168 env[1382]: 2025-08-13 01:23:49.108 [INFO][3747] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.5659649167bb37ef941fc054e308df32359c580bfe6b0b5d04ca7ac03d368ce5" host="localhost" Aug 13 01:23:49.143168 env[1382]: 2025-08-13 01:23:49.110 [INFO][3747] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Aug 13 01:23:49.143168 env[1382]: 2025-08-13 01:23:49.112 [INFO][3747] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Aug 13 01:23:49.143168 env[1382]: 2025-08-13 01:23:49.113 [INFO][3747] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Aug 13 01:23:49.143168 env[1382]: 2025-08-13 01:23:49.114 [INFO][3747] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Aug 13 01:23:49.143168 env[1382]: 2025-08-13 01:23:49.114 [INFO][3747] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.5659649167bb37ef941fc054e308df32359c580bfe6b0b5d04ca7ac03d368ce5" host="localhost" Aug 13 01:23:49.143168 env[1382]: 2025-08-13 01:23:49.114 [INFO][3747] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.5659649167bb37ef941fc054e308df32359c580bfe6b0b5d04ca7ac03d368ce5 Aug 13 01:23:49.143168 env[1382]: 2025-08-13 01:23:49.117 [INFO][3747] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.5659649167bb37ef941fc054e308df32359c580bfe6b0b5d04ca7ac03d368ce5" host="localhost" Aug 13 01:23:49.143168 env[1382]: 2025-08-13 01:23:49.120 [INFO][3747] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.5659649167bb37ef941fc054e308df32359c580bfe6b0b5d04ca7ac03d368ce5" host="localhost" Aug 13 01:23:49.143168 env[1382]: 2025-08-13 01:23:49.120 [INFO][3747] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.5659649167bb37ef941fc054e308df32359c580bfe6b0b5d04ca7ac03d368ce5" host="localhost" Aug 13 01:23:49.143168 env[1382]: 2025-08-13 01:23:49.120 [INFO][3747] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Aug 13 01:23:49.143168 env[1382]: 2025-08-13 01:23:49.120 [INFO][3747] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="5659649167bb37ef941fc054e308df32359c580bfe6b0b5d04ca7ac03d368ce5" HandleID="k8s-pod-network.5659649167bb37ef941fc054e308df32359c580bfe6b0b5d04ca7ac03d368ce5" Workload="localhost-k8s-calico--kube--controllers--65f7979575--9d4tp-eth0" Aug 13 01:23:49.144210 env[1382]: 2025-08-13 01:23:49.124 [INFO][3725] cni-plugin/k8s.go 418: Populated endpoint ContainerID="5659649167bb37ef941fc054e308df32359c580bfe6b0b5d04ca7ac03d368ce5" Namespace="calico-system" Pod="calico-kube-controllers-65f7979575-9d4tp" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--65f7979575--9d4tp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--65f7979575--9d4tp-eth0", GenerateName:"calico-kube-controllers-65f7979575-", Namespace:"calico-system", SelfLink:"", UID:"4c32448d-bd5d-4dff-a3e1-988c6198e659", ResourceVersion:"883", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 1, 23, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"65f7979575", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-65f7979575-9d4tp", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali213d7d89350", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 01:23:49.144210 env[1382]: 2025-08-13 01:23:49.124 [INFO][3725] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="5659649167bb37ef941fc054e308df32359c580bfe6b0b5d04ca7ac03d368ce5" Namespace="calico-system" Pod="calico-kube-controllers-65f7979575-9d4tp" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--65f7979575--9d4tp-eth0" Aug 13 01:23:49.144210 env[1382]: 2025-08-13 01:23:49.124 [INFO][3725] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali213d7d89350 ContainerID="5659649167bb37ef941fc054e308df32359c580bfe6b0b5d04ca7ac03d368ce5" Namespace="calico-system" Pod="calico-kube-controllers-65f7979575-9d4tp" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--65f7979575--9d4tp-eth0" Aug 13 01:23:49.144210 env[1382]: 2025-08-13 01:23:49.129 [INFO][3725] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="5659649167bb37ef941fc054e308df32359c580bfe6b0b5d04ca7ac03d368ce5" Namespace="calico-system" Pod="calico-kube-controllers-65f7979575-9d4tp" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--65f7979575--9d4tp-eth0" Aug 13 01:23:49.144210 env[1382]: 2025-08-13 01:23:49.134 [INFO][3725] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to 
endpoint ContainerID="5659649167bb37ef941fc054e308df32359c580bfe6b0b5d04ca7ac03d368ce5" Namespace="calico-system" Pod="calico-kube-controllers-65f7979575-9d4tp" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--65f7979575--9d4tp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--65f7979575--9d4tp-eth0", GenerateName:"calico-kube-controllers-65f7979575-", Namespace:"calico-system", SelfLink:"", UID:"4c32448d-bd5d-4dff-a3e1-988c6198e659", ResourceVersion:"883", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 1, 23, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"65f7979575", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"5659649167bb37ef941fc054e308df32359c580bfe6b0b5d04ca7ac03d368ce5", Pod:"calico-kube-controllers-65f7979575-9d4tp", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali213d7d89350", MAC:"2a:0d:43:22:90:27", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 01:23:49.144210 env[1382]: 2025-08-13 01:23:49.141 [INFO][3725] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="5659649167bb37ef941fc054e308df32359c580bfe6b0b5d04ca7ac03d368ce5" Namespace="calico-system" Pod="calico-kube-controllers-65f7979575-9d4tp" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--65f7979575--9d4tp-eth0" Aug 13 01:23:49.152836 env[1382]: time="2025-08-13T01:23:49.152793217Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 01:23:49.152953 env[1382]: time="2025-08-13T01:23:49.152940014Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 01:23:49.153018 env[1382]: time="2025-08-13T01:23:49.153005754Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 01:23:49.153163 env[1382]: time="2025-08-13T01:23:49.153141818Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/5659649167bb37ef941fc054e308df32359c580bfe6b0b5d04ca7ac03d368ce5 pid=3774 runtime=io.containerd.runc.v2 Aug 13 01:23:49.169618 systemd-resolved[1277]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Aug 13 01:23:49.187754 env[1382]: time="2025-08-13T01:23:49.187731178Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-65f7979575-9d4tp,Uid:4c32448d-bd5d-4dff-a3e1-988c6198e659,Namespace:calico-system,Attempt:1,} returns sandbox id \"5659649167bb37ef941fc054e308df32359c580bfe6b0b5d04ca7ac03d368ce5\"" Aug 13 01:23:49.229920 systemd-networkd[1112]: calid35276fe512: Link UP Aug 13 01:23:49.232173 systemd-networkd[1112]: calid35276fe512: Gained carrier Aug 13 01:23:49.232739 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): calid35276fe512: link becomes ready Aug 13 01:23:49.243036 env[1382]: 2025-08-13 01:23:49.078 [INFO][3734] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Aug 13 01:23:49.243036 env[1382]: 2025-08-13 01:23:49.088 [INFO][3734] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7c65d6cfc9--t8vb4-eth0 coredns-7c65d6cfc9- kube-system 0722f301-b3c1-4eb6-ad2d-cc09409b96c2 884 0 2025-08-13 01:23:14 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7c65d6cfc9-t8vb4 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calid35276fe512 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="dc205b519a72892cd83d55bed09a000108c1ac8b788ac46be2f5ccd36ba985b7" Namespace="kube-system" Pod="coredns-7c65d6cfc9-t8vb4" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--t8vb4-" Aug 13 01:23:49.243036 env[1382]: 2025-08-13 01:23:49.088 [INFO][3734] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="dc205b519a72892cd83d55bed09a000108c1ac8b788ac46be2f5ccd36ba985b7" Namespace="kube-system" Pod="coredns-7c65d6cfc9-t8vb4" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--t8vb4-eth0" Aug 13 01:23:49.243036 env[1382]: 2025-08-13 01:23:49.113 [INFO][3753] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="dc205b519a72892cd83d55bed09a000108c1ac8b788ac46be2f5ccd36ba985b7" HandleID="k8s-pod-network.dc205b519a72892cd83d55bed09a000108c1ac8b788ac46be2f5ccd36ba985b7" Workload="localhost-k8s-coredns--7c65d6cfc9--t8vb4-eth0" Aug 13 01:23:49.243036 env[1382]: 2025-08-13 01:23:49.113 [INFO][3753] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="dc205b519a72892cd83d55bed09a000108c1ac8b788ac46be2f5ccd36ba985b7" HandleID="k8s-pod-network.dc205b519a72892cd83d55bed09a000108c1ac8b788ac46be2f5ccd36ba985b7" Workload="localhost-k8s-coredns--7c65d6cfc9--t8vb4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000251100), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7c65d6cfc9-t8vb4", "timestamp":"2025-08-13 01:23:49.11372355 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), 
HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 13 01:23:49.243036 env[1382]: 2025-08-13 01:23:49.114 [INFO][3753] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 01:23:49.243036 env[1382]: 2025-08-13 01:23:49.121 [INFO][3753] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 01:23:49.243036 env[1382]: 2025-08-13 01:23:49.121 [INFO][3753] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Aug 13 01:23:49.243036 env[1382]: 2025-08-13 01:23:49.209 [INFO][3753] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.dc205b519a72892cd83d55bed09a000108c1ac8b788ac46be2f5ccd36ba985b7" host="localhost" Aug 13 01:23:49.243036 env[1382]: 2025-08-13 01:23:49.211 [INFO][3753] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Aug 13 01:23:49.243036 env[1382]: 2025-08-13 01:23:49.215 [INFO][3753] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Aug 13 01:23:49.243036 env[1382]: 2025-08-13 01:23:49.215 [INFO][3753] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Aug 13 01:23:49.243036 env[1382]: 2025-08-13 01:23:49.217 [INFO][3753] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Aug 13 01:23:49.243036 env[1382]: 2025-08-13 01:23:49.217 [INFO][3753] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.dc205b519a72892cd83d55bed09a000108c1ac8b788ac46be2f5ccd36ba985b7" host="localhost" Aug 13 01:23:49.243036 env[1382]: 2025-08-13 01:23:49.217 [INFO][3753] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.dc205b519a72892cd83d55bed09a000108c1ac8b788ac46be2f5ccd36ba985b7 Aug 13 01:23:49.243036 env[1382]: 2025-08-13 01:23:49.219 [INFO][3753] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.dc205b519a72892cd83d55bed09a000108c1ac8b788ac46be2f5ccd36ba985b7" host="localhost" Aug 13 01:23:49.243036 env[1382]: 2025-08-13 01:23:49.223 [INFO][3753] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.dc205b519a72892cd83d55bed09a000108c1ac8b788ac46be2f5ccd36ba985b7" host="localhost" Aug 13 01:23:49.243036 env[1382]: 2025-08-13 01:23:49.223 [INFO][3753] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.dc205b519a72892cd83d55bed09a000108c1ac8b788ac46be2f5ccd36ba985b7" host="localhost" Aug 13 01:23:49.243036 env[1382]: 2025-08-13 01:23:49.223 [INFO][3753] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Aug 13 01:23:49.243036 env[1382]: 2025-08-13 01:23:49.223 [INFO][3753] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="dc205b519a72892cd83d55bed09a000108c1ac8b788ac46be2f5ccd36ba985b7" HandleID="k8s-pod-network.dc205b519a72892cd83d55bed09a000108c1ac8b788ac46be2f5ccd36ba985b7" Workload="localhost-k8s-coredns--7c65d6cfc9--t8vb4-eth0" Aug 13 01:23:49.243648 env[1382]: 2025-08-13 01:23:49.225 [INFO][3734] cni-plugin/k8s.go 418: Populated endpoint ContainerID="dc205b519a72892cd83d55bed09a000108c1ac8b788ac46be2f5ccd36ba985b7" Namespace="kube-system" Pod="coredns-7c65d6cfc9-t8vb4" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--t8vb4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--t8vb4-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"0722f301-b3c1-4eb6-ad2d-cc09409b96c2", ResourceVersion:"884", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 1, 23, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7c65d6cfc9-t8vb4", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calid35276fe512", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 01:23:49.243648 env[1382]: 2025-08-13 01:23:49.225 [INFO][3734] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="dc205b519a72892cd83d55bed09a000108c1ac8b788ac46be2f5ccd36ba985b7" Namespace="kube-system" Pod="coredns-7c65d6cfc9-t8vb4" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--t8vb4-eth0" Aug 13 01:23:49.243648 env[1382]: 2025-08-13 01:23:49.225 [INFO][3734] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid35276fe512 ContainerID="dc205b519a72892cd83d55bed09a000108c1ac8b788ac46be2f5ccd36ba985b7" Namespace="kube-system" Pod="coredns-7c65d6cfc9-t8vb4" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--t8vb4-eth0" Aug 13 01:23:49.243648 env[1382]: 2025-08-13 01:23:49.235 [INFO][3734] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="dc205b519a72892cd83d55bed09a000108c1ac8b788ac46be2f5ccd36ba985b7" Namespace="kube-system" Pod="coredns-7c65d6cfc9-t8vb4" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--t8vb4-eth0" Aug 13 01:23:49.243648 env[1382]: 2025-08-13 01:23:49.235 
[INFO][3734] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="dc205b519a72892cd83d55bed09a000108c1ac8b788ac46be2f5ccd36ba985b7" Namespace="kube-system" Pod="coredns-7c65d6cfc9-t8vb4" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--t8vb4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--t8vb4-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"0722f301-b3c1-4eb6-ad2d-cc09409b96c2", ResourceVersion:"884", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 1, 23, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"dc205b519a72892cd83d55bed09a000108c1ac8b788ac46be2f5ccd36ba985b7", Pod:"coredns-7c65d6cfc9-t8vb4", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calid35276fe512", MAC:"a6:ea:52:a1:ae:71", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 01:23:49.243648 env[1382]: 2025-08-13 01:23:49.241 [INFO][3734] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="dc205b519a72892cd83d55bed09a000108c1ac8b788ac46be2f5ccd36ba985b7" Namespace="kube-system" Pod="coredns-7c65d6cfc9-t8vb4" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--t8vb4-eth0" Aug 13 01:23:49.251157 env[1382]: time="2025-08-13T01:23:49.251120257Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 01:23:49.251774 env[1382]: time="2025-08-13T01:23:49.251144907Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 01:23:49.251774 env[1382]: time="2025-08-13T01:23:49.251154895Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 01:23:49.251774 env[1382]: time="2025-08-13T01:23:49.251265281Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/dc205b519a72892cd83d55bed09a000108c1ac8b788ac46be2f5ccd36ba985b7 pid=3822 runtime=io.containerd.runc.v2 Aug 13 01:23:49.270968 systemd-resolved[1277]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Aug 13 01:23:49.291184 env[1382]: time="2025-08-13T01:23:49.291153504Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-t8vb4,Uid:0722f301-b3c1-4eb6-ad2d-cc09409b96c2,Namespace:kube-system,Attempt:1,} returns sandbox id \"dc205b519a72892cd83d55bed09a000108c1ac8b788ac46be2f5ccd36ba985b7\"" Aug 13 01:23:49.301598 env[1382]: time="2025-08-13T01:23:49.301571650Z" level=info msg="CreateContainer within sandbox \"dc205b519a72892cd83d55bed09a000108c1ac8b788ac46be2f5ccd36ba985b7\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Aug 13 01:23:49.313089 env[1382]: time="2025-08-13T01:23:49.313065356Z" level=info msg="CreateContainer within sandbox \"dc205b519a72892cd83d55bed09a000108c1ac8b788ac46be2f5ccd36ba985b7\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"884b1157b1b83e5dda44457dc7834ea91d0ca8e0f9be9cff23b01d0ed7d5afa6\"" Aug 13 01:23:49.314041 env[1382]: time="2025-08-13T01:23:49.313801270Z" level=info msg="StartContainer for \"884b1157b1b83e5dda44457dc7834ea91d0ca8e0f9be9cff23b01d0ed7d5afa6\"" Aug 13 01:23:49.355539 env[1382]: time="2025-08-13T01:23:49.355511667Z" level=info msg="StartContainer for \"884b1157b1b83e5dda44457dc7834ea91d0ca8e0f9be9cff23b01d0ed7d5afa6\" returns successfully" Aug 13 01:23:49.422502 env[1382]: time="2025-08-13T01:23:49.422479524Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/whisker:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 01:23:49.431799 env[1382]: time="2025-08-13T01:23:49.431781374Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:eb8f512acf9402730da120a7b0d47d3d9d451b56e6e5eb8bad53ab24f926f954,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 01:23:49.448248 env[1382]: time="2025-08-13T01:23:49.448231815Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/whisker:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 01:23:49.466857 env[1382]: time="2025-08-13T01:23:49.466782354Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 01:23:49.467424 env[1382]: time="2025-08-13T01:23:49.467392660Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\" returns image reference \"sha256:eb8f512acf9402730da120a7b0d47d3d9d451b56e6e5eb8bad53ab24f926f954\"" Aug 13 01:23:49.469845 env[1382]: time="2025-08-13T01:23:49.469821089Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\"" Aug 13 01:23:49.470825 env[1382]: time="2025-08-13T01:23:49.470806019Z" level=info msg="CreateContainer within sandbox \"bdeff75a0e72e9003fc01edf36908e3a9a8b17e604a397f659196bdefe097421\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Aug 13 01:23:49.482531 systemd[1]: 
var-lib-containerd-tmpmounts-containerd\x2dmount2228719391.mount: Deactivated successfully. Aug 13 01:23:49.487652 env[1382]: time="2025-08-13T01:23:49.487628983Z" level=info msg="CreateContainer within sandbox \"bdeff75a0e72e9003fc01edf36908e3a9a8b17e604a397f659196bdefe097421\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"b7e2889de67eb244572bedcc6c62268d69d5291d23474928a823aa0a6f1f6a3b\"" Aug 13 01:23:49.488962 env[1382]: time="2025-08-13T01:23:49.488658748Z" level=info msg="StartContainer for \"b7e2889de67eb244572bedcc6c62268d69d5291d23474928a823aa0a6f1f6a3b\"" Aug 13 01:23:49.537149 env[1382]: time="2025-08-13T01:23:49.537118551Z" level=info msg="StartContainer for \"b7e2889de67eb244572bedcc6c62268d69d5291d23474928a823aa0a6f1f6a3b\" returns successfully" Aug 13 01:23:49.729839 systemd-networkd[1112]: cali1eb6e3cdb16: Gained IPv6LL Aug 13 01:23:49.970244 env[1382]: time="2025-08-13T01:23:49.970216034Z" level=info msg="StopPodSandbox for \"3e4d21963abef3b06d333673cf344f4d1c10ebd22dfb2f388d29d696d7742f1a\"" Aug 13 01:23:49.971125 env[1382]: time="2025-08-13T01:23:49.970773984Z" level=info msg="StopPodSandbox for \"ec778eecec78b93364ad0bd0af3905ceef67f70cfa4f7449aae0ae38ac4c6d4d\"" Aug 13 01:23:50.049150 env[1382]: 2025-08-13 01:23:50.017 [INFO][3966] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="ec778eecec78b93364ad0bd0af3905ceef67f70cfa4f7449aae0ae38ac4c6d4d" Aug 13 01:23:50.049150 env[1382]: 2025-08-13 01:23:50.017 [INFO][3966] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="ec778eecec78b93364ad0bd0af3905ceef67f70cfa4f7449aae0ae38ac4c6d4d" iface="eth0" netns="/var/run/netns/cni-cd0b2627-65dd-8a2c-d098-e78556c0779f" Aug 13 01:23:50.049150 env[1382]: 2025-08-13 01:23:50.017 [INFO][3966] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="ec778eecec78b93364ad0bd0af3905ceef67f70cfa4f7449aae0ae38ac4c6d4d" iface="eth0" netns="/var/run/netns/cni-cd0b2627-65dd-8a2c-d098-e78556c0779f" Aug 13 01:23:50.049150 env[1382]: 2025-08-13 01:23:50.017 [INFO][3966] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="ec778eecec78b93364ad0bd0af3905ceef67f70cfa4f7449aae0ae38ac4c6d4d" iface="eth0" netns="/var/run/netns/cni-cd0b2627-65dd-8a2c-d098-e78556c0779f" Aug 13 01:23:50.049150 env[1382]: 2025-08-13 01:23:50.017 [INFO][3966] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="ec778eecec78b93364ad0bd0af3905ceef67f70cfa4f7449aae0ae38ac4c6d4d" Aug 13 01:23:50.049150 env[1382]: 2025-08-13 01:23:50.017 [INFO][3966] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ec778eecec78b93364ad0bd0af3905ceef67f70cfa4f7449aae0ae38ac4c6d4d" Aug 13 01:23:50.049150 env[1382]: 2025-08-13 01:23:50.033 [INFO][3978] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ec778eecec78b93364ad0bd0af3905ceef67f70cfa4f7449aae0ae38ac4c6d4d" HandleID="k8s-pod-network.ec778eecec78b93364ad0bd0af3905ceef67f70cfa4f7449aae0ae38ac4c6d4d" Workload="localhost-k8s-coredns--7c65d6cfc9--gg2v2-eth0" Aug 13 01:23:50.049150 env[1382]: 2025-08-13 01:23:50.033 [INFO][3978] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 01:23:50.049150 env[1382]: 2025-08-13 01:23:50.033 [INFO][3978] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 01:23:50.049150 env[1382]: 2025-08-13 01:23:50.036 [WARNING][3978] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ec778eecec78b93364ad0bd0af3905ceef67f70cfa4f7449aae0ae38ac4c6d4d" HandleID="k8s-pod-network.ec778eecec78b93364ad0bd0af3905ceef67f70cfa4f7449aae0ae38ac4c6d4d" Workload="localhost-k8s-coredns--7c65d6cfc9--gg2v2-eth0" Aug 13 01:23:50.049150 env[1382]: 2025-08-13 01:23:50.036 [INFO][3978] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ec778eecec78b93364ad0bd0af3905ceef67f70cfa4f7449aae0ae38ac4c6d4d" HandleID="k8s-pod-network.ec778eecec78b93364ad0bd0af3905ceef67f70cfa4f7449aae0ae38ac4c6d4d" Workload="localhost-k8s-coredns--7c65d6cfc9--gg2v2-eth0" Aug 13 01:23:50.049150 env[1382]: 2025-08-13 01:23:50.037 [INFO][3978] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 01:23:50.049150 env[1382]: 2025-08-13 01:23:50.048 [INFO][3966] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="ec778eecec78b93364ad0bd0af3905ceef67f70cfa4f7449aae0ae38ac4c6d4d" Aug 13 01:23:50.050138 env[1382]: time="2025-08-13T01:23:50.049287968Z" level=info msg="TearDown network for sandbox \"ec778eecec78b93364ad0bd0af3905ceef67f70cfa4f7449aae0ae38ac4c6d4d\" successfully" Aug 13 01:23:50.050138 env[1382]: time="2025-08-13T01:23:50.049309690Z" level=info msg="StopPodSandbox for \"ec778eecec78b93364ad0bd0af3905ceef67f70cfa4f7449aae0ae38ac4c6d4d\" returns successfully" Aug 13 01:23:50.050138 env[1382]: time="2025-08-13T01:23:50.049908597Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-gg2v2,Uid:a40d20da-5325-4fa5-9cd5-94a58f7ee4b0,Namespace:kube-system,Attempt:1,}" Aug 13 01:23:50.050631 env[1382]: 2025-08-13 01:23:50.022 [INFO][3965] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="3e4d21963abef3b06d333673cf344f4d1c10ebd22dfb2f388d29d696d7742f1a" Aug 13 01:23:50.050631 env[1382]: 2025-08-13 01:23:50.022 [INFO][3965] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="3e4d21963abef3b06d333673cf344f4d1c10ebd22dfb2f388d29d696d7742f1a" iface="eth0" netns="/var/run/netns/cni-04390af4-27bf-0da4-7216-9269b03f434a" Aug 13 01:23:50.050631 env[1382]: 2025-08-13 01:23:50.022 [INFO][3965] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="3e4d21963abef3b06d333673cf344f4d1c10ebd22dfb2f388d29d696d7742f1a" iface="eth0" netns="/var/run/netns/cni-04390af4-27bf-0da4-7216-9269b03f434a" Aug 13 01:23:50.050631 env[1382]: 2025-08-13 01:23:50.022 [INFO][3965] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="3e4d21963abef3b06d333673cf344f4d1c10ebd22dfb2f388d29d696d7742f1a" iface="eth0" netns="/var/run/netns/cni-04390af4-27bf-0da4-7216-9269b03f434a" Aug 13 01:23:50.050631 env[1382]: 2025-08-13 01:23:50.022 [INFO][3965] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="3e4d21963abef3b06d333673cf344f4d1c10ebd22dfb2f388d29d696d7742f1a" Aug 13 01:23:50.050631 env[1382]: 2025-08-13 01:23:50.022 [INFO][3965] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3e4d21963abef3b06d333673cf344f4d1c10ebd22dfb2f388d29d696d7742f1a" Aug 13 01:23:50.050631 env[1382]: 2025-08-13 01:23:50.043 [INFO][3983] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3e4d21963abef3b06d333673cf344f4d1c10ebd22dfb2f388d29d696d7742f1a" HandleID="k8s-pod-network.3e4d21963abef3b06d333673cf344f4d1c10ebd22dfb2f388d29d696d7742f1a" Workload="localhost-k8s-calico--apiserver--67b95cb99--9ggd9-eth0" Aug 13 01:23:50.050631 env[1382]: 2025-08-13 01:23:50.043 [INFO][3983] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Aug 13 01:23:50.050631 env[1382]: 2025-08-13 01:23:50.043 [INFO][3983] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 01:23:50.050631 env[1382]: 2025-08-13 01:23:50.046 [WARNING][3983] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="3e4d21963abef3b06d333673cf344f4d1c10ebd22dfb2f388d29d696d7742f1a" HandleID="k8s-pod-network.3e4d21963abef3b06d333673cf344f4d1c10ebd22dfb2f388d29d696d7742f1a" Workload="localhost-k8s-calico--apiserver--67b95cb99--9ggd9-eth0" Aug 13 01:23:50.050631 env[1382]: 2025-08-13 01:23:50.046 [INFO][3983] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3e4d21963abef3b06d333673cf344f4d1c10ebd22dfb2f388d29d696d7742f1a" HandleID="k8s-pod-network.3e4d21963abef3b06d333673cf344f4d1c10ebd22dfb2f388d29d696d7742f1a" Workload="localhost-k8s-calico--apiserver--67b95cb99--9ggd9-eth0" Aug 13 01:23:50.050631 env[1382]: 2025-08-13 01:23:50.047 [INFO][3983] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 01:23:50.050631 env[1382]: 2025-08-13 01:23:50.048 [INFO][3965] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="3e4d21963abef3b06d333673cf344f4d1c10ebd22dfb2f388d29d696d7742f1a" Aug 13 01:23:50.051079 env[1382]: time="2025-08-13T01:23:50.051066466Z" level=info msg="TearDown network for sandbox \"3e4d21963abef3b06d333673cf344f4d1c10ebd22dfb2f388d29d696d7742f1a\" successfully" Aug 13 01:23:50.051135 env[1382]: time="2025-08-13T01:23:50.051119044Z" level=info msg="StopPodSandbox for \"3e4d21963abef3b06d333673cf344f4d1c10ebd22dfb2f388d29d696d7742f1a\" returns successfully" Aug 13 01:23:50.051504 env[1382]: time="2025-08-13T01:23:50.051486410Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-67b95cb99-9ggd9,Uid:eeef8dd3-2401-4787-8ede-caf069e52bbf,Namespace:calico-apiserver,Attempt:1,}" Aug 13 01:23:50.132744 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Aug 13 01:23:50.132811 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): calie666ed159cc: link becomes ready Aug 13 01:23:50.133317 systemd-networkd[1112]: calie666ed159cc: Link UP Aug 13 01:23:50.133462 systemd-networkd[1112]: calie666ed159cc: Gained carrier Aug 13 01:23:50.147264 kubelet[2284]: I0813 01:23:50.147194 2284 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-t8vb4" podStartSLOduration=36.135255798 podStartE2EDuration="36.135255798s" podCreationTimestamp="2025-08-13 01:23:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 01:23:50.134786884 +0000 UTC m=+41.420007751" watchObservedRunningTime="2025-08-13 01:23:50.135255798 +0000 UTC m=+41.420476660" Aug 13 01:23:50.160284 env[1382]: 2025-08-13 01:23:50.066 [INFO][3992] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Aug 13 01:23:50.160284 env[1382]: 2025-08-13 01:23:50.073 [INFO][3992] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7c65d6cfc9--gg2v2-eth0 coredns-7c65d6cfc9- kube-system a40d20da-5325-4fa5-9cd5-94a58f7ee4b0 904 0 2025-08-13 01:23:14 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7c65d6cfc9-gg2v2 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calie666ed159cc [{dns UDP 53 0 } {dns-tcp TCP 53 0 
} {metrics TCP 9153 0 }] [] }} ContainerID="3069a16d2c18fcf082e0a5533c02ad86e577bbaf7c8cc38283fc3e95aa104f6e" Namespace="kube-system" Pod="coredns-7c65d6cfc9-gg2v2" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--gg2v2-" Aug 13 01:23:50.160284 env[1382]: 2025-08-13 01:23:50.073 [INFO][3992] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="3069a16d2c18fcf082e0a5533c02ad86e577bbaf7c8cc38283fc3e95aa104f6e" Namespace="kube-system" Pod="coredns-7c65d6cfc9-gg2v2" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--gg2v2-eth0" Aug 13 01:23:50.160284 env[1382]: 2025-08-13 01:23:50.102 [INFO][4017] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3069a16d2c18fcf082e0a5533c02ad86e577bbaf7c8cc38283fc3e95aa104f6e" HandleID="k8s-pod-network.3069a16d2c18fcf082e0a5533c02ad86e577bbaf7c8cc38283fc3e95aa104f6e" Workload="localhost-k8s-coredns--7c65d6cfc9--gg2v2-eth0" Aug 13 01:23:50.160284 env[1382]: 2025-08-13 01:23:50.102 [INFO][4017] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="3069a16d2c18fcf082e0a5533c02ad86e577bbaf7c8cc38283fc3e95aa104f6e" HandleID="k8s-pod-network.3069a16d2c18fcf082e0a5533c02ad86e577bbaf7c8cc38283fc3e95aa104f6e" Workload="localhost-k8s-coredns--7c65d6cfc9--gg2v2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5020), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7c65d6cfc9-gg2v2", "timestamp":"2025-08-13 01:23:50.102494273 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 13 01:23:50.160284 env[1382]: 2025-08-13 01:23:50.102 [INFO][4017] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 01:23:50.160284 env[1382]: 2025-08-13 01:23:50.102 [INFO][4017] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Aug 13 01:23:50.160284 env[1382]: 2025-08-13 01:23:50.102 [INFO][4017] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Aug 13 01:23:50.160284 env[1382]: 2025-08-13 01:23:50.106 [INFO][4017] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.3069a16d2c18fcf082e0a5533c02ad86e577bbaf7c8cc38283fc3e95aa104f6e" host="localhost" Aug 13 01:23:50.160284 env[1382]: 2025-08-13 01:23:50.109 [INFO][4017] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Aug 13 01:23:50.160284 env[1382]: 2025-08-13 01:23:50.113 [INFO][4017] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Aug 13 01:23:50.160284 env[1382]: 2025-08-13 01:23:50.115 [INFO][4017] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Aug 13 01:23:50.160284 env[1382]: 2025-08-13 01:23:50.116 [INFO][4017] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Aug 13 01:23:50.160284 env[1382]: 2025-08-13 01:23:50.116 [INFO][4017] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.3069a16d2c18fcf082e0a5533c02ad86e577bbaf7c8cc38283fc3e95aa104f6e" host="localhost" Aug 13 01:23:50.160284 env[1382]: 2025-08-13 01:23:50.117 [INFO][4017] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.3069a16d2c18fcf082e0a5533c02ad86e577bbaf7c8cc38283fc3e95aa104f6e Aug 13 01:23:50.160284 env[1382]: 2025-08-13 01:23:50.119 [INFO][4017] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.3069a16d2c18fcf082e0a5533c02ad86e577bbaf7c8cc38283fc3e95aa104f6e" host="localhost" Aug 13 01:23:50.160284 env[1382]: 2025-08-13 01:23:50.123 [INFO][4017] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.3069a16d2c18fcf082e0a5533c02ad86e577bbaf7c8cc38283fc3e95aa104f6e" host="localhost" Aug 13 01:23:50.160284 env[1382]: 2025-08-13 01:23:50.123 [INFO][4017] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.3069a16d2c18fcf082e0a5533c02ad86e577bbaf7c8cc38283fc3e95aa104f6e" host="localhost" Aug 13 01:23:50.160284 env[1382]: 2025-08-13 01:23:50.123 [INFO][4017] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Aug 13 01:23:50.160284 env[1382]: 2025-08-13 01:23:50.123 [INFO][4017] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="3069a16d2c18fcf082e0a5533c02ad86e577bbaf7c8cc38283fc3e95aa104f6e" HandleID="k8s-pod-network.3069a16d2c18fcf082e0a5533c02ad86e577bbaf7c8cc38283fc3e95aa104f6e" Workload="localhost-k8s-coredns--7c65d6cfc9--gg2v2-eth0" Aug 13 01:23:50.169606 env[1382]: 2025-08-13 01:23:50.126 [INFO][3992] cni-plugin/k8s.go 418: Populated endpoint ContainerID="3069a16d2c18fcf082e0a5533c02ad86e577bbaf7c8cc38283fc3e95aa104f6e" Namespace="kube-system" Pod="coredns-7c65d6cfc9-gg2v2" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--gg2v2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--gg2v2-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"a40d20da-5325-4fa5-9cd5-94a58f7ee4b0", ResourceVersion:"904", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 1, 23, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7c65d6cfc9-gg2v2", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie666ed159cc", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 01:23:50.169606 env[1382]: 2025-08-13 01:23:50.127 [INFO][3992] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="3069a16d2c18fcf082e0a5533c02ad86e577bbaf7c8cc38283fc3e95aa104f6e" Namespace="kube-system" Pod="coredns-7c65d6cfc9-gg2v2" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--gg2v2-eth0" Aug 13 01:23:50.169606 env[1382]: 2025-08-13 01:23:50.127 [INFO][3992] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie666ed159cc ContainerID="3069a16d2c18fcf082e0a5533c02ad86e577bbaf7c8cc38283fc3e95aa104f6e" Namespace="kube-system" Pod="coredns-7c65d6cfc9-gg2v2" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--gg2v2-eth0" Aug 13 01:23:50.169606 env[1382]: 2025-08-13 01:23:50.146 [INFO][3992] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3069a16d2c18fcf082e0a5533c02ad86e577bbaf7c8cc38283fc3e95aa104f6e" Namespace="kube-system" Pod="coredns-7c65d6cfc9-gg2v2" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--gg2v2-eth0" Aug 13 01:23:50.169606 env[1382]: 2025-08-13 01:23:50.147 
[INFO][3992] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="3069a16d2c18fcf082e0a5533c02ad86e577bbaf7c8cc38283fc3e95aa104f6e" Namespace="kube-system" Pod="coredns-7c65d6cfc9-gg2v2" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--gg2v2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--gg2v2-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"a40d20da-5325-4fa5-9cd5-94a58f7ee4b0", ResourceVersion:"904", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 1, 23, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"3069a16d2c18fcf082e0a5533c02ad86e577bbaf7c8cc38283fc3e95aa104f6e", Pod:"coredns-7c65d6cfc9-gg2v2", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie666ed159cc", MAC:"ca:8a:8d:e1:16:7d", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 01:23:50.169606 env[1382]: 2025-08-13 01:23:50.154 [INFO][3992] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="3069a16d2c18fcf082e0a5533c02ad86e577bbaf7c8cc38283fc3e95aa104f6e" Namespace="kube-system" Pod="coredns-7c65d6cfc9-gg2v2" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--gg2v2-eth0" Aug 13 01:23:50.175649 env[1382]: time="2025-08-13T01:23:50.175592910Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 01:23:50.175764 env[1382]: time="2025-08-13T01:23:50.175737846Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 01:23:50.175764 env[1382]: time="2025-08-13T01:23:50.175751676Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 01:23:50.176011 env[1382]: time="2025-08-13T01:23:50.175914746Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/3069a16d2c18fcf082e0a5533c02ad86e577bbaf7c8cc38283fc3e95aa104f6e pid=4045 runtime=io.containerd.runc.v2 Aug 13 01:23:50.181000 audit[4062]: NETFILTER_CFG table=filter:99 family=2 entries=19 op=nft_register_rule pid=4062 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 01:23:50.181000 audit[4062]: SYSCALL arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7ffefb94b1c0 a2=0 a3=7ffefb94b1ac items=0 ppid=2387 pid=4062 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:23:50.181000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 01:23:50.193000 audit[4062]: NETFILTER_CFG table=nat:100 family=2 entries=33 op=nft_register_chain pid=4062 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 01:23:50.193000 audit[4062]: SYSCALL arch=c000003e syscall=46 success=yes exit=13428 a0=3 a1=7ffefb94b1c0 a2=0 a3=7ffefb94b1ac items=0 ppid=2387 pid=4062 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:23:50.193000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 01:23:50.196080 systemd-resolved[1277]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Aug 13 01:23:50.225000 audit[4082]: NETFILTER_CFG table=filter:101 family=2 entries=16 op=nft_register_rule pid=4082 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 01:23:50.225000 audit[4082]: SYSCALL arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7ffde2b340c0 a2=0 a3=7ffde2b340ac items=0 ppid=2387 pid=4082 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:23:50.225000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 01:23:50.230500 systemd-networkd[1112]: calidd7e4cd8209: Link UP Aug 13 01:23:50.232016 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): calidd7e4cd8209: link becomes ready Aug 13 01:23:50.231936 systemd-networkd[1112]: calidd7e4cd8209: Gained carrier Aug 13 01:23:50.231000 audit[4082]: NETFILTER_CFG table=nat:102 family=2 entries=18 op=nft_register_rule pid=4082 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 01:23:50.231000 audit[4082]: SYSCALL arch=c000003e syscall=46 success=yes exit=5004 a0=3 a1=7ffde2b340c0 a2=0 a3=0 items=0 ppid=2387 pid=4082 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:23:50.231000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 01:23:50.240661 env[1382]: 
time="2025-08-13T01:23:50.240626311Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-gg2v2,Uid:a40d20da-5325-4fa5-9cd5-94a58f7ee4b0,Namespace:kube-system,Attempt:1,} returns sandbox id \"3069a16d2c18fcf082e0a5533c02ad86e577bbaf7c8cc38283fc3e95aa104f6e\"" Aug 13 01:23:50.242796 env[1382]: time="2025-08-13T01:23:50.242332830Z" level=info msg="CreateContainer within sandbox \"3069a16d2c18fcf082e0a5533c02ad86e577bbaf7c8cc38283fc3e95aa104f6e\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Aug 13 01:23:50.244917 env[1382]: 2025-08-13 01:23:50.081 [INFO][4002] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Aug 13 01:23:50.244917 env[1382]: 2025-08-13 01:23:50.088 [INFO][4002] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--67b95cb99--9ggd9-eth0 calico-apiserver-67b95cb99- calico-apiserver eeef8dd3-2401-4787-8ede-caf069e52bbf 905 0 2025-08-13 01:23:23 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:67b95cb99 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-67b95cb99-9ggd9 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calidd7e4cd8209 [] [] }} ContainerID="aaaae2f715e7d88ded37704e1221d6dd681a67c9ba39a1958007e8c7167d82d3" Namespace="calico-apiserver" Pod="calico-apiserver-67b95cb99-9ggd9" WorkloadEndpoint="localhost-k8s-calico--apiserver--67b95cb99--9ggd9-" Aug 13 01:23:50.244917 env[1382]: 2025-08-13 01:23:50.088 [INFO][4002] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="aaaae2f715e7d88ded37704e1221d6dd681a67c9ba39a1958007e8c7167d82d3" Namespace="calico-apiserver" Pod="calico-apiserver-67b95cb99-9ggd9" WorkloadEndpoint="localhost-k8s-calico--apiserver--67b95cb99--9ggd9-eth0" Aug 13 01:23:50.244917 env[1382]: 2025-08-13 01:23:50.127 [INFO][4022] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="aaaae2f715e7d88ded37704e1221d6dd681a67c9ba39a1958007e8c7167d82d3" HandleID="k8s-pod-network.aaaae2f715e7d88ded37704e1221d6dd681a67c9ba39a1958007e8c7167d82d3" Workload="localhost-k8s-calico--apiserver--67b95cb99--9ggd9-eth0" Aug 13 01:23:50.244917 env[1382]: 2025-08-13 01:23:50.147 [INFO][4022] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="aaaae2f715e7d88ded37704e1221d6dd681a67c9ba39a1958007e8c7167d82d3" HandleID="k8s-pod-network.aaaae2f715e7d88ded37704e1221d6dd681a67c9ba39a1958007e8c7167d82d3" Workload="localhost-k8s-calico--apiserver--67b95cb99--9ggd9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002c8fa0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-67b95cb99-9ggd9", "timestamp":"2025-08-13 01:23:50.127141555 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 13 01:23:50.244917 env[1382]: 2025-08-13 01:23:50.147 [INFO][4022] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 01:23:50.244917 env[1382]: 2025-08-13 01:23:50.147 [INFO][4022] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Aug 13 01:23:50.244917 env[1382]: 2025-08-13 01:23:50.147 [INFO][4022] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Aug 13 01:23:50.244917 env[1382]: 2025-08-13 01:23:50.207 [INFO][4022] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.aaaae2f715e7d88ded37704e1221d6dd681a67c9ba39a1958007e8c7167d82d3" host="localhost" Aug 13 01:23:50.244917 env[1382]: 2025-08-13 01:23:50.210 [INFO][4022] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Aug 13 01:23:50.244917 env[1382]: 2025-08-13 01:23:50.212 [INFO][4022] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Aug 13 01:23:50.244917 env[1382]: 2025-08-13 01:23:50.213 [INFO][4022] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Aug 13 01:23:50.244917 env[1382]: 2025-08-13 01:23:50.214 [INFO][4022] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Aug 13 01:23:50.244917 env[1382]: 2025-08-13 01:23:50.215 [INFO][4022] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.aaaae2f715e7d88ded37704e1221d6dd681a67c9ba39a1958007e8c7167d82d3" host="localhost" Aug 13 01:23:50.244917 env[1382]: 2025-08-13 01:23:50.216 [INFO][4022] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.aaaae2f715e7d88ded37704e1221d6dd681a67c9ba39a1958007e8c7167d82d3 Aug 13 01:23:50.244917 env[1382]: 2025-08-13 01:23:50.219 [INFO][4022] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.aaaae2f715e7d88ded37704e1221d6dd681a67c9ba39a1958007e8c7167d82d3" host="localhost" Aug 13 01:23:50.244917 env[1382]: 2025-08-13 01:23:50.225 [INFO][4022] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.aaaae2f715e7d88ded37704e1221d6dd681a67c9ba39a1958007e8c7167d82d3" host="localhost" Aug 13 01:23:50.244917 env[1382]: 2025-08-13 01:23:50.225 [INFO][4022] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.aaaae2f715e7d88ded37704e1221d6dd681a67c9ba39a1958007e8c7167d82d3" host="localhost" Aug 13 01:23:50.244917 env[1382]: 2025-08-13 01:23:50.225 [INFO][4022] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Aug 13 01:23:50.244917 env[1382]: 2025-08-13 01:23:50.225 [INFO][4022] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="aaaae2f715e7d88ded37704e1221d6dd681a67c9ba39a1958007e8c7167d82d3" HandleID="k8s-pod-network.aaaae2f715e7d88ded37704e1221d6dd681a67c9ba39a1958007e8c7167d82d3" Workload="localhost-k8s-calico--apiserver--67b95cb99--9ggd9-eth0" Aug 13 01:23:50.245790 env[1382]: 2025-08-13 01:23:50.228 [INFO][4002] cni-plugin/k8s.go 418: Populated endpoint ContainerID="aaaae2f715e7d88ded37704e1221d6dd681a67c9ba39a1958007e8c7167d82d3" Namespace="calico-apiserver" Pod="calico-apiserver-67b95cb99-9ggd9" WorkloadEndpoint="localhost-k8s-calico--apiserver--67b95cb99--9ggd9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--67b95cb99--9ggd9-eth0", GenerateName:"calico-apiserver-67b95cb99-", Namespace:"calico-apiserver", SelfLink:"", UID:"eeef8dd3-2401-4787-8ede-caf069e52bbf", ResourceVersion:"905", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 1, 23, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"67b95cb99", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-67b95cb99-9ggd9", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calidd7e4cd8209", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 01:23:50.245790 env[1382]: 2025-08-13 01:23:50.228 [INFO][4002] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="aaaae2f715e7d88ded37704e1221d6dd681a67c9ba39a1958007e8c7167d82d3" Namespace="calico-apiserver" Pod="calico-apiserver-67b95cb99-9ggd9" WorkloadEndpoint="localhost-k8s-calico--apiserver--67b95cb99--9ggd9-eth0" Aug 13 01:23:50.245790 env[1382]: 2025-08-13 01:23:50.228 [INFO][4002] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calidd7e4cd8209 ContainerID="aaaae2f715e7d88ded37704e1221d6dd681a67c9ba39a1958007e8c7167d82d3" Namespace="calico-apiserver" Pod="calico-apiserver-67b95cb99-9ggd9" WorkloadEndpoint="localhost-k8s-calico--apiserver--67b95cb99--9ggd9-eth0" Aug 13 01:23:50.245790 env[1382]: 2025-08-13 01:23:50.232 [INFO][4002] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="aaaae2f715e7d88ded37704e1221d6dd681a67c9ba39a1958007e8c7167d82d3" Namespace="calico-apiserver" Pod="calico-apiserver-67b95cb99-9ggd9" WorkloadEndpoint="localhost-k8s-calico--apiserver--67b95cb99--9ggd9-eth0" Aug 13 01:23:50.245790 env[1382]: 2025-08-13 01:23:50.232 [INFO][4002] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="aaaae2f715e7d88ded37704e1221d6dd681a67c9ba39a1958007e8c7167d82d3" 
Namespace="calico-apiserver" Pod="calico-apiserver-67b95cb99-9ggd9" WorkloadEndpoint="localhost-k8s-calico--apiserver--67b95cb99--9ggd9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--67b95cb99--9ggd9-eth0", GenerateName:"calico-apiserver-67b95cb99-", Namespace:"calico-apiserver", SelfLink:"", UID:"eeef8dd3-2401-4787-8ede-caf069e52bbf", ResourceVersion:"905", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 1, 23, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"67b95cb99", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"aaaae2f715e7d88ded37704e1221d6dd681a67c9ba39a1958007e8c7167d82d3", Pod:"calico-apiserver-67b95cb99-9ggd9", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calidd7e4cd8209", MAC:"22:66:2f:97:01:6d", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 01:23:50.245790 env[1382]: 2025-08-13 01:23:50.239 [INFO][4002] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="aaaae2f715e7d88ded37704e1221d6dd681a67c9ba39a1958007e8c7167d82d3" Namespace="calico-apiserver" Pod="calico-apiserver-67b95cb99-9ggd9" WorkloadEndpoint="localhost-k8s-calico--apiserver--67b95cb99--9ggd9-eth0" Aug 13 01:23:50.247567 env[1382]: time="2025-08-13T01:23:50.247547764Z" level=info msg="CreateContainer within sandbox \"3069a16d2c18fcf082e0a5533c02ad86e577bbaf7c8cc38283fc3e95aa104f6e\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"be0c76201aea12b82710abef6ff24c7dcb86a49d48a03720e2a10ff01309044f\"" Aug 13 01:23:50.248419 env[1382]: time="2025-08-13T01:23:50.247941797Z" level=info msg="StartContainer for \"be0c76201aea12b82710abef6ff24c7dcb86a49d48a03720e2a10ff01309044f\"" Aug 13 01:23:50.255681 env[1382]: time="2025-08-13T01:23:50.255624716Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 01:23:50.255681 env[1382]: time="2025-08-13T01:23:50.255665877Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 01:23:50.255681 env[1382]: time="2025-08-13T01:23:50.255680503Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 01:23:50.255811 env[1382]: time="2025-08-13T01:23:50.255775041Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/aaaae2f715e7d88ded37704e1221d6dd681a67c9ba39a1958007e8c7167d82d3 pid=4113 runtime=io.containerd.runc.v2 Aug 13 01:23:50.279295 systemd-resolved[1277]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Aug 13 01:23:50.292593 env[1382]: time="2025-08-13T01:23:50.292560289Z" level=info msg="StartContainer for \"be0c76201aea12b82710abef6ff24c7dcb86a49d48a03720e2a10ff01309044f\" returns successfully" Aug 13 01:23:50.311373 env[1382]: time="2025-08-13T01:23:50.311341052Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-67b95cb99-9ggd9,Uid:eeef8dd3-2401-4787-8ede-caf069e52bbf,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"aaaae2f715e7d88ded37704e1221d6dd681a67c9ba39a1958007e8c7167d82d3\"" Aug 13 01:23:50.351368 systemd[1]: run-netns-cni\x2dcd0b2627\x2d65dd\x2d8a2c\x2dd098\x2de78556c0779f.mount: Deactivated successfully. Aug 13 01:23:50.351478 systemd[1]: run-netns-cni\x2d04390af4\x2d27bf\x2d0da4\x2d7216\x2d9269b03f434a.mount: Deactivated successfully. Aug 13 01:23:50.433776 systemd-networkd[1112]: calid35276fe512: Gained IPv6LL Aug 13 01:23:50.497804 systemd-networkd[1112]: cali213d7d89350: Gained IPv6LL Aug 13 01:23:50.973036 env[1382]: time="2025-08-13T01:23:50.973006874Z" level=info msg="StopPodSandbox for \"eb6facd1ea36e49bed274d2a1b8af0c7943bd47c3d9bb2b8764f4f035897e36c\"" Aug 13 01:23:51.044279 env[1382]: 2025-08-13 01:23:51.010 [INFO][4198] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="eb6facd1ea36e49bed274d2a1b8af0c7943bd47c3d9bb2b8764f4f035897e36c" Aug 13 01:23:51.044279 env[1382]: 2025-08-13 01:23:51.010 [INFO][4198] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="eb6facd1ea36e49bed274d2a1b8af0c7943bd47c3d9bb2b8764f4f035897e36c" iface="eth0" netns="/var/run/netns/cni-7aa06149-52ea-9767-2afc-36ef098d87fa" Aug 13 01:23:51.044279 env[1382]: 2025-08-13 01:23:51.010 [INFO][4198] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="eb6facd1ea36e49bed274d2a1b8af0c7943bd47c3d9bb2b8764f4f035897e36c" iface="eth0" netns="/var/run/netns/cni-7aa06149-52ea-9767-2afc-36ef098d87fa" Aug 13 01:23:51.044279 env[1382]: 2025-08-13 01:23:51.010 [INFO][4198] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="eb6facd1ea36e49bed274d2a1b8af0c7943bd47c3d9bb2b8764f4f035897e36c" iface="eth0" netns="/var/run/netns/cni-7aa06149-52ea-9767-2afc-36ef098d87fa" Aug 13 01:23:51.044279 env[1382]: 2025-08-13 01:23:51.010 [INFO][4198] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="eb6facd1ea36e49bed274d2a1b8af0c7943bd47c3d9bb2b8764f4f035897e36c" Aug 13 01:23:51.044279 env[1382]: 2025-08-13 01:23:51.010 [INFO][4198] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="eb6facd1ea36e49bed274d2a1b8af0c7943bd47c3d9bb2b8764f4f035897e36c" Aug 13 01:23:51.044279 env[1382]: 2025-08-13 01:23:51.035 [INFO][4205] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="eb6facd1ea36e49bed274d2a1b8af0c7943bd47c3d9bb2b8764f4f035897e36c" HandleID="k8s-pod-network.eb6facd1ea36e49bed274d2a1b8af0c7943bd47c3d9bb2b8764f4f035897e36c" Workload="localhost-k8s-goldmane--58fd7646b9--d66sc-eth0" Aug 13 01:23:51.044279 env[1382]: 2025-08-13 01:23:51.036 [INFO][4205] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 01:23:51.044279 env[1382]: 2025-08-13 01:23:51.036 [INFO][4205] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 01:23:51.044279 env[1382]: 2025-08-13 01:23:51.040 [WARNING][4205] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="eb6facd1ea36e49bed274d2a1b8af0c7943bd47c3d9bb2b8764f4f035897e36c" HandleID="k8s-pod-network.eb6facd1ea36e49bed274d2a1b8af0c7943bd47c3d9bb2b8764f4f035897e36c" Workload="localhost-k8s-goldmane--58fd7646b9--d66sc-eth0" Aug 13 01:23:51.044279 env[1382]: 2025-08-13 01:23:51.040 [INFO][4205] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="eb6facd1ea36e49bed274d2a1b8af0c7943bd47c3d9bb2b8764f4f035897e36c" HandleID="k8s-pod-network.eb6facd1ea36e49bed274d2a1b8af0c7943bd47c3d9bb2b8764f4f035897e36c" Workload="localhost-k8s-goldmane--58fd7646b9--d66sc-eth0" Aug 13 01:23:51.044279 env[1382]: 2025-08-13 01:23:51.042 [INFO][4205] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 01:23:51.044279 env[1382]: 2025-08-13 01:23:51.043 [INFO][4198] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="eb6facd1ea36e49bed274d2a1b8af0c7943bd47c3d9bb2b8764f4f035897e36c" Aug 13 01:23:51.046726 systemd[1]: run-netns-cni\x2d7aa06149\x2d52ea\x2d9767\x2d2afc\x2d36ef098d87fa.mount: Deactivated successfully. 
Aug 13 01:23:51.047426 env[1382]: time="2025-08-13T01:23:51.047125326Z" level=info msg="TearDown network for sandbox \"eb6facd1ea36e49bed274d2a1b8af0c7943bd47c3d9bb2b8764f4f035897e36c\" successfully" Aug 13 01:23:51.047751 env[1382]: time="2025-08-13T01:23:51.047738554Z" level=info msg="StopPodSandbox for \"eb6facd1ea36e49bed274d2a1b8af0c7943bd47c3d9bb2b8764f4f035897e36c\" returns successfully" Aug 13 01:23:51.048227 env[1382]: time="2025-08-13T01:23:51.048209367Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-d66sc,Uid:12930a0e-5b90-4155-97e2-3a62414b20c0,Namespace:calico-system,Attempt:1,}" Aug 13 01:23:51.141793 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Aug 13 01:23:51.143038 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): calia936fb4ff83: link becomes ready Aug 13 01:23:51.143216 kubelet[2284]: I0813 01:23:51.142950 2284 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-gg2v2" podStartSLOduration=37.142937001 podStartE2EDuration="37.142937001s" podCreationTimestamp="2025-08-13 01:23:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 01:23:51.138832413 +0000 UTC m=+42.424053281" watchObservedRunningTime="2025-08-13 01:23:51.142937001 +0000 UTC m=+42.428157871" Aug 13 01:23:51.141916 systemd-networkd[1112]: calia936fb4ff83: Link UP Aug 13 01:23:51.142097 systemd-networkd[1112]: calia936fb4ff83: Gained carrier Aug 13 01:23:51.163813 env[1382]: 2025-08-13 01:23:51.074 [INFO][4212] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Aug 13 01:23:51.163813 env[1382]: 2025-08-13 01:23:51.084 [INFO][4212] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--58fd7646b9--d66sc-eth0 goldmane-58fd7646b9- calico-system 12930a0e-5b90-4155-97e2-3a62414b20c0 926 0 2025-08-13 01:23:26 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:58fd7646b9 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-58fd7646b9-d66sc eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] calia936fb4ff83 [] [] }} ContainerID="4f324e95bb0e9fe644a13a4e917416a1c730ac1aa16adecb370f1c08a9ef202b" Namespace="calico-system" Pod="goldmane-58fd7646b9-d66sc" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--d66sc-" Aug 13 01:23:51.163813 env[1382]: 2025-08-13 01:23:51.084 [INFO][4212] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="4f324e95bb0e9fe644a13a4e917416a1c730ac1aa16adecb370f1c08a9ef202b" Namespace="calico-system" Pod="goldmane-58fd7646b9-d66sc" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--d66sc-eth0" Aug 13 01:23:51.163813 env[1382]: 2025-08-13 01:23:51.104 [INFO][4225] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4f324e95bb0e9fe644a13a4e917416a1c730ac1aa16adecb370f1c08a9ef202b" HandleID="k8s-pod-network.4f324e95bb0e9fe644a13a4e917416a1c730ac1aa16adecb370f1c08a9ef202b" Workload="localhost-k8s-goldmane--58fd7646b9--d66sc-eth0" Aug 13 01:23:51.163813 env[1382]: 2025-08-13 01:23:51.104 [INFO][4225] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="4f324e95bb0e9fe644a13a4e917416a1c730ac1aa16adecb370f1c08a9ef202b" HandleID="k8s-pod-network.4f324e95bb0e9fe644a13a4e917416a1c730ac1aa16adecb370f1c08a9ef202b" 
Workload="localhost-k8s-goldmane--58fd7646b9--d66sc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5040), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-58fd7646b9-d66sc", "timestamp":"2025-08-13 01:23:51.103991506 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 13 01:23:51.163813 env[1382]: 2025-08-13 01:23:51.104 [INFO][4225] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 01:23:51.163813 env[1382]: 2025-08-13 01:23:51.104 [INFO][4225] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 01:23:51.163813 env[1382]: 2025-08-13 01:23:51.104 [INFO][4225] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Aug 13 01:23:51.163813 env[1382]: 2025-08-13 01:23:51.108 [INFO][4225] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.4f324e95bb0e9fe644a13a4e917416a1c730ac1aa16adecb370f1c08a9ef202b" host="localhost" Aug 13 01:23:51.163813 env[1382]: 2025-08-13 01:23:51.112 [INFO][4225] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Aug 13 01:23:51.163813 env[1382]: 2025-08-13 01:23:51.115 [INFO][4225] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Aug 13 01:23:51.163813 env[1382]: 2025-08-13 01:23:51.116 [INFO][4225] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Aug 13 01:23:51.163813 env[1382]: 2025-08-13 01:23:51.117 [INFO][4225] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Aug 13 01:23:51.163813 env[1382]: 2025-08-13 01:23:51.117 [INFO][4225] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.4f324e95bb0e9fe644a13a4e917416a1c730ac1aa16adecb370f1c08a9ef202b" host="localhost" Aug 13 01:23:51.163813 env[1382]: 2025-08-13 01:23:51.118 [INFO][4225] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.4f324e95bb0e9fe644a13a4e917416a1c730ac1aa16adecb370f1c08a9ef202b Aug 13 01:23:51.163813 env[1382]: 2025-08-13 01:23:51.120 [INFO][4225] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.4f324e95bb0e9fe644a13a4e917416a1c730ac1aa16adecb370f1c08a9ef202b" host="localhost" Aug 13 01:23:51.163813 env[1382]: 2025-08-13 01:23:51.124 [INFO][4225] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.4f324e95bb0e9fe644a13a4e917416a1c730ac1aa16adecb370f1c08a9ef202b" host="localhost" Aug 13 01:23:51.163813 env[1382]: 2025-08-13 01:23:51.124 [INFO][4225] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.4f324e95bb0e9fe644a13a4e917416a1c730ac1aa16adecb370f1c08a9ef202b" host="localhost" Aug 13 01:23:51.163813 env[1382]: 2025-08-13 01:23:51.124 [INFO][4225] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Aug 13 01:23:51.163813 env[1382]: 2025-08-13 01:23:51.124 [INFO][4225] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="4f324e95bb0e9fe644a13a4e917416a1c730ac1aa16adecb370f1c08a9ef202b" HandleID="k8s-pod-network.4f324e95bb0e9fe644a13a4e917416a1c730ac1aa16adecb370f1c08a9ef202b" Workload="localhost-k8s-goldmane--58fd7646b9--d66sc-eth0" Aug 13 01:23:51.164327 env[1382]: 2025-08-13 01:23:51.129 [INFO][4212] cni-plugin/k8s.go 418: Populated endpoint ContainerID="4f324e95bb0e9fe644a13a4e917416a1c730ac1aa16adecb370f1c08a9ef202b" Namespace="calico-system" Pod="goldmane-58fd7646b9-d66sc" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--d66sc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--58fd7646b9--d66sc-eth0", GenerateName:"goldmane-58fd7646b9-", Namespace:"calico-system", SelfLink:"", UID:"12930a0e-5b90-4155-97e2-3a62414b20c0", ResourceVersion:"926", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 1, 23, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"58fd7646b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-58fd7646b9-d66sc", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calia936fb4ff83", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 01:23:51.164327 env[1382]: 2025-08-13 01:23:51.129 [INFO][4212] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="4f324e95bb0e9fe644a13a4e917416a1c730ac1aa16adecb370f1c08a9ef202b" Namespace="calico-system" Pod="goldmane-58fd7646b9-d66sc" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--d66sc-eth0" Aug 13 01:23:51.164327 env[1382]: 2025-08-13 01:23:51.129 [INFO][4212] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia936fb4ff83 ContainerID="4f324e95bb0e9fe644a13a4e917416a1c730ac1aa16adecb370f1c08a9ef202b" Namespace="calico-system" Pod="goldmane-58fd7646b9-d66sc" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--d66sc-eth0" Aug 13 01:23:51.164327 env[1382]: 2025-08-13 01:23:51.135 [INFO][4212] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4f324e95bb0e9fe644a13a4e917416a1c730ac1aa16adecb370f1c08a9ef202b" Namespace="calico-system" Pod="goldmane-58fd7646b9-d66sc" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--d66sc-eth0" Aug 13 01:23:51.164327 env[1382]: 2025-08-13 01:23:51.135 [INFO][4212] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="4f324e95bb0e9fe644a13a4e917416a1c730ac1aa16adecb370f1c08a9ef202b" Namespace="calico-system" Pod="goldmane-58fd7646b9-d66sc" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--d66sc-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--58fd7646b9--d66sc-eth0", GenerateName:"goldmane-58fd7646b9-", Namespace:"calico-system", SelfLink:"", UID:"12930a0e-5b90-4155-97e2-3a62414b20c0", ResourceVersion:"926", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 1, 23, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"58fd7646b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"4f324e95bb0e9fe644a13a4e917416a1c730ac1aa16adecb370f1c08a9ef202b", Pod:"goldmane-58fd7646b9-d66sc", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calia936fb4ff83", MAC:"8a:51:07:31:7c:d0", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 01:23:51.164327 env[1382]: 2025-08-13 01:23:51.139 [INFO][4212] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="4f324e95bb0e9fe644a13a4e917416a1c730ac1aa16adecb370f1c08a9ef202b" Namespace="calico-system" Pod="goldmane-58fd7646b9-d66sc" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--d66sc-eth0" Aug 13 01:23:51.169630 env[1382]: time="2025-08-13T01:23:51.169576936Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 01:23:51.169711 env[1382]: time="2025-08-13T01:23:51.169620192Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 01:23:51.169711 env[1382]: time="2025-08-13T01:23:51.169632334Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 01:23:51.169936 env[1382]: time="2025-08-13T01:23:51.169802767Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/4f324e95bb0e9fe644a13a4e917416a1c730ac1aa16adecb370f1c08a9ef202b pid=4246 runtime=io.containerd.runc.v2 Aug 13 01:23:51.188441 systemd-resolved[1277]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Aug 13 01:23:51.211172 env[1382]: time="2025-08-13T01:23:51.211130144Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-d66sc,Uid:12930a0e-5b90-4155-97e2-3a62414b20c0,Namespace:calico-system,Attempt:1,} returns sandbox id \"4f324e95bb0e9fe644a13a4e917416a1c730ac1aa16adecb370f1c08a9ef202b\"" Aug 13 01:23:51.244000 audit[4281]: NETFILTER_CFG table=filter:103 family=2 entries=16 op=nft_register_rule pid=4281 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 01:23:51.244000 audit[4281]: SYSCALL arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7ffead184b30 a2=0 a3=7ffead184b1c items=0 ppid=2387 pid=4281 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:23:51.244000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 01:23:51.261000 audit[4281]: NETFILTER_CFG table=nat:104 family=2 entries=54 op=nft_register_chain pid=4281 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 01:23:51.261000 audit[4281]: SYSCALL arch=c000003e syscall=46 success=yes exit=19092 a0=3 a1=7ffead184b30 a2=0 a3=7ffead184b1c items=0 ppid=2387 pid=4281 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:23:51.261000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 01:23:51.585192 kubelet[2284]: I0813 01:23:51.584822 2284 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Aug 13 01:23:51.613008 systemd[1]: run-containerd-runc-k8s.io-c2fb6e3a6f053a836d1da16768973f18403dcfdb533e65f376cbca3a21b03384-runc.WrU4lF.mount: Deactivated successfully. Aug 13 01:23:51.969126 env[1382]: time="2025-08-13T01:23:51.969073867Z" level=info msg="StopPodSandbox for \"0230e598b99dff8ef34ad8ea74efd4bf7a1290c404420b6cd14bb34d6634b7b5\"" Aug 13 01:23:52.083418 env[1382]: 2025-08-13 01:23:52.038 [INFO][4352] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="0230e598b99dff8ef34ad8ea74efd4bf7a1290c404420b6cd14bb34d6634b7b5" Aug 13 01:23:52.083418 env[1382]: 2025-08-13 01:23:52.038 [INFO][4352] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="0230e598b99dff8ef34ad8ea74efd4bf7a1290c404420b6cd14bb34d6634b7b5" iface="eth0" netns="/var/run/netns/cni-237274e2-99b5-9fcc-6e9c-0d30be5eb299" Aug 13 01:23:52.083418 env[1382]: 2025-08-13 01:23:52.038 [INFO][4352] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="0230e598b99dff8ef34ad8ea74efd4bf7a1290c404420b6cd14bb34d6634b7b5" iface="eth0" netns="/var/run/netns/cni-237274e2-99b5-9fcc-6e9c-0d30be5eb299" Aug 13 01:23:52.083418 env[1382]: 2025-08-13 01:23:52.038 [INFO][4352] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="0230e598b99dff8ef34ad8ea74efd4bf7a1290c404420b6cd14bb34d6634b7b5" iface="eth0" netns="/var/run/netns/cni-237274e2-99b5-9fcc-6e9c-0d30be5eb299" Aug 13 01:23:52.083418 env[1382]: 2025-08-13 01:23:52.038 [INFO][4352] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="0230e598b99dff8ef34ad8ea74efd4bf7a1290c404420b6cd14bb34d6634b7b5" Aug 13 01:23:52.083418 env[1382]: 2025-08-13 01:23:52.038 [INFO][4352] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0230e598b99dff8ef34ad8ea74efd4bf7a1290c404420b6cd14bb34d6634b7b5" Aug 13 01:23:52.083418 env[1382]: 2025-08-13 01:23:52.069 [INFO][4359] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="0230e598b99dff8ef34ad8ea74efd4bf7a1290c404420b6cd14bb34d6634b7b5" HandleID="k8s-pod-network.0230e598b99dff8ef34ad8ea74efd4bf7a1290c404420b6cd14bb34d6634b7b5" Workload="localhost-k8s-calico--apiserver--67b95cb99--jl8kx-eth0" Aug 13 01:23:52.083418 env[1382]: 2025-08-13 01:23:52.069 [INFO][4359] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 01:23:52.083418 env[1382]: 2025-08-13 01:23:52.069 [INFO][4359] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 01:23:52.083418 env[1382]: 2025-08-13 01:23:52.076 [WARNING][4359] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="0230e598b99dff8ef34ad8ea74efd4bf7a1290c404420b6cd14bb34d6634b7b5" HandleID="k8s-pod-network.0230e598b99dff8ef34ad8ea74efd4bf7a1290c404420b6cd14bb34d6634b7b5" Workload="localhost-k8s-calico--apiserver--67b95cb99--jl8kx-eth0" Aug 13 01:23:52.083418 env[1382]: 2025-08-13 01:23:52.076 [INFO][4359] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="0230e598b99dff8ef34ad8ea74efd4bf7a1290c404420b6cd14bb34d6634b7b5" HandleID="k8s-pod-network.0230e598b99dff8ef34ad8ea74efd4bf7a1290c404420b6cd14bb34d6634b7b5" Workload="localhost-k8s-calico--apiserver--67b95cb99--jl8kx-eth0" Aug 13 01:23:52.083418 env[1382]: 2025-08-13 01:23:52.080 [INFO][4359] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 01:23:52.083418 env[1382]: 2025-08-13 01:23:52.081 [INFO][4352] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="0230e598b99dff8ef34ad8ea74efd4bf7a1290c404420b6cd14bb34d6634b7b5" Aug 13 01:23:52.084127 env[1382]: time="2025-08-13T01:23:52.083510107Z" level=info msg="TearDown network for sandbox \"0230e598b99dff8ef34ad8ea74efd4bf7a1290c404420b6cd14bb34d6634b7b5\" successfully" Aug 13 01:23:52.084127 env[1382]: time="2025-08-13T01:23:52.083528405Z" level=info msg="StopPodSandbox for \"0230e598b99dff8ef34ad8ea74efd4bf7a1290c404420b6cd14bb34d6634b7b5\" returns successfully" Aug 13 01:23:52.084127 env[1382]: time="2025-08-13T01:23:52.083860560Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-67b95cb99-jl8kx,Uid:a621f640-c17c-4f08-9759-f53f10bbc599,Namespace:calico-apiserver,Attempt:1,}" Aug 13 01:23:52.161791 systemd-networkd[1112]: calie666ed159cc: Gained IPv6LL Aug 13 01:23:52.196181 systemd-networkd[1112]: calia51339ea172: Link UP Aug 13 01:23:52.197913 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Aug 13 01:23:52.197954 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): calia51339ea172: link becomes ready Aug 13 01:23:52.198016 systemd-networkd[1112]: calia51339ea172: Gained carrier Aug 13 01:23:52.212955 env[1382]: 2025-08-13 01:23:52.110 [INFO][4365] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Aug 13 01:23:52.212955 env[1382]: 2025-08-13 01:23:52.121 [INFO][4365] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--67b95cb99--jl8kx-eth0 calico-apiserver-67b95cb99- calico-apiserver a621f640-c17c-4f08-9759-f53f10bbc599 944 0 2025-08-13 01:23:23 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:67b95cb99 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-67b95cb99-jl8kx eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calia51339ea172 [] [] }} ContainerID="f0295e112e3db2f8f19707b32cce18b5dd4fc785305ae232d584fa97223b8a49" Namespace="calico-apiserver" Pod="calico-apiserver-67b95cb99-jl8kx" WorkloadEndpoint="localhost-k8s-calico--apiserver--67b95cb99--jl8kx-" Aug 13 01:23:52.212955 env[1382]: 2025-08-13 01:23:52.121 [INFO][4365] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="f0295e112e3db2f8f19707b32cce18b5dd4fc785305ae232d584fa97223b8a49" Namespace="calico-apiserver" Pod="calico-apiserver-67b95cb99-jl8kx" WorkloadEndpoint="localhost-k8s-calico--apiserver--67b95cb99--jl8kx-eth0" Aug 13 01:23:52.212955 env[1382]: 2025-08-13 01:23:52.153 [INFO][4378] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f0295e112e3db2f8f19707b32cce18b5dd4fc785305ae232d584fa97223b8a49" HandleID="k8s-pod-network.f0295e112e3db2f8f19707b32cce18b5dd4fc785305ae232d584fa97223b8a49" Workload="localhost-k8s-calico--apiserver--67b95cb99--jl8kx-eth0" Aug 13 01:23:52.212955 env[1382]: 2025-08-13 01:23:52.153 [INFO][4378] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="f0295e112e3db2f8f19707b32cce18b5dd4fc785305ae232d584fa97223b8a49" HandleID="k8s-pod-network.f0295e112e3db2f8f19707b32cce18b5dd4fc785305ae232d584fa97223b8a49" Workload="localhost-k8s-calico--apiserver--67b95cb99--jl8kx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000325f40), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-67b95cb99-jl8kx", 
"timestamp":"2025-08-13 01:23:52.153069722 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 13 01:23:52.212955 env[1382]: 2025-08-13 01:23:52.153 [INFO][4378] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 01:23:52.212955 env[1382]: 2025-08-13 01:23:52.153 [INFO][4378] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 01:23:52.212955 env[1382]: 2025-08-13 01:23:52.153 [INFO][4378] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Aug 13 01:23:52.212955 env[1382]: 2025-08-13 01:23:52.164 [INFO][4378] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.f0295e112e3db2f8f19707b32cce18b5dd4fc785305ae232d584fa97223b8a49" host="localhost" Aug 13 01:23:52.212955 env[1382]: 2025-08-13 01:23:52.172 [INFO][4378] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Aug 13 01:23:52.212955 env[1382]: 2025-08-13 01:23:52.175 [INFO][4378] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Aug 13 01:23:52.212955 env[1382]: 2025-08-13 01:23:52.176 [INFO][4378] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Aug 13 01:23:52.212955 env[1382]: 2025-08-13 01:23:52.181 [INFO][4378] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Aug 13 01:23:52.212955 env[1382]: 2025-08-13 01:23:52.181 [INFO][4378] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.f0295e112e3db2f8f19707b32cce18b5dd4fc785305ae232d584fa97223b8a49" host="localhost" Aug 13 01:23:52.212955 env[1382]: 2025-08-13 01:23:52.184 [INFO][4378] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.f0295e112e3db2f8f19707b32cce18b5dd4fc785305ae232d584fa97223b8a49 Aug 13 01:23:52.212955 env[1382]: 2025-08-13 01:23:52.189 [INFO][4378] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.f0295e112e3db2f8f19707b32cce18b5dd4fc785305ae232d584fa97223b8a49" host="localhost" Aug 13 01:23:52.212955 env[1382]: 2025-08-13 01:23:52.192 [INFO][4378] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.f0295e112e3db2f8f19707b32cce18b5dd4fc785305ae232d584fa97223b8a49" host="localhost" Aug 13 01:23:52.212955 env[1382]: 2025-08-13 01:23:52.192 [INFO][4378] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.f0295e112e3db2f8f19707b32cce18b5dd4fc785305ae232d584fa97223b8a49" host="localhost" Aug 13 01:23:52.212955 env[1382]: 2025-08-13 01:23:52.192 [INFO][4378] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Aug 13 01:23:52.212955 env[1382]: 2025-08-13 01:23:52.192 [INFO][4378] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="f0295e112e3db2f8f19707b32cce18b5dd4fc785305ae232d584fa97223b8a49" HandleID="k8s-pod-network.f0295e112e3db2f8f19707b32cce18b5dd4fc785305ae232d584fa97223b8a49" Workload="localhost-k8s-calico--apiserver--67b95cb99--jl8kx-eth0" Aug 13 01:23:52.217786 env[1382]: 2025-08-13 01:23:52.194 [INFO][4365] cni-plugin/k8s.go 418: Populated endpoint ContainerID="f0295e112e3db2f8f19707b32cce18b5dd4fc785305ae232d584fa97223b8a49" Namespace="calico-apiserver" Pod="calico-apiserver-67b95cb99-jl8kx" WorkloadEndpoint="localhost-k8s-calico--apiserver--67b95cb99--jl8kx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--67b95cb99--jl8kx-eth0", GenerateName:"calico-apiserver-67b95cb99-", Namespace:"calico-apiserver", SelfLink:"", UID:"a621f640-c17c-4f08-9759-f53f10bbc599", ResourceVersion:"944", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 1, 23, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"67b95cb99", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-67b95cb99-jl8kx", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calia51339ea172", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 01:23:52.217786 env[1382]: 2025-08-13 01:23:52.194 [INFO][4365] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="f0295e112e3db2f8f19707b32cce18b5dd4fc785305ae232d584fa97223b8a49" Namespace="calico-apiserver" Pod="calico-apiserver-67b95cb99-jl8kx" WorkloadEndpoint="localhost-k8s-calico--apiserver--67b95cb99--jl8kx-eth0" Aug 13 01:23:52.217786 env[1382]: 2025-08-13 01:23:52.194 [INFO][4365] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia51339ea172 ContainerID="f0295e112e3db2f8f19707b32cce18b5dd4fc785305ae232d584fa97223b8a49" Namespace="calico-apiserver" Pod="calico-apiserver-67b95cb99-jl8kx" WorkloadEndpoint="localhost-k8s-calico--apiserver--67b95cb99--jl8kx-eth0" Aug 13 01:23:52.217786 env[1382]: 2025-08-13 01:23:52.198 [INFO][4365] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f0295e112e3db2f8f19707b32cce18b5dd4fc785305ae232d584fa97223b8a49" Namespace="calico-apiserver" Pod="calico-apiserver-67b95cb99-jl8kx" WorkloadEndpoint="localhost-k8s-calico--apiserver--67b95cb99--jl8kx-eth0" Aug 13 01:23:52.217786 env[1382]: 2025-08-13 01:23:52.199 [INFO][4365] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="f0295e112e3db2f8f19707b32cce18b5dd4fc785305ae232d584fa97223b8a49" 
Namespace="calico-apiserver" Pod="calico-apiserver-67b95cb99-jl8kx" WorkloadEndpoint="localhost-k8s-calico--apiserver--67b95cb99--jl8kx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--67b95cb99--jl8kx-eth0", GenerateName:"calico-apiserver-67b95cb99-", Namespace:"calico-apiserver", SelfLink:"", UID:"a621f640-c17c-4f08-9759-f53f10bbc599", ResourceVersion:"944", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 1, 23, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"67b95cb99", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"f0295e112e3db2f8f19707b32cce18b5dd4fc785305ae232d584fa97223b8a49", Pod:"calico-apiserver-67b95cb99-jl8kx", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calia51339ea172", MAC:"ee:fd:75:41:be:a4", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 01:23:52.217786 env[1382]: 2025-08-13 01:23:52.210 [INFO][4365] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="f0295e112e3db2f8f19707b32cce18b5dd4fc785305ae232d584fa97223b8a49" Namespace="calico-apiserver" Pod="calico-apiserver-67b95cb99-jl8kx" WorkloadEndpoint="localhost-k8s-calico--apiserver--67b95cb99--jl8kx-eth0" Aug 13 01:23:52.252276 env[1382]: time="2025-08-13T01:23:52.252190682Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 01:23:52.252480 env[1382]: time="2025-08-13T01:23:52.252218056Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 01:23:52.252480 env[1382]: time="2025-08-13T01:23:52.252244688Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 01:23:52.253506 env[1382]: time="2025-08-13T01:23:52.253486595Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/f0295e112e3db2f8f19707b32cce18b5dd4fc785305ae232d584fa97223b8a49 pid=4395 runtime=io.containerd.runc.v2 Aug 13 01:23:52.289766 systemd-networkd[1112]: calidd7e4cd8209: Gained IPv6LL Aug 13 01:23:52.306126 systemd-resolved[1277]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Aug 13 01:23:52.331186 env[1382]: time="2025-08-13T01:23:52.331163928Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-67b95cb99-jl8kx,Uid:a621f640-c17c-4f08-9759-f53f10bbc599,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"f0295e112e3db2f8f19707b32cce18b5dd4fc785305ae232d584fa97223b8a49\"" Aug 13 01:23:52.348551 systemd[1]: run-containerd-runc-k8s.io-c2fb6e3a6f053a836d1da16768973f18403dcfdb533e65f376cbca3a21b03384-runc.T1w2Ea.mount: Deactivated successfully. Aug 13 01:23:52.348655 systemd[1]: run-netns-cni\x2d237274e2\x2d99b5\x2d9fcc\x2d6e9c\x2d0d30be5eb299.mount: Deactivated successfully. Aug 13 01:23:52.417825 systemd-networkd[1112]: calia936fb4ff83: Gained IPv6LL Aug 13 01:23:52.710318 env[1382]: time="2025-08-13T01:23:52.710290342Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/kube-controllers:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 01:23:52.714972 env[1382]: time="2025-08-13T01:23:52.714956995Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:761b294e26556b58aabc85094a3d465389e6b141b7400aee732bd13400a6124a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 01:23:52.715804 env[1382]: time="2025-08-13T01:23:52.715783749Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/kube-controllers:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 01:23:52.716706 env[1382]: time="2025-08-13T01:23:52.716681169Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 01:23:52.717228 env[1382]: time="2025-08-13T01:23:52.717209753Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" returns image reference \"sha256:761b294e26556b58aabc85094a3d465389e6b141b7400aee732bd13400a6124a\"" Aug 13 01:23:52.719256 env[1382]: time="2025-08-13T01:23:52.719242343Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\"" Aug 13 01:23:52.763073 env[1382]: time="2025-08-13T01:23:52.763050463Z" level=info msg="CreateContainer within sandbox \"5659649167bb37ef941fc054e308df32359c580bfe6b0b5d04ca7ac03d368ce5\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Aug 13 01:23:52.771806 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1092172409.mount: Deactivated successfully. 
Aug 13 01:23:52.773630 env[1382]: time="2025-08-13T01:23:52.773608973Z" level=info msg="CreateContainer within sandbox \"5659649167bb37ef941fc054e308df32359c580bfe6b0b5d04ca7ac03d368ce5\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"85c29422784ec8422d9da6972a969227e31b3e95349c18e67435bf9f778b75ce\"" Aug 13 01:23:52.776882 env[1382]: time="2025-08-13T01:23:52.776867708Z" level=info msg="StartContainer for \"85c29422784ec8422d9da6972a969227e31b3e95349c18e67435bf9f778b75ce\"" Aug 13 01:23:52.829545 env[1382]: time="2025-08-13T01:23:52.829489814Z" level=info msg="StartContainer for \"85c29422784ec8422d9da6972a969227e31b3e95349c18e67435bf9f778b75ce\" returns successfully" Aug 13 01:23:53.365345 env[1382]: time="2025-08-13T01:23:53.365296356Z" level=info msg="StopPodSandbox for \"1e99645540af86f2760c41d1b2555e989b46d6986be734d4feef0a6970fe113b\"" Aug 13 01:23:53.416627 systemd[1]: run-containerd-runc-k8s.io-85c29422784ec8422d9da6972a969227e31b3e95349c18e67435bf9f778b75ce-runc.EwmHbi.mount: Deactivated successfully. Aug 13 01:23:53.440752 env[1382]: 2025-08-13 01:23:53.399 [INFO][4507] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="1e99645540af86f2760c41d1b2555e989b46d6986be734d4feef0a6970fe113b" Aug 13 01:23:53.440752 env[1382]: 2025-08-13 01:23:53.399 [INFO][4507] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="1e99645540af86f2760c41d1b2555e989b46d6986be734d4feef0a6970fe113b" iface="eth0" netns="/var/run/netns/cni-67501fd9-53fc-79ba-a98d-1af40f494197" Aug 13 01:23:53.440752 env[1382]: 2025-08-13 01:23:53.399 [INFO][4507] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="1e99645540af86f2760c41d1b2555e989b46d6986be734d4feef0a6970fe113b" iface="eth0" netns="/var/run/netns/cni-67501fd9-53fc-79ba-a98d-1af40f494197" Aug 13 01:23:53.440752 env[1382]: 2025-08-13 01:23:53.399 [INFO][4507] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="1e99645540af86f2760c41d1b2555e989b46d6986be734d4feef0a6970fe113b" iface="eth0" netns="/var/run/netns/cni-67501fd9-53fc-79ba-a98d-1af40f494197" Aug 13 01:23:53.440752 env[1382]: 2025-08-13 01:23:53.399 [INFO][4507] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="1e99645540af86f2760c41d1b2555e989b46d6986be734d4feef0a6970fe113b" Aug 13 01:23:53.440752 env[1382]: 2025-08-13 01:23:53.399 [INFO][4507] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1e99645540af86f2760c41d1b2555e989b46d6986be734d4feef0a6970fe113b" Aug 13 01:23:53.440752 env[1382]: 2025-08-13 01:23:53.426 [INFO][4515] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="1e99645540af86f2760c41d1b2555e989b46d6986be734d4feef0a6970fe113b" HandleID="k8s-pod-network.1e99645540af86f2760c41d1b2555e989b46d6986be734d4feef0a6970fe113b" Workload="localhost-k8s-csi--node--driver--7kfxn-eth0" Aug 13 01:23:53.440752 env[1382]: 2025-08-13 01:23:53.426 [INFO][4515] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 01:23:53.440752 env[1382]: 2025-08-13 01:23:53.426 [INFO][4515] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 01:23:53.440752 env[1382]: 2025-08-13 01:23:53.430 [WARNING][4515] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="1e99645540af86f2760c41d1b2555e989b46d6986be734d4feef0a6970fe113b" HandleID="k8s-pod-network.1e99645540af86f2760c41d1b2555e989b46d6986be734d4feef0a6970fe113b" Workload="localhost-k8s-csi--node--driver--7kfxn-eth0" Aug 13 01:23:53.440752 env[1382]: 2025-08-13 01:23:53.430 [INFO][4515] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="1e99645540af86f2760c41d1b2555e989b46d6986be734d4feef0a6970fe113b" HandleID="k8s-pod-network.1e99645540af86f2760c41d1b2555e989b46d6986be734d4feef0a6970fe113b" Workload="localhost-k8s-csi--node--driver--7kfxn-eth0" Aug 13 01:23:53.440752 env[1382]: 2025-08-13 01:23:53.431 [INFO][4515] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 01:23:53.440752 env[1382]: 2025-08-13 01:23:53.434 [INFO][4507] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="1e99645540af86f2760c41d1b2555e989b46d6986be734d4feef0a6970fe113b" Aug 13 01:23:53.524928 env[1382]: time="2025-08-13T01:23:53.445424154Z" level=info msg="TearDown network for sandbox \"1e99645540af86f2760c41d1b2555e989b46d6986be734d4feef0a6970fe113b\" successfully" Aug 13 01:23:53.524928 env[1382]: time="2025-08-13T01:23:53.445445891Z" level=info msg="StopPodSandbox for \"1e99645540af86f2760c41d1b2555e989b46d6986be734d4feef0a6970fe113b\" returns successfully" Aug 13 01:23:53.524928 env[1382]: time="2025-08-13T01:23:53.445978411Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-7kfxn,Uid:24914c97-a643-4e2a-b954-9959ef2f43e1,Namespace:calico-system,Attempt:1,}" Aug 13 01:23:53.442737 systemd[1]: run-netns-cni\x2d67501fd9\x2d53fc\x2d79ba\x2da98d\x2d1af40f494197.mount: Deactivated successfully. Aug 13 01:23:53.505856 systemd-networkd[1112]: calia51339ea172: Gained IPv6LL Aug 13 01:23:53.559163 kubelet[2284]: I0813 01:23:53.559118 2284 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-65f7979575-9d4tp" podStartSLOduration=23.025327718 podStartE2EDuration="26.554925989s" podCreationTimestamp="2025-08-13 01:23:27 +0000 UTC" firstStartedPulling="2025-08-13 01:23:49.188404551 +0000 UTC m=+40.473625410" lastFinishedPulling="2025-08-13 01:23:52.718002821 +0000 UTC m=+44.003223681" observedRunningTime="2025-08-13 01:23:53.525683343 +0000 UTC m=+44.810904218" watchObservedRunningTime="2025-08-13 01:23:53.554925989 +0000 UTC m=+44.840147013" Aug 13 01:23:53.755462 systemd-networkd[1112]: caliab34fc401ac: Link UP Aug 13 01:23:53.759336 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Aug 13 01:23:53.759402 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): caliab34fc401ac: link becomes ready Aug 13 01:23:53.759390 systemd-networkd[1112]: caliab34fc401ac: Gained carrier Aug 13 01:23:53.770173 env[1382]: 2025-08-13 01:23:53.566 [INFO][4538] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Aug 13 01:23:53.770173 env[1382]: 2025-08-13 01:23:53.574 [INFO][4538] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--7kfxn-eth0 csi-node-driver- calico-system 24914c97-a643-4e2a-b954-9959ef2f43e1 956 0 2025-08-13 01:23:26 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:57bd658777 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-7kfxn eth0 csi-node-driver [] [] 
[kns.calico-system ksa.calico-system.csi-node-driver] caliab34fc401ac [] [] }} ContainerID="5832d21db40b0c94258e2dae2d17b911195105916d23b98f620520f0e435928e" Namespace="calico-system" Pod="csi-node-driver-7kfxn" WorkloadEndpoint="localhost-k8s-csi--node--driver--7kfxn-" Aug 13 01:23:53.770173 env[1382]: 2025-08-13 01:23:53.574 [INFO][4538] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="5832d21db40b0c94258e2dae2d17b911195105916d23b98f620520f0e435928e" Namespace="calico-system" Pod="csi-node-driver-7kfxn" WorkloadEndpoint="localhost-k8s-csi--node--driver--7kfxn-eth0" Aug 13 01:23:53.770173 env[1382]: 2025-08-13 01:23:53.605 [INFO][4556] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5832d21db40b0c94258e2dae2d17b911195105916d23b98f620520f0e435928e" HandleID="k8s-pod-network.5832d21db40b0c94258e2dae2d17b911195105916d23b98f620520f0e435928e" Workload="localhost-k8s-csi--node--driver--7kfxn-eth0" Aug 13 01:23:53.770173 env[1382]: 2025-08-13 01:23:53.605 [INFO][4556] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="5832d21db40b0c94258e2dae2d17b911195105916d23b98f620520f0e435928e" HandleID="k8s-pod-network.5832d21db40b0c94258e2dae2d17b911195105916d23b98f620520f0e435928e" Workload="localhost-k8s-csi--node--driver--7kfxn-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d51d0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-7kfxn", "timestamp":"2025-08-13 01:23:53.605767129 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 13 01:23:53.770173 env[1382]: 2025-08-13 01:23:53.605 [INFO][4556] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 01:23:53.770173 env[1382]: 2025-08-13 01:23:53.605 [INFO][4556] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Aug 13 01:23:53.770173 env[1382]: 2025-08-13 01:23:53.606 [INFO][4556] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Aug 13 01:23:53.770173 env[1382]: 2025-08-13 01:23:53.617 [INFO][4556] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.5832d21db40b0c94258e2dae2d17b911195105916d23b98f620520f0e435928e" host="localhost" Aug 13 01:23:53.770173 env[1382]: 2025-08-13 01:23:53.619 [INFO][4556] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Aug 13 01:23:53.770173 env[1382]: 2025-08-13 01:23:53.622 [INFO][4556] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Aug 13 01:23:53.770173 env[1382]: 2025-08-13 01:23:53.622 [INFO][4556] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Aug 13 01:23:53.770173 env[1382]: 2025-08-13 01:23:53.623 [INFO][4556] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Aug 13 01:23:53.770173 env[1382]: 2025-08-13 01:23:53.624 [INFO][4556] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.5832d21db40b0c94258e2dae2d17b911195105916d23b98f620520f0e435928e" host="localhost" Aug 13 01:23:53.770173 env[1382]: 2025-08-13 01:23:53.717 [INFO][4556] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.5832d21db40b0c94258e2dae2d17b911195105916d23b98f620520f0e435928e Aug 13 01:23:53.770173 env[1382]: 2025-08-13 01:23:53.741 [INFO][4556] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.5832d21db40b0c94258e2dae2d17b911195105916d23b98f620520f0e435928e" host="localhost" Aug 13 01:23:53.770173 env[1382]: 2025-08-13 01:23:53.750 [INFO][4556] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.5832d21db40b0c94258e2dae2d17b911195105916d23b98f620520f0e435928e" host="localhost" Aug 13 01:23:53.770173 env[1382]: 2025-08-13 01:23:53.750 [INFO][4556] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.5832d21db40b0c94258e2dae2d17b911195105916d23b98f620520f0e435928e" host="localhost" Aug 13 01:23:53.770173 env[1382]: 2025-08-13 01:23:53.750 [INFO][4556] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Aug 13 01:23:53.770173 env[1382]: 2025-08-13 01:23:53.750 [INFO][4556] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="5832d21db40b0c94258e2dae2d17b911195105916d23b98f620520f0e435928e" HandleID="k8s-pod-network.5832d21db40b0c94258e2dae2d17b911195105916d23b98f620520f0e435928e" Workload="localhost-k8s-csi--node--driver--7kfxn-eth0" Aug 13 01:23:53.772290 env[1382]: 2025-08-13 01:23:53.752 [INFO][4538] cni-plugin/k8s.go 418: Populated endpoint ContainerID="5832d21db40b0c94258e2dae2d17b911195105916d23b98f620520f0e435928e" Namespace="calico-system" Pod="csi-node-driver-7kfxn" WorkloadEndpoint="localhost-k8s-csi--node--driver--7kfxn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--7kfxn-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"24914c97-a643-4e2a-b954-9959ef2f43e1", ResourceVersion:"956", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 1, 23, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"57bd658777", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-7kfxn", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"caliab34fc401ac", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 01:23:53.772290 env[1382]: 2025-08-13 01:23:53.752 [INFO][4538] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="5832d21db40b0c94258e2dae2d17b911195105916d23b98f620520f0e435928e" Namespace="calico-system" Pod="csi-node-driver-7kfxn" WorkloadEndpoint="localhost-k8s-csi--node--driver--7kfxn-eth0" Aug 13 01:23:53.772290 env[1382]: 2025-08-13 01:23:53.752 [INFO][4538] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliab34fc401ac ContainerID="5832d21db40b0c94258e2dae2d17b911195105916d23b98f620520f0e435928e" Namespace="calico-system" Pod="csi-node-driver-7kfxn" WorkloadEndpoint="localhost-k8s-csi--node--driver--7kfxn-eth0" Aug 13 01:23:53.772290 env[1382]: 2025-08-13 01:23:53.759 [INFO][4538] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="5832d21db40b0c94258e2dae2d17b911195105916d23b98f620520f0e435928e" Namespace="calico-system" Pod="csi-node-driver-7kfxn" WorkloadEndpoint="localhost-k8s-csi--node--driver--7kfxn-eth0" Aug 13 01:23:53.772290 env[1382]: 2025-08-13 01:23:53.761 [INFO][4538] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="5832d21db40b0c94258e2dae2d17b911195105916d23b98f620520f0e435928e" Namespace="calico-system" Pod="csi-node-driver-7kfxn" WorkloadEndpoint="localhost-k8s-csi--node--driver--7kfxn-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--7kfxn-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"24914c97-a643-4e2a-b954-9959ef2f43e1", ResourceVersion:"956", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 1, 23, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"57bd658777", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"5832d21db40b0c94258e2dae2d17b911195105916d23b98f620520f0e435928e", Pod:"csi-node-driver-7kfxn", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"caliab34fc401ac", MAC:"5e:5b:d7:20:99:a5", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 01:23:53.772290 env[1382]: 2025-08-13 01:23:53.768 [INFO][4538] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="5832d21db40b0c94258e2dae2d17b911195105916d23b98f620520f0e435928e" Namespace="calico-system" Pod="csi-node-driver-7kfxn" WorkloadEndpoint="localhost-k8s-csi--node--driver--7kfxn-eth0" Aug 13 01:23:53.781361 env[1382]: time="2025-08-13T01:23:53.781260941Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 01:23:53.781361 env[1382]: time="2025-08-13T01:23:53.781290472Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 01:23:53.781361 env[1382]: time="2025-08-13T01:23:53.781297678Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 01:23:53.782763 env[1382]: time="2025-08-13T01:23:53.781519177Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/5832d21db40b0c94258e2dae2d17b911195105916d23b98f620520f0e435928e pid=4573 runtime=io.containerd.runc.v2 Aug 13 01:23:53.807617 systemd-resolved[1277]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Aug 13 01:23:53.823123 env[1382]: time="2025-08-13T01:23:53.823094616Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-7kfxn,Uid:24914c97-a643-4e2a-b954-9959ef2f43e1,Namespace:calico-system,Attempt:1,} returns sandbox id \"5832d21db40b0c94258e2dae2d17b911195105916d23b98f620520f0e435928e\"" Aug 13 01:23:54.913801 systemd-networkd[1112]: caliab34fc401ac: Gained IPv6LL Aug 13 01:23:55.402161 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount381023656.mount: Deactivated successfully. 
Aug 13 01:23:55.449731 env[1382]: time="2025-08-13T01:23:55.449681031Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/whisker-backend:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 01:23:55.452485 env[1382]: time="2025-08-13T01:23:55.452467485Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6ba7e39edcd8be6d32dfccbfdb65533a727b14a19173515e91607d4259f8ee7f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 01:23:55.453843 env[1382]: time="2025-08-13T01:23:55.453829236Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/whisker-backend:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 01:23:55.454631 env[1382]: time="2025-08-13T01:23:55.454617405Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 01:23:55.455056 env[1382]: time="2025-08-13T01:23:55.455040001Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" returns image reference \"sha256:6ba7e39edcd8be6d32dfccbfdb65533a727b14a19173515e91607d4259f8ee7f\"" Aug 13 01:23:55.457353 env[1382]: time="2025-08-13T01:23:55.457338978Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Aug 13 01:23:55.457753 env[1382]: time="2025-08-13T01:23:55.457733601Z" level=info msg="CreateContainer within sandbox \"bdeff75a0e72e9003fc01edf36908e3a9a8b17e604a397f659196bdefe097421\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Aug 13 01:23:55.468552 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount360034547.mount: Deactivated successfully. 
Aug 13 01:23:55.473253 env[1382]: time="2025-08-13T01:23:55.473229956Z" level=info msg="CreateContainer within sandbox \"bdeff75a0e72e9003fc01edf36908e3a9a8b17e604a397f659196bdefe097421\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"951eb5a5d3a2d7bf06d4bb8dfb4ad30eec0db77e76a3eea17cf7b1cb0786257b\"" Aug 13 01:23:55.474332 env[1382]: time="2025-08-13T01:23:55.474315192Z" level=info msg="StartContainer for \"951eb5a5d3a2d7bf06d4bb8dfb4ad30eec0db77e76a3eea17cf7b1cb0786257b\"" Aug 13 01:23:55.491418 kubelet[2284]: I0813 01:23:55.491402 2284 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Aug 13 01:23:55.578887 env[1382]: time="2025-08-13T01:23:55.578859991Z" level=info msg="StartContainer for \"951eb5a5d3a2d7bf06d4bb8dfb4ad30eec0db77e76a3eea17cf7b1cb0786257b\" returns successfully" Aug 13 01:23:55.758718 kernel: kauditd_printk_skb: 43 callbacks suppressed Aug 13 01:23:55.759817 kernel: audit: type=1325 audit(1755048235.754:311): table=filter:105 family=2 entries=16 op=nft_register_rule pid=4682 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 01:23:55.763234 kernel: audit: type=1300 audit(1755048235.754:311): arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7ffc32f57e50 a2=0 a3=7ffc32f57e3c items=0 ppid=2387 pid=4682 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:23:55.754000 audit[4682]: NETFILTER_CFG table=filter:105 family=2 entries=16 op=nft_register_rule pid=4682 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 01:23:55.766923 kernel: audit: type=1327 audit(1755048235.754:311): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 01:23:55.767511 kernel: audit: type=1325 audit(1755048235.764:312): table=nat:106 family=2 entries=18 op=nft_register_rule pid=4682 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 01:23:55.754000 audit[4682]: SYSCALL arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7ffc32f57e50 a2=0 a3=7ffc32f57e3c items=0 ppid=2387 pid=4682 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:23:55.754000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 01:23:55.764000 audit[4682]: NETFILTER_CFG table=nat:106 family=2 entries=18 op=nft_register_rule pid=4682 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 01:23:55.764000 audit[4682]: SYSCALL arch=c000003e syscall=46 success=yes exit=5004 a0=3 a1=7ffc32f57e50 a2=0 a3=0 items=0 ppid=2387 pid=4682 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:23:55.774186 kernel: audit: type=1300 audit(1755048235.764:312): arch=c000003e syscall=46 success=yes exit=5004 a0=3 a1=7ffc32f57e50 a2=0 a3=0 items=0 ppid=2387 pid=4682 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:23:55.764000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 01:23:55.776309 kernel: audit: type=1327 audit(1755048235.764:312): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 01:23:55.780000 audit[4684]: NETFILTER_CFG table=filter:107 family=2 entries=16 op=nft_register_rule pid=4684 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 01:23:55.780000 audit[4684]: SYSCALL arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7ffcdb104b60 a2=0 a3=7ffcdb104b4c items=0 ppid=2387 pid=4684 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:23:55.789598 kernel: audit: type=1325 audit(1755048235.780:313): table=filter:107 family=2 entries=16 op=nft_register_rule pid=4684 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 01:23:55.789636 kernel: audit: type=1300 audit(1755048235.780:313): arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7ffcdb104b60 a2=0 a3=7ffcdb104b4c items=0 ppid=2387 pid=4684 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:23:55.789675 kernel: audit: type=1327 audit(1755048235.780:313): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 01:23:55.780000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 01:23:55.790000 audit[4684]: NETFILTER_CFG table=nat:108 family=2 entries=18 op=nft_register_rule pid=4684 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 01:23:55.790000 audit[4684]: SYSCALL arch=c000003e syscall=46 success=yes exit=5004 a0=3 a1=7ffcdb104b60 a2=0 a3=0 items=0 ppid=2387 pid=4684 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:23:55.790000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 01:23:55.794702 kernel: audit: type=1325 audit(1755048235.790:314): table=nat:108 family=2 entries=18 op=nft_register_rule pid=4684 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 01:23:56.801000 audit[4710]: NETFILTER_CFG table=filter:109 family=2 entries=14 op=nft_register_rule pid=4710 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 01:23:56.801000 audit[4710]: SYSCALL arch=c000003e syscall=46 success=yes exit=4504 a0=3 a1=7fff2d8fc100 a2=0 a3=7fff2d8fc0ec items=0 ppid=2387 pid=4710 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:23:56.801000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 01:23:56.808000 audit[4710]: NETFILTER_CFG table=nat:110 family=2 entries=32 op=nft_register_chain pid=4710 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 01:23:56.808000 audit[4710]: 
SYSCALL arch=c000003e syscall=46 success=yes exit=12156 a0=3 a1=7fff2d8fc100 a2=0 a3=7fff2d8fc0ec items=0 ppid=2387 pid=4710 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:23:56.808000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 01:23:56.930000 audit[4722]: AVC avc: denied { bpf } for pid=4722 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:23:56.930000 audit[4722]: AVC avc: denied { bpf } for pid=4722 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:23:56.930000 audit[4722]: AVC avc: denied { perfmon } for pid=4722 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:23:56.930000 audit[4722]: AVC avc: denied { perfmon } for pid=4722 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:23:56.930000 audit[4722]: AVC avc: denied { perfmon } for pid=4722 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:23:56.930000 audit[4722]: AVC avc: denied { perfmon } for pid=4722 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:23:56.930000 audit[4722]: AVC avc: denied { perfmon } for pid=4722 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:23:56.930000 audit[4722]: AVC avc: denied { bpf } for pid=4722 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:23:56.930000 audit[4722]: AVC avc: denied { bpf } for pid=4722 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:23:56.930000 audit: BPF prog-id=10 op=LOAD Aug 13 01:23:56.930000 audit[4722]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fffb8b24520 a2=98 a3=1fffffffffffffff items=0 ppid=4704 pid=4722 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:23:56.930000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Aug 13 01:23:56.930000 audit: BPF prog-id=10 op=UNLOAD Aug 13 01:23:56.931000 audit[4722]: AVC avc: denied { bpf } for pid=4722 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:23:56.931000 audit[4722]: AVC avc: denied { bpf } for pid=4722 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:23:56.931000 audit[4722]: AVC avc: denied { perfmon } for pid=4722 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:23:56.931000 audit[4722]: AVC avc: denied { perfmon } for pid=4722 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:23:56.931000 audit[4722]: AVC avc: denied { perfmon } for pid=4722 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:23:56.931000 audit[4722]: AVC avc: denied { perfmon } for pid=4722 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:23:56.931000 audit[4722]: AVC avc: denied { perfmon } for pid=4722 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:23:56.931000 audit[4722]: AVC avc: denied { bpf } for pid=4722 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:23:56.931000 audit[4722]: AVC avc: denied { bpf } for pid=4722 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:23:56.931000 audit: BPF prog-id=11 op=LOAD Aug 13 01:23:56.931000 audit[4722]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fffb8b24400 a2=94 a3=3 items=0 ppid=4704 pid=4722 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:23:56.931000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Aug 13 01:23:56.931000 audit: BPF prog-id=11 op=UNLOAD Aug 13 01:23:56.931000 audit[4722]: AVC avc: denied { bpf } for pid=4722 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:23:56.931000 audit[4722]: AVC avc: denied { bpf } for pid=4722 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:23:56.931000 audit[4722]: AVC avc: denied { perfmon } for pid=4722 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:23:56.931000 audit[4722]: AVC avc: denied { perfmon } for pid=4722 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:23:56.931000 audit[4722]: AVC avc: denied { perfmon } for pid=4722 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:23:56.931000 audit[4722]: AVC avc: denied { perfmon } for pid=4722 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:23:56.931000 audit[4722]: AVC avc: denied { perfmon } for pid=4722 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:23:56.931000 audit[4722]: AVC avc: denied { bpf } for pid=4722 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:23:56.931000 audit[4722]: AVC avc: denied { bpf } for pid=4722 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:23:56.931000 audit: BPF prog-id=12 op=LOAD Aug 13 01:23:56.931000 audit[4722]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fffb8b24440 a2=94 a3=7fffb8b24620 items=0 ppid=4704 pid=4722 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:23:56.931000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Aug 13 01:23:56.931000 audit: BPF prog-id=12 op=UNLOAD Aug 13 01:23:56.931000 audit[4722]: AVC avc: denied { perfmon } for pid=4722 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:23:56.931000 audit[4722]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=0 a1=7fffb8b24510 a2=50 a3=a000000085 items=0 ppid=4704 pid=4722 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:23:56.931000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Aug 13 01:23:56.935000 audit[4723]: AVC avc: denied { bpf } for pid=4723 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:23:56.935000 audit[4723]: AVC avc: denied { bpf } for pid=4723 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:23:56.935000 audit[4723]: AVC avc: denied { perfmon } for pid=4723 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:23:56.935000 audit[4723]: AVC avc: denied { perfmon } for pid=4723 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:23:56.935000 audit[4723]: AVC avc: denied { perfmon } for pid=4723 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:23:56.935000 audit[4723]: AVC avc: denied { perfmon } for pid=4723 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:23:56.935000 audit[4723]: AVC avc: denied { perfmon } for pid=4723 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:23:56.935000 audit[4723]: AVC avc: denied { bpf } for pid=4723 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:23:56.935000 audit[4723]: AVC avc: denied { bpf } for pid=4723 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:23:56.935000 audit: BPF prog-id=13 op=LOAD Aug 13 01:23:56.935000 audit[4723]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fff28994010 a2=98 a3=3 items=0 ppid=4704 pid=4723 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:23:56.935000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Aug 13 01:23:56.935000 audit: BPF prog-id=13 op=UNLOAD Aug 13 01:23:56.935000 audit[4723]: AVC avc: denied { bpf } for pid=4723 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:23:56.935000 audit[4723]: AVC avc: denied { bpf } for pid=4723 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:23:56.935000 audit[4723]: AVC avc: denied { perfmon } for pid=4723 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:23:56.935000 audit[4723]: AVC avc: denied { perfmon } for pid=4723 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:23:56.935000 audit[4723]: AVC avc: denied { perfmon } for pid=4723 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:23:56.935000 audit[4723]: AVC avc: denied { perfmon } for pid=4723 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:23:56.935000 audit[4723]: AVC avc: denied { perfmon } for pid=4723 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:23:56.935000 audit[4723]: AVC avc: denied { bpf } for pid=4723 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:23:56.935000 audit[4723]: AVC avc: denied { bpf } for pid=4723 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:23:56.935000 audit: BPF prog-id=14 op=LOAD Aug 13 01:23:56.935000 audit[4723]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7fff28993e00 a2=94 a3=54428f items=0 ppid=4704 pid=4723 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 
key=(null) Aug 13 01:23:56.935000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Aug 13 01:23:56.935000 audit: BPF prog-id=14 op=UNLOAD Aug 13 01:23:56.935000 audit[4723]: AVC avc: denied { bpf } for pid=4723 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:23:56.935000 audit[4723]: AVC avc: denied { bpf } for pid=4723 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:23:56.935000 audit[4723]: AVC avc: denied { perfmon } for pid=4723 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:23:56.935000 audit[4723]: AVC avc: denied { perfmon } for pid=4723 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:23:56.935000 audit[4723]: AVC avc: denied { perfmon } for pid=4723 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:23:56.935000 audit[4723]: AVC avc: denied { perfmon } for pid=4723 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:23:56.935000 audit[4723]: AVC avc: denied { perfmon } for pid=4723 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:23:56.935000 audit[4723]: AVC avc: denied { bpf } for pid=4723 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:23:56.935000 audit[4723]: AVC avc: denied { bpf } for pid=4723 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:23:56.935000 audit: BPF prog-id=15 op=LOAD Aug 13 01:23:56.935000 audit[4723]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7fff28993e30 a2=94 a3=2 items=0 ppid=4704 pid=4723 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:23:56.935000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Aug 13 01:23:56.935000 audit: BPF prog-id=15 op=UNLOAD Aug 13 01:23:57.012000 audit[4723]: AVC avc: denied { bpf } for pid=4723 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:23:57.012000 audit[4723]: AVC avc: denied { bpf } for pid=4723 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:23:57.012000 audit[4723]: AVC avc: denied { perfmon } for pid=4723 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:23:57.012000 audit[4723]: AVC avc: denied { perfmon } for pid=4723 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:23:57.012000 audit[4723]: AVC avc: denied { perfmon } for 
pid=4723 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:23:57.012000 audit[4723]: AVC avc: denied { perfmon } for pid=4723 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:23:57.012000 audit[4723]: AVC avc: denied { perfmon } for pid=4723 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:23:57.012000 audit[4723]: AVC avc: denied { bpf } for pid=4723 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:23:57.012000 audit[4723]: AVC avc: denied { bpf } for pid=4723 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:23:57.012000 audit: BPF prog-id=16 op=LOAD Aug 13 01:23:57.012000 audit[4723]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7fff28993cf0 a2=94 a3=1 items=0 ppid=4704 pid=4723 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:23:57.012000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Aug 13 01:23:57.012000 audit: BPF prog-id=16 op=UNLOAD Aug 13 01:23:57.012000 audit[4723]: AVC avc: denied { perfmon } for pid=4723 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:23:57.012000 audit[4723]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=0 a1=7fff28993dc0 a2=50 a3=7fff28993ea0 items=0 ppid=4704 pid=4723 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:23:57.012000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Aug 13 01:23:57.019000 audit[4723]: AVC avc: denied { bpf } for pid=4723 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:23:57.019000 audit[4723]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7fff28993d00 a2=28 a3=0 items=0 ppid=4704 pid=4723 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:23:57.019000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Aug 13 01:23:57.019000 audit[4723]: AVC avc: denied { bpf } for pid=4723 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:23:57.019000 audit[4723]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7fff28993d30 a2=28 a3=0 items=0 ppid=4704 pid=4723 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:23:57.019000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Aug 13 01:23:57.019000 audit[4723]: AVC avc: denied { bpf } for pid=4723 
comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:23:57.019000 audit[4723]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7fff28993c40 a2=28 a3=0 items=0 ppid=4704 pid=4723 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:23:57.019000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Aug 13 01:23:57.019000 audit[4723]: AVC avc: denied { bpf } for pid=4723 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:23:57.019000 audit[4723]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7fff28993d50 a2=28 a3=0 items=0 ppid=4704 pid=4723 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:23:57.019000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Aug 13 01:23:57.019000 audit[4723]: AVC avc: denied { bpf } for pid=4723 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:23:57.019000 audit[4723]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7fff28993d30 a2=28 a3=0 items=0 ppid=4704 pid=4723 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:23:57.019000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Aug 13 01:23:57.019000 audit[4723]: AVC avc: denied { bpf } for pid=4723 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:23:57.019000 audit[4723]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7fff28993d20 a2=28 a3=0 items=0 ppid=4704 pid=4723 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:23:57.019000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Aug 13 01:23:57.019000 audit[4723]: AVC avc: denied { bpf } for pid=4723 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:23:57.019000 audit[4723]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7fff28993d50 a2=28 a3=0 items=0 ppid=4704 pid=4723 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:23:57.019000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Aug 13 01:23:57.019000 audit[4723]: AVC avc: denied { bpf } for pid=4723 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:23:57.019000 audit[4723]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7fff28993d30 a2=28 a3=0 items=0 ppid=4704 pid=4723 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 
fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:23:57.019000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Aug 13 01:23:57.019000 audit[4723]: AVC avc: denied { bpf } for pid=4723 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:23:57.019000 audit[4723]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7fff28993d50 a2=28 a3=0 items=0 ppid=4704 pid=4723 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:23:57.019000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Aug 13 01:23:57.019000 audit[4723]: AVC avc: denied { bpf } for pid=4723 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:23:57.019000 audit[4723]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7fff28993d20 a2=28 a3=0 items=0 ppid=4704 pid=4723 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:23:57.019000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Aug 13 01:23:57.019000 audit[4723]: AVC avc: denied { bpf } for pid=4723 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:23:57.019000 audit[4723]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7fff28993d90 a2=28 a3=0 items=0 ppid=4704 pid=4723 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:23:57.019000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Aug 13 01:23:57.019000 audit[4723]: AVC avc: denied { perfmon } for pid=4723 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:23:57.019000 audit[4723]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=0 a1=7fff28993b40 a2=50 a3=1 items=0 ppid=4704 pid=4723 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:23:57.019000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Aug 13 01:23:57.019000 audit[4723]: AVC avc: denied { bpf } for pid=4723 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:23:57.019000 audit[4723]: AVC avc: denied { bpf } for pid=4723 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:23:57.019000 audit[4723]: AVC avc: denied { perfmon } for pid=4723 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:23:57.019000 audit[4723]: AVC avc: denied { perfmon } for pid=4723 comm="bpftool" capability=38 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:23:57.019000 audit[4723]: AVC avc: denied { perfmon } for pid=4723 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:23:57.019000 audit[4723]: AVC avc: denied { perfmon } for pid=4723 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:23:57.019000 audit[4723]: AVC avc: denied { perfmon } for pid=4723 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:23:57.019000 audit[4723]: AVC avc: denied { bpf } for pid=4723 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:23:57.019000 audit[4723]: AVC avc: denied { bpf } for pid=4723 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:23:57.019000 audit: BPF prog-id=17 op=LOAD Aug 13 01:23:57.019000 audit[4723]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7fff28993b40 a2=94 a3=5 items=0 ppid=4704 pid=4723 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:23:57.019000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Aug 13 01:23:57.019000 audit: BPF prog-id=17 op=UNLOAD Aug 13 01:23:57.019000 audit[4723]: AVC avc: denied { perfmon } for pid=4723 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:23:57.019000 audit[4723]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=0 a1=7fff28993bf0 a2=50 a3=1 items=0 ppid=4704 pid=4723 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:23:57.019000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Aug 13 01:23:57.019000 audit[4723]: AVC avc: denied { bpf } for pid=4723 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:23:57.019000 audit[4723]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=16 a1=7fff28993d10 a2=4 a3=38 items=0 ppid=4704 pid=4723 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:23:57.019000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Aug 13 01:23:57.019000 audit[4723]: AVC avc: denied { bpf } for pid=4723 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:23:57.019000 audit[4723]: AVC avc: denied { bpf } for pid=4723 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:23:57.019000 audit[4723]: AVC avc: denied { perfmon } for pid=4723 comm="bpftool" capability=38 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:23:57.019000 audit[4723]: AVC avc: denied { bpf } for pid=4723 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:23:57.019000 audit[4723]: AVC avc: denied { perfmon } for pid=4723 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:23:57.019000 audit[4723]: AVC avc: denied { perfmon } for pid=4723 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:23:57.019000 audit[4723]: AVC avc: denied { perfmon } for pid=4723 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:23:57.019000 audit[4723]: AVC avc: denied { perfmon } for pid=4723 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:23:57.019000 audit[4723]: AVC avc: denied { perfmon } for pid=4723 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:23:57.019000 audit[4723]: AVC avc: denied { bpf } for pid=4723 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:23:57.019000 audit[4723]: AVC avc: denied { confidentiality } for pid=4723 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Aug 13 01:23:57.019000 audit[4723]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=5 a1=7fff28993d60 a2=94 a3=6 items=0 ppid=4704 pid=4723 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:23:57.019000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Aug 13 01:23:57.020000 audit[4723]: AVC avc: denied { bpf } for pid=4723 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:23:57.020000 audit[4723]: AVC avc: denied { bpf } for pid=4723 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:23:57.020000 audit[4723]: AVC avc: denied { perfmon } for pid=4723 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:23:57.020000 audit[4723]: AVC avc: denied { bpf } for pid=4723 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:23:57.020000 audit[4723]: AVC avc: denied { perfmon } for pid=4723 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:23:57.020000 audit[4723]: AVC avc: denied { perfmon } for pid=4723 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 
tclass=capability2 permissive=0 Aug 13 01:23:57.020000 audit[4723]: AVC avc: denied { perfmon } for pid=4723 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:23:57.020000 audit[4723]: AVC avc: denied { perfmon } for pid=4723 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:23:57.020000 audit[4723]: AVC avc: denied { perfmon } for pid=4723 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:23:57.020000 audit[4723]: AVC avc: denied { bpf } for pid=4723 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:23:57.020000 audit[4723]: AVC avc: denied { confidentiality } for pid=4723 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Aug 13 01:23:57.020000 audit[4723]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=5 a1=7fff28993510 a2=94 a3=88 items=0 ppid=4704 pid=4723 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:23:57.020000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Aug 13 01:23:57.020000 audit[4723]: AVC avc: denied { bpf } for pid=4723 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:23:57.020000 audit[4723]: AVC avc: denied { bpf } for pid=4723 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:23:57.020000 audit[4723]: AVC avc: denied { perfmon } for pid=4723 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:23:57.020000 audit[4723]: AVC avc: denied { bpf } for pid=4723 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:23:57.020000 audit[4723]: AVC avc: denied { perfmon } for pid=4723 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:23:57.020000 audit[4723]: AVC avc: denied { perfmon } for pid=4723 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:23:57.020000 audit[4723]: AVC avc: denied { perfmon } for pid=4723 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:23:57.020000 audit[4723]: AVC avc: denied { perfmon } for pid=4723 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:23:57.020000 audit[4723]: AVC avc: denied { perfmon } for pid=4723 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:23:57.020000 audit[4723]: AVC avc: 
denied { bpf } for pid=4723 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:23:57.020000 audit[4723]: AVC avc: denied { confidentiality } for pid=4723 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Aug 13 01:23:57.020000 audit[4723]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=5 a1=7fff28993510 a2=94 a3=88 items=0 ppid=4704 pid=4723 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:23:57.020000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Aug 13 01:23:57.051000 audit[4726]: AVC avc: denied { bpf } for pid=4726 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:23:57.051000 audit[4726]: AVC avc: denied { bpf } for pid=4726 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:23:57.051000 audit[4726]: AVC avc: denied { perfmon } for pid=4726 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:23:57.051000 audit[4726]: AVC avc: denied { perfmon } for pid=4726 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:23:57.051000 audit[4726]: AVC avc: denied { perfmon } for pid=4726 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:23:57.051000 audit[4726]: AVC avc: denied { perfmon } for pid=4726 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:23:57.051000 audit[4726]: AVC avc: denied { perfmon } for pid=4726 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:23:57.051000 audit[4726]: AVC avc: denied { bpf } for pid=4726 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:23:57.051000 audit[4726]: AVC avc: denied { bpf } for pid=4726 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:23:57.051000 audit: BPF prog-id=18 op=LOAD Aug 13 01:23:57.051000 audit[4726]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffef864c310 a2=98 a3=1999999999999999 items=0 ppid=4704 pid=4726 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:23:57.051000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Aug 13 01:23:57.051000 audit: BPF prog-id=18 op=UNLOAD Aug 13 
01:23:57.051000 audit[4726]: AVC avc: denied { bpf } for pid=4726 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:23:57.051000 audit[4726]: AVC avc: denied { bpf } for pid=4726 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:23:57.051000 audit[4726]: AVC avc: denied { perfmon } for pid=4726 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:23:57.051000 audit[4726]: AVC avc: denied { perfmon } for pid=4726 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:23:57.051000 audit[4726]: AVC avc: denied { perfmon } for pid=4726 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:23:57.051000 audit[4726]: AVC avc: denied { perfmon } for pid=4726 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:23:57.051000 audit[4726]: AVC avc: denied { perfmon } for pid=4726 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:23:57.051000 audit[4726]: AVC avc: denied { bpf } for pid=4726 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:23:57.051000 audit[4726]: AVC avc: denied { bpf } for pid=4726 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:23:57.051000 audit: BPF prog-id=19 op=LOAD Aug 13 01:23:57.051000 audit[4726]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffef864c1f0 a2=94 a3=ffff items=0 ppid=4704 pid=4726 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:23:57.051000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Aug 13 01:23:57.051000 audit: BPF prog-id=19 op=UNLOAD Aug 13 01:23:57.051000 audit[4726]: AVC avc: denied { bpf } for pid=4726 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:23:57.051000 audit[4726]: AVC avc: denied { bpf } for pid=4726 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:23:57.051000 audit[4726]: AVC avc: denied { perfmon } for pid=4726 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:23:57.051000 audit[4726]: AVC avc: denied { perfmon } for pid=4726 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:23:57.051000 
audit[4726]: AVC avc: denied { perfmon } for pid=4726 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:23:57.051000 audit[4726]: AVC avc: denied { perfmon } for pid=4726 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:23:57.051000 audit[4726]: AVC avc: denied { perfmon } for pid=4726 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:23:57.051000 audit[4726]: AVC avc: denied { bpf } for pid=4726 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:23:57.051000 audit[4726]: AVC avc: denied { bpf } for pid=4726 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:23:57.051000 audit: BPF prog-id=20 op=LOAD Aug 13 01:23:57.051000 audit[4726]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffef864c230 a2=94 a3=7ffef864c410 items=0 ppid=4704 pid=4726 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:23:57.051000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Aug 13 01:23:57.051000 audit: BPF prog-id=20 op=UNLOAD Aug 13 01:23:57.333127 systemd-networkd[1112]: vxlan.calico: Link UP Aug 13 01:23:57.333131 systemd-networkd[1112]: vxlan.calico: Gained carrier Aug 13 01:23:57.346000 audit[4760]: AVC avc: denied { bpf } for pid=4760 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:23:57.346000 audit[4760]: AVC avc: denied { bpf } for pid=4760 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:23:57.346000 audit[4760]: AVC avc: denied { perfmon } for pid=4760 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:23:57.346000 audit[4760]: AVC avc: denied { perfmon } for pid=4760 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:23:57.346000 audit[4760]: AVC avc: denied { perfmon } for pid=4760 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:23:57.346000 audit[4760]: AVC avc: denied { perfmon } for pid=4760 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:23:57.346000 audit[4760]: AVC avc: denied { perfmon } for pid=4760 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:23:57.346000 audit[4760]: AVC avc: denied { bpf } for pid=4760 comm="bpftool" 
capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:23:57.346000 audit[4760]: AVC avc: denied { bpf } for pid=4760 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:23:57.346000 audit: BPF prog-id=21 op=LOAD Aug 13 01:23:57.346000 audit[4760]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffeb3e8f7e0 a2=98 a3=0 items=0 ppid=4704 pid=4760 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:23:57.346000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Aug 13 01:23:57.346000 audit: BPF prog-id=21 op=UNLOAD Aug 13 01:23:57.346000 audit[4760]: AVC avc: denied { bpf } for pid=4760 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:23:57.346000 audit[4760]: AVC avc: denied { bpf } for pid=4760 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:23:57.346000 audit[4760]: AVC avc: denied { perfmon } for pid=4760 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:23:57.346000 audit[4760]: AVC avc: denied { perfmon } for pid=4760 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:23:57.346000 audit[4760]: AVC avc: denied { perfmon } for pid=4760 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:23:57.346000 audit[4760]: AVC avc: denied { perfmon } for pid=4760 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:23:57.346000 audit[4760]: AVC avc: denied { perfmon } for pid=4760 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:23:57.346000 audit[4760]: AVC avc: denied { bpf } for pid=4760 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:23:57.346000 audit[4760]: AVC avc: denied { bpf } for pid=4760 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:23:57.346000 audit: BPF prog-id=22 op=LOAD Aug 13 01:23:57.346000 audit[4760]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffeb3e8f5f0 a2=94 a3=54428f items=0 ppid=4704 pid=4760 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:23:57.346000 audit: PROCTITLE 
proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Aug 13 01:23:57.346000 audit: BPF prog-id=22 op=UNLOAD Aug 13 01:23:57.346000 audit[4760]: AVC avc: denied { bpf } for pid=4760 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:23:57.346000 audit[4760]: AVC avc: denied { bpf } for pid=4760 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:23:57.346000 audit[4760]: AVC avc: denied { perfmon } for pid=4760 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:23:57.346000 audit[4760]: AVC avc: denied { perfmon } for pid=4760 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:23:57.346000 audit[4760]: AVC avc: denied { perfmon } for pid=4760 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:23:57.346000 audit[4760]: AVC avc: denied { perfmon } for pid=4760 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:23:57.346000 audit[4760]: AVC avc: denied { perfmon } for pid=4760 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:23:57.346000 audit[4760]: AVC avc: denied { bpf } for pid=4760 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:23:57.346000 audit[4760]: AVC avc: denied { bpf } for pid=4760 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:23:57.346000 audit: BPF prog-id=23 op=LOAD Aug 13 01:23:57.346000 audit[4760]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffeb3e8f620 a2=94 a3=2 items=0 ppid=4704 pid=4760 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:23:57.346000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Aug 13 01:23:57.346000 audit: BPF prog-id=23 op=UNLOAD Aug 13 01:23:57.346000 audit[4760]: AVC avc: denied { bpf } for pid=4760 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:23:57.346000 audit[4760]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=12 a1=7ffeb3e8f4f0 a2=28 a3=0 items=0 ppid=4704 pid=4760 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:23:57.346000 audit: PROCTITLE 
proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Aug 13 01:23:57.346000 audit[4760]: AVC avc: denied { bpf } for pid=4760 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:23:57.346000 audit[4760]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffeb3e8f520 a2=28 a3=0 items=0 ppid=4704 pid=4760 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:23:57.346000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Aug 13 01:23:57.346000 audit[4760]: AVC avc: denied { bpf } for pid=4760 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:23:57.346000 audit[4760]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffeb3e8f430 a2=28 a3=0 items=0 ppid=4704 pid=4760 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:23:57.346000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Aug 13 01:23:57.346000 audit[4760]: AVC avc: denied { bpf } for pid=4760 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:23:57.346000 audit[4760]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=12 a1=7ffeb3e8f540 a2=28 a3=0 items=0 ppid=4704 pid=4760 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:23:57.346000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Aug 13 01:23:57.346000 audit[4760]: AVC avc: denied { bpf } for pid=4760 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:23:57.346000 audit[4760]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=12 a1=7ffeb3e8f520 a2=28 a3=0 items=0 ppid=4704 pid=4760 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:23:57.346000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Aug 13 01:23:57.346000 audit[4760]: AVC avc: denied { bpf } for pid=4760 comm="bpftool" capability=39 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:23:57.346000 audit[4760]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=12 a1=7ffeb3e8f510 a2=28 a3=0 items=0 ppid=4704 pid=4760 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:23:57.346000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Aug 13 01:23:57.346000 audit[4760]: AVC avc: denied { bpf } for pid=4760 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:23:57.346000 audit[4760]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=12 a1=7ffeb3e8f540 a2=28 a3=0 items=0 ppid=4704 pid=4760 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:23:57.346000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Aug 13 01:23:57.346000 audit[4760]: AVC avc: denied { bpf } for pid=4760 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:23:57.346000 audit[4760]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffeb3e8f520 a2=28 a3=0 items=0 ppid=4704 pid=4760 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:23:57.346000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Aug 13 01:23:57.346000 audit[4760]: AVC avc: denied { bpf } for pid=4760 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:23:57.346000 audit[4760]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffeb3e8f540 a2=28 a3=0 items=0 ppid=4704 pid=4760 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:23:57.346000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Aug 13 01:23:57.346000 audit[4760]: AVC avc: denied { bpf } for pid=4760 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:23:57.346000 audit[4760]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffeb3e8f510 a2=28 a3=0 items=0 ppid=4704 pid=4760 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:23:57.346000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Aug 13 01:23:57.347000 audit[4760]: AVC avc: denied { bpf } for pid=4760 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:23:57.347000 audit[4760]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=12 a1=7ffeb3e8f580 a2=28 a3=0 items=0 ppid=4704 pid=4760 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:23:57.347000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Aug 13 01:23:57.347000 audit[4760]: AVC avc: denied { bpf } for pid=4760 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:23:57.347000 audit[4760]: AVC avc: denied { bpf } for pid=4760 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:23:57.347000 audit[4760]: AVC avc: denied { perfmon } for pid=4760 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:23:57.347000 audit[4760]: AVC avc: denied { perfmon } for pid=4760 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:23:57.347000 audit[4760]: AVC avc: denied { perfmon } for pid=4760 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:23:57.347000 audit[4760]: AVC avc: denied { perfmon } for pid=4760 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:23:57.347000 audit[4760]: AVC avc: denied { perfmon } for pid=4760 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:23:57.347000 audit[4760]: AVC avc: denied { bpf } for pid=4760 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:23:57.347000 audit[4760]: AVC avc: denied { bpf } for pid=4760 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:23:57.347000 audit: BPF prog-id=24 op=LOAD Aug 13 01:23:57.347000 audit[4760]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffeb3e8f3f0 a2=94 a3=0 items=0 ppid=4704 pid=4760 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:23:57.347000 audit: PROCTITLE 
proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Aug 13 01:23:57.347000 audit: BPF prog-id=24 op=UNLOAD Aug 13 01:23:57.347000 audit[4760]: AVC avc: denied { bpf } for pid=4760 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:23:57.347000 audit[4760]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=0 a1=7ffeb3e8f3e0 a2=50 a3=2800 items=0 ppid=4704 pid=4760 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:23:57.347000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Aug 13 01:23:57.347000 audit[4760]: AVC avc: denied { bpf } for pid=4760 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:23:57.347000 audit[4760]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=0 a1=7ffeb3e8f3e0 a2=50 a3=2800 items=0 ppid=4704 pid=4760 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:23:57.347000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Aug 13 01:23:57.347000 audit[4760]: AVC avc: denied { bpf } for pid=4760 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:23:57.347000 audit[4760]: AVC avc: denied { bpf } for pid=4760 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:23:57.347000 audit[4760]: AVC avc: denied { bpf } for pid=4760 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:23:57.347000 audit[4760]: AVC avc: denied { perfmon } for pid=4760 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:23:57.347000 audit[4760]: AVC avc: denied { perfmon } for pid=4760 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:23:57.347000 audit[4760]: AVC avc: denied { perfmon } for pid=4760 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:23:57.347000 audit[4760]: AVC avc: denied { perfmon } for pid=4760 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:23:57.347000 audit[4760]: AVC avc: denied { perfmon } for pid=4760 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:23:57.347000 audit[4760]: AVC avc: denied { bpf } for pid=4760 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:23:57.347000 audit[4760]: AVC avc: denied { bpf } for pid=4760 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:23:57.347000 audit: BPF prog-id=25 op=LOAD Aug 13 01:23:57.347000 audit[4760]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffeb3e8ec00 a2=94 a3=2 items=0 ppid=4704 pid=4760 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:23:57.347000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Aug 13 01:23:57.347000 audit: BPF prog-id=25 op=UNLOAD Aug 13 01:23:57.347000 audit[4760]: AVC avc: denied { bpf } for pid=4760 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:23:57.347000 audit[4760]: AVC avc: denied { bpf } for pid=4760 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:23:57.347000 audit[4760]: AVC avc: denied { bpf } for pid=4760 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:23:57.347000 audit[4760]: AVC avc: denied { perfmon } for pid=4760 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:23:57.347000 audit[4760]: AVC avc: denied { perfmon } for pid=4760 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:23:57.347000 audit[4760]: AVC avc: denied { perfmon } for pid=4760 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:23:57.347000 audit[4760]: AVC avc: denied { perfmon } for pid=4760 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:23:57.347000 audit[4760]: AVC avc: denied { perfmon } for pid=4760 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:23:57.347000 audit[4760]: AVC avc: denied { bpf } for pid=4760 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:23:57.347000 audit[4760]: AVC avc: denied { bpf } for pid=4760 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:23:57.347000 audit: BPF prog-id=26 op=LOAD Aug 13 01:23:57.347000 audit[4760]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffeb3e8ed00 a2=94 a3=30 items=0 ppid=4704 pid=4760 auid=4294967295 
uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:23:57.347000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Aug 13 01:23:57.350000 audit[4763]: AVC avc: denied { bpf } for pid=4763 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:23:57.350000 audit[4763]: AVC avc: denied { bpf } for pid=4763 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:23:57.350000 audit[4763]: AVC avc: denied { perfmon } for pid=4763 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:23:57.350000 audit[4763]: AVC avc: denied { perfmon } for pid=4763 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:23:57.350000 audit[4763]: AVC avc: denied { perfmon } for pid=4763 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:23:57.350000 audit[4763]: AVC avc: denied { perfmon } for pid=4763 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:23:57.350000 audit[4763]: AVC avc: denied { perfmon } for pid=4763 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:23:57.350000 audit[4763]: AVC avc: denied { bpf } for pid=4763 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:23:57.350000 audit[4763]: AVC avc: denied { bpf } for pid=4763 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:23:57.350000 audit: BPF prog-id=27 op=LOAD Aug 13 01:23:57.350000 audit[4763]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fffbf133d70 a2=98 a3=0 items=0 ppid=4704 pid=4763 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:23:57.350000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Aug 13 01:23:57.350000 audit: BPF prog-id=27 op=UNLOAD Aug 13 01:23:57.350000 audit[4763]: AVC avc: denied { bpf } for pid=4763 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:23:57.350000 audit[4763]: AVC avc: denied { bpf } for pid=4763 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:23:57.350000 audit[4763]: AVC avc: denied { perfmon } for pid=4763 comm="bpftool" capability=38 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:23:57.350000 audit[4763]: AVC avc: denied { perfmon } for pid=4763 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:23:57.350000 audit[4763]: AVC avc: denied { perfmon } for pid=4763 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:23:57.350000 audit[4763]: AVC avc: denied { perfmon } for pid=4763 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:23:57.350000 audit[4763]: AVC avc: denied { perfmon } for pid=4763 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:23:57.350000 audit[4763]: AVC avc: denied { bpf } for pid=4763 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:23:57.350000 audit[4763]: AVC avc: denied { bpf } for pid=4763 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:23:57.350000 audit: BPF prog-id=28 op=LOAD Aug 13 01:23:57.350000 audit[4763]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7fffbf133b60 a2=94 a3=54428f items=0 ppid=4704 pid=4763 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:23:57.350000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Aug 13 01:23:57.350000 audit: BPF prog-id=28 op=UNLOAD Aug 13 01:23:57.350000 audit[4763]: AVC avc: denied { bpf } for pid=4763 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:23:57.350000 audit[4763]: AVC avc: denied { bpf } for pid=4763 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:23:57.350000 audit[4763]: AVC avc: denied { perfmon } for pid=4763 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:23:57.350000 audit[4763]: AVC avc: denied { perfmon } for pid=4763 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:23:57.350000 audit[4763]: AVC avc: denied { perfmon } for pid=4763 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:23:57.350000 audit[4763]: AVC avc: denied { perfmon } for pid=4763 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:23:57.350000 audit[4763]: AVC avc: denied { perfmon } for pid=4763 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 
tclass=capability2 permissive=0 Aug 13 01:23:57.350000 audit[4763]: AVC avc: denied { bpf } for pid=4763 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:23:57.350000 audit[4763]: AVC avc: denied { bpf } for pid=4763 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:23:57.350000 audit: BPF prog-id=29 op=LOAD Aug 13 01:23:57.350000 audit[4763]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7fffbf133b90 a2=94 a3=2 items=0 ppid=4704 pid=4763 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:23:57.350000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Aug 13 01:23:57.350000 audit: BPF prog-id=29 op=UNLOAD Aug 13 01:23:57.436000 audit[4763]: AVC avc: denied { bpf } for pid=4763 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:23:57.436000 audit[4763]: AVC avc: denied { bpf } for pid=4763 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:23:57.436000 audit[4763]: AVC avc: denied { perfmon } for pid=4763 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:23:57.436000 audit[4763]: AVC avc: denied { perfmon } for pid=4763 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:23:57.436000 audit[4763]: AVC avc: denied { perfmon } for pid=4763 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:23:57.436000 audit[4763]: AVC avc: denied { perfmon } for pid=4763 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:23:57.436000 audit[4763]: AVC avc: denied { perfmon } for pid=4763 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:23:57.436000 audit[4763]: AVC avc: denied { bpf } for pid=4763 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:23:57.436000 audit[4763]: AVC avc: denied { bpf } for pid=4763 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:23:57.436000 audit: BPF prog-id=30 op=LOAD Aug 13 01:23:57.436000 audit[4763]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7fffbf133a50 a2=94 a3=1 items=0 ppid=4704 pid=4763 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:23:57.436000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Aug 13 01:23:57.436000 audit: BPF prog-id=30 op=UNLOAD Aug 13 01:23:57.436000 audit[4763]: AVC avc: denied { perfmon } for pid=4763 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:23:57.436000 audit[4763]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=0 a1=7fffbf133b20 a2=50 a3=7fffbf133c00 items=0 ppid=4704 pid=4763 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:23:57.436000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Aug 13 01:23:57.447000 audit[4763]: AVC avc: denied { bpf } for pid=4763 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:23:57.447000 audit[4763]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7fffbf133a60 a2=28 a3=0 items=0 ppid=4704 pid=4763 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:23:57.447000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Aug 13 01:23:57.447000 audit[4763]: AVC avc: denied { bpf } for pid=4763 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:23:57.447000 audit[4763]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7fffbf133a90 a2=28 a3=0 items=0 ppid=4704 pid=4763 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:23:57.447000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Aug 13 01:23:57.447000 audit[4763]: AVC avc: denied { bpf } for pid=4763 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:23:57.447000 audit[4763]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7fffbf1339a0 a2=28 a3=0 items=0 ppid=4704 pid=4763 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:23:57.447000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Aug 13 01:23:57.447000 audit[4763]: AVC avc: denied { bpf } for pid=4763 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 
permissive=0 Aug 13 01:23:57.447000 audit[4763]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7fffbf133ab0 a2=28 a3=0 items=0 ppid=4704 pid=4763 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:23:57.447000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Aug 13 01:23:57.447000 audit[4763]: AVC avc: denied { bpf } for pid=4763 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:23:57.447000 audit[4763]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7fffbf133a90 a2=28 a3=0 items=0 ppid=4704 pid=4763 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:23:57.447000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Aug 13 01:23:57.447000 audit[4763]: AVC avc: denied { bpf } for pid=4763 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:23:57.447000 audit[4763]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7fffbf133a80 a2=28 a3=0 items=0 ppid=4704 pid=4763 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:23:57.447000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Aug 13 01:23:57.447000 audit[4763]: AVC avc: denied { bpf } for pid=4763 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:23:57.447000 audit[4763]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7fffbf133ab0 a2=28 a3=0 items=0 ppid=4704 pid=4763 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:23:57.447000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Aug 13 01:23:57.447000 audit[4763]: AVC avc: denied { bpf } for pid=4763 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:23:57.447000 audit[4763]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7fffbf133a90 a2=28 a3=0 items=0 ppid=4704 pid=4763 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:23:57.447000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Aug 13 01:23:57.447000 audit[4763]: AVC avc: denied { bpf } for pid=4763 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:23:57.447000 audit[4763]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7fffbf133ab0 a2=28 a3=0 items=0 ppid=4704 pid=4763 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:23:57.447000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Aug 13 01:23:57.447000 audit[4763]: AVC avc: denied { bpf } for pid=4763 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:23:57.447000 audit[4763]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7fffbf133a80 a2=28 a3=0 items=0 ppid=4704 pid=4763 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:23:57.447000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Aug 13 01:23:57.447000 audit[4763]: AVC avc: denied { bpf } for pid=4763 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:23:57.447000 audit[4763]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7fffbf133af0 a2=28 a3=0 items=0 ppid=4704 pid=4763 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:23:57.447000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Aug 13 01:23:57.447000 audit[4763]: AVC avc: denied { perfmon } for pid=4763 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:23:57.447000 audit[4763]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=0 a1=7fffbf1338a0 a2=50 a3=1 items=0 ppid=4704 pid=4763 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:23:57.447000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Aug 13 01:23:57.447000 audit[4763]: AVC avc: denied { bpf } for pid=4763 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:23:57.447000 audit[4763]: AVC avc: denied { bpf } 
for pid=4763 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:23:57.447000 audit[4763]: AVC avc: denied { perfmon } for pid=4763 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:23:57.447000 audit[4763]: AVC avc: denied { perfmon } for pid=4763 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:23:57.447000 audit[4763]: AVC avc: denied { perfmon } for pid=4763 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:23:57.447000 audit[4763]: AVC avc: denied { perfmon } for pid=4763 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:23:57.447000 audit[4763]: AVC avc: denied { perfmon } for pid=4763 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:23:57.447000 audit[4763]: AVC avc: denied { bpf } for pid=4763 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:23:57.447000 audit[4763]: AVC avc: denied { bpf } for pid=4763 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:23:57.447000 audit: BPF prog-id=31 op=LOAD Aug 13 01:23:57.447000 audit[4763]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7fffbf1338a0 a2=94 a3=5 items=0 ppid=4704 pid=4763 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:23:57.447000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Aug 13 01:23:57.447000 audit: BPF prog-id=31 op=UNLOAD Aug 13 01:23:57.447000 audit[4763]: AVC avc: denied { perfmon } for pid=4763 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:23:57.447000 audit[4763]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=0 a1=7fffbf133950 a2=50 a3=1 items=0 ppid=4704 pid=4763 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:23:57.447000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Aug 13 01:23:57.447000 audit[4763]: AVC avc: denied { bpf } for pid=4763 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:23:57.447000 audit[4763]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=16 a1=7fffbf133a70 a2=4 a3=38 items=0 ppid=4704 pid=4763 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:23:57.447000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Aug 13 01:23:57.449000 audit[4763]: AVC avc: denied { bpf } for pid=4763 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:23:57.449000 audit[4763]: AVC avc: denied { bpf } for pid=4763 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:23:57.449000 audit[4763]: AVC avc: denied { perfmon } for pid=4763 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:23:57.449000 audit[4763]: AVC avc: denied { bpf } for pid=4763 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:23:57.449000 audit[4763]: AVC avc: denied { perfmon } for pid=4763 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:23:57.449000 audit[4763]: AVC avc: denied { perfmon } for pid=4763 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:23:57.449000 audit[4763]: AVC avc: denied { perfmon } for pid=4763 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:23:57.449000 audit[4763]: AVC avc: denied { perfmon } for pid=4763 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:23:57.449000 audit[4763]: AVC avc: denied { perfmon } for pid=4763 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:23:57.449000 audit[4763]: AVC avc: denied { bpf } for pid=4763 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:23:57.449000 audit[4763]: AVC avc: denied { confidentiality } for pid=4763 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Aug 13 01:23:57.449000 audit[4763]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=5 a1=7fffbf133ac0 a2=94 a3=6 items=0 ppid=4704 pid=4763 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:23:57.449000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Aug 13 01:23:57.449000 audit[4763]: AVC avc: denied { bpf } for pid=4763 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:23:57.449000 audit[4763]: AVC avc: denied { 
bpf } for pid=4763 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:23:57.449000 audit[4763]: AVC avc: denied { perfmon } for pid=4763 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:23:57.449000 audit[4763]: AVC avc: denied { bpf } for pid=4763 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:23:57.449000 audit[4763]: AVC avc: denied { perfmon } for pid=4763 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:23:57.449000 audit[4763]: AVC avc: denied { perfmon } for pid=4763 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:23:57.449000 audit[4763]: AVC avc: denied { perfmon } for pid=4763 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:23:57.449000 audit[4763]: AVC avc: denied { perfmon } for pid=4763 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:23:57.449000 audit[4763]: AVC avc: denied { perfmon } for pid=4763 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:23:57.449000 audit[4763]: AVC avc: denied { bpf } for pid=4763 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:23:57.449000 audit[4763]: AVC avc: denied { confidentiality } for pid=4763 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Aug 13 01:23:57.449000 audit[4763]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=5 a1=7fffbf133270 a2=94 a3=88 items=0 ppid=4704 pid=4763 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:23:57.449000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Aug 13 01:23:57.449000 audit[4763]: AVC avc: denied { bpf } for pid=4763 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:23:57.449000 audit[4763]: AVC avc: denied { bpf } for pid=4763 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:23:57.449000 audit[4763]: AVC avc: denied { perfmon } for pid=4763 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:23:57.449000 audit[4763]: AVC avc: denied { bpf } for pid=4763 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 
01:23:57.449000 audit[4763]: AVC avc: denied { perfmon } for pid=4763 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:23:57.449000 audit[4763]: AVC avc: denied { perfmon } for pid=4763 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:23:57.449000 audit[4763]: AVC avc: denied { perfmon } for pid=4763 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:23:57.449000 audit[4763]: AVC avc: denied { perfmon } for pid=4763 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:23:57.449000 audit[4763]: AVC avc: denied { perfmon } for pid=4763 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:23:57.449000 audit[4763]: AVC avc: denied { bpf } for pid=4763 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:23:57.449000 audit[4763]: AVC avc: denied { confidentiality } for pid=4763 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Aug 13 01:23:57.449000 audit[4763]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=5 a1=7fffbf133270 a2=94 a3=88 items=0 ppid=4704 pid=4763 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:23:57.449000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Aug 13 01:23:57.449000 audit[4763]: AVC avc: denied { bpf } for pid=4763 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:23:57.449000 audit[4763]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=7fffbf134ca0 a2=10 a3=f8f00800 items=0 ppid=4704 pid=4763 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:23:57.449000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Aug 13 01:23:57.449000 audit[4763]: AVC avc: denied { bpf } for pid=4763 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:23:57.449000 audit[4763]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=7fffbf134b40 a2=10 a3=3 items=0 ppid=4704 pid=4763 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:23:57.449000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Aug 13 01:23:57.449000 audit[4763]: AVC avc: denied { bpf } for pid=4763 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:23:57.449000 audit[4763]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=7fffbf134ae0 a2=10 a3=3 items=0 ppid=4704 pid=4763 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:23:57.449000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Aug 13 01:23:57.449000 audit[4763]: AVC avc: denied { bpf } for pid=4763 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Aug 13 01:23:57.449000 audit[4763]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=7fffbf134ae0 a2=10 a3=7 items=0 ppid=4704 pid=4763 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:23:57.449000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Aug 13 01:23:57.454000 audit: BPF prog-id=26 op=UNLOAD Aug 13 01:23:57.670000 audit[4825]: NETFILTER_CFG table=mangle:111 family=2 entries=16 op=nft_register_chain pid=4825 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Aug 13 01:23:57.670000 audit[4825]: SYSCALL arch=c000003e syscall=46 success=yes exit=6868 a0=3 a1=7ffcb8875350 a2=0 a3=7ffcb887533c items=0 ppid=4704 pid=4825 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:23:57.670000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Aug 13 01:23:57.694000 audit[4834]: NETFILTER_CFG table=nat:112 family=2 entries=15 op=nft_register_chain pid=4834 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Aug 13 01:23:57.694000 audit[4834]: SYSCALL arch=c000003e syscall=46 success=yes exit=5084 a0=3 a1=7fff82743e70 a2=0 a3=7fff82743e5c items=0 ppid=4704 pid=4834 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:23:57.694000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Aug 13 01:23:57.700000 audit[4824]: NETFILTER_CFG table=raw:113 family=2 entries=21 op=nft_register_chain pid=4824 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Aug 13 01:23:57.700000 audit[4824]: SYSCALL arch=c000003e syscall=46 success=yes exit=8452 a0=3 a1=7ffe8acf6130 a2=0 a3=7ffe8acf611c 
items=0 ppid=4704 pid=4824 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:23:57.700000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Aug 13 01:23:57.709000 audit[4830]: NETFILTER_CFG table=filter:114 family=2 entries=327 op=nft_register_chain pid=4830 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Aug 13 01:23:57.709000 audit[4830]: SYSCALL arch=c000003e syscall=46 success=yes exit=193472 a0=3 a1=7ffea07f1ea0 a2=0 a3=7ffea07f1e8c items=0 ppid=4704 pid=4830 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:23:57.709000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Aug 13 01:23:58.704724 env[1382]: time="2025-08-13T01:23:58.704665503Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/apiserver:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 01:23:58.710313 env[1382]: time="2025-08-13T01:23:58.707112390Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 01:23:58.710313 env[1382]: time="2025-08-13T01:23:58.707972651Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/apiserver:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 01:23:58.710313 env[1382]: time="2025-08-13T01:23:58.709586522Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 01:23:58.710313 env[1382]: time="2025-08-13T01:23:58.709963404Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\"" Aug 13 01:23:58.738532 env[1382]: time="2025-08-13T01:23:58.738510486Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\"" Aug 13 01:23:58.768323 env[1382]: time="2025-08-13T01:23:58.768247948Z" level=info msg="CreateContainer within sandbox \"aaaae2f715e7d88ded37704e1221d6dd681a67c9ba39a1958007e8c7167d82d3\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Aug 13 01:23:58.780458 env[1382]: time="2025-08-13T01:23:58.780388139Z" level=info msg="CreateContainer within sandbox \"aaaae2f715e7d88ded37704e1221d6dd681a67c9ba39a1958007e8c7167d82d3\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"368c9a03364d6ba6939a5149cf357514683325c718d834fe4673acddd42c875f\"" Aug 13 01:23:58.782021 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3069567615.mount: Deactivated successfully. 
Aug 13 01:23:58.789145 env[1382]: time="2025-08-13T01:23:58.788444431Z" level=info msg="StartContainer for \"368c9a03364d6ba6939a5149cf357514683325c718d834fe4673acddd42c875f\"" Aug 13 01:23:58.859382 env[1382]: time="2025-08-13T01:23:58.859353371Z" level=info msg="StartContainer for \"368c9a03364d6ba6939a5149cf357514683325c718d834fe4673acddd42c875f\" returns successfully" Aug 13 01:23:58.883122 systemd-networkd[1112]: vxlan.calico: Gained IPv6LL Aug 13 01:23:59.689736 kubelet[2284]: I0813 01:23:59.667071 2284 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-67b95cb99-9ggd9" podStartSLOduration=28.223078356 podStartE2EDuration="36.639145037s" podCreationTimestamp="2025-08-13 01:23:23 +0000 UTC" firstStartedPulling="2025-08-13 01:23:50.31206516 +0000 UTC m=+41.597286019" lastFinishedPulling="2025-08-13 01:23:58.728131839 +0000 UTC m=+50.013352700" observedRunningTime="2025-08-13 01:23:59.63600701 +0000 UTC m=+50.921227878" watchObservedRunningTime="2025-08-13 01:23:59.639145037 +0000 UTC m=+50.924365904" Aug 13 01:23:59.715180 kubelet[2284]: I0813 01:23:59.690108 2284 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-5f9975b665-sq7zc" podStartSLOduration=5.112944541 podStartE2EDuration="12.690093023s" podCreationTimestamp="2025-08-13 01:23:47 +0000 UTC" firstStartedPulling="2025-08-13 01:23:47.878551136 +0000 UTC m=+39.163771995" lastFinishedPulling="2025-08-13 01:23:55.455699616 +0000 UTC m=+46.740920477" observedRunningTime="2025-08-13 01:23:56.524529216 +0000 UTC m=+47.809750083" watchObservedRunningTime="2025-08-13 01:23:59.690093023 +0000 UTC m=+50.975313892" Aug 13 01:23:59.774042 systemd[1]: run-containerd-runc-k8s.io-368c9a03364d6ba6939a5149cf357514683325c718d834fe4673acddd42c875f-runc.FEZy2A.mount: Deactivated successfully. Aug 13 01:23:59.824000 audit[4899]: NETFILTER_CFG table=filter:115 family=2 entries=12 op=nft_register_rule pid=4899 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 01:23:59.824000 audit[4899]: SYSCALL arch=c000003e syscall=46 success=yes exit=4504 a0=3 a1=7ffdd33eea70 a2=0 a3=7ffdd33eea5c items=0 ppid=2387 pid=4899 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:23:59.824000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 01:23:59.830000 audit[4899]: NETFILTER_CFG table=nat:116 family=2 entries=22 op=nft_register_rule pid=4899 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 01:23:59.830000 audit[4899]: SYSCALL arch=c000003e syscall=46 success=yes exit=6540 a0=3 a1=7ffdd33eea70 a2=0 a3=7ffdd33eea5c items=0 ppid=2387 pid=4899 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:23:59.830000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 01:24:00.630902 kubelet[2284]: I0813 01:24:00.630882 2284 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Aug 13 01:24:01.423637 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3531503736.mount: Deactivated successfully. 
Aug 13 01:24:02.191366 env[1382]: time="2025-08-13T01:24:02.191329171Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/goldmane:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 01:24:02.195330 env[1382]: time="2025-08-13T01:24:02.192776918Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:dc4ea8b409b85d2f118bb4677ad3d34b57e7b01d488c9f019f7073bb58b2162b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 01:24:02.195330 env[1382]: time="2025-08-13T01:24:02.193962085Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/goldmane:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 01:24:02.195330 env[1382]: time="2025-08-13T01:24:02.195111462Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 01:24:02.197278 env[1382]: time="2025-08-13T01:24:02.195560845Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" returns image reference \"sha256:dc4ea8b409b85d2f118bb4677ad3d34b57e7b01d488c9f019f7073bb58b2162b\"" Aug 13 01:24:02.217927 env[1382]: time="2025-08-13T01:24:02.217895665Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Aug 13 01:24:02.296119 env[1382]: time="2025-08-13T01:24:02.296086030Z" level=info msg="CreateContainer within sandbox \"4f324e95bb0e9fe644a13a4e917416a1c730ac1aa16adecb370f1c08a9ef202b\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Aug 13 01:24:02.306192 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1029674687.mount: Deactivated successfully. 
Aug 13 01:24:02.311468 env[1382]: time="2025-08-13T01:24:02.311411317Z" level=info msg="CreateContainer within sandbox \"4f324e95bb0e9fe644a13a4e917416a1c730ac1aa16adecb370f1c08a9ef202b\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"332af55ca467e9a4d872eb9cf8c615c87c81d0c68cd5256c15dcbe20daf3a3fc\"" Aug 13 01:24:02.336817 env[1382]: time="2025-08-13T01:24:02.336619762Z" level=info msg="StartContainer for \"332af55ca467e9a4d872eb9cf8c615c87c81d0c68cd5256c15dcbe20daf3a3fc\"" Aug 13 01:24:02.413924 env[1382]: time="2025-08-13T01:24:02.413898448Z" level=info msg="StartContainer for \"332af55ca467e9a4d872eb9cf8c615c87c81d0c68cd5256c15dcbe20daf3a3fc\" returns successfully" Aug 13 01:24:02.644839 env[1382]: time="2025-08-13T01:24:02.644810085Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/apiserver:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 01:24:02.646325 env[1382]: time="2025-08-13T01:24:02.645999691Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 01:24:02.649642 env[1382]: time="2025-08-13T01:24:02.649382244Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/apiserver:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 01:24:02.650172 env[1382]: time="2025-08-13T01:24:02.650154944Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 01:24:02.652049 env[1382]: time="2025-08-13T01:24:02.652028497Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\"" Aug 13 01:24:02.654783 env[1382]: time="2025-08-13T01:24:02.654764293Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\"" Aug 13 01:24:02.656099 env[1382]: time="2025-08-13T01:24:02.656074303Z" level=info msg="CreateContainer within sandbox \"f0295e112e3db2f8f19707b32cce18b5dd4fc785305ae232d584fa97223b8a49\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Aug 13 01:24:02.675359 env[1382]: time="2025-08-13T01:24:02.675333884Z" level=info msg="CreateContainer within sandbox \"f0295e112e3db2f8f19707b32cce18b5dd4fc785305ae232d584fa97223b8a49\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"fb82888147528189845c223aa4b0a9d18283833144a53125ffa7785792d47336\"" Aug 13 01:24:02.677381 env[1382]: time="2025-08-13T01:24:02.676551475Z" level=info msg="StartContainer for \"fb82888147528189845c223aa4b0a9d18283833144a53125ffa7785792d47336\"" Aug 13 01:24:02.758819 env[1382]: time="2025-08-13T01:24:02.758643982Z" level=info msg="StartContainer for \"fb82888147528189845c223aa4b0a9d18283833144a53125ffa7785792d47336\" returns successfully" Aug 13 01:24:02.986069 kubelet[2284]: I0813 01:24:02.953822 2284 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-58fd7646b9-d66sc" podStartSLOduration=25.932854448 podStartE2EDuration="36.925674083s" podCreationTimestamp="2025-08-13 01:23:26 +0000 UTC" firstStartedPulling="2025-08-13 01:23:51.211939425 +0000 UTC m=+42.497160281" lastFinishedPulling="2025-08-13 
01:24:02.204759051 +0000 UTC m=+53.489979916" observedRunningTime="2025-08-13 01:24:02.844071978 +0000 UTC m=+54.129292846" watchObservedRunningTime="2025-08-13 01:24:02.925674083 +0000 UTC m=+54.210894944" Aug 13 01:24:03.207000 audit[4997]: NETFILTER_CFG table=filter:117 family=2 entries=12 op=nft_register_rule pid=4997 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 01:24:03.258132 kernel: kauditd_printk_skb: 536 callbacks suppressed Aug 13 01:24:03.266006 kernel: audit: type=1325 audit(1755048243.207:421): table=filter:117 family=2 entries=12 op=nft_register_rule pid=4997 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 01:24:03.267677 kernel: audit: type=1300 audit(1755048243.207:421): arch=c000003e syscall=46 success=yes exit=4504 a0=3 a1=7ffdff692e20 a2=0 a3=7ffdff692e0c items=0 ppid=2387 pid=4997 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:24:03.267726 kernel: audit: type=1327 audit(1755048243.207:421): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 01:24:03.268417 kernel: audit: type=1325 audit(1755048243.220:422): table=nat:118 family=2 entries=22 op=nft_register_rule pid=4997 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 01:24:03.268448 kernel: audit: type=1300 audit(1755048243.220:422): arch=c000003e syscall=46 success=yes exit=6540 a0=3 a1=7ffdff692e20 a2=0 a3=7ffdff692e0c items=0 ppid=2387 pid=4997 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:24:03.268471 kernel: audit: type=1327 audit(1755048243.220:422): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 01:24:03.207000 audit[4997]: SYSCALL arch=c000003e syscall=46 success=yes exit=4504 a0=3 a1=7ffdff692e20 a2=0 a3=7ffdff692e0c items=0 ppid=2387 pid=4997 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:24:03.207000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 01:24:03.220000 audit[4997]: NETFILTER_CFG table=nat:118 family=2 entries=22 op=nft_register_rule pid=4997 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 01:24:03.220000 audit[4997]: SYSCALL arch=c000003e syscall=46 success=yes exit=6540 a0=3 a1=7ffdff692e20 a2=0 a3=7ffdff692e0c items=0 ppid=2387 pid=4997 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:24:03.220000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 01:24:03.693889 systemd[1]: run-containerd-runc-k8s.io-332af55ca467e9a4d872eb9cf8c615c87c81d0c68cd5256c15dcbe20daf3a3fc-runc.HFDywy.mount: Deactivated successfully. 
Aug 13 01:24:03.730596 kernel: audit: type=1325 audit(1755048243.718:423): table=filter:119 family=2 entries=12 op=nft_register_rule pid=5019 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 01:24:03.731330 kernel: audit: type=1300 audit(1755048243.718:423): arch=c000003e syscall=46 success=yes exit=4504 a0=3 a1=7ffc1e038a10 a2=0 a3=7ffc1e0389fc items=0 ppid=2387 pid=5019 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:24:03.731359 kernel: audit: type=1327 audit(1755048243.718:423): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 01:24:03.733732 kernel: audit: type=1325 audit(1755048243.730:424): table=nat:120 family=2 entries=22 op=nft_register_rule pid=5019 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 01:24:03.718000 audit[5019]: NETFILTER_CFG table=filter:119 family=2 entries=12 op=nft_register_rule pid=5019 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 01:24:03.718000 audit[5019]: SYSCALL arch=c000003e syscall=46 success=yes exit=4504 a0=3 a1=7ffc1e038a10 a2=0 a3=7ffc1e0389fc items=0 ppid=2387 pid=5019 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:24:03.718000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 01:24:03.730000 audit[5019]: NETFILTER_CFG table=nat:120 family=2 entries=22 op=nft_register_rule pid=5019 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 01:24:03.730000 audit[5019]: SYSCALL arch=c000003e syscall=46 success=yes exit=6540 a0=3 a1=7ffc1e038a10 a2=0 a3=7ffc1e0389fc items=0 ppid=2387 pid=5019 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:24:03.730000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 01:24:03.740093 kubelet[2284]: I0813 01:24:03.704232 2284 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-67b95cb99-jl8kx" podStartSLOduration=30.444580855 podStartE2EDuration="40.704215142s" podCreationTimestamp="2025-08-13 01:23:23 +0000 UTC" firstStartedPulling="2025-08-13 01:23:52.393467113 +0000 UTC m=+43.678687968" lastFinishedPulling="2025-08-13 01:24:02.65310139 +0000 UTC m=+53.938322255" observedRunningTime="2025-08-13 01:24:03.685318989 +0000 UTC m=+54.970539858" watchObservedRunningTime="2025-08-13 01:24:03.704215142 +0000 UTC m=+54.989436005" Aug 13 01:24:04.201191 env[1382]: time="2025-08-13T01:24:04.201151097Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/csi:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 01:24:04.206227 env[1382]: time="2025-08-13T01:24:04.205047045Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 01:24:04.206290 env[1382]: 
time="2025-08-13T01:24:04.206274799Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/csi:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 01:24:04.207738 env[1382]: time="2025-08-13T01:24:04.207720564Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 01:24:04.208107 env[1382]: time="2025-08-13T01:24:04.208075799Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\" returns image reference \"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\"" Aug 13 01:24:04.228238 env[1382]: time="2025-08-13T01:24:04.228209552Z" level=info msg="CreateContainer within sandbox \"5832d21db40b0c94258e2dae2d17b911195105916d23b98f620520f0e435928e\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Aug 13 01:24:04.237950 env[1382]: time="2025-08-13T01:24:04.237923400Z" level=info msg="CreateContainer within sandbox \"5832d21db40b0c94258e2dae2d17b911195105916d23b98f620520f0e435928e\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"ecefe3777b128242e015abfba227bc82a8d8e6782ae0883b0a4bd5387763bac2\"" Aug 13 01:24:04.238945 env[1382]: time="2025-08-13T01:24:04.238920482Z" level=info msg="StartContainer for \"ecefe3777b128242e015abfba227bc82a8d8e6782ae0883b0a4bd5387763bac2\"" Aug 13 01:24:04.297355 env[1382]: time="2025-08-13T01:24:04.295207593Z" level=info msg="StartContainer for \"ecefe3777b128242e015abfba227bc82a8d8e6782ae0883b0a4bd5387763bac2\" returns successfully" Aug 13 01:24:04.303555 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount703486728.mount: Deactivated successfully. Aug 13 01:24:04.305367 env[1382]: time="2025-08-13T01:24:04.305350639Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\"" Aug 13 01:24:04.703998 systemd[1]: run-containerd-runc-k8s.io-332af55ca467e9a4d872eb9cf8c615c87c81d0c68cd5256c15dcbe20daf3a3fc-runc.NTUCnA.mount: Deactivated successfully. 
Aug 13 01:24:04.802000 audit[5080]: NETFILTER_CFG table=filter:121 family=2 entries=11 op=nft_register_rule pid=5080 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 01:24:04.802000 audit[5080]: SYSCALL arch=c000003e syscall=46 success=yes exit=3760 a0=3 a1=7fff6f182f90 a2=0 a3=7fff6f182f7c items=0 ppid=2387 pid=5080 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:24:04.802000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 01:24:04.805000 audit[5080]: NETFILTER_CFG table=nat:122 family=2 entries=29 op=nft_register_chain pid=5080 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 01:24:04.805000 audit[5080]: SYSCALL arch=c000003e syscall=46 success=yes exit=10116 a0=3 a1=7fff6f182f90 a2=0 a3=7fff6f182f7c items=0 ppid=2387 pid=5080 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:24:04.805000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 01:24:06.977484 env[1382]: time="2025-08-13T01:24:06.977449434Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 01:24:07.012130 env[1382]: time="2025-08-13T01:24:06.997865139Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 01:24:07.012130 env[1382]: time="2025-08-13T01:24:07.003737029Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 01:24:07.012130 env[1382]: time="2025-08-13T01:24:07.007386044Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Aug 13 01:24:07.012130 env[1382]: time="2025-08-13T01:24:07.007819644Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" returns image reference \"sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866\"" Aug 13 01:24:07.044997 env[1382]: time="2025-08-13T01:24:07.044967411Z" level=info msg="CreateContainer within sandbox \"5832d21db40b0c94258e2dae2d17b911195105916d23b98f620520f0e435928e\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Aug 13 01:24:07.054770 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3072128766.mount: Deactivated successfully. 
Aug 13 01:24:07.060869 env[1382]: time="2025-08-13T01:24:07.060849809Z" level=info msg="CreateContainer within sandbox \"5832d21db40b0c94258e2dae2d17b911195105916d23b98f620520f0e435928e\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"e590c9700826c520e85fa0d7fae0e4f5d58aaa4ca1c1d3e961437902eccf23bd\"" Aug 13 01:24:07.061260 env[1382]: time="2025-08-13T01:24:07.061179983Z" level=info msg="StartContainer for \"e590c9700826c520e85fa0d7fae0e4f5d58aaa4ca1c1d3e961437902eccf23bd\"" Aug 13 01:24:07.097444 systemd[1]: run-containerd-runc-k8s.io-e590c9700826c520e85fa0d7fae0e4f5d58aaa4ca1c1d3e961437902eccf23bd-runc.NHaoTN.mount: Deactivated successfully. Aug 13 01:24:07.131380 env[1382]: time="2025-08-13T01:24:07.131342378Z" level=info msg="StartContainer for \"e590c9700826c520e85fa0d7fae0e4f5d58aaa4ca1c1d3e961437902eccf23bd\" returns successfully" Aug 13 01:24:07.896721 kubelet[2284]: I0813 01:24:07.886714 2284 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-7kfxn" podStartSLOduration=28.68310786 podStartE2EDuration="41.87695493s" podCreationTimestamp="2025-08-13 01:23:26 +0000 UTC" firstStartedPulling="2025-08-13 01:23:53.82564062 +0000 UTC m=+45.110861475" lastFinishedPulling="2025-08-13 01:24:07.019487681 +0000 UTC m=+58.304708545" observedRunningTime="2025-08-13 01:24:07.845024119 +0000 UTC m=+59.130244985" watchObservedRunningTime="2025-08-13 01:24:07.87695493 +0000 UTC m=+59.162175791" Aug 13 01:24:08.248674 kubelet[2284]: I0813 01:24:08.246262 2284 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Aug 13 01:24:08.252877 kubelet[2284]: I0813 01:24:08.252866 2284 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Aug 13 01:24:08.953743 env[1382]: time="2025-08-13T01:24:08.953712244Z" level=info msg="StopPodSandbox for \"0230e598b99dff8ef34ad8ea74efd4bf7a1290c404420b6cd14bb34d6634b7b5\"" Aug 13 01:24:09.645193 env[1382]: 2025-08-13 01:24:09.308 [WARNING][5134] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="0230e598b99dff8ef34ad8ea74efd4bf7a1290c404420b6cd14bb34d6634b7b5" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--67b95cb99--jl8kx-eth0", GenerateName:"calico-apiserver-67b95cb99-", Namespace:"calico-apiserver", SelfLink:"", UID:"a621f640-c17c-4f08-9759-f53f10bbc599", ResourceVersion:"1039", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 1, 23, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"67b95cb99", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"f0295e112e3db2f8f19707b32cce18b5dd4fc785305ae232d584fa97223b8a49", Pod:"calico-apiserver-67b95cb99-jl8kx", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calia51339ea172", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 01:24:09.645193 env[1382]: 2025-08-13 01:24:09.311 [INFO][5134] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="0230e598b99dff8ef34ad8ea74efd4bf7a1290c404420b6cd14bb34d6634b7b5" Aug 13 01:24:09.645193 env[1382]: 2025-08-13 01:24:09.311 [INFO][5134] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="0230e598b99dff8ef34ad8ea74efd4bf7a1290c404420b6cd14bb34d6634b7b5" iface="eth0" netns="" Aug 13 01:24:09.645193 env[1382]: 2025-08-13 01:24:09.311 [INFO][5134] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="0230e598b99dff8ef34ad8ea74efd4bf7a1290c404420b6cd14bb34d6634b7b5" Aug 13 01:24:09.645193 env[1382]: 2025-08-13 01:24:09.311 [INFO][5134] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0230e598b99dff8ef34ad8ea74efd4bf7a1290c404420b6cd14bb34d6634b7b5" Aug 13 01:24:09.645193 env[1382]: 2025-08-13 01:24:09.607 [INFO][5143] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="0230e598b99dff8ef34ad8ea74efd4bf7a1290c404420b6cd14bb34d6634b7b5" HandleID="k8s-pod-network.0230e598b99dff8ef34ad8ea74efd4bf7a1290c404420b6cd14bb34d6634b7b5" Workload="localhost-k8s-calico--apiserver--67b95cb99--jl8kx-eth0" Aug 13 01:24:09.645193 env[1382]: 2025-08-13 01:24:09.611 [INFO][5143] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 01:24:09.645193 env[1382]: 2025-08-13 01:24:09.612 [INFO][5143] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 01:24:09.645193 env[1382]: 2025-08-13 01:24:09.636 [WARNING][5143] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0230e598b99dff8ef34ad8ea74efd4bf7a1290c404420b6cd14bb34d6634b7b5" HandleID="k8s-pod-network.0230e598b99dff8ef34ad8ea74efd4bf7a1290c404420b6cd14bb34d6634b7b5" Workload="localhost-k8s-calico--apiserver--67b95cb99--jl8kx-eth0" Aug 13 01:24:09.645193 env[1382]: 2025-08-13 01:24:09.636 [INFO][5143] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="0230e598b99dff8ef34ad8ea74efd4bf7a1290c404420b6cd14bb34d6634b7b5" HandleID="k8s-pod-network.0230e598b99dff8ef34ad8ea74efd4bf7a1290c404420b6cd14bb34d6634b7b5" Workload="localhost-k8s-calico--apiserver--67b95cb99--jl8kx-eth0" Aug 13 01:24:09.645193 env[1382]: 2025-08-13 01:24:09.636 [INFO][5143] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 01:24:09.645193 env[1382]: 2025-08-13 01:24:09.642 [INFO][5134] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="0230e598b99dff8ef34ad8ea74efd4bf7a1290c404420b6cd14bb34d6634b7b5" Aug 13 01:24:09.665811 env[1382]: time="2025-08-13T01:24:09.645216796Z" level=info msg="TearDown network for sandbox \"0230e598b99dff8ef34ad8ea74efd4bf7a1290c404420b6cd14bb34d6634b7b5\" successfully" Aug 13 01:24:09.665811 env[1382]: time="2025-08-13T01:24:09.645255082Z" level=info msg="StopPodSandbox for \"0230e598b99dff8ef34ad8ea74efd4bf7a1290c404420b6cd14bb34d6634b7b5\" returns successfully" Aug 13 01:24:09.684606 env[1382]: time="2025-08-13T01:24:09.684577270Z" level=info msg="RemovePodSandbox for \"0230e598b99dff8ef34ad8ea74efd4bf7a1290c404420b6cd14bb34d6634b7b5\"" Aug 13 01:24:09.684869 env[1382]: time="2025-08-13T01:24:09.684842126Z" level=info msg="Forcibly stopping sandbox \"0230e598b99dff8ef34ad8ea74efd4bf7a1290c404420b6cd14bb34d6634b7b5\"" Aug 13 01:24:09.972022 env[1382]: 2025-08-13 01:24:09.793 [WARNING][5173] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="0230e598b99dff8ef34ad8ea74efd4bf7a1290c404420b6cd14bb34d6634b7b5" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--67b95cb99--jl8kx-eth0", GenerateName:"calico-apiserver-67b95cb99-", Namespace:"calico-apiserver", SelfLink:"", UID:"a621f640-c17c-4f08-9759-f53f10bbc599", ResourceVersion:"1039", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 1, 23, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"67b95cb99", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"f0295e112e3db2f8f19707b32cce18b5dd4fc785305ae232d584fa97223b8a49", Pod:"calico-apiserver-67b95cb99-jl8kx", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calia51339ea172", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 01:24:09.972022 env[1382]: 2025-08-13 01:24:09.793 [INFO][5173] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="0230e598b99dff8ef34ad8ea74efd4bf7a1290c404420b6cd14bb34d6634b7b5" Aug 13 01:24:09.972022 env[1382]: 2025-08-13 01:24:09.793 [INFO][5173] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="0230e598b99dff8ef34ad8ea74efd4bf7a1290c404420b6cd14bb34d6634b7b5" iface="eth0" netns="" Aug 13 01:24:09.972022 env[1382]: 2025-08-13 01:24:09.793 [INFO][5173] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="0230e598b99dff8ef34ad8ea74efd4bf7a1290c404420b6cd14bb34d6634b7b5" Aug 13 01:24:09.972022 env[1382]: 2025-08-13 01:24:09.793 [INFO][5173] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0230e598b99dff8ef34ad8ea74efd4bf7a1290c404420b6cd14bb34d6634b7b5" Aug 13 01:24:09.972022 env[1382]: 2025-08-13 01:24:09.958 [INFO][5184] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="0230e598b99dff8ef34ad8ea74efd4bf7a1290c404420b6cd14bb34d6634b7b5" HandleID="k8s-pod-network.0230e598b99dff8ef34ad8ea74efd4bf7a1290c404420b6cd14bb34d6634b7b5" Workload="localhost-k8s-calico--apiserver--67b95cb99--jl8kx-eth0" Aug 13 01:24:09.972022 env[1382]: 2025-08-13 01:24:09.959 [INFO][5184] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 01:24:09.972022 env[1382]: 2025-08-13 01:24:09.959 [INFO][5184] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 01:24:09.972022 env[1382]: 2025-08-13 01:24:09.967 [WARNING][5184] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0230e598b99dff8ef34ad8ea74efd4bf7a1290c404420b6cd14bb34d6634b7b5" HandleID="k8s-pod-network.0230e598b99dff8ef34ad8ea74efd4bf7a1290c404420b6cd14bb34d6634b7b5" Workload="localhost-k8s-calico--apiserver--67b95cb99--jl8kx-eth0" Aug 13 01:24:09.972022 env[1382]: 2025-08-13 01:24:09.967 [INFO][5184] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="0230e598b99dff8ef34ad8ea74efd4bf7a1290c404420b6cd14bb34d6634b7b5" HandleID="k8s-pod-network.0230e598b99dff8ef34ad8ea74efd4bf7a1290c404420b6cd14bb34d6634b7b5" Workload="localhost-k8s-calico--apiserver--67b95cb99--jl8kx-eth0" Aug 13 01:24:09.972022 env[1382]: 2025-08-13 01:24:09.968 [INFO][5184] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 01:24:09.972022 env[1382]: 2025-08-13 01:24:09.969 [INFO][5173] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="0230e598b99dff8ef34ad8ea74efd4bf7a1290c404420b6cd14bb34d6634b7b5" Aug 13 01:24:09.972022 env[1382]: time="2025-08-13T01:24:09.971263280Z" level=info msg="TearDown network for sandbox \"0230e598b99dff8ef34ad8ea74efd4bf7a1290c404420b6cd14bb34d6634b7b5\" successfully" Aug 13 01:24:09.982950 env[1382]: time="2025-08-13T01:24:09.982929905Z" level=info msg="RemovePodSandbox \"0230e598b99dff8ef34ad8ea74efd4bf7a1290c404420b6cd14bb34d6634b7b5\" returns successfully" Aug 13 01:24:09.989735 env[1382]: time="2025-08-13T01:24:09.989717975Z" level=info msg="StopPodSandbox for \"eb6facd1ea36e49bed274d2a1b8af0c7943bd47c3d9bb2b8764f4f035897e36c\"" Aug 13 01:24:10.061180 env[1382]: 2025-08-13 01:24:10.023 [WARNING][5198] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="eb6facd1ea36e49bed274d2a1b8af0c7943bd47c3d9bb2b8764f4f035897e36c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--58fd7646b9--d66sc-eth0", GenerateName:"goldmane-58fd7646b9-", Namespace:"calico-system", SelfLink:"", UID:"12930a0e-5b90-4155-97e2-3a62414b20c0", ResourceVersion:"1024", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 1, 23, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"58fd7646b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"4f324e95bb0e9fe644a13a4e917416a1c730ac1aa16adecb370f1c08a9ef202b", Pod:"goldmane-58fd7646b9-d66sc", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calia936fb4ff83", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 01:24:10.061180 env[1382]: 2025-08-13 01:24:10.023 [INFO][5198] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="eb6facd1ea36e49bed274d2a1b8af0c7943bd47c3d9bb2b8764f4f035897e36c" Aug 13 01:24:10.061180 env[1382]: 2025-08-13 01:24:10.023 [INFO][5198] cni-plugin/dataplane_linux.go 555: CleanUpNamespace 
called with no netns name, ignoring. ContainerID="eb6facd1ea36e49bed274d2a1b8af0c7943bd47c3d9bb2b8764f4f035897e36c" iface="eth0" netns="" Aug 13 01:24:10.061180 env[1382]: 2025-08-13 01:24:10.023 [INFO][5198] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="eb6facd1ea36e49bed274d2a1b8af0c7943bd47c3d9bb2b8764f4f035897e36c" Aug 13 01:24:10.061180 env[1382]: 2025-08-13 01:24:10.023 [INFO][5198] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="eb6facd1ea36e49bed274d2a1b8af0c7943bd47c3d9bb2b8764f4f035897e36c" Aug 13 01:24:10.061180 env[1382]: 2025-08-13 01:24:10.050 [INFO][5205] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="eb6facd1ea36e49bed274d2a1b8af0c7943bd47c3d9bb2b8764f4f035897e36c" HandleID="k8s-pod-network.eb6facd1ea36e49bed274d2a1b8af0c7943bd47c3d9bb2b8764f4f035897e36c" Workload="localhost-k8s-goldmane--58fd7646b9--d66sc-eth0" Aug 13 01:24:10.061180 env[1382]: 2025-08-13 01:24:10.050 [INFO][5205] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 01:24:10.061180 env[1382]: 2025-08-13 01:24:10.050 [INFO][5205] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 01:24:10.061180 env[1382]: 2025-08-13 01:24:10.057 [WARNING][5205] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="eb6facd1ea36e49bed274d2a1b8af0c7943bd47c3d9bb2b8764f4f035897e36c" HandleID="k8s-pod-network.eb6facd1ea36e49bed274d2a1b8af0c7943bd47c3d9bb2b8764f4f035897e36c" Workload="localhost-k8s-goldmane--58fd7646b9--d66sc-eth0" Aug 13 01:24:10.061180 env[1382]: 2025-08-13 01:24:10.057 [INFO][5205] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="eb6facd1ea36e49bed274d2a1b8af0c7943bd47c3d9bb2b8764f4f035897e36c" HandleID="k8s-pod-network.eb6facd1ea36e49bed274d2a1b8af0c7943bd47c3d9bb2b8764f4f035897e36c" Workload="localhost-k8s-goldmane--58fd7646b9--d66sc-eth0" Aug 13 01:24:10.061180 env[1382]: 2025-08-13 01:24:10.058 [INFO][5205] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 01:24:10.061180 env[1382]: 2025-08-13 01:24:10.059 [INFO][5198] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="eb6facd1ea36e49bed274d2a1b8af0c7943bd47c3d9bb2b8764f4f035897e36c" Aug 13 01:24:10.063953 env[1382]: time="2025-08-13T01:24:10.061367432Z" level=info msg="TearDown network for sandbox \"eb6facd1ea36e49bed274d2a1b8af0c7943bd47c3d9bb2b8764f4f035897e36c\" successfully" Aug 13 01:24:10.063953 env[1382]: time="2025-08-13T01:24:10.061388554Z" level=info msg="StopPodSandbox for \"eb6facd1ea36e49bed274d2a1b8af0c7943bd47c3d9bb2b8764f4f035897e36c\" returns successfully" Aug 13 01:24:10.063953 env[1382]: time="2025-08-13T01:24:10.061773256Z" level=info msg="RemovePodSandbox for \"eb6facd1ea36e49bed274d2a1b8af0c7943bd47c3d9bb2b8764f4f035897e36c\"" Aug 13 01:24:10.063953 env[1382]: time="2025-08-13T01:24:10.061790552Z" level=info msg="Forcibly stopping sandbox \"eb6facd1ea36e49bed274d2a1b8af0c7943bd47c3d9bb2b8764f4f035897e36c\"" Aug 13 01:24:10.110514 env[1382]: 2025-08-13 01:24:10.088 [WARNING][5219] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="eb6facd1ea36e49bed274d2a1b8af0c7943bd47c3d9bb2b8764f4f035897e36c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--58fd7646b9--d66sc-eth0", GenerateName:"goldmane-58fd7646b9-", Namespace:"calico-system", SelfLink:"", UID:"12930a0e-5b90-4155-97e2-3a62414b20c0", ResourceVersion:"1024", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 1, 23, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"58fd7646b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"4f324e95bb0e9fe644a13a4e917416a1c730ac1aa16adecb370f1c08a9ef202b", Pod:"goldmane-58fd7646b9-d66sc", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calia936fb4ff83", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 01:24:10.110514 env[1382]: 2025-08-13 01:24:10.088 [INFO][5219] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="eb6facd1ea36e49bed274d2a1b8af0c7943bd47c3d9bb2b8764f4f035897e36c" Aug 13 01:24:10.110514 env[1382]: 2025-08-13 01:24:10.088 [INFO][5219] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="eb6facd1ea36e49bed274d2a1b8af0c7943bd47c3d9bb2b8764f4f035897e36c" iface="eth0" netns="" Aug 13 01:24:10.110514 env[1382]: 2025-08-13 01:24:10.088 [INFO][5219] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="eb6facd1ea36e49bed274d2a1b8af0c7943bd47c3d9bb2b8764f4f035897e36c" Aug 13 01:24:10.110514 env[1382]: 2025-08-13 01:24:10.088 [INFO][5219] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="eb6facd1ea36e49bed274d2a1b8af0c7943bd47c3d9bb2b8764f4f035897e36c" Aug 13 01:24:10.110514 env[1382]: 2025-08-13 01:24:10.103 [INFO][5227] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="eb6facd1ea36e49bed274d2a1b8af0c7943bd47c3d9bb2b8764f4f035897e36c" HandleID="k8s-pod-network.eb6facd1ea36e49bed274d2a1b8af0c7943bd47c3d9bb2b8764f4f035897e36c" Workload="localhost-k8s-goldmane--58fd7646b9--d66sc-eth0" Aug 13 01:24:10.110514 env[1382]: 2025-08-13 01:24:10.103 [INFO][5227] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 01:24:10.110514 env[1382]: 2025-08-13 01:24:10.103 [INFO][5227] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 01:24:10.110514 env[1382]: 2025-08-13 01:24:10.107 [WARNING][5227] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="eb6facd1ea36e49bed274d2a1b8af0c7943bd47c3d9bb2b8764f4f035897e36c" HandleID="k8s-pod-network.eb6facd1ea36e49bed274d2a1b8af0c7943bd47c3d9bb2b8764f4f035897e36c" Workload="localhost-k8s-goldmane--58fd7646b9--d66sc-eth0" Aug 13 01:24:10.110514 env[1382]: 2025-08-13 01:24:10.107 [INFO][5227] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="eb6facd1ea36e49bed274d2a1b8af0c7943bd47c3d9bb2b8764f4f035897e36c" HandleID="k8s-pod-network.eb6facd1ea36e49bed274d2a1b8af0c7943bd47c3d9bb2b8764f4f035897e36c" Workload="localhost-k8s-goldmane--58fd7646b9--d66sc-eth0" Aug 13 01:24:10.110514 env[1382]: 2025-08-13 01:24:10.107 [INFO][5227] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 01:24:10.110514 env[1382]: 2025-08-13 01:24:10.109 [INFO][5219] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="eb6facd1ea36e49bed274d2a1b8af0c7943bd47c3d9bb2b8764f4f035897e36c" Aug 13 01:24:10.112836 env[1382]: time="2025-08-13T01:24:10.110530043Z" level=info msg="TearDown network for sandbox \"eb6facd1ea36e49bed274d2a1b8af0c7943bd47c3d9bb2b8764f4f035897e36c\" successfully" Aug 13 01:24:10.115922 env[1382]: time="2025-08-13T01:24:10.115901109Z" level=info msg="RemovePodSandbox \"eb6facd1ea36e49bed274d2a1b8af0c7943bd47c3d9bb2b8764f4f035897e36c\" returns successfully" Aug 13 01:24:10.116292 env[1382]: time="2025-08-13T01:24:10.116280657Z" level=info msg="StopPodSandbox for \"50dca8a6a1dd25344f1d6346513145e38d8d4ca5c36fcd5592acb126d39e99a8\"" Aug 13 01:24:10.171630 env[1382]: 2025-08-13 01:24:10.146 [WARNING][5242] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="50dca8a6a1dd25344f1d6346513145e38d8d4ca5c36fcd5592acb126d39e99a8" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--65f7979575--9d4tp-eth0", GenerateName:"calico-kube-controllers-65f7979575-", Namespace:"calico-system", SelfLink:"", UID:"4c32448d-bd5d-4dff-a3e1-988c6198e659", ResourceVersion:"959", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 1, 23, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"65f7979575", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"5659649167bb37ef941fc054e308df32359c580bfe6b0b5d04ca7ac03d368ce5", Pod:"calico-kube-controllers-65f7979575-9d4tp", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali213d7d89350", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 01:24:10.171630 env[1382]: 2025-08-13 01:24:10.147 [INFO][5242] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="50dca8a6a1dd25344f1d6346513145e38d8d4ca5c36fcd5592acb126d39e99a8" Aug 13 
01:24:10.171630 env[1382]: 2025-08-13 01:24:10.147 [INFO][5242] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="50dca8a6a1dd25344f1d6346513145e38d8d4ca5c36fcd5592acb126d39e99a8" iface="eth0" netns="" Aug 13 01:24:10.171630 env[1382]: 2025-08-13 01:24:10.147 [INFO][5242] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="50dca8a6a1dd25344f1d6346513145e38d8d4ca5c36fcd5592acb126d39e99a8" Aug 13 01:24:10.171630 env[1382]: 2025-08-13 01:24:10.147 [INFO][5242] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="50dca8a6a1dd25344f1d6346513145e38d8d4ca5c36fcd5592acb126d39e99a8" Aug 13 01:24:10.171630 env[1382]: 2025-08-13 01:24:10.163 [INFO][5249] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="50dca8a6a1dd25344f1d6346513145e38d8d4ca5c36fcd5592acb126d39e99a8" HandleID="k8s-pod-network.50dca8a6a1dd25344f1d6346513145e38d8d4ca5c36fcd5592acb126d39e99a8" Workload="localhost-k8s-calico--kube--controllers--65f7979575--9d4tp-eth0" Aug 13 01:24:10.171630 env[1382]: 2025-08-13 01:24:10.163 [INFO][5249] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 01:24:10.171630 env[1382]: 2025-08-13 01:24:10.163 [INFO][5249] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 01:24:10.171630 env[1382]: 2025-08-13 01:24:10.167 [WARNING][5249] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="50dca8a6a1dd25344f1d6346513145e38d8d4ca5c36fcd5592acb126d39e99a8" HandleID="k8s-pod-network.50dca8a6a1dd25344f1d6346513145e38d8d4ca5c36fcd5592acb126d39e99a8" Workload="localhost-k8s-calico--kube--controllers--65f7979575--9d4tp-eth0" Aug 13 01:24:10.171630 env[1382]: 2025-08-13 01:24:10.167 [INFO][5249] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="50dca8a6a1dd25344f1d6346513145e38d8d4ca5c36fcd5592acb126d39e99a8" HandleID="k8s-pod-network.50dca8a6a1dd25344f1d6346513145e38d8d4ca5c36fcd5592acb126d39e99a8" Workload="localhost-k8s-calico--kube--controllers--65f7979575--9d4tp-eth0" Aug 13 01:24:10.171630 env[1382]: 2025-08-13 01:24:10.168 [INFO][5249] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 01:24:10.171630 env[1382]: 2025-08-13 01:24:10.170 [INFO][5242] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="50dca8a6a1dd25344f1d6346513145e38d8d4ca5c36fcd5592acb126d39e99a8" Aug 13 01:24:10.172636 env[1382]: time="2025-08-13T01:24:10.172016019Z" level=info msg="TearDown network for sandbox \"50dca8a6a1dd25344f1d6346513145e38d8d4ca5c36fcd5592acb126d39e99a8\" successfully" Aug 13 01:24:10.172636 env[1382]: time="2025-08-13T01:24:10.172036470Z" level=info msg="StopPodSandbox for \"50dca8a6a1dd25344f1d6346513145e38d8d4ca5c36fcd5592acb126d39e99a8\" returns successfully" Aug 13 01:24:10.172636 env[1382]: time="2025-08-13T01:24:10.172308492Z" level=info msg="RemovePodSandbox for \"50dca8a6a1dd25344f1d6346513145e38d8d4ca5c36fcd5592acb126d39e99a8\"" Aug 13 01:24:10.172636 env[1382]: time="2025-08-13T01:24:10.172325937Z" level=info msg="Forcibly stopping sandbox \"50dca8a6a1dd25344f1d6346513145e38d8d4ca5c36fcd5592acb126d39e99a8\"" Aug 13 01:24:10.225973 env[1382]: 2025-08-13 01:24:10.195 [WARNING][5264] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="50dca8a6a1dd25344f1d6346513145e38d8d4ca5c36fcd5592acb126d39e99a8" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--65f7979575--9d4tp-eth0", GenerateName:"calico-kube-controllers-65f7979575-", Namespace:"calico-system", SelfLink:"", UID:"4c32448d-bd5d-4dff-a3e1-988c6198e659", ResourceVersion:"959", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 1, 23, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"65f7979575", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"5659649167bb37ef941fc054e308df32359c580bfe6b0b5d04ca7ac03d368ce5", Pod:"calico-kube-controllers-65f7979575-9d4tp", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali213d7d89350", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 01:24:10.225973 env[1382]: 2025-08-13 01:24:10.196 [INFO][5264] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="50dca8a6a1dd25344f1d6346513145e38d8d4ca5c36fcd5592acb126d39e99a8" Aug 13 01:24:10.225973 env[1382]: 2025-08-13 01:24:10.196 [INFO][5264] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="50dca8a6a1dd25344f1d6346513145e38d8d4ca5c36fcd5592acb126d39e99a8" iface="eth0" netns="" Aug 13 01:24:10.225973 env[1382]: 2025-08-13 01:24:10.196 [INFO][5264] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="50dca8a6a1dd25344f1d6346513145e38d8d4ca5c36fcd5592acb126d39e99a8" Aug 13 01:24:10.225973 env[1382]: 2025-08-13 01:24:10.196 [INFO][5264] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="50dca8a6a1dd25344f1d6346513145e38d8d4ca5c36fcd5592acb126d39e99a8" Aug 13 01:24:10.225973 env[1382]: 2025-08-13 01:24:10.217 [INFO][5271] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="50dca8a6a1dd25344f1d6346513145e38d8d4ca5c36fcd5592acb126d39e99a8" HandleID="k8s-pod-network.50dca8a6a1dd25344f1d6346513145e38d8d4ca5c36fcd5592acb126d39e99a8" Workload="localhost-k8s-calico--kube--controllers--65f7979575--9d4tp-eth0" Aug 13 01:24:10.225973 env[1382]: 2025-08-13 01:24:10.217 [INFO][5271] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 01:24:10.225973 env[1382]: 2025-08-13 01:24:10.217 [INFO][5271] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 01:24:10.225973 env[1382]: 2025-08-13 01:24:10.221 [WARNING][5271] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="50dca8a6a1dd25344f1d6346513145e38d8d4ca5c36fcd5592acb126d39e99a8" HandleID="k8s-pod-network.50dca8a6a1dd25344f1d6346513145e38d8d4ca5c36fcd5592acb126d39e99a8" Workload="localhost-k8s-calico--kube--controllers--65f7979575--9d4tp-eth0" Aug 13 01:24:10.225973 env[1382]: 2025-08-13 01:24:10.221 [INFO][5271] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="50dca8a6a1dd25344f1d6346513145e38d8d4ca5c36fcd5592acb126d39e99a8" HandleID="k8s-pod-network.50dca8a6a1dd25344f1d6346513145e38d8d4ca5c36fcd5592acb126d39e99a8" Workload="localhost-k8s-calico--kube--controllers--65f7979575--9d4tp-eth0" Aug 13 01:24:10.225973 env[1382]: 2025-08-13 01:24:10.222 [INFO][5271] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 01:24:10.225973 env[1382]: 2025-08-13 01:24:10.223 [INFO][5264] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="50dca8a6a1dd25344f1d6346513145e38d8d4ca5c36fcd5592acb126d39e99a8" Aug 13 01:24:10.225973 env[1382]: time="2025-08-13T01:24:10.224695223Z" level=info msg="TearDown network for sandbox \"50dca8a6a1dd25344f1d6346513145e38d8d4ca5c36fcd5592acb126d39e99a8\" successfully" Aug 13 01:24:10.231559 env[1382]: time="2025-08-13T01:24:10.231543915Z" level=info msg="RemovePodSandbox \"50dca8a6a1dd25344f1d6346513145e38d8d4ca5c36fcd5592acb126d39e99a8\" returns successfully" Aug 13 01:24:10.231932 env[1382]: time="2025-08-13T01:24:10.231918522Z" level=info msg="StopPodSandbox for \"3e4d21963abef3b06d333673cf344f4d1c10ebd22dfb2f388d29d696d7742f1a\"" Aug 13 01:24:10.289301 env[1382]: 2025-08-13 01:24:10.261 [WARNING][5285] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="3e4d21963abef3b06d333673cf344f4d1c10ebd22dfb2f388d29d696d7742f1a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--67b95cb99--9ggd9-eth0", GenerateName:"calico-apiserver-67b95cb99-", Namespace:"calico-apiserver", SelfLink:"", UID:"eeef8dd3-2401-4787-8ede-caf069e52bbf", ResourceVersion:"1006", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 1, 23, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"67b95cb99", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"aaaae2f715e7d88ded37704e1221d6dd681a67c9ba39a1958007e8c7167d82d3", Pod:"calico-apiserver-67b95cb99-9ggd9", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calidd7e4cd8209", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 01:24:10.289301 env[1382]: 2025-08-13 01:24:10.261 [INFO][5285] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="3e4d21963abef3b06d333673cf344f4d1c10ebd22dfb2f388d29d696d7742f1a" Aug 13 
01:24:10.289301 env[1382]: 2025-08-13 01:24:10.261 [INFO][5285] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="3e4d21963abef3b06d333673cf344f4d1c10ebd22dfb2f388d29d696d7742f1a" iface="eth0" netns="" Aug 13 01:24:10.289301 env[1382]: 2025-08-13 01:24:10.261 [INFO][5285] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="3e4d21963abef3b06d333673cf344f4d1c10ebd22dfb2f388d29d696d7742f1a" Aug 13 01:24:10.289301 env[1382]: 2025-08-13 01:24:10.261 [INFO][5285] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3e4d21963abef3b06d333673cf344f4d1c10ebd22dfb2f388d29d696d7742f1a" Aug 13 01:24:10.289301 env[1382]: 2025-08-13 01:24:10.280 [INFO][5292] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3e4d21963abef3b06d333673cf344f4d1c10ebd22dfb2f388d29d696d7742f1a" HandleID="k8s-pod-network.3e4d21963abef3b06d333673cf344f4d1c10ebd22dfb2f388d29d696d7742f1a" Workload="localhost-k8s-calico--apiserver--67b95cb99--9ggd9-eth0" Aug 13 01:24:10.289301 env[1382]: 2025-08-13 01:24:10.280 [INFO][5292] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 01:24:10.289301 env[1382]: 2025-08-13 01:24:10.280 [INFO][5292] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 01:24:10.289301 env[1382]: 2025-08-13 01:24:10.285 [WARNING][5292] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="3e4d21963abef3b06d333673cf344f4d1c10ebd22dfb2f388d29d696d7742f1a" HandleID="k8s-pod-network.3e4d21963abef3b06d333673cf344f4d1c10ebd22dfb2f388d29d696d7742f1a" Workload="localhost-k8s-calico--apiserver--67b95cb99--9ggd9-eth0" Aug 13 01:24:10.289301 env[1382]: 2025-08-13 01:24:10.285 [INFO][5292] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3e4d21963abef3b06d333673cf344f4d1c10ebd22dfb2f388d29d696d7742f1a" HandleID="k8s-pod-network.3e4d21963abef3b06d333673cf344f4d1c10ebd22dfb2f388d29d696d7742f1a" Workload="localhost-k8s-calico--apiserver--67b95cb99--9ggd9-eth0" Aug 13 01:24:10.289301 env[1382]: 2025-08-13 01:24:10.286 [INFO][5292] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 01:24:10.289301 env[1382]: 2025-08-13 01:24:10.287 [INFO][5285] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="3e4d21963abef3b06d333673cf344f4d1c10ebd22dfb2f388d29d696d7742f1a" Aug 13 01:24:10.290662 env[1382]: time="2025-08-13T01:24:10.289492333Z" level=info msg="TearDown network for sandbox \"3e4d21963abef3b06d333673cf344f4d1c10ebd22dfb2f388d29d696d7742f1a\" successfully" Aug 13 01:24:10.290662 env[1382]: time="2025-08-13T01:24:10.289512724Z" level=info msg="StopPodSandbox for \"3e4d21963abef3b06d333673cf344f4d1c10ebd22dfb2f388d29d696d7742f1a\" returns successfully" Aug 13 01:24:10.299695 env[1382]: time="2025-08-13T01:24:10.299670256Z" level=info msg="RemovePodSandbox for \"3e4d21963abef3b06d333673cf344f4d1c10ebd22dfb2f388d29d696d7742f1a\"" Aug 13 01:24:10.300393 env[1382]: time="2025-08-13T01:24:10.299696845Z" level=info msg="Forcibly stopping sandbox \"3e4d21963abef3b06d333673cf344f4d1c10ebd22dfb2f388d29d696d7742f1a\"" Aug 13 01:24:10.353180 env[1382]: 2025-08-13 01:24:10.322 [WARNING][5308] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="3e4d21963abef3b06d333673cf344f4d1c10ebd22dfb2f388d29d696d7742f1a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--67b95cb99--9ggd9-eth0", GenerateName:"calico-apiserver-67b95cb99-", Namespace:"calico-apiserver", SelfLink:"", UID:"eeef8dd3-2401-4787-8ede-caf069e52bbf", ResourceVersion:"1006", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 1, 23, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"67b95cb99", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"aaaae2f715e7d88ded37704e1221d6dd681a67c9ba39a1958007e8c7167d82d3", Pod:"calico-apiserver-67b95cb99-9ggd9", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calidd7e4cd8209", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 01:24:10.353180 env[1382]: 2025-08-13 01:24:10.322 [INFO][5308] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="3e4d21963abef3b06d333673cf344f4d1c10ebd22dfb2f388d29d696d7742f1a" Aug 13 01:24:10.353180 env[1382]: 2025-08-13 01:24:10.322 [INFO][5308] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="3e4d21963abef3b06d333673cf344f4d1c10ebd22dfb2f388d29d696d7742f1a" iface="eth0" netns="" Aug 13 01:24:10.353180 env[1382]: 2025-08-13 01:24:10.322 [INFO][5308] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="3e4d21963abef3b06d333673cf344f4d1c10ebd22dfb2f388d29d696d7742f1a" Aug 13 01:24:10.353180 env[1382]: 2025-08-13 01:24:10.322 [INFO][5308] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3e4d21963abef3b06d333673cf344f4d1c10ebd22dfb2f388d29d696d7742f1a" Aug 13 01:24:10.353180 env[1382]: 2025-08-13 01:24:10.343 [INFO][5316] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3e4d21963abef3b06d333673cf344f4d1c10ebd22dfb2f388d29d696d7742f1a" HandleID="k8s-pod-network.3e4d21963abef3b06d333673cf344f4d1c10ebd22dfb2f388d29d696d7742f1a" Workload="localhost-k8s-calico--apiserver--67b95cb99--9ggd9-eth0" Aug 13 01:24:10.353180 env[1382]: 2025-08-13 01:24:10.343 [INFO][5316] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 01:24:10.353180 env[1382]: 2025-08-13 01:24:10.343 [INFO][5316] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 01:24:10.353180 env[1382]: 2025-08-13 01:24:10.348 [WARNING][5316] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3e4d21963abef3b06d333673cf344f4d1c10ebd22dfb2f388d29d696d7742f1a" HandleID="k8s-pod-network.3e4d21963abef3b06d333673cf344f4d1c10ebd22dfb2f388d29d696d7742f1a" Workload="localhost-k8s-calico--apiserver--67b95cb99--9ggd9-eth0" Aug 13 01:24:10.353180 env[1382]: 2025-08-13 01:24:10.348 [INFO][5316] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3e4d21963abef3b06d333673cf344f4d1c10ebd22dfb2f388d29d696d7742f1a" HandleID="k8s-pod-network.3e4d21963abef3b06d333673cf344f4d1c10ebd22dfb2f388d29d696d7742f1a" Workload="localhost-k8s-calico--apiserver--67b95cb99--9ggd9-eth0" Aug 13 01:24:10.353180 env[1382]: 2025-08-13 01:24:10.349 [INFO][5316] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 01:24:10.353180 env[1382]: 2025-08-13 01:24:10.351 [INFO][5308] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="3e4d21963abef3b06d333673cf344f4d1c10ebd22dfb2f388d29d696d7742f1a" Aug 13 01:24:10.355923 env[1382]: time="2025-08-13T01:24:10.353213159Z" level=info msg="TearDown network for sandbox \"3e4d21963abef3b06d333673cf344f4d1c10ebd22dfb2f388d29d696d7742f1a\" successfully" Aug 13 01:24:10.361756 env[1382]: time="2025-08-13T01:24:10.361736233Z" level=info msg="RemovePodSandbox \"3e4d21963abef3b06d333673cf344f4d1c10ebd22dfb2f388d29d696d7742f1a\" returns successfully" Aug 13 01:24:10.362099 env[1382]: time="2025-08-13T01:24:10.362028399Z" level=info msg="StopPodSandbox for \"3c9d512f270b3f8067a730dbdae6e9e3fe324ed604bf5cfe32a26fa9ed47e917\"" Aug 13 01:24:10.451217 env[1382]: 2025-08-13 01:24:10.422 [WARNING][5331] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="3c9d512f270b3f8067a730dbdae6e9e3fe324ed604bf5cfe32a26fa9ed47e917" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--t8vb4-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"0722f301-b3c1-4eb6-ad2d-cc09409b96c2", ResourceVersion:"912", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 1, 23, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"dc205b519a72892cd83d55bed09a000108c1ac8b788ac46be2f5ccd36ba985b7", Pod:"coredns-7c65d6cfc9-t8vb4", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calid35276fe512", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, 
AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 01:24:10.451217 env[1382]: 2025-08-13 01:24:10.423 [INFO][5331] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="3c9d512f270b3f8067a730dbdae6e9e3fe324ed604bf5cfe32a26fa9ed47e917" Aug 13 01:24:10.451217 env[1382]: 2025-08-13 01:24:10.423 [INFO][5331] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="3c9d512f270b3f8067a730dbdae6e9e3fe324ed604bf5cfe32a26fa9ed47e917" iface="eth0" netns="" Aug 13 01:24:10.451217 env[1382]: 2025-08-13 01:24:10.423 [INFO][5331] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="3c9d512f270b3f8067a730dbdae6e9e3fe324ed604bf5cfe32a26fa9ed47e917" Aug 13 01:24:10.451217 env[1382]: 2025-08-13 01:24:10.423 [INFO][5331] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3c9d512f270b3f8067a730dbdae6e9e3fe324ed604bf5cfe32a26fa9ed47e917" Aug 13 01:24:10.451217 env[1382]: 2025-08-13 01:24:10.441 [INFO][5338] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3c9d512f270b3f8067a730dbdae6e9e3fe324ed604bf5cfe32a26fa9ed47e917" HandleID="k8s-pod-network.3c9d512f270b3f8067a730dbdae6e9e3fe324ed604bf5cfe32a26fa9ed47e917" Workload="localhost-k8s-coredns--7c65d6cfc9--t8vb4-eth0" Aug 13 01:24:10.451217 env[1382]: 2025-08-13 01:24:10.441 [INFO][5338] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 01:24:10.451217 env[1382]: 2025-08-13 01:24:10.441 [INFO][5338] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 01:24:10.451217 env[1382]: 2025-08-13 01:24:10.446 [WARNING][5338] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="3c9d512f270b3f8067a730dbdae6e9e3fe324ed604bf5cfe32a26fa9ed47e917" HandleID="k8s-pod-network.3c9d512f270b3f8067a730dbdae6e9e3fe324ed604bf5cfe32a26fa9ed47e917" Workload="localhost-k8s-coredns--7c65d6cfc9--t8vb4-eth0" Aug 13 01:24:10.451217 env[1382]: 2025-08-13 01:24:10.446 [INFO][5338] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3c9d512f270b3f8067a730dbdae6e9e3fe324ed604bf5cfe32a26fa9ed47e917" HandleID="k8s-pod-network.3c9d512f270b3f8067a730dbdae6e9e3fe324ed604bf5cfe32a26fa9ed47e917" Workload="localhost-k8s-coredns--7c65d6cfc9--t8vb4-eth0" Aug 13 01:24:10.451217 env[1382]: 2025-08-13 01:24:10.447 [INFO][5338] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 01:24:10.451217 env[1382]: 2025-08-13 01:24:10.449 [INFO][5331] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="3c9d512f270b3f8067a730dbdae6e9e3fe324ed604bf5cfe32a26fa9ed47e917" Aug 13 01:24:10.452569 env[1382]: time="2025-08-13T01:24:10.451458716Z" level=info msg="TearDown network for sandbox \"3c9d512f270b3f8067a730dbdae6e9e3fe324ed604bf5cfe32a26fa9ed47e917\" successfully" Aug 13 01:24:10.452569 env[1382]: time="2025-08-13T01:24:10.451479197Z" level=info msg="StopPodSandbox for \"3c9d512f270b3f8067a730dbdae6e9e3fe324ed604bf5cfe32a26fa9ed47e917\" returns successfully" Aug 13 01:24:10.452569 env[1382]: time="2025-08-13T01:24:10.451776515Z" level=info msg="RemovePodSandbox for \"3c9d512f270b3f8067a730dbdae6e9e3fe324ed604bf5cfe32a26fa9ed47e917\"" Aug 13 01:24:10.452569 env[1382]: time="2025-08-13T01:24:10.451792528Z" level=info msg="Forcibly stopping sandbox \"3c9d512f270b3f8067a730dbdae6e9e3fe324ed604bf5cfe32a26fa9ed47e917\"" Aug 13 01:24:10.644702 env[1382]: 2025-08-13 01:24:10.484 [WARNING][5352] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="3c9d512f270b3f8067a730dbdae6e9e3fe324ed604bf5cfe32a26fa9ed47e917" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--t8vb4-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"0722f301-b3c1-4eb6-ad2d-cc09409b96c2", ResourceVersion:"912", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 1, 23, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"dc205b519a72892cd83d55bed09a000108c1ac8b788ac46be2f5ccd36ba985b7", Pod:"coredns-7c65d6cfc9-t8vb4", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calid35276fe512", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 01:24:10.644702 env[1382]: 2025-08-13 01:24:10.485 [INFO][5352] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="3c9d512f270b3f8067a730dbdae6e9e3fe324ed604bf5cfe32a26fa9ed47e917" Aug 13 01:24:10.644702 env[1382]: 2025-08-13 01:24:10.485 [INFO][5352] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="3c9d512f270b3f8067a730dbdae6e9e3fe324ed604bf5cfe32a26fa9ed47e917" iface="eth0" netns="" Aug 13 01:24:10.644702 env[1382]: 2025-08-13 01:24:10.485 [INFO][5352] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="3c9d512f270b3f8067a730dbdae6e9e3fe324ed604bf5cfe32a26fa9ed47e917" Aug 13 01:24:10.644702 env[1382]: 2025-08-13 01:24:10.485 [INFO][5352] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3c9d512f270b3f8067a730dbdae6e9e3fe324ed604bf5cfe32a26fa9ed47e917" Aug 13 01:24:10.644702 env[1382]: 2025-08-13 01:24:10.501 [INFO][5359] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3c9d512f270b3f8067a730dbdae6e9e3fe324ed604bf5cfe32a26fa9ed47e917" HandleID="k8s-pod-network.3c9d512f270b3f8067a730dbdae6e9e3fe324ed604bf5cfe32a26fa9ed47e917" Workload="localhost-k8s-coredns--7c65d6cfc9--t8vb4-eth0" Aug 13 01:24:10.644702 env[1382]: 2025-08-13 01:24:10.501 [INFO][5359] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 01:24:10.644702 env[1382]: 2025-08-13 01:24:10.501 [INFO][5359] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 01:24:10.644702 env[1382]: 2025-08-13 01:24:10.640 [WARNING][5359] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="3c9d512f270b3f8067a730dbdae6e9e3fe324ed604bf5cfe32a26fa9ed47e917" HandleID="k8s-pod-network.3c9d512f270b3f8067a730dbdae6e9e3fe324ed604bf5cfe32a26fa9ed47e917" Workload="localhost-k8s-coredns--7c65d6cfc9--t8vb4-eth0" Aug 13 01:24:10.644702 env[1382]: 2025-08-13 01:24:10.640 [INFO][5359] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3c9d512f270b3f8067a730dbdae6e9e3fe324ed604bf5cfe32a26fa9ed47e917" HandleID="k8s-pod-network.3c9d512f270b3f8067a730dbdae6e9e3fe324ed604bf5cfe32a26fa9ed47e917" Workload="localhost-k8s-coredns--7c65d6cfc9--t8vb4-eth0" Aug 13 01:24:10.644702 env[1382]: 2025-08-13 01:24:10.641 [INFO][5359] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 01:24:10.644702 env[1382]: 2025-08-13 01:24:10.643 [INFO][5352] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="3c9d512f270b3f8067a730dbdae6e9e3fe324ed604bf5cfe32a26fa9ed47e917" Aug 13 01:24:10.653997 env[1382]: time="2025-08-13T01:24:10.644734378Z" level=info msg="TearDown network for sandbox \"3c9d512f270b3f8067a730dbdae6e9e3fe324ed604bf5cfe32a26fa9ed47e917\" successfully" Aug 13 01:24:11.295585 env[1382]: time="2025-08-13T01:24:11.295519177Z" level=info msg="RemovePodSandbox \"3c9d512f270b3f8067a730dbdae6e9e3fe324ed604bf5cfe32a26fa9ed47e917\" returns successfully" Aug 13 01:24:11.305023 env[1382]: time="2025-08-13T01:24:11.304956372Z" level=info msg="StopPodSandbox for \"ec778eecec78b93364ad0bd0af3905ceef67f70cfa4f7449aae0ae38ac4c6d4d\"" Aug 13 01:24:11.559012 env[1382]: 2025-08-13 01:24:11.441 [WARNING][5375] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="ec778eecec78b93364ad0bd0af3905ceef67f70cfa4f7449aae0ae38ac4c6d4d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--gg2v2-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"a40d20da-5325-4fa5-9cd5-94a58f7ee4b0", ResourceVersion:"933", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 1, 23, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"3069a16d2c18fcf082e0a5533c02ad86e577bbaf7c8cc38283fc3e95aa104f6e", Pod:"coredns-7c65d6cfc9-gg2v2", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie666ed159cc", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 01:24:11.559012 env[1382]: 2025-08-13 01:24:11.442 [INFO][5375] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="ec778eecec78b93364ad0bd0af3905ceef67f70cfa4f7449aae0ae38ac4c6d4d" Aug 13 01:24:11.559012 env[1382]: 2025-08-13 01:24:11.442 [INFO][5375] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="ec778eecec78b93364ad0bd0af3905ceef67f70cfa4f7449aae0ae38ac4c6d4d" iface="eth0" netns="" Aug 13 01:24:11.559012 env[1382]: 2025-08-13 01:24:11.442 [INFO][5375] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="ec778eecec78b93364ad0bd0af3905ceef67f70cfa4f7449aae0ae38ac4c6d4d" Aug 13 01:24:11.559012 env[1382]: 2025-08-13 01:24:11.442 [INFO][5375] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ec778eecec78b93364ad0bd0af3905ceef67f70cfa4f7449aae0ae38ac4c6d4d" Aug 13 01:24:11.559012 env[1382]: 2025-08-13 01:24:11.541 [INFO][5382] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ec778eecec78b93364ad0bd0af3905ceef67f70cfa4f7449aae0ae38ac4c6d4d" HandleID="k8s-pod-network.ec778eecec78b93364ad0bd0af3905ceef67f70cfa4f7449aae0ae38ac4c6d4d" Workload="localhost-k8s-coredns--7c65d6cfc9--gg2v2-eth0" Aug 13 01:24:11.559012 env[1382]: 2025-08-13 01:24:11.544 [INFO][5382] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 01:24:11.559012 env[1382]: 2025-08-13 01:24:11.544 [INFO][5382] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 01:24:11.559012 env[1382]: 2025-08-13 01:24:11.554 [WARNING][5382] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ec778eecec78b93364ad0bd0af3905ceef67f70cfa4f7449aae0ae38ac4c6d4d" HandleID="k8s-pod-network.ec778eecec78b93364ad0bd0af3905ceef67f70cfa4f7449aae0ae38ac4c6d4d" Workload="localhost-k8s-coredns--7c65d6cfc9--gg2v2-eth0" Aug 13 01:24:11.559012 env[1382]: 2025-08-13 01:24:11.554 [INFO][5382] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ec778eecec78b93364ad0bd0af3905ceef67f70cfa4f7449aae0ae38ac4c6d4d" HandleID="k8s-pod-network.ec778eecec78b93364ad0bd0af3905ceef67f70cfa4f7449aae0ae38ac4c6d4d" Workload="localhost-k8s-coredns--7c65d6cfc9--gg2v2-eth0" Aug 13 01:24:11.559012 env[1382]: 2025-08-13 01:24:11.555 [INFO][5382] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 01:24:11.559012 env[1382]: 2025-08-13 01:24:11.557 [INFO][5375] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="ec778eecec78b93364ad0bd0af3905ceef67f70cfa4f7449aae0ae38ac4c6d4d" Aug 13 01:24:11.562798 env[1382]: time="2025-08-13T01:24:11.559313284Z" level=info msg="TearDown network for sandbox \"ec778eecec78b93364ad0bd0af3905ceef67f70cfa4f7449aae0ae38ac4c6d4d\" successfully" Aug 13 01:24:11.562798 env[1382]: time="2025-08-13T01:24:11.559337444Z" level=info msg="StopPodSandbox for \"ec778eecec78b93364ad0bd0af3905ceef67f70cfa4f7449aae0ae38ac4c6d4d\" returns successfully" Aug 13 01:24:11.562798 env[1382]: time="2025-08-13T01:24:11.561037540Z" level=info msg="RemovePodSandbox for \"ec778eecec78b93364ad0bd0af3905ceef67f70cfa4f7449aae0ae38ac4c6d4d\"" Aug 13 01:24:11.562798 env[1382]: time="2025-08-13T01:24:11.561060594Z" level=info msg="Forcibly stopping sandbox \"ec778eecec78b93364ad0bd0af3905ceef67f70cfa4f7449aae0ae38ac4c6d4d\"" Aug 13 01:24:11.635100 env[1382]: 2025-08-13 01:24:11.596 [WARNING][5397] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="ec778eecec78b93364ad0bd0af3905ceef67f70cfa4f7449aae0ae38ac4c6d4d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--gg2v2-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"a40d20da-5325-4fa5-9cd5-94a58f7ee4b0", ResourceVersion:"933", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 1, 23, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"3069a16d2c18fcf082e0a5533c02ad86e577bbaf7c8cc38283fc3e95aa104f6e", Pod:"coredns-7c65d6cfc9-gg2v2", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie666ed159cc", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 01:24:11.635100 env[1382]: 2025-08-13 01:24:11.597 [INFO][5397] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="ec778eecec78b93364ad0bd0af3905ceef67f70cfa4f7449aae0ae38ac4c6d4d" Aug 13 01:24:11.635100 env[1382]: 2025-08-13 01:24:11.597 [INFO][5397] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="ec778eecec78b93364ad0bd0af3905ceef67f70cfa4f7449aae0ae38ac4c6d4d" iface="eth0" netns="" Aug 13 01:24:11.635100 env[1382]: 2025-08-13 01:24:11.597 [INFO][5397] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="ec778eecec78b93364ad0bd0af3905ceef67f70cfa4f7449aae0ae38ac4c6d4d" Aug 13 01:24:11.635100 env[1382]: 2025-08-13 01:24:11.597 [INFO][5397] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ec778eecec78b93364ad0bd0af3905ceef67f70cfa4f7449aae0ae38ac4c6d4d" Aug 13 01:24:11.635100 env[1382]: 2025-08-13 01:24:11.623 [INFO][5404] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ec778eecec78b93364ad0bd0af3905ceef67f70cfa4f7449aae0ae38ac4c6d4d" HandleID="k8s-pod-network.ec778eecec78b93364ad0bd0af3905ceef67f70cfa4f7449aae0ae38ac4c6d4d" Workload="localhost-k8s-coredns--7c65d6cfc9--gg2v2-eth0" Aug 13 01:24:11.635100 env[1382]: 2025-08-13 01:24:11.624 [INFO][5404] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 01:24:11.635100 env[1382]: 2025-08-13 01:24:11.624 [INFO][5404] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 01:24:11.635100 env[1382]: 2025-08-13 01:24:11.631 [WARNING][5404] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ec778eecec78b93364ad0bd0af3905ceef67f70cfa4f7449aae0ae38ac4c6d4d" HandleID="k8s-pod-network.ec778eecec78b93364ad0bd0af3905ceef67f70cfa4f7449aae0ae38ac4c6d4d" Workload="localhost-k8s-coredns--7c65d6cfc9--gg2v2-eth0" Aug 13 01:24:11.635100 env[1382]: 2025-08-13 01:24:11.631 [INFO][5404] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ec778eecec78b93364ad0bd0af3905ceef67f70cfa4f7449aae0ae38ac4c6d4d" HandleID="k8s-pod-network.ec778eecec78b93364ad0bd0af3905ceef67f70cfa4f7449aae0ae38ac4c6d4d" Workload="localhost-k8s-coredns--7c65d6cfc9--gg2v2-eth0" Aug 13 01:24:11.635100 env[1382]: 2025-08-13 01:24:11.632 [INFO][5404] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 01:24:11.635100 env[1382]: 2025-08-13 01:24:11.633 [INFO][5397] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="ec778eecec78b93364ad0bd0af3905ceef67f70cfa4f7449aae0ae38ac4c6d4d" Aug 13 01:24:11.637672 env[1382]: time="2025-08-13T01:24:11.635206926Z" level=info msg="TearDown network for sandbox \"ec778eecec78b93364ad0bd0af3905ceef67f70cfa4f7449aae0ae38ac4c6d4d\" successfully" Aug 13 01:24:11.639213 env[1382]: time="2025-08-13T01:24:11.639200279Z" level=info msg="RemovePodSandbox \"ec778eecec78b93364ad0bd0af3905ceef67f70cfa4f7449aae0ae38ac4c6d4d\" returns successfully" Aug 13 01:24:11.639584 env[1382]: time="2025-08-13T01:24:11.639567185Z" level=info msg="StopPodSandbox for \"1e99645540af86f2760c41d1b2555e989b46d6986be734d4feef0a6970fe113b\"" Aug 13 01:24:11.687782 env[1382]: 2025-08-13 01:24:11.666 [WARNING][5419] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="1e99645540af86f2760c41d1b2555e989b46d6986be734d4feef0a6970fe113b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--7kfxn-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"24914c97-a643-4e2a-b954-9959ef2f43e1", ResourceVersion:"1060", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 1, 23, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"57bd658777", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"5832d21db40b0c94258e2dae2d17b911195105916d23b98f620520f0e435928e", Pod:"csi-node-driver-7kfxn", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"caliab34fc401ac", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 01:24:11.687782 env[1382]: 2025-08-13 01:24:11.666 [INFO][5419] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="1e99645540af86f2760c41d1b2555e989b46d6986be734d4feef0a6970fe113b" Aug 13 01:24:11.687782 env[1382]: 2025-08-13 
01:24:11.666 [INFO][5419] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="1e99645540af86f2760c41d1b2555e989b46d6986be734d4feef0a6970fe113b" iface="eth0" netns="" Aug 13 01:24:11.687782 env[1382]: 2025-08-13 01:24:11.666 [INFO][5419] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="1e99645540af86f2760c41d1b2555e989b46d6986be734d4feef0a6970fe113b" Aug 13 01:24:11.687782 env[1382]: 2025-08-13 01:24:11.666 [INFO][5419] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1e99645540af86f2760c41d1b2555e989b46d6986be734d4feef0a6970fe113b" Aug 13 01:24:11.687782 env[1382]: 2025-08-13 01:24:11.680 [INFO][5426] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="1e99645540af86f2760c41d1b2555e989b46d6986be734d4feef0a6970fe113b" HandleID="k8s-pod-network.1e99645540af86f2760c41d1b2555e989b46d6986be734d4feef0a6970fe113b" Workload="localhost-k8s-csi--node--driver--7kfxn-eth0" Aug 13 01:24:11.687782 env[1382]: 2025-08-13 01:24:11.680 [INFO][5426] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 01:24:11.687782 env[1382]: 2025-08-13 01:24:11.680 [INFO][5426] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 01:24:11.687782 env[1382]: 2025-08-13 01:24:11.684 [WARNING][5426] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="1e99645540af86f2760c41d1b2555e989b46d6986be734d4feef0a6970fe113b" HandleID="k8s-pod-network.1e99645540af86f2760c41d1b2555e989b46d6986be734d4feef0a6970fe113b" Workload="localhost-k8s-csi--node--driver--7kfxn-eth0" Aug 13 01:24:11.687782 env[1382]: 2025-08-13 01:24:11.684 [INFO][5426] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="1e99645540af86f2760c41d1b2555e989b46d6986be734d4feef0a6970fe113b" HandleID="k8s-pod-network.1e99645540af86f2760c41d1b2555e989b46d6986be734d4feef0a6970fe113b" Workload="localhost-k8s-csi--node--driver--7kfxn-eth0" Aug 13 01:24:11.687782 env[1382]: 2025-08-13 01:24:11.684 [INFO][5426] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 01:24:11.687782 env[1382]: 2025-08-13 01:24:11.686 [INFO][5419] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="1e99645540af86f2760c41d1b2555e989b46d6986be734d4feef0a6970fe113b" Aug 13 01:24:11.690426 env[1382]: time="2025-08-13T01:24:11.687780929Z" level=info msg="TearDown network for sandbox \"1e99645540af86f2760c41d1b2555e989b46d6986be734d4feef0a6970fe113b\" successfully" Aug 13 01:24:11.690426 env[1382]: time="2025-08-13T01:24:11.687801192Z" level=info msg="StopPodSandbox for \"1e99645540af86f2760c41d1b2555e989b46d6986be734d4feef0a6970fe113b\" returns successfully" Aug 13 01:24:11.690426 env[1382]: time="2025-08-13T01:24:11.688105707Z" level=info msg="RemovePodSandbox for \"1e99645540af86f2760c41d1b2555e989b46d6986be734d4feef0a6970fe113b\"" Aug 13 01:24:11.690426 env[1382]: time="2025-08-13T01:24:11.688124850Z" level=info msg="Forcibly stopping sandbox \"1e99645540af86f2760c41d1b2555e989b46d6986be734d4feef0a6970fe113b\"" Aug 13 01:24:11.732109 env[1382]: 2025-08-13 01:24:11.709 [WARNING][5440] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="1e99645540af86f2760c41d1b2555e989b46d6986be734d4feef0a6970fe113b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--7kfxn-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"24914c97-a643-4e2a-b954-9959ef2f43e1", ResourceVersion:"1060", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 1, 23, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"57bd658777", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"5832d21db40b0c94258e2dae2d17b911195105916d23b98f620520f0e435928e", Pod:"csi-node-driver-7kfxn", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"caliab34fc401ac", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 01:24:11.732109 env[1382]: 2025-08-13 01:24:11.709 [INFO][5440] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="1e99645540af86f2760c41d1b2555e989b46d6986be734d4feef0a6970fe113b" Aug 13 01:24:11.732109 env[1382]: 2025-08-13 01:24:11.709 [INFO][5440] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="1e99645540af86f2760c41d1b2555e989b46d6986be734d4feef0a6970fe113b" iface="eth0" netns="" Aug 13 01:24:11.732109 env[1382]: 2025-08-13 01:24:11.709 [INFO][5440] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="1e99645540af86f2760c41d1b2555e989b46d6986be734d4feef0a6970fe113b" Aug 13 01:24:11.732109 env[1382]: 2025-08-13 01:24:11.709 [INFO][5440] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1e99645540af86f2760c41d1b2555e989b46d6986be734d4feef0a6970fe113b" Aug 13 01:24:11.732109 env[1382]: 2025-08-13 01:24:11.723 [INFO][5447] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="1e99645540af86f2760c41d1b2555e989b46d6986be734d4feef0a6970fe113b" HandleID="k8s-pod-network.1e99645540af86f2760c41d1b2555e989b46d6986be734d4feef0a6970fe113b" Workload="localhost-k8s-csi--node--driver--7kfxn-eth0" Aug 13 01:24:11.732109 env[1382]: 2025-08-13 01:24:11.724 [INFO][5447] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 01:24:11.732109 env[1382]: 2025-08-13 01:24:11.724 [INFO][5447] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 01:24:11.732109 env[1382]: 2025-08-13 01:24:11.728 [WARNING][5447] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="1e99645540af86f2760c41d1b2555e989b46d6986be734d4feef0a6970fe113b" HandleID="k8s-pod-network.1e99645540af86f2760c41d1b2555e989b46d6986be734d4feef0a6970fe113b" Workload="localhost-k8s-csi--node--driver--7kfxn-eth0" Aug 13 01:24:11.732109 env[1382]: 2025-08-13 01:24:11.728 [INFO][5447] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="1e99645540af86f2760c41d1b2555e989b46d6986be734d4feef0a6970fe113b" HandleID="k8s-pod-network.1e99645540af86f2760c41d1b2555e989b46d6986be734d4feef0a6970fe113b" Workload="localhost-k8s-csi--node--driver--7kfxn-eth0" Aug 13 01:24:11.732109 env[1382]: 2025-08-13 01:24:11.729 [INFO][5447] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 01:24:11.732109 env[1382]: 2025-08-13 01:24:11.730 [INFO][5440] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="1e99645540af86f2760c41d1b2555e989b46d6986be734d4feef0a6970fe113b" Aug 13 01:24:11.736162 env[1382]: time="2025-08-13T01:24:11.732328194Z" level=info msg="TearDown network for sandbox \"1e99645540af86f2760c41d1b2555e989b46d6986be734d4feef0a6970fe113b\" successfully" Aug 13 01:24:11.736566 env[1382]: time="2025-08-13T01:24:11.736552976Z" level=info msg="RemovePodSandbox \"1e99645540af86f2760c41d1b2555e989b46d6986be734d4feef0a6970fe113b\" returns successfully" Aug 13 01:24:11.736911 env[1382]: time="2025-08-13T01:24:11.736891819Z" level=info msg="StopPodSandbox for \"3f132e079d007598727838887ce3a31b3e56bd0ebf31b216e203c57698a55153\"" Aug 13 01:24:11.786964 env[1382]: 2025-08-13 01:24:11.758 [WARNING][5461] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="3f132e079d007598727838887ce3a31b3e56bd0ebf31b216e203c57698a55153" WorkloadEndpoint="localhost-k8s-whisker--5658fcd68c--wsmfs-eth0" Aug 13 01:24:11.786964 env[1382]: 2025-08-13 01:24:11.758 [INFO][5461] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="3f132e079d007598727838887ce3a31b3e56bd0ebf31b216e203c57698a55153" Aug 13 01:24:11.786964 env[1382]: 2025-08-13 01:24:11.758 [INFO][5461] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="3f132e079d007598727838887ce3a31b3e56bd0ebf31b216e203c57698a55153" iface="eth0" netns="" Aug 13 01:24:11.786964 env[1382]: 2025-08-13 01:24:11.758 [INFO][5461] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="3f132e079d007598727838887ce3a31b3e56bd0ebf31b216e203c57698a55153" Aug 13 01:24:11.786964 env[1382]: 2025-08-13 01:24:11.758 [INFO][5461] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3f132e079d007598727838887ce3a31b3e56bd0ebf31b216e203c57698a55153" Aug 13 01:24:11.786964 env[1382]: 2025-08-13 01:24:11.779 [INFO][5469] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3f132e079d007598727838887ce3a31b3e56bd0ebf31b216e203c57698a55153" HandleID="k8s-pod-network.3f132e079d007598727838887ce3a31b3e56bd0ebf31b216e203c57698a55153" Workload="localhost-k8s-whisker--5658fcd68c--wsmfs-eth0" Aug 13 01:24:11.786964 env[1382]: 2025-08-13 01:24:11.780 [INFO][5469] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 01:24:11.786964 env[1382]: 2025-08-13 01:24:11.780 [INFO][5469] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 01:24:11.786964 env[1382]: 2025-08-13 01:24:11.783 [WARNING][5469] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3f132e079d007598727838887ce3a31b3e56bd0ebf31b216e203c57698a55153" HandleID="k8s-pod-network.3f132e079d007598727838887ce3a31b3e56bd0ebf31b216e203c57698a55153" Workload="localhost-k8s-whisker--5658fcd68c--wsmfs-eth0" Aug 13 01:24:11.786964 env[1382]: 2025-08-13 01:24:11.783 [INFO][5469] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3f132e079d007598727838887ce3a31b3e56bd0ebf31b216e203c57698a55153" HandleID="k8s-pod-network.3f132e079d007598727838887ce3a31b3e56bd0ebf31b216e203c57698a55153" Workload="localhost-k8s-whisker--5658fcd68c--wsmfs-eth0" Aug 13 01:24:11.786964 env[1382]: 2025-08-13 01:24:11.784 [INFO][5469] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 01:24:11.786964 env[1382]: 2025-08-13 01:24:11.785 [INFO][5461] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="3f132e079d007598727838887ce3a31b3e56bd0ebf31b216e203c57698a55153" Aug 13 01:24:11.788750 env[1382]: time="2025-08-13T01:24:11.786974564Z" level=info msg="TearDown network for sandbox \"3f132e079d007598727838887ce3a31b3e56bd0ebf31b216e203c57698a55153\" successfully" Aug 13 01:24:11.788750 env[1382]: time="2025-08-13T01:24:11.786997321Z" level=info msg="StopPodSandbox for \"3f132e079d007598727838887ce3a31b3e56bd0ebf31b216e203c57698a55153\" returns successfully" Aug 13 01:24:11.788750 env[1382]: time="2025-08-13T01:24:11.787394709Z" level=info msg="RemovePodSandbox for \"3f132e079d007598727838887ce3a31b3e56bd0ebf31b216e203c57698a55153\"" Aug 13 01:24:11.788750 env[1382]: time="2025-08-13T01:24:11.787411977Z" level=info msg="Forcibly stopping sandbox \"3f132e079d007598727838887ce3a31b3e56bd0ebf31b216e203c57698a55153\"" Aug 13 01:24:11.834928 env[1382]: 2025-08-13 01:24:11.811 [WARNING][5484] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="3f132e079d007598727838887ce3a31b3e56bd0ebf31b216e203c57698a55153" WorkloadEndpoint="localhost-k8s-whisker--5658fcd68c--wsmfs-eth0" Aug 13 01:24:11.834928 env[1382]: 2025-08-13 01:24:11.811 [INFO][5484] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="3f132e079d007598727838887ce3a31b3e56bd0ebf31b216e203c57698a55153" Aug 13 01:24:11.834928 env[1382]: 2025-08-13 01:24:11.811 [INFO][5484] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="3f132e079d007598727838887ce3a31b3e56bd0ebf31b216e203c57698a55153" iface="eth0" netns="" Aug 13 01:24:11.834928 env[1382]: 2025-08-13 01:24:11.811 [INFO][5484] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="3f132e079d007598727838887ce3a31b3e56bd0ebf31b216e203c57698a55153" Aug 13 01:24:11.834928 env[1382]: 2025-08-13 01:24:11.811 [INFO][5484] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3f132e079d007598727838887ce3a31b3e56bd0ebf31b216e203c57698a55153" Aug 13 01:24:11.834928 env[1382]: 2025-08-13 01:24:11.827 [INFO][5492] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3f132e079d007598727838887ce3a31b3e56bd0ebf31b216e203c57698a55153" HandleID="k8s-pod-network.3f132e079d007598727838887ce3a31b3e56bd0ebf31b216e203c57698a55153" Workload="localhost-k8s-whisker--5658fcd68c--wsmfs-eth0" Aug 13 01:24:11.834928 env[1382]: 2025-08-13 01:24:11.828 [INFO][5492] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 01:24:11.834928 env[1382]: 2025-08-13 01:24:11.828 [INFO][5492] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Aug 13 01:24:11.834928 env[1382]: 2025-08-13 01:24:11.831 [WARNING][5492] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="3f132e079d007598727838887ce3a31b3e56bd0ebf31b216e203c57698a55153" HandleID="k8s-pod-network.3f132e079d007598727838887ce3a31b3e56bd0ebf31b216e203c57698a55153" Workload="localhost-k8s-whisker--5658fcd68c--wsmfs-eth0" Aug 13 01:24:11.834928 env[1382]: 2025-08-13 01:24:11.831 [INFO][5492] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3f132e079d007598727838887ce3a31b3e56bd0ebf31b216e203c57698a55153" HandleID="k8s-pod-network.3f132e079d007598727838887ce3a31b3e56bd0ebf31b216e203c57698a55153" Workload="localhost-k8s-whisker--5658fcd68c--wsmfs-eth0" Aug 13 01:24:11.834928 env[1382]: 2025-08-13 01:24:11.832 [INFO][5492] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 01:24:11.834928 env[1382]: 2025-08-13 01:24:11.833 [INFO][5484] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="3f132e079d007598727838887ce3a31b3e56bd0ebf31b216e203c57698a55153" Aug 13 01:24:11.836683 env[1382]: time="2025-08-13T01:24:11.835180571Z" level=info msg="TearDown network for sandbox \"3f132e079d007598727838887ce3a31b3e56bd0ebf31b216e203c57698a55153\" successfully" Aug 13 01:24:11.838053 env[1382]: time="2025-08-13T01:24:11.838039686Z" level=info msg="RemovePodSandbox \"3f132e079d007598727838887ce3a31b3e56bd0ebf31b216e203c57698a55153\" returns successfully" Aug 13 01:24:20.130652 systemd[1]: Started sshd@7-139.178.70.100:22-139.178.68.195:41716.service. Aug 13 01:24:20.193767 kernel: kauditd_printk_skb: 8 callbacks suppressed Aug 13 01:24:20.199824 kernel: audit: type=1130 audit(1755048260.135:427): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-139.178.70.100:22-139.178.68.195:41716 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:24:20.135000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-139.178.70.100:22-139.178.68.195:41716 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Aug 13 01:24:20.282000 audit[5532]: USER_ACCT pid=5532 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 01:24:20.287985 kernel: audit: type=1101 audit(1755048260.282:428): pid=5532 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 01:24:20.288033 sshd[5532]: Accepted publickey for core from 139.178.68.195 port 41716 ssh2: RSA SHA256:u64cNhEl5z4ZpRddmMSh52OngMoVu/4Kwwey+GxmIa8 Aug 13 01:24:20.301039 kernel: audit: type=1103 audit(1755048260.287:429): pid=5532 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 01:24:20.301076 kernel: audit: type=1006 audit(1755048260.287:430): pid=5532 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=10 res=1 Aug 13 01:24:20.301095 kernel: audit: type=1300 audit(1755048260.287:430): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff3f937850 a2=3 a3=0 items=0 ppid=1 pid=5532 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:24:20.301111 kernel: audit: type=1327 audit(1755048260.287:430): proctitle=737368643A20636F7265205B707269765D Aug 13 01:24:20.287000 audit[5532]: CRED_ACQ pid=5532 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 01:24:20.287000 audit[5532]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff3f937850 a2=3 a3=0 items=0 ppid=1 pid=5532 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:24:20.287000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Aug 13 01:24:20.302169 sshd[5532]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 01:24:20.330452 systemd-logind[1346]: New session 10 of user core. Aug 13 01:24:20.331769 systemd[1]: Started session-10.scope. 
Aug 13 01:24:20.334000 audit[5532]: USER_START pid=5532 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 01:24:20.338000 audit[5537]: CRED_ACQ pid=5537 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 01:24:20.341994 kernel: audit: type=1105 audit(1755048260.334:431): pid=5532 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 01:24:20.342280 kernel: audit: type=1103 audit(1755048260.338:432): pid=5537 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 01:24:21.591485 sshd[5532]: pam_unix(sshd:session): session closed for user core Aug 13 01:24:21.606418 kernel: audit: type=1106 audit(1755048261.592:433): pid=5532 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 01:24:21.608794 kernel: audit: type=1104 audit(1755048261.596:434): pid=5532 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 01:24:21.592000 audit[5532]: USER_END pid=5532 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 01:24:21.596000 audit[5532]: CRED_DISP pid=5532 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 01:24:21.602000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-139.178.70.100:22-139.178.68.195:41716 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:24:21.602802 systemd[1]: sshd@7-139.178.70.100:22-139.178.68.195:41716.service: Deactivated successfully. Aug 13 01:24:21.604314 systemd[1]: session-10.scope: Deactivated successfully. Aug 13 01:24:21.604842 systemd-logind[1346]: Session 10 logged out. Waiting for processes to exit. Aug 13 01:24:21.605406 systemd-logind[1346]: Removed session 10. 
Aug 13 01:24:25.962004 kubelet[2284]: I0813 01:24:25.961979 2284 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Aug 13 01:24:26.246000 audit[5569]: NETFILTER_CFG table=filter:123 family=2 entries=10 op=nft_register_rule pid=5569 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 01:24:26.270113 kernel: kauditd_printk_skb: 1 callbacks suppressed Aug 13 01:24:26.276214 kernel: audit: type=1325 audit(1755048266.246:436): table=filter:123 family=2 entries=10 op=nft_register_rule pid=5569 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 01:24:26.277271 kernel: audit: type=1300 audit(1755048266.246:436): arch=c000003e syscall=46 success=yes exit=3760 a0=3 a1=7ffca0a86c10 a2=0 a3=7ffca0a86bfc items=0 ppid=2387 pid=5569 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:24:26.277298 kernel: audit: type=1327 audit(1755048266.246:436): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 01:24:26.277314 kernel: audit: type=1325 audit(1755048266.260:437): table=nat:124 family=2 entries=36 op=nft_register_chain pid=5569 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 01:24:26.277340 kernel: audit: type=1300 audit(1755048266.260:437): arch=c000003e syscall=46 success=yes exit=12004 a0=3 a1=7ffca0a86c10 a2=0 a3=7ffca0a86bfc items=0 ppid=2387 pid=5569 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:24:26.277948 kernel: audit: type=1327 audit(1755048266.260:437): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 01:24:26.246000 audit[5569]: SYSCALL arch=c000003e syscall=46 success=yes exit=3760 a0=3 a1=7ffca0a86c10 a2=0 a3=7ffca0a86bfc items=0 ppid=2387 pid=5569 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:24:26.246000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 01:24:26.260000 audit[5569]: NETFILTER_CFG table=nat:124 family=2 entries=36 op=nft_register_chain pid=5569 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 01:24:26.260000 audit[5569]: SYSCALL arch=c000003e syscall=46 success=yes exit=12004 a0=3 a1=7ffca0a86c10 a2=0 a3=7ffca0a86bfc items=0 ppid=2387 pid=5569 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:24:26.260000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 01:24:26.618781 systemd[1]: Started sshd@8-139.178.70.100:22-139.178.68.195:41726.service. Aug 13 01:24:26.620000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-139.178.70.100:22-139.178.68.195:41726 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Aug 13 01:24:26.625736 kernel: audit: type=1130 audit(1755048266.620:438): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-139.178.70.100:22-139.178.68.195:41726 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:24:26.736000 audit[5570]: USER_ACCT pid=5570 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 01:24:26.737300 sshd[5570]: Accepted publickey for core from 139.178.68.195 port 41726 ssh2: RSA SHA256:u64cNhEl5z4ZpRddmMSh52OngMoVu/4Kwwey+GxmIa8 Aug 13 01:24:26.741741 kernel: audit: type=1101 audit(1755048266.736:439): pid=5570 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 01:24:26.742272 kernel: audit: type=1103 audit(1755048266.741:440): pid=5570 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 01:24:26.741000 audit[5570]: CRED_ACQ pid=5570 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 01:24:26.743119 sshd[5570]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 01:24:26.749717 kernel: audit: type=1006 audit(1755048266.741:441): pid=5570 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=11 res=1 Aug 13 01:24:26.741000 audit[5570]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe8e9d0ae0 a2=3 a3=0 items=0 ppid=1 pid=5570 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:24:26.741000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Aug 13 01:24:26.774429 systemd-logind[1346]: New session 11 of user core. Aug 13 01:24:26.775753 systemd[1]: Started session-11.scope. 
Aug 13 01:24:26.780000 audit[5570]: USER_START pid=5570 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 01:24:26.781000 audit[5573]: CRED_ACQ pid=5573 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 01:24:30.937989 sshd[5570]: pam_unix(sshd:session): session closed for user core Aug 13 01:24:30.983000 audit[5570]: USER_END pid=5570 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 01:24:30.983000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-139.178.70.100:22-139.178.68.195:33136 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:24:30.986000 audit[5570]: CRED_DISP pid=5570 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 01:24:30.993000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-139.178.70.100:22-139.178.68.195:41726 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:24:30.982719 systemd[1]: Started sshd@9-139.178.70.100:22-139.178.68.195:33136.service. Aug 13 01:24:30.995308 systemd[1]: sshd@8-139.178.70.100:22-139.178.68.195:41726.service: Deactivated successfully. Aug 13 01:24:30.997788 systemd[1]: session-11.scope: Deactivated successfully. Aug 13 01:24:30.999065 systemd-logind[1346]: Session 11 logged out. Waiting for processes to exit. Aug 13 01:24:31.001464 systemd-logind[1346]: Removed session 11. Aug 13 01:24:31.137000 audit[5582]: USER_ACCT pid=5582 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 01:24:31.139337 sshd[5582]: Accepted publickey for core from 139.178.68.195 port 33136 ssh2: RSA SHA256:u64cNhEl5z4ZpRddmMSh52OngMoVu/4Kwwey+GxmIa8 Aug 13 01:24:31.138000 audit[5582]: CRED_ACQ pid=5582 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 01:24:31.138000 audit[5582]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffef6148680 a2=3 a3=0 items=0 ppid=1 pid=5582 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:24:31.138000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Aug 13 01:24:31.150799 systemd[1]: Started session-12.scope. 
Aug 13 01:24:31.143475 sshd[5582]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 01:24:31.152759 systemd-logind[1346]: New session 12 of user core. Aug 13 01:24:31.157000 audit[5582]: USER_START pid=5582 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 01:24:31.159000 audit[5587]: CRED_ACQ pid=5587 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 01:24:35.045687 kubelet[2284]: E0813 01:24:34.937331 2284 kubelet.go:2512] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.779s" Aug 13 01:24:36.018669 sshd[5582]: pam_unix(sshd:session): session closed for user core Aug 13 01:24:36.077211 systemd[1]: Started sshd@10-139.178.70.100:22-139.178.68.195:33150.service. Aug 13 01:24:36.100356 kernel: kauditd_printk_skb: 15 callbacks suppressed Aug 13 01:24:36.111157 kernel: audit: type=1130 audit(1755048276.078:453): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-139.178.70.100:22-139.178.68.195:33150 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:24:36.115919 kernel: audit: type=1106 audit(1755048276.084:454): pid=5582 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 01:24:36.116998 kernel: audit: type=1104 audit(1755048276.088:455): pid=5582 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 01:24:36.118974 kernel: audit: type=1131 audit(1755048276.102:456): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-139.178.70.100:22-139.178.68.195:33136 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:24:36.078000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-139.178.70.100:22-139.178.68.195:33150 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Aug 13 01:24:36.084000 audit[5582]: USER_END pid=5582 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 01:24:36.088000 audit[5582]: CRED_DISP pid=5582 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 01:24:36.102000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-139.178.70.100:22-139.178.68.195:33136 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:24:36.103735 systemd[1]: sshd@9-139.178.70.100:22-139.178.68.195:33136.service: Deactivated successfully. Aug 13 01:24:36.107825 systemd[1]: session-12.scope: Deactivated successfully. Aug 13 01:24:36.107860 systemd-logind[1346]: Session 12 logged out. Waiting for processes to exit. Aug 13 01:24:36.112130 systemd-logind[1346]: Removed session 12. Aug 13 01:24:36.252000 audit[5594]: USER_ACCT pid=5594 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 01:24:36.260824 kernel: audit: type=1101 audit(1755048276.252:457): pid=5594 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 01:24:36.266297 kernel: audit: type=1103 audit(1755048276.254:458): pid=5594 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 01:24:36.266323 kernel: audit: type=1006 audit(1755048276.254:459): pid=5594 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=13 res=1 Aug 13 01:24:36.266342 kernel: audit: type=1300 audit(1755048276.254:459): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffcfae08410 a2=3 a3=0 items=0 ppid=1 pid=5594 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:24:36.254000 audit[5594]: CRED_ACQ pid=5594 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 01:24:36.268259 kernel: audit: type=1327 audit(1755048276.254:459): proctitle=737368643A20636F7265205B707269765D Aug 13 01:24:36.254000 audit[5594]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffcfae08410 a2=3 a3=0 items=0 ppid=1 pid=5594 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:24:36.286858 kernel: audit: type=1105 audit(1755048276.279:460): pid=5594 uid=0 
auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 01:24:36.254000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Aug 13 01:24:36.279000 audit[5594]: USER_START pid=5594 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 01:24:36.283000 audit[5599]: CRED_ACQ pid=5599 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 01:24:36.291214 sshd[5594]: Accepted publickey for core from 139.178.68.195 port 33150 ssh2: RSA SHA256:u64cNhEl5z4ZpRddmMSh52OngMoVu/4Kwwey+GxmIa8 Aug 13 01:24:36.268339 sshd[5594]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 01:24:36.277553 systemd-logind[1346]: New session 13 of user core. Aug 13 01:24:36.277854 systemd[1]: Started session-13.scope. Aug 13 01:24:39.506893 kubelet[2284]: E0813 01:24:39.502646 2284 kubelet.go:2512] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.194s" Aug 13 01:24:39.900083 systemd[1]: run-containerd-runc-k8s.io-332af55ca467e9a4d872eb9cf8c615c87c81d0c68cd5256c15dcbe20daf3a3fc-runc.ngy3te.mount: Deactivated successfully. Aug 13 01:24:42.274000 audit[5677]: NETFILTER_CFG table=filter:125 family=2 entries=9 op=nft_register_rule pid=5677 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 01:24:42.332338 kernel: kauditd_printk_skb: 1 callbacks suppressed Aug 13 01:24:42.339773 kernel: audit: type=1325 audit(1755048282.274:462): table=filter:125 family=2 entries=9 op=nft_register_rule pid=5677 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 01:24:42.341960 kernel: audit: type=1300 audit(1755048282.274:462): arch=c000003e syscall=46 success=yes exit=3016 a0=3 a1=7ffd024c12a0 a2=0 a3=7ffd024c128c items=0 ppid=2387 pid=5677 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:24:42.343258 kernel: audit: type=1327 audit(1755048282.274:462): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 01:24:42.343343 kernel: audit: type=1325 audit(1755048282.287:463): table=nat:126 family=2 entries=31 op=nft_register_chain pid=5677 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 01:24:42.344670 kernel: audit: type=1300 audit(1755048282.287:463): arch=c000003e syscall=46 success=yes exit=10884 a0=3 a1=7ffd024c12a0 a2=0 a3=7ffd024c128c items=0 ppid=2387 pid=5677 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:24:42.346555 kernel: audit: type=1327 audit(1755048282.287:463): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 
01:24:42.274000 audit[5677]: SYSCALL arch=c000003e syscall=46 success=yes exit=3016 a0=3 a1=7ffd024c12a0 a2=0 a3=7ffd024c128c items=0 ppid=2387 pid=5677 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:24:42.274000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 01:24:42.287000 audit[5677]: NETFILTER_CFG table=nat:126 family=2 entries=31 op=nft_register_chain pid=5677 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 01:24:42.287000 audit[5677]: SYSCALL arch=c000003e syscall=46 success=yes exit=10884 a0=3 a1=7ffd024c12a0 a2=0 a3=7ffd024c128c items=0 ppid=2387 pid=5677 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:24:42.287000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 01:24:43.341764 sshd[5594]: pam_unix(sshd:session): session closed for user core Aug 13 01:24:43.379000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-139.178.70.100:22-139.178.68.195:60498 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:24:43.388309 kernel: audit: type=1130 audit(1755048283.379:464): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-139.178.70.100:22-139.178.68.195:60498 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:24:43.379609 systemd[1]: Started sshd@11-139.178.70.100:22-139.178.68.195:60498.service. Aug 13 01:24:43.391000 audit[5594]: USER_END pid=5594 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 01:24:43.395000 audit[5594]: CRED_DISP pid=5594 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 01:24:43.400468 kernel: audit: type=1106 audit(1755048283.391:465): pid=5594 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 01:24:43.400516 kernel: audit: type=1104 audit(1755048283.395:466): pid=5594 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 01:24:43.400000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-139.178.70.100:22-139.178.68.195:33150 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Aug 13 01:24:43.401685 systemd[1]: sshd@10-139.178.70.100:22-139.178.68.195:33150.service: Deactivated successfully. Aug 13 01:24:43.406002 kernel: audit: type=1131 audit(1755048283.400:467): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-139.178.70.100:22-139.178.68.195:33150 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:24:43.405109 systemd[1]: session-13.scope: Deactivated successfully. Aug 13 01:24:43.405224 systemd-logind[1346]: Session 13 logged out. Waiting for processes to exit. Aug 13 01:24:43.406528 systemd-logind[1346]: Removed session 13. Aug 13 01:24:43.454000 audit[5678]: USER_ACCT pid=5678 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 01:24:43.456046 sshd[5678]: Accepted publickey for core from 139.178.68.195 port 60498 ssh2: RSA SHA256:u64cNhEl5z4ZpRddmMSh52OngMoVu/4Kwwey+GxmIa8 Aug 13 01:24:43.455000 audit[5678]: CRED_ACQ pid=5678 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 01:24:43.455000 audit[5678]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe662f5580 a2=3 a3=0 items=0 ppid=1 pid=5678 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:24:43.455000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Aug 13 01:24:43.457920 sshd[5678]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 01:24:43.461316 systemd-logind[1346]: New session 14 of user core. Aug 13 01:24:43.461609 systemd[1]: Started session-14.scope. 
Aug 13 01:24:43.462000 audit[5678]: USER_START pid=5678 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 01:24:43.463000 audit[5683]: CRED_ACQ pid=5683 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 01:24:44.021884 sshd[5678]: pam_unix(sshd:session): session closed for user core Aug 13 01:24:44.020000 audit[5678]: USER_END pid=5678 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 01:24:44.020000 audit[5678]: CRED_DISP pid=5678 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 01:24:44.026000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-139.178.70.100:22-139.178.68.195:60498 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:24:44.028000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-139.178.70.100:22-139.178.68.195:60514 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:24:44.027941 systemd[1]: sshd@11-139.178.70.100:22-139.178.68.195:60498.service: Deactivated successfully. Aug 13 01:24:44.029532 systemd-logind[1346]: Session 14 logged out. Waiting for processes to exit. Aug 13 01:24:44.030310 systemd[1]: Started sshd@12-139.178.70.100:22-139.178.68.195:60514.service. Aug 13 01:24:44.031004 systemd[1]: session-14.scope: Deactivated successfully. Aug 13 01:24:44.031547 systemd-logind[1346]: Removed session 14. 
Aug 13 01:24:44.088000 audit[5691]: USER_ACCT pid=5691 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 01:24:44.089000 audit[5691]: CRED_ACQ pid=5691 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 01:24:44.089000 audit[5691]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fffdf17b020 a2=3 a3=0 items=0 ppid=1 pid=5691 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:24:44.089000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Aug 13 01:24:44.091892 sshd[5691]: Accepted publickey for core from 139.178.68.195 port 60514 ssh2: RSA SHA256:u64cNhEl5z4ZpRddmMSh52OngMoVu/4Kwwey+GxmIa8 Aug 13 01:24:44.092522 sshd[5691]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 01:24:44.096491 systemd-logind[1346]: New session 15 of user core. Aug 13 01:24:44.096866 systemd[1]: Started session-15.scope. Aug 13 01:24:44.101000 audit[5691]: USER_START pid=5691 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 01:24:44.101000 audit[5694]: CRED_ACQ pid=5694 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 01:25:27.586582 kernel: kauditd_printk_skb: 18 callbacks suppressed Aug 13 01:25:27.728388 kernel: audit: type=1325 audit(1755048327.568:482): table=filter:127 family=2 entries=20 op=nft_register_rule pid=5730 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 01:25:27.738823 kernel: audit: type=1300 audit(1755048327.568:482): arch=c000003e syscall=46 success=yes exit=11944 a0=3 a1=7ffc50ad3150 a2=0 a3=7ffc50ad313c items=0 ppid=2387 pid=5730 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:25:27.738875 kernel: audit: type=1327 audit(1755048327.568:482): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 01:25:27.738894 kernel: audit: type=1325 audit(1755048327.586:483): table=nat:128 family=2 entries=26 op=nft_register_rule pid=5730 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 01:25:27.740577 kernel: audit: type=1300 audit(1755048327.586:483): arch=c000003e syscall=46 success=yes exit=8076 a0=3 a1=7ffc50ad3150 a2=0 a3=0 items=0 ppid=2387 pid=5730 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:25:27.743103 kernel: audit: type=1327 audit(1755048327.586:483): 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 01:25:27.568000 audit[5730]: NETFILTER_CFG table=filter:127 family=2 entries=20 op=nft_register_rule pid=5730 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 01:25:27.568000 audit[5730]: SYSCALL arch=c000003e syscall=46 success=yes exit=11944 a0=3 a1=7ffc50ad3150 a2=0 a3=7ffc50ad313c items=0 ppid=2387 pid=5730 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:25:27.568000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 01:25:27.586000 audit[5730]: NETFILTER_CFG table=nat:128 family=2 entries=26 op=nft_register_rule pid=5730 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 01:25:27.586000 audit[5730]: SYSCALL arch=c000003e syscall=46 success=yes exit=8076 a0=3 a1=7ffc50ad3150 a2=0 a3=0 items=0 ppid=2387 pid=5730 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:25:27.586000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 01:25:31.981000 audit[5739]: NETFILTER_CFG table=filter:129 family=2 entries=32 op=nft_register_rule pid=5739 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 01:25:32.082127 kernel: audit: type=1325 audit(1755048331.981:484): table=filter:129 family=2 entries=32 op=nft_register_rule pid=5739 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 01:25:32.089592 kernel: audit: type=1300 audit(1755048331.981:484): arch=c000003e syscall=46 success=yes exit=11944 a0=3 a1=7ffe85cc3b90 a2=0 a3=7ffe85cc3b7c items=0 ppid=2387 pid=5739 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:25:32.091695 kernel: audit: type=1327 audit(1755048331.981:484): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 01:25:32.091723 kernel: audit: type=1325 audit(1755048331.996:485): table=nat:130 family=2 entries=26 op=nft_register_rule pid=5739 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 01:25:31.981000 audit[5739]: SYSCALL arch=c000003e syscall=46 success=yes exit=11944 a0=3 a1=7ffe85cc3b90 a2=0 a3=7ffe85cc3b7c items=0 ppid=2387 pid=5739 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:25:31.981000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 01:25:31.996000 audit[5739]: NETFILTER_CFG table=nat:130 family=2 entries=26 op=nft_register_rule pid=5739 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 01:25:31.996000 audit[5739]: SYSCALL arch=c000003e syscall=46 success=yes exit=8076 a0=3 a1=7ffe85cc3b90 a2=0 a3=0 items=0 ppid=2387 pid=5739 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 
tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:25:31.996000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 01:25:40.973282 sshd[5691]: pam_unix(sshd:session): session closed for user core Aug 13 01:25:41.072605 kernel: kauditd_printk_skb: 2 callbacks suppressed Aug 13 01:25:41.076500 kernel: audit: type=1130 audit(1755048341.030:486): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-139.178.70.100:22-139.178.68.195:41416 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:25:41.078295 kernel: audit: type=1106 audit(1755048341.035:487): pid=5691 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 01:25:41.078319 kernel: audit: type=1104 audit(1755048341.041:488): pid=5691 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 01:25:41.078339 kernel: audit: type=1131 audit(1755048341.043:489): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-139.178.70.100:22-139.178.68.195:60514 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:25:41.030000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-139.178.70.100:22-139.178.68.195:41416 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:25:41.035000 audit[5691]: USER_END pid=5691 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 01:25:41.041000 audit[5691]: CRED_DISP pid=5691 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 01:25:41.043000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-139.178.70.100:22-139.178.68.195:60514 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:25:41.032295 systemd[1]: Started sshd@13-139.178.70.100:22-139.178.68.195:41416.service. Aug 13 01:25:41.044663 systemd[1]: sshd@12-139.178.70.100:22-139.178.68.195:60514.service: Deactivated successfully. Aug 13 01:25:41.048183 systemd[1]: session-15.scope: Deactivated successfully. Aug 13 01:25:41.048526 systemd-logind[1346]: Session 15 logged out. Waiting for processes to exit. Aug 13 01:25:41.049109 systemd-logind[1346]: Removed session 15. 
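The PROCTITLE audit records encode the process command line as hex, with NUL bytes separating the arguments. The two values repeated above decode to "sshd: core [priv]" and "iptables-restore -w 5 -W 100000 --noflush --counters" (the long-running parent pid 2387 issuing these restores is most likely kube-proxy syncing its rules, which also accounts for the steady stream of NETFILTER_CFG events). A small decoding sketch:

    # PROCTITLE is hex-encoded argv with NUL separators; decode and re-join.
    def decode_proctitle(hexstr: str) -> str:
        return bytes.fromhex(hexstr).decode("utf-8", "replace").replace("\x00", " ")

    print(decode_proctitle("737368643A20636F7265205B707269765D"))
    # -> sshd: core [priv]
    print(decode_proctitle(
        "69707461626C65732D726573746F7265002D770035002D5700"
        "313030303030002D2D6E6F666C757368002D2D636F756E74657273"))
    # -> iptables-restore -w 5 -W 100000 --noflush --counters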
Aug 13 01:25:41.297000 audit[5753]: USER_ACCT pid=5753 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 01:25:41.327384 kernel: audit: type=1101 audit(1755048341.297:490): pid=5753 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 01:25:41.354353 kernel: audit: type=1103 audit(1755048341.325:491): pid=5753 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 01:25:41.360512 kernel: audit: type=1006 audit(1755048341.325:492): pid=5753 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=16 res=1 Aug 13 01:25:41.360573 kernel: audit: type=1300 audit(1755048341.325:492): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fffe4f85980 a2=3 a3=0 items=0 ppid=1 pid=5753 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:25:41.363256 kernel: audit: type=1327 audit(1755048341.325:492): proctitle=737368643A20636F7265205B707269765D Aug 13 01:25:41.325000 audit[5753]: CRED_ACQ pid=5753 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 01:25:41.325000 audit[5753]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fffe4f85980 a2=3 a3=0 items=0 ppid=1 pid=5753 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:25:41.325000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Aug 13 01:25:41.437702 sshd[5753]: Accepted publickey for core from 139.178.68.195 port 41416 ssh2: RSA SHA256:u64cNhEl5z4ZpRddmMSh52OngMoVu/4Kwwey+GxmIa8 Aug 13 01:25:41.339113 sshd[5753]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 01:25:41.548296 systemd-logind[1346]: New session 16 of user core. Aug 13 01:25:41.558527 systemd[1]: Started session-16.scope. 
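The SHA256:u64cNhEl... string in each "Accepted publickey" line is OpenSSH's key fingerprint: the SHA-256 digest of the raw public-key blob, base64-encoded with the trailing "=" padding stripped. A sketch of that computation (the authorized_keys content shown is hypothetical):

    import base64, hashlib

    # OpenSSH-style SHA256 fingerprint: sha256 over the decoded key blob,
    # base64-encoded without padding.
    def ssh_fingerprint(authorized_keys_line: str) -> str:
        blob = base64.b64decode(authorized_keys_line.split()[1])
        return "SHA256:" + base64.b64encode(hashlib.sha256(blob).digest()).decode().rstrip("=")

    # Usage (key material is hypothetical; the core user's real key would
    # reproduce the SHA256:u64cNhEl5z4ZpRddmMSh52OngMoVu/4Kwwey+GxmIa8 above):
    #   ssh_fingerprint("ssh-rsa AAAAB3NzaC1yc2EAAADAQAB... core@host")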
Aug 13 01:25:41.683860 kernel: audit: type=1105 audit(1755048341.678:493): pid=5753 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 01:25:41.678000 audit[5753]: USER_START pid=5753 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 01:25:41.741000 audit[5758]: CRED_ACQ pid=5758 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 01:25:42.778000 audit[5761]: NETFILTER_CFG table=filter:131 family=2 entries=33 op=nft_register_rule pid=5761 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 01:25:42.778000 audit[5761]: SYSCALL arch=c000003e syscall=46 success=yes exit=12688 a0=3 a1=7fffa5aaabb0 a2=0 a3=7fffa5aaab9c items=0 ppid=2387 pid=5761 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:25:42.778000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 01:25:42.783000 audit[5761]: NETFILTER_CFG table=nat:132 family=2 entries=31 op=nft_unregister_chain pid=5761 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 01:25:42.783000 audit[5761]: SYSCALL arch=c000003e syscall=46 success=yes exit=7564 a0=3 a1=7fffa5aaabb0 a2=0 a3=0 items=0 ppid=2387 pid=5761 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:25:42.783000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 01:25:44.433000 audit[5766]: NETFILTER_CFG table=filter:133 family=2 entries=34 op=nft_register_rule pid=5766 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 01:25:44.433000 audit[5766]: SYSCALL arch=c000003e syscall=46 success=yes exit=12688 a0=3 a1=7ffda4b23540 a2=0 a3=7ffda4b2352c items=0 ppid=2387 pid=5766 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:25:44.433000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 01:25:44.439000 audit[5766]: NETFILTER_CFG table=nat:134 family=2 entries=60 op=nft_unregister_chain pid=5766 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 01:25:44.439000 audit[5766]: SYSCALL arch=c000003e syscall=46 success=yes exit=16116 a0=3 a1=7ffda4b23540 a2=0 a3=7ffda4b2352c items=0 ppid=2387 pid=5766 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:25:44.439000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 01:25:45.492000 audit[5769]: NETFILTER_CFG table=filter:135 family=2 entries=37 op=nft_register_rule pid=5769 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 01:25:45.492000 audit[5769]: SYSCALL arch=c000003e syscall=46 success=yes exit=14920 a0=3 a1=7ffca90ea080 a2=0 a3=7ffca90ea06c items=0 ppid=2387 pid=5769 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:25:45.492000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 01:25:45.502000 audit[5769]: NETFILTER_CFG table=nat:136 family=2 entries=39 op=nft_unregister_chain pid=5769 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 01:25:45.502000 audit[5769]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffca90ea080 a2=0 a3=0 items=0 ppid=2387 pid=5769 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:25:45.502000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 01:25:46.663000 audit[5774]: NETFILTER_CFG table=filter:137 family=2 entries=40 op=nft_register_rule pid=5774 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 01:25:46.776612 kernel: kauditd_printk_skb: 19 callbacks suppressed Aug 13 01:25:46.784430 kernel: audit: type=1325 audit(1755048346.663:501): table=filter:137 family=2 entries=40 op=nft_register_rule pid=5774 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 01:25:46.785503 kernel: audit: type=1300 audit(1755048346.663:501): arch=c000003e syscall=46 success=yes exit=14920 a0=3 a1=7fff934ff610 a2=0 a3=7fff934ff5fc items=0 ppid=2387 pid=5774 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:25:46.785527 kernel: audit: type=1327 audit(1755048346.663:501): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 01:25:46.787313 kernel: audit: type=1325 audit(1755048346.678:502): table=nat:138 family=2 entries=30 op=nft_unregister_chain pid=5774 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 01:25:46.787335 kernel: audit: type=1300 audit(1755048346.678:502): arch=c000003e syscall=46 success=yes exit=7940 a0=3 a1=7fff934ff610 a2=0 a3=0 items=0 ppid=2387 pid=5774 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:25:46.789497 kernel: audit: type=1327 audit(1755048346.678:502): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 01:25:46.663000 audit[5774]: SYSCALL arch=c000003e syscall=46 success=yes exit=14920 a0=3 a1=7fff934ff610 a2=0 a3=7fff934ff5fc items=0 ppid=2387 pid=5774 auid=4294967295 
uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:25:46.663000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 01:25:46.678000 audit[5774]: NETFILTER_CFG table=nat:138 family=2 entries=30 op=nft_unregister_chain pid=5774 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 01:25:46.678000 audit[5774]: SYSCALL arch=c000003e syscall=46 success=yes exit=7940 a0=3 a1=7fff934ff610 a2=0 a3=0 items=0 ppid=2387 pid=5774 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:25:46.678000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 01:25:50.755000 audit[5778]: NETFILTER_CFG table=filter:139 family=2 entries=41 op=nft_register_rule pid=5778 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 01:25:50.985121 kernel: audit: type=1325 audit(1755048350.755:503): table=filter:139 family=2 entries=41 op=nft_register_rule pid=5778 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 01:25:51.093213 kernel: audit: type=1300 audit(1755048350.755:503): arch=c000003e syscall=46 success=yes exit=15664 a0=3 a1=7fff14e1c660 a2=0 a3=7fff14e1c64c items=0 ppid=2387 pid=5778 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:25:51.103780 kernel: audit: type=1327 audit(1755048350.755:503): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 01:25:51.112637 kernel: audit: type=1325 audit(1755048350.767:504): table=nat:140 family=2 entries=23 op=nft_unregister_chain pid=5778 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 01:25:50.755000 audit[5778]: SYSCALL arch=c000003e syscall=46 success=yes exit=15664 a0=3 a1=7fff14e1c660 a2=0 a3=7fff14e1c64c items=0 ppid=2387 pid=5778 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:25:50.755000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 01:25:50.767000 audit[5778]: NETFILTER_CFG table=nat:140 family=2 entries=23 op=nft_unregister_chain pid=5778 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 01:25:50.767000 audit[5778]: SYSCALL arch=c000003e syscall=46 success=yes exit=4492 a0=3 a1=7fff14e1c660 a2=0 a3=0 items=0 ppid=2387 pid=5778 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:25:50.767000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 01:25:53.485068 kernel: kauditd_printk_skb: 2 callbacks suppressed Aug 13 01:25:53.544308 kernel: audit: type=1325 audit(1755048353.470:505): table=filter:141 family=2 
entries=43 op=nft_register_rule pid=5780 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 01:25:53.548053 kernel: audit: type=1300 audit(1755048353.470:505): arch=c000003e syscall=46 success=yes exit=16408 a0=3 a1=7ffcb6e49920 a2=0 a3=7ffcb6e4990c items=0 ppid=2387 pid=5780 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:25:53.548080 kernel: audit: type=1327 audit(1755048353.470:505): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 01:25:53.548100 kernel: audit: type=1325 audit(1755048353.484:506): table=nat:142 family=2 entries=21 op=nft_unregister_chain pid=5780 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 01:25:53.548117 kernel: audit: type=1300 audit(1755048353.484:506): arch=c000003e syscall=46 success=yes exit=3724 a0=3 a1=7ffcb6e49920 a2=0 a3=0 items=0 ppid=2387 pid=5780 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:25:53.550177 kernel: audit: type=1327 audit(1755048353.484:506): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 01:25:53.470000 audit[5780]: NETFILTER_CFG table=filter:141 family=2 entries=43 op=nft_register_rule pid=5780 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 01:25:53.470000 audit[5780]: SYSCALL arch=c000003e syscall=46 success=yes exit=16408 a0=3 a1=7ffcb6e49920 a2=0 a3=7ffcb6e4990c items=0 ppid=2387 pid=5780 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:25:53.470000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 01:25:53.484000 audit[5780]: NETFILTER_CFG table=nat:142 family=2 entries=21 op=nft_unregister_chain pid=5780 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 01:25:53.484000 audit[5780]: SYSCALL arch=c000003e syscall=46 success=yes exit=3724 a0=3 a1=7ffcb6e49920 a2=0 a3=0 items=0 ppid=2387 pid=5780 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:25:53.484000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 01:25:54.401000 audit[5782]: NETFILTER_CFG table=filter:143 family=2 entries=33 op=nft_register_rule pid=5782 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 01:25:54.505920 kernel: audit: type=1325 audit(1755048354.401:507): table=filter:143 family=2 entries=33 op=nft_register_rule pid=5782 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 01:25:54.516775 kernel: audit: type=1300 audit(1755048354.401:507): arch=c000003e syscall=46 success=yes exit=8224 a0=3 a1=7fffa70ddbb0 a2=0 a3=7fffa70ddb9c items=0 ppid=2387 pid=5782 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:25:54.519361 kernel: audit: type=1327 audit(1755048354.401:507): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 01:25:54.519387 kernel: audit: type=1325 audit(1755048354.414:508): table=nat:144 family=2 entries=103 op=nft_register_chain pid=5782 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 01:25:54.401000 audit[5782]: SYSCALL arch=c000003e syscall=46 success=yes exit=8224 a0=3 a1=7fffa70ddbb0 a2=0 a3=7fffa70ddb9c items=0 ppid=2387 pid=5782 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:25:54.401000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 01:25:54.414000 audit[5782]: NETFILTER_CFG table=nat:144 family=2 entries=103 op=nft_register_chain pid=5782 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 01:25:54.414000 audit[5782]: SYSCALL arch=c000003e syscall=46 success=yes exit=45868 a0=3 a1=7fffa70ddbb0 a2=0 a3=7fffa70ddb9c items=0 ppid=2387 pid=5782 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:25:54.414000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 01:25:54.469000 audit[5785]: NETFILTER_CFG table=filter:145 family=2 entries=22 op=nft_register_rule pid=5785 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 01:25:54.469000 audit[5785]: SYSCALL arch=c000003e syscall=46 success=yes exit=8224 a0=3 a1=7ffd7a1d37c0 a2=0 a3=7ffd7a1d37ac items=0 ppid=2387 pid=5785 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:25:54.469000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 01:25:54.479000 audit[5785]: NETFILTER_CFG table=nat:146 family=2 entries=36 op=nft_register_rule pid=5785 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Aug 13 01:25:54.479000 audit[5785]: SYSCALL arch=c000003e syscall=46 success=yes exit=11916 a0=3 a1=7ffd7a1d37c0 a2=0 a3=7ffd7a1d37ac items=0 ppid=2387 pid=5785 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:25:54.479000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Aug 13 01:28:58.656674 update_engine[1349]: I0813 01:28:58.648041 1349 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Aug 13 01:28:58.656674 update_engine[1349]: I0813 01:28:58.650607 1349 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Aug 13 01:28:58.656674 update_engine[1349]: I0813 01:28:58.660506 1349 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Aug 13 01:28:58.771618 update_engine[1349]: I0813 
01:28:58.669180 1349 omaha_request_params.cc:62] Current group set to lts Aug 13 01:28:58.771618 update_engine[1349]: I0813 01:28:58.704938 1349 update_attempter.cc:499] Already updated boot flags. Skipping. Aug 13 01:28:58.771618 update_engine[1349]: I0813 01:28:58.704950 1349 update_attempter.cc:643] Scheduling an action processor start. Aug 13 01:28:58.771618 update_engine[1349]: I0813 01:28:58.704965 1349 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Aug 13 01:28:58.771618 update_engine[1349]: I0813 01:28:58.709183 1349 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Aug 13 01:28:58.771618 update_engine[1349]: I0813 01:28:58.712646 1349 omaha_request_action.cc:270] Posting an Omaha request to disabled Aug 13 01:28:58.771618 update_engine[1349]: I0813 01:28:58.712652 1349 omaha_request_action.cc:271] Request: Aug 13 01:28:58.771618 update_engine[1349]: Aug 13 01:28:58.771618 update_engine[1349]: Aug 13 01:28:58.771618 update_engine[1349]: Aug 13 01:28:58.771618 update_engine[1349]: Aug 13 01:28:58.771618 update_engine[1349]: Aug 13 01:28:58.771618 update_engine[1349]: Aug 13 01:28:58.771618 update_engine[1349]: Aug 13 01:28:58.771618 update_engine[1349]: Aug 13 01:28:58.771618 update_engine[1349]: I0813 01:28:58.712658 1349 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Aug 13 01:28:58.799404 locksmithd[1405]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Aug 13 01:28:58.813128 update_engine[1349]: I0813 01:28:58.813102 1349 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Aug 13 01:28:58.817165 update_engine[1349]: E0813 01:28:58.817149 1349 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Aug 13 01:28:58.817277 update_engine[1349]: I0813 01:28:58.817207 1349 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Aug 13 01:29:09.564705 update_engine[1349]: I0813 01:29:09.544510 1349 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Aug 13 01:29:09.684393 update_engine[1349]: I0813 01:29:09.595728 1349 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Aug 13 01:29:09.684393 update_engine[1349]: E0813 01:29:09.598940 1349 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Aug 13 01:29:09.684393 update_engine[1349]: I0813 01:29:09.605338 1349 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Aug 13 01:29:19.573509 update_engine[1349]: I0813 01:29:19.548508 1349 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Aug 13 01:29:19.711646 update_engine[1349]: I0813 01:29:19.620709 1349 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Aug 13 01:29:19.711646 update_engine[1349]: E0813 01:29:19.633427 1349 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Aug 13 01:29:19.711646 update_engine[1349]: I0813 01:29:19.642315 1349 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 Aug 13 01:29:29.569513 update_engine[1349]: I0813 01:29:29.551407 1349 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Aug 13 01:29:29.716278 update_engine[1349]: I0813 01:29:29.596372 1349 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Aug 13 01:29:29.716278 update_engine[1349]: E0813 01:29:29.600951 1349 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Aug 13 01:29:29.716278 update_engine[1349]: I0813 01:29:29.601009 1349 
libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Aug 13 01:29:29.716278 update_engine[1349]: I0813 01:29:29.601015 1349 omaha_request_action.cc:621] Omaha request response: Aug 13 01:29:29.716278 update_engine[1349]: E0813 01:29:29.609817 1349 omaha_request_action.cc:640] Omaha request network transfer failed. Aug 13 01:29:29.716278 update_engine[1349]: I0813 01:29:29.612764 1349 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. Aug 13 01:29:29.716278 update_engine[1349]: I0813 01:29:29.612771 1349 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Aug 13 01:29:29.716278 update_engine[1349]: I0813 01:29:29.612774 1349 update_attempter.cc:306] Processing Done. Aug 13 01:29:29.716278 update_engine[1349]: E0813 01:29:29.615337 1349 update_attempter.cc:619] Update failed. Aug 13 01:29:29.716278 update_engine[1349]: I0813 01:29:29.615345 1349 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse Aug 13 01:29:29.716278 update_engine[1349]: I0813 01:29:29.615346 1349 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) Aug 13 01:29:29.716278 update_engine[1349]: I0813 01:29:29.615350 1349 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. Aug 13 01:29:29.716278 update_engine[1349]: I0813 01:29:29.665150 1349 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Aug 13 01:29:29.716278 update_engine[1349]: I0813 01:29:29.672031 1349 omaha_request_action.cc:270] Posting an Omaha request to disabled Aug 13 01:29:29.716278 update_engine[1349]: I0813 01:29:29.672039 1349 omaha_request_action.cc:271] Request: Aug 13 01:29:29.716278 update_engine[1349]: Aug 13 01:29:29.716278 update_engine[1349]: Aug 13 01:29:29.731768 update_engine[1349]: Aug 13 01:29:29.731768 update_engine[1349]: Aug 13 01:29:29.731768 update_engine[1349]: Aug 13 01:29:29.731768 update_engine[1349]: Aug 13 01:29:29.731768 update_engine[1349]: I0813 01:29:29.672046 1349 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Aug 13 01:29:29.731768 update_engine[1349]: I0813 01:29:29.672168 1349 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Aug 13 01:29:29.731768 update_engine[1349]: E0813 01:29:29.672230 1349 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Aug 13 01:29:29.731768 update_engine[1349]: I0813 01:29:29.672264 1349 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Aug 13 01:29:29.731768 update_engine[1349]: I0813 01:29:29.672271 1349 omaha_request_action.cc:621] Omaha request response: Aug 13 01:29:29.731768 update_engine[1349]: I0813 01:29:29.672274 1349 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Aug 13 01:29:29.731768 update_engine[1349]: I0813 01:29:29.672276 1349 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Aug 13 01:29:29.731768 update_engine[1349]: I0813 01:29:29.672277 1349 update_attempter.cc:306] Processing Done. Aug 13 01:29:29.731768 update_engine[1349]: I0813 01:29:29.672279 1349 update_attempter.cc:310] Error event sent. 
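The repeated "Could not resolve host: disabled" errors are benign: the Omaha update endpoint has been configured as the literal string "disabled" (on Flatcar this is typically done with SERVER=disabled in /etc/flatcar/update.conf; GROUP=lts would likewise match the "Current group set to lts" line above), so every check fails at DNS resolution. The fetcher makes a few attempts roughly ten seconds apart, gives up, maps the failure to kActionCodeOmahaErrorInHTTPResponse, and the attempter simply schedules the next check. A rough sketch of that retry shape, not the actual update_engine implementation:

    import time
    import urllib.request
    from urllib.error import URLError

    # Illustrative fetch-with-retries loop matching the libcurl_http_fetcher
    # lines above: several attempts ~10 s apart, then a terminal failure.
    def post_omaha_request(url: str, body: bytes, retries: int = 3, delay_s: int = 10) -> bytes:
        for attempt in range(retries + 1):
            try:
                return urllib.request.urlopen(url, data=body, timeout=30).read()
            except URLError as err:  # e.g. the host "disabled" cannot be resolved
                if attempt < retries:
                    print(f"No HTTP response, retry {attempt + 1}: {err.reason}")
                    time.sleep(delay_s)
        raise RuntimeError("Omaha request network transfer failed")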
Aug 13 01:29:29.731768 update_engine[1349]: I0813 01:29:29.675601 1349 update_check_scheduler.cc:74] Next update check in 48m0s Aug 13 01:29:29.765781 locksmithd[1405]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0 Aug 13 01:29:29.791782 locksmithd[1405]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0 Aug 13 01:29:34.722432 sshd[5753]: pam_unix(sshd:session): session closed for user core Aug 13 01:29:34.833097 kernel: kauditd_printk_skb: 8 callbacks suppressed Aug 13 01:29:34.840177 kernel: audit: type=1130 audit(1755048574.784:511): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-139.178.70.100:22-139.178.68.195:58538 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:29:34.846806 kernel: audit: type=1106 audit(1755048574.787:512): pid=5753 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 01:29:34.846858 kernel: audit: type=1104 audit(1755048574.791:513): pid=5753 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 01:29:34.848855 kernel: audit: type=1131 audit(1755048574.797:514): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-139.178.70.100:22-139.178.68.195:41416 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:29:34.784000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-139.178.70.100:22-139.178.68.195:58538 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:29:34.787000 audit[5753]: USER_END pid=5753 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 01:29:34.791000 audit[5753]: CRED_DISP pid=5753 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 01:29:34.797000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-139.178.70.100:22-139.178.68.195:41416 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:29:34.786509 systemd[1]: Started sshd@14-139.178.70.100:22-139.178.68.195:58538.service. Aug 13 01:29:34.799028 systemd[1]: sshd@13-139.178.70.100:22-139.178.68.195:41416.service: Deactivated successfully. Aug 13 01:29:34.803384 systemd[1]: session-16.scope: Deactivated successfully. Aug 13 01:29:34.803766 systemd-logind[1346]: Session 16 logged out. Waiting for processes to exit. Aug 13 01:29:34.804367 systemd-logind[1346]: Removed session 16. 
Aug 13 01:29:35.058000 audit[5876]: USER_ACCT pid=5876 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 01:29:35.070456 kernel: audit: type=1101 audit(1755048575.058:515): pid=5876 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 01:29:35.083846 kernel: audit: type=1103 audit(1755048575.063:516): pid=5876 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 01:29:35.083879 kernel: audit: type=1006 audit(1755048575.067:517): pid=5876 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=17 res=1 Aug 13 01:29:35.085392 kernel: audit: type=1300 audit(1755048575.067:517): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fffaff6afd0 a2=3 a3=0 items=0 ppid=1 pid=5876 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:29:35.085420 kernel: audit: type=1327 audit(1755048575.067:517): proctitle=737368643A20636F7265205B707269765D Aug 13 01:29:35.063000 audit[5876]: CRED_ACQ pid=5876 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 01:29:35.067000 audit[5876]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fffaff6afd0 a2=3 a3=0 items=0 ppid=1 pid=5876 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Aug 13 01:29:35.067000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Aug 13 01:29:35.108864 sshd[5876]: Accepted publickey for core from 139.178.68.195 port 58538 ssh2: RSA SHA256:u64cNhEl5z4ZpRddmMSh52OngMoVu/4Kwwey+GxmIa8 Aug 13 01:29:35.077061 sshd[5876]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 13 01:29:35.116121 systemd[1]: Started session-17.scope. Aug 13 01:29:35.116649 systemd-logind[1346]: New session 17 of user core. 
Aug 13 01:29:35.120000 audit[5876]: USER_START pid=5876 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 01:29:35.124000 audit[5882]: CRED_ACQ pid=5882 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 01:29:35.126757 kernel: audit: type=1105 audit(1755048575.120:518): pid=5876 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 01:30:01.553007 env[1382]: time="2025-08-13T01:30:01.532587779Z" level=info msg="shim disconnected" id=5b82c212e505846ec70e0b428a29af1e1d0d96e3e3bef0233f323604b1c327fb Aug 13 01:30:01.553007 env[1382]: time="2025-08-13T01:30:01.532618130Z" level=warning msg="cleaning up after shim disconnected" id=5b82c212e505846ec70e0b428a29af1e1d0d96e3e3bef0233f323604b1c327fb namespace=k8s.io Aug 13 01:30:01.553007 env[1382]: time="2025-08-13T01:30:01.532626302Z" level=info msg="cleaning up dead shim" Aug 13 01:30:01.553007 env[1382]: time="2025-08-13T01:30:01.541901500Z" level=warning msg="cleanup warnings time=\"2025-08-13T01:30:01Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=5907 runtime=io.containerd.runc.v2\n" Aug 13 01:30:01.622704 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5b82c212e505846ec70e0b428a29af1e1d0d96e3e3bef0233f323604b1c327fb-rootfs.mount: Deactivated successfully. 
Aug 13 01:30:05.817918 kubelet[2284]: W0813 01:30:03.828287 2284 reflector.go:484] object-"calico-system"/"node-certs": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Aug 13 01:30:05.817918 kubelet[2284]: W0813 01:30:04.278758 2284 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.Node ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Aug 13 01:30:07.300968 kubelet[2284]: E0813 01:30:05.652870 2284 log.go:32] "ListContainers with filter from runtime service failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" filter="&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},}" Aug 13 01:30:07.370836 kubelet[2284]: E0813 01:30:06.247068 2284 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" filter="nil" Aug 13 01:30:07.793281 kubelet[2284]: W0813 01:30:04.482676 2284 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.Service ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Aug 13 01:30:08.065402 kubelet[2284]: W0813 01:30:04.540008 2284 reflector.go:484] object-"calico-system"/"goldmane-key-pair": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Aug 13 01:30:08.089057 kubelet[2284]: W0813 01:30:04.614575 2284 reflector.go:484] object-"kube-system"/"coredns": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Aug 13 01:30:08.100694 kubelet[2284]: W0813 01:30:04.717544 2284 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.RuntimeClass ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Aug 13 01:30:08.100781 kubelet[2284]: W0813 01:30:04.745746 2284 reflector.go:484] object-"calico-apiserver"/"calico-apiserver-certs": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Aug 13 01:30:08.100842 kubelet[2284]: W0813 01:30:04.810960 2284 reflector.go:484] object-"calico-system"/"typha-certs": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Aug 13 01:30:08.104612 kubelet[2284]: W0813 01:30:04.817041 2284 reflector.go:484] object-"calico-system"/"cni-config": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Aug 13 01:30:08.104612 kubelet[2284]: W0813 01:30:05.002327 2284 reflector.go:484] object-"calico-system"/"tigera-ca-bundle": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the 
request from succeeding Aug 13 01:30:08.104612 kubelet[2284]: W0813 01:30:05.062653 2284 reflector.go:484] object-"calico-apiserver"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Aug 13 01:30:08.104612 kubelet[2284]: W0813 01:30:05.079842 2284 reflector.go:484] pkg/kubelet/config/apiserver.go:66: watch of *v1.Pod ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Aug 13 01:30:08.104612 kubelet[2284]: W0813 01:30:05.189376 2284 reflector.go:484] object-"calico-system"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Aug 13 01:30:08.104612 kubelet[2284]: W0813 01:29:15.022448 2284 reflector.go:484] object-"calico-system"/"whisker-ca-bundle": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Aug 13 01:30:08.104612 kubelet[2284]: W0813 01:30:04.003067 2284 reflector.go:484] object-"calico-system"/"goldmane": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Aug 13 01:30:08.114360 kubelet[2284]: W0813 01:30:05.564162 2284 reflector.go:484] object-"calico-system"/"whisker-backend-key-pair": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Aug 13 01:30:08.114360 kubelet[2284]: W0813 01:30:05.747263 2284 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.CSIDriver ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Aug 13 01:30:08.317424 kubelet[2284]: E0813 01:30:06.016208 2284 log.go:32] "ListImages with filter from image service failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" filter="nil" Aug 13 01:30:08.564763 sshd[5876]: pam_unix(sshd:session): session closed for user core Aug 13 01:30:08.613000 audit[5876]: USER_END pid=5876 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 01:30:08.638496 kernel: kauditd_printk_skb: 1 callbacks suppressed Aug 13 01:30:08.650758 kernel: audit: type=1106 audit(1755048608.613:520): pid=5876 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 01:30:08.654338 kernel: audit: type=1104 audit(1755048608.618:521): pid=5876 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh 
res=success' Aug 13 01:30:08.656156 kernel: audit: type=1131 audit(1755048608.641:522): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-139.178.70.100:22-139.178.68.195:58538 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:30:08.618000 audit[5876]: CRED_DISP pid=5876 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Aug 13 01:30:08.641000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-139.178.70.100:22-139.178.68.195:58538 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Aug 13 01:30:08.637547 systemd[1]: sshd@14-139.178.70.100:22-139.178.68.195:58538.service: Deactivated successfully. Aug 13 01:30:08.650324 systemd[1]: session-17.scope: Deactivated successfully. Aug 13 01:30:08.650717 systemd-logind[1346]: Session 17 logged out. Waiting for processes to exit. Aug 13 01:30:08.667136 systemd-logind[1346]: Removed session 17. Aug 13 01:30:09.078994 kubelet[2284]: W0813 01:30:05.947211 2284 watcher.go:93] Error while processing event ("/sys/fs/cgroup/memory/user.slice/user-500.slice/session-16.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/memory/user.slice/user-500.slice/session-16.scope: no such file or directory
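The burst of reflector warnings above has a single cause: the kubelet's HTTP/2 connection to the API server was lost, so every open watch (Secrets, ConfigMaps, Nodes, Pods, Services, and so on) ended at once and client-go will re-list and re-watch; the DeadlineExceeded errors from the runtime and image services point to the same stall, and the inotify warning refers to the cgroup of session 16, which was removed earlier. When triaging a dump like this one, it can help to group the torn-down watches by resource; a minimal sketch (the file name node.log is an assumption):

    import re
    from collections import Counter

    # Group the "watch ... ended with: http2: client connection lost" warnings
    # above by watch source and resource type.
    PAT = re.compile(r'reflector\.go:\d+\] (?P<src>\S+): watch of (?P<kind>\*v1\.\w+) ended with')

    counts = Counter()
    for m in PAT.finditer(open("node.log").read()):
        counts[(m["src"], m["kind"])] += 1

    for (src, kind), n in counts.most_common():
        print(f"{n:3d}  {kind:<18} {src}")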