Jul 10 01:10:17.655854 kernel: Linux version 5.15.186-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Wed Jul 9 23:09:45 -00 2025
Jul 10 01:10:17.655868 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=vmware flatcar.autologin verity.usrhash=6cddad5f675165861f6062277cc28875548c735477e689762fc73abc16b63a3d
Jul 10 01:10:17.655874 kernel: Disabled fast string operations
Jul 10 01:10:17.655878 kernel: BIOS-provided physical RAM map:
Jul 10 01:10:17.655883 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ebff] usable
Jul 10 01:10:17.655889 kernel: BIOS-e820: [mem 0x000000000009ec00-0x000000000009ffff] reserved
Jul 10 01:10:17.655897 kernel: BIOS-e820: [mem 0x00000000000dc000-0x00000000000fffff] reserved
Jul 10 01:10:17.655904 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007fedffff] usable
Jul 10 01:10:17.655909 kernel: BIOS-e820: [mem 0x000000007fee0000-0x000000007fefefff] ACPI data
Jul 10 01:10:17.655916 kernel: BIOS-e820: [mem 0x000000007feff000-0x000000007fefffff] ACPI NVS
Jul 10 01:10:17.655922 kernel: BIOS-e820: [mem 0x000000007ff00000-0x000000007fffffff] usable
Jul 10 01:10:17.655928 kernel: BIOS-e820: [mem 0x00000000f0000000-0x00000000f7ffffff] reserved
Jul 10 01:10:17.655933 kernel: BIOS-e820: [mem 0x00000000fec00000-0x00000000fec0ffff] reserved
Jul 10 01:10:17.655937 kernel: BIOS-e820: [mem 0x00000000fee00000-0x00000000fee00fff] reserved
Jul 10 01:10:17.655944 kernel: BIOS-e820: [mem 0x00000000fffe0000-0x00000000ffffffff] reserved
Jul 10 01:10:17.655948 kernel: NX (Execute Disable) protection: active
Jul 10 01:10:17.655952 kernel: SMBIOS 2.7 present.
Jul 10 01:10:17.655957 kernel: DMI: VMware, Inc.
VMware Virtual Platform/440BX Desktop Reference Platform, BIOS 6.00 05/28/2020 Jul 10 01:10:17.655961 kernel: vmware: hypercall mode: 0x00 Jul 10 01:10:17.655966 kernel: Hypervisor detected: VMware Jul 10 01:10:17.655971 kernel: vmware: TSC freq read from hypervisor : 3408.000 MHz Jul 10 01:10:17.655975 kernel: vmware: Host bus clock speed read from hypervisor : 66000000 Hz Jul 10 01:10:17.655980 kernel: vmware: using clock offset of 17252666647 ns Jul 10 01:10:17.655984 kernel: tsc: Detected 3408.000 MHz processor Jul 10 01:10:17.655989 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Jul 10 01:10:17.655994 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Jul 10 01:10:17.655998 kernel: last_pfn = 0x80000 max_arch_pfn = 0x400000000 Jul 10 01:10:17.656003 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Jul 10 01:10:17.656007 kernel: total RAM covered: 3072M Jul 10 01:10:17.656013 kernel: Found optimal setting for mtrr clean up Jul 10 01:10:17.656018 kernel: gran_size: 64K chunk_size: 64K num_reg: 2 lose cover RAM: 0G Jul 10 01:10:17.656022 kernel: Using GB pages for direct mapping Jul 10 01:10:17.656027 kernel: ACPI: Early table checksum verification disabled Jul 10 01:10:17.656031 kernel: ACPI: RSDP 0x00000000000F6A00 000024 (v02 PTLTD ) Jul 10 01:10:17.656036 kernel: ACPI: XSDT 0x000000007FEE965B 00005C (v01 INTEL 440BX 06040000 VMW 01324272) Jul 10 01:10:17.656040 kernel: ACPI: FACP 0x000000007FEFEE73 0000F4 (v04 INTEL 440BX 06040000 PTL 000F4240) Jul 10 01:10:17.656045 kernel: ACPI: DSDT 0x000000007FEEAD55 01411E (v01 PTLTD Custom 06040000 MSFT 03000001) Jul 10 01:10:17.656049 kernel: ACPI: FACS 0x000000007FEFFFC0 000040 Jul 10 01:10:17.656054 kernel: ACPI: FACS 0x000000007FEFFFC0 000040 Jul 10 01:10:17.656059 kernel: ACPI: BOOT 0x000000007FEEAD2D 000028 (v01 PTLTD $SBFTBL$ 06040000 LTP 00000001) Jul 10 01:10:17.656066 kernel: ACPI: APIC 0x000000007FEEA5EB 000742 (v01 PTLTD ? APIC 06040000 LTP 00000000) Jul 10 01:10:17.656071 kernel: ACPI: MCFG 0x000000007FEEA5AF 00003C (v01 PTLTD $PCITBL$ 06040000 LTP 00000001) Jul 10 01:10:17.656075 kernel: ACPI: SRAT 0x000000007FEE9757 0008A8 (v02 VMWARE MEMPLUG 06040000 VMW 00000001) Jul 10 01:10:17.656080 kernel: ACPI: HPET 0x000000007FEE971F 000038 (v01 VMWARE VMW HPET 06040000 VMW 00000001) Jul 10 01:10:17.656086 kernel: ACPI: WAET 0x000000007FEE96F7 000028 (v01 VMWARE VMW WAET 06040000 VMW 00000001) Jul 10 01:10:17.656091 kernel: ACPI: Reserving FACP table memory at [mem 0x7fefee73-0x7fefef66] Jul 10 01:10:17.656096 kernel: ACPI: Reserving DSDT table memory at [mem 0x7feead55-0x7fefee72] Jul 10 01:10:17.656101 kernel: ACPI: Reserving FACS table memory at [mem 0x7fefffc0-0x7fefffff] Jul 10 01:10:17.656105 kernel: ACPI: Reserving FACS table memory at [mem 0x7fefffc0-0x7fefffff] Jul 10 01:10:17.656110 kernel: ACPI: Reserving BOOT table memory at [mem 0x7feead2d-0x7feead54] Jul 10 01:10:17.656115 kernel: ACPI: Reserving APIC table memory at [mem 0x7feea5eb-0x7feead2c] Jul 10 01:10:17.656120 kernel: ACPI: Reserving MCFG table memory at [mem 0x7feea5af-0x7feea5ea] Jul 10 01:10:17.656125 kernel: ACPI: Reserving SRAT table memory at [mem 0x7fee9757-0x7fee9ffe] Jul 10 01:10:17.656131 kernel: ACPI: Reserving HPET table memory at [mem 0x7fee971f-0x7fee9756] Jul 10 01:10:17.656135 kernel: ACPI: Reserving WAET table memory at [mem 0x7fee96f7-0x7fee971e] Jul 10 01:10:17.656140 kernel: system APIC only can use physical flat Jul 10 01:10:17.656145 kernel: Setting APIC routing to physical flat. 
Jul 10 01:10:17.656150 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0 Jul 10 01:10:17.656155 kernel: SRAT: PXM 0 -> APIC 0x02 -> Node 0 Jul 10 01:10:17.656159 kernel: SRAT: PXM 0 -> APIC 0x04 -> Node 0 Jul 10 01:10:17.656164 kernel: SRAT: PXM 0 -> APIC 0x06 -> Node 0 Jul 10 01:10:17.656169 kernel: SRAT: PXM 0 -> APIC 0x08 -> Node 0 Jul 10 01:10:17.656173 kernel: SRAT: PXM 0 -> APIC 0x0a -> Node 0 Jul 10 01:10:17.656179 kernel: SRAT: PXM 0 -> APIC 0x0c -> Node 0 Jul 10 01:10:17.656184 kernel: SRAT: PXM 0 -> APIC 0x0e -> Node 0 Jul 10 01:10:17.656189 kernel: SRAT: PXM 0 -> APIC 0x10 -> Node 0 Jul 10 01:10:17.656193 kernel: SRAT: PXM 0 -> APIC 0x12 -> Node 0 Jul 10 01:10:17.656198 kernel: SRAT: PXM 0 -> APIC 0x14 -> Node 0 Jul 10 01:10:17.656203 kernel: SRAT: PXM 0 -> APIC 0x16 -> Node 0 Jul 10 01:10:17.656208 kernel: SRAT: PXM 0 -> APIC 0x18 -> Node 0 Jul 10 01:10:17.656212 kernel: SRAT: PXM 0 -> APIC 0x1a -> Node 0 Jul 10 01:10:17.656217 kernel: SRAT: PXM 0 -> APIC 0x1c -> Node 0 Jul 10 01:10:17.656223 kernel: SRAT: PXM 0 -> APIC 0x1e -> Node 0 Jul 10 01:10:17.656227 kernel: SRAT: PXM 0 -> APIC 0x20 -> Node 0 Jul 10 01:10:17.656232 kernel: SRAT: PXM 0 -> APIC 0x22 -> Node 0 Jul 10 01:10:17.656237 kernel: SRAT: PXM 0 -> APIC 0x24 -> Node 0 Jul 10 01:10:17.656242 kernel: SRAT: PXM 0 -> APIC 0x26 -> Node 0 Jul 10 01:10:17.656246 kernel: SRAT: PXM 0 -> APIC 0x28 -> Node 0 Jul 10 01:10:17.656251 kernel: SRAT: PXM 0 -> APIC 0x2a -> Node 0 Jul 10 01:10:17.656256 kernel: SRAT: PXM 0 -> APIC 0x2c -> Node 0 Jul 10 01:10:17.656260 kernel: SRAT: PXM 0 -> APIC 0x2e -> Node 0 Jul 10 01:10:17.656265 kernel: SRAT: PXM 0 -> APIC 0x30 -> Node 0 Jul 10 01:10:17.656271 kernel: SRAT: PXM 0 -> APIC 0x32 -> Node 0 Jul 10 01:10:17.656275 kernel: SRAT: PXM 0 -> APIC 0x34 -> Node 0 Jul 10 01:10:17.656280 kernel: SRAT: PXM 0 -> APIC 0x36 -> Node 0 Jul 10 01:10:17.656285 kernel: SRAT: PXM 0 -> APIC 0x38 -> Node 0 Jul 10 01:10:17.656290 kernel: SRAT: PXM 0 -> APIC 0x3a -> Node 0 Jul 10 01:10:17.656294 kernel: SRAT: PXM 0 -> APIC 0x3c -> Node 0 Jul 10 01:10:17.656299 kernel: SRAT: PXM 0 -> APIC 0x3e -> Node 0 Jul 10 01:10:17.656304 kernel: SRAT: PXM 0 -> APIC 0x40 -> Node 0 Jul 10 01:10:17.656308 kernel: SRAT: PXM 0 -> APIC 0x42 -> Node 0 Jul 10 01:10:17.656313 kernel: SRAT: PXM 0 -> APIC 0x44 -> Node 0 Jul 10 01:10:17.656319 kernel: SRAT: PXM 0 -> APIC 0x46 -> Node 0 Jul 10 01:10:17.656323 kernel: SRAT: PXM 0 -> APIC 0x48 -> Node 0 Jul 10 01:10:17.656336 kernel: SRAT: PXM 0 -> APIC 0x4a -> Node 0 Jul 10 01:10:17.656342 kernel: SRAT: PXM 0 -> APIC 0x4c -> Node 0 Jul 10 01:10:17.656347 kernel: SRAT: PXM 0 -> APIC 0x4e -> Node 0 Jul 10 01:10:17.656352 kernel: SRAT: PXM 0 -> APIC 0x50 -> Node 0 Jul 10 01:10:17.656357 kernel: SRAT: PXM 0 -> APIC 0x52 -> Node 0 Jul 10 01:10:17.656361 kernel: SRAT: PXM 0 -> APIC 0x54 -> Node 0 Jul 10 01:10:17.656366 kernel: SRAT: PXM 0 -> APIC 0x56 -> Node 0 Jul 10 01:10:17.656371 kernel: SRAT: PXM 0 -> APIC 0x58 -> Node 0 Jul 10 01:10:17.656378 kernel: SRAT: PXM 0 -> APIC 0x5a -> Node 0 Jul 10 01:10:17.656383 kernel: SRAT: PXM 0 -> APIC 0x5c -> Node 0 Jul 10 01:10:17.656387 kernel: SRAT: PXM 0 -> APIC 0x5e -> Node 0 Jul 10 01:10:17.656392 kernel: SRAT: PXM 0 -> APIC 0x60 -> Node 0 Jul 10 01:10:17.656397 kernel: SRAT: PXM 0 -> APIC 0x62 -> Node 0 Jul 10 01:10:17.656402 kernel: SRAT: PXM 0 -> APIC 0x64 -> Node 0 Jul 10 01:10:17.656406 kernel: SRAT: PXM 0 -> APIC 0x66 -> Node 0 Jul 10 01:10:17.656411 kernel: SRAT: PXM 0 -> APIC 0x68 -> Node 0 Jul 10 01:10:17.656416 kernel: SRAT: PXM 0 -> APIC 0x6a 
-> Node 0 Jul 10 01:10:17.656421 kernel: SRAT: PXM 0 -> APIC 0x6c -> Node 0 Jul 10 01:10:17.656426 kernel: SRAT: PXM 0 -> APIC 0x6e -> Node 0 Jul 10 01:10:17.656431 kernel: SRAT: PXM 0 -> APIC 0x70 -> Node 0 Jul 10 01:10:17.656436 kernel: SRAT: PXM 0 -> APIC 0x72 -> Node 0 Jul 10 01:10:17.656441 kernel: SRAT: PXM 0 -> APIC 0x74 -> Node 0 Jul 10 01:10:17.656446 kernel: SRAT: PXM 0 -> APIC 0x76 -> Node 0 Jul 10 01:10:17.656451 kernel: SRAT: PXM 0 -> APIC 0x78 -> Node 0 Jul 10 01:10:17.656459 kernel: SRAT: PXM 0 -> APIC 0x7a -> Node 0 Jul 10 01:10:17.656465 kernel: SRAT: PXM 0 -> APIC 0x7c -> Node 0 Jul 10 01:10:17.656470 kernel: SRAT: PXM 0 -> APIC 0x7e -> Node 0 Jul 10 01:10:17.656476 kernel: SRAT: PXM 0 -> APIC 0x80 -> Node 0 Jul 10 01:10:17.656482 kernel: SRAT: PXM 0 -> APIC 0x82 -> Node 0 Jul 10 01:10:17.656487 kernel: SRAT: PXM 0 -> APIC 0x84 -> Node 0 Jul 10 01:10:17.656492 kernel: SRAT: PXM 0 -> APIC 0x86 -> Node 0 Jul 10 01:10:17.656497 kernel: SRAT: PXM 0 -> APIC 0x88 -> Node 0 Jul 10 01:10:17.656502 kernel: SRAT: PXM 0 -> APIC 0x8a -> Node 0 Jul 10 01:10:17.656507 kernel: SRAT: PXM 0 -> APIC 0x8c -> Node 0 Jul 10 01:10:17.656513 kernel: SRAT: PXM 0 -> APIC 0x8e -> Node 0 Jul 10 01:10:17.656518 kernel: SRAT: PXM 0 -> APIC 0x90 -> Node 0 Jul 10 01:10:17.656524 kernel: SRAT: PXM 0 -> APIC 0x92 -> Node 0 Jul 10 01:10:17.656529 kernel: SRAT: PXM 0 -> APIC 0x94 -> Node 0 Jul 10 01:10:17.656534 kernel: SRAT: PXM 0 -> APIC 0x96 -> Node 0 Jul 10 01:10:17.656539 kernel: SRAT: PXM 0 -> APIC 0x98 -> Node 0 Jul 10 01:10:17.656544 kernel: SRAT: PXM 0 -> APIC 0x9a -> Node 0 Jul 10 01:10:17.656549 kernel: SRAT: PXM 0 -> APIC 0x9c -> Node 0 Jul 10 01:10:17.656554 kernel: SRAT: PXM 0 -> APIC 0x9e -> Node 0 Jul 10 01:10:17.656559 kernel: SRAT: PXM 0 -> APIC 0xa0 -> Node 0 Jul 10 01:10:17.656564 kernel: SRAT: PXM 0 -> APIC 0xa2 -> Node 0 Jul 10 01:10:17.656569 kernel: SRAT: PXM 0 -> APIC 0xa4 -> Node 0 Jul 10 01:10:17.656575 kernel: SRAT: PXM 0 -> APIC 0xa6 -> Node 0 Jul 10 01:10:17.656580 kernel: SRAT: PXM 0 -> APIC 0xa8 -> Node 0 Jul 10 01:10:17.656585 kernel: SRAT: PXM 0 -> APIC 0xaa -> Node 0 Jul 10 01:10:17.656591 kernel: SRAT: PXM 0 -> APIC 0xac -> Node 0 Jul 10 01:10:17.656596 kernel: SRAT: PXM 0 -> APIC 0xae -> Node 0 Jul 10 01:10:17.656601 kernel: SRAT: PXM 0 -> APIC 0xb0 -> Node 0 Jul 10 01:10:17.656606 kernel: SRAT: PXM 0 -> APIC 0xb2 -> Node 0 Jul 10 01:10:17.656611 kernel: SRAT: PXM 0 -> APIC 0xb4 -> Node 0 Jul 10 01:10:17.656616 kernel: SRAT: PXM 0 -> APIC 0xb6 -> Node 0 Jul 10 01:10:17.656621 kernel: SRAT: PXM 0 -> APIC 0xb8 -> Node 0 Jul 10 01:10:17.656627 kernel: SRAT: PXM 0 -> APIC 0xba -> Node 0 Jul 10 01:10:17.656632 kernel: SRAT: PXM 0 -> APIC 0xbc -> Node 0 Jul 10 01:10:17.656637 kernel: SRAT: PXM 0 -> APIC 0xbe -> Node 0 Jul 10 01:10:17.656643 kernel: SRAT: PXM 0 -> APIC 0xc0 -> Node 0 Jul 10 01:10:17.656648 kernel: SRAT: PXM 0 -> APIC 0xc2 -> Node 0 Jul 10 01:10:17.656653 kernel: SRAT: PXM 0 -> APIC 0xc4 -> Node 0 Jul 10 01:10:17.656658 kernel: SRAT: PXM 0 -> APIC 0xc6 -> Node 0 Jul 10 01:10:17.656663 kernel: SRAT: PXM 0 -> APIC 0xc8 -> Node 0 Jul 10 01:10:17.656668 kernel: SRAT: PXM 0 -> APIC 0xca -> Node 0 Jul 10 01:10:17.656674 kernel: SRAT: PXM 0 -> APIC 0xcc -> Node 0 Jul 10 01:10:17.656679 kernel: SRAT: PXM 0 -> APIC 0xce -> Node 0 Jul 10 01:10:17.656691 kernel: SRAT: PXM 0 -> APIC 0xd0 -> Node 0 Jul 10 01:10:17.656703 kernel: SRAT: PXM 0 -> APIC 0xd2 -> Node 0 Jul 10 01:10:17.656709 kernel: SRAT: PXM 0 -> APIC 0xd4 -> Node 0 Jul 10 01:10:17.656714 kernel: SRAT: PXM 0 -> 
APIC 0xd6 -> Node 0 Jul 10 01:10:17.656723 kernel: SRAT: PXM 0 -> APIC 0xd8 -> Node 0 Jul 10 01:10:17.656736 kernel: SRAT: PXM 0 -> APIC 0xda -> Node 0 Jul 10 01:10:17.656742 kernel: SRAT: PXM 0 -> APIC 0xdc -> Node 0 Jul 10 01:10:17.656749 kernel: SRAT: PXM 0 -> APIC 0xde -> Node 0 Jul 10 01:10:17.656758 kernel: SRAT: PXM 0 -> APIC 0xe0 -> Node 0 Jul 10 01:10:17.656763 kernel: SRAT: PXM 0 -> APIC 0xe2 -> Node 0 Jul 10 01:10:17.656768 kernel: SRAT: PXM 0 -> APIC 0xe4 -> Node 0 Jul 10 01:10:17.656773 kernel: SRAT: PXM 0 -> APIC 0xe6 -> Node 0 Jul 10 01:10:17.656779 kernel: SRAT: PXM 0 -> APIC 0xe8 -> Node 0 Jul 10 01:10:17.656784 kernel: SRAT: PXM 0 -> APIC 0xea -> Node 0 Jul 10 01:10:17.656789 kernel: SRAT: PXM 0 -> APIC 0xec -> Node 0 Jul 10 01:10:17.656794 kernel: SRAT: PXM 0 -> APIC 0xee -> Node 0 Jul 10 01:10:17.656799 kernel: SRAT: PXM 0 -> APIC 0xf0 -> Node 0 Jul 10 01:10:17.656804 kernel: SRAT: PXM 0 -> APIC 0xf2 -> Node 0 Jul 10 01:10:17.656810 kernel: SRAT: PXM 0 -> APIC 0xf4 -> Node 0 Jul 10 01:10:17.656815 kernel: SRAT: PXM 0 -> APIC 0xf6 -> Node 0 Jul 10 01:10:17.656820 kernel: SRAT: PXM 0 -> APIC 0xf8 -> Node 0 Jul 10 01:10:17.656825 kernel: SRAT: PXM 0 -> APIC 0xfa -> Node 0 Jul 10 01:10:17.656830 kernel: SRAT: PXM 0 -> APIC 0xfc -> Node 0 Jul 10 01:10:17.656836 kernel: SRAT: PXM 0 -> APIC 0xfe -> Node 0 Jul 10 01:10:17.656841 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff] Jul 10 01:10:17.656846 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff] Jul 10 01:10:17.656853 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000-0xbfffffff] hotplug Jul 10 01:10:17.656861 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x7fffffff] -> [mem 0x00000000-0x7fffffff] Jul 10 01:10:17.656875 kernel: NODE_DATA(0) allocated [mem 0x7fffa000-0x7fffffff] Jul 10 01:10:17.656882 kernel: Zone ranges: Jul 10 01:10:17.656887 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Jul 10 01:10:17.656893 kernel: DMA32 [mem 0x0000000001000000-0x000000007fffffff] Jul 10 01:10:17.656898 kernel: Normal empty Jul 10 01:10:17.656903 kernel: Movable zone start for each node Jul 10 01:10:17.656908 kernel: Early memory node ranges Jul 10 01:10:17.656913 kernel: node 0: [mem 0x0000000000001000-0x000000000009dfff] Jul 10 01:10:17.656919 kernel: node 0: [mem 0x0000000000100000-0x000000007fedffff] Jul 10 01:10:17.656925 kernel: node 0: [mem 0x000000007ff00000-0x000000007fffffff] Jul 10 01:10:17.656931 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007fffffff] Jul 10 01:10:17.656936 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jul 10 01:10:17.656941 kernel: On node 0, zone DMA: 98 pages in unavailable ranges Jul 10 01:10:17.656947 kernel: On node 0, zone DMA32: 32 pages in unavailable ranges Jul 10 01:10:17.656952 kernel: ACPI: PM-Timer IO Port: 0x1008 Jul 10 01:10:17.656957 kernel: system APIC only can use physical flat Jul 10 01:10:17.656963 kernel: ACPI: LAPIC_NMI (acpi_id[0x00] high edge lint[0x1]) Jul 10 01:10:17.656971 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] high edge lint[0x1]) Jul 10 01:10:17.656981 kernel: ACPI: LAPIC_NMI (acpi_id[0x02] high edge lint[0x1]) Jul 10 01:10:17.656989 kernel: ACPI: LAPIC_NMI (acpi_id[0x03] high edge lint[0x1]) Jul 10 01:10:17.656997 kernel: ACPI: LAPIC_NMI (acpi_id[0x04] high edge lint[0x1]) Jul 10 01:10:17.657006 kernel: ACPI: LAPIC_NMI (acpi_id[0x05] high edge lint[0x1]) Jul 10 01:10:17.657015 kernel: ACPI: LAPIC_NMI (acpi_id[0x06] high edge lint[0x1]) Jul 10 01:10:17.657022 kernel: ACPI: LAPIC_NMI (acpi_id[0x07] 
high edge lint[0x1]) Jul 10 01:10:17.657028 kernel: ACPI: LAPIC_NMI (acpi_id[0x08] high edge lint[0x1]) Jul 10 01:10:17.657033 kernel: ACPI: LAPIC_NMI (acpi_id[0x09] high edge lint[0x1]) Jul 10 01:10:17.657038 kernel: ACPI: LAPIC_NMI (acpi_id[0x0a] high edge lint[0x1]) Jul 10 01:10:17.657043 kernel: ACPI: LAPIC_NMI (acpi_id[0x0b] high edge lint[0x1]) Jul 10 01:10:17.657050 kernel: ACPI: LAPIC_NMI (acpi_id[0x0c] high edge lint[0x1]) Jul 10 01:10:17.657055 kernel: ACPI: LAPIC_NMI (acpi_id[0x0d] high edge lint[0x1]) Jul 10 01:10:17.657060 kernel: ACPI: LAPIC_NMI (acpi_id[0x0e] high edge lint[0x1]) Jul 10 01:10:17.657065 kernel: ACPI: LAPIC_NMI (acpi_id[0x0f] high edge lint[0x1]) Jul 10 01:10:17.657071 kernel: ACPI: LAPIC_NMI (acpi_id[0x10] high edge lint[0x1]) Jul 10 01:10:17.657076 kernel: ACPI: LAPIC_NMI (acpi_id[0x11] high edge lint[0x1]) Jul 10 01:10:17.657081 kernel: ACPI: LAPIC_NMI (acpi_id[0x12] high edge lint[0x1]) Jul 10 01:10:17.657086 kernel: ACPI: LAPIC_NMI (acpi_id[0x13] high edge lint[0x1]) Jul 10 01:10:17.657091 kernel: ACPI: LAPIC_NMI (acpi_id[0x14] high edge lint[0x1]) Jul 10 01:10:17.657097 kernel: ACPI: LAPIC_NMI (acpi_id[0x15] high edge lint[0x1]) Jul 10 01:10:17.657102 kernel: ACPI: LAPIC_NMI (acpi_id[0x16] high edge lint[0x1]) Jul 10 01:10:17.657107 kernel: ACPI: LAPIC_NMI (acpi_id[0x17] high edge lint[0x1]) Jul 10 01:10:17.657113 kernel: ACPI: LAPIC_NMI (acpi_id[0x18] high edge lint[0x1]) Jul 10 01:10:17.657118 kernel: ACPI: LAPIC_NMI (acpi_id[0x19] high edge lint[0x1]) Jul 10 01:10:17.657123 kernel: ACPI: LAPIC_NMI (acpi_id[0x1a] high edge lint[0x1]) Jul 10 01:10:17.657128 kernel: ACPI: LAPIC_NMI (acpi_id[0x1b] high edge lint[0x1]) Jul 10 01:10:17.657134 kernel: ACPI: LAPIC_NMI (acpi_id[0x1c] high edge lint[0x1]) Jul 10 01:10:17.657139 kernel: ACPI: LAPIC_NMI (acpi_id[0x1d] high edge lint[0x1]) Jul 10 01:10:17.657144 kernel: ACPI: LAPIC_NMI (acpi_id[0x1e] high edge lint[0x1]) Jul 10 01:10:17.657150 kernel: ACPI: LAPIC_NMI (acpi_id[0x1f] high edge lint[0x1]) Jul 10 01:10:17.657156 kernel: ACPI: LAPIC_NMI (acpi_id[0x20] high edge lint[0x1]) Jul 10 01:10:17.657161 kernel: ACPI: LAPIC_NMI (acpi_id[0x21] high edge lint[0x1]) Jul 10 01:10:17.657166 kernel: ACPI: LAPIC_NMI (acpi_id[0x22] high edge lint[0x1]) Jul 10 01:10:17.657171 kernel: ACPI: LAPIC_NMI (acpi_id[0x23] high edge lint[0x1]) Jul 10 01:10:17.657176 kernel: ACPI: LAPIC_NMI (acpi_id[0x24] high edge lint[0x1]) Jul 10 01:10:17.657181 kernel: ACPI: LAPIC_NMI (acpi_id[0x25] high edge lint[0x1]) Jul 10 01:10:17.657186 kernel: ACPI: LAPIC_NMI (acpi_id[0x26] high edge lint[0x1]) Jul 10 01:10:17.657191 kernel: ACPI: LAPIC_NMI (acpi_id[0x27] high edge lint[0x1]) Jul 10 01:10:17.657198 kernel: ACPI: LAPIC_NMI (acpi_id[0x28] high edge lint[0x1]) Jul 10 01:10:17.657203 kernel: ACPI: LAPIC_NMI (acpi_id[0x29] high edge lint[0x1]) Jul 10 01:10:17.657208 kernel: ACPI: LAPIC_NMI (acpi_id[0x2a] high edge lint[0x1]) Jul 10 01:10:17.657213 kernel: ACPI: LAPIC_NMI (acpi_id[0x2b] high edge lint[0x1]) Jul 10 01:10:17.657218 kernel: ACPI: LAPIC_NMI (acpi_id[0x2c] high edge lint[0x1]) Jul 10 01:10:17.657223 kernel: ACPI: LAPIC_NMI (acpi_id[0x2d] high edge lint[0x1]) Jul 10 01:10:17.657228 kernel: ACPI: LAPIC_NMI (acpi_id[0x2e] high edge lint[0x1]) Jul 10 01:10:17.657233 kernel: ACPI: LAPIC_NMI (acpi_id[0x2f] high edge lint[0x1]) Jul 10 01:10:17.657239 kernel: ACPI: LAPIC_NMI (acpi_id[0x30] high edge lint[0x1]) Jul 10 01:10:17.657245 kernel: ACPI: LAPIC_NMI (acpi_id[0x31] high edge lint[0x1]) Jul 10 01:10:17.657250 kernel: ACPI: LAPIC_NMI 
(acpi_id[0x32] high edge lint[0x1]) Jul 10 01:10:17.657255 kernel: ACPI: LAPIC_NMI (acpi_id[0x33] high edge lint[0x1]) Jul 10 01:10:17.657260 kernel: ACPI: LAPIC_NMI (acpi_id[0x34] high edge lint[0x1]) Jul 10 01:10:17.657265 kernel: ACPI: LAPIC_NMI (acpi_id[0x35] high edge lint[0x1]) Jul 10 01:10:17.657271 kernel: ACPI: LAPIC_NMI (acpi_id[0x36] high edge lint[0x1]) Jul 10 01:10:17.657276 kernel: ACPI: LAPIC_NMI (acpi_id[0x37] high edge lint[0x1]) Jul 10 01:10:17.657281 kernel: ACPI: LAPIC_NMI (acpi_id[0x38] high edge lint[0x1]) Jul 10 01:10:17.657286 kernel: ACPI: LAPIC_NMI (acpi_id[0x39] high edge lint[0x1]) Jul 10 01:10:17.657291 kernel: ACPI: LAPIC_NMI (acpi_id[0x3a] high edge lint[0x1]) Jul 10 01:10:17.657297 kernel: ACPI: LAPIC_NMI (acpi_id[0x3b] high edge lint[0x1]) Jul 10 01:10:17.657302 kernel: ACPI: LAPIC_NMI (acpi_id[0x3c] high edge lint[0x1]) Jul 10 01:10:17.657308 kernel: ACPI: LAPIC_NMI (acpi_id[0x3d] high edge lint[0x1]) Jul 10 01:10:17.657313 kernel: ACPI: LAPIC_NMI (acpi_id[0x3e] high edge lint[0x1]) Jul 10 01:10:17.657318 kernel: ACPI: LAPIC_NMI (acpi_id[0x3f] high edge lint[0x1]) Jul 10 01:10:17.657323 kernel: ACPI: LAPIC_NMI (acpi_id[0x40] high edge lint[0x1]) Jul 10 01:10:17.658722 kernel: ACPI: LAPIC_NMI (acpi_id[0x41] high edge lint[0x1]) Jul 10 01:10:17.658734 kernel: ACPI: LAPIC_NMI (acpi_id[0x42] high edge lint[0x1]) Jul 10 01:10:17.658744 kernel: ACPI: LAPIC_NMI (acpi_id[0x43] high edge lint[0x1]) Jul 10 01:10:17.658753 kernel: ACPI: LAPIC_NMI (acpi_id[0x44] high edge lint[0x1]) Jul 10 01:10:17.658758 kernel: ACPI: LAPIC_NMI (acpi_id[0x45] high edge lint[0x1]) Jul 10 01:10:17.658764 kernel: ACPI: LAPIC_NMI (acpi_id[0x46] high edge lint[0x1]) Jul 10 01:10:17.658769 kernel: ACPI: LAPIC_NMI (acpi_id[0x47] high edge lint[0x1]) Jul 10 01:10:17.658774 kernel: ACPI: LAPIC_NMI (acpi_id[0x48] high edge lint[0x1]) Jul 10 01:10:17.658779 kernel: ACPI: LAPIC_NMI (acpi_id[0x49] high edge lint[0x1]) Jul 10 01:10:17.658785 kernel: ACPI: LAPIC_NMI (acpi_id[0x4a] high edge lint[0x1]) Jul 10 01:10:17.658790 kernel: ACPI: LAPIC_NMI (acpi_id[0x4b] high edge lint[0x1]) Jul 10 01:10:17.658795 kernel: ACPI: LAPIC_NMI (acpi_id[0x4c] high edge lint[0x1]) Jul 10 01:10:17.658800 kernel: ACPI: LAPIC_NMI (acpi_id[0x4d] high edge lint[0x1]) Jul 10 01:10:17.658807 kernel: ACPI: LAPIC_NMI (acpi_id[0x4e] high edge lint[0x1]) Jul 10 01:10:17.658812 kernel: ACPI: LAPIC_NMI (acpi_id[0x4f] high edge lint[0x1]) Jul 10 01:10:17.658817 kernel: ACPI: LAPIC_NMI (acpi_id[0x50] high edge lint[0x1]) Jul 10 01:10:17.658822 kernel: ACPI: LAPIC_NMI (acpi_id[0x51] high edge lint[0x1]) Jul 10 01:10:17.658828 kernel: ACPI: LAPIC_NMI (acpi_id[0x52] high edge lint[0x1]) Jul 10 01:10:17.658833 kernel: ACPI: LAPIC_NMI (acpi_id[0x53] high edge lint[0x1]) Jul 10 01:10:17.658838 kernel: ACPI: LAPIC_NMI (acpi_id[0x54] high edge lint[0x1]) Jul 10 01:10:17.658843 kernel: ACPI: LAPIC_NMI (acpi_id[0x55] high edge lint[0x1]) Jul 10 01:10:17.658848 kernel: ACPI: LAPIC_NMI (acpi_id[0x56] high edge lint[0x1]) Jul 10 01:10:17.658858 kernel: ACPI: LAPIC_NMI (acpi_id[0x57] high edge lint[0x1]) Jul 10 01:10:17.658863 kernel: ACPI: LAPIC_NMI (acpi_id[0x58] high edge lint[0x1]) Jul 10 01:10:17.658868 kernel: ACPI: LAPIC_NMI (acpi_id[0x59] high edge lint[0x1]) Jul 10 01:10:17.658874 kernel: ACPI: LAPIC_NMI (acpi_id[0x5a] high edge lint[0x1]) Jul 10 01:10:17.658879 kernel: ACPI: LAPIC_NMI (acpi_id[0x5b] high edge lint[0x1]) Jul 10 01:10:17.658884 kernel: ACPI: LAPIC_NMI (acpi_id[0x5c] high edge lint[0x1]) Jul 10 01:10:17.658889 kernel: 
ACPI: LAPIC_NMI (acpi_id[0x5d] high edge lint[0x1]) Jul 10 01:10:17.658895 kernel: ACPI: LAPIC_NMI (acpi_id[0x5e] high edge lint[0x1]) Jul 10 01:10:17.658900 kernel: ACPI: LAPIC_NMI (acpi_id[0x5f] high edge lint[0x1]) Jul 10 01:10:17.658905 kernel: ACPI: LAPIC_NMI (acpi_id[0x60] high edge lint[0x1]) Jul 10 01:10:17.658911 kernel: ACPI: LAPIC_NMI (acpi_id[0x61] high edge lint[0x1]) Jul 10 01:10:17.658917 kernel: ACPI: LAPIC_NMI (acpi_id[0x62] high edge lint[0x1]) Jul 10 01:10:17.658922 kernel: ACPI: LAPIC_NMI (acpi_id[0x63] high edge lint[0x1]) Jul 10 01:10:17.658927 kernel: ACPI: LAPIC_NMI (acpi_id[0x64] high edge lint[0x1]) Jul 10 01:10:17.658932 kernel: ACPI: LAPIC_NMI (acpi_id[0x65] high edge lint[0x1]) Jul 10 01:10:17.658937 kernel: ACPI: LAPIC_NMI (acpi_id[0x66] high edge lint[0x1]) Jul 10 01:10:17.658943 kernel: ACPI: LAPIC_NMI (acpi_id[0x67] high edge lint[0x1]) Jul 10 01:10:17.658948 kernel: ACPI: LAPIC_NMI (acpi_id[0x68] high edge lint[0x1]) Jul 10 01:10:17.658953 kernel: ACPI: LAPIC_NMI (acpi_id[0x69] high edge lint[0x1]) Jul 10 01:10:17.658959 kernel: ACPI: LAPIC_NMI (acpi_id[0x6a] high edge lint[0x1]) Jul 10 01:10:17.658965 kernel: ACPI: LAPIC_NMI (acpi_id[0x6b] high edge lint[0x1]) Jul 10 01:10:17.658970 kernel: ACPI: LAPIC_NMI (acpi_id[0x6c] high edge lint[0x1]) Jul 10 01:10:17.658975 kernel: ACPI: LAPIC_NMI (acpi_id[0x6d] high edge lint[0x1]) Jul 10 01:10:17.658980 kernel: ACPI: LAPIC_NMI (acpi_id[0x6e] high edge lint[0x1]) Jul 10 01:10:17.658985 kernel: ACPI: LAPIC_NMI (acpi_id[0x6f] high edge lint[0x1]) Jul 10 01:10:17.658990 kernel: ACPI: LAPIC_NMI (acpi_id[0x70] high edge lint[0x1]) Jul 10 01:10:17.658996 kernel: ACPI: LAPIC_NMI (acpi_id[0x71] high edge lint[0x1]) Jul 10 01:10:17.659001 kernel: ACPI: LAPIC_NMI (acpi_id[0x72] high edge lint[0x1]) Jul 10 01:10:17.659006 kernel: ACPI: LAPIC_NMI (acpi_id[0x73] high edge lint[0x1]) Jul 10 01:10:17.659013 kernel: ACPI: LAPIC_NMI (acpi_id[0x74] high edge lint[0x1]) Jul 10 01:10:17.659018 kernel: ACPI: LAPIC_NMI (acpi_id[0x75] high edge lint[0x1]) Jul 10 01:10:17.659023 kernel: ACPI: LAPIC_NMI (acpi_id[0x76] high edge lint[0x1]) Jul 10 01:10:17.659029 kernel: ACPI: LAPIC_NMI (acpi_id[0x77] high edge lint[0x1]) Jul 10 01:10:17.659034 kernel: ACPI: LAPIC_NMI (acpi_id[0x78] high edge lint[0x1]) Jul 10 01:10:17.659039 kernel: ACPI: LAPIC_NMI (acpi_id[0x79] high edge lint[0x1]) Jul 10 01:10:17.659045 kernel: ACPI: LAPIC_NMI (acpi_id[0x7a] high edge lint[0x1]) Jul 10 01:10:17.659050 kernel: ACPI: LAPIC_NMI (acpi_id[0x7b] high edge lint[0x1]) Jul 10 01:10:17.659055 kernel: ACPI: LAPIC_NMI (acpi_id[0x7c] high edge lint[0x1]) Jul 10 01:10:17.659061 kernel: ACPI: LAPIC_NMI (acpi_id[0x7d] high edge lint[0x1]) Jul 10 01:10:17.659066 kernel: ACPI: LAPIC_NMI (acpi_id[0x7e] high edge lint[0x1]) Jul 10 01:10:17.659072 kernel: ACPI: LAPIC_NMI (acpi_id[0x7f] high edge lint[0x1]) Jul 10 01:10:17.659077 kernel: IOAPIC[0]: apic_id 1, version 17, address 0xfec00000, GSI 0-23 Jul 10 01:10:17.659082 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 high edge) Jul 10 01:10:17.659088 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Jul 10 01:10:17.659093 kernel: ACPI: HPET id: 0x8086af01 base: 0xfed00000 Jul 10 01:10:17.659098 kernel: TSC deadline timer available Jul 10 01:10:17.659103 kernel: smpboot: Allowing 128 CPUs, 126 hotplug CPUs Jul 10 01:10:17.659110 kernel: [mem 0x80000000-0xefffffff] available for PCI devices Jul 10 01:10:17.659115 kernel: Booting paravirtualized kernel on VMware hypervisor Jul 10 01:10:17.659120 
kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Jul 10 01:10:17.659126 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:128 nr_node_ids:1 Jul 10 01:10:17.659132 kernel: percpu: Embedded 56 pages/cpu s188696 r8192 d32488 u262144 Jul 10 01:10:17.659137 kernel: pcpu-alloc: s188696 r8192 d32488 u262144 alloc=1*2097152 Jul 10 01:10:17.659142 kernel: pcpu-alloc: [0] 000 001 002 003 004 005 006 007 Jul 10 01:10:17.659148 kernel: pcpu-alloc: [0] 008 009 010 011 012 013 014 015 Jul 10 01:10:17.659153 kernel: pcpu-alloc: [0] 016 017 018 019 020 021 022 023 Jul 10 01:10:17.659159 kernel: pcpu-alloc: [0] 024 025 026 027 028 029 030 031 Jul 10 01:10:17.659164 kernel: pcpu-alloc: [0] 032 033 034 035 036 037 038 039 Jul 10 01:10:17.659169 kernel: pcpu-alloc: [0] 040 041 042 043 044 045 046 047 Jul 10 01:10:17.659175 kernel: pcpu-alloc: [0] 048 049 050 051 052 053 054 055 Jul 10 01:10:17.659187 kernel: pcpu-alloc: [0] 056 057 058 059 060 061 062 063 Jul 10 01:10:17.659193 kernel: pcpu-alloc: [0] 064 065 066 067 068 069 070 071 Jul 10 01:10:17.659199 kernel: pcpu-alloc: [0] 072 073 074 075 076 077 078 079 Jul 10 01:10:17.659204 kernel: pcpu-alloc: [0] 080 081 082 083 084 085 086 087 Jul 10 01:10:17.659209 kernel: pcpu-alloc: [0] 088 089 090 091 092 093 094 095 Jul 10 01:10:17.659216 kernel: pcpu-alloc: [0] 096 097 098 099 100 101 102 103 Jul 10 01:10:17.659221 kernel: pcpu-alloc: [0] 104 105 106 107 108 109 110 111 Jul 10 01:10:17.659227 kernel: pcpu-alloc: [0] 112 113 114 115 116 117 118 119 Jul 10 01:10:17.659232 kernel: pcpu-alloc: [0] 120 121 122 123 124 125 126 127 Jul 10 01:10:17.659238 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515808 Jul 10 01:10:17.659244 kernel: Policy zone: DMA32 Jul 10 01:10:17.659251 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=vmware flatcar.autologin verity.usrhash=6cddad5f675165861f6062277cc28875548c735477e689762fc73abc16b63a3d Jul 10 01:10:17.659257 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. 
Jul 10 01:10:17.659263 kernel: printk: log_buf_len individual max cpu contribution: 4096 bytes Jul 10 01:10:17.659269 kernel: printk: log_buf_len total cpu_extra contributions: 520192 bytes Jul 10 01:10:17.659275 kernel: printk: log_buf_len min size: 262144 bytes Jul 10 01:10:17.659281 kernel: printk: log_buf_len: 1048576 bytes Jul 10 01:10:17.659287 kernel: printk: early log buf free: 239728(91%) Jul 10 01:10:17.659292 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jul 10 01:10:17.659298 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear) Jul 10 01:10:17.659304 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jul 10 01:10:17.659310 kernel: Memory: 1940392K/2096628K available (12295K kernel code, 2275K rwdata, 13724K rodata, 47472K init, 4108K bss, 155976K reserved, 0K cma-reserved) Jul 10 01:10:17.659316 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=128, Nodes=1 Jul 10 01:10:17.659322 kernel: ftrace: allocating 34602 entries in 136 pages Jul 10 01:10:17.659336 kernel: ftrace: allocated 136 pages with 2 groups Jul 10 01:10:17.659343 kernel: rcu: Hierarchical RCU implementation. Jul 10 01:10:17.659350 kernel: rcu: RCU event tracing is enabled. Jul 10 01:10:17.659357 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=128. Jul 10 01:10:17.659362 kernel: Rude variant of Tasks RCU enabled. Jul 10 01:10:17.659368 kernel: Tracing variant of Tasks RCU enabled. Jul 10 01:10:17.659374 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jul 10 01:10:17.659379 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=128 Jul 10 01:10:17.659385 kernel: NR_IRQS: 33024, nr_irqs: 1448, preallocated irqs: 16 Jul 10 01:10:17.659391 kernel: random: crng init done Jul 10 01:10:17.659396 kernel: Console: colour VGA+ 80x25 Jul 10 01:10:17.659402 kernel: printk: console [tty0] enabled Jul 10 01:10:17.659407 kernel: printk: console [ttyS0] enabled Jul 10 01:10:17.659414 kernel: ACPI: Core revision 20210730 Jul 10 01:10:17.659420 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 133484882848 ns Jul 10 01:10:17.659426 kernel: APIC: Switch to symmetric I/O mode setup Jul 10 01:10:17.659432 kernel: x2apic enabled Jul 10 01:10:17.659437 kernel: Switched APIC routing to physical x2apic. Jul 10 01:10:17.659443 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Jul 10 01:10:17.659449 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x311fd3cd494, max_idle_ns: 440795223879 ns Jul 10 01:10:17.659455 kernel: Calibrating delay loop (skipped) preset value.. 6816.00 BogoMIPS (lpj=3408000) Jul 10 01:10:17.659460 kernel: Disabled fast string operations Jul 10 01:10:17.659467 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8 Jul 10 01:10:17.659472 kernel: Last level dTLB entries: 4KB 64, 2MB 32, 4MB 32, 1GB 4 Jul 10 01:10:17.659478 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Jul 10 01:10:17.659484 kernel: Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks! 
Jul 10 01:10:17.659490 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on vm exit Jul 10 01:10:17.659496 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall Jul 10 01:10:17.659501 kernel: Spectre V2 : Mitigation: Enhanced / Automatic IBRS Jul 10 01:10:17.659507 kernel: Spectre V2 : Spectre v2 / PBRSB-eIBRS: Retire a single CALL on VMEXIT Jul 10 01:10:17.659514 kernel: RETBleed: Mitigation: Enhanced IBRS Jul 10 01:10:17.659521 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Jul 10 01:10:17.659530 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl and seccomp Jul 10 01:10:17.659539 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Jul 10 01:10:17.659545 kernel: SRBDS: Unknown: Dependent on hypervisor status Jul 10 01:10:17.659551 kernel: GDS: Unknown: Dependent on hypervisor status Jul 10 01:10:17.659556 kernel: ITS: Mitigation: Aligned branch/return thunks Jul 10 01:10:17.659562 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Jul 10 01:10:17.659568 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Jul 10 01:10:17.659575 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Jul 10 01:10:17.659580 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Jul 10 01:10:17.659586 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format. Jul 10 01:10:17.659592 kernel: Freeing SMP alternatives memory: 32K Jul 10 01:10:17.659597 kernel: pid_max: default: 131072 minimum: 1024 Jul 10 01:10:17.659603 kernel: LSM: Security Framework initializing Jul 10 01:10:17.659612 kernel: SELinux: Initializing. Jul 10 01:10:17.659621 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Jul 10 01:10:17.659629 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Jul 10 01:10:17.659640 kernel: smpboot: CPU0: Intel(R) Xeon(R) E-2278G CPU @ 3.40GHz (family: 0x6, model: 0x9e, stepping: 0xd) Jul 10 01:10:17.659646 kernel: Performance Events: Skylake events, core PMU driver. Jul 10 01:10:17.659651 kernel: core: CPUID marked event: 'cpu cycles' unavailable Jul 10 01:10:17.659657 kernel: core: CPUID marked event: 'instructions' unavailable Jul 10 01:10:17.659664 kernel: core: CPUID marked event: 'bus cycles' unavailable Jul 10 01:10:17.659670 kernel: core: CPUID marked event: 'cache references' unavailable Jul 10 01:10:17.659675 kernel: core: CPUID marked event: 'cache misses' unavailable Jul 10 01:10:17.659680 kernel: core: CPUID marked event: 'branch instructions' unavailable Jul 10 01:10:17.659686 kernel: core: CPUID marked event: 'branch misses' unavailable Jul 10 01:10:17.659692 kernel: ... version: 1 Jul 10 01:10:17.659698 kernel: ... bit width: 48 Jul 10 01:10:17.659704 kernel: ... generic registers: 4 Jul 10 01:10:17.659709 kernel: ... value mask: 0000ffffffffffff Jul 10 01:10:17.659715 kernel: ... max period: 000000007fffffff Jul 10 01:10:17.659721 kernel: ... fixed-purpose events: 0 Jul 10 01:10:17.659726 kernel: ... event mask: 000000000000000f Jul 10 01:10:17.659732 kernel: signal: max sigframe size: 1776 Jul 10 01:10:17.659738 kernel: rcu: Hierarchical SRCU implementation. Jul 10 01:10:17.659744 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Jul 10 01:10:17.659750 kernel: smp: Bringing up secondary CPUs ... Jul 10 01:10:17.659755 kernel: x86: Booting SMP configuration: Jul 10 01:10:17.659761 kernel: .... 
node #0, CPUs: #1 Jul 10 01:10:17.659766 kernel: Disabled fast string operations Jul 10 01:10:17.659772 kernel: smpboot: CPU 1 Converting physical 2 to logical package 1 Jul 10 01:10:17.659778 kernel: smpboot: CPU 1 Converting physical 0 to logical die 1 Jul 10 01:10:17.659783 kernel: smp: Brought up 1 node, 2 CPUs Jul 10 01:10:17.659789 kernel: smpboot: Max logical packages: 128 Jul 10 01:10:17.659794 kernel: smpboot: Total of 2 processors activated (13632.00 BogoMIPS) Jul 10 01:10:17.659801 kernel: devtmpfs: initialized Jul 10 01:10:17.659806 kernel: x86/mm: Memory block size: 128MB Jul 10 01:10:17.659812 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x7feff000-0x7fefffff] (4096 bytes) Jul 10 01:10:17.659818 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jul 10 01:10:17.659824 kernel: futex hash table entries: 32768 (order: 9, 2097152 bytes, linear) Jul 10 01:10:17.659830 kernel: pinctrl core: initialized pinctrl subsystem Jul 10 01:10:17.659835 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jul 10 01:10:17.659841 kernel: audit: initializing netlink subsys (disabled) Jul 10 01:10:17.659847 kernel: audit: type=2000 audit(1752109816.086:1): state=initialized audit_enabled=0 res=1 Jul 10 01:10:17.659853 kernel: thermal_sys: Registered thermal governor 'step_wise' Jul 10 01:10:17.659859 kernel: thermal_sys: Registered thermal governor 'user_space' Jul 10 01:10:17.659865 kernel: cpuidle: using governor menu Jul 10 01:10:17.659870 kernel: Simple Boot Flag at 0x36 set to 0x80 Jul 10 01:10:17.659876 kernel: ACPI: bus type PCI registered Jul 10 01:10:17.659881 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jul 10 01:10:17.659887 kernel: dca service started, version 1.12.1 Jul 10 01:10:17.659893 kernel: PCI: MMCONFIG for domain 0000 [bus 00-7f] at [mem 0xf0000000-0xf7ffffff] (base 0xf0000000) Jul 10 01:10:17.659899 kernel: PCI: MMCONFIG at [mem 0xf0000000-0xf7ffffff] reserved in E820 Jul 10 01:10:17.659905 kernel: PCI: Using configuration type 1 for base access Jul 10 01:10:17.659911 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Jul 10 01:10:17.659916 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages Jul 10 01:10:17.659922 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages Jul 10 01:10:17.659927 kernel: ACPI: Added _OSI(Module Device) Jul 10 01:10:17.659933 kernel: ACPI: Added _OSI(Processor Device) Jul 10 01:10:17.659939 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jul 10 01:10:17.659944 kernel: ACPI: Added _OSI(Linux-Dell-Video) Jul 10 01:10:17.659950 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio) Jul 10 01:10:17.659956 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics) Jul 10 01:10:17.659962 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jul 10 01:10:17.659967 kernel: ACPI: [Firmware Bug]: BIOS _OSI(Linux) query ignored Jul 10 01:10:17.659973 kernel: ACPI: Interpreter enabled Jul 10 01:10:17.659979 kernel: ACPI: PM: (supports S0 S1 S5) Jul 10 01:10:17.659985 kernel: ACPI: Using IOAPIC for interrupt routing Jul 10 01:10:17.659990 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Jul 10 01:10:17.659996 kernel: ACPI: Enabled 4 GPEs in block 00 to 0F Jul 10 01:10:17.660003 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-7f]) Jul 10 01:10:17.660078 kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Jul 10 01:10:17.660129 kernel: acpi PNP0A03:00: _OSC: platform does not support [AER LTR] Jul 10 01:10:17.660175 kernel: acpi PNP0A03:00: _OSC: OS now controls [PCIeHotplug PME PCIeCapability] Jul 10 01:10:17.660183 kernel: PCI host bridge to bus 0000:00 Jul 10 01:10:17.660230 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Jul 10 01:10:17.660273 kernel: pci_bus 0000:00: root bus resource [mem 0x000cc000-0x000dbfff window] Jul 10 01:10:17.660317 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Jul 10 01:10:17.664484 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Jul 10 01:10:17.664535 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xfeff window] Jul 10 01:10:17.664577 kernel: pci_bus 0000:00: root bus resource [bus 00-7f] Jul 10 01:10:17.664634 kernel: pci 0000:00:00.0: [8086:7190] type 00 class 0x060000 Jul 10 01:10:17.664689 kernel: pci 0000:00:01.0: [8086:7191] type 01 class 0x060400 Jul 10 01:10:17.664744 kernel: pci 0000:00:07.0: [8086:7110] type 00 class 0x060100 Jul 10 01:10:17.664797 kernel: pci 0000:00:07.1: [8086:7111] type 00 class 0x01018a Jul 10 01:10:17.664845 kernel: pci 0000:00:07.1: reg 0x20: [io 0x1060-0x106f] Jul 10 01:10:17.664892 kernel: pci 0000:00:07.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7] Jul 10 01:10:17.664939 kernel: pci 0000:00:07.1: legacy IDE quirk: reg 0x14: [io 0x03f6] Jul 10 01:10:17.664985 kernel: pci 0000:00:07.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177] Jul 10 01:10:17.665031 kernel: pci 0000:00:07.1: legacy IDE quirk: reg 0x1c: [io 0x0376] Jul 10 01:10:17.665088 kernel: pci 0000:00:07.3: [8086:7113] type 00 class 0x068000 Jul 10 01:10:17.665135 kernel: pci 0000:00:07.3: quirk: [io 0x1000-0x103f] claimed by PIIX4 ACPI Jul 10 01:10:17.665182 kernel: pci 0000:00:07.3: quirk: [io 0x1040-0x104f] claimed by PIIX4 SMB Jul 10 01:10:17.665232 kernel: pci 0000:00:07.7: [15ad:0740] type 00 class 0x088000 Jul 10 01:10:17.665279 kernel: pci 0000:00:07.7: reg 0x10: [io 0x1080-0x10bf] Jul 10 01:10:17.665327 kernel: pci 0000:00:07.7: reg 0x14: [mem 0xfebfe000-0xfebfffff 64bit] Jul 10 01:10:17.665396 kernel: pci 0000:00:0f.0: 
[15ad:0405] type 00 class 0x030000 Jul 10 01:10:17.665450 kernel: pci 0000:00:0f.0: reg 0x10: [io 0x1070-0x107f] Jul 10 01:10:17.665511 kernel: pci 0000:00:0f.0: reg 0x14: [mem 0xe8000000-0xefffffff pref] Jul 10 01:10:17.665558 kernel: pci 0000:00:0f.0: reg 0x18: [mem 0xfe000000-0xfe7fffff] Jul 10 01:10:17.665604 kernel: pci 0000:00:0f.0: reg 0x30: [mem 0x00000000-0x00007fff pref] Jul 10 01:10:17.665650 kernel: pci 0000:00:0f.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Jul 10 01:10:17.665701 kernel: pci 0000:00:11.0: [15ad:0790] type 01 class 0x060401 Jul 10 01:10:17.665755 kernel: pci 0000:00:15.0: [15ad:07a0] type 01 class 0x060400 Jul 10 01:10:17.665803 kernel: pci 0000:00:15.0: PME# supported from D0 D3hot D3cold Jul 10 01:10:17.665855 kernel: pci 0000:00:15.1: [15ad:07a0] type 01 class 0x060400 Jul 10 01:10:17.665904 kernel: pci 0000:00:15.1: PME# supported from D0 D3hot D3cold Jul 10 01:10:17.665956 kernel: pci 0000:00:15.2: [15ad:07a0] type 01 class 0x060400 Jul 10 01:10:17.666003 kernel: pci 0000:00:15.2: PME# supported from D0 D3hot D3cold Jul 10 01:10:17.666056 kernel: pci 0000:00:15.3: [15ad:07a0] type 01 class 0x060400 Jul 10 01:10:17.666104 kernel: pci 0000:00:15.3: PME# supported from D0 D3hot D3cold Jul 10 01:10:17.666156 kernel: pci 0000:00:15.4: [15ad:07a0] type 01 class 0x060400 Jul 10 01:10:17.666218 kernel: pci 0000:00:15.4: PME# supported from D0 D3hot D3cold Jul 10 01:10:17.666272 kernel: pci 0000:00:15.5: [15ad:07a0] type 01 class 0x060400 Jul 10 01:10:17.666319 kernel: pci 0000:00:15.5: PME# supported from D0 D3hot D3cold Jul 10 01:10:17.674587 kernel: pci 0000:00:15.6: [15ad:07a0] type 01 class 0x060400 Jul 10 01:10:17.674658 kernel: pci 0000:00:15.6: PME# supported from D0 D3hot D3cold Jul 10 01:10:17.674714 kernel: pci 0000:00:15.7: [15ad:07a0] type 01 class 0x060400 Jul 10 01:10:17.674764 kernel: pci 0000:00:15.7: PME# supported from D0 D3hot D3cold Jul 10 01:10:17.674816 kernel: pci 0000:00:16.0: [15ad:07a0] type 01 class 0x060400 Jul 10 01:10:17.674864 kernel: pci 0000:00:16.0: PME# supported from D0 D3hot D3cold Jul 10 01:10:17.674917 kernel: pci 0000:00:16.1: [15ad:07a0] type 01 class 0x060400 Jul 10 01:10:17.674964 kernel: pci 0000:00:16.1: PME# supported from D0 D3hot D3cold Jul 10 01:10:17.675015 kernel: pci 0000:00:16.2: [15ad:07a0] type 01 class 0x060400 Jul 10 01:10:17.675062 kernel: pci 0000:00:16.2: PME# supported from D0 D3hot D3cold Jul 10 01:10:17.675113 kernel: pci 0000:00:16.3: [15ad:07a0] type 01 class 0x060400 Jul 10 01:10:17.675160 kernel: pci 0000:00:16.3: PME# supported from D0 D3hot D3cold Jul 10 01:10:17.675212 kernel: pci 0000:00:16.4: [15ad:07a0] type 01 class 0x060400 Jul 10 01:10:17.675259 kernel: pci 0000:00:16.4: PME# supported from D0 D3hot D3cold Jul 10 01:10:17.675314 kernel: pci 0000:00:16.5: [15ad:07a0] type 01 class 0x060400 Jul 10 01:10:17.675374 kernel: pci 0000:00:16.5: PME# supported from D0 D3hot D3cold Jul 10 01:10:17.675427 kernel: pci 0000:00:16.6: [15ad:07a0] type 01 class 0x060400 Jul 10 01:10:17.675491 kernel: pci 0000:00:16.6: PME# supported from D0 D3hot D3cold Jul 10 01:10:17.675544 kernel: pci 0000:00:16.7: [15ad:07a0] type 01 class 0x060400 Jul 10 01:10:17.675592 kernel: pci 0000:00:16.7: PME# supported from D0 D3hot D3cold Jul 10 01:10:17.675642 kernel: pci 0000:00:17.0: [15ad:07a0] type 01 class 0x060400 Jul 10 01:10:17.675689 kernel: pci 0000:00:17.0: PME# supported from D0 D3hot D3cold Jul 10 01:10:17.675740 kernel: pci 0000:00:17.1: [15ad:07a0] type 01 class 0x060400 Jul 10 
01:10:17.675789 kernel: pci 0000:00:17.1: PME# supported from D0 D3hot D3cold Jul 10 01:10:17.675839 kernel: pci 0000:00:17.2: [15ad:07a0] type 01 class 0x060400 Jul 10 01:10:17.675889 kernel: pci 0000:00:17.2: PME# supported from D0 D3hot D3cold Jul 10 01:10:17.675939 kernel: pci 0000:00:17.3: [15ad:07a0] type 01 class 0x060400 Jul 10 01:10:17.675987 kernel: pci 0000:00:17.3: PME# supported from D0 D3hot D3cold Jul 10 01:10:17.676037 kernel: pci 0000:00:17.4: [15ad:07a0] type 01 class 0x060400 Jul 10 01:10:17.676085 kernel: pci 0000:00:17.4: PME# supported from D0 D3hot D3cold Jul 10 01:10:17.676139 kernel: pci 0000:00:17.5: [15ad:07a0] type 01 class 0x060400 Jul 10 01:10:17.676189 kernel: pci 0000:00:17.5: PME# supported from D0 D3hot D3cold Jul 10 01:10:17.676240 kernel: pci 0000:00:17.6: [15ad:07a0] type 01 class 0x060400 Jul 10 01:10:17.676287 kernel: pci 0000:00:17.6: PME# supported from D0 D3hot D3cold Jul 10 01:10:17.676363 kernel: pci 0000:00:17.7: [15ad:07a0] type 01 class 0x060400 Jul 10 01:10:17.676412 kernel: pci 0000:00:17.7: PME# supported from D0 D3hot D3cold Jul 10 01:10:17.676463 kernel: pci 0000:00:18.0: [15ad:07a0] type 01 class 0x060400 Jul 10 01:10:17.676513 kernel: pci 0000:00:18.0: PME# supported from D0 D3hot D3cold Jul 10 01:10:17.676563 kernel: pci 0000:00:18.1: [15ad:07a0] type 01 class 0x060400 Jul 10 01:10:17.676611 kernel: pci 0000:00:18.1: PME# supported from D0 D3hot D3cold Jul 10 01:10:17.676662 kernel: pci 0000:00:18.2: [15ad:07a0] type 01 class 0x060400 Jul 10 01:10:17.676708 kernel: pci 0000:00:18.2: PME# supported from D0 D3hot D3cold Jul 10 01:10:17.676758 kernel: pci 0000:00:18.3: [15ad:07a0] type 01 class 0x060400 Jul 10 01:10:17.676809 kernel: pci 0000:00:18.3: PME# supported from D0 D3hot D3cold Jul 10 01:10:17.676858 kernel: pci 0000:00:18.4: [15ad:07a0] type 01 class 0x060400 Jul 10 01:10:17.676907 kernel: pci 0000:00:18.4: PME# supported from D0 D3hot D3cold Jul 10 01:10:17.676957 kernel: pci 0000:00:18.5: [15ad:07a0] type 01 class 0x060400 Jul 10 01:10:17.677014 kernel: pci 0000:00:18.5: PME# supported from D0 D3hot D3cold Jul 10 01:10:17.677071 kernel: pci 0000:00:18.6: [15ad:07a0] type 01 class 0x060400 Jul 10 01:10:17.677118 kernel: pci 0000:00:18.6: PME# supported from D0 D3hot D3cold Jul 10 01:10:17.677172 kernel: pci 0000:00:18.7: [15ad:07a0] type 01 class 0x060400 Jul 10 01:10:17.677220 kernel: pci 0000:00:18.7: PME# supported from D0 D3hot D3cold Jul 10 01:10:17.677272 kernel: pci_bus 0000:01: extended config space not accessible Jul 10 01:10:17.677322 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] Jul 10 01:10:17.677402 kernel: pci_bus 0000:02: extended config space not accessible Jul 10 01:10:17.677416 kernel: acpiphp: Slot [32] registered Jul 10 01:10:17.677428 kernel: acpiphp: Slot [33] registered Jul 10 01:10:17.677437 kernel: acpiphp: Slot [34] registered Jul 10 01:10:17.677446 kernel: acpiphp: Slot [35] registered Jul 10 01:10:17.677454 kernel: acpiphp: Slot [36] registered Jul 10 01:10:17.677461 kernel: acpiphp: Slot [37] registered Jul 10 01:10:17.677466 kernel: acpiphp: Slot [38] registered Jul 10 01:10:17.677472 kernel: acpiphp: Slot [39] registered Jul 10 01:10:17.677478 kernel: acpiphp: Slot [40] registered Jul 10 01:10:17.677483 kernel: acpiphp: Slot [41] registered Jul 10 01:10:17.677491 kernel: acpiphp: Slot [42] registered Jul 10 01:10:17.677497 kernel: acpiphp: Slot [43] registered Jul 10 01:10:17.677502 kernel: acpiphp: Slot [44] registered Jul 10 01:10:17.677508 kernel: acpiphp: Slot [45] registered Jul 10 
01:10:17.677514 kernel: acpiphp: Slot [46] registered Jul 10 01:10:17.677520 kernel: acpiphp: Slot [47] registered Jul 10 01:10:17.677525 kernel: acpiphp: Slot [48] registered Jul 10 01:10:17.677531 kernel: acpiphp: Slot [49] registered Jul 10 01:10:17.677536 kernel: acpiphp: Slot [50] registered Jul 10 01:10:17.677542 kernel: acpiphp: Slot [51] registered Jul 10 01:10:17.677549 kernel: acpiphp: Slot [52] registered Jul 10 01:10:17.677555 kernel: acpiphp: Slot [53] registered Jul 10 01:10:17.677560 kernel: acpiphp: Slot [54] registered Jul 10 01:10:17.677566 kernel: acpiphp: Slot [55] registered Jul 10 01:10:17.677572 kernel: acpiphp: Slot [56] registered Jul 10 01:10:17.677577 kernel: acpiphp: Slot [57] registered Jul 10 01:10:17.677583 kernel: acpiphp: Slot [58] registered Jul 10 01:10:17.677588 kernel: acpiphp: Slot [59] registered Jul 10 01:10:17.677594 kernel: acpiphp: Slot [60] registered Jul 10 01:10:17.677601 kernel: acpiphp: Slot [61] registered Jul 10 01:10:17.677606 kernel: acpiphp: Slot [62] registered Jul 10 01:10:17.677612 kernel: acpiphp: Slot [63] registered Jul 10 01:10:17.677666 kernel: pci 0000:00:11.0: PCI bridge to [bus 02] (subtractive decode) Jul 10 01:10:17.677722 kernel: pci 0000:00:11.0: bridge window [io 0x2000-0x3fff] Jul 10 01:10:17.677778 kernel: pci 0000:00:11.0: bridge window [mem 0xfd600000-0xfdffffff] Jul 10 01:10:17.677826 kernel: pci 0000:00:11.0: bridge window [mem 0xe7b00000-0xe7ffffff 64bit pref] Jul 10 01:10:17.677872 kernel: pci 0000:00:11.0: bridge window [mem 0x000a0000-0x000bffff window] (subtractive decode) Jul 10 01:10:17.677922 kernel: pci 0000:00:11.0: bridge window [mem 0x000cc000-0x000dbfff window] (subtractive decode) Jul 10 01:10:17.677969 kernel: pci 0000:00:11.0: bridge window [mem 0xc0000000-0xfebfffff window] (subtractive decode) Jul 10 01:10:17.678026 kernel: pci 0000:00:11.0: bridge window [io 0x0000-0x0cf7 window] (subtractive decode) Jul 10 01:10:17.678073 kernel: pci 0000:00:11.0: bridge window [io 0x0d00-0xfeff window] (subtractive decode) Jul 10 01:10:17.678127 kernel: pci 0000:03:00.0: [15ad:07c0] type 00 class 0x010700 Jul 10 01:10:17.678176 kernel: pci 0000:03:00.0: reg 0x10: [io 0x4000-0x4007] Jul 10 01:10:17.678225 kernel: pci 0000:03:00.0: reg 0x14: [mem 0xfd5f8000-0xfd5fffff 64bit] Jul 10 01:10:17.678277 kernel: pci 0000:03:00.0: reg 0x30: [mem 0x00000000-0x0000ffff pref] Jul 10 01:10:17.678325 kernel: pci 0000:03:00.0: PME# supported from D0 D3hot D3cold Jul 10 01:10:17.678507 kernel: pci 0000:03:00.0: disabling ASPM on pre-1.1 PCIe device. 
You can enable it with 'pcie_aspm=force' Jul 10 01:10:17.678558 kernel: pci 0000:00:15.0: PCI bridge to [bus 03] Jul 10 01:10:17.678604 kernel: pci 0000:00:15.0: bridge window [io 0x4000-0x4fff] Jul 10 01:10:17.678652 kernel: pci 0000:00:15.0: bridge window [mem 0xfd500000-0xfd5fffff] Jul 10 01:10:17.678702 kernel: pci 0000:00:15.1: PCI bridge to [bus 04] Jul 10 01:10:17.678750 kernel: pci 0000:00:15.1: bridge window [io 0x8000-0x8fff] Jul 10 01:10:17.678799 kernel: pci 0000:00:15.1: bridge window [mem 0xfd100000-0xfd1fffff] Jul 10 01:10:17.678852 kernel: pci 0000:00:15.1: bridge window [mem 0xe7800000-0xe78fffff 64bit pref] Jul 10 01:10:17.678910 kernel: pci 0000:00:15.2: PCI bridge to [bus 05] Jul 10 01:10:17.678957 kernel: pci 0000:00:15.2: bridge window [io 0xc000-0xcfff] Jul 10 01:10:17.679003 kernel: pci 0000:00:15.2: bridge window [mem 0xfcd00000-0xfcdfffff] Jul 10 01:10:17.683162 kernel: pci 0000:00:15.2: bridge window [mem 0xe7400000-0xe74fffff 64bit pref] Jul 10 01:10:17.683216 kernel: pci 0000:00:15.3: PCI bridge to [bus 06] Jul 10 01:10:17.683264 kernel: pci 0000:00:15.3: bridge window [mem 0xfc900000-0xfc9fffff] Jul 10 01:10:17.683314 kernel: pci 0000:00:15.3: bridge window [mem 0xe7000000-0xe70fffff 64bit pref] Jul 10 01:10:17.683403 kernel: pci 0000:00:15.4: PCI bridge to [bus 07] Jul 10 01:10:17.683453 kernel: pci 0000:00:15.4: bridge window [mem 0xfc500000-0xfc5fffff] Jul 10 01:10:17.683500 kernel: pci 0000:00:15.4: bridge window [mem 0xe6c00000-0xe6cfffff 64bit pref] Jul 10 01:10:17.683552 kernel: pci 0000:00:15.5: PCI bridge to [bus 08] Jul 10 01:10:17.683599 kernel: pci 0000:00:15.5: bridge window [mem 0xfc100000-0xfc1fffff] Jul 10 01:10:17.683645 kernel: pci 0000:00:15.5: bridge window [mem 0xe6800000-0xe68fffff 64bit pref] Jul 10 01:10:17.683694 kernel: pci 0000:00:15.6: PCI bridge to [bus 09] Jul 10 01:10:17.683741 kernel: pci 0000:00:15.6: bridge window [mem 0xfbd00000-0xfbdfffff] Jul 10 01:10:17.683788 kernel: pci 0000:00:15.6: bridge window [mem 0xe6400000-0xe64fffff 64bit pref] Jul 10 01:10:17.684083 kernel: pci 0000:00:15.7: PCI bridge to [bus 0a] Jul 10 01:10:17.684136 kernel: pci 0000:00:15.7: bridge window [mem 0xfb900000-0xfb9fffff] Jul 10 01:10:17.684188 kernel: pci 0000:00:15.7: bridge window [mem 0xe6000000-0xe60fffff 64bit pref] Jul 10 01:10:17.684243 kernel: pci 0000:0b:00.0: [15ad:07b0] type 00 class 0x020000 Jul 10 01:10:17.684532 kernel: pci 0000:0b:00.0: reg 0x10: [mem 0xfd4fc000-0xfd4fcfff] Jul 10 01:10:17.684594 kernel: pci 0000:0b:00.0: reg 0x14: [mem 0xfd4fd000-0xfd4fdfff] Jul 10 01:10:17.684647 kernel: pci 0000:0b:00.0: reg 0x18: [mem 0xfd4fe000-0xfd4fffff] Jul 10 01:10:17.684697 kernel: pci 0000:0b:00.0: reg 0x1c: [io 0x5000-0x500f] Jul 10 01:10:17.684746 kernel: pci 0000:0b:00.0: reg 0x30: [mem 0x00000000-0x0000ffff pref] Jul 10 01:10:17.684820 kernel: pci 0000:0b:00.0: supports D1 D2 Jul 10 01:10:17.685079 kernel: pci 0000:0b:00.0: PME# supported from D0 D1 D2 D3hot D3cold Jul 10 01:10:17.685132 kernel: pci 0000:0b:00.0: disabling ASPM on pre-1.1 PCIe device. 
You can enable it with 'pcie_aspm=force' Jul 10 01:10:17.685183 kernel: pci 0000:00:16.0: PCI bridge to [bus 0b] Jul 10 01:10:17.685232 kernel: pci 0000:00:16.0: bridge window [io 0x5000-0x5fff] Jul 10 01:10:17.685297 kernel: pci 0000:00:16.0: bridge window [mem 0xfd400000-0xfd4fffff] Jul 10 01:10:17.685593 kernel: pci 0000:00:16.1: PCI bridge to [bus 0c] Jul 10 01:10:17.685652 kernel: pci 0000:00:16.1: bridge window [io 0x9000-0x9fff] Jul 10 01:10:17.685706 kernel: pci 0000:00:16.1: bridge window [mem 0xfd000000-0xfd0fffff] Jul 10 01:10:17.685754 kernel: pci 0000:00:16.1: bridge window [mem 0xe7700000-0xe77fffff 64bit pref] Jul 10 01:10:17.685804 kernel: pci 0000:00:16.2: PCI bridge to [bus 0d] Jul 10 01:10:17.685853 kernel: pci 0000:00:16.2: bridge window [io 0xd000-0xdfff] Jul 10 01:10:17.685899 kernel: pci 0000:00:16.2: bridge window [mem 0xfcc00000-0xfccfffff] Jul 10 01:10:17.685947 kernel: pci 0000:00:16.2: bridge window [mem 0xe7300000-0xe73fffff 64bit pref] Jul 10 01:10:17.685996 kernel: pci 0000:00:16.3: PCI bridge to [bus 0e] Jul 10 01:10:17.686045 kernel: pci 0000:00:16.3: bridge window [mem 0xfc800000-0xfc8fffff] Jul 10 01:10:17.686094 kernel: pci 0000:00:16.3: bridge window [mem 0xe6f00000-0xe6ffffff 64bit pref] Jul 10 01:10:17.686143 kernel: pci 0000:00:16.4: PCI bridge to [bus 0f] Jul 10 01:10:17.686190 kernel: pci 0000:00:16.4: bridge window [mem 0xfc400000-0xfc4fffff] Jul 10 01:10:17.686238 kernel: pci 0000:00:16.4: bridge window [mem 0xe6b00000-0xe6bfffff 64bit pref] Jul 10 01:10:17.686286 kernel: pci 0000:00:16.5: PCI bridge to [bus 10] Jul 10 01:10:17.686344 kernel: pci 0000:00:16.5: bridge window [mem 0xfc000000-0xfc0fffff] Jul 10 01:10:17.686393 kernel: pci 0000:00:16.5: bridge window [mem 0xe6700000-0xe67fffff 64bit pref] Jul 10 01:10:17.686441 kernel: pci 0000:00:16.6: PCI bridge to [bus 11] Jul 10 01:10:17.686506 kernel: pci 0000:00:16.6: bridge window [mem 0xfbc00000-0xfbcfffff] Jul 10 01:10:17.686560 kernel: pci 0000:00:16.6: bridge window [mem 0xe6300000-0xe63fffff 64bit pref] Jul 10 01:10:17.686610 kernel: pci 0000:00:16.7: PCI bridge to [bus 12] Jul 10 01:10:17.686657 kernel: pci 0000:00:16.7: bridge window [mem 0xfb800000-0xfb8fffff] Jul 10 01:10:17.686704 kernel: pci 0000:00:16.7: bridge window [mem 0xe5f00000-0xe5ffffff 64bit pref] Jul 10 01:10:17.686751 kernel: pci 0000:00:17.0: PCI bridge to [bus 13] Jul 10 01:10:17.686799 kernel: pci 0000:00:17.0: bridge window [io 0x6000-0x6fff] Jul 10 01:10:17.687014 kernel: pci 0000:00:17.0: bridge window [mem 0xfd300000-0xfd3fffff] Jul 10 01:10:17.687081 kernel: pci 0000:00:17.0: bridge window [mem 0xe7a00000-0xe7afffff 64bit pref] Jul 10 01:10:17.687137 kernel: pci 0000:00:17.1: PCI bridge to [bus 14] Jul 10 01:10:17.687185 kernel: pci 0000:00:17.1: bridge window [io 0xa000-0xafff] Jul 10 01:10:17.687233 kernel: pci 0000:00:17.1: bridge window [mem 0xfcf00000-0xfcffffff] Jul 10 01:10:17.687279 kernel: pci 0000:00:17.1: bridge window [mem 0xe7600000-0xe76fffff 64bit pref] Jul 10 01:10:17.687340 kernel: pci 0000:00:17.2: PCI bridge to [bus 15] Jul 10 01:10:17.687392 kernel: pci 0000:00:17.2: bridge window [io 0xe000-0xefff] Jul 10 01:10:17.687442 kernel: pci 0000:00:17.2: bridge window [mem 0xfcb00000-0xfcbfffff] Jul 10 01:10:17.687489 kernel: pci 0000:00:17.2: bridge window [mem 0xe7200000-0xe72fffff 64bit pref] Jul 10 01:10:17.687537 kernel: pci 0000:00:17.3: PCI bridge to [bus 16] Jul 10 01:10:17.687600 kernel: pci 0000:00:17.3: bridge window [mem 0xfc700000-0xfc7fffff] Jul 10 01:10:17.687672 kernel: pci 
0000:00:17.3: bridge window [mem 0xe6e00000-0xe6efffff 64bit pref] Jul 10 01:10:17.687722 kernel: pci 0000:00:17.4: PCI bridge to [bus 17] Jul 10 01:10:17.687769 kernel: pci 0000:00:17.4: bridge window [mem 0xfc300000-0xfc3fffff] Jul 10 01:10:17.687830 kernel: pci 0000:00:17.4: bridge window [mem 0xe6a00000-0xe6afffff 64bit pref] Jul 10 01:10:17.687883 kernel: pci 0000:00:17.5: PCI bridge to [bus 18] Jul 10 01:10:17.687930 kernel: pci 0000:00:17.5: bridge window [mem 0xfbf00000-0xfbffffff] Jul 10 01:10:17.687976 kernel: pci 0000:00:17.5: bridge window [mem 0xe6600000-0xe66fffff 64bit pref] Jul 10 01:10:17.688024 kernel: pci 0000:00:17.6: PCI bridge to [bus 19] Jul 10 01:10:17.688070 kernel: pci 0000:00:17.6: bridge window [mem 0xfbb00000-0xfbbfffff] Jul 10 01:10:17.688117 kernel: pci 0000:00:17.6: bridge window [mem 0xe6200000-0xe62fffff 64bit pref] Jul 10 01:10:17.688166 kernel: pci 0000:00:17.7: PCI bridge to [bus 1a] Jul 10 01:10:17.688212 kernel: pci 0000:00:17.7: bridge window [mem 0xfb700000-0xfb7fffff] Jul 10 01:10:17.688261 kernel: pci 0000:00:17.7: bridge window [mem 0xe5e00000-0xe5efffff 64bit pref] Jul 10 01:10:17.688310 kernel: pci 0000:00:18.0: PCI bridge to [bus 1b] Jul 10 01:10:17.688365 kernel: pci 0000:00:18.0: bridge window [io 0x7000-0x7fff] Jul 10 01:10:17.688412 kernel: pci 0000:00:18.0: bridge window [mem 0xfd200000-0xfd2fffff] Jul 10 01:10:17.688463 kernel: pci 0000:00:18.0: bridge window [mem 0xe7900000-0xe79fffff 64bit pref] Jul 10 01:10:17.688513 kernel: pci 0000:00:18.1: PCI bridge to [bus 1c] Jul 10 01:10:17.688560 kernel: pci 0000:00:18.1: bridge window [io 0xb000-0xbfff] Jul 10 01:10:17.688607 kernel: pci 0000:00:18.1: bridge window [mem 0xfce00000-0xfcefffff] Jul 10 01:10:17.688655 kernel: pci 0000:00:18.1: bridge window [mem 0xe7500000-0xe75fffff 64bit pref] Jul 10 01:10:17.688704 kernel: pci 0000:00:18.2: PCI bridge to [bus 1d] Jul 10 01:10:17.689063 kernel: pci 0000:00:18.2: bridge window [mem 0xfca00000-0xfcafffff] Jul 10 01:10:17.689120 kernel: pci 0000:00:18.2: bridge window [mem 0xe7100000-0xe71fffff 64bit pref] Jul 10 01:10:17.689170 kernel: pci 0000:00:18.3: PCI bridge to [bus 1e] Jul 10 01:10:17.689220 kernel: pci 0000:00:18.3: bridge window [mem 0xfc600000-0xfc6fffff] Jul 10 01:10:17.689268 kernel: pci 0000:00:18.3: bridge window [mem 0xe6d00000-0xe6dfffff 64bit pref] Jul 10 01:10:17.689318 kernel: pci 0000:00:18.4: PCI bridge to [bus 1f] Jul 10 01:10:17.689443 kernel: pci 0000:00:18.4: bridge window [mem 0xfc200000-0xfc2fffff] Jul 10 01:10:17.689491 kernel: pci 0000:00:18.4: bridge window [mem 0xe6900000-0xe69fffff 64bit pref] Jul 10 01:10:17.689541 kernel: pci 0000:00:18.5: PCI bridge to [bus 20] Jul 10 01:10:17.689750 kernel: pci 0000:00:18.5: bridge window [mem 0xfbe00000-0xfbefffff] Jul 10 01:10:17.689800 kernel: pci 0000:00:18.5: bridge window [mem 0xe6500000-0xe65fffff 64bit pref] Jul 10 01:10:17.689849 kernel: pci 0000:00:18.6: PCI bridge to [bus 21] Jul 10 01:10:17.689897 kernel: pci 0000:00:18.6: bridge window [mem 0xfba00000-0xfbafffff] Jul 10 01:10:17.690234 kernel: pci 0000:00:18.6: bridge window [mem 0xe6100000-0xe61fffff 64bit pref] Jul 10 01:10:17.690308 kernel: pci 0000:00:18.7: PCI bridge to [bus 22] Jul 10 01:10:17.690368 kernel: pci 0000:00:18.7: bridge window [mem 0xfb600000-0xfb6fffff] Jul 10 01:10:17.690418 kernel: pci 0000:00:18.7: bridge window [mem 0xe5d00000-0xe5dfffff 64bit pref] Jul 10 01:10:17.693850 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 9 Jul 10 01:10:17.693863 kernel: ACPI: PCI: Interrupt link 
LNKB configured for IRQ 0 Jul 10 01:10:17.693869 kernel: ACPI: PCI: Interrupt link LNKB disabled Jul 10 01:10:17.693875 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Jul 10 01:10:17.693881 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 10 Jul 10 01:10:17.693889 kernel: iommu: Default domain type: Translated Jul 10 01:10:17.693895 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Jul 10 01:10:17.693972 kernel: pci 0000:00:0f.0: vgaarb: setting as boot VGA device Jul 10 01:10:17.694034 kernel: pci 0000:00:0f.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Jul 10 01:10:17.694090 kernel: pci 0000:00:0f.0: vgaarb: bridge control possible Jul 10 01:10:17.694099 kernel: vgaarb: loaded Jul 10 01:10:17.694106 kernel: pps_core: LinuxPPS API ver. 1 registered Jul 10 01:10:17.694112 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Jul 10 01:10:17.694117 kernel: PTP clock support registered Jul 10 01:10:17.694125 kernel: PCI: Using ACPI for IRQ routing Jul 10 01:10:17.694131 kernel: PCI: pci_cache_line_size set to 64 bytes Jul 10 01:10:17.694137 kernel: e820: reserve RAM buffer [mem 0x0009ec00-0x0009ffff] Jul 10 01:10:17.694142 kernel: e820: reserve RAM buffer [mem 0x7fee0000-0x7fffffff] Jul 10 01:10:17.694148 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 Jul 10 01:10:17.694154 kernel: hpet0: 16 comparators, 64-bit 14.318180 MHz counter Jul 10 01:10:17.694160 kernel: clocksource: Switched to clocksource tsc-early Jul 10 01:10:17.694165 kernel: VFS: Disk quotas dquot_6.6.0 Jul 10 01:10:17.694172 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jul 10 01:10:17.694178 kernel: pnp: PnP ACPI init Jul 10 01:10:17.694231 kernel: system 00:00: [io 0x1000-0x103f] has been reserved Jul 10 01:10:17.694277 kernel: system 00:00: [io 0x1040-0x104f] has been reserved Jul 10 01:10:17.694320 kernel: system 00:00: [io 0x0cf0-0x0cf1] has been reserved Jul 10 01:10:17.694383 kernel: system 00:04: [mem 0xfed00000-0xfed003ff] has been reserved Jul 10 01:10:17.694432 kernel: pnp 00:06: [dma 2] Jul 10 01:10:17.694485 kernel: system 00:07: [io 0xfce0-0xfcff] has been reserved Jul 10 01:10:17.694532 kernel: system 00:07: [mem 0xf0000000-0xf7ffffff] has been reserved Jul 10 01:10:17.694576 kernel: system 00:07: [mem 0xfe800000-0xfe9fffff] has been reserved Jul 10 01:10:17.694584 kernel: pnp: PnP ACPI: found 8 devices Jul 10 01:10:17.694590 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Jul 10 01:10:17.694596 kernel: NET: Registered PF_INET protocol family Jul 10 01:10:17.694602 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear) Jul 10 01:10:17.694608 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear) Jul 10 01:10:17.694615 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jul 10 01:10:17.694621 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear) Jul 10 01:10:17.694627 kernel: TCP bind hash table entries: 16384 (order: 6, 262144 bytes, linear) Jul 10 01:10:17.694632 kernel: TCP: Hash tables configured (established 16384 bind 16384) Jul 10 01:10:17.694638 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear) Jul 10 01:10:17.694644 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear) Jul 10 01:10:17.694650 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jul 10 
01:10:17.694656 kernel: NET: Registered PF_XDP protocol family Jul 10 01:10:17.694714 kernel: pci 0000:00:15.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 03] add_size 200000 add_align 100000 Jul 10 01:10:17.694768 kernel: pci 0000:00:15.3: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000 Jul 10 01:10:17.694831 kernel: pci 0000:00:15.4: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000 Jul 10 01:10:17.694884 kernel: pci 0000:00:15.5: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000 Jul 10 01:10:17.694933 kernel: pci 0000:00:15.6: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000 Jul 10 01:10:17.694981 kernel: pci 0000:00:15.7: bridge window [io 0x1000-0x0fff] to [bus 0a] add_size 1000 Jul 10 01:10:17.695030 kernel: pci 0000:00:16.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 0b] add_size 200000 add_align 100000 Jul 10 01:10:17.695081 kernel: pci 0000:00:16.3: bridge window [io 0x1000-0x0fff] to [bus 0e] add_size 1000 Jul 10 01:10:17.695130 kernel: pci 0000:00:16.4: bridge window [io 0x1000-0x0fff] to [bus 0f] add_size 1000 Jul 10 01:10:17.695183 kernel: pci 0000:00:16.5: bridge window [io 0x1000-0x0fff] to [bus 10] add_size 1000 Jul 10 01:10:17.695235 kernel: pci 0000:00:16.6: bridge window [io 0x1000-0x0fff] to [bus 11] add_size 1000 Jul 10 01:10:17.695285 kernel: pci 0000:00:16.7: bridge window [io 0x1000-0x0fff] to [bus 12] add_size 1000 Jul 10 01:10:17.695345 kernel: pci 0000:00:17.3: bridge window [io 0x1000-0x0fff] to [bus 16] add_size 1000 Jul 10 01:10:17.695401 kernel: pci 0000:00:17.4: bridge window [io 0x1000-0x0fff] to [bus 17] add_size 1000 Jul 10 01:10:17.695468 kernel: pci 0000:00:17.5: bridge window [io 0x1000-0x0fff] to [bus 18] add_size 1000 Jul 10 01:10:17.695522 kernel: pci 0000:00:17.6: bridge window [io 0x1000-0x0fff] to [bus 19] add_size 1000 Jul 10 01:10:17.695570 kernel: pci 0000:00:17.7: bridge window [io 0x1000-0x0fff] to [bus 1a] add_size 1000 Jul 10 01:10:17.695620 kernel: pci 0000:00:18.2: bridge window [io 0x1000-0x0fff] to [bus 1d] add_size 1000 Jul 10 01:10:17.695678 kernel: pci 0000:00:18.3: bridge window [io 0x1000-0x0fff] to [bus 1e] add_size 1000 Jul 10 01:10:17.695730 kernel: pci 0000:00:18.4: bridge window [io 0x1000-0x0fff] to [bus 1f] add_size 1000 Jul 10 01:10:17.695779 kernel: pci 0000:00:18.5: bridge window [io 0x1000-0x0fff] to [bus 20] add_size 1000 Jul 10 01:10:17.695828 kernel: pci 0000:00:18.6: bridge window [io 0x1000-0x0fff] to [bus 21] add_size 1000 Jul 10 01:10:17.695877 kernel: pci 0000:00:18.7: bridge window [io 0x1000-0x0fff] to [bus 22] add_size 1000 Jul 10 01:10:17.695934 kernel: pci 0000:00:15.0: BAR 15: assigned [mem 0xc0000000-0xc01fffff 64bit pref] Jul 10 01:10:17.695984 kernel: pci 0000:00:16.0: BAR 15: assigned [mem 0xc0200000-0xc03fffff 64bit pref] Jul 10 01:10:17.696037 kernel: pci 0000:00:15.3: BAR 13: no space for [io size 0x1000] Jul 10 01:10:17.696087 kernel: pci 0000:00:15.3: BAR 13: failed to assign [io size 0x1000] Jul 10 01:10:17.696135 kernel: pci 0000:00:15.4: BAR 13: no space for [io size 0x1000] Jul 10 01:10:17.696194 kernel: pci 0000:00:15.4: BAR 13: failed to assign [io size 0x1000] Jul 10 01:10:17.696255 kernel: pci 0000:00:15.5: BAR 13: no space for [io size 0x1000] Jul 10 01:10:17.696304 kernel: pci 0000:00:15.5: BAR 13: failed to assign [io size 0x1000] Jul 10 01:10:17.698429 kernel: pci 0000:00:15.6: BAR 13: no space for [io size 0x1000] Jul 10 01:10:17.698498 kernel: pci 0000:00:15.6: BAR 13: failed to assign [io size 0x1000] Jul 10 
01:10:17.698551 kernel: pci 0000:00:15.7: BAR 13: no space for [io size 0x1000] Jul 10 01:10:17.698607 kernel: pci 0000:00:15.7: BAR 13: failed to assign [io size 0x1000] Jul 10 01:10:17.698686 kernel: pci 0000:00:16.3: BAR 13: no space for [io size 0x1000] Jul 10 01:10:17.698739 kernel: pci 0000:00:16.3: BAR 13: failed to assign [io size 0x1000] Jul 10 01:10:17.698789 kernel: pci 0000:00:16.4: BAR 13: no space for [io size 0x1000] Jul 10 01:10:17.698837 kernel: pci 0000:00:16.4: BAR 13: failed to assign [io size 0x1000] Jul 10 01:10:17.698885 kernel: pci 0000:00:16.5: BAR 13: no space for [io size 0x1000] Jul 10 01:10:17.698942 kernel: pci 0000:00:16.5: BAR 13: failed to assign [io size 0x1000] Jul 10 01:10:17.698993 kernel: pci 0000:00:16.6: BAR 13: no space for [io size 0x1000] Jul 10 01:10:17.699047 kernel: pci 0000:00:16.6: BAR 13: failed to assign [io size 0x1000] Jul 10 01:10:17.699097 kernel: pci 0000:00:16.7: BAR 13: no space for [io size 0x1000] Jul 10 01:10:17.699145 kernel: pci 0000:00:16.7: BAR 13: failed to assign [io size 0x1000] Jul 10 01:10:17.699193 kernel: pci 0000:00:17.3: BAR 13: no space for [io size 0x1000] Jul 10 01:10:17.699240 kernel: pci 0000:00:17.3: BAR 13: failed to assign [io size 0x1000] Jul 10 01:10:17.699287 kernel: pci 0000:00:17.4: BAR 13: no space for [io size 0x1000] Jul 10 01:10:17.699640 kernel: pci 0000:00:17.4: BAR 13: failed to assign [io size 0x1000] Jul 10 01:10:17.699705 kernel: pci 0000:00:17.5: BAR 13: no space for [io size 0x1000] Jul 10 01:10:17.699763 kernel: pci 0000:00:17.5: BAR 13: failed to assign [io size 0x1000] Jul 10 01:10:17.699819 kernel: pci 0000:00:17.6: BAR 13: no space for [io size 0x1000] Jul 10 01:10:17.699867 kernel: pci 0000:00:17.6: BAR 13: failed to assign [io size 0x1000] Jul 10 01:10:17.699916 kernel: pci 0000:00:17.7: BAR 13: no space for [io size 0x1000] Jul 10 01:10:17.699971 kernel: pci 0000:00:17.7: BAR 13: failed to assign [io size 0x1000] Jul 10 01:10:17.700020 kernel: pci 0000:00:18.2: BAR 13: no space for [io size 0x1000] Jul 10 01:10:17.700074 kernel: pci 0000:00:18.2: BAR 13: failed to assign [io size 0x1000] Jul 10 01:10:17.700125 kernel: pci 0000:00:18.3: BAR 13: no space for [io size 0x1000] Jul 10 01:10:17.700172 kernel: pci 0000:00:18.3: BAR 13: failed to assign [io size 0x1000] Jul 10 01:10:17.700225 kernel: pci 0000:00:18.4: BAR 13: no space for [io size 0x1000] Jul 10 01:10:17.700277 kernel: pci 0000:00:18.4: BAR 13: failed to assign [io size 0x1000] Jul 10 01:10:17.700326 kernel: pci 0000:00:18.5: BAR 13: no space for [io size 0x1000] Jul 10 01:10:17.700391 kernel: pci 0000:00:18.5: BAR 13: failed to assign [io size 0x1000] Jul 10 01:10:17.700445 kernel: pci 0000:00:18.6: BAR 13: no space for [io size 0x1000] Jul 10 01:10:17.700493 kernel: pci 0000:00:18.6: BAR 13: failed to assign [io size 0x1000] Jul 10 01:10:17.700544 kernel: pci 0000:00:18.7: BAR 13: no space for [io size 0x1000] Jul 10 01:10:17.700591 kernel: pci 0000:00:18.7: BAR 13: failed to assign [io size 0x1000] Jul 10 01:10:17.700639 kernel: pci 0000:00:18.7: BAR 13: no space for [io size 0x1000] Jul 10 01:10:17.700687 kernel: pci 0000:00:18.7: BAR 13: failed to assign [io size 0x1000] Jul 10 01:10:17.700735 kernel: pci 0000:00:18.6: BAR 13: no space for [io size 0x1000] Jul 10 01:10:17.700783 kernel: pci 0000:00:18.6: BAR 13: failed to assign [io size 0x1000] Jul 10 01:10:17.700837 kernel: pci 0000:00:18.5: BAR 13: no space for [io size 0x1000] Jul 10 01:10:17.700884 kernel: pci 0000:00:18.5: BAR 13: failed to assign [io size 0x1000] 
Jul 10 01:10:17.700932 kernel: pci 0000:00:18.4: BAR 13: no space for [io size 0x1000] Jul 10 01:10:17.700991 kernel: pci 0000:00:18.4: BAR 13: failed to assign [io size 0x1000] Jul 10 01:10:17.701041 kernel: pci 0000:00:18.3: BAR 13: no space for [io size 0x1000] Jul 10 01:10:17.701088 kernel: pci 0000:00:18.3: BAR 13: failed to assign [io size 0x1000] Jul 10 01:10:17.701135 kernel: pci 0000:00:18.2: BAR 13: no space for [io size 0x1000] Jul 10 01:10:17.701182 kernel: pci 0000:00:18.2: BAR 13: failed to assign [io size 0x1000] Jul 10 01:10:17.701229 kernel: pci 0000:00:17.7: BAR 13: no space for [io size 0x1000] Jul 10 01:10:17.701277 kernel: pci 0000:00:17.7: BAR 13: failed to assign [io size 0x1000] Jul 10 01:10:17.701325 kernel: pci 0000:00:17.6: BAR 13: no space for [io size 0x1000] Jul 10 01:10:17.701827 kernel: pci 0000:00:17.6: BAR 13: failed to assign [io size 0x1000] Jul 10 01:10:17.701885 kernel: pci 0000:00:17.5: BAR 13: no space for [io size 0x1000] Jul 10 01:10:17.701935 kernel: pci 0000:00:17.5: BAR 13: failed to assign [io size 0x1000] Jul 10 01:10:17.701984 kernel: pci 0000:00:17.4: BAR 13: no space for [io size 0x1000] Jul 10 01:10:17.702047 kernel: pci 0000:00:17.4: BAR 13: failed to assign [io size 0x1000] Jul 10 01:10:17.702096 kernel: pci 0000:00:17.3: BAR 13: no space for [io size 0x1000] Jul 10 01:10:17.702144 kernel: pci 0000:00:17.3: BAR 13: failed to assign [io size 0x1000] Jul 10 01:10:17.702191 kernel: pci 0000:00:16.7: BAR 13: no space for [io size 0x1000] Jul 10 01:10:17.702245 kernel: pci 0000:00:16.7: BAR 13: failed to assign [io size 0x1000] Jul 10 01:10:17.702299 kernel: pci 0000:00:16.6: BAR 13: no space for [io size 0x1000] Jul 10 01:10:17.702426 kernel: pci 0000:00:16.6: BAR 13: failed to assign [io size 0x1000] Jul 10 01:10:17.702485 kernel: pci 0000:00:16.5: BAR 13: no space for [io size 0x1000] Jul 10 01:10:17.702546 kernel: pci 0000:00:16.5: BAR 13: failed to assign [io size 0x1000] Jul 10 01:10:17.702606 kernel: pci 0000:00:16.4: BAR 13: no space for [io size 0x1000] Jul 10 01:10:17.702660 kernel: pci 0000:00:16.4: BAR 13: failed to assign [io size 0x1000] Jul 10 01:10:17.702718 kernel: pci 0000:00:16.3: BAR 13: no space for [io size 0x1000] Jul 10 01:10:17.702766 kernel: pci 0000:00:16.3: BAR 13: failed to assign [io size 0x1000] Jul 10 01:10:17.702814 kernel: pci 0000:00:15.7: BAR 13: no space for [io size 0x1000] Jul 10 01:10:17.702867 kernel: pci 0000:00:15.7: BAR 13: failed to assign [io size 0x1000] Jul 10 01:10:17.702921 kernel: pci 0000:00:15.6: BAR 13: no space for [io size 0x1000] Jul 10 01:10:17.702972 kernel: pci 0000:00:15.6: BAR 13: failed to assign [io size 0x1000] Jul 10 01:10:17.703019 kernel: pci 0000:00:15.5: BAR 13: no space for [io size 0x1000] Jul 10 01:10:17.703067 kernel: pci 0000:00:15.5: BAR 13: failed to assign [io size 0x1000] Jul 10 01:10:17.703114 kernel: pci 0000:00:15.4: BAR 13: no space for [io size 0x1000] Jul 10 01:10:17.703172 kernel: pci 0000:00:15.4: BAR 13: failed to assign [io size 0x1000] Jul 10 01:10:17.703220 kernel: pci 0000:00:15.3: BAR 13: no space for [io size 0x1000] Jul 10 01:10:17.703267 kernel: pci 0000:00:15.3: BAR 13: failed to assign [io size 0x1000] Jul 10 01:10:17.703315 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] Jul 10 01:10:17.703389 kernel: pci 0000:00:11.0: PCI bridge to [bus 02] Jul 10 01:10:17.703446 kernel: pci 0000:00:11.0: bridge window [io 0x2000-0x3fff] Jul 10 01:10:17.703502 kernel: pci 0000:00:11.0: bridge window [mem 0xfd600000-0xfdffffff] Jul 10 01:10:17.703550 kernel: 
pci 0000:00:11.0: bridge window [mem 0xe7b00000-0xe7ffffff 64bit pref] Jul 10 01:10:17.703604 kernel: pci 0000:03:00.0: BAR 6: assigned [mem 0xfd500000-0xfd50ffff pref] Jul 10 01:10:17.703660 kernel: pci 0000:00:15.0: PCI bridge to [bus 03] Jul 10 01:10:17.703709 kernel: pci 0000:00:15.0: bridge window [io 0x4000-0x4fff] Jul 10 01:10:17.703755 kernel: pci 0000:00:15.0: bridge window [mem 0xfd500000-0xfd5fffff] Jul 10 01:10:17.703806 kernel: pci 0000:00:15.0: bridge window [mem 0xc0000000-0xc01fffff 64bit pref] Jul 10 01:10:17.703864 kernel: pci 0000:00:15.1: PCI bridge to [bus 04] Jul 10 01:10:17.703913 kernel: pci 0000:00:15.1: bridge window [io 0x8000-0x8fff] Jul 10 01:10:17.703966 kernel: pci 0000:00:15.1: bridge window [mem 0xfd100000-0xfd1fffff] Jul 10 01:10:17.704014 kernel: pci 0000:00:15.1: bridge window [mem 0xe7800000-0xe78fffff 64bit pref] Jul 10 01:10:17.704063 kernel: pci 0000:00:15.2: PCI bridge to [bus 05] Jul 10 01:10:17.704112 kernel: pci 0000:00:15.2: bridge window [io 0xc000-0xcfff] Jul 10 01:10:17.704159 kernel: pci 0000:00:15.2: bridge window [mem 0xfcd00000-0xfcdfffff] Jul 10 01:10:17.704213 kernel: pci 0000:00:15.2: bridge window [mem 0xe7400000-0xe74fffff 64bit pref] Jul 10 01:10:17.704260 kernel: pci 0000:00:15.3: PCI bridge to [bus 06] Jul 10 01:10:17.704308 kernel: pci 0000:00:15.3: bridge window [mem 0xfc900000-0xfc9fffff] Jul 10 01:10:17.704691 kernel: pci 0000:00:15.3: bridge window [mem 0xe7000000-0xe70fffff 64bit pref] Jul 10 01:10:17.704748 kernel: pci 0000:00:15.4: PCI bridge to [bus 07] Jul 10 01:10:17.704799 kernel: pci 0000:00:15.4: bridge window [mem 0xfc500000-0xfc5fffff] Jul 10 01:10:17.704850 kernel: pci 0000:00:15.4: bridge window [mem 0xe6c00000-0xe6cfffff 64bit pref] Jul 10 01:10:17.704909 kernel: pci 0000:00:15.5: PCI bridge to [bus 08] Jul 10 01:10:17.704957 kernel: pci 0000:00:15.5: bridge window [mem 0xfc100000-0xfc1fffff] Jul 10 01:10:17.705008 kernel: pci 0000:00:15.5: bridge window [mem 0xe6800000-0xe68fffff 64bit pref] Jul 10 01:10:17.705056 kernel: pci 0000:00:15.6: PCI bridge to [bus 09] Jul 10 01:10:17.705108 kernel: pci 0000:00:15.6: bridge window [mem 0xfbd00000-0xfbdfffff] Jul 10 01:10:17.705158 kernel: pci 0000:00:15.6: bridge window [mem 0xe6400000-0xe64fffff 64bit pref] Jul 10 01:10:17.705206 kernel: pci 0000:00:15.7: PCI bridge to [bus 0a] Jul 10 01:10:17.705254 kernel: pci 0000:00:15.7: bridge window [mem 0xfb900000-0xfb9fffff] Jul 10 01:10:17.705301 kernel: pci 0000:00:15.7: bridge window [mem 0xe6000000-0xe60fffff 64bit pref] Jul 10 01:10:17.705386 kernel: pci 0000:0b:00.0: BAR 6: assigned [mem 0xfd400000-0xfd40ffff pref] Jul 10 01:10:17.705440 kernel: pci 0000:00:16.0: PCI bridge to [bus 0b] Jul 10 01:10:17.705501 kernel: pci 0000:00:16.0: bridge window [io 0x5000-0x5fff] Jul 10 01:10:17.705554 kernel: pci 0000:00:16.0: bridge window [mem 0xfd400000-0xfd4fffff] Jul 10 01:10:17.705604 kernel: pci 0000:00:16.0: bridge window [mem 0xc0200000-0xc03fffff 64bit pref] Jul 10 01:10:17.705660 kernel: pci 0000:00:16.1: PCI bridge to [bus 0c] Jul 10 01:10:17.705711 kernel: pci 0000:00:16.1: bridge window [io 0x9000-0x9fff] Jul 10 01:10:17.705762 kernel: pci 0000:00:16.1: bridge window [mem 0xfd000000-0xfd0fffff] Jul 10 01:10:17.705809 kernel: pci 0000:00:16.1: bridge window [mem 0xe7700000-0xe77fffff 64bit pref] Jul 10 01:10:17.705858 kernel: pci 0000:00:16.2: PCI bridge to [bus 0d] Jul 10 01:10:17.705907 kernel: pci 0000:00:16.2: bridge window [io 0xd000-0xdfff] Jul 10 01:10:17.705958 kernel: pci 0000:00:16.2: bridge window [mem 
0xfcc00000-0xfccfffff] Jul 10 01:10:17.706006 kernel: pci 0000:00:16.2: bridge window [mem 0xe7300000-0xe73fffff 64bit pref] Jul 10 01:10:17.706055 kernel: pci 0000:00:16.3: PCI bridge to [bus 0e] Jul 10 01:10:17.706103 kernel: pci 0000:00:16.3: bridge window [mem 0xfc800000-0xfc8fffff] Jul 10 01:10:17.706151 kernel: pci 0000:00:16.3: bridge window [mem 0xe6f00000-0xe6ffffff 64bit pref] Jul 10 01:10:17.706198 kernel: pci 0000:00:16.4: PCI bridge to [bus 0f] Jul 10 01:10:17.706247 kernel: pci 0000:00:16.4: bridge window [mem 0xfc400000-0xfc4fffff] Jul 10 01:10:17.706300 kernel: pci 0000:00:16.4: bridge window [mem 0xe6b00000-0xe6bfffff 64bit pref] Jul 10 01:10:17.706372 kernel: pci 0000:00:16.5: PCI bridge to [bus 10] Jul 10 01:10:17.706423 kernel: pci 0000:00:16.5: bridge window [mem 0xfc000000-0xfc0fffff] Jul 10 01:10:17.706475 kernel: pci 0000:00:16.5: bridge window [mem 0xe6700000-0xe67fffff 64bit pref] Jul 10 01:10:17.706524 kernel: pci 0000:00:16.6: PCI bridge to [bus 11] Jul 10 01:10:17.706572 kernel: pci 0000:00:16.6: bridge window [mem 0xfbc00000-0xfbcfffff] Jul 10 01:10:17.706626 kernel: pci 0000:00:16.6: bridge window [mem 0xe6300000-0xe63fffff 64bit pref] Jul 10 01:10:17.706676 kernel: pci 0000:00:16.7: PCI bridge to [bus 12] Jul 10 01:10:17.706725 kernel: pci 0000:00:16.7: bridge window [mem 0xfb800000-0xfb8fffff] Jul 10 01:10:17.706785 kernel: pci 0000:00:16.7: bridge window [mem 0xe5f00000-0xe5ffffff 64bit pref] Jul 10 01:10:17.706840 kernel: pci 0000:00:17.0: PCI bridge to [bus 13] Jul 10 01:10:17.706889 kernel: pci 0000:00:17.0: bridge window [io 0x6000-0x6fff] Jul 10 01:10:17.706940 kernel: pci 0000:00:17.0: bridge window [mem 0xfd300000-0xfd3fffff] Jul 10 01:10:17.706987 kernel: pci 0000:00:17.0: bridge window [mem 0xe7a00000-0xe7afffff 64bit pref] Jul 10 01:10:17.707037 kernel: pci 0000:00:17.1: PCI bridge to [bus 14] Jul 10 01:10:17.707085 kernel: pci 0000:00:17.1: bridge window [io 0xa000-0xafff] Jul 10 01:10:17.707141 kernel: pci 0000:00:17.1: bridge window [mem 0xfcf00000-0xfcffffff] Jul 10 01:10:17.707190 kernel: pci 0000:00:17.1: bridge window [mem 0xe7600000-0xe76fffff 64bit pref] Jul 10 01:10:17.707245 kernel: pci 0000:00:17.2: PCI bridge to [bus 15] Jul 10 01:10:17.707307 kernel: pci 0000:00:17.2: bridge window [io 0xe000-0xefff] Jul 10 01:10:17.707410 kernel: pci 0000:00:17.2: bridge window [mem 0xfcb00000-0xfcbfffff] Jul 10 01:10:17.707459 kernel: pci 0000:00:17.2: bridge window [mem 0xe7200000-0xe72fffff 64bit pref] Jul 10 01:10:17.707510 kernel: pci 0000:00:17.3: PCI bridge to [bus 16] Jul 10 01:10:17.707557 kernel: pci 0000:00:17.3: bridge window [mem 0xfc700000-0xfc7fffff] Jul 10 01:10:17.707604 kernel: pci 0000:00:17.3: bridge window [mem 0xe6e00000-0xe6efffff 64bit pref] Jul 10 01:10:17.707659 kernel: pci 0000:00:17.4: PCI bridge to [bus 17] Jul 10 01:10:17.707707 kernel: pci 0000:00:17.4: bridge window [mem 0xfc300000-0xfc3fffff] Jul 10 01:10:17.707754 kernel: pci 0000:00:17.4: bridge window [mem 0xe6a00000-0xe6afffff 64bit pref] Jul 10 01:10:17.707802 kernel: pci 0000:00:17.5: PCI bridge to [bus 18] Jul 10 01:10:17.707850 kernel: pci 0000:00:17.5: bridge window [mem 0xfbf00000-0xfbffffff] Jul 10 01:10:17.707897 kernel: pci 0000:00:17.5: bridge window [mem 0xe6600000-0xe66fffff 64bit pref] Jul 10 01:10:17.707948 kernel: pci 0000:00:17.6: PCI bridge to [bus 19] Jul 10 01:10:17.707999 kernel: pci 0000:00:17.6: bridge window [mem 0xfbb00000-0xfbbfffff] Jul 10 01:10:17.708050 kernel: pci 0000:00:17.6: bridge window [mem 0xe6200000-0xe62fffff 64bit pref] Jul 
10 01:10:17.708097 kernel: pci 0000:00:17.7: PCI bridge to [bus 1a] Jul 10 01:10:17.708144 kernel: pci 0000:00:17.7: bridge window [mem 0xfb700000-0xfb7fffff] Jul 10 01:10:17.708191 kernel: pci 0000:00:17.7: bridge window [mem 0xe5e00000-0xe5efffff 64bit pref] Jul 10 01:10:17.708246 kernel: pci 0000:00:18.0: PCI bridge to [bus 1b] Jul 10 01:10:17.708295 kernel: pci 0000:00:18.0: bridge window [io 0x7000-0x7fff] Jul 10 01:10:17.708354 kernel: pci 0000:00:18.0: bridge window [mem 0xfd200000-0xfd2fffff] Jul 10 01:10:17.708420 kernel: pci 0000:00:18.0: bridge window [mem 0xe7900000-0xe79fffff 64bit pref] Jul 10 01:10:17.708480 kernel: pci 0000:00:18.1: PCI bridge to [bus 1c] Jul 10 01:10:17.708528 kernel: pci 0000:00:18.1: bridge window [io 0xb000-0xbfff] Jul 10 01:10:17.708576 kernel: pci 0000:00:18.1: bridge window [mem 0xfce00000-0xfcefffff] Jul 10 01:10:17.708623 kernel: pci 0000:00:18.1: bridge window [mem 0xe7500000-0xe75fffff 64bit pref] Jul 10 01:10:17.708679 kernel: pci 0000:00:18.2: PCI bridge to [bus 1d] Jul 10 01:10:17.708736 kernel: pci 0000:00:18.2: bridge window [mem 0xfca00000-0xfcafffff] Jul 10 01:10:17.708784 kernel: pci 0000:00:18.2: bridge window [mem 0xe7100000-0xe71fffff 64bit pref] Jul 10 01:10:17.708843 kernel: pci 0000:00:18.3: PCI bridge to [bus 1e] Jul 10 01:10:17.708892 kernel: pci 0000:00:18.3: bridge window [mem 0xfc600000-0xfc6fffff] Jul 10 01:10:17.708942 kernel: pci 0000:00:18.3: bridge window [mem 0xe6d00000-0xe6dfffff 64bit pref] Jul 10 01:10:17.708991 kernel: pci 0000:00:18.4: PCI bridge to [bus 1f] Jul 10 01:10:17.709176 kernel: pci 0000:00:18.4: bridge window [mem 0xfc200000-0xfc2fffff] Jul 10 01:10:17.709227 kernel: pci 0000:00:18.4: bridge window [mem 0xe6900000-0xe69fffff 64bit pref] Jul 10 01:10:17.709278 kernel: pci 0000:00:18.5: PCI bridge to [bus 20] Jul 10 01:10:17.709327 kernel: pci 0000:00:18.5: bridge window [mem 0xfbe00000-0xfbefffff] Jul 10 01:10:17.709413 kernel: pci 0000:00:18.5: bridge window [mem 0xe6500000-0xe65fffff 64bit pref] Jul 10 01:10:17.709613 kernel: pci 0000:00:18.6: PCI bridge to [bus 21] Jul 10 01:10:17.709664 kernel: pci 0000:00:18.6: bridge window [mem 0xfba00000-0xfbafffff] Jul 10 01:10:17.709713 kernel: pci 0000:00:18.6: bridge window [mem 0xe6100000-0xe61fffff 64bit pref] Jul 10 01:10:17.710058 kernel: pci 0000:00:18.7: PCI bridge to [bus 22] Jul 10 01:10:17.710115 kernel: pci 0000:00:18.7: bridge window [mem 0xfb600000-0xfb6fffff] Jul 10 01:10:17.710227 kernel: pci 0000:00:18.7: bridge window [mem 0xe5d00000-0xe5dfffff 64bit pref] Jul 10 01:10:17.710426 kernel: pci_bus 0000:00: resource 4 [mem 0x000a0000-0x000bffff window] Jul 10 01:10:17.710566 kernel: pci_bus 0000:00: resource 5 [mem 0x000cc000-0x000dbfff window] Jul 10 01:10:17.710619 kernel: pci_bus 0000:00: resource 6 [mem 0xc0000000-0xfebfffff window] Jul 10 01:10:17.710662 kernel: pci_bus 0000:00: resource 7 [io 0x0000-0x0cf7 window] Jul 10 01:10:17.710714 kernel: pci_bus 0000:00: resource 8 [io 0x0d00-0xfeff window] Jul 10 01:10:17.710765 kernel: pci_bus 0000:02: resource 0 [io 0x2000-0x3fff] Jul 10 01:10:17.710811 kernel: pci_bus 0000:02: resource 1 [mem 0xfd600000-0xfdffffff] Jul 10 01:10:17.710856 kernel: pci_bus 0000:02: resource 2 [mem 0xe7b00000-0xe7ffffff 64bit pref] Jul 10 01:10:17.710899 kernel: pci_bus 0000:02: resource 4 [mem 0x000a0000-0x000bffff window] Jul 10 01:10:17.710944 kernel: pci_bus 0000:02: resource 5 [mem 0x000cc000-0x000dbfff window] Jul 10 01:10:17.710994 kernel: pci_bus 0000:02: resource 6 [mem 0xc0000000-0xfebfffff window] Jul 10 
01:10:17.711070 kernel: pci_bus 0000:02: resource 7 [io 0x0000-0x0cf7 window] Jul 10 01:10:17.711129 kernel: pci_bus 0000:02: resource 8 [io 0x0d00-0xfeff window] Jul 10 01:10:17.711211 kernel: pci_bus 0000:03: resource 0 [io 0x4000-0x4fff] Jul 10 01:10:17.711257 kernel: pci_bus 0000:03: resource 1 [mem 0xfd500000-0xfd5fffff] Jul 10 01:10:17.711302 kernel: pci_bus 0000:03: resource 2 [mem 0xc0000000-0xc01fffff 64bit pref] Jul 10 01:10:17.711379 kernel: pci_bus 0000:04: resource 0 [io 0x8000-0x8fff] Jul 10 01:10:17.711454 kernel: pci_bus 0000:04: resource 1 [mem 0xfd100000-0xfd1fffff] Jul 10 01:10:17.711513 kernel: pci_bus 0000:04: resource 2 [mem 0xe7800000-0xe78fffff 64bit pref] Jul 10 01:10:17.711568 kernel: pci_bus 0000:05: resource 0 [io 0xc000-0xcfff] Jul 10 01:10:17.711613 kernel: pci_bus 0000:05: resource 1 [mem 0xfcd00000-0xfcdfffff] Jul 10 01:10:17.711657 kernel: pci_bus 0000:05: resource 2 [mem 0xe7400000-0xe74fffff 64bit pref] Jul 10 01:10:17.711706 kernel: pci_bus 0000:06: resource 1 [mem 0xfc900000-0xfc9fffff] Jul 10 01:10:17.711762 kernel: pci_bus 0000:06: resource 2 [mem 0xe7000000-0xe70fffff 64bit pref] Jul 10 01:10:17.711818 kernel: pci_bus 0000:07: resource 1 [mem 0xfc500000-0xfc5fffff] Jul 10 01:10:17.711867 kernel: pci_bus 0000:07: resource 2 [mem 0xe6c00000-0xe6cfffff 64bit pref] Jul 10 01:10:17.711920 kernel: pci_bus 0000:08: resource 1 [mem 0xfc100000-0xfc1fffff] Jul 10 01:10:17.711987 kernel: pci_bus 0000:08: resource 2 [mem 0xe6800000-0xe68fffff 64bit pref] Jul 10 01:10:17.712040 kernel: pci_bus 0000:09: resource 1 [mem 0xfbd00000-0xfbdfffff] Jul 10 01:10:17.712086 kernel: pci_bus 0000:09: resource 2 [mem 0xe6400000-0xe64fffff 64bit pref] Jul 10 01:10:17.712457 kernel: pci_bus 0000:0a: resource 1 [mem 0xfb900000-0xfb9fffff] Jul 10 01:10:17.712511 kernel: pci_bus 0000:0a: resource 2 [mem 0xe6000000-0xe60fffff 64bit pref] Jul 10 01:10:17.712565 kernel: pci_bus 0000:0b: resource 0 [io 0x5000-0x5fff] Jul 10 01:10:17.712909 kernel: pci_bus 0000:0b: resource 1 [mem 0xfd400000-0xfd4fffff] Jul 10 01:10:17.712966 kernel: pci_bus 0000:0b: resource 2 [mem 0xc0200000-0xc03fffff 64bit pref] Jul 10 01:10:17.713017 kernel: pci_bus 0000:0c: resource 0 [io 0x9000-0x9fff] Jul 10 01:10:17.713407 kernel: pci_bus 0000:0c: resource 1 [mem 0xfd000000-0xfd0fffff] Jul 10 01:10:17.713469 kernel: pci_bus 0000:0c: resource 2 [mem 0xe7700000-0xe77fffff 64bit pref] Jul 10 01:10:17.713530 kernel: pci_bus 0000:0d: resource 0 [io 0xd000-0xdfff] Jul 10 01:10:17.713728 kernel: pci_bus 0000:0d: resource 1 [mem 0xfcc00000-0xfccfffff] Jul 10 01:10:17.713779 kernel: pci_bus 0000:0d: resource 2 [mem 0xe7300000-0xe73fffff 64bit pref] Jul 10 01:10:17.713829 kernel: pci_bus 0000:0e: resource 1 [mem 0xfc800000-0xfc8fffff] Jul 10 01:10:17.714164 kernel: pci_bus 0000:0e: resource 2 [mem 0xe6f00000-0xe6ffffff 64bit pref] Jul 10 01:10:17.714228 kernel: pci_bus 0000:0f: resource 1 [mem 0xfc400000-0xfc4fffff] Jul 10 01:10:17.714286 kernel: pci_bus 0000:0f: resource 2 [mem 0xe6b00000-0xe6bfffff 64bit pref] Jul 10 01:10:17.714680 kernel: pci_bus 0000:10: resource 1 [mem 0xfc000000-0xfc0fffff] Jul 10 01:10:17.714735 kernel: pci_bus 0000:10: resource 2 [mem 0xe6700000-0xe67fffff 64bit pref] Jul 10 01:10:17.714787 kernel: pci_bus 0000:11: resource 1 [mem 0xfbc00000-0xfbcfffff] Jul 10 01:10:17.714842 kernel: pci_bus 0000:11: resource 2 [mem 0xe6300000-0xe63fffff 64bit pref] Jul 10 01:10:17.714892 kernel: pci_bus 0000:12: resource 1 [mem 0xfb800000-0xfb8fffff] Jul 10 01:10:17.714937 kernel: pci_bus 0000:12: resource 2 
[mem 0xe5f00000-0xe5ffffff 64bit pref] Jul 10 01:10:17.714996 kernel: pci_bus 0000:13: resource 0 [io 0x6000-0x6fff] Jul 10 01:10:17.715042 kernel: pci_bus 0000:13: resource 1 [mem 0xfd300000-0xfd3fffff] Jul 10 01:10:17.715089 kernel: pci_bus 0000:13: resource 2 [mem 0xe7a00000-0xe7afffff 64bit pref] Jul 10 01:10:17.715138 kernel: pci_bus 0000:14: resource 0 [io 0xa000-0xafff] Jul 10 01:10:17.715183 kernel: pci_bus 0000:14: resource 1 [mem 0xfcf00000-0xfcffffff] Jul 10 01:10:17.715239 kernel: pci_bus 0000:14: resource 2 [mem 0xe7600000-0xe76fffff 64bit pref] Jul 10 01:10:17.715292 kernel: pci_bus 0000:15: resource 0 [io 0xe000-0xefff] Jul 10 01:10:17.715351 kernel: pci_bus 0000:15: resource 1 [mem 0xfcb00000-0xfcbfffff] Jul 10 01:10:17.715398 kernel: pci_bus 0000:15: resource 2 [mem 0xe7200000-0xe72fffff 64bit pref] Jul 10 01:10:17.715462 kernel: pci_bus 0000:16: resource 1 [mem 0xfc700000-0xfc7fffff] Jul 10 01:10:17.715514 kernel: pci_bus 0000:16: resource 2 [mem 0xe6e00000-0xe6efffff 64bit pref] Jul 10 01:10:17.715564 kernel: pci_bus 0000:17: resource 1 [mem 0xfc300000-0xfc3fffff] Jul 10 01:10:17.715615 kernel: pci_bus 0000:17: resource 2 [mem 0xe6a00000-0xe6afffff 64bit pref] Jul 10 01:10:17.715673 kernel: pci_bus 0000:18: resource 1 [mem 0xfbf00000-0xfbffffff] Jul 10 01:10:17.715719 kernel: pci_bus 0000:18: resource 2 [mem 0xe6600000-0xe66fffff 64bit pref] Jul 10 01:10:17.715777 kernel: pci_bus 0000:19: resource 1 [mem 0xfbb00000-0xfbbfffff] Jul 10 01:10:17.715828 kernel: pci_bus 0000:19: resource 2 [mem 0xe6200000-0xe62fffff 64bit pref] Jul 10 01:10:17.715876 kernel: pci_bus 0000:1a: resource 1 [mem 0xfb700000-0xfb7fffff] Jul 10 01:10:17.715922 kernel: pci_bus 0000:1a: resource 2 [mem 0xe5e00000-0xe5efffff 64bit pref] Jul 10 01:10:17.715974 kernel: pci_bus 0000:1b: resource 0 [io 0x7000-0x7fff] Jul 10 01:10:17.716022 kernel: pci_bus 0000:1b: resource 1 [mem 0xfd200000-0xfd2fffff] Jul 10 01:10:17.716080 kernel: pci_bus 0000:1b: resource 2 [mem 0xe7900000-0xe79fffff 64bit pref] Jul 10 01:10:17.716133 kernel: pci_bus 0000:1c: resource 0 [io 0xb000-0xbfff] Jul 10 01:10:17.716192 kernel: pci_bus 0000:1c: resource 1 [mem 0xfce00000-0xfcefffff] Jul 10 01:10:17.716241 kernel: pci_bus 0000:1c: resource 2 [mem 0xe7500000-0xe75fffff 64bit pref] Jul 10 01:10:17.716290 kernel: pci_bus 0000:1d: resource 1 [mem 0xfca00000-0xfcafffff] Jul 10 01:10:17.716376 kernel: pci_bus 0000:1d: resource 2 [mem 0xe7100000-0xe71fffff 64bit pref] Jul 10 01:10:17.716430 kernel: pci_bus 0000:1e: resource 1 [mem 0xfc600000-0xfc6fffff] Jul 10 01:10:17.716475 kernel: pci_bus 0000:1e: resource 2 [mem 0xe6d00000-0xe6dfffff 64bit pref] Jul 10 01:10:17.716523 kernel: pci_bus 0000:1f: resource 1 [mem 0xfc200000-0xfc2fffff] Jul 10 01:10:17.716867 kernel: pci_bus 0000:1f: resource 2 [mem 0xe6900000-0xe69fffff 64bit pref] Jul 10 01:10:17.716931 kernel: pci_bus 0000:20: resource 1 [mem 0xfbe00000-0xfbefffff] Jul 10 01:10:17.716990 kernel: pci_bus 0000:20: resource 2 [mem 0xe6500000-0xe65fffff 64bit pref] Jul 10 01:10:17.717348 kernel: pci_bus 0000:21: resource 1 [mem 0xfba00000-0xfbafffff] Jul 10 01:10:17.717405 kernel: pci_bus 0000:21: resource 2 [mem 0xe6100000-0xe61fffff 64bit pref] Jul 10 01:10:17.717459 kernel: pci_bus 0000:22: resource 1 [mem 0xfb600000-0xfb6fffff] Jul 10 01:10:17.717702 kernel: pci_bus 0000:22: resource 2 [mem 0xe5d00000-0xe5dfffff 64bit pref] Jul 10 01:10:17.717761 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Jul 10 01:10:17.717774 kernel: PCI: CLS 32 bytes, default 64 Jul 10 
01:10:17.717781 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Jul 10 01:10:17.717787 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x311fd3cd494, max_idle_ns: 440795223879 ns Jul 10 01:10:17.717794 kernel: clocksource: Switched to clocksource tsc Jul 10 01:10:17.717800 kernel: Initialise system trusted keyrings Jul 10 01:10:17.717806 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Jul 10 01:10:17.717813 kernel: Key type asymmetric registered Jul 10 01:10:17.717819 kernel: Asymmetric key parser 'x509' registered Jul 10 01:10:17.717826 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Jul 10 01:10:17.718111 kernel: io scheduler mq-deadline registered Jul 10 01:10:17.718122 kernel: io scheduler kyber registered Jul 10 01:10:17.718128 kernel: io scheduler bfq registered Jul 10 01:10:17.718198 kernel: pcieport 0000:00:15.0: PME: Signaling with IRQ 24 Jul 10 01:10:17.718272 kernel: pcieport 0000:00:15.0: pciehp: Slot #160 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 10 01:10:17.718457 kernel: pcieport 0000:00:15.1: PME: Signaling with IRQ 25 Jul 10 01:10:17.718521 kernel: pcieport 0000:00:15.1: pciehp: Slot #161 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 10 01:10:17.718576 kernel: pcieport 0000:00:15.2: PME: Signaling with IRQ 26 Jul 10 01:10:17.718916 kernel: pcieport 0000:00:15.2: pciehp: Slot #162 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 10 01:10:17.718973 kernel: pcieport 0000:00:15.3: PME: Signaling with IRQ 27 Jul 10 01:10:17.719026 kernel: pcieport 0000:00:15.3: pciehp: Slot #163 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 10 01:10:17.719083 kernel: pcieport 0000:00:15.4: PME: Signaling with IRQ 28 Jul 10 01:10:17.719133 kernel: pcieport 0000:00:15.4: pciehp: Slot #164 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 10 01:10:17.719192 kernel: pcieport 0000:00:15.5: PME: Signaling with IRQ 29 Jul 10 01:10:17.719242 kernel: pcieport 0000:00:15.5: pciehp: Slot #165 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 10 01:10:17.719298 kernel: pcieport 0000:00:15.6: PME: Signaling with IRQ 30 Jul 10 01:10:17.719370 kernel: pcieport 0000:00:15.6: pciehp: Slot #166 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 10 01:10:17.719423 kernel: pcieport 0000:00:15.7: PME: Signaling with IRQ 31 Jul 10 01:10:17.719473 kernel: pcieport 0000:00:15.7: pciehp: Slot #167 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 10 01:10:17.719531 kernel: pcieport 0000:00:16.0: PME: Signaling with IRQ 32 Jul 10 01:10:17.719582 kernel: pcieport 0000:00:16.0: pciehp: Slot #192 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 10 01:10:17.719631 kernel: pcieport 0000:00:16.1: PME: Signaling with IRQ 33 Jul 10 01:10:17.719679 kernel: pcieport 0000:00:16.1: pciehp: Slot #193 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 10 01:10:17.719731 kernel: pcieport 0000:00:16.2: PME: Signaling with IRQ 34 Jul 10 01:10:17.719787 kernel: pcieport 0000:00:16.2: pciehp: 
Slot #194 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 10 01:10:17.719851 kernel: pcieport 0000:00:16.3: PME: Signaling with IRQ 35 Jul 10 01:10:17.719905 kernel: pcieport 0000:00:16.3: pciehp: Slot #195 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 10 01:10:17.719960 kernel: pcieport 0000:00:16.4: PME: Signaling with IRQ 36 Jul 10 01:10:17.720017 kernel: pcieport 0000:00:16.4: pciehp: Slot #196 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 10 01:10:17.720069 kernel: pcieport 0000:00:16.5: PME: Signaling with IRQ 37 Jul 10 01:10:17.720142 kernel: pcieport 0000:00:16.5: pciehp: Slot #197 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 10 01:10:17.720315 kernel: pcieport 0000:00:16.6: PME: Signaling with IRQ 38 Jul 10 01:10:17.720410 kernel: pcieport 0000:00:16.6: pciehp: Slot #198 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 10 01:10:17.720466 kernel: pcieport 0000:00:16.7: PME: Signaling with IRQ 39 Jul 10 01:10:17.720806 kernel: pcieport 0000:00:16.7: pciehp: Slot #199 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 10 01:10:17.720869 kernel: pcieport 0000:00:17.0: PME: Signaling with IRQ 40 Jul 10 01:10:17.720923 kernel: pcieport 0000:00:17.0: pciehp: Slot #224 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 10 01:10:17.721053 kernel: pcieport 0000:00:17.1: PME: Signaling with IRQ 41 Jul 10 01:10:17.721192 kernel: pcieport 0000:00:17.1: pciehp: Slot #225 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 10 01:10:17.721326 kernel: pcieport 0000:00:17.2: PME: Signaling with IRQ 42 Jul 10 01:10:17.721394 kernel: pcieport 0000:00:17.2: pciehp: Slot #226 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 10 01:10:17.721454 kernel: pcieport 0000:00:17.3: PME: Signaling with IRQ 43 Jul 10 01:10:17.721511 kernel: pcieport 0000:00:17.3: pciehp: Slot #227 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 10 01:10:17.721571 kernel: pcieport 0000:00:17.4: PME: Signaling with IRQ 44 Jul 10 01:10:17.721628 kernel: pcieport 0000:00:17.4: pciehp: Slot #228 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 10 01:10:17.721678 kernel: pcieport 0000:00:17.5: PME: Signaling with IRQ 45 Jul 10 01:10:17.721781 kernel: pcieport 0000:00:17.5: pciehp: Slot #229 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 10 01:10:17.722188 kernel: pcieport 0000:00:17.6: PME: Signaling with IRQ 46 Jul 10 01:10:17.722258 kernel: pcieport 0000:00:17.6: pciehp: Slot #230 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 10 01:10:17.722318 kernel: pcieport 0000:00:17.7: PME: Signaling with IRQ 47 Jul 10 01:10:17.722688 kernel: pcieport 0000:00:17.7: pciehp: Slot #231 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 10 01:10:17.722748 kernel: pcieport 0000:00:18.0: PME: Signaling with IRQ 48 Jul 10 01:10:17.722811 kernel: pcieport 0000:00:18.0: pciehp: Slot #256 
AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 10 01:10:17.722871 kernel: pcieport 0000:00:18.1: PME: Signaling with IRQ 49 Jul 10 01:10:17.722921 kernel: pcieport 0000:00:18.1: pciehp: Slot #257 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 10 01:10:17.722973 kernel: pcieport 0000:00:18.2: PME: Signaling with IRQ 50 Jul 10 01:10:17.723022 kernel: pcieport 0000:00:18.2: pciehp: Slot #258 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 10 01:10:17.723079 kernel: pcieport 0000:00:18.3: PME: Signaling with IRQ 51 Jul 10 01:10:17.723129 kernel: pcieport 0000:00:18.3: pciehp: Slot #259 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 10 01:10:17.723178 kernel: pcieport 0000:00:18.4: PME: Signaling with IRQ 52 Jul 10 01:10:17.723229 kernel: pcieport 0000:00:18.4: pciehp: Slot #260 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 10 01:10:17.723286 kernel: pcieport 0000:00:18.5: PME: Signaling with IRQ 53 Jul 10 01:10:17.723343 kernel: pcieport 0000:00:18.5: pciehp: Slot #261 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 10 01:10:17.723394 kernel: pcieport 0000:00:18.6: PME: Signaling with IRQ 54 Jul 10 01:10:17.723443 kernel: pcieport 0000:00:18.6: pciehp: Slot #262 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 10 01:10:17.723497 kernel: pcieport 0000:00:18.7: PME: Signaling with IRQ 55 Jul 10 01:10:17.723556 kernel: pcieport 0000:00:18.7: pciehp: Slot #263 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jul 10 01:10:17.723566 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jul 10 01:10:17.723572 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jul 10 01:10:17.723579 kernel: 00:05: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jul 10 01:10:17.723585 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBC,PNP0f13:MOUS] at 0x60,0x64 irq 1,12 Jul 10 01:10:17.723591 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Jul 10 01:10:17.723598 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Jul 10 01:10:17.723656 kernel: rtc_cmos 00:01: registered as rtc0 Jul 10 01:10:17.723705 kernel: rtc_cmos 00:01: setting system clock to 2025-07-10T01:10:17 UTC (1752109817) Jul 10 01:10:17.723755 kernel: rtc_cmos 00:01: alarms up to one month, y3k, 114 bytes nvram Jul 10 01:10:17.723764 kernel: intel_pstate: CPU model not supported Jul 10 01:10:17.723770 kernel: NET: Registered PF_INET6 protocol family Jul 10 01:10:17.723777 kernel: Segment Routing with IPv6 Jul 10 01:10:17.723782 kernel: In-situ OAM (IOAM) with IPv6 Jul 10 01:10:17.723789 kernel: NET: Registered PF_PACKET protocol family Jul 10 01:10:17.723797 kernel: Key type dns_resolver registered Jul 10 01:10:17.723803 kernel: IPI shorthand broadcast: enabled Jul 10 01:10:17.723809 kernel: sched_clock: Marking stable (873369013, 228231407)->(1168287169, -66686749) Jul 10 01:10:17.723817 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Jul 10 01:10:17.723823 kernel: registered taskstats version 1 Jul 10 01:10:17.723829 kernel: Loading compiled-in X.509 certificates Jul 10 01:10:17.723835 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key 
for 5.15.186-flatcar: 6ebecdd7757c0df63fc51731f0b99957f4e4af16' Jul 10 01:10:17.723843 kernel: Key type .fscrypt registered Jul 10 01:10:17.723851 kernel: Key type fscrypt-provisioning registered Jul 10 01:10:17.723868 kernel: ima: No TPM chip found, activating TPM-bypass! Jul 10 01:10:17.723874 kernel: ima: Allocated hash algorithm: sha1 Jul 10 01:10:17.723880 kernel: ima: No architecture policies found Jul 10 01:10:17.723887 kernel: clk: Disabling unused clocks Jul 10 01:10:17.723893 kernel: Freeing unused kernel image (initmem) memory: 47472K Jul 10 01:10:17.723899 kernel: Write protecting the kernel read-only data: 28672k Jul 10 01:10:17.723905 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K Jul 10 01:10:17.723911 kernel: Freeing unused kernel image (rodata/data gap) memory: 612K Jul 10 01:10:17.723919 kernel: Run /init as init process Jul 10 01:10:17.723926 kernel: with arguments: Jul 10 01:10:17.723935 kernel: /init Jul 10 01:10:17.723941 kernel: with environment: Jul 10 01:10:17.723947 kernel: HOME=/ Jul 10 01:10:17.723953 kernel: TERM=linux Jul 10 01:10:17.723959 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jul 10 01:10:17.723967 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Jul 10 01:10:17.723975 systemd[1]: Detected virtualization vmware. Jul 10 01:10:17.723986 systemd[1]: Detected architecture x86-64. Jul 10 01:10:17.723995 systemd[1]: Running in initrd. Jul 10 01:10:17.724002 systemd[1]: No hostname configured, using default hostname. Jul 10 01:10:17.724008 systemd[1]: Hostname set to <localhost>. Jul 10 01:10:17.724014 systemd[1]: Initializing machine ID from random generator. Jul 10 01:10:17.724020 systemd[1]: Queued start job for default target initrd.target. Jul 10 01:10:17.724027 systemd[1]: Started systemd-ask-password-console.path. Jul 10 01:10:17.724032 systemd[1]: Reached target cryptsetup.target. Jul 10 01:10:17.724040 systemd[1]: Reached target paths.target. Jul 10 01:10:17.724046 systemd[1]: Reached target slices.target. Jul 10 01:10:17.724173 systemd[1]: Reached target swap.target. Jul 10 01:10:17.724182 systemd[1]: Reached target timers.target. Jul 10 01:10:17.724189 systemd[1]: Listening on iscsid.socket. Jul 10 01:10:17.724195 systemd[1]: Listening on iscsiuio.socket. Jul 10 01:10:17.724201 systemd[1]: Listening on systemd-journald-audit.socket. Jul 10 01:10:17.724208 systemd[1]: Listening on systemd-journald-dev-log.socket. Jul 10 01:10:17.724216 systemd[1]: Listening on systemd-journald.socket. Jul 10 01:10:17.724222 systemd[1]: Listening on systemd-networkd.socket. Jul 10 01:10:17.724228 systemd[1]: Listening on systemd-udevd-control.socket. Jul 10 01:10:17.724235 systemd[1]: Listening on systemd-udevd-kernel.socket. Jul 10 01:10:17.724241 systemd[1]: Reached target sockets.target. Jul 10 01:10:17.724247 systemd[1]: Starting kmod-static-nodes.service... Jul 10 01:10:17.724253 systemd[1]: Finished network-cleanup.service. Jul 10 01:10:17.724259 systemd[1]: Starting systemd-fsck-usr.service... Jul 10 01:10:17.724265 systemd[1]: Starting systemd-journald.service... Jul 10 01:10:17.724273 systemd[1]: Starting systemd-modules-load.service... Jul 10 01:10:17.724279 systemd[1]: Starting systemd-resolved.service.
Jul 10 01:10:17.724287 systemd[1]: Starting systemd-vconsole-setup.service... Jul 10 01:10:17.724298 systemd[1]: Finished kmod-static-nodes.service. Jul 10 01:10:17.724305 systemd[1]: Finished systemd-fsck-usr.service. Jul 10 01:10:17.724312 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Jul 10 01:10:17.724321 systemd[1]: Finished systemd-vconsole-setup.service. Jul 10 01:10:17.724575 kernel: audit: type=1130 audit(1752109817.659:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 01:10:17.724588 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Jul 10 01:10:17.724596 kernel: audit: type=1130 audit(1752109817.665:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 01:10:17.724602 systemd[1]: Starting dracut-cmdline-ask.service... Jul 10 01:10:17.724609 systemd[1]: Started systemd-resolved.service. Jul 10 01:10:17.724615 kernel: audit: type=1130 audit(1752109817.678:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 01:10:17.724623 systemd[1]: Reached target nss-lookup.target. Jul 10 01:10:17.724629 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jul 10 01:10:17.724636 systemd[1]: Finished dracut-cmdline-ask.service. Jul 10 01:10:17.724643 systemd[1]: Starting dracut-cmdline.service... Jul 10 01:10:17.724651 kernel: audit: type=1130 audit(1752109817.689:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 01:10:17.724657 kernel: Bridge firewalling registered Jul 10 01:10:17.724663 kernel: SCSI subsystem initialized Jul 10 01:10:17.724669 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jul 10 01:10:17.724676 kernel: device-mapper: uevent: version 1.0.3 Jul 10 01:10:17.724686 systemd-journald[216]: Journal started Jul 10 01:10:17.724721 systemd-journald[216]: Runtime Journal (/run/log/journal/39a66dc3b1f443da8bfe719bff3a2fd0) is 4.8M, max 38.8M, 34.0M free. Jul 10 01:10:17.725957 systemd[1]: Started systemd-journald.service. Jul 10 01:10:17.725976 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com Jul 10 01:10:17.659000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 01:10:17.665000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 01:10:17.678000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 01:10:17.689000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? 
addr=? terminal=? res=success' Jul 10 01:10:17.655040 systemd-modules-load[217]: Inserted module 'overlay' Jul 10 01:10:17.730701 kernel: audit: type=1130 audit(1752109817.725:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 01:10:17.725000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 01:10:17.676624 systemd-resolved[218]: Positive Trust Anchors: Jul 10 01:10:17.676629 systemd-resolved[218]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 10 01:10:17.676650 systemd-resolved[218]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Jul 10 01:10:17.678471 systemd-resolved[218]: Defaulting to hostname 'linux'. Jul 10 01:10:17.698662 systemd-modules-load[217]: Inserted module 'br_netfilter' Jul 10 01:10:17.732386 dracut-cmdline[232]: dracut-dracut-053 Jul 10 01:10:17.732386 dracut-cmdline[232]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LA Jul 10 01:10:17.732386 dracut-cmdline[232]: BEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=vmware flatcar.autologin verity.usrhash=6cddad5f675165861f6062277cc28875548c735477e689762fc73abc16b63a3d Jul 10 01:10:17.733391 systemd-modules-load[217]: Inserted module 'dm_multipath' Jul 10 01:10:17.732000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 01:10:17.733747 systemd[1]: Finished systemd-modules-load.service. Jul 10 01:10:17.736484 kernel: audit: type=1130 audit(1752109817.732:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 01:10:17.736607 systemd[1]: Starting systemd-sysctl.service... Jul 10 01:10:17.739000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 01:10:17.740552 systemd[1]: Finished systemd-sysctl.service. Jul 10 01:10:17.743347 kernel: audit: type=1130 audit(1752109817.739:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 01:10:17.762346 kernel: Loading iSCSI transport class v2.0-870. 
Jul 10 01:10:17.774345 kernel: iscsi: registered transport (tcp) Jul 10 01:10:17.791345 kernel: iscsi: registered transport (qla4xxx) Jul 10 01:10:17.791385 kernel: QLogic iSCSI HBA Driver Jul 10 01:10:17.811708 kernel: audit: type=1130 audit(1752109817.806:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 01:10:17.806000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 01:10:17.808093 systemd[1]: Finished dracut-cmdline.service. Jul 10 01:10:17.808715 systemd[1]: Starting dracut-pre-udev.service... Jul 10 01:10:17.847351 kernel: raid6: avx2x4 gen() 47904 MB/s Jul 10 01:10:17.863346 kernel: raid6: avx2x4 xor() 21380 MB/s Jul 10 01:10:17.880349 kernel: raid6: avx2x2 gen() 52583 MB/s Jul 10 01:10:17.897349 kernel: raid6: avx2x2 xor() 31568 MB/s Jul 10 01:10:17.914344 kernel: raid6: avx2x1 gen() 44219 MB/s Jul 10 01:10:17.931349 kernel: raid6: avx2x1 xor() 27210 MB/s Jul 10 01:10:17.948378 kernel: raid6: sse2x4 gen() 20897 MB/s Jul 10 01:10:17.965343 kernel: raid6: sse2x4 xor() 11748 MB/s Jul 10 01:10:17.982345 kernel: raid6: sse2x2 gen() 21197 MB/s Jul 10 01:10:17.999348 kernel: raid6: sse2x2 xor() 12943 MB/s Jul 10 01:10:18.016345 kernel: raid6: sse2x1 gen() 17840 MB/s Jul 10 01:10:18.033557 kernel: raid6: sse2x1 xor() 8840 MB/s Jul 10 01:10:18.033593 kernel: raid6: using algorithm avx2x2 gen() 52583 MB/s Jul 10 01:10:18.033601 kernel: raid6: .... xor() 31568 MB/s, rmw enabled Jul 10 01:10:18.034753 kernel: raid6: using avx2x2 recovery algorithm Jul 10 01:10:18.043342 kernel: xor: automatically using best checksumming function avx Jul 10 01:10:18.105527 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no Jul 10 01:10:18.110130 systemd[1]: Finished dracut-pre-udev.service. Jul 10 01:10:18.108000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 01:10:18.111000 audit: BPF prog-id=7 op=LOAD Jul 10 01:10:18.111000 audit: BPF prog-id=8 op=LOAD Jul 10 01:10:18.113346 kernel: audit: type=1130 audit(1752109818.108:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 01:10:18.112964 systemd[1]: Starting systemd-udevd.service... Jul 10 01:10:18.121261 systemd-udevd[416]: Using default interface naming scheme 'v252'. Jul 10 01:10:18.123933 systemd[1]: Started systemd-udevd.service. Jul 10 01:10:18.122000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 01:10:18.126251 systemd[1]: Starting dracut-pre-trigger.service... Jul 10 01:10:18.132945 dracut-pre-trigger[432]: rd.md=0: removing MD RAID activation Jul 10 01:10:18.147000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 01:10:18.149164 systemd[1]: Finished dracut-pre-trigger.service. Jul 10 01:10:18.149710 systemd[1]: Starting systemd-udev-trigger.service... 
Jul 10 01:10:18.213000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 01:10:18.214520 systemd[1]: Finished systemd-udev-trigger.service. Jul 10 01:10:18.272565 kernel: VMware PVSCSI driver - version 1.0.7.0-k Jul 10 01:10:18.272597 kernel: vmw_pvscsi: using 64bit dma Jul 10 01:10:18.272606 kernel: vmw_pvscsi: max_id: 16 Jul 10 01:10:18.272613 kernel: vmw_pvscsi: setting ring_pages to 8 Jul 10 01:10:18.283662 kernel: VMware vmxnet3 virtual NIC driver - version 1.6.0.0-k-NAPI Jul 10 01:10:18.283699 kernel: vmxnet3 0000:0b:00.0: # of Tx queues : 2, # of Rx queues : 2 Jul 10 01:10:18.285004 kernel: vmxnet3 0000:0b:00.0 eth0: NIC Link is Up 10000 Mbps Jul 10 01:10:18.285338 kernel: libata version 3.00 loaded. Jul 10 01:10:18.288379 kernel: vmw_pvscsi: enabling reqCallThreshold Jul 10 01:10:18.288399 kernel: vmw_pvscsi: driver-based request coalescing enabled Jul 10 01:10:18.288407 kernel: vmw_pvscsi: using MSI-X Jul 10 01:10:18.291685 kernel: scsi host0: VMware PVSCSI storage adapter rev 2, req/cmp/msg rings: 8/8/1 pages, cmd_per_lun=254 Jul 10 01:10:18.291790 kernel: vmw_pvscsi 0000:03:00.0: VMware PVSCSI rev 2 host #0 Jul 10 01:10:18.294146 kernel: scsi 0:0:0:0: Direct-Access VMware Virtual disk 2.0 PQ: 0 ANSI: 6 Jul 10 01:10:18.295341 kernel: ata_piix 0000:00:07.1: version 2.13 Jul 10 01:10:18.298061 kernel: scsi host1: ata_piix Jul 10 01:10:18.298137 kernel: scsi host2: ata_piix Jul 10 01:10:18.298195 kernel: ata1: PATA max UDMA/33 cmd 0x1f0 ctl 0x3f6 bmdma 0x1060 irq 14 Jul 10 01:10:18.298204 kernel: ata2: PATA max UDMA/33 cmd 0x170 ctl 0x376 bmdma 0x1068 irq 15 Jul 10 01:10:18.313343 kernel: cryptd: max_cpu_qlen set to 1000 Jul 10 01:10:18.317344 kernel: vmxnet3 0000:0b:00.0 ens192: renamed from eth0 Jul 10 01:10:18.468373 kernel: ata2.00: ATAPI: VMware Virtual IDE CDROM Drive, 00000001, max UDMA/33 Jul 10 01:10:18.472341 kernel: scsi 2:0:0:0: CD-ROM NECVMWar VMware IDE CDR10 1.00 PQ: 0 ANSI: 5 Jul 10 01:10:18.485804 kernel: sd 0:0:0:0: [sda] 17805312 512-byte logical blocks: (9.12 GB/8.49 GiB) Jul 10 01:10:18.515693 kernel: sd 0:0:0:0: [sda] Write Protect is off Jul 10 01:10:18.515830 kernel: sd 0:0:0:0: [sda] Mode Sense: 31 00 00 00 Jul 10 01:10:18.515898 kernel: sd 0:0:0:0: [sda] Cache data unavailable Jul 10 01:10:18.515986 kernel: sd 0:0:0:0: [sda] Assuming drive cache: write through Jul 10 01:10:18.516053 kernel: AVX2 version of gcm_enc/dec engaged. Jul 10 01:10:18.516063 kernel: AES CTR mode by8 optimization enabled Jul 10 01:10:18.516075 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jul 10 01:10:18.516082 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Jul 10 01:10:18.533353 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 1x/1x writer dvd-ram cd/rw xa/form2 cdda tray Jul 10 01:10:18.549545 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Jul 10 01:10:18.549558 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Jul 10 01:10:18.713343 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/sda6 scanned by (udev-worker) (472) Jul 10 01:10:18.713562 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. Jul 10 01:10:18.719758 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Jul 10 01:10:18.721993 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. Jul 10 01:10:18.734555 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. 
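The "sd 0:0:0:0: [sda] 17805312 512-byte logical blocks: (9.12 GB/8.49 GiB)" entry above reports the same capacity three ways; the two figures differ only in decimal (GB) versus binary (GiB) units, as this small check confirms:

    blocks = 17805312          # 512-byte logical blocks reported for sda
    size_bytes = blocks * 512  # 9,116,319,744 bytes
    print(size_bytes / 10**9)  # ~9.12 GB (decimal gigabytes)
    print(size_bytes / 2**30)  # ~8.49 GiB (binary gibibytes)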
Jul 10 01:10:18.734922 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. Jul 10 01:10:18.735718 systemd[1]: Starting disk-uuid.service... Jul 10 01:10:18.924346 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jul 10 01:10:18.943345 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jul 10 01:10:20.019803 disk-uuid[551]: The operation has completed successfully. Jul 10 01:10:20.020341 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jul 10 01:10:20.231604 systemd[1]: disk-uuid.service: Deactivated successfully. Jul 10 01:10:20.231664 systemd[1]: Finished disk-uuid.service. Jul 10 01:10:20.230000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 01:10:20.230000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 01:10:20.232240 systemd[1]: Starting verity-setup.service... Jul 10 01:10:20.255496 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Jul 10 01:10:20.611085 systemd[1]: Found device dev-mapper-usr.device. Jul 10 01:10:20.611971 systemd[1]: Mounting sysusr-usr.mount... Jul 10 01:10:20.612378 systemd[1]: Finished verity-setup.service. Jul 10 01:10:20.611000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 01:10:20.759838 systemd[1]: Mounted sysusr-usr.mount. Jul 10 01:10:20.760342 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Jul 10 01:10:20.760519 systemd[1]: Starting afterburn-network-kargs.service... Jul 10 01:10:20.761124 systemd[1]: Starting ignition-setup.service... Jul 10 01:10:20.810945 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jul 10 01:10:20.810997 kernel: BTRFS info (device sda6): using free space tree Jul 10 01:10:20.811011 kernel: BTRFS info (device sda6): has skinny extents Jul 10 01:10:20.831352 kernel: BTRFS info (device sda6): enabling ssd optimizations Jul 10 01:10:20.842488 systemd[1]: mnt-oem.mount: Deactivated successfully. Jul 10 01:10:20.853481 systemd[1]: Finished ignition-setup.service. Jul 10 01:10:20.852000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 01:10:20.854077 systemd[1]: Starting ignition-fetch-offline.service... Jul 10 01:10:20.949009 systemd[1]: Finished afterburn-network-kargs.service. Jul 10 01:10:20.947000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=afterburn-network-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 01:10:20.949632 systemd[1]: Starting parse-ip-for-networkd.service... Jul 10 01:10:20.991408 systemd[1]: Finished parse-ip-for-networkd.service. Jul 10 01:10:20.990000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 01:10:20.990000 audit: BPF prog-id=9 op=LOAD Jul 10 01:10:20.992403 systemd[1]: Starting systemd-networkd.service... 
Jul 10 01:10:21.008110 systemd-networkd[734]: lo: Link UP Jul 10 01:10:21.008117 systemd-networkd[734]: lo: Gained carrier Jul 10 01:10:21.008428 systemd-networkd[734]: Enumeration completed Jul 10 01:10:21.008476 systemd[1]: Started systemd-networkd.service. Jul 10 01:10:21.008887 systemd[1]: Reached target network.target. Jul 10 01:10:21.007000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 01:10:21.009451 systemd[1]: Starting iscsiuio.service... Jul 10 01:10:21.013122 kernel: vmxnet3 0000:0b:00.0 ens192: intr type 3, mode 0, 3 vectors allocated Jul 10 01:10:21.013223 kernel: vmxnet3 0000:0b:00.0 ens192: NIC Link is Up 10000 Mbps Jul 10 01:10:21.009608 systemd-networkd[734]: ens192: Configuring with /etc/systemd/network/10-dracut-cmdline-99.network. Jul 10 01:10:21.013377 systemd-networkd[734]: ens192: Link UP Jul 10 01:10:21.013380 systemd-networkd[734]: ens192: Gained carrier Jul 10 01:10:21.013935 systemd[1]: Started iscsiuio.service. Jul 10 01:10:21.012000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 01:10:21.014572 systemd[1]: Starting iscsid.service... Jul 10 01:10:21.016579 iscsid[739]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Jul 10 01:10:21.016579 iscsid[739]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.[:identifier]. Jul 10 01:10:21.016579 iscsid[739]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Jul 10 01:10:21.016579 iscsid[739]: If using hardware iscsi like qla4xxx this message can be ignored. Jul 10 01:10:21.016579 iscsid[739]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Jul 10 01:10:21.016000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 01:10:21.018571 iscsid[739]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Jul 10 01:10:21.017388 systemd[1]: Started iscsid.service. Jul 10 01:10:21.017930 systemd[1]: Starting dracut-initqueue.service... Jul 10 01:10:21.024752 systemd[1]: Finished dracut-initqueue.service. Jul 10 01:10:21.024892 systemd[1]: Reached target remote-fs-pre.target. Jul 10 01:10:21.023000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 01:10:21.024980 systemd[1]: Reached target remote-cryptsetup.target. Jul 10 01:10:21.025068 systemd[1]: Reached target remote-fs.target. Jul 10 01:10:21.025574 systemd[1]: Starting dracut-pre-mount.service... Jul 10 01:10:21.030000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 10 01:10:21.031217 systemd[1]: Finished dracut-pre-mount.service. Jul 10 01:10:21.169243 ignition[606]: Ignition 2.14.0 Jul 10 01:10:21.169608 ignition[606]: Stage: fetch-offline Jul 10 01:10:21.169809 ignition[606]: reading system config file "/usr/lib/ignition/base.d/base.ign" Jul 10 01:10:21.170022 ignition[606]: parsing config with SHA512: bd85a898f7da4744ff98e02742aa4854e1ceea8026a4e95cb6fb599b39b54cff0db353847df13d3c55ae196a9dc5d648977228d55e5da3ea20cd600fa7cec8ed Jul 10 01:10:21.177445 ignition[606]: no config dir at "/usr/lib/ignition/base.platform.d/vmware" Jul 10 01:10:21.177786 ignition[606]: parsed url from cmdline: "" Jul 10 01:10:21.177844 ignition[606]: no config URL provided Jul 10 01:10:21.178001 ignition[606]: reading system config file "/usr/lib/ignition/user.ign" Jul 10 01:10:21.178193 ignition[606]: no config at "/usr/lib/ignition/user.ign" Jul 10 01:10:21.193424 ignition[606]: config successfully fetched Jul 10 01:10:21.193453 ignition[606]: parsing config with SHA512: 05dd3feb3f73f7b2918d36554210d824e61d1d83a7f985186a50c7fd5129c872f8dbca579a76c4ca3d0ee21749a24a97e87ed78cfc98bbed1aa0735b38f7038a Jul 10 01:10:21.198765 unknown[606]: fetched base config from "system" Jul 10 01:10:21.198774 unknown[606]: fetched user config from "vmware" Jul 10 01:10:21.199221 ignition[606]: fetch-offline: fetch-offline passed Jul 10 01:10:21.199268 ignition[606]: Ignition finished successfully Jul 10 01:10:21.198000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 01:10:21.199982 systemd[1]: Finished ignition-fetch-offline.service. Jul 10 01:10:21.200128 systemd[1]: ignition-fetch.service was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Jul 10 01:10:21.200596 systemd[1]: Starting ignition-kargs.service... Jul 10 01:10:21.206474 ignition[754]: Ignition 2.14.0 Jul 10 01:10:21.206728 ignition[754]: Stage: kargs Jul 10 01:10:21.206909 ignition[754]: reading system config file "/usr/lib/ignition/base.d/base.ign" Jul 10 01:10:21.207068 ignition[754]: parsing config with SHA512: bd85a898f7da4744ff98e02742aa4854e1ceea8026a4e95cb6fb599b39b54cff0db353847df13d3c55ae196a9dc5d648977228d55e5da3ea20cd600fa7cec8ed Jul 10 01:10:21.208560 ignition[754]: no config dir at "/usr/lib/ignition/base.platform.d/vmware" Jul 10 01:10:21.210034 ignition[754]: kargs: kargs passed Jul 10 01:10:21.210176 ignition[754]: Ignition finished successfully Jul 10 01:10:21.209000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 01:10:21.211050 systemd[1]: Finished ignition-kargs.service. Jul 10 01:10:21.211673 systemd[1]: Starting ignition-disks.service... 
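Each Ignition stage above logs "parsing config with SHA512: ..." before acting on a config file such as /usr/lib/ignition/base.d/base.ign. Presumably the digest is simply the SHA-512 of the file's raw bytes; the short sketch below reproduces that under this assumption (it is not taken from Ignition's source):

    import hashlib

    # Assumed reconstruction of the digest Ignition prints for a config file.
    # The default path is the one named in the log above.
    def config_sha512(path="/usr/lib/ignition/base.d/base.ign"):
        with open(path, "rb") as f:
            return hashlib.sha512(f.read()).hexdigest()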
Jul 10 01:10:21.216558 ignition[760]: Ignition 2.14.0 Jul 10 01:10:21.216881 ignition[760]: Stage: disks Jul 10 01:10:21.217120 ignition[760]: reading system config file "/usr/lib/ignition/base.d/base.ign" Jul 10 01:10:21.217283 ignition[760]: parsing config with SHA512: bd85a898f7da4744ff98e02742aa4854e1ceea8026a4e95cb6fb599b39b54cff0db353847df13d3c55ae196a9dc5d648977228d55e5da3ea20cd600fa7cec8ed Jul 10 01:10:21.218756 ignition[760]: no config dir at "/usr/lib/ignition/base.platform.d/vmware" Jul 10 01:10:21.220500 ignition[760]: disks: disks passed Jul 10 01:10:21.220541 ignition[760]: Ignition finished successfully Jul 10 01:10:21.221224 systemd[1]: Finished ignition-disks.service. Jul 10 01:10:21.220000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 01:10:21.221412 systemd[1]: Reached target initrd-root-device.target. Jul 10 01:10:21.221532 systemd[1]: Reached target local-fs-pre.target. Jul 10 01:10:21.221696 systemd[1]: Reached target local-fs.target. Jul 10 01:10:21.221853 systemd[1]: Reached target sysinit.target. Jul 10 01:10:21.222014 systemd[1]: Reached target basic.target. Jul 10 01:10:21.222673 systemd[1]: Starting systemd-fsck-root.service... Jul 10 01:10:21.290850 systemd-fsck[768]: ROOT: clean, 619/1628000 files, 124060/1617920 blocks Jul 10 01:10:21.294277 systemd[1]: Finished systemd-fsck-root.service. Jul 10 01:10:21.293000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 01:10:21.294860 systemd[1]: Mounting sysroot.mount... Jul 10 01:10:21.342344 kernel: EXT4-fs (sda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Jul 10 01:10:21.342566 systemd[1]: Mounted sysroot.mount. Jul 10 01:10:21.342784 systemd[1]: Reached target initrd-root-fs.target. Jul 10 01:10:21.349807 systemd[1]: Mounting sysroot-usr.mount... Jul 10 01:10:21.350165 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met. Jul 10 01:10:21.350188 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jul 10 01:10:21.350202 systemd[1]: Reached target ignition-diskful.target. Jul 10 01:10:21.351672 systemd[1]: Mounted sysroot-usr.mount. Jul 10 01:10:21.352266 systemd[1]: Starting initrd-setup-root.service... Jul 10 01:10:21.357149 initrd-setup-root[778]: cut: /sysroot/etc/passwd: No such file or directory Jul 10 01:10:21.371725 initrd-setup-root[786]: cut: /sysroot/etc/group: No such file or directory Jul 10 01:10:21.379215 initrd-setup-root[794]: cut: /sysroot/etc/shadow: No such file or directory Jul 10 01:10:21.383317 initrd-setup-root[802]: cut: /sysroot/etc/gshadow: No such file or directory Jul 10 01:10:21.517437 systemd[1]: Finished initrd-setup-root.service. Jul 10 01:10:21.516000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 01:10:21.518238 systemd[1]: Starting ignition-mount.service... Jul 10 01:10:21.518702 systemd[1]: Starting sysroot-boot.service... Jul 10 01:10:21.522069 bash[819]: umount: /sysroot/usr/share/oem: not mounted. 
Jul 10 01:10:21.527934 ignition[820]: INFO : Ignition 2.14.0 Jul 10 01:10:21.527934 ignition[820]: INFO : Stage: mount Jul 10 01:10:21.528299 ignition[820]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Jul 10 01:10:21.528299 ignition[820]: DEBUG : parsing config with SHA512: bd85a898f7da4744ff98e02742aa4854e1ceea8026a4e95cb6fb599b39b54cff0db353847df13d3c55ae196a9dc5d648977228d55e5da3ea20cd600fa7cec8ed Jul 10 01:10:21.529282 ignition[820]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/vmware" Jul 10 01:10:21.530748 ignition[820]: INFO : mount: mount passed Jul 10 01:10:21.530748 ignition[820]: INFO : Ignition finished successfully Jul 10 01:10:21.531430 systemd[1]: Finished ignition-mount.service. Jul 10 01:10:21.530000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 01:10:21.599417 systemd[1]: Finished sysroot-boot.service. Jul 10 01:10:21.598000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 01:10:21.717275 systemd[1]: Mounting sysroot-usr-share-oem.mount... Jul 10 01:10:21.781360 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by mount (829) Jul 10 01:10:21.788969 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jul 10 01:10:21.789000 kernel: BTRFS info (device sda6): using free space tree Jul 10 01:10:21.789009 kernel: BTRFS info (device sda6): has skinny extents Jul 10 01:10:21.825353 kernel: BTRFS info (device sda6): enabling ssd optimizations Jul 10 01:10:21.833758 systemd[1]: Mounted sysroot-usr-share-oem.mount. Jul 10 01:10:21.834363 systemd[1]: Starting ignition-files.service... 
Jul 10 01:10:21.845742 ignition[849]: INFO : Ignition 2.14.0 Jul 10 01:10:21.846080 ignition[849]: INFO : Stage: files Jul 10 01:10:21.846312 ignition[849]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Jul 10 01:10:21.846511 ignition[849]: DEBUG : parsing config with SHA512: bd85a898f7da4744ff98e02742aa4854e1ceea8026a4e95cb6fb599b39b54cff0db353847df13d3c55ae196a9dc5d648977228d55e5da3ea20cd600fa7cec8ed Jul 10 01:10:21.848747 ignition[849]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/vmware" Jul 10 01:10:21.855838 ignition[849]: DEBUG : files: compiled without relabeling support, skipping Jul 10 01:10:21.859360 ignition[849]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jul 10 01:10:21.859550 ignition[849]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jul 10 01:10:21.880144 ignition[849]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jul 10 01:10:21.880458 ignition[849]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jul 10 01:10:21.887155 unknown[849]: wrote ssh authorized keys file for user: core Jul 10 01:10:21.887403 ignition[849]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jul 10 01:10:21.891913 ignition[849]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1" Jul 10 01:10:21.892124 ignition[849]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" Jul 10 01:10:21.892309 ignition[849]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jul 10 01:10:21.892533 ignition[849]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Jul 10 01:10:21.944350 ignition[849]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Jul 10 01:10:22.044438 ignition[849]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jul 10 01:10:22.044438 ignition[849]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Jul 10 01:10:22.044827 ignition[849]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Jul 10 01:10:22.044827 ignition[849]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Jul 10 01:10:22.049497 ignition[849]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Jul 10 01:10:22.049670 ignition[849]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jul 10 01:10:22.049825 ignition[849]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jul 10 01:10:22.049825 ignition[849]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jul 10 01:10:22.049825 ignition[849]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jul 10 01:10:22.053538 ignition[849]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file 
"/sysroot/etc/flatcar/update.conf" Jul 10 01:10:22.053804 ignition[849]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jul 10 01:10:22.053993 ignition[849]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Jul 10 01:10:22.054266 ignition[849]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Jul 10 01:10:22.058124 ignition[849]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/etc/systemd/system/vmtoolsd.service" Jul 10 01:10:22.058367 ignition[849]: INFO : files: createFilesystemsFiles: createFiles: op(b): oem config not found in "/usr/share/oem", looking on oem partition Jul 10 01:10:22.065490 ignition[849]: INFO : files: createFilesystemsFiles: createFiles: op(b): op(c): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3369268335" Jul 10 01:10:22.065735 ignition[849]: CRITICAL : files: createFilesystemsFiles: createFiles: op(b): op(c): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3369268335": device or resource busy Jul 10 01:10:22.065735 ignition[849]: ERROR : files: createFilesystemsFiles: createFiles: op(b): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem3369268335", trying btrfs: device or resource busy Jul 10 01:10:22.065735 ignition[849]: INFO : files: createFilesystemsFiles: createFiles: op(b): op(d): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3369268335" Jul 10 01:10:22.065735 ignition[849]: INFO : files: createFilesystemsFiles: createFiles: op(b): op(d): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3369268335" Jul 10 01:10:22.073672 ignition[849]: INFO : files: createFilesystemsFiles: createFiles: op(b): op(e): [started] unmounting "/mnt/oem3369268335" Jul 10 01:10:22.073853 ignition[849]: INFO : files: createFilesystemsFiles: createFiles: op(b): op(e): [finished] unmounting "/mnt/oem3369268335" Jul 10 01:10:22.073853 ignition[849]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/etc/systemd/system/vmtoolsd.service" Jul 10 01:10:22.073853 ignition[849]: INFO : files: createFilesystemsFiles: createFiles: op(f): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Jul 10 01:10:22.073853 ignition[849]: INFO : files: createFilesystemsFiles: createFiles: op(f): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-x86-64.raw: attempt #1 Jul 10 01:10:22.074677 systemd[1]: mnt-oem3369268335.mount: Deactivated successfully. 
Jul 10 01:10:22.584637 systemd-networkd[734]: ens192: Gained IPv6LL Jul 10 01:10:22.773582 ignition[849]: INFO : files: createFilesystemsFiles: createFiles: op(f): GET result: OK Jul 10 01:10:23.132285 ignition[849]: INFO : files: createFilesystemsFiles: createFiles: op(f): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Jul 10 01:10:23.135643 ignition[849]: INFO : files: createFilesystemsFiles: createFiles: op(10): [started] writing file "/sysroot/etc/systemd/network/00-vmware.network" Jul 10 01:10:23.135907 ignition[849]: INFO : files: createFilesystemsFiles: createFiles: op(10): [finished] writing file "/sysroot/etc/systemd/network/00-vmware.network" Jul 10 01:10:23.136101 ignition[849]: INFO : files: op(11): [started] processing unit "vmtoolsd.service" Jul 10 01:10:23.136242 ignition[849]: INFO : files: op(11): [finished] processing unit "vmtoolsd.service" Jul 10 01:10:23.136392 ignition[849]: INFO : files: op(12): [started] processing unit "containerd.service" Jul 10 01:10:23.136701 ignition[849]: INFO : files: op(12): op(13): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Jul 10 01:10:23.136982 ignition[849]: INFO : files: op(12): op(13): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Jul 10 01:10:23.137190 ignition[849]: INFO : files: op(12): [finished] processing unit "containerd.service" Jul 10 01:10:23.137340 ignition[849]: INFO : files: op(14): [started] processing unit "prepare-helm.service" Jul 10 01:10:23.137499 ignition[849]: INFO : files: op(14): op(15): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jul 10 01:10:23.137733 ignition[849]: INFO : files: op(14): op(15): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jul 10 01:10:23.137915 ignition[849]: INFO : files: op(14): [finished] processing unit "prepare-helm.service" Jul 10 01:10:23.138056 ignition[849]: INFO : files: op(16): [started] processing unit "coreos-metadata.service" Jul 10 01:10:23.138215 ignition[849]: INFO : files: op(16): op(17): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jul 10 01:10:23.138458 ignition[849]: INFO : files: op(16): op(17): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jul 10 01:10:23.138645 ignition[849]: INFO : files: op(16): [finished] processing unit "coreos-metadata.service" Jul 10 01:10:23.138791 ignition[849]: INFO : files: op(18): [started] setting preset to enabled for "vmtoolsd.service" Jul 10 01:10:23.138980 ignition[849]: INFO : files: op(18): [finished] setting preset to enabled for "vmtoolsd.service" Jul 10 01:10:23.139128 ignition[849]: INFO : files: op(19): [started] setting preset to enabled for "prepare-helm.service" Jul 10 01:10:23.139294 ignition[849]: INFO : files: op(19): [finished] setting preset to enabled for "prepare-helm.service" Jul 10 01:10:23.139451 ignition[849]: INFO : files: op(1a): [started] setting preset to disabled for "coreos-metadata.service" Jul 10 01:10:23.139603 ignition[849]: INFO : files: op(1a): op(1b): [started] removing enablement symlink(s) for "coreos-metadata.service" Jul 10 01:10:23.664924 ignition[849]: INFO : files: op(1a): op(1b): [finished] removing enablement symlink(s) for "coreos-metadata.service" 
Jul 10 01:10:23.665183 ignition[849]: INFO : files: op(1a): [finished] setting preset to disabled for "coreos-metadata.service" Jul 10 01:10:23.665461 ignition[849]: INFO : files: createResultFile: createFiles: op(1c): [started] writing file "/sysroot/etc/.ignition-result.json" Jul 10 01:10:23.665701 ignition[849]: INFO : files: createResultFile: createFiles: op(1c): [finished] writing file "/sysroot/etc/.ignition-result.json" Jul 10 01:10:23.665880 ignition[849]: INFO : files: files passed Jul 10 01:10:23.666011 ignition[849]: INFO : Ignition finished successfully Jul 10 01:10:23.667443 systemd[1]: Finished ignition-files.service. Jul 10 01:10:23.666000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 01:10:23.668013 systemd[1]: Starting initrd-setup-root-after-ignition.service... Jul 10 01:10:23.671075 kernel: kauditd_printk_skb: 24 callbacks suppressed Jul 10 01:10:23.671094 kernel: audit: type=1130 audit(1752109823.666:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 01:10:23.670718 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). Jul 10 01:10:23.671141 systemd[1]: Starting ignition-quench.service... Jul 10 01:10:23.686625 systemd[1]: ignition-quench.service: Deactivated successfully. Jul 10 01:10:23.686678 systemd[1]: Finished ignition-quench.service. Jul 10 01:10:23.685000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 01:10:23.685000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 01:10:23.691618 kernel: audit: type=1130 audit(1752109823.685:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 01:10:23.691645 kernel: audit: type=1131 audit(1752109823.685:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 01:10:23.692751 initrd-setup-root-after-ignition[876]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jul 10 01:10:23.693251 systemd[1]: Finished initrd-setup-root-after-ignition.service. Jul 10 01:10:23.695966 kernel: audit: type=1130 audit(1752109823.692:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 01:10:23.692000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 01:10:23.693424 systemd[1]: Reached target ignition-complete.target. Jul 10 01:10:23.696435 systemd[1]: Starting initrd-parse-etc.service... 
Jul 10 01:10:23.703000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 01:10:23.704461 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jul 10 01:10:23.709577 kernel: audit: type=1130 audit(1752109823.703:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 01:10:23.709593 kernel: audit: type=1131 audit(1752109823.703:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 01:10:23.703000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 01:10:23.704513 systemd[1]: Finished initrd-parse-etc.service. Jul 10 01:10:23.704681 systemd[1]: Reached target initrd-fs.target. Jul 10 01:10:23.709471 systemd[1]: Reached target initrd.target. Jul 10 01:10:23.709651 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. Jul 10 01:10:23.710110 systemd[1]: Starting dracut-pre-pivot.service... Jul 10 01:10:23.716867 systemd[1]: Finished dracut-pre-pivot.service. Jul 10 01:10:23.715000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 01:10:23.717477 systemd[1]: Starting initrd-cleanup.service... Jul 10 01:10:23.720350 kernel: audit: type=1130 audit(1752109823.715:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 01:10:23.724228 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jul 10 01:10:23.724276 systemd[1]: Finished initrd-cleanup.service. Jul 10 01:10:23.723000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 01:10:23.724856 systemd[1]: Stopped target nss-lookup.target. Jul 10 01:10:23.729422 kernel: audit: type=1130 audit(1752109823.723:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 01:10:23.729437 kernel: audit: type=1131 audit(1752109823.723:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 01:10:23.723000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 01:10:23.729316 systemd[1]: Stopped target remote-cryptsetup.target. Jul 10 01:10:23.729497 systemd[1]: Stopped target timers.target. Jul 10 01:10:23.729696 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. 
Jul 10 01:10:23.732269 kernel: audit: type=1131 audit(1752109823.728:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 01:10:23.728000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 01:10:23.729730 systemd[1]: Stopped dracut-pre-pivot.service. Jul 10 01:10:23.729876 systemd[1]: Stopped target initrd.target. Jul 10 01:10:23.732340 systemd[1]: Stopped target basic.target. Jul 10 01:10:23.732486 systemd[1]: Stopped target ignition-complete.target. Jul 10 01:10:23.732644 systemd[1]: Stopped target ignition-diskful.target. Jul 10 01:10:23.732804 systemd[1]: Stopped target initrd-root-device.target. Jul 10 01:10:23.732973 systemd[1]: Stopped target remote-fs.target. Jul 10 01:10:23.733130 systemd[1]: Stopped target remote-fs-pre.target. Jul 10 01:10:23.733309 systemd[1]: Stopped target sysinit.target. Jul 10 01:10:23.733462 systemd[1]: Stopped target local-fs.target. Jul 10 01:10:23.733619 systemd[1]: Stopped target local-fs-pre.target. Jul 10 01:10:23.733771 systemd[1]: Stopped target swap.target. Jul 10 01:10:23.732000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 01:10:23.733926 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jul 10 01:10:23.733950 systemd[1]: Stopped dracut-pre-mount.service. Jul 10 01:10:23.733000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 01:10:23.734114 systemd[1]: Stopped target cryptsetup.target. Jul 10 01:10:23.733000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 01:10:23.734244 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jul 10 01:10:23.734266 systemd[1]: Stopped dracut-initqueue.service. Jul 10 01:10:23.734449 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jul 10 01:10:23.734470 systemd[1]: Stopped ignition-fetch-offline.service. Jul 10 01:10:23.734600 systemd[1]: Stopped target paths.target. Jul 10 01:10:23.734740 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jul 10 01:10:23.736381 systemd[1]: Stopped systemd-ask-password-console.path. Jul 10 01:10:23.736490 systemd[1]: Stopped target slices.target. Jul 10 01:10:23.736649 systemd[1]: Stopped target sockets.target. Jul 10 01:10:23.736812 systemd[1]: iscsid.socket: Deactivated successfully. Jul 10 01:10:23.736827 systemd[1]: Closed iscsid.socket. Jul 10 01:10:23.736963 systemd[1]: iscsiuio.socket: Deactivated successfully. Jul 10 01:10:23.736000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 01:10:23.736977 systemd[1]: Closed iscsiuio.socket. Jul 10 01:10:23.737127 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jul 10 01:10:23.737149 systemd[1]: Stopped initrd-setup-root-after-ignition.service. 
Jul 10 01:10:23.736000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 01:10:23.737431 systemd[1]: ignition-files.service: Deactivated successfully. Jul 10 01:10:23.737451 systemd[1]: Stopped ignition-files.service. Jul 10 01:10:23.737987 systemd[1]: Stopping ignition-mount.service... Jul 10 01:10:23.736000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 01:10:23.738138 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jul 10 01:10:23.738167 systemd[1]: Stopped kmod-static-nodes.service. Jul 10 01:10:23.737000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 01:10:23.737000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 01:10:23.738676 systemd[1]: Stopping sysroot-boot.service... Jul 10 01:10:23.738783 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jul 10 01:10:23.738815 systemd[1]: Stopped systemd-udev-trigger.service. Jul 10 01:10:23.738942 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jul 10 01:10:23.738962 systemd[1]: Stopped dracut-pre-trigger.service. Jul 10 01:10:23.746146 ignition[889]: INFO : Ignition 2.14.0 Jul 10 01:10:23.746146 ignition[889]: INFO : Stage: umount Jul 10 01:10:23.746564 ignition[889]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Jul 10 01:10:23.746564 ignition[889]: DEBUG : parsing config with SHA512: bd85a898f7da4744ff98e02742aa4854e1ceea8026a4e95cb6fb599b39b54cff0db353847df13d3c55ae196a9dc5d648977228d55e5da3ea20cd600fa7cec8ed Jul 10 01:10:23.747496 ignition[889]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/vmware" Jul 10 01:10:23.748806 ignition[889]: INFO : umount: umount passed Jul 10 01:10:23.749256 ignition[889]: INFO : Ignition finished successfully Jul 10 01:10:23.749321 systemd[1]: ignition-mount.service: Deactivated successfully. Jul 10 01:10:23.749388 systemd[1]: Stopped ignition-mount.service. Jul 10 01:10:23.748000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 01:10:23.749634 systemd[1]: Stopped target network.target. Jul 10 01:10:23.749738 systemd[1]: ignition-disks.service: Deactivated successfully. Jul 10 01:10:23.748000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 01:10:23.749762 systemd[1]: Stopped ignition-disks.service. Jul 10 01:10:23.748000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 01:10:23.749918 systemd[1]: ignition-kargs.service: Deactivated successfully. 
Jul 10 01:10:23.748000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 01:10:23.749938 systemd[1]: Stopped ignition-kargs.service. Jul 10 01:10:23.750091 systemd[1]: ignition-setup.service: Deactivated successfully. Jul 10 01:10:23.750111 systemd[1]: Stopped ignition-setup.service. Jul 10 01:10:23.750304 systemd[1]: Stopping systemd-networkd.service... Jul 10 01:10:23.750643 systemd[1]: Stopping systemd-resolved.service... Jul 10 01:10:23.755101 systemd[1]: systemd-networkd.service: Deactivated successfully. Jul 10 01:10:23.755158 systemd[1]: Stopped systemd-networkd.service. Jul 10 01:10:23.754000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 01:10:23.755412 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jul 10 01:10:23.755429 systemd[1]: Closed systemd-networkd.socket. Jul 10 01:10:23.755000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 01:10:23.755000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=afterburn-network-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 01:10:23.755000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 01:10:23.755000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 01:10:23.758000 audit: BPF prog-id=9 op=UNLOAD Jul 10 01:10:23.756130 systemd[1]: Stopping network-cleanup.service... Jul 10 01:10:23.756223 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jul 10 01:10:23.756251 systemd[1]: Stopped parse-ip-for-networkd.service. Jul 10 01:10:23.756386 systemd[1]: afterburn-network-kargs.service: Deactivated successfully. Jul 10 01:10:23.759000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 01:10:23.756407 systemd[1]: Stopped afterburn-network-kargs.service. Jul 10 01:10:23.756515 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jul 10 01:10:23.760000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 01:10:23.756537 systemd[1]: Stopped systemd-sysctl.service. Jul 10 01:10:23.756691 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jul 10 01:10:23.756711 systemd[1]: Stopped systemd-modules-load.service. Jul 10 01:10:23.756848 systemd[1]: Stopping systemd-udevd.service... Jul 10 01:10:23.758146 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Jul 10 01:10:23.760944 systemd[1]: systemd-resolved.service: Deactivated successfully. Jul 10 01:10:23.760997 systemd[1]: Stopped systemd-resolved.service. 
Jul 10 01:10:23.761546 systemd[1]: network-cleanup.service: Deactivated successfully. Jul 10 01:10:23.762000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 01:10:23.761590 systemd[1]: Stopped network-cleanup.service. Jul 10 01:10:23.763148 systemd[1]: systemd-udevd.service: Deactivated successfully. Jul 10 01:10:23.763209 systemd[1]: Stopped systemd-udevd.service. Jul 10 01:10:23.763574 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jul 10 01:10:23.762000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 01:10:23.763595 systemd[1]: Closed systemd-udevd-control.socket. Jul 10 01:10:23.762000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 01:10:23.763749 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jul 10 01:10:23.763000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 01:10:23.763765 systemd[1]: Closed systemd-udevd-kernel.socket. Jul 10 01:10:23.763909 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jul 10 01:10:23.763931 systemd[1]: Stopped dracut-pre-udev.service. Jul 10 01:10:23.764082 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jul 10 01:10:23.764101 systemd[1]: Stopped dracut-cmdline.service. Jul 10 01:10:23.764000 audit: BPF prog-id=6 op=UNLOAD Jul 10 01:10:23.764000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 01:10:23.764254 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jul 10 01:10:23.764273 systemd[1]: Stopped dracut-cmdline-ask.service. Jul 10 01:10:23.764806 systemd[1]: Starting initrd-udevadm-cleanup-db.service... Jul 10 01:10:23.765264 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 10 01:10:23.765301 systemd[1]: Stopped systemd-vconsole-setup.service. Jul 10 01:10:23.767000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 01:10:23.767000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 01:10:23.768962 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jul 10 01:10:23.769009 systemd[1]: Finished initrd-udevadm-cleanup-db.service. Jul 10 01:10:23.785804 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jul 10 01:10:24.002386 systemd[1]: sysroot-boot.service: Deactivated successfully. Jul 10 01:10:24.002462 systemd[1]: Stopped sysroot-boot.service. Jul 10 01:10:24.002823 systemd[1]: Reached target initrd-switch-root.target. 
Jul 10 01:10:24.001000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 01:10:24.002967 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jul 10 01:10:24.002999 systemd[1]: Stopped initrd-setup-root.service. Jul 10 01:10:24.003730 systemd[1]: Starting initrd-switch-root.service... Jul 10 01:10:24.001000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 01:10:24.059496 systemd[1]: Switching root. Jul 10 01:10:24.060000 audit: BPF prog-id=5 op=UNLOAD Jul 10 01:10:24.060000 audit: BPF prog-id=4 op=UNLOAD Jul 10 01:10:24.060000 audit: BPF prog-id=3 op=UNLOAD Jul 10 01:10:24.062000 audit: BPF prog-id=8 op=UNLOAD Jul 10 01:10:24.062000 audit: BPF prog-id=7 op=UNLOAD Jul 10 01:10:24.079327 iscsid[739]: iscsid shutting down. Jul 10 01:10:24.079589 systemd-journald[216]: Received SIGTERM from PID 1 (systemd). Jul 10 01:10:24.079634 systemd-journald[216]: Journal stopped Jul 10 01:10:28.095172 kernel: SELinux: Class mctp_socket not defined in policy. Jul 10 01:10:28.095201 kernel: SELinux: Class anon_inode not defined in policy. Jul 10 01:10:28.095213 kernel: SELinux: the above unknown classes and permissions will be allowed Jul 10 01:10:28.095222 kernel: SELinux: policy capability network_peer_controls=1 Jul 10 01:10:28.095231 kernel: SELinux: policy capability open_perms=1 Jul 10 01:10:28.095240 kernel: SELinux: policy capability extended_socket_class=1 Jul 10 01:10:28.095251 kernel: SELinux: policy capability always_check_network=0 Jul 10 01:10:28.095257 kernel: SELinux: policy capability cgroup_seclabel=1 Jul 10 01:10:28.095263 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jul 10 01:10:28.095270 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jul 10 01:10:28.095276 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jul 10 01:10:28.095283 systemd[1]: Successfully loaded SELinux policy in 71.741ms. Jul 10 01:10:28.095292 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 7.509ms. Jul 10 01:10:28.095300 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Jul 10 01:10:28.095307 systemd[1]: Detected virtualization vmware. Jul 10 01:10:28.095314 systemd[1]: Detected architecture x86-64. Jul 10 01:10:28.095321 systemd[1]: Detected first boot. Jul 10 01:10:28.095339 systemd[1]: Initializing machine ID from random generator. Jul 10 01:10:28.095349 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). Jul 10 01:10:28.095355 systemd[1]: Populated /etc with preset unit settings. Jul 10 01:10:28.095362 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Jul 10 01:10:28.095369 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
Jul 10 01:10:28.095378 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 10 01:10:28.095385 systemd[1]: Queued start job for default target multi-user.target. Jul 10 01:10:28.095394 systemd[1]: Unnecessary job was removed for dev-sda6.device. Jul 10 01:10:28.095401 systemd[1]: Created slice system-addon\x2dconfig.slice. Jul 10 01:10:28.095408 systemd[1]: Created slice system-addon\x2drun.slice. Jul 10 01:10:28.095414 systemd[1]: Created slice system-getty.slice. Jul 10 01:10:28.095421 systemd[1]: Created slice system-modprobe.slice. Jul 10 01:10:28.095427 systemd[1]: Created slice system-serial\x2dgetty.slice. Jul 10 01:10:28.095434 systemd[1]: Created slice system-system\x2dcloudinit.slice. Jul 10 01:10:28.095442 systemd[1]: Created slice system-systemd\x2dfsck.slice. Jul 10 01:10:28.095458 systemd[1]: Created slice user.slice. Jul 10 01:10:28.095469 systemd[1]: Started systemd-ask-password-console.path. Jul 10 01:10:28.095476 systemd[1]: Started systemd-ask-password-wall.path. Jul 10 01:10:28.095483 systemd[1]: Set up automount boot.automount. Jul 10 01:10:28.095490 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. Jul 10 01:10:28.095497 systemd[1]: Reached target integritysetup.target. Jul 10 01:10:28.095504 systemd[1]: Reached target remote-cryptsetup.target. Jul 10 01:10:28.095510 systemd[1]: Reached target remote-fs.target. Jul 10 01:10:28.095520 systemd[1]: Reached target slices.target. Jul 10 01:10:28.095527 systemd[1]: Reached target swap.target. Jul 10 01:10:28.095534 systemd[1]: Reached target torcx.target. Jul 10 01:10:28.095541 systemd[1]: Reached target veritysetup.target. Jul 10 01:10:28.095548 systemd[1]: Listening on systemd-coredump.socket. Jul 10 01:10:28.095555 systemd[1]: Listening on systemd-initctl.socket. Jul 10 01:10:28.095562 systemd[1]: Listening on systemd-journald-audit.socket. Jul 10 01:10:28.095569 systemd[1]: Listening on systemd-journald-dev-log.socket. Jul 10 01:10:28.095577 systemd[1]: Listening on systemd-journald.socket. Jul 10 01:10:28.095584 systemd[1]: Listening on systemd-networkd.socket. Jul 10 01:10:28.095590 systemd[1]: Listening on systemd-udevd-control.socket. Jul 10 01:10:28.095597 systemd[1]: Listening on systemd-udevd-kernel.socket. Jul 10 01:10:28.095604 systemd[1]: Listening on systemd-userdbd.socket. Jul 10 01:10:28.095613 systemd[1]: Mounting dev-hugepages.mount... Jul 10 01:10:28.095620 systemd[1]: Mounting dev-mqueue.mount... Jul 10 01:10:28.095627 systemd[1]: Mounting media.mount... Jul 10 01:10:28.095634 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 10 01:10:28.095641 systemd[1]: Mounting sys-kernel-debug.mount... Jul 10 01:10:28.095648 systemd[1]: Mounting sys-kernel-tracing.mount... Jul 10 01:10:28.095655 systemd[1]: Mounting tmp.mount... Jul 10 01:10:28.095662 systemd[1]: Starting flatcar-tmpfiles.service... Jul 10 01:10:28.095669 systemd[1]: Starting ignition-delete-config.service... Jul 10 01:10:28.095677 systemd[1]: Starting kmod-static-nodes.service... Jul 10 01:10:28.095684 systemd[1]: Starting modprobe@configfs.service... Jul 10 01:10:28.095691 systemd[1]: Starting modprobe@dm_mod.service... Jul 10 01:10:28.095698 systemd[1]: Starting modprobe@drm.service... Jul 10 01:10:28.095705 systemd[1]: Starting modprobe@efi_pstore.service... 
Jul 10 01:10:28.095712 systemd[1]: Starting modprobe@fuse.service... Jul 10 01:10:28.095719 systemd[1]: Starting modprobe@loop.service... Jul 10 01:10:28.095726 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jul 10 01:10:28.095733 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. Jul 10 01:10:28.095741 systemd[1]: (This warning is only shown for the first unit using IP firewalling.) Jul 10 01:10:28.095748 systemd[1]: Starting systemd-journald.service... Jul 10 01:10:28.095755 systemd[1]: Starting systemd-modules-load.service... Jul 10 01:10:28.095764 systemd[1]: Starting systemd-network-generator.service... Jul 10 01:10:28.095775 systemd[1]: Starting systemd-remount-fs.service... Jul 10 01:10:28.095786 systemd[1]: Starting systemd-udev-trigger.service... Jul 10 01:10:28.095795 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 10 01:10:28.095803 systemd[1]: Mounted dev-hugepages.mount. Jul 10 01:10:28.095812 systemd[1]: Mounted dev-mqueue.mount. Jul 10 01:10:28.095819 systemd[1]: Mounted media.mount. Jul 10 01:10:28.095826 systemd[1]: Mounted sys-kernel-debug.mount. Jul 10 01:10:28.095833 systemd[1]: Mounted sys-kernel-tracing.mount. Jul 10 01:10:28.095840 systemd[1]: Mounted tmp.mount. Jul 10 01:10:28.095847 systemd[1]: Finished kmod-static-nodes.service. Jul 10 01:10:28.095854 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jul 10 01:10:28.095861 systemd[1]: Finished modprobe@configfs.service. Jul 10 01:10:28.095868 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 10 01:10:28.095876 systemd[1]: Finished modprobe@dm_mod.service. Jul 10 01:10:28.095883 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 10 01:10:28.095890 systemd[1]: Finished modprobe@drm.service. Jul 10 01:10:28.095897 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 10 01:10:28.095904 systemd[1]: Finished modprobe@efi_pstore.service. Jul 10 01:10:28.095911 systemd[1]: Finished systemd-network-generator.service. Jul 10 01:10:28.095918 systemd[1]: Finished systemd-remount-fs.service. Jul 10 01:10:28.095925 systemd[1]: Reached target network-pre.target. Jul 10 01:10:28.095932 systemd[1]: Mounting sys-kernel-config.mount... Jul 10 01:10:28.095940 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jul 10 01:10:28.095947 kernel: fuse: init (API version 7.34) Jul 10 01:10:28.095954 systemd[1]: Starting systemd-hwdb-update.service... Jul 10 01:10:28.095961 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 10 01:10:28.095968 systemd[1]: Starting systemd-random-seed.service... Jul 10 01:10:28.095975 systemd[1]: Finished flatcar-tmpfiles.service. Jul 10 01:10:28.095981 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jul 10 01:10:28.095992 systemd-journald[1050]: Journal started Jul 10 01:10:28.096027 systemd-journald[1050]: Runtime Journal (/run/log/journal/fb908cdca8cd4155bc4e19c49445f246) is 4.8M, max 38.8M, 34.0M free. Jul 10 01:10:28.096055 systemd[1]: Finished modprobe@fuse.service. 
Jul 10 01:10:27.973000 audit[1]: AVC avc: denied { audit_read } for pid=1 comm="systemd" capability=37 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Jul 10 01:10:27.973000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1 Jul 10 01:10:28.056000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 01:10:28.058000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 01:10:28.058000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 01:10:28.062000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 01:10:28.062000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 01:10:28.065000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 01:10:28.065000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 01:10:28.068000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 01:10:28.068000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 01:10:28.071000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 01:10:28.072000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 10 01:10:28.089000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Jul 10 01:10:28.089000 audit[1050]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=3 a1=7fffb7589540 a2=4000 a3=7fffb75895dc items=0 ppid=1 pid=1050 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:10:28.099574 systemd[1]: Started systemd-journald.service. Jul 10 01:10:28.089000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Jul 10 01:10:28.093000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 01:10:28.096000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 01:10:28.096000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 01:10:28.097000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 01:10:28.097000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 01:10:28.098754 systemd[1]: Finished systemd-modules-load.service. Jul 10 01:10:28.099248 systemd[1]: Mounted sys-kernel-config.mount. Jul 10 01:10:28.100169 jq[1025]: true Jul 10 01:10:28.100175 systemd[1]: Mounting sys-fs-fuse-connections.mount... Jul 10 01:10:28.101146 systemd[1]: Starting systemd-journal-flush.service... Jul 10 01:10:28.102929 jq[1066]: true Jul 10 01:10:28.104423 kernel: loop: module loaded Jul 10 01:10:28.104294 systemd[1]: Starting systemd-sysctl.service... Jul 10 01:10:28.105281 systemd[1]: Starting systemd-sysusers.service... Jul 10 01:10:28.106294 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 10 01:10:28.110000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 01:10:28.110000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 01:10:28.110278 systemd[1]: Finished modprobe@loop.service. Jul 10 01:10:28.113775 systemd[1]: Mounted sys-fs-fuse-connections.mount. Jul 10 01:10:28.114028 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Jul 10 01:10:28.114000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 10 01:10:28.116229 systemd[1]: Finished systemd-random-seed.service. Jul 10 01:10:28.116389 systemd[1]: Reached target first-boot-complete.target. Jul 10 01:10:28.120855 systemd-journald[1050]: Time spent on flushing to /var/log/journal/fb908cdca8cd4155bc4e19c49445f246 is 23.804ms for 1934 entries. Jul 10 01:10:28.120855 systemd-journald[1050]: System Journal (/var/log/journal/fb908cdca8cd4155bc4e19c49445f246) is 8.0M, max 584.8M, 576.8M free. Jul 10 01:10:28.164592 systemd-journald[1050]: Received client request to flush runtime journal. Jul 10 01:10:28.162000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 01:10:28.163000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 01:10:28.163931 systemd[1]: Finished systemd-sysctl.service. Jul 10 01:10:28.165013 systemd[1]: Finished systemd-journal-flush.service. Jul 10 01:10:28.198000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 01:10:28.199419 systemd[1]: Finished systemd-udev-trigger.service. Jul 10 01:10:28.200484 systemd[1]: Starting systemd-udev-settle.service... Jul 10 01:10:28.215171 udevadm[1101]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Jul 10 01:10:28.330764 systemd[1]: Finished systemd-sysusers.service. Jul 10 01:10:28.329000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 01:10:28.332108 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Jul 10 01:10:28.539630 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Jul 10 01:10:28.538000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 01:10:28.767316 ignition[1084]: Ignition 2.14.0 Jul 10 01:10:28.767613 ignition[1084]: deleting config from guestinfo properties Jul 10 01:10:28.836797 ignition[1084]: Successfully deleted config Jul 10 01:10:28.838352 systemd[1]: Finished ignition-delete-config.service. Jul 10 01:10:28.843460 kernel: kauditd_printk_skb: 75 callbacks suppressed Jul 10 01:10:28.843536 kernel: audit: type=1130 audit(1752109828.837:111): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ignition-delete-config comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 01:10:28.837000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ignition-delete-config comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 01:10:28.851564 systemd[1]: Finished systemd-hwdb-update.service. 
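The udevadm warning above flags systemd-udev-settle.service as deprecated and names the lvm2 activation units that still pull it in. A short sketch of how one might confirm which units drag it into the transaction; note that on some images the lvm2 units are generated at boot rather than shipped as files, so the second command may report nothing:

    # Show every unit that still wants or requires the deprecated settle service
    systemctl list-dependencies --reverse systemd-udev-settle.service

    # Inspect how the named lvm2 units reference it
    systemctl cat lvm2-activation-early.service lvm2-activation.service | grep -n 'udev-settle'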
Jul 10 01:10:28.850000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 01:10:28.852802 systemd[1]: Starting systemd-udevd.service... Jul 10 01:10:28.855346 kernel: audit: type=1130 audit(1752109828.850:112): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 01:10:28.867209 systemd-udevd[1113]: Using default interface naming scheme 'v252'. Jul 10 01:10:29.067345 systemd[1]: Started systemd-udevd.service. Jul 10 01:10:29.072896 kernel: audit: type=1130 audit(1752109829.066:113): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 01:10:29.066000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 01:10:29.068604 systemd[1]: Starting systemd-networkd.service... Jul 10 01:10:29.087293 systemd[1]: Found device dev-ttyS0.device. Jul 10 01:10:29.096566 systemd[1]: Starting systemd-userdbd.service... Jul 10 01:10:29.125343 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Jul 10 01:10:29.130018 kernel: ACPI: button: Power Button [PWRF] Jul 10 01:10:29.130058 systemd[1]: Started systemd-userdbd.service. Jul 10 01:10:29.128000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 01:10:29.133338 kernel: audit: type=1130 audit(1752109829.128:114): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 10 01:10:29.212000 audit[1126]: AVC avc: denied { confidentiality } for pid=1126 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Jul 10 01:10:29.221374 kernel: audit: type=1400 audit(1752109829.212:115): avc: denied { confidentiality } for pid=1126 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Jul 10 01:10:29.222343 kernel: vmw_vmci 0000:00:07.7: Found VMCI PCI device at 0x11080, irq 16 Jul 10 01:10:29.225668 kernel: vmw_vmci 0000:00:07.7: Using capabilities 0xc Jul 10 01:10:29.225754 kernel: Guest personality initialized and is active Jul 10 01:10:29.225767 kernel: audit: type=1300 audit(1752109829.212:115): arch=c000003e syscall=175 success=yes exit=0 a0=55902cbf3440 a1=338ac a2=7f871ccc5bc5 a3=5 items=110 ppid=1113 pid=1126 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:10:29.212000 audit[1126]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=55902cbf3440 a1=338ac a2=7f871ccc5bc5 a3=5 items=110 ppid=1113 pid=1126 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:10:29.212000 audit: CWD cwd="/" Jul 10 01:10:29.229905 kernel: audit: type=1307 audit(1752109829.212:115): cwd="/" Jul 10 01:10:29.212000 audit: PATH item=0 name=(null) inode=45 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 01:10:29.233756 kernel: audit: type=1302 audit(1752109829.212:115): item=0 name=(null) inode=45 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 01:10:29.233791 kernel: audit: type=1302 audit(1752109829.212:115): item=1 name=(null) inode=25140 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 01:10:29.212000 audit: PATH item=1 name=(null) inode=25140 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 01:10:29.212000 audit: PATH item=2 name=(null) inode=25140 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 01:10:29.237577 kernel: audit: type=1302 audit(1752109829.212:115): item=2 name=(null) inode=25140 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 01:10:29.212000 audit: PATH item=3 name=(null) inode=25141 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 01:10:29.212000 audit: PATH item=4 name=(null) inode=25140 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 01:10:29.212000 audit: PATH item=5 name=(null) 
inode=25142 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 01:10:29.212000 audit: PATH item=6 name=(null) inode=25140 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 01:10:29.212000 audit: PATH item=7 name=(null) inode=25143 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 01:10:29.212000 audit: PATH item=8 name=(null) inode=25143 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 01:10:29.212000 audit: PATH item=9 name=(null) inode=25144 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 01:10:29.212000 audit: PATH item=10 name=(null) inode=25143 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 01:10:29.212000 audit: PATH item=11 name=(null) inode=25145 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 01:10:29.212000 audit: PATH item=12 name=(null) inode=25143 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 01:10:29.212000 audit: PATH item=13 name=(null) inode=25146 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 01:10:29.212000 audit: PATH item=14 name=(null) inode=25143 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 01:10:29.212000 audit: PATH item=15 name=(null) inode=25147 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 01:10:29.212000 audit: PATH item=16 name=(null) inode=25143 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 01:10:29.212000 audit: PATH item=17 name=(null) inode=25148 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 01:10:29.212000 audit: PATH item=18 name=(null) inode=25140 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 01:10:29.212000 audit: PATH item=19 name=(null) inode=25149 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 01:10:29.212000 audit: PATH item=20 name=(null) inode=25149 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 01:10:29.212000 audit: PATH item=21 name=(null) inode=25150 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 01:10:29.212000 audit: PATH item=22 name=(null) inode=25149 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 01:10:29.212000 audit: PATH item=23 name=(null) inode=25151 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 01:10:29.212000 audit: PATH item=24 name=(null) inode=25149 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 01:10:29.212000 audit: PATH item=25 name=(null) inode=25152 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 01:10:29.212000 audit: PATH item=26 name=(null) inode=25149 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 01:10:29.212000 audit: PATH item=27 name=(null) inode=25153 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 01:10:29.212000 audit: PATH item=28 name=(null) inode=25149 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 01:10:29.212000 audit: PATH item=29 name=(null) inode=25154 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 01:10:29.212000 audit: PATH item=30 name=(null) inode=25140 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 01:10:29.212000 audit: PATH item=31 name=(null) inode=25155 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 01:10:29.212000 audit: PATH item=32 name=(null) inode=25155 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 01:10:29.212000 audit: PATH item=33 name=(null) inode=25156 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 01:10:29.212000 audit: PATH item=34 name=(null) inode=25155 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 01:10:29.212000 audit: PATH item=35 name=(null) inode=25157 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 01:10:29.212000 audit: PATH item=36 name=(null) inode=25155 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 01:10:29.212000 audit: PATH item=37 name=(null) inode=25158 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 
cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 01:10:29.212000 audit: PATH item=38 name=(null) inode=25155 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 01:10:29.212000 audit: PATH item=39 name=(null) inode=25159 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 01:10:29.212000 audit: PATH item=40 name=(null) inode=25155 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 01:10:29.212000 audit: PATH item=41 name=(null) inode=25160 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 01:10:29.212000 audit: PATH item=42 name=(null) inode=25140 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 01:10:29.212000 audit: PATH item=43 name=(null) inode=25161 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 01:10:29.212000 audit: PATH item=44 name=(null) inode=25161 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 01:10:29.212000 audit: PATH item=45 name=(null) inode=25162 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 01:10:29.212000 audit: PATH item=46 name=(null) inode=25161 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 01:10:29.212000 audit: PATH item=47 name=(null) inode=25163 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 01:10:29.212000 audit: PATH item=48 name=(null) inode=25161 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 01:10:29.212000 audit: PATH item=49 name=(null) inode=25164 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 01:10:29.212000 audit: PATH item=50 name=(null) inode=25161 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 01:10:29.212000 audit: PATH item=51 name=(null) inode=25165 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 01:10:29.212000 audit: PATH item=52 name=(null) inode=25161 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 01:10:29.212000 audit: PATH item=53 name=(null) inode=25166 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 01:10:29.212000 audit: PATH 
item=54 name=(null) inode=45 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 01:10:29.212000 audit: PATH item=55 name=(null) inode=25167 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 01:10:29.212000 audit: PATH item=56 name=(null) inode=25167 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 01:10:29.212000 audit: PATH item=57 name=(null) inode=25168 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 01:10:29.212000 audit: PATH item=58 name=(null) inode=25167 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 01:10:29.212000 audit: PATH item=59 name=(null) inode=25169 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 01:10:29.212000 audit: PATH item=60 name=(null) inode=25167 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 01:10:29.212000 audit: PATH item=61 name=(null) inode=25170 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 01:10:29.212000 audit: PATH item=62 name=(null) inode=25170 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 01:10:29.212000 audit: PATH item=63 name=(null) inode=25171 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 01:10:29.212000 audit: PATH item=64 name=(null) inode=25170 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 01:10:29.212000 audit: PATH item=65 name=(null) inode=25172 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 01:10:29.212000 audit: PATH item=66 name=(null) inode=25170 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 01:10:29.212000 audit: PATH item=67 name=(null) inode=25173 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 01:10:29.212000 audit: PATH item=68 name=(null) inode=25170 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 01:10:29.212000 audit: PATH item=69 name=(null) inode=25174 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 01:10:29.212000 audit: PATH item=70 name=(null) inode=25170 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 01:10:29.212000 audit: PATH item=71 name=(null) inode=25175 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 01:10:29.212000 audit: PATH item=72 name=(null) inode=25167 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 01:10:29.212000 audit: PATH item=73 name=(null) inode=25176 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 01:10:29.212000 audit: PATH item=74 name=(null) inode=25176 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 01:10:29.212000 audit: PATH item=75 name=(null) inode=25177 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 01:10:29.212000 audit: PATH item=76 name=(null) inode=25176 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 01:10:29.212000 audit: PATH item=77 name=(null) inode=25178 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 01:10:29.212000 audit: PATH item=78 name=(null) inode=25176 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 01:10:29.212000 audit: PATH item=79 name=(null) inode=25179 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 01:10:29.212000 audit: PATH item=80 name=(null) inode=25176 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 01:10:29.212000 audit: PATH item=81 name=(null) inode=25180 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 01:10:29.212000 audit: PATH item=82 name=(null) inode=25176 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 01:10:29.212000 audit: PATH item=83 name=(null) inode=25181 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 01:10:29.212000 audit: PATH item=84 name=(null) inode=25167 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 01:10:29.212000 audit: PATH item=85 name=(null) inode=25182 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 01:10:29.212000 audit: PATH item=86 name=(null) inode=25182 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 
cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 01:10:29.212000 audit: PATH item=87 name=(null) inode=25183 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 01:10:29.212000 audit: PATH item=88 name=(null) inode=25182 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 01:10:29.212000 audit: PATH item=89 name=(null) inode=25184 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 01:10:29.212000 audit: PATH item=90 name=(null) inode=25182 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 01:10:29.212000 audit: PATH item=91 name=(null) inode=25185 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 01:10:29.212000 audit: PATH item=92 name=(null) inode=25182 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 01:10:29.212000 audit: PATH item=93 name=(null) inode=25186 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 01:10:29.212000 audit: PATH item=94 name=(null) inode=25182 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 01:10:29.212000 audit: PATH item=95 name=(null) inode=25187 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 01:10:29.212000 audit: PATH item=96 name=(null) inode=25167 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 01:10:29.212000 audit: PATH item=97 name=(null) inode=25188 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 01:10:29.212000 audit: PATH item=98 name=(null) inode=25188 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 01:10:29.212000 audit: PATH item=99 name=(null) inode=25189 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 01:10:29.212000 audit: PATH item=100 name=(null) inode=25188 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 01:10:29.212000 audit: PATH item=101 name=(null) inode=25190 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 01:10:29.212000 audit: PATH item=102 name=(null) inode=25188 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 01:10:29.212000 audit: PATH 
item=103 name=(null) inode=25191 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 01:10:29.212000 audit: PATH item=104 name=(null) inode=25188 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 01:10:29.212000 audit: PATH item=105 name=(null) inode=25192 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 01:10:29.212000 audit: PATH item=106 name=(null) inode=25188 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 01:10:29.212000 audit: PATH item=107 name=(null) inode=25193 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 01:10:29.212000 audit: PATH item=108 name=(null) inode=1 dev=00:07 mode=040700 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 01:10:29.212000 audit: PATH item=109 name=(null) inode=25194 dev=00:07 mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 01:10:29.212000 audit: PROCTITLE proctitle="(udev-worker)" Jul 10 01:10:29.243344 kernel: piix4_smbus 0000:00:07.3: SMBus Host Controller not enabled! Jul 10 01:10:29.246000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 01:10:29.247826 systemd-networkd[1114]: lo: Link UP Jul 10 01:10:29.247831 systemd-networkd[1114]: lo: Gained carrier Jul 10 01:10:29.248128 systemd-networkd[1114]: Enumeration completed Jul 10 01:10:29.248208 systemd[1]: Started systemd-networkd.service. Jul 10 01:10:29.248686 systemd-networkd[1114]: ens192: Configuring with /etc/systemd/network/00-vmware.network. Jul 10 01:10:29.250444 kernel: VMCI host device registered (name=vmci, major=10, minor=125) Jul 10 01:10:29.250478 kernel: Initialized host personality Jul 10 01:10:29.255288 kernel: vmxnet3 0000:0b:00.0 ens192: intr type 3, mode 0, 3 vectors allocated Jul 10 01:10:29.255464 kernel: vmxnet3 0000:0b:00.0 ens192: NIC Link is Up 10000 Mbps Jul 10 01:10:29.252404 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Jul 10 01:10:29.256802 systemd-networkd[1114]: ens192: Link UP Jul 10 01:10:29.257001 systemd-networkd[1114]: ens192: Gained carrier Jul 10 01:10:29.257349 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): ens192: link becomes ready Jul 10 01:10:29.263814 kernel: input: ImPS/2 Generic Wheel Mouse as /devices/platform/i8042/serio1/input/input3 Jul 10 01:10:29.284353 kernel: mousedev: PS/2 mouse device common for all mice Jul 10 01:10:29.284478 (udev-worker)[1127]: id: Truncating stdout of 'dmi_memory_id' up to 16384 byte. Jul 10 01:10:29.297588 systemd[1]: Finished systemd-udev-settle.service. Jul 10 01:10:29.296000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 10 01:10:29.298656 systemd[1]: Starting lvm2-activation-early.service... Jul 10 01:10:29.361981 lvm[1147]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jul 10 01:10:29.388972 systemd[1]: Finished lvm2-activation-early.service. Jul 10 01:10:29.387000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 01:10:29.389153 systemd[1]: Reached target cryptsetup.target. Jul 10 01:10:29.390165 systemd[1]: Starting lvm2-activation.service... Jul 10 01:10:29.393109 lvm[1149]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jul 10 01:10:29.420008 systemd[1]: Finished lvm2-activation.service. Jul 10 01:10:29.418000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 01:10:29.420186 systemd[1]: Reached target local-fs-pre.target. Jul 10 01:10:29.420283 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jul 10 01:10:29.420297 systemd[1]: Reached target local-fs.target. Jul 10 01:10:29.420393 systemd[1]: Reached target machines.target. Jul 10 01:10:29.421427 systemd[1]: Starting ldconfig.service... Jul 10 01:10:29.424048 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Jul 10 01:10:29.424078 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 10 01:10:29.424933 systemd[1]: Starting systemd-boot-update.service... Jul 10 01:10:29.425745 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Jul 10 01:10:29.426778 systemd[1]: Starting systemd-machine-id-commit.service... Jul 10 01:10:29.427734 systemd[1]: Starting systemd-sysext.service... Jul 10 01:10:29.436187 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1152 (bootctl) Jul 10 01:10:29.437025 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Jul 10 01:10:29.448594 systemd[1]: Unmounting usr-share-oem.mount... Jul 10 01:10:29.450951 systemd[1]: usr-share-oem.mount: Deactivated successfully. Jul 10 01:10:29.451082 systemd[1]: Unmounted usr-share-oem.mount. Jul 10 01:10:29.467950 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Jul 10 01:10:29.466000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 01:10:29.473351 kernel: loop0: detected capacity change from 0 to 221472 Jul 10 01:10:30.088619 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jul 10 01:10:30.089033 systemd[1]: Finished systemd-machine-id-commit.service. Jul 10 01:10:30.087000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 10 01:10:30.142349 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jul 10 01:10:30.201346 kernel: loop1: detected capacity change from 0 to 221472 Jul 10 01:10:30.252318 systemd-fsck[1165]: fsck.fat 4.2 (2021-01-31) Jul 10 01:10:30.252318 systemd-fsck[1165]: /dev/sda1: 790 files, 120731/258078 clusters Jul 10 01:10:30.253409 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Jul 10 01:10:30.252000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 01:10:30.254552 systemd[1]: Mounting boot.mount... Jul 10 01:10:30.305806 systemd[1]: Mounted boot.mount. Jul 10 01:10:30.308688 (sd-sysext)[1170]: Using extensions 'kubernetes'. Jul 10 01:10:30.308948 (sd-sysext)[1170]: Merged extensions into '/usr'. Jul 10 01:10:30.322301 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 10 01:10:30.323538 systemd[1]: Mounting usr-share-oem.mount... Jul 10 01:10:30.324451 systemd[1]: Starting modprobe@dm_mod.service... Jul 10 01:10:30.325207 systemd[1]: Starting modprobe@efi_pstore.service... Jul 10 01:10:30.326040 systemd[1]: Starting modprobe@loop.service... Jul 10 01:10:30.326201 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Jul 10 01:10:30.326299 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 10 01:10:30.326400 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 10 01:10:30.327049 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 10 01:10:30.327153 systemd[1]: Finished modprobe@dm_mod.service. Jul 10 01:10:30.325000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 01:10:30.326000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 01:10:30.328895 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 10 01:10:30.328978 systemd[1]: Finished modprobe@loop.service. Jul 10 01:10:30.327000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 01:10:30.327000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 01:10:30.329231 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Jul 10 01:10:30.330570 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 10 01:10:30.330659 systemd[1]: Finished modprobe@efi_pstore.service. 
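The (sd-sysext) lines above show a system extension image named 'kubernetes' being overlaid onto /usr. A brief sketch of how merged extensions can be inspected and toggled with the systemd-sysext tool; where the image itself lives (OEM partition, /etc/extensions, /var/lib/extensions) is not visible in this log:

    systemd-sysext status    # which extension images are merged, and into which hierarchies
    systemd-sysext unmerge   # detach all extension overlays from /usr and /opt
    systemd-sysext merge     # rediscover and re-apply extension images such as 'kubernetes'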
Jul 10 01:10:30.329000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 01:10:30.329000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 01:10:30.330928 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 10 01:10:30.337089 systemd[1]: Finished systemd-boot-update.service. Jul 10 01:10:30.335000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 01:10:30.337370 systemd[1]: Mounted usr-share-oem.mount. Jul 10 01:10:30.338109 systemd[1]: Finished systemd-sysext.service. Jul 10 01:10:30.336000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 01:10:30.339200 systemd[1]: Starting ensure-sysext.service... Jul 10 01:10:30.340134 systemd[1]: Starting systemd-tmpfiles-setup.service... Jul 10 01:10:30.347464 systemd[1]: Reloading. Jul 10 01:10:30.352707 systemd-tmpfiles[1189]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Jul 10 01:10:30.356720 systemd-tmpfiles[1189]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jul 10 01:10:30.358534 systemd-tmpfiles[1189]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jul 10 01:10:30.386345 /usr/lib/systemd/system-generators/torcx-generator[1208]: time="2025-07-10T01:10:30Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]" Jul 10 01:10:30.386558 /usr/lib/systemd/system-generators/torcx-generator[1208]: time="2025-07-10T01:10:30Z" level=info msg="torcx already run" Jul 10 01:10:30.462992 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Jul 10 01:10:30.463146 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Jul 10 01:10:30.481729 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 10 01:10:30.520503 systemd-networkd[1114]: ens192: Gained IPv6LL Jul 10 01:10:30.530379 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 10 01:10:30.531416 systemd[1]: Starting modprobe@dm_mod.service... Jul 10 01:10:30.532344 systemd[1]: Starting modprobe@efi_pstore.service... Jul 10 01:10:30.533167 systemd[1]: Starting modprobe@loop.service... Jul 10 01:10:30.533363 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. 
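The reload pass that follows surfaces two deprecation warnings for locksmithd.service: CPUShares= and MemoryLimit= are cgroup v1 directives slated for removal. A minimal sketch of a drop-in that switches to the cgroup v2 equivalents, assuming nothing about the shipped unit beyond those two lines; the numeric values are illustrative, not taken from the original file:

    mkdir -p /etc/systemd/system/locksmithd.service.d
    cat > /etc/systemd/system/locksmithd.service.d/10-cgroup-v2.conf <<'EOF'
    [Service]
    # Clear the deprecated cgroup v1 directives, then set the v2 equivalents.
    CPUShares=
    MemoryLimit=
    CPUWeight=100
    MemoryMax=512M
    EOF
    systemctl daemon-reload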
Jul 10 01:10:30.533442 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 10 01:10:30.533769 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 10 01:10:30.534247 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 10 01:10:30.534432 systemd[1]: Finished modprobe@dm_mod.service. Jul 10 01:10:30.533000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 01:10:30.533000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 01:10:30.534981 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 10 01:10:30.535060 systemd[1]: Finished modprobe@loop.service. Jul 10 01:10:30.533000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 01:10:30.533000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 01:10:30.535564 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Jul 10 01:10:30.536499 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 10 01:10:30.537623 systemd[1]: Starting modprobe@dm_mod.service... Jul 10 01:10:30.539659 systemd[1]: Starting modprobe@loop.service... Jul 10 01:10:30.540603 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Jul 10 01:10:30.540682 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 10 01:10:30.540752 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 10 01:10:30.541255 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 10 01:10:30.541369 systemd[1]: Finished modprobe@efi_pstore.service. Jul 10 01:10:30.540000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 01:10:30.540000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 01:10:30.541960 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 10 01:10:30.542089 systemd[1]: Finished modprobe@dm_mod.service. Jul 10 01:10:30.541000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 10 01:10:30.541000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 01:10:30.542735 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 10 01:10:30.542944 systemd[1]: Finished modprobe@loop.service. Jul 10 01:10:30.541000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 01:10:30.541000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 01:10:30.543292 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 10 01:10:30.543584 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Jul 10 01:10:30.547151 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 10 01:10:30.550822 systemd[1]: Starting modprobe@dm_mod.service... Jul 10 01:10:30.551672 systemd[1]: Starting modprobe@drm.service... Jul 10 01:10:30.552531 systemd[1]: Starting modprobe@efi_pstore.service... Jul 10 01:10:30.553288 systemd[1]: Starting modprobe@loop.service... Jul 10 01:10:30.553511 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Jul 10 01:10:30.553650 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 10 01:10:30.554748 systemd[1]: Starting systemd-networkd-wait-online.service... Jul 10 01:10:30.554963 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 10 01:10:30.555779 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 10 01:10:30.555873 systemd[1]: Finished modprobe@dm_mod.service. Jul 10 01:10:30.554000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 01:10:30.554000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 01:10:30.556283 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 10 01:10:30.556507 systemd[1]: Finished modprobe@drm.service. Jul 10 01:10:30.555000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 01:10:30.555000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 01:10:30.557841 systemd[1]: Finished ensure-sysext.service. 
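systemd-networkd-wait-online, started in the block above, blocks network-online.target until links configured by networkd are ready; ens192 was set up earlier from /etc/systemd/network/00-vmware.network and has already gained a carrier and an IPv6 link-local address. The log does not show that file's contents, so the following is only a sketch of what a DHCP-based configuration for such an interface could look like, written to a deliberately hypothetical file name:

    cat > /etc/systemd/network/10-ens192-example.network <<'EOF'
    [Match]
    Name=ens192

    [Network]
    DHCP=yes
    EOF
    networkctl reload    # ask systemd-networkd to pick up the new configuration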
Jul 10 01:10:30.556000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 01:10:30.559742 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 10 01:10:30.559823 systemd[1]: Finished modprobe@efi_pstore.service. Jul 10 01:10:30.558000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 01:10:30.558000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 01:10:30.560136 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 10 01:10:30.560324 systemd[1]: Finished modprobe@loop.service. Jul 10 01:10:30.559000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 01:10:30.559000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 01:10:30.560536 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 10 01:10:30.560556 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Jul 10 01:10:30.572000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd-wait-online comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 01:10:30.573820 systemd[1]: Finished systemd-networkd-wait-online.service. Jul 10 01:10:31.340000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 01:10:31.341748 systemd[1]: Finished systemd-tmpfiles-setup.service. Jul 10 01:10:31.343118 systemd[1]: Starting audit-rules.service... Jul 10 01:10:31.344342 systemd[1]: Starting clean-ca-certificates.service... Jul 10 01:10:31.345631 systemd[1]: Starting systemd-journal-catalog-update.service... Jul 10 01:10:31.347382 systemd[1]: Starting systemd-resolved.service... Jul 10 01:10:31.348567 systemd[1]: Starting systemd-timesyncd.service... Jul 10 01:10:31.349726 systemd[1]: Starting systemd-update-utmp.service... Jul 10 01:10:31.350143 systemd[1]: Finished clean-ca-certificates.service. Jul 10 01:10:31.348000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 01:10:31.350488 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). 
Jul 10 01:10:31.362000 audit[1306]: SYSTEM_BOOT pid=1306 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Jul 10 01:10:31.364735 systemd[1]: Finished systemd-update-utmp.service. Jul 10 01:10:31.363000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 01:10:31.430990 systemd[1]: Started systemd-timesyncd.service. Jul 10 01:10:31.429000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-timesyncd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 01:10:31.431197 systemd[1]: Reached target time-set.target. Jul 10 01:10:31.440090 systemd-resolved[1303]: Positive Trust Anchors: Jul 10 01:10:31.440100 systemd-resolved[1303]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 10 01:10:31.440120 systemd-resolved[1303]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Jul 10 01:10:31.524000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 01:10:31.525694 systemd[1]: Finished systemd-journal-catalog-update.service. Jul 10 01:11:57.945950 systemd-timesyncd[1305]: Contacted time server 23.150.40.242:123 (0.flatcar.pool.ntp.org). Jul 10 01:11:57.946387 systemd-timesyncd[1305]: Initial clock synchronization to Thu 2025-07-10 01:11:57.945793 UTC. Jul 10 01:11:57.957000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Jul 10 01:11:57.957000 audit[1323]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffe10f91dc0 a2=420 a3=0 items=0 ppid=1300 pid=1323 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:11:57.957000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Jul 10 01:11:57.959724 augenrules[1323]: No rules Jul 10 01:11:57.960384 systemd[1]: Finished audit-rules.service. Jul 10 01:11:58.012893 systemd-resolved[1303]: Defaulting to hostname 'linux'. Jul 10 01:11:58.014113 systemd[1]: Started systemd-resolved.service. Jul 10 01:11:58.014263 systemd[1]: Reached target network.target. Jul 10 01:11:58.014348 systemd[1]: Reached target network-online.target. Jul 10 01:11:58.014433 systemd[1]: Reached target nss-lookup.target. Jul 10 01:11:58.653521 ldconfig[1151]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jul 10 01:11:58.664316 systemd[1]: Finished ldconfig.service. 
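Note: the audit records above carry the triggering command only as a hex-encoded PROCTITLE field — the raw process command line with NUL bytes separating the arguments. A minimal decoding sketch (an illustrative helper, not part of the log itself):

    def decode_proctitle(hex_str: str) -> str:
        """Decode an audit PROCTITLE value: the command line is hex-encoded,
        with NUL bytes separating the individual argv entries."""
        raw = bytes.fromhex(hex_str)
        return " ".join(arg.decode("utf-8", "replace") for arg in raw.split(b"\x00") if arg)

    # PROCTITLE value from the audit-rules record above:
    print(decode_proctitle(
        "2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573"
    ))
    # -> /sbin/auditctl -R /etc/audit/audit.rules

The same decoding applies to the iptables PROCTITLE records emitted later during Docker's bridge setup; the first of those, for example, decodes to /usr/sbin/iptables --wait -t nat -N DOCKER.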
Jul 10 01:11:58.665522 systemd[1]: Starting systemd-update-done.service... Jul 10 01:11:58.674157 systemd[1]: Finished systemd-update-done.service. Jul 10 01:11:58.674351 systemd[1]: Reached target sysinit.target. Jul 10 01:11:58.674500 systemd[1]: Started motdgen.path. Jul 10 01:11:58.674603 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Jul 10 01:11:58.674938 systemd[1]: Started logrotate.timer. Jul 10 01:11:58.675079 systemd[1]: Started mdadm.timer. Jul 10 01:11:58.675168 systemd[1]: Started systemd-tmpfiles-clean.timer. Jul 10 01:11:58.675271 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jul 10 01:11:58.675292 systemd[1]: Reached target paths.target. Jul 10 01:11:58.675382 systemd[1]: Reached target timers.target. Jul 10 01:11:58.675658 systemd[1]: Listening on dbus.socket. Jul 10 01:11:58.676956 systemd[1]: Starting docker.socket... Jul 10 01:11:58.682281 systemd[1]: Listening on sshd.socket. Jul 10 01:11:58.682446 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 10 01:11:58.682760 systemd[1]: Listening on docker.socket. Jul 10 01:11:58.682873 systemd[1]: Reached target sockets.target. Jul 10 01:11:58.682967 systemd[1]: Reached target basic.target. Jul 10 01:11:58.683132 systemd[1]: System is tainted: cgroupsv1 Jul 10 01:11:58.683160 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Jul 10 01:11:58.683174 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Jul 10 01:11:58.684145 systemd[1]: Starting containerd.service... Jul 10 01:11:58.685198 systemd[1]: Starting dbus.service... Jul 10 01:11:58.686128 systemd[1]: Starting enable-oem-cloudinit.service... Jul 10 01:11:58.687025 systemd[1]: Starting extend-filesystems.service... Jul 10 01:11:58.687167 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Jul 10 01:11:58.689572 jq[1338]: false Jul 10 01:11:58.695254 systemd[1]: Starting kubelet.service... Jul 10 01:11:58.696543 systemd[1]: Starting motdgen.service... Jul 10 01:11:58.699975 systemd[1]: Starting prepare-helm.service... Jul 10 01:11:58.701802 systemd[1]: Starting ssh-key-proc-cmdline.service... Jul 10 01:11:58.703490 systemd[1]: Starting sshd-keygen.service... Jul 10 01:11:58.708876 systemd[1]: Starting systemd-logind.service... Jul 10 01:11:58.709035 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 10 01:11:58.709088 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jul 10 01:11:58.710332 systemd[1]: Starting update-engine.service... Jul 10 01:11:58.711519 systemd[1]: Starting update-ssh-keys-after-ignition.service... Jul 10 01:11:58.717433 systemd[1]: Starting vmtoolsd.service... Jul 10 01:11:58.718783 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jul 10 01:11:58.720344 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. Jul 10 01:11:58.722073 jq[1354]: true Jul 10 01:11:58.721271 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. 
Jul 10 01:11:58.722215 systemd[1]: Finished ssh-key-proc-cmdline.service. Jul 10 01:11:58.722829 extend-filesystems[1339]: Found loop1 Jul 10 01:11:58.723097 extend-filesystems[1339]: Found sda Jul 10 01:11:58.723244 extend-filesystems[1339]: Found sda1 Jul 10 01:11:58.723699 extend-filesystems[1339]: Found sda2 Jul 10 01:11:58.723844 extend-filesystems[1339]: Found sda3 Jul 10 01:11:58.736900 extend-filesystems[1339]: Found usr Jul 10 01:11:58.737107 extend-filesystems[1339]: Found sda4 Jul 10 01:11:58.738119 extend-filesystems[1339]: Found sda6 Jul 10 01:11:58.738415 extend-filesystems[1339]: Found sda7 Jul 10 01:11:58.738575 extend-filesystems[1339]: Found sda9 Jul 10 01:11:58.738741 extend-filesystems[1339]: Checking size of /dev/sda9 Jul 10 01:11:58.740344 systemd[1]: Started vmtoolsd.service. Jul 10 01:11:58.746988 jq[1362]: true Jul 10 01:11:58.755635 tar[1360]: linux-amd64/helm Jul 10 01:11:58.764974 systemd[1]: motdgen.service: Deactivated successfully. Jul 10 01:11:58.765127 systemd[1]: Finished motdgen.service. Jul 10 01:11:58.783308 env[1363]: time="2025-07-10T01:11:58.783277276Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Jul 10 01:11:58.805421 env[1363]: time="2025-07-10T01:11:58.805393895Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jul 10 01:11:58.806273 env[1363]: time="2025-07-10T01:11:58.806261521Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jul 10 01:11:58.809992 env[1363]: time="2025-07-10T01:11:58.809949068Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.186-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jul 10 01:11:58.810596 env[1363]: time="2025-07-10T01:11:58.810577673Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jul 10 01:11:58.810938 env[1363]: time="2025-07-10T01:11:58.810919545Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jul 10 01:11:58.811010 env[1363]: time="2025-07-10T01:11:58.810996257Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jul 10 01:11:58.811081 env[1363]: time="2025-07-10T01:11:58.811065723Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Jul 10 01:11:58.811138 env[1363]: time="2025-07-10T01:11:58.811124733Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jul 10 01:11:58.811264 env[1363]: time="2025-07-10T01:11:58.811250638Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jul 10 01:11:58.814519 extend-filesystems[1339]: Old size kept for /dev/sda9 Jul 10 01:11:58.814828 extend-filesystems[1339]: Found sr0 Jul 10 01:11:58.815446 env[1363]: time="2025-07-10T01:11:58.815423238Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." 
type=io.containerd.snapshotter.v1 Jul 10 01:11:58.816360 env[1363]: time="2025-07-10T01:11:58.816339077Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jul 10 01:11:58.816434 env[1363]: time="2025-07-10T01:11:58.816420862Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jul 10 01:11:58.816537 env[1363]: time="2025-07-10T01:11:58.816522713Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Jul 10 01:11:58.816605 env[1363]: time="2025-07-10T01:11:58.816591926Z" level=info msg="metadata content store policy set" policy=shared Jul 10 01:11:58.817835 systemd[1]: extend-filesystems.service: Deactivated successfully. Jul 10 01:11:58.817992 systemd[1]: Finished extend-filesystems.service. Jul 10 01:11:58.832945 bash[1396]: Updated "/home/core/.ssh/authorized_keys" Jul 10 01:11:58.833611 systemd[1]: Finished update-ssh-keys-after-ignition.service. Jul 10 01:11:58.834657 systemd-logind[1351]: Watching system buttons on /dev/input/event1 (Power Button) Jul 10 01:11:58.834676 systemd-logind[1351]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jul 10 01:11:58.837343 systemd-logind[1351]: New seat seat0. Jul 10 01:11:58.838959 env[1363]: time="2025-07-10T01:11:58.838921113Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jul 10 01:11:58.839093 env[1363]: time="2025-07-10T01:11:58.839077475Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jul 10 01:11:58.840777 env[1363]: time="2025-07-10T01:11:58.840754362Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jul 10 01:11:58.840983 env[1363]: time="2025-07-10T01:11:58.840917638Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jul 10 01:11:58.841060 env[1363]: time="2025-07-10T01:11:58.841046305Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jul 10 01:11:58.841143 env[1363]: time="2025-07-10T01:11:58.841129845Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jul 10 01:11:58.841215 env[1363]: time="2025-07-10T01:11:58.841202387Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jul 10 01:11:58.841305 env[1363]: time="2025-07-10T01:11:58.841291983Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jul 10 01:11:58.841387 env[1363]: time="2025-07-10T01:11:58.841373852Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 Jul 10 01:11:58.841458 env[1363]: time="2025-07-10T01:11:58.841446629Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jul 10 01:11:58.841532 env[1363]: time="2025-07-10T01:11:58.841520671Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." 
type=io.containerd.service.v1 Jul 10 01:11:58.841603 env[1363]: time="2025-07-10T01:11:58.841582943Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jul 10 01:11:58.841793 env[1363]: time="2025-07-10T01:11:58.841781190Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jul 10 01:11:58.841951 env[1363]: time="2025-07-10T01:11:58.841937389Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jul 10 01:11:58.843018 env[1363]: time="2025-07-10T01:11:58.842275775Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jul 10 01:11:58.843109 env[1363]: time="2025-07-10T01:11:58.843096465Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jul 10 01:11:58.843168 env[1363]: time="2025-07-10T01:11:58.843158221Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jul 10 01:11:58.843251 env[1363]: time="2025-07-10T01:11:58.843241798Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jul 10 01:11:58.843345 env[1363]: time="2025-07-10T01:11:58.843335292Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jul 10 01:11:58.843393 env[1363]: time="2025-07-10T01:11:58.843382820Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jul 10 01:11:58.845661 env[1363]: time="2025-07-10T01:11:58.845615089Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jul 10 01:11:58.845996 env[1363]: time="2025-07-10T01:11:58.845984782Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jul 10 01:11:58.846068 env[1363]: time="2025-07-10T01:11:58.846058635Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jul 10 01:11:58.846125 env[1363]: time="2025-07-10T01:11:58.846109749Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jul 10 01:11:58.846153 dbus-daemon[1336]: [system] SELinux support is enabled Jul 10 01:11:58.846275 systemd[1]: Started dbus.service. Jul 10 01:11:58.846368 env[1363]: time="2025-07-10T01:11:58.846357851Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jul 10 01:11:58.846436 env[1363]: time="2025-07-10T01:11:58.846427319Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jul 10 01:11:58.846582 env[1363]: time="2025-07-10T01:11:58.846565742Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jul 10 01:11:58.846634 env[1363]: time="2025-07-10T01:11:58.846623742Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jul 10 01:11:58.846695 env[1363]: time="2025-07-10T01:11:58.846686210Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jul 10 01:11:58.846742 env[1363]: time="2025-07-10T01:11:58.846731829Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." 
type=io.containerd.tracing.processor.v1 Jul 10 01:11:58.846805 env[1363]: time="2025-07-10T01:11:58.846793539Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Jul 10 01:11:58.846858 env[1363]: time="2025-07-10T01:11:58.846843113Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jul 10 01:11:58.846912 env[1363]: time="2025-07-10T01:11:58.846901875Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Jul 10 01:11:58.846983 env[1363]: time="2025-07-10T01:11:58.846974205Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Jul 10 01:11:58.847179 env[1363]: time="2025-07-10T01:11:58.847149879Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jul 10 01:11:58.849694 env[1363]: time="2025-07-10T01:11:58.847916594Z" level=info msg="Connect containerd service" Jul 10 01:11:58.849694 env[1363]: time="2025-07-10T01:11:58.847943274Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jul 10 01:11:58.849694 env[1363]: time="2025-07-10T01:11:58.848323994Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 10 01:11:58.849694 
env[1363]: time="2025-07-10T01:11:58.848396759Z" level=info msg="Start subscribing containerd event" Jul 10 01:11:58.849694 env[1363]: time="2025-07-10T01:11:58.848424936Z" level=info msg="Start recovering state" Jul 10 01:11:58.849694 env[1363]: time="2025-07-10T01:11:58.848464028Z" level=info msg="Start event monitor" Jul 10 01:11:58.849694 env[1363]: time="2025-07-10T01:11:58.848476061Z" level=info msg="Start snapshots syncer" Jul 10 01:11:58.849694 env[1363]: time="2025-07-10T01:11:58.848480985Z" level=info msg="Start cni network conf syncer for default" Jul 10 01:11:58.849694 env[1363]: time="2025-07-10T01:11:58.848484909Z" level=info msg="Start streaming server" Jul 10 01:11:58.849694 env[1363]: time="2025-07-10T01:11:58.848767498Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jul 10 01:11:58.849694 env[1363]: time="2025-07-10T01:11:58.848820440Z" level=info msg=serving... address=/run/containerd/containerd.sock Jul 10 01:11:58.847660 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jul 10 01:11:58.847675 systemd[1]: Reached target system-config.target. Jul 10 01:11:58.847794 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jul 10 01:11:58.847805 systemd[1]: Reached target user-config.target. Jul 10 01:11:58.848904 systemd[1]: Started containerd.service. Jul 10 01:11:58.852241 systemd[1]: Started systemd-logind.service. Jul 10 01:11:58.863970 env[1363]: time="2025-07-10T01:11:58.848858292Z" level=info msg="containerd successfully booted in 0.067352s" Jul 10 01:11:58.865652 kernel: NET: Registered PF_VSOCK protocol family Jul 10 01:11:58.893137 update_engine[1353]: I0710 01:11:58.891382 1353 main.cc:92] Flatcar Update Engine starting Jul 10 01:11:58.899225 systemd[1]: Started update-engine.service. Jul 10 01:11:58.900958 systemd[1]: Started locksmithd.service. Jul 10 01:11:58.902308 update_engine[1353]: I0710 01:11:58.902288 1353 update_check_scheduler.cc:74] Next update check in 7m16s Jul 10 01:11:59.204751 tar[1360]: linux-amd64/LICENSE Jul 10 01:11:59.204850 tar[1360]: linux-amd64/README.md Jul 10 01:11:59.210746 systemd[1]: Finished prepare-helm.service. Jul 10 01:11:59.387497 locksmithd[1428]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jul 10 01:11:59.973760 sshd_keygen[1380]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jul 10 01:11:59.987185 systemd[1]: Finished sshd-keygen.service. Jul 10 01:11:59.988447 systemd[1]: Starting issuegen.service... Jul 10 01:11:59.993133 systemd[1]: issuegen.service: Deactivated successfully. Jul 10 01:11:59.993259 systemd[1]: Finished issuegen.service. Jul 10 01:11:59.994533 systemd[1]: Starting systemd-user-sessions.service... Jul 10 01:12:00.000504 systemd[1]: Finished systemd-user-sessions.service. Jul 10 01:12:00.001507 systemd[1]: Started getty@tty1.service. Jul 10 01:12:00.002356 systemd[1]: Started serial-getty@ttyS0.service. Jul 10 01:12:00.002597 systemd[1]: Reached target getty.target. Jul 10 01:12:01.430036 systemd[1]: Started kubelet.service. Jul 10 01:12:01.430423 systemd[1]: Reached target multi-user.target. Jul 10 01:12:01.432120 systemd[1]: Starting systemd-update-utmp-runlevel.service... Jul 10 01:12:01.437621 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. 
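Note: by this point containerd has finished loading its plugins ("containerd successfully booted"). During that startup it skipped the aufs snapshotter (kernel module missing) and the btrfs/zfs snapshotters because /var/lib/containerd sits on ext4, leaving overlayfs as the default. A rough sketch of that backing-filesystem check (simplified and illustrative; containerd's own probe is implemented in Go):

    from pathlib import Path

    def fs_type(path: str) -> str:
        """Return the filesystem type backing `path` by picking the longest
        matching mount point from /proc/self/mounts (simple prefix match)."""
        target = str(Path(path).resolve())
        best, best_type = "", "unknown"
        for line in Path("/proc/self/mounts").read_text().splitlines():
            mnt_point, mnt_type = line.split()[1:3]
            if target.startswith(mnt_point) and len(mnt_point) > len(best):
                best, best_type = mnt_point, mnt_type
        return best_type

    # On the host above this returns "ext4" for the btrfs snapshotter root,
    # which is why the plugin logs "must be a btrfs filesystem ... skip plugin":
    # fs_type("/var/lib/containerd/io.containerd.snapshotter.v1.btrfs")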
Jul 10 01:12:01.437791 systemd[1]: Finished systemd-update-utmp-runlevel.service. Jul 10 01:12:01.437979 systemd[1]: Startup finished in 8.121s (kernel) + 10.270s (userspace) = 18.391s. Jul 10 01:12:01.519189 login[1498]: pam_lastlog(login:session): file /var/log/lastlog is locked/read Jul 10 01:12:01.522080 login[1499]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Jul 10 01:12:01.547543 systemd[1]: Created slice user-500.slice. Jul 10 01:12:01.548341 systemd[1]: Starting user-runtime-dir@500.service... Jul 10 01:12:01.550935 systemd-logind[1351]: New session 1 of user core. Jul 10 01:12:01.557552 systemd[1]: Finished user-runtime-dir@500.service. Jul 10 01:12:01.558380 systemd[1]: Starting user@500.service... Jul 10 01:12:01.562676 (systemd)[1511]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jul 10 01:12:01.630420 systemd[1511]: Queued start job for default target default.target. Jul 10 01:12:01.630625 systemd[1511]: Reached target paths.target. Jul 10 01:12:01.630648 systemd[1511]: Reached target sockets.target. Jul 10 01:12:01.630665 systemd[1511]: Reached target timers.target. Jul 10 01:12:01.630703 systemd[1511]: Reached target basic.target. Jul 10 01:12:01.630742 systemd[1511]: Reached target default.target. Jul 10 01:12:01.630760 systemd[1511]: Startup finished in 63ms. Jul 10 01:12:01.630794 systemd[1]: Started user@500.service. Jul 10 01:12:01.631485 systemd[1]: Started session-1.scope. Jul 10 01:12:02.519518 login[1498]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Jul 10 01:12:02.522770 systemd-logind[1351]: New session 2 of user core. Jul 10 01:12:02.523115 systemd[1]: Started session-2.scope. Jul 10 01:12:02.659562 kubelet[1505]: E0710 01:12:02.659530 1505 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 10 01:12:02.661298 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 10 01:12:02.661443 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 10 01:12:12.912060 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jul 10 01:12:12.912178 systemd[1]: Stopped kubelet.service. Jul 10 01:12:12.913273 systemd[1]: Starting kubelet.service... Jul 10 01:12:13.341373 systemd[1]: Started kubelet.service. Jul 10 01:12:13.989118 kubelet[1547]: E0710 01:12:13.989082 1547 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 10 01:12:13.992212 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 10 01:12:13.992311 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 10 01:12:24.017534 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jul 10 01:12:24.017677 systemd[1]: Stopped kubelet.service. Jul 10 01:12:24.018744 systemd[1]: Starting kubelet.service... Jul 10 01:12:24.297381 systemd[1]: Started kubelet.service. 
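Note: the kubelet exits with status 1 on every attempt because /var/lib/kubelet/config.yaml does not exist yet (it is typically written later, e.g. by kubeadm init/join), so systemd keeps scheduling restarts and the restart counter climbs. A minimal sketch of the failing precondition (illustrative only, not part of the boot flow):

    from pathlib import Path

    # Illustrative: the same file the kubelet tries to read at startup.
    CONFIG = Path("/var/lib/kubelet/config.yaml")

    if not CONFIG.is_file():
        # Mirrors the log: the service fails with status 1 and systemd
        # schedules another restart, incrementing the restart counter.
        raise SystemExit(f"failed to load Kubelet config file {CONFIG}: no such file or directory")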
Jul 10 01:12:24.353514 kubelet[1562]: E0710 01:12:24.353474 1562 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 10 01:12:24.354764 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 10 01:12:24.354864 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 10 01:12:28.977795 systemd[1]: Created slice system-sshd.slice. Jul 10 01:12:28.978556 systemd[1]: Started sshd@0-139.178.70.102:22-139.178.68.195:56270.service. Jul 10 01:12:29.034995 sshd[1569]: Accepted publickey for core from 139.178.68.195 port 56270 ssh2: RSA SHA256:NVpdRDPpwzjVTzi6orhe1cA9BvcYymCSReGH8myOy/Q Jul 10 01:12:29.035931 sshd[1569]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 10 01:12:29.039062 systemd[1]: Started session-3.scope. Jul 10 01:12:29.039264 systemd-logind[1351]: New session 3 of user core. Jul 10 01:12:29.086435 systemd[1]: Started sshd@1-139.178.70.102:22-139.178.68.195:56274.service. Jul 10 01:12:29.130500 sshd[1574]: Accepted publickey for core from 139.178.68.195 port 56274 ssh2: RSA SHA256:NVpdRDPpwzjVTzi6orhe1cA9BvcYymCSReGH8myOy/Q Jul 10 01:12:29.131432 sshd[1574]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 10 01:12:29.134296 systemd[1]: Started session-4.scope. Jul 10 01:12:29.134543 systemd-logind[1351]: New session 4 of user core. Jul 10 01:12:29.184674 sshd[1574]: pam_unix(sshd:session): session closed for user core Jul 10 01:12:29.185534 systemd[1]: Started sshd@2-139.178.70.102:22-139.178.68.195:56282.service. Jul 10 01:12:29.187460 systemd-logind[1351]: Session 4 logged out. Waiting for processes to exit. Jul 10 01:12:29.187507 systemd[1]: sshd@1-139.178.70.102:22-139.178.68.195:56274.service: Deactivated successfully. Jul 10 01:12:29.187944 systemd[1]: session-4.scope: Deactivated successfully. Jul 10 01:12:29.188219 systemd-logind[1351]: Removed session 4. Jul 10 01:12:29.224238 sshd[1579]: Accepted publickey for core from 139.178.68.195 port 56282 ssh2: RSA SHA256:NVpdRDPpwzjVTzi6orhe1cA9BvcYymCSReGH8myOy/Q Jul 10 01:12:29.225263 sshd[1579]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 10 01:12:29.227866 systemd-logind[1351]: New session 5 of user core. Jul 10 01:12:29.228191 systemd[1]: Started session-5.scope. Jul 10 01:12:29.276212 sshd[1579]: pam_unix(sshd:session): session closed for user core Jul 10 01:12:29.277232 systemd[1]: Started sshd@3-139.178.70.102:22-139.178.68.195:56290.service. Jul 10 01:12:29.280907 systemd[1]: sshd@2-139.178.70.102:22-139.178.68.195:56282.service: Deactivated successfully. Jul 10 01:12:29.281316 systemd[1]: session-5.scope: Deactivated successfully. Jul 10 01:12:29.282205 systemd-logind[1351]: Session 5 logged out. Waiting for processes to exit. Jul 10 01:12:29.282904 systemd-logind[1351]: Removed session 5. Jul 10 01:12:29.315097 sshd[1586]: Accepted publickey for core from 139.178.68.195 port 56290 ssh2: RSA SHA256:NVpdRDPpwzjVTzi6orhe1cA9BvcYymCSReGH8myOy/Q Jul 10 01:12:29.316136 sshd[1586]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 10 01:12:29.318885 systemd[1]: Started session-6.scope. Jul 10 01:12:29.319591 systemd-logind[1351]: New session 6 of user core. 
Jul 10 01:12:29.369310 sshd[1586]: pam_unix(sshd:session): session closed for user core Jul 10 01:12:29.371086 systemd[1]: Started sshd@4-139.178.70.102:22-139.178.68.195:56302.service. Jul 10 01:12:29.371799 systemd[1]: sshd@3-139.178.70.102:22-139.178.68.195:56290.service: Deactivated successfully. Jul 10 01:12:29.372428 systemd-logind[1351]: Session 6 logged out. Waiting for processes to exit. Jul 10 01:12:29.372471 systemd[1]: session-6.scope: Deactivated successfully. Jul 10 01:12:29.377756 systemd-logind[1351]: Removed session 6. Jul 10 01:12:29.410536 sshd[1593]: Accepted publickey for core from 139.178.68.195 port 56302 ssh2: RSA SHA256:NVpdRDPpwzjVTzi6orhe1cA9BvcYymCSReGH8myOy/Q Jul 10 01:12:29.411428 sshd[1593]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 10 01:12:29.414656 systemd[1]: Started session-7.scope. Jul 10 01:12:29.414836 systemd-logind[1351]: New session 7 of user core. Jul 10 01:12:29.501894 sudo[1599]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jul 10 01:12:29.502447 sudo[1599]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Jul 10 01:12:29.511715 dbus-daemon[1336]: \xd0=\xf0\u000d: received setenforce notice (enforcing=2044420624) Jul 10 01:12:29.511858 sudo[1599]: pam_unix(sudo:session): session closed for user root Jul 10 01:12:29.523232 sshd[1593]: pam_unix(sshd:session): session closed for user core Jul 10 01:12:29.524251 systemd[1]: Started sshd@5-139.178.70.102:22-139.178.68.195:56306.service. Jul 10 01:12:29.525965 systemd[1]: sshd@4-139.178.70.102:22-139.178.68.195:56302.service: Deactivated successfully. Jul 10 01:12:29.526812 systemd[1]: session-7.scope: Deactivated successfully. Jul 10 01:12:29.527230 systemd-logind[1351]: Session 7 logged out. Waiting for processes to exit. Jul 10 01:12:29.527990 systemd-logind[1351]: Removed session 7. Jul 10 01:12:29.562108 sshd[1601]: Accepted publickey for core from 139.178.68.195 port 56306 ssh2: RSA SHA256:NVpdRDPpwzjVTzi6orhe1cA9BvcYymCSReGH8myOy/Q Jul 10 01:12:29.563150 sshd[1601]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 10 01:12:29.565913 systemd[1]: Started session-8.scope. Jul 10 01:12:29.566226 systemd-logind[1351]: New session 8 of user core. Jul 10 01:12:29.616678 sudo[1608]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jul 10 01:12:29.616825 sudo[1608]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Jul 10 01:12:29.618764 sudo[1608]: pam_unix(sudo:session): session closed for user root Jul 10 01:12:29.621729 sudo[1607]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jul 10 01:12:29.622069 sudo[1607]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Jul 10 01:12:29.627831 systemd[1]: Stopping audit-rules.service... 
Jul 10 01:12:29.627000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Jul 10 01:12:29.629949 kernel: kauditd_printk_skb: 152 callbacks suppressed Jul 10 01:12:29.629984 kernel: audit: type=1305 audit(1752109949.627:158): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Jul 10 01:12:29.627000 audit[1611]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7fff9bb2bd60 a2=420 a3=0 items=0 ppid=1 pid=1611 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:12:29.635422 kernel: audit: type=1300 audit(1752109949.627:158): arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7fff9bb2bd60 a2=420 a3=0 items=0 ppid=1 pid=1611 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:12:29.635455 kernel: audit: type=1327 audit(1752109949.627:158): proctitle=2F7362696E2F617564697463746C002D44 Jul 10 01:12:29.627000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D44 Jul 10 01:12:29.635496 auditctl[1611]: No rules Jul 10 01:12:29.637363 kernel: audit: type=1131 audit(1752109949.634:159): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 01:12:29.634000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 01:12:29.635809 systemd[1]: audit-rules.service: Deactivated successfully. Jul 10 01:12:29.635944 systemd[1]: Stopped audit-rules.service. Jul 10 01:12:29.637228 systemd[1]: Starting audit-rules.service... Jul 10 01:12:29.651569 augenrules[1629]: No rules Jul 10 01:12:29.652120 systemd[1]: Finished audit-rules.service. Jul 10 01:12:29.652834 sudo[1607]: pam_unix(sudo:session): session closed for user root Jul 10 01:12:29.650000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 01:12:29.655878 sshd[1601]: pam_unix(sshd:session): session closed for user core Jul 10 01:12:29.651000 audit[1607]: USER_END pid=1607 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jul 10 01:12:29.658892 kernel: audit: type=1130 audit(1752109949.650:160): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 01:12:29.658929 kernel: audit: type=1106 audit(1752109949.651:161): pid=1607 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jul 10 01:12:29.660831 systemd[1]: Started sshd@6-139.178.70.102:22-139.178.68.195:56310.service. 
Jul 10 01:12:29.651000 audit[1607]: CRED_DISP pid=1607 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jul 10 01:12:29.659000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-139.178.70.102:22-139.178.68.195:56310 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 01:12:29.666538 kernel: audit: type=1104 audit(1752109949.651:162): pid=1607 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jul 10 01:12:29.666574 kernel: audit: type=1130 audit(1752109949.659:163): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-139.178.70.102:22-139.178.68.195:56310 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 01:12:29.667811 systemd[1]: sshd@5-139.178.70.102:22-139.178.68.195:56306.service: Deactivated successfully. Jul 10 01:12:29.665000 audit[1601]: USER_END pid=1601 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:12:29.665000 audit[1601]: CRED_DISP pid=1601 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:12:29.672085 systemd[1]: session-8.scope: Deactivated successfully. Jul 10 01:12:29.672149 systemd-logind[1351]: Session 8 logged out. Waiting for processes to exit. Jul 10 01:12:29.675755 kernel: audit: type=1106 audit(1752109949.665:164): pid=1601 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:12:29.675803 kernel: audit: type=1104 audit(1752109949.665:165): pid=1601 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:12:29.666000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-139.178.70.102:22-139.178.68.195:56306 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 01:12:29.676856 systemd-logind[1351]: Removed session 8. 
Jul 10 01:12:29.705000 audit[1634]: USER_ACCT pid=1634 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:12:29.707572 sshd[1634]: Accepted publickey for core from 139.178.68.195 port 56310 ssh2: RSA SHA256:NVpdRDPpwzjVTzi6orhe1cA9BvcYymCSReGH8myOy/Q Jul 10 01:12:29.706000 audit[1634]: CRED_ACQ pid=1634 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:12:29.706000 audit[1634]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe2e78cc30 a2=3 a3=0 items=0 ppid=1 pid=1634 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=9 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:12:29.706000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jul 10 01:12:29.708736 sshd[1634]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 10 01:12:29.712118 systemd[1]: Started session-9.scope. Jul 10 01:12:29.712672 systemd-logind[1351]: New session 9 of user core. Jul 10 01:12:29.714000 audit[1634]: USER_START pid=1634 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:12:29.715000 audit[1639]: CRED_ACQ pid=1639 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:12:29.759000 audit[1640]: USER_ACCT pid=1640 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jul 10 01:12:29.759000 audit[1640]: CRED_REFR pid=1640 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jul 10 01:12:29.761360 sudo[1640]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jul 10 01:12:29.762227 sudo[1640]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Jul 10 01:12:29.761000 audit[1640]: USER_START pid=1640 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jul 10 01:12:29.781421 systemd[1]: Starting docker.service... 
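Note: audit records like the USER_ACCT/CRED_ACQ/USER_START sequence above are flat runs of key=value fields, some of them quoted. A small parsing sketch (the regex and the abridged example line are illustrative, not taken verbatim from the records):

    import re

    AUDIT_FIELD = re.compile(r"(\w+)=('[^']*'|\"[^\"]*\"|\S+)")

    def parse_audit_fields(record: str) -> dict:
        """Split an audit record into key=value fields, stripping surrounding quotes."""
        return {k: v.strip("'\"") for k, v in AUDIT_FIELD.findall(record)}

    line = ("audit[1634]: USER_START pid=1634 uid=0 auid=500 ses=9 "
            "subj=system_u:system_r:kernel_t:s0 terminal=ssh res=success")
    fields = parse_audit_fields(line)
    print(fields["auid"], fields["ses"], fields["res"])  # -> 500 9 success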
Jul 10 01:12:29.806317 env[1650]: time="2025-07-10T01:12:29.806284977Z" level=info msg="Starting up" Jul 10 01:12:29.807177 env[1650]: time="2025-07-10T01:12:29.807163005Z" level=info msg="parsed scheme: \"unix\"" module=grpc Jul 10 01:12:29.807177 env[1650]: time="2025-07-10T01:12:29.807174550Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Jul 10 01:12:29.807233 env[1650]: time="2025-07-10T01:12:29.807187151Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Jul 10 01:12:29.807233 env[1650]: time="2025-07-10T01:12:29.807195748Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Jul 10 01:12:29.808238 env[1650]: time="2025-07-10T01:12:29.808224151Z" level=info msg="parsed scheme: \"unix\"" module=grpc Jul 10 01:12:29.808238 env[1650]: time="2025-07-10T01:12:29.808234766Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Jul 10 01:12:29.808287 env[1650]: time="2025-07-10T01:12:29.808242289Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Jul 10 01:12:29.808287 env[1650]: time="2025-07-10T01:12:29.808248227Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Jul 10 01:12:29.852524 env[1650]: time="2025-07-10T01:12:29.852500468Z" level=warning msg="Your kernel does not support cgroup blkio weight" Jul 10 01:12:29.852524 env[1650]: time="2025-07-10T01:12:29.852517756Z" level=warning msg="Your kernel does not support cgroup blkio weight_device" Jul 10 01:12:29.852670 env[1650]: time="2025-07-10T01:12:29.852631352Z" level=info msg="Loading containers: start." 
Jul 10 01:12:29.921000 audit[1681]: NETFILTER_CFG table=nat:2 family=2 entries=2 op=nft_register_chain pid=1681 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 10 01:12:29.921000 audit[1681]: SYSCALL arch=c000003e syscall=46 success=yes exit=116 a0=3 a1=7ffd3c4775a0 a2=0 a3=7ffd3c47758c items=0 ppid=1650 pid=1681 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:12:29.921000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Jul 10 01:12:29.922000 audit[1683]: NETFILTER_CFG table=filter:3 family=2 entries=2 op=nft_register_chain pid=1683 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 10 01:12:29.922000 audit[1683]: SYSCALL arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7ffd42a47810 a2=0 a3=7ffd42a477fc items=0 ppid=1650 pid=1683 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:12:29.922000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Jul 10 01:12:29.923000 audit[1685]: NETFILTER_CFG table=filter:4 family=2 entries=1 op=nft_register_chain pid=1685 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 10 01:12:29.923000 audit[1685]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7fff2a706800 a2=0 a3=7fff2a7067ec items=0 ppid=1650 pid=1685 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:12:29.923000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jul 10 01:12:29.924000 audit[1687]: NETFILTER_CFG table=filter:5 family=2 entries=1 op=nft_register_chain pid=1687 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 10 01:12:29.924000 audit[1687]: SYSCALL arch=c000003e syscall=46 success=yes exit=112 a0=3 a1=7fff4b6b31d0 a2=0 a3=7fff4b6b31bc items=0 ppid=1650 pid=1687 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:12:29.924000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Jul 10 01:12:29.928000 audit[1689]: NETFILTER_CFG table=filter:6 family=2 entries=1 op=nft_register_rule pid=1689 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 10 01:12:29.928000 audit[1689]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffc1f2ce250 a2=0 a3=7ffc1f2ce23c items=0 ppid=1650 pid=1689 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:12:29.928000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D49534F4C4154494F4E2D53544147452D31002D6A0052455455524E Jul 10 01:12:29.945000 audit[1694]: NETFILTER_CFG table=filter:7 family=2 entries=1 op=nft_register_rule pid=1694 subj=system_u:system_r:kernel_t:s0 
comm="iptables" Jul 10 01:12:29.945000 audit[1694]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffd8f59f710 a2=0 a3=7ffd8f59f6fc items=0 ppid=1650 pid=1694 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:12:29.945000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D49534F4C4154494F4E2D53544147452D32002D6A0052455455524E Jul 10 01:12:29.955000 audit[1696]: NETFILTER_CFG table=filter:8 family=2 entries=1 op=nft_register_chain pid=1696 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 10 01:12:29.955000 audit[1696]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffe0332bc70 a2=0 a3=7ffe0332bc5c items=0 ppid=1650 pid=1696 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:12:29.955000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Jul 10 01:12:29.956000 audit[1698]: NETFILTER_CFG table=filter:9 family=2 entries=1 op=nft_register_rule pid=1698 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 10 01:12:29.956000 audit[1698]: SYSCALL arch=c000003e syscall=46 success=yes exit=212 a0=3 a1=7fffb1f75f40 a2=0 a3=7fffb1f75f2c items=0 ppid=1650 pid=1698 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:12:29.956000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Jul 10 01:12:29.957000 audit[1700]: NETFILTER_CFG table=filter:10 family=2 entries=2 op=nft_register_chain pid=1700 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 10 01:12:29.957000 audit[1700]: SYSCALL arch=c000003e syscall=46 success=yes exit=308 a0=3 a1=7ffd07aabbb0 a2=0 a3=7ffd07aabb9c items=0 ppid=1650 pid=1700 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:12:29.957000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Jul 10 01:12:29.964000 audit[1704]: NETFILTER_CFG table=filter:11 family=2 entries=1 op=nft_unregister_rule pid=1704 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 10 01:12:29.964000 audit[1704]: SYSCALL arch=c000003e syscall=46 success=yes exit=216 a0=3 a1=7ffdf7b3e000 a2=0 a3=7ffdf7b3dfec items=0 ppid=1650 pid=1704 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:12:29.964000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4400464F5257415244002D6A00444F434B45522D55534552 Jul 10 01:12:29.968000 audit[1705]: NETFILTER_CFG table=filter:12 family=2 entries=1 op=nft_register_rule pid=1705 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 10 01:12:29.968000 audit[1705]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7ffeb21c5470 a2=0 a3=7ffeb21c545c items=0 ppid=1650 
pid=1705 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:12:29.968000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Jul 10 01:12:29.989660 kernel: Initializing XFRM netlink socket Jul 10 01:12:30.044774 env[1650]: time="2025-07-10T01:12:30.044755192Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address" Jul 10 01:12:30.071000 audit[1713]: NETFILTER_CFG table=nat:13 family=2 entries=2 op=nft_register_chain pid=1713 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 10 01:12:30.071000 audit[1713]: SYSCALL arch=c000003e syscall=46 success=yes exit=492 a0=3 a1=7ffd80d0b6f0 a2=0 a3=7ffd80d0b6dc items=0 ppid=1650 pid=1713 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:12:30.071000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4900504F5354524F5554494E47002D73003137322E31372E302E302F31360000002D6F00646F636B657230002D6A004D415351554552414445 Jul 10 01:12:30.084000 audit[1716]: NETFILTER_CFG table=nat:14 family=2 entries=1 op=nft_register_rule pid=1716 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 10 01:12:30.084000 audit[1716]: SYSCALL arch=c000003e syscall=46 success=yes exit=288 a0=3 a1=7ffc4dd9f5c0 a2=0 a3=7ffc4dd9f5ac items=0 ppid=1650 pid=1716 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:12:30.084000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4900444F434B4552002D6900646F636B657230002D6A0052455455524E Jul 10 01:12:30.086000 audit[1719]: NETFILTER_CFG table=filter:15 family=2 entries=1 op=nft_register_rule pid=1719 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 10 01:12:30.086000 audit[1719]: SYSCALL arch=c000003e syscall=46 success=yes exit=376 a0=3 a1=7ffdb90ffd30 a2=0 a3=7ffdb90ffd1c items=0 ppid=1650 pid=1719 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:12:30.086000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6900646F636B657230002D6F00646F636B657230002D6A00414343455054 Jul 10 01:12:30.087000 audit[1721]: NETFILTER_CFG table=filter:16 family=2 entries=1 op=nft_register_rule pid=1721 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 10 01:12:30.087000 audit[1721]: SYSCALL arch=c000003e syscall=46 success=yes exit=376 a0=3 a1=7ffe44fd7a80 a2=0 a3=7ffe44fd7a6c items=0 ppid=1650 pid=1721 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:12:30.087000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6900646F636B6572300000002D6F00646F636B657230002D6A00414343455054 Jul 10 01:12:30.088000 audit[1723]: NETFILTER_CFG 
table=nat:17 family=2 entries=2 op=nft_register_chain pid=1723 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 10 01:12:30.088000 audit[1723]: SYSCALL arch=c000003e syscall=46 success=yes exit=356 a0=3 a1=7ffdae7966c0 a2=0 a3=7ffdae7966ac items=0 ppid=1650 pid=1723 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:12:30.088000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Jul 10 01:12:30.090000 audit[1725]: NETFILTER_CFG table=nat:18 family=2 entries=2 op=nft_register_chain pid=1725 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 10 01:12:30.090000 audit[1725]: SYSCALL arch=c000003e syscall=46 success=yes exit=444 a0=3 a1=7ffd8199b5a0 a2=0 a3=7ffd8199b58c items=0 ppid=1650 pid=1725 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:12:30.090000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003132372E302E302E302F38 Jul 10 01:12:30.091000 audit[1727]: NETFILTER_CFG table=filter:19 family=2 entries=1 op=nft_register_rule pid=1727 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 10 01:12:30.091000 audit[1727]: SYSCALL arch=c000003e syscall=46 success=yes exit=304 a0=3 a1=7ffc870e3c00 a2=0 a3=7ffc870e3bec items=0 ppid=1650 pid=1727 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:12:30.091000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6F00646F636B657230002D6A00444F434B4552 Jul 10 01:12:30.105000 audit[1730]: NETFILTER_CFG table=filter:20 family=2 entries=1 op=nft_register_rule pid=1730 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 10 01:12:30.105000 audit[1730]: SYSCALL arch=c000003e syscall=46 success=yes exit=508 a0=3 a1=7ffd99725eb0 a2=0 a3=7ffd99725e9c items=0 ppid=1650 pid=1730 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:12:30.105000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6F00646F636B657230002D6D00636F6E6E747261636B002D2D637473746174650052454C415445442C45535441424C4953484544002D6A00414343455054 Jul 10 01:12:30.107000 audit[1732]: NETFILTER_CFG table=filter:21 family=2 entries=1 op=nft_register_rule pid=1732 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 10 01:12:30.107000 audit[1732]: SYSCALL arch=c000003e syscall=46 success=yes exit=240 a0=3 a1=7ffce8f43540 a2=0 a3=7ffce8f4352c items=0 ppid=1650 pid=1732 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:12:30.107000 audit: PROCTITLE 
proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jul 10 01:12:30.108000 audit[1734]: NETFILTER_CFG table=filter:22 family=2 entries=1 op=nft_register_rule pid=1734 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 10 01:12:30.108000 audit[1734]: SYSCALL arch=c000003e syscall=46 success=yes exit=428 a0=3 a1=7ffd3272d3c0 a2=0 a3=7ffd3272d3ac items=0 ppid=1650 pid=1734 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:12:30.108000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D31002D6900646F636B6572300000002D6F00646F636B657230002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D32 Jul 10 01:12:30.109000 audit[1736]: NETFILTER_CFG table=filter:23 family=2 entries=1 op=nft_register_rule pid=1736 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 10 01:12:30.109000 audit[1736]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7fffa2e4a710 a2=0 a3=7fffa2e4a6fc items=0 ppid=1650 pid=1736 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:12:30.109000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D32002D6F00646F636B657230002D6A0044524F50 Jul 10 01:12:30.111417 systemd-networkd[1114]: docker0: Link UP Jul 10 01:12:30.116000 audit[1740]: NETFILTER_CFG table=filter:24 family=2 entries=1 op=nft_unregister_rule pid=1740 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 10 01:12:30.116000 audit[1740]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffff1c96030 a2=0 a3=7ffff1c9601c items=0 ppid=1650 pid=1740 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:12:30.116000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4400464F5257415244002D6A00444F434B45522D55534552 Jul 10 01:12:30.120000 audit[1741]: NETFILTER_CFG table=filter:25 family=2 entries=1 op=nft_register_rule pid=1741 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 10 01:12:30.120000 audit[1741]: SYSCALL arch=c000003e syscall=46 success=yes exit=224 a0=3 a1=7ffd21d02600 a2=0 a3=7ffd21d025ec items=0 ppid=1650 pid=1741 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:12:30.120000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Jul 10 01:12:30.122458 env[1650]: time="2025-07-10T01:12:30.122441823Z" level=info msg="Loading containers: done." 
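Editor's note: the audit PROCTITLE records above carry the invoked command line as a hex-encoded, NUL-separated argv. A minimal Python sketch for turning one of those blobs back into a readable command (the sample value is copied from the DOCKER-ISOLATION-STAGE-2 record above):

    # Decode an audit PROCTITLE value: hex-encoded bytes with NUL between argv entries.
    proctitle_hex = (
        "2F7573722F7362696E2F69707461626C6573002D2D77616974002D41"
        "00444F434B45522D49534F4C4154494F4E2D53544147452D32002D6A0052455455524E"
    )
    argv = bytes.fromhex(proctitle_hex).split(b"\x00")
    print(" ".join(arg.decode() for arg in argv))
    # -> /usr/sbin/iptables --wait -A DOCKER-ISOLATION-STAGE-2 -j RETURN

The same decoding applies to every PROCTITLE line in this section; they are all xtables-nft-multi invocations made while the Docker daemon sets up its chains.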
Jul 10 01:12:30.138684 env[1650]: time="2025-07-10T01:12:30.138651219Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jul 10 01:12:30.138791 env[1650]: time="2025-07-10T01:12:30.138777466Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23 Jul 10 01:12:30.138853 env[1650]: time="2025-07-10T01:12:30.138836376Z" level=info msg="Daemon has completed initialization" Jul 10 01:12:30.161054 systemd[1]: Started docker.service. Jul 10 01:12:30.159000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=docker comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 01:12:30.165975 env[1650]: time="2025-07-10T01:12:30.165937643Z" level=info msg="API listen on /run/docker.sock" Jul 10 01:12:31.391873 env[1363]: time="2025-07-10T01:12:31.391844569Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.10\"" Jul 10 01:12:32.021720 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2203351770.mount: Deactivated successfully. Jul 10 01:12:33.270143 env[1363]: time="2025-07-10T01:12:33.270089235Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.31.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 01:12:33.270831 env[1363]: time="2025-07-10T01:12:33.270816205Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:74c5154ea84d9a53c406e6c00e53cf66145cce821fd80e3c74e2e1bf312f3977,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 01:12:33.275679 env[1363]: time="2025-07-10T01:12:33.275665628Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.31.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 01:12:33.277830 env[1363]: time="2025-07-10T01:12:33.277816135Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:083d7d64af31cd090f870eb49fb815e6bb42c175fc602ee9dae2f28f082bd4dc,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 01:12:33.278129 env[1363]: time="2025-07-10T01:12:33.278114891Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.10\" returns image reference \"sha256:74c5154ea84d9a53c406e6c00e53cf66145cce821fd80e3c74e2e1bf312f3977\"" Jul 10 01:12:33.278551 env[1363]: time="2025-07-10T01:12:33.278536269Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.10\"" Jul 10 01:12:34.515000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 01:12:34.515000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 01:12:34.517471 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jul 10 01:12:34.517589 systemd[1]: Stopped kubelet.service. Jul 10 01:12:34.518676 systemd[1]: Starting kubelet.service... Jul 10 01:12:34.773937 systemd[1]: Started kubelet.service. 
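Editor's note: the daemon above reports its control API on /run/docker.sock. A rough sketch (standard library only, run with permission to read the socket) that confirms the socket answers by speaking plain HTTP to the engine's usual health-check endpoint, /_ping:

    # Ping the Docker Engine API over the Unix socket reported above.
    import socket

    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
        s.connect("/run/docker.sock")
        s.sendall(b"GET /_ping HTTP/1.0\r\nHost: docker\r\n\r\n")
        reply = b""
        while chunk := s.recv(4096):
            reply += chunk
    print(reply.decode(errors="replace"))  # a healthy daemon answers HTTP 200 with body "OK"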
Jul 10 01:12:34.777481 kernel: kauditd_printk_skb: 86 callbacks suppressed Jul 10 01:12:34.777541 kernel: audit: type=1130 audit(1752109954.772:202): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 01:12:34.772000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 01:12:34.792735 env[1363]: time="2025-07-10T01:12:34.792711573Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.31.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 01:12:34.813932 kubelet[1780]: E0710 01:12:34.813900 1780 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 10 01:12:34.814938 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 10 01:12:34.815028 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 10 01:12:34.813000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jul 10 01:12:34.818649 kernel: audit: type=1131 audit(1752109954.813:203): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=failed' Jul 10 01:12:35.215749 env[1363]: time="2025-07-10T01:12:35.214873778Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:c285c4e62c91c434e9928bee7063b361509f43f43faa31641b626d6eff97616d,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 01:12:35.243974 env[1363]: time="2025-07-10T01:12:35.243943108Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.31.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 01:12:35.266727 env[1363]: time="2025-07-10T01:12:35.266691349Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:3c67387d023c6114879f1e817669fd641797d30f117230682faf3930ecaaf0fe,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 01:12:35.267356 env[1363]: time="2025-07-10T01:12:35.267332583Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.10\" returns image reference \"sha256:c285c4e62c91c434e9928bee7063b361509f43f43faa31641b626d6eff97616d\"" Jul 10 01:12:35.268247 env[1363]: time="2025-07-10T01:12:35.268227851Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.10\"" Jul 10 01:12:36.729122 env[1363]: time="2025-07-10T01:12:36.729088505Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.31.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 01:12:36.729942 env[1363]: time="2025-07-10T01:12:36.729925309Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:61daeb7d112d9547792027cb16242b1d131f357f511545477381457fff5a69e2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 01:12:36.730944 env[1363]: time="2025-07-10T01:12:36.730930852Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.31.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 01:12:36.732612 env[1363]: time="2025-07-10T01:12:36.732599061Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:284dc2a5cf6afc9b76e39ad4b79c680c23d289488517643b28784a06d0141272,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 01:12:36.733002 env[1363]: time="2025-07-10T01:12:36.732988192Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.10\" returns image reference \"sha256:61daeb7d112d9547792027cb16242b1d131f357f511545477381457fff5a69e2\"" Jul 10 01:12:36.733318 env[1363]: time="2025-07-10T01:12:36.733306338Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.10\"" Jul 10 01:12:38.466509 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1507631737.mount: Deactivated successfully. 
Jul 10 01:12:39.109183 env[1363]: time="2025-07-10T01:12:39.109135607Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.31.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 01:12:39.131838 env[1363]: time="2025-07-10T01:12:39.131815892Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3ed600862d3e69931e0f9f4dbf5c2b46343af40aa079772434f13de771bdc30c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 01:12:39.134984 env[1363]: time="2025-07-10T01:12:39.134965049Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.31.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 01:12:39.137071 env[1363]: time="2025-07-10T01:12:39.137053948Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:bcbb293812bdf587b28ea98369a8c347ca84884160046296761acdf12b27029d,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 01:12:39.137322 env[1363]: time="2025-07-10T01:12:39.137304402Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.10\" returns image reference \"sha256:3ed600862d3e69931e0f9f4dbf5c2b46343af40aa079772434f13de771bdc30c\"" Jul 10 01:12:39.137913 env[1363]: time="2025-07-10T01:12:39.137898251Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Jul 10 01:12:39.642447 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3560510146.mount: Deactivated successfully. Jul 10 01:12:40.635404 env[1363]: time="2025-07-10T01:12:40.635358196Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.11.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 01:12:40.636425 env[1363]: time="2025-07-10T01:12:40.636408211Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 01:12:40.639847 env[1363]: time="2025-07-10T01:12:40.639825245Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.11.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 01:12:40.641651 env[1363]: time="2025-07-10T01:12:40.641612512Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 01:12:40.642108 env[1363]: time="2025-07-10T01:12:40.642087292Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Jul 10 01:12:40.643071 env[1363]: time="2025-07-10T01:12:40.643050935Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jul 10 01:12:41.144148 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3010472253.mount: Deactivated successfully. 
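Editor's note: each pull above ends with a line of the form PullImage \"<tag>\" returns image reference \"sha256:<id>\", which ties the requested tag to the local image ID. A rough sketch for extracting those pairs from journal text, tailored to the exact message shape shown above (quotes inside msg="..." appear as \"):

    import re

    # Matches only the pull-completion lines, not the initial "PullImage" announcements.
    PULL_RE = re.compile(r'PullImage \\"(.+?)\\" returns image reference \\"(sha256:[0-9a-f]+)\\"')

    def pulled_images(journal_text: str) -> list[tuple[str, str]]:
        # Returns (tag, image ID) pairs, e.g. registry.k8s.io/kube-proxy:v1.31.10 -> sha256:3ed6...
        return PULL_RE.findall(journal_text)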
Jul 10 01:12:41.151001 env[1363]: time="2025-07-10T01:12:41.150970497Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 01:12:41.153195 env[1363]: time="2025-07-10T01:12:41.153177642Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 01:12:41.155484 env[1363]: time="2025-07-10T01:12:41.155469044Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 01:12:41.157368 env[1363]: time="2025-07-10T01:12:41.157351958Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 01:12:41.157767 env[1363]: time="2025-07-10T01:12:41.157752566Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Jul 10 01:12:41.158491 env[1363]: time="2025-07-10T01:12:41.158472165Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Jul 10 01:12:41.768430 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3242420927.mount: Deactivated successfully. Jul 10 01:12:43.935952 env[1363]: time="2025-07-10T01:12:43.935910247Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.15-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 01:12:43.967337 env[1363]: time="2025-07-10T01:12:43.967304315Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 01:12:43.992955 env[1363]: time="2025-07-10T01:12:43.992926805Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.15-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 01:12:44.021554 env[1363]: time="2025-07-10T01:12:44.021502278Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 01:12:44.021837 env[1363]: time="2025-07-10T01:12:44.021814026Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\"" Jul 10 01:12:44.061163 update_engine[1353]: I0710 01:12:44.060911 1353 update_attempter.cc:509] Updating boot flags... Jul 10 01:12:44.824691 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Jul 10 01:12:44.824814 systemd[1]: Stopped kubelet.service. Jul 10 01:12:44.823000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 01:12:44.826239 systemd[1]: Starting kubelet.service... 
Jul 10 01:12:44.831323 kernel: audit: type=1130 audit(1752109964.823:204): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 01:12:44.831383 kernel: audit: type=1131 audit(1752109964.823:205): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 01:12:44.823000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 01:12:46.193105 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jul 10 01:12:46.193163 systemd[1]: kubelet.service: Failed with result 'signal'. Jul 10 01:12:46.193334 systemd[1]: Stopped kubelet.service. Jul 10 01:12:46.191000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jul 10 01:12:46.195079 systemd[1]: Starting kubelet.service... Jul 10 01:12:46.196649 kernel: audit: type=1130 audit(1752109966.191:206): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jul 10 01:12:46.211590 systemd[1]: Reloading. Jul 10 01:12:46.267299 /usr/lib/systemd/system-generators/torcx-generator[1851]: time="2025-07-10T01:12:46Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]" Jul 10 01:12:46.267318 /usr/lib/systemd/system-generators/torcx-generator[1851]: time="2025-07-10T01:12:46Z" level=info msg="torcx already run" Jul 10 01:12:46.313232 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Jul 10 01:12:46.313245 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Jul 10 01:12:46.324819 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 10 01:12:46.456567 kernel: audit: type=1130 audit(1752109966.447:207): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jul 10 01:12:46.447000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jul 10 01:12:46.449138 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jul 10 01:12:46.449215 systemd[1]: kubelet.service: Failed with result 'signal'. Jul 10 01:12:46.449428 systemd[1]: Stopped kubelet.service. Jul 10 01:12:46.451462 systemd[1]: Starting kubelet.service... 
Jul 10 01:12:48.537000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 01:12:48.538975 systemd[1]: Started kubelet.service. Jul 10 01:12:48.542648 kernel: audit: type=1130 audit(1752109968.537:208): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 01:12:48.627213 kubelet[1926]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 10 01:12:48.627213 kubelet[1926]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jul 10 01:12:48.627213 kubelet[1926]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 10 01:12:48.627583 kubelet[1926]: I0710 01:12:48.627257 1926 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 10 01:12:48.881488 kubelet[1926]: I0710 01:12:48.880628 1926 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Jul 10 01:12:48.881488 kubelet[1926]: I0710 01:12:48.880655 1926 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 10 01:12:48.881488 kubelet[1926]: I0710 01:12:48.880800 1926 server.go:934] "Client rotation is on, will bootstrap in background" Jul 10 01:12:49.115626 kubelet[1926]: E0710 01:12:49.115593 1926 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://139.178.70.102:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 139.178.70.102:6443: connect: connection refused" logger="UnhandledError" Jul 10 01:12:49.117346 kubelet[1926]: I0710 01:12:49.116496 1926 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 10 01:12:49.138857 kubelet[1926]: E0710 01:12:49.138474 1926 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jul 10 01:12:49.138857 kubelet[1926]: I0710 01:12:49.138520 1926 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jul 10 01:12:49.144046 kubelet[1926]: I0710 01:12:49.144009 1926 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 10 01:12:49.147121 kubelet[1926]: I0710 01:12:49.147100 1926 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jul 10 01:12:49.147225 kubelet[1926]: I0710 01:12:49.147193 1926 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 10 01:12:49.147391 kubelet[1926]: I0710 01:12:49.147224 1926 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1} Jul 10 01:12:49.147509 kubelet[1926]: I0710 01:12:49.147396 1926 topology_manager.go:138] "Creating topology manager with none policy" Jul 10 01:12:49.147509 kubelet[1926]: I0710 01:12:49.147404 1926 container_manager_linux.go:300] "Creating device plugin manager" Jul 10 01:12:49.147509 kubelet[1926]: I0710 01:12:49.147480 1926 state_mem.go:36] "Initialized new in-memory state store" Jul 10 01:12:49.160830 kubelet[1926]: I0710 01:12:49.160779 1926 kubelet.go:408] "Attempting to sync node with API server" Jul 10 01:12:49.160830 kubelet[1926]: I0710 01:12:49.160810 1926 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 10 01:12:49.160830 kubelet[1926]: I0710 01:12:49.160831 1926 kubelet.go:314] "Adding apiserver pod source" Jul 10 01:12:49.160977 kubelet[1926]: I0710 01:12:49.160844 1926 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 10 01:12:49.166962 kubelet[1926]: W0710 01:12:49.166922 1926 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://139.178.70.102:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 139.178.70.102:6443: connect: connection refused Jul 10 01:12:49.167099 kubelet[1926]: E0710 01:12:49.167081 1926 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get 
\"https://139.178.70.102:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 139.178.70.102:6443: connect: connection refused" logger="UnhandledError" Jul 10 01:12:49.177431 kubelet[1926]: I0710 01:12:49.177397 1926 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Jul 10 01:12:49.177692 kubelet[1926]: I0710 01:12:49.177680 1926 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 10 01:12:49.177726 kubelet[1926]: W0710 01:12:49.177722 1926 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jul 10 01:12:49.223876 kubelet[1926]: W0710 01:12:49.223835 1926 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://139.178.70.102:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 139.178.70.102:6443: connect: connection refused Jul 10 01:12:49.223972 kubelet[1926]: E0710 01:12:49.223879 1926 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://139.178.70.102:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 139.178.70.102:6443: connect: connection refused" logger="UnhandledError" Jul 10 01:12:49.224083 kubelet[1926]: I0710 01:12:49.224071 1926 server.go:1274] "Started kubelet" Jul 10 01:12:49.264313 kubelet[1926]: I0710 01:12:49.264287 1926 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jul 10 01:12:49.265017 kubelet[1926]: I0710 01:12:49.265008 1926 server.go:449] "Adding debug handlers to kubelet server" Jul 10 01:12:49.306000 audit[1926]: AVC avc: denied { mac_admin } for pid=1926 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:12:49.310234 kubelet[1926]: I0710 01:12:49.308963 1926 kubelet.go:1430] "Unprivileged containerized plugins might not work, could not set selinux context on plugin registration dir" path="/var/lib/kubelet/plugins_registry" err="setxattr /var/lib/kubelet/plugins_registry: invalid argument" Jul 10 01:12:49.310234 kubelet[1926]: I0710 01:12:49.309029 1926 kubelet.go:1434] "Unprivileged containerized plugins might not work, could not set selinux context on plugins dir" path="/var/lib/kubelet/plugins" err="setxattr /var/lib/kubelet/plugins: invalid argument" Jul 10 01:12:49.310234 kubelet[1926]: I0710 01:12:49.309110 1926 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 10 01:12:49.306000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Jul 10 01:12:49.314057 kernel: audit: type=1400 audit(1752109969.306:209): avc: denied { mac_admin } for pid=1926 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:12:49.314112 kernel: audit: type=1401 audit(1752109969.306:209): op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Jul 10 01:12:49.314135 kernel: audit: type=1300 audit(1752109969.306:209): arch=c000003e syscall=188 success=no exit=-22 a0=c00092bfb0 a1=c000b14120 a2=c00092bf80 a3=25 items=0 ppid=1 pid=1926 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" 
exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:12:49.306000 audit[1926]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c00092bfb0 a1=c000b14120 a2=c00092bf80 a3=25 items=0 ppid=1 pid=1926 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:12:49.306000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Jul 10 01:12:49.320570 kubelet[1926]: I0710 01:12:49.320533 1926 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 10 01:12:49.320816 kubelet[1926]: I0710 01:12:49.320805 1926 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 10 01:12:49.322476 kubelet[1926]: I0710 01:12:49.322462 1926 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 10 01:12:49.325343 kernel: audit: type=1327 audit(1752109969.306:209): proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Jul 10 01:12:49.325414 kernel: audit: type=1400 audit(1752109969.308:210): avc: denied { mac_admin } for pid=1926 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:12:49.308000 audit[1926]: AVC avc: denied { mac_admin } for pid=1926 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:12:49.327509 kubelet[1926]: I0710 01:12:49.327497 1926 volume_manager.go:289] "Starting Kubelet Volume Manager" Jul 10 01:12:49.308000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Jul 10 01:12:49.308000 audit[1926]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c0009b12c0 a1=c000b14138 a2=c000b16060 a3=25 items=0 ppid=1 pid=1926 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:12:49.308000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Jul 10 01:12:49.329793 kubelet[1926]: I0710 01:12:49.328951 1926 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Jul 10 01:12:49.329851 kubelet[1926]: I0710 01:12:49.329830 1926 reconciler.go:26] "Reconciler: start to sync state" Jul 10 01:12:49.329851 kubelet[1926]: E0710 01:12:49.329099 1926 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 10 01:12:49.329952 kubelet[1926]: W0710 01:12:49.329907 1926 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to 
list *v1.CSIDriver: Get "https://139.178.70.102:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 139.178.70.102:6443: connect: connection refused Jul 10 01:12:49.329986 kubelet[1926]: E0710 01:12:49.329959 1926 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://139.178.70.102:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 139.178.70.102:6443: connect: connection refused" logger="UnhandledError" Jul 10 01:12:49.330023 kubelet[1926]: E0710 01:12:49.330007 1926 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.102:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 139.178.70.102:6443: connect: connection refused" interval="200ms" Jul 10 01:12:49.331703 kubelet[1926]: I0710 01:12:49.331691 1926 factory.go:221] Registration of the systemd container factory successfully Jul 10 01:12:49.331841 kubelet[1926]: I0710 01:12:49.331830 1926 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 10 01:12:49.333008 kubelet[1926]: I0710 01:12:49.332997 1926 factory.go:221] Registration of the containerd container factory successfully Jul 10 01:12:49.332000 audit[1938]: NETFILTER_CFG table=mangle:26 family=2 entries=2 op=nft_register_chain pid=1938 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 10 01:12:49.332000 audit[1938]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffcf56f76f0 a2=0 a3=7ffcf56f76dc items=0 ppid=1926 pid=1938 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:12:49.332000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Jul 10 01:12:49.341000 audit[1939]: NETFILTER_CFG table=filter:27 family=2 entries=1 op=nft_register_chain pid=1939 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 10 01:12:49.341000 audit[1939]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffde1056710 a2=0 a3=7ffde10566fc items=0 ppid=1926 pid=1939 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:12:49.341000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Jul 10 01:12:49.344994 kubelet[1926]: E0710 01:12:49.334756 1926 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://139.178.70.102:6443/api/v1/namespaces/default/events\": dial tcp 139.178.70.102:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1850bebbe30545ee default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-07-10 01:12:49.22405835 +0000 UTC m=+0.678903240,LastTimestamp:2025-07-10 01:12:49.22405835 +0000 UTC 
m=+0.678903240,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jul 10 01:12:49.345000 audit[1941]: NETFILTER_CFG table=filter:28 family=2 entries=2 op=nft_register_chain pid=1941 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 10 01:12:49.345000 audit[1941]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7fff88e089f0 a2=0 a3=7fff88e089dc items=0 ppid=1926 pid=1941 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:12:49.345000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jul 10 01:12:49.348000 audit[1943]: NETFILTER_CFG table=filter:29 family=2 entries=2 op=nft_register_chain pid=1943 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 10 01:12:49.348000 audit[1943]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffe316e5cd0 a2=0 a3=7ffe316e5cbc items=0 ppid=1926 pid=1943 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:12:49.348000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jul 10 01:12:49.352177 kubelet[1926]: I0710 01:12:49.352163 1926 cpu_manager.go:214] "Starting CPU manager" policy="none" Jul 10 01:12:49.352268 kubelet[1926]: I0710 01:12:49.352260 1926 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jul 10 01:12:49.352327 kubelet[1926]: I0710 01:12:49.352318 1926 state_mem.go:36] "Initialized new in-memory state store" Jul 10 01:12:49.355756 kubelet[1926]: I0710 01:12:49.355745 1926 policy_none.go:49] "None policy: Start" Jul 10 01:12:49.356185 kubelet[1926]: I0710 01:12:49.356174 1926 memory_manager.go:170] "Starting memorymanager" policy="None" Jul 10 01:12:49.356249 kubelet[1926]: I0710 01:12:49.356242 1926 state_mem.go:35] "Initializing new in-memory state store" Jul 10 01:12:49.363329 kubelet[1926]: I0710 01:12:49.363302 1926 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 10 01:12:49.361000 audit[1926]: AVC avc: denied { mac_admin } for pid=1926 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:12:49.361000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Jul 10 01:12:49.361000 audit[1926]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c00088e0c0 a1=c000b15290 a2=c00088e090 a3=25 items=0 ppid=1 pid=1926 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:12:49.361000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Jul 10 01:12:49.363585 kubelet[1926]: I0710 01:12:49.363413 1926 server.go:88] "Unprivileged containerized plugins might 
not work. Could not set selinux context on socket dir" path="/var/lib/kubelet/device-plugins/" err="setxattr /var/lib/kubelet/device-plugins/: invalid argument" Jul 10 01:12:49.363585 kubelet[1926]: I0710 01:12:49.363505 1926 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 10 01:12:49.363585 kubelet[1926]: I0710 01:12:49.363512 1926 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 10 01:12:49.364143 kubelet[1926]: E0710 01:12:49.364128 1926 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jul 10 01:12:49.364405 kubelet[1926]: I0710 01:12:49.364393 1926 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 10 01:12:49.377000 audit[1949]: NETFILTER_CFG table=filter:30 family=2 entries=1 op=nft_register_rule pid=1949 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 10 01:12:49.377000 audit[1949]: SYSCALL arch=c000003e syscall=46 success=yes exit=924 a0=3 a1=7fff7ed80840 a2=0 a3=7fff7ed8082c items=0 ppid=1926 pid=1949 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:12:49.377000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E7400626C6F636B20696E636F6D696E67206C6F63616C6E657420636F6E6E656374696F6E73002D2D647374003132372E302E302E302F38 Jul 10 01:12:49.379940 kubelet[1926]: I0710 01:12:49.379907 1926 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 10 01:12:49.378000 audit[1951]: NETFILTER_CFG table=mangle:31 family=10 entries=2 op=nft_register_chain pid=1951 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 10 01:12:49.378000 audit[1951]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffd8bbf3df0 a2=0 a3=7ffd8bbf3ddc items=0 ppid=1926 pid=1951 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:12:49.378000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Jul 10 01:12:49.380932 kubelet[1926]: I0710 01:12:49.380917 1926 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jul 10 01:12:49.380932 kubelet[1926]: I0710 01:12:49.380932 1926 status_manager.go:217] "Starting to sync pod status with apiserver" Jul 10 01:12:49.380992 kubelet[1926]: I0710 01:12:49.380945 1926 kubelet.go:2321] "Starting kubelet main sync loop" Jul 10 01:12:49.381054 kubelet[1926]: E0710 01:12:49.381032 1926 kubelet.go:2345] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" Jul 10 01:12:49.379000 audit[1952]: NETFILTER_CFG table=mangle:32 family=2 entries=1 op=nft_register_chain pid=1952 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 10 01:12:49.379000 audit[1952]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe350d7490 a2=0 a3=7ffe350d747c items=0 ppid=1926 pid=1952 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:12:49.379000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Jul 10 01:12:49.381714 kubelet[1926]: W0710 01:12:49.381610 1926 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://139.178.70.102:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 139.178.70.102:6443: connect: connection refused Jul 10 01:12:49.381752 kubelet[1926]: E0710 01:12:49.381724 1926 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://139.178.70.102:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 139.178.70.102:6443: connect: connection refused" logger="UnhandledError" Jul 10 01:12:49.380000 audit[1953]: NETFILTER_CFG table=mangle:33 family=10 entries=1 op=nft_register_chain pid=1953 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 10 01:12:49.380000 audit[1953]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe2b7d9d40 a2=0 a3=7ffe2b7d9d2c items=0 ppid=1926 pid=1953 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:12:49.380000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Jul 10 01:12:49.381000 audit[1954]: NETFILTER_CFG table=nat:34 family=2 entries=1 op=nft_register_chain pid=1954 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 10 01:12:49.381000 audit[1954]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe0ae9fcb0 a2=0 a3=7ffe0ae9fc9c items=0 ppid=1926 pid=1954 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:12:49.381000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Jul 10 01:12:49.381000 audit[1955]: NETFILTER_CFG table=nat:35 family=10 entries=2 op=nft_register_chain pid=1955 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 10 01:12:49.381000 audit[1955]: SYSCALL arch=c000003e syscall=46 success=yes exit=128 a0=3 a1=7ffcad651ab0 a2=0 a3=7ffcad651a9c items=0 ppid=1926 pid=1955 auid=4294967295 uid=0 gid=0 euid=0 
suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:12:49.381000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Jul 10 01:12:49.382000 audit[1956]: NETFILTER_CFG table=filter:36 family=2 entries=1 op=nft_register_chain pid=1956 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 10 01:12:49.382000 audit[1956]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fff78c62210 a2=0 a3=7fff78c621fc items=0 ppid=1926 pid=1956 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:12:49.382000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Jul 10 01:12:49.383000 audit[1957]: NETFILTER_CFG table=filter:37 family=10 entries=2 op=nft_register_chain pid=1957 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 10 01:12:49.383000 audit[1957]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffe8f2a66a0 a2=0 a3=7ffe8f2a668c items=0 ppid=1926 pid=1957 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:12:49.383000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Jul 10 01:12:49.464420 kubelet[1926]: I0710 01:12:49.464363 1926 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jul 10 01:12:49.464733 kubelet[1926]: E0710 01:12:49.464720 1926 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://139.178.70.102:6443/api/v1/nodes\": dial tcp 139.178.70.102:6443: connect: connection refused" node="localhost" Jul 10 01:12:49.530873 kubelet[1926]: E0710 01:12:49.530836 1926 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.102:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 139.178.70.102:6443: connect: connection refused" interval="400ms" Jul 10 01:12:49.631384 kubelet[1926]: I0710 01:12:49.631360 1926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8acd60714a0f0f6f5e038fa659db2909-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"8acd60714a0f0f6f5e038fa659db2909\") " pod="kube-system/kube-apiserver-localhost" Jul 10 01:12:49.631675 kubelet[1926]: I0710 01:12:49.631664 1926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 10 01:12:49.631746 kubelet[1926]: I0710 01:12:49.631737 1926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b35b56493416c25588cb530e37ffc065-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"b35b56493416c25588cb530e37ffc065\") " 
pod="kube-system/kube-scheduler-localhost" Jul 10 01:12:49.631809 kubelet[1926]: I0710 01:12:49.631800 1926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8acd60714a0f0f6f5e038fa659db2909-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"8acd60714a0f0f6f5e038fa659db2909\") " pod="kube-system/kube-apiserver-localhost" Jul 10 01:12:49.631871 kubelet[1926]: I0710 01:12:49.631862 1926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 10 01:12:49.631953 kubelet[1926]: I0710 01:12:49.631935 1926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 10 01:12:49.631989 kubelet[1926]: I0710 01:12:49.631958 1926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 10 01:12:49.631989 kubelet[1926]: I0710 01:12:49.631970 1926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 10 01:12:49.632030 kubelet[1926]: I0710 01:12:49.631980 1926 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8acd60714a0f0f6f5e038fa659db2909-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"8acd60714a0f0f6f5e038fa659db2909\") " pod="kube-system/kube-apiserver-localhost" Jul 10 01:12:49.666851 kubelet[1926]: I0710 01:12:49.666836 1926 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jul 10 01:12:49.667124 kubelet[1926]: E0710 01:12:49.667109 1926 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://139.178.70.102:6443/api/v1/nodes\": dial tcp 139.178.70.102:6443: connect: connection refused" node="localhost" Jul 10 01:12:49.787787 env[1363]: time="2025-07-10T01:12:49.787752957Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:b35b56493416c25588cb530e37ffc065,Namespace:kube-system,Attempt:0,}" Jul 10 01:12:49.788368 env[1363]: time="2025-07-10T01:12:49.788265997Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:8acd60714a0f0f6f5e038fa659db2909,Namespace:kube-system,Attempt:0,}" Jul 10 01:12:49.789724 env[1363]: time="2025-07-10T01:12:49.789658398Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:3f04709fe51ae4ab5abd58e8da771b74,Namespace:kube-system,Attempt:0,}" Jul 10 01:12:49.932344 kubelet[1926]: E0710 01:12:49.932316 1926 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.102:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 139.178.70.102:6443: connect: connection refused" interval="800ms" Jul 10 01:12:50.059768 kubelet[1926]: W0710 01:12:50.059374 1926 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://139.178.70.102:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 139.178.70.102:6443: connect: connection refused Jul 10 01:12:50.059768 kubelet[1926]: E0710 01:12:50.059459 1926 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://139.178.70.102:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 139.178.70.102:6443: connect: connection refused" logger="UnhandledError" Jul 10 01:12:50.068871 kubelet[1926]: I0710 01:12:50.068637 1926 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jul 10 01:12:50.068871 kubelet[1926]: E0710 01:12:50.068846 1926 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://139.178.70.102:6443/api/v1/nodes\": dial tcp 139.178.70.102:6443: connect: connection refused" node="localhost" Jul 10 01:12:50.348578 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2602581652.mount: Deactivated successfully. Jul 10 01:12:50.383374 env[1363]: time="2025-07-10T01:12:50.383351518Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 01:12:50.391174 env[1363]: time="2025-07-10T01:12:50.391156630Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 01:12:50.399780 env[1363]: time="2025-07-10T01:12:50.399756977Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 01:12:50.402530 kubelet[1926]: W0710 01:12:50.402493 1926 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://139.178.70.102:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 139.178.70.102:6443: connect: connection refused Jul 10 01:12:50.402606 kubelet[1926]: E0710 01:12:50.402540 1926 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://139.178.70.102:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 139.178.70.102:6443: connect: connection refused" logger="UnhandledError" Jul 10 01:12:50.405514 env[1363]: time="2025-07-10T01:12:50.405494824Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 01:12:50.408836 env[1363]: time="2025-07-10T01:12:50.408819602Z" level=info msg="ImageUpdate event 
&ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 01:12:50.417310 env[1363]: time="2025-07-10T01:12:50.417292348Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 01:12:50.423977 env[1363]: time="2025-07-10T01:12:50.423948798Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 01:12:50.424947 env[1363]: time="2025-07-10T01:12:50.424929164Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 01:12:50.426383 env[1363]: time="2025-07-10T01:12:50.426361665Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 01:12:50.433656 env[1363]: time="2025-07-10T01:12:50.433609168Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 01:12:50.439388 env[1363]: time="2025-07-10T01:12:50.439361143Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 01:12:50.458408 env[1363]: time="2025-07-10T01:12:50.454191276Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 10 01:12:50.458408 env[1363]: time="2025-07-10T01:12:50.454222806Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 10 01:12:50.458408 env[1363]: time="2025-07-10T01:12:50.454231587Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 10 01:12:50.458408 env[1363]: time="2025-07-10T01:12:50.454395402Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/38c6fe2ffb7701339c0787fc0145f3c27d488400622b32132d0a646d4a55bb9b pid=1966 runtime=io.containerd.runc.v2 Jul 10 01:12:50.458623 env[1363]: time="2025-07-10T01:12:50.458421818Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 01:12:50.487481 env[1363]: time="2025-07-10T01:12:50.487435770Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 10 01:12:50.487647 env[1363]: time="2025-07-10T01:12:50.487617189Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 10 01:12:50.487713 env[1363]: time="2025-07-10T01:12:50.487700648Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 10 01:12:50.487848 env[1363]: time="2025-07-10T01:12:50.487833481Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/f8dcf5beaced1e2365092d211e82d524559009db97d39d280dc1e2449686a212 pid=2000 runtime=io.containerd.runc.v2 Jul 10 01:12:50.501764 env[1363]: time="2025-07-10T01:12:50.501716014Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 10 01:12:50.501901 env[1363]: time="2025-07-10T01:12:50.501885563Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 10 01:12:50.501974 env[1363]: time="2025-07-10T01:12:50.501961203Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 10 01:12:50.502123 env[1363]: time="2025-07-10T01:12:50.502107557Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/7da47a2c0d73a548c7135430d1b2863a42eda18a2dd2186d7dea2361b48b603b pid=2030 runtime=io.containerd.runc.v2 Jul 10 01:12:50.526954 env[1363]: time="2025-07-10T01:12:50.526926692Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:b35b56493416c25588cb530e37ffc065,Namespace:kube-system,Attempt:0,} returns sandbox id \"38c6fe2ffb7701339c0787fc0145f3c27d488400622b32132d0a646d4a55bb9b\"" Jul 10 01:12:50.528828 env[1363]: time="2025-07-10T01:12:50.528811041Z" level=info msg="CreateContainer within sandbox \"38c6fe2ffb7701339c0787fc0145f3c27d488400622b32132d0a646d4a55bb9b\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jul 10 01:12:50.546188 env[1363]: time="2025-07-10T01:12:50.546164169Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:3f04709fe51ae4ab5abd58e8da771b74,Namespace:kube-system,Attempt:0,} returns sandbox id \"f8dcf5beaced1e2365092d211e82d524559009db97d39d280dc1e2449686a212\"" Jul 10 01:12:50.547776 env[1363]: time="2025-07-10T01:12:50.547761525Z" level=info msg="CreateContainer within sandbox \"f8dcf5beaced1e2365092d211e82d524559009db97d39d280dc1e2449686a212\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jul 10 01:12:50.562178 env[1363]: time="2025-07-10T01:12:50.562147334Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:8acd60714a0f0f6f5e038fa659db2909,Namespace:kube-system,Attempt:0,} returns sandbox id \"7da47a2c0d73a548c7135430d1b2863a42eda18a2dd2186d7dea2361b48b603b\"" Jul 10 01:12:50.563363 env[1363]: time="2025-07-10T01:12:50.563345670Z" level=info msg="CreateContainer within sandbox \"7da47a2c0d73a548c7135430d1b2863a42eda18a2dd2186d7dea2361b48b603b\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jul 10 01:12:50.610041 env[1363]: time="2025-07-10T01:12:50.609541452Z" level=info msg="CreateContainer within sandbox \"38c6fe2ffb7701339c0787fc0145f3c27d488400622b32132d0a646d4a55bb9b\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"c7f80148b1dd15cbd59d6e22ff09bf3b5bae95d8070822acb223d22a170cfe84\"" Jul 10 01:12:50.610289 env[1363]: time="2025-07-10T01:12:50.610269253Z" level=info msg="StartContainer for \"c7f80148b1dd15cbd59d6e22ff09bf3b5bae95d8070822acb223d22a170cfe84\"" Jul 10 01:12:50.615624 env[1363]: time="2025-07-10T01:12:50.615589634Z" 
level=info msg="CreateContainer within sandbox \"f8dcf5beaced1e2365092d211e82d524559009db97d39d280dc1e2449686a212\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"42c4f72e06364455d75dc5e1a2d8db5f45b4c495410e92ef5effcfafc52d9353\"" Jul 10 01:12:50.615948 env[1363]: time="2025-07-10T01:12:50.615931234Z" level=info msg="StartContainer for \"42c4f72e06364455d75dc5e1a2d8db5f45b4c495410e92ef5effcfafc52d9353\"" Jul 10 01:12:50.616199 env[1363]: time="2025-07-10T01:12:50.616184031Z" level=info msg="CreateContainer within sandbox \"7da47a2c0d73a548c7135430d1b2863a42eda18a2dd2186d7dea2361b48b603b\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"2426c34da3c56c7c197e36edfc96763e7adc7f0e476d41bf1372bb6d05be576f\"" Jul 10 01:12:50.616437 env[1363]: time="2025-07-10T01:12:50.616424395Z" level=info msg="StartContainer for \"2426c34da3c56c7c197e36edfc96763e7adc7f0e476d41bf1372bb6d05be576f\"" Jul 10 01:12:50.680747 env[1363]: time="2025-07-10T01:12:50.680704168Z" level=info msg="StartContainer for \"c7f80148b1dd15cbd59d6e22ff09bf3b5bae95d8070822acb223d22a170cfe84\" returns successfully" Jul 10 01:12:50.712697 env[1363]: time="2025-07-10T01:12:50.712668743Z" level=info msg="StartContainer for \"42c4f72e06364455d75dc5e1a2d8db5f45b4c495410e92ef5effcfafc52d9353\" returns successfully" Jul 10 01:12:50.732658 env[1363]: time="2025-07-10T01:12:50.728679068Z" level=info msg="StartContainer for \"2426c34da3c56c7c197e36edfc96763e7adc7f0e476d41bf1372bb6d05be576f\" returns successfully" Jul 10 01:12:50.735011 kubelet[1926]: E0710 01:12:50.734986 1926 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.102:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 139.178.70.102:6443: connect: connection refused" interval="1.6s" Jul 10 01:12:50.828613 kubelet[1926]: W0710 01:12:50.828567 1926 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://139.178.70.102:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 139.178.70.102:6443: connect: connection refused Jul 10 01:12:50.828613 kubelet[1926]: E0710 01:12:50.828612 1926 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://139.178.70.102:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 139.178.70.102:6443: connect: connection refused" logger="UnhandledError" Jul 10 01:12:50.848195 kubelet[1926]: W0710 01:12:50.848150 1926 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://139.178.70.102:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 139.178.70.102:6443: connect: connection refused Jul 10 01:12:50.848195 kubelet[1926]: E0710 01:12:50.848195 1926 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://139.178.70.102:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 139.178.70.102:6443: connect: connection refused" logger="UnhandledError" Jul 10 01:12:50.870300 kubelet[1926]: I0710 01:12:50.870069 1926 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jul 10 01:12:50.870300 kubelet[1926]: E0710 01:12:50.870241 1926 
kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://139.178.70.102:6443/api/v1/nodes\": dial tcp 139.178.70.102:6443: connect: connection refused" node="localhost" Jul 10 01:12:51.316461 kubelet[1926]: E0710 01:12:51.316431 1926 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://139.178.70.102:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 139.178.70.102:6443: connect: connection refused" logger="UnhandledError" Jul 10 01:12:51.344264 systemd[1]: run-containerd-runc-k8s.io-38c6fe2ffb7701339c0787fc0145f3c27d488400622b32132d0a646d4a55bb9b-runc.uCuC1L.mount: Deactivated successfully. Jul 10 01:12:51.762730 kubelet[1926]: W0710 01:12:51.762622 1926 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://139.178.70.102:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 139.178.70.102:6443: connect: connection refused Jul 10 01:12:51.763107 kubelet[1926]: E0710 01:12:51.763090 1926 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://139.178.70.102:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 139.178.70.102:6443: connect: connection refused" logger="UnhandledError" Jul 10 01:12:52.335696 kubelet[1926]: E0710 01:12:52.335668 1926 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.102:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 139.178.70.102:6443: connect: connection refused" interval="3.2s" Jul 10 01:12:52.472071 kubelet[1926]: I0710 01:12:52.472054 1926 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jul 10 01:12:52.472391 kubelet[1926]: E0710 01:12:52.472369 1926 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://139.178.70.102:6443/api/v1/nodes\": dial tcp 139.178.70.102:6443: connect: connection refused" node="localhost" Jul 10 01:12:52.873752 kubelet[1926]: W0710 01:12:52.873711 1926 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://139.178.70.102:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 139.178.70.102:6443: connect: connection refused Jul 10 01:12:52.874061 kubelet[1926]: E0710 01:12:52.874046 1926 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://139.178.70.102:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 139.178.70.102:6443: connect: connection refused" logger="UnhandledError" Jul 10 01:12:53.134131 kubelet[1926]: W0710 01:12:53.134056 1926 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://139.178.70.102:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 139.178.70.102:6443: connect: connection refused Jul 10 01:12:53.134265 kubelet[1926]: E0710 01:12:53.134248 1926 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get 
\"https://139.178.70.102:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 139.178.70.102:6443: connect: connection refused" logger="UnhandledError" Jul 10 01:12:54.887379 kubelet[1926]: E0710 01:12:54.887353 1926 csi_plugin.go:305] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "localhost" not found Jul 10 01:12:55.227907 kubelet[1926]: E0710 01:12:55.227838 1926 csi_plugin.go:305] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "localhost" not found Jul 10 01:12:55.539317 kubelet[1926]: E0710 01:12:55.539283 1926 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Jul 10 01:12:55.667283 kubelet[1926]: E0710 01:12:55.667264 1926 csi_plugin.go:305] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "localhost" not found Jul 10 01:12:55.673470 kubelet[1926]: I0710 01:12:55.673455 1926 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jul 10 01:12:55.690651 kubelet[1926]: I0710 01:12:55.690620 1926 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Jul 10 01:12:55.690799 kubelet[1926]: E0710 01:12:55.690788 1926 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Jul 10 01:12:55.696573 kubelet[1926]: E0710 01:12:55.696546 1926 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 10 01:12:55.797064 kubelet[1926]: E0710 01:12:55.796977 1926 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 10 01:12:55.897735 kubelet[1926]: E0710 01:12:55.897699 1926 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 10 01:12:55.998228 kubelet[1926]: E0710 01:12:55.998197 1926 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 10 01:12:56.098923 kubelet[1926]: E0710 01:12:56.098845 1926 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 10 01:12:56.199951 kubelet[1926]: E0710 01:12:56.199915 1926 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 10 01:12:56.300963 kubelet[1926]: E0710 01:12:56.300933 1926 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 10 01:12:56.401552 kubelet[1926]: E0710 01:12:56.401467 1926 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 10 01:12:56.501986 kubelet[1926]: E0710 01:12:56.501960 1926 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 10 01:12:56.602473 kubelet[1926]: E0710 01:12:56.602447 1926 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 10 01:12:56.703448 kubelet[1926]: E0710 01:12:56.703372 1926 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 10 01:12:56.729989 systemd[1]: Reloading. 
Jul 10 01:12:56.804062 kubelet[1926]: E0710 01:12:56.804034 1926 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 10 01:12:56.810224 /usr/lib/systemd/system-generators/torcx-generator[2224]: time="2025-07-10T01:12:56Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]" Jul 10 01:12:56.810242 /usr/lib/systemd/system-generators/torcx-generator[2224]: time="2025-07-10T01:12:56Z" level=info msg="torcx already run" Jul 10 01:12:56.851428 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Jul 10 01:12:56.851562 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Jul 10 01:12:56.867636 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 10 01:12:56.935422 systemd[1]: Stopping kubelet.service... Jul 10 01:12:56.946952 systemd[1]: kubelet.service: Deactivated successfully. Jul 10 01:12:56.947183 systemd[1]: Stopped kubelet.service. Jul 10 01:12:56.949247 kernel: kauditd_printk_skb: 43 callbacks suppressed Jul 10 01:12:56.949293 kernel: audit: type=1131 audit(1752109976.945:224): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 01:12:56.945000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 01:12:56.952361 systemd[1]: Starting kubelet.service... Jul 10 01:13:00.080300 systemd[1]: Started kubelet.service. Jul 10 01:13:00.078000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 01:13:00.088917 kernel: audit: type=1130 audit(1752109980.078:225): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 01:13:00.199704 kubelet[2299]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 10 01:13:00.199704 kubelet[2299]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jul 10 01:13:00.199704 kubelet[2299]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jul 10 01:13:00.200003 kubelet[2299]: I0710 01:13:00.199739 2299 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 10 01:13:00.236600 kubelet[2299]: I0710 01:13:00.236580 2299 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Jul 10 01:13:00.236718 kubelet[2299]: I0710 01:13:00.236709 2299 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 10 01:13:00.236935 kubelet[2299]: I0710 01:13:00.236926 2299 server.go:934] "Client rotation is on, will bootstrap in background" Jul 10 01:13:00.238540 kubelet[2299]: I0710 01:13:00.238529 2299 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jul 10 01:13:00.365217 kubelet[2299]: I0710 01:13:00.364768 2299 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 10 01:13:00.366799 kubelet[2299]: E0710 01:13:00.366778 2299 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jul 10 01:13:00.366799 kubelet[2299]: I0710 01:13:00.366798 2299 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jul 10 01:13:00.371363 kubelet[2299]: I0710 01:13:00.371337 2299 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jul 10 01:13:00.387089 kubelet[2299]: I0710 01:13:00.387055 2299 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jul 10 01:13:00.387545 kubelet[2299]: I0710 01:13:00.387515 2299 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 10 01:13:00.387881 kubelet[2299]: I0710 01:13:00.387620 2299 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" 
nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1} Jul 10 01:13:00.387881 kubelet[2299]: I0710 01:13:00.387865 2299 topology_manager.go:138] "Creating topology manager with none policy" Jul 10 01:13:00.387881 kubelet[2299]: I0710 01:13:00.387873 2299 container_manager_linux.go:300] "Creating device plugin manager" Jul 10 01:13:00.388009 kubelet[2299]: I0710 01:13:00.387900 2299 state_mem.go:36] "Initialized new in-memory state store" Jul 10 01:13:00.388009 kubelet[2299]: I0710 01:13:00.387977 2299 kubelet.go:408] "Attempting to sync node with API server" Jul 10 01:13:00.388009 kubelet[2299]: I0710 01:13:00.387986 2299 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 10 01:13:00.388009 kubelet[2299]: I0710 01:13:00.388004 2299 kubelet.go:314] "Adding apiserver pod source" Jul 10 01:13:00.388091 kubelet[2299]: I0710 01:13:00.388013 2299 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 10 01:13:00.388728 kubelet[2299]: I0710 01:13:00.388714 2299 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Jul 10 01:13:00.389018 kubelet[2299]: I0710 01:13:00.388999 2299 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 10 01:13:00.389694 kubelet[2299]: I0710 01:13:00.389486 2299 server.go:1274] "Started kubelet" Jul 10 01:13:00.391000 audit[2299]: AVC avc: denied { mac_admin } for pid=2299 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:13:00.391000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Jul 10 01:13:00.396031 kubelet[2299]: I0710 01:13:00.396010 2299 kubelet.go:1430] "Unprivileged containerized plugins might not work, could not set selinux context on plugin registration dir" path="/var/lib/kubelet/plugins_registry" err="setxattr /var/lib/kubelet/plugins_registry: invalid argument" Jul 10 01:13:00.396124 kubelet[2299]: I0710 01:13:00.396110 
2299 kubelet.go:1434] "Unprivileged containerized plugins might not work, could not set selinux context on plugins dir" path="/var/lib/kubelet/plugins" err="setxattr /var/lib/kubelet/plugins: invalid argument" Jul 10 01:13:00.396189 kubelet[2299]: I0710 01:13:00.396181 2299 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 10 01:13:00.396883 kernel: audit: type=1400 audit(1752109980.391:226): avc: denied { mac_admin } for pid=2299 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:13:00.396939 kernel: audit: type=1401 audit(1752109980.391:226): op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Jul 10 01:13:00.396955 kernel: audit: type=1300 audit(1752109980.391:226): arch=c000003e syscall=188 success=no exit=-22 a0=c000acd7d0 a1=c000abca68 a2=c000acd7a0 a3=25 items=0 ppid=1 pid=2299 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:13:00.391000 audit[2299]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c000acd7d0 a1=c000abca68 a2=c000acd7a0 a3=25 items=0 ppid=1 pid=2299 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:13:00.391000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Jul 10 01:13:00.403859 kernel: audit: type=1327 audit(1752109980.391:226): proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Jul 10 01:13:00.404736 kernel: audit: type=1400 audit(1752109980.394:227): avc: denied { mac_admin } for pid=2299 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:13:00.394000 audit[2299]: AVC avc: denied { mac_admin } for pid=2299 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:13:00.406801 kernel: audit: type=1401 audit(1752109980.394:227): op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Jul 10 01:13:00.394000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Jul 10 01:13:00.394000 audit[2299]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c000af8880 a1=c000abca80 a2=c000acd860 a3=25 items=0 ppid=1 pid=2299 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:13:00.411338 kernel: audit: type=1300 audit(1752109980.394:227): arch=c000003e syscall=188 success=no exit=-22 a0=c000af8880 a1=c000abca80 a2=c000acd860 a3=25 items=0 ppid=1 pid=2299 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 
01:13:00.412469 kernel: audit: type=1327 audit(1752109980.394:227): proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Jul 10 01:13:00.394000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Jul 10 01:13:00.436208 kubelet[2299]: I0710 01:13:00.436182 2299 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jul 10 01:13:00.437033 kubelet[2299]: I0710 01:13:00.437019 2299 server.go:449] "Adding debug handlers to kubelet server" Jul 10 01:13:00.437873 kubelet[2299]: I0710 01:13:00.437852 2299 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 10 01:13:00.438502 kubelet[2299]: I0710 01:13:00.437978 2299 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 10 01:13:00.438502 kubelet[2299]: I0710 01:13:00.438166 2299 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 10 01:13:00.486580 kubelet[2299]: I0710 01:13:00.486565 2299 volume_manager.go:289] "Starting Kubelet Volume Manager" Jul 10 01:13:00.487455 kubelet[2299]: I0710 01:13:00.487447 2299 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Jul 10 01:13:00.487700 kubelet[2299]: E0710 01:13:00.486981 2299 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 10 01:13:00.487807 kubelet[2299]: I0710 01:13:00.487800 2299 reconciler.go:26] "Reconciler: start to sync state" Jul 10 01:13:00.488144 kubelet[2299]: I0710 01:13:00.488134 2299 factory.go:221] Registration of the systemd container factory successfully Jul 10 01:13:00.488241 kubelet[2299]: I0710 01:13:00.488230 2299 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 10 01:13:00.488975 kubelet[2299]: I0710 01:13:00.488968 2299 factory.go:221] Registration of the containerd container factory successfully Jul 10 01:13:00.493959 kubelet[2299]: I0710 01:13:00.492686 2299 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 10 01:13:00.493959 kubelet[2299]: I0710 01:13:00.493410 2299 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jul 10 01:13:00.493959 kubelet[2299]: I0710 01:13:00.493977 2299 status_manager.go:217] "Starting to sync pod status with apiserver" Jul 10 01:13:00.493959 kubelet[2299]: I0710 01:13:00.493991 2299 kubelet.go:2321] "Starting kubelet main sync loop" Jul 10 01:13:00.494121 kubelet[2299]: E0710 01:13:00.494023 2299 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 10 01:13:00.565760 kubelet[2299]: I0710 01:13:00.565737 2299 cpu_manager.go:214] "Starting CPU manager" policy="none" Jul 10 01:13:00.565760 kubelet[2299]: I0710 01:13:00.565753 2299 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jul 10 01:13:00.566577 kubelet[2299]: I0710 01:13:00.565768 2299 state_mem.go:36] "Initialized new in-memory state store" Jul 10 01:13:00.566577 kubelet[2299]: I0710 01:13:00.565930 2299 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jul 10 01:13:00.566577 kubelet[2299]: I0710 01:13:00.565941 2299 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jul 10 01:13:00.566577 kubelet[2299]: I0710 01:13:00.565958 2299 policy_none.go:49] "None policy: Start" Jul 10 01:13:00.566577 kubelet[2299]: I0710 01:13:00.566301 2299 memory_manager.go:170] "Starting memorymanager" policy="None" Jul 10 01:13:00.566577 kubelet[2299]: I0710 01:13:00.566313 2299 state_mem.go:35] "Initializing new in-memory state store" Jul 10 01:13:00.566577 kubelet[2299]: I0710 01:13:00.566414 2299 state_mem.go:75] "Updated machine memory state" Jul 10 01:13:00.567184 kubelet[2299]: I0710 01:13:00.567169 2299 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 10 01:13:00.565000 audit[2299]: AVC avc: denied { mac_admin } for pid=2299 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:13:00.565000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Jul 10 01:13:00.565000 audit[2299]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c000e6eae0 a1=c00082f7e8 a2=c000e6eab0 a3=25 items=0 ppid=1 pid=2299 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:13:00.565000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Jul 10 01:13:00.568021 kubelet[2299]: I0710 01:13:00.567958 2299 server.go:88] "Unprivileged containerized plugins might not work. 
Could not set selinux context on socket dir" path="/var/lib/kubelet/device-plugins/" err="setxattr /var/lib/kubelet/device-plugins/: invalid argument" Jul 10 01:13:00.568085 kubelet[2299]: I0710 01:13:00.568072 2299 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 10 01:13:00.568112 kubelet[2299]: I0710 01:13:00.568090 2299 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 10 01:13:00.570698 kubelet[2299]: I0710 01:13:00.570689 2299 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 10 01:13:00.673776 kubelet[2299]: I0710 01:13:00.673715 2299 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jul 10 01:13:00.685542 kubelet[2299]: I0710 01:13:00.685523 2299 kubelet_node_status.go:111] "Node was previously registered" node="localhost" Jul 10 01:13:00.685695 kubelet[2299]: I0710 01:13:00.685688 2299 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Jul 10 01:13:00.789222 kubelet[2299]: I0710 01:13:00.789197 2299 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b35b56493416c25588cb530e37ffc065-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"b35b56493416c25588cb530e37ffc065\") " pod="kube-system/kube-scheduler-localhost" Jul 10 01:13:00.789222 kubelet[2299]: I0710 01:13:00.789224 2299 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8acd60714a0f0f6f5e038fa659db2909-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"8acd60714a0f0f6f5e038fa659db2909\") " pod="kube-system/kube-apiserver-localhost" Jul 10 01:13:00.789355 kubelet[2299]: I0710 01:13:00.789241 2299 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 10 01:13:00.789355 kubelet[2299]: I0710 01:13:00.789262 2299 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 10 01:13:00.789355 kubelet[2299]: I0710 01:13:00.789275 2299 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 10 01:13:00.789355 kubelet[2299]: I0710 01:13:00.789290 2299 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 10 01:13:00.789355 kubelet[2299]: I0710 01:13:00.789305 2299 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 10 01:13:00.789452 kubelet[2299]: I0710 01:13:00.789317 2299 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8acd60714a0f0f6f5e038fa659db2909-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"8acd60714a0f0f6f5e038fa659db2909\") " pod="kube-system/kube-apiserver-localhost" Jul 10 01:13:00.789452 kubelet[2299]: I0710 01:13:00.789346 2299 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8acd60714a0f0f6f5e038fa659db2909-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"8acd60714a0f0f6f5e038fa659db2909\") " pod="kube-system/kube-apiserver-localhost" Jul 10 01:13:00.795882 kubelet[2299]: I0710 01:13:00.795867 2299 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jul 10 01:13:00.796207 env[1363]: time="2025-07-10T01:13:00.796161649Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jul 10 01:13:00.796453 kubelet[2299]: I0710 01:13:00.796443 2299 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jul 10 01:13:01.501650 kubelet[2299]: I0710 01:13:01.501580 2299 apiserver.go:52] "Watching apiserver" Jul 10 01:13:01.550307 kubelet[2299]: E0710 01:13:01.550115 2299 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Jul 10 01:13:01.560488 kubelet[2299]: I0710 01:13:01.560450 2299 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.560435772 podStartE2EDuration="1.560435772s" podCreationTimestamp="2025-07-10 01:13:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-10 01:13:01.556531211 +0000 UTC m=+1.424958095" watchObservedRunningTime="2025-07-10 01:13:01.560435772 +0000 UTC m=+1.428862657" Jul 10 01:13:01.566738 kubelet[2299]: I0710 01:13:01.566695 2299 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.566682242 podStartE2EDuration="1.566682242s" podCreationTimestamp="2025-07-10 01:13:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-10 01:13:01.560779835 +0000 UTC m=+1.429206719" watchObservedRunningTime="2025-07-10 01:13:01.566682242 +0000 UTC m=+1.435109129" Jul 10 01:13:01.575865 kubelet[2299]: I0710 01:13:01.575829 2299 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.575818771 podStartE2EDuration="1.575818771s" podCreationTimestamp="2025-07-10 01:13:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-10 01:13:01.567232192 +0000 UTC m=+1.435659075" watchObservedRunningTime="2025-07-10 01:13:01.575818771 +0000 UTC m=+1.444245658" Jul 10 
01:13:01.588026 kubelet[2299]: I0710 01:13:01.588003 2299 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Jul 10 01:13:01.895412 kubelet[2299]: W0710 01:13:01.895388 2299 reflector.go:561] object-"tigera-operator"/"kubernetes-services-endpoint": failed to list *v1.ConfigMap: configmaps "kubernetes-services-endpoint" is forbidden: User "system:node:localhost" cannot list resource "configmaps" in API group "" in the namespace "tigera-operator": no relationship found between node 'localhost' and this object Jul 10 01:13:01.895573 kubelet[2299]: E0710 01:13:01.895556 2299 reflector.go:158] "Unhandled Error" err="object-\"tigera-operator\"/\"kubernetes-services-endpoint\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kubernetes-services-endpoint\" is forbidden: User \"system:node:localhost\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"tigera-operator\": no relationship found between node 'localhost' and this object" logger="UnhandledError" Jul 10 01:13:01.895794 kubelet[2299]: W0710 01:13:01.895773 2299 reflector.go:561] object-"tigera-operator"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:localhost" cannot list resource "configmaps" in API group "" in the namespace "tigera-operator": no relationship found between node 'localhost' and this object Jul 10 01:13:01.895868 kubelet[2299]: E0710 01:13:01.895847 2299 reflector.go:158] "Unhandled Error" err="object-\"tigera-operator\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:localhost\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"tigera-operator\": no relationship found between node 'localhost' and this object" logger="UnhandledError" Jul 10 01:13:01.896196 kubelet[2299]: I0710 01:13:01.896180 2299 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/22eb6a01-1430-4380-b1df-6cb2ed0c8d8b-kube-proxy\") pod \"kube-proxy-rxvps\" (UID: \"22eb6a01-1430-4380-b1df-6cb2ed0c8d8b\") " pod="kube-system/kube-proxy-rxvps" Jul 10 01:13:01.896236 kubelet[2299]: I0710 01:13:01.896203 2299 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/22eb6a01-1430-4380-b1df-6cb2ed0c8d8b-xtables-lock\") pod \"kube-proxy-rxvps\" (UID: \"22eb6a01-1430-4380-b1df-6cb2ed0c8d8b\") " pod="kube-system/kube-proxy-rxvps" Jul 10 01:13:01.896236 kubelet[2299]: I0710 01:13:01.896219 2299 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/22eb6a01-1430-4380-b1df-6cb2ed0c8d8b-lib-modules\") pod \"kube-proxy-rxvps\" (UID: \"22eb6a01-1430-4380-b1df-6cb2ed0c8d8b\") " pod="kube-system/kube-proxy-rxvps" Jul 10 01:13:01.896236 kubelet[2299]: I0710 01:13:01.896232 2299 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wpcvh\" (UniqueName: \"kubernetes.io/projected/22eb6a01-1430-4380-b1df-6cb2ed0c8d8b-kube-api-access-wpcvh\") pod \"kube-proxy-rxvps\" (UID: \"22eb6a01-1430-4380-b1df-6cb2ed0c8d8b\") " pod="kube-system/kube-proxy-rxvps" Jul 10 01:13:01.996734 kubelet[2299]: I0710 01:13:01.996705 2299 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/9c135a1b-00bf-4e6f-87fa-9ac292c6a135-var-lib-calico\") pod \"tigera-operator-5bf8dfcb4-twgs2\" (UID: \"9c135a1b-00bf-4e6f-87fa-9ac292c6a135\") " pod="tigera-operator/tigera-operator-5bf8dfcb4-twgs2" Jul 10 01:13:01.997033 kubelet[2299]: I0710 01:13:01.997022 2299 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mj7k8\" (UniqueName: \"kubernetes.io/projected/9c135a1b-00bf-4e6f-87fa-9ac292c6a135-kube-api-access-mj7k8\") pod \"tigera-operator-5bf8dfcb4-twgs2\" (UID: \"9c135a1b-00bf-4e6f-87fa-9ac292c6a135\") " pod="tigera-operator/tigera-operator-5bf8dfcb4-twgs2" Jul 10 01:13:02.001073 kubelet[2299]: I0710 01:13:02.001047 2299 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Jul 10 01:13:02.151816 env[1363]: time="2025-07-10T01:13:02.151743982Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-rxvps,Uid:22eb6a01-1430-4380-b1df-6cb2ed0c8d8b,Namespace:kube-system,Attempt:0,}" Jul 10 01:13:02.166518 env[1363]: time="2025-07-10T01:13:02.166472217Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 10 01:13:02.166618 env[1363]: time="2025-07-10T01:13:02.166501732Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 10 01:13:02.166618 env[1363]: time="2025-07-10T01:13:02.166510495Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 10 01:13:02.166794 env[1363]: time="2025-07-10T01:13:02.166725903Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/8a44d4058084a14aa6d14f37348deb80ee3f8422fd6e9cf19cea79e1410fb538 pid=2348 runtime=io.containerd.runc.v2 Jul 10 01:13:02.198020 env[1363]: time="2025-07-10T01:13:02.197897198Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-rxvps,Uid:22eb6a01-1430-4380-b1df-6cb2ed0c8d8b,Namespace:kube-system,Attempt:0,} returns sandbox id \"8a44d4058084a14aa6d14f37348deb80ee3f8422fd6e9cf19cea79e1410fb538\"" Jul 10 01:13:02.200433 env[1363]: time="2025-07-10T01:13:02.200397958Z" level=info msg="CreateContainer within sandbox \"8a44d4058084a14aa6d14f37348deb80ee3f8422fd6e9cf19cea79e1410fb538\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jul 10 01:13:02.214800 env[1363]: time="2025-07-10T01:13:02.214751379Z" level=info msg="CreateContainer within sandbox \"8a44d4058084a14aa6d14f37348deb80ee3f8422fd6e9cf19cea79e1410fb538\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"aac295df51af4fbd3b447eea929508f8865ae15f9b6a12f078e5ecd5456e2160\"" Jul 10 01:13:02.215340 env[1363]: time="2025-07-10T01:13:02.215325730Z" level=info msg="StartContainer for \"aac295df51af4fbd3b447eea929508f8865ae15f9b6a12f078e5ecd5456e2160\"" Jul 10 01:13:02.251043 env[1363]: time="2025-07-10T01:13:02.251010438Z" level=info msg="StartContainer for \"aac295df51af4fbd3b447eea929508f8865ae15f9b6a12f078e5ecd5456e2160\" returns successfully" Jul 10 01:13:03.007750 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3753486834.mount: Deactivated successfully. 
Jul 10 01:13:03.103426 kubelet[2299]: E0710 01:13:03.103397 2299 projected.go:288] Couldn't get configMap tigera-operator/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Jul 10 01:13:03.103716 kubelet[2299]: E0710 01:13:03.103705 2299 projected.go:194] Error preparing data for projected volume kube-api-access-mj7k8 for pod tigera-operator/tigera-operator-5bf8dfcb4-twgs2: failed to sync configmap cache: timed out waiting for the condition Jul 10 01:13:03.103822 kubelet[2299]: E0710 01:13:03.103811 2299 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9c135a1b-00bf-4e6f-87fa-9ac292c6a135-kube-api-access-mj7k8 podName:9c135a1b-00bf-4e6f-87fa-9ac292c6a135 nodeName:}" failed. No retries permitted until 2025-07-10 01:13:03.603793105 +0000 UTC m=+3.472219984 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-mj7k8" (UniqueName: "kubernetes.io/projected/9c135a1b-00bf-4e6f-87fa-9ac292c6a135-kube-api-access-mj7k8") pod "tigera-operator-5bf8dfcb4-twgs2" (UID: "9c135a1b-00bf-4e6f-87fa-9ac292c6a135") : failed to sync configmap cache: timed out waiting for the condition Jul 10 01:13:03.342594 kubelet[2299]: I0710 01:13:03.342560 2299 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-rxvps" podStartSLOduration=2.342547922 podStartE2EDuration="2.342547922s" podCreationTimestamp="2025-07-10 01:13:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-10 01:13:02.551275963 +0000 UTC m=+2.419702849" watchObservedRunningTime="2025-07-10 01:13:03.342547922 +0000 UTC m=+3.210974814" Jul 10 01:13:03.570265 kernel: kauditd_printk_skb: 4 callbacks suppressed Jul 10 01:13:03.570353 kernel: audit: type=1325 audit(1752109983.566:229): table=mangle:38 family=2 entries=1 op=nft_register_chain pid=2448 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 10 01:13:03.566000 audit[2448]: NETFILTER_CFG table=mangle:38 family=2 entries=1 op=nft_register_chain pid=2448 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 10 01:13:03.574344 kernel: audit: type=1300 audit(1752109983.566:229): arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffd5fb4e3e0 a2=0 a3=7ffd5fb4e3cc items=0 ppid=2398 pid=2448 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:13:03.566000 audit[2448]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffd5fb4e3e0 a2=0 a3=7ffd5fb4e3cc items=0 ppid=2398 pid=2448 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:13:03.566000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Jul 10 01:13:03.576433 kernel: audit: type=1327 audit(1752109983.566:229): proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Jul 10 01:13:03.576463 kernel: audit: type=1325 audit(1752109983.566:230): table=nat:39 family=2 entries=1 op=nft_register_chain pid=2449 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 10 01:13:03.566000 audit[2449]: NETFILTER_CFG table=nat:39 family=2 entries=1 op=nft_register_chain 
pid=2449 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 10 01:13:03.566000 audit[2449]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd339fd7b0 a2=0 a3=7ffd339fd79c items=0 ppid=2398 pid=2449 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:13:03.582104 kernel: audit: type=1300 audit(1752109983.566:230): arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd339fd7b0 a2=0 a3=7ffd339fd79c items=0 ppid=2398 pid=2449 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:13:03.582148 kernel: audit: type=1327 audit(1752109983.566:230): proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Jul 10 01:13:03.566000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Jul 10 01:13:03.566000 audit[2450]: NETFILTER_CFG table=filter:40 family=2 entries=1 op=nft_register_chain pid=2450 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 10 01:13:03.585805 kernel: audit: type=1325 audit(1752109983.566:231): table=filter:40 family=2 entries=1 op=nft_register_chain pid=2450 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 10 01:13:03.585835 kernel: audit: type=1300 audit(1752109983.566:231): arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe84a16fa0 a2=0 a3=7ffe84a16f8c items=0 ppid=2398 pid=2450 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:13:03.566000 audit[2450]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe84a16fa0 a2=0 a3=7ffe84a16f8c items=0 ppid=2398 pid=2450 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:13:03.566000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Jul 10 01:13:03.591597 kernel: audit: type=1327 audit(1752109983.566:231): proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Jul 10 01:13:03.566000 audit[2451]: NETFILTER_CFG table=mangle:41 family=10 entries=1 op=nft_register_chain pid=2451 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 10 01:13:03.566000 audit[2451]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffccc523bd0 a2=0 a3=7ffccc523bbc items=0 ppid=2398 pid=2451 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:13:03.594672 kernel: audit: type=1325 audit(1752109983.566:232): table=mangle:41 family=10 entries=1 op=nft_register_chain pid=2451 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 10 01:13:03.566000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Jul 10 01:13:03.566000 audit[2452]: NETFILTER_CFG table=nat:42 family=10 entries=1 
op=nft_register_chain pid=2452 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 10 01:13:03.566000 audit[2452]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffcdb18b430 a2=0 a3=7ffcdb18b41c items=0 ppid=2398 pid=2452 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:13:03.566000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Jul 10 01:13:03.569000 audit[2453]: NETFILTER_CFG table=filter:43 family=10 entries=1 op=nft_register_chain pid=2453 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 10 01:13:03.569000 audit[2453]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe86daf7c0 a2=0 a3=7ffe86daf7ac items=0 ppid=2398 pid=2453 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:13:03.569000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Jul 10 01:13:03.695356 env[1363]: time="2025-07-10T01:13:03.695314887Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5bf8dfcb4-twgs2,Uid:9c135a1b-00bf-4e6f-87fa-9ac292c6a135,Namespace:tigera-operator,Attempt:0,}" Jul 10 01:13:03.703000 audit[2456]: NETFILTER_CFG table=filter:44 family=2 entries=1 op=nft_register_chain pid=2456 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 10 01:13:03.703000 audit[2456]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7ffe7e534a60 a2=0 a3=7ffe7e534a4c items=0 ppid=2398 pid=2456 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:13:03.703000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Jul 10 01:13:03.709525 env[1363]: time="2025-07-10T01:13:03.709470545Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 10 01:13:03.709661 env[1363]: time="2025-07-10T01:13:03.709628980Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 10 01:13:03.709752 env[1363]: time="2025-07-10T01:13:03.709734122Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 10 01:13:03.709961 env[1363]: time="2025-07-10T01:13:03.709929502Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/01443d9289a0bbe23feae26cd6280fa2fd433168d943a8fed752302c7264f2ab pid=2465 runtime=io.containerd.runc.v2 Jul 10 01:13:03.712000 audit[2476]: NETFILTER_CFG table=filter:45 family=2 entries=1 op=nft_register_rule pid=2476 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 10 01:13:03.712000 audit[2476]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7fffdd13d050 a2=0 a3=7fffdd13d03c items=0 ppid=2398 pid=2476 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:13:03.712000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276696365 Jul 10 01:13:03.723000 audit[2487]: NETFILTER_CFG table=filter:46 family=2 entries=1 op=nft_register_rule pid=2487 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 10 01:13:03.723000 audit[2487]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7fff79639d90 a2=0 a3=7fff79639d7c items=0 ppid=2398 pid=2487 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:13:03.723000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669 Jul 10 01:13:03.728000 audit[2488]: NETFILTER_CFG table=filter:47 family=2 entries=1 op=nft_register_chain pid=2488 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 10 01:13:03.728000 audit[2488]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe995a8e70 a2=0 a3=7ffe995a8e5c items=0 ppid=2398 pid=2488 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:13:03.728000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Jul 10 01:13:03.731000 audit[2492]: NETFILTER_CFG table=filter:48 family=2 entries=1 op=nft_register_rule pid=2492 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 10 01:13:03.731000 audit[2492]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffe79a15e50 a2=0 a3=7ffe79a15e3c items=0 ppid=2398 pid=2492 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:13:03.731000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Jul 10 01:13:03.732000 audit[2497]: NETFILTER_CFG table=filter:49 family=2 
entries=1 op=nft_register_chain pid=2497 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 10 01:13:03.732000 audit[2497]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe4fa4de80 a2=0 a3=7ffe4fa4de6c items=0 ppid=2398 pid=2497 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:13:03.732000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Jul 10 01:13:03.734000 audit[2500]: NETFILTER_CFG table=filter:50 family=2 entries=1 op=nft_register_rule pid=2500 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 10 01:13:03.734000 audit[2500]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffe980fff40 a2=0 a3=7ffe980fff2c items=0 ppid=2398 pid=2500 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:13:03.734000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Jul 10 01:13:03.738000 audit[2503]: NETFILTER_CFG table=filter:51 family=2 entries=1 op=nft_register_rule pid=2503 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 10 01:13:03.738000 audit[2503]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7fff4408c1e0 a2=0 a3=7fff4408c1cc items=0 ppid=2398 pid=2503 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:13:03.738000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D53 Jul 10 01:13:03.740000 audit[2504]: NETFILTER_CFG table=filter:52 family=2 entries=1 op=nft_register_chain pid=2504 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 10 01:13:03.740000 audit[2504]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd6c9750f0 a2=0 a3=7ffd6c9750dc items=0 ppid=2398 pid=2504 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:13:03.740000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Jul 10 01:13:03.743000 audit[2506]: NETFILTER_CFG table=filter:53 family=2 entries=1 op=nft_register_rule pid=2506 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 10 01:13:03.743000 audit[2506]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7fffba10d4d0 a2=0 a3=7fffba10d4bc items=0 ppid=2398 pid=2506 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:13:03.743000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Jul 10 01:13:03.744000 audit[2507]: NETFILTER_CFG table=filter:54 family=2 entries=1 op=nft_register_chain pid=2507 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 10 01:13:03.744000 audit[2507]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe361594b0 a2=0 a3=7ffe3615949c items=0 ppid=2398 pid=2507 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:13:03.744000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Jul 10 01:13:03.747000 audit[2509]: NETFILTER_CFG table=filter:55 family=2 entries=1 op=nft_register_rule pid=2509 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 10 01:13:03.747000 audit[2509]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7fffb32cbbc0 a2=0 a3=7fffb32cbbac items=0 ppid=2398 pid=2509 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:13:03.747000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Jul 10 01:13:03.752000 audit[2512]: NETFILTER_CFG table=filter:56 family=2 entries=1 op=nft_register_rule pid=2512 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 10 01:13:03.752000 audit[2512]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffe60a82890 a2=0 a3=7ffe60a8287c items=0 ppid=2398 pid=2512 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:13:03.752000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Jul 10 01:13:03.759000 audit[2518]: NETFILTER_CFG table=filter:57 family=2 entries=1 op=nft_register_rule pid=2518 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 10 01:13:03.759000 audit[2518]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffce2c13130 a2=0 a3=7ffce2c1311c items=0 ppid=2398 pid=2518 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:13:03.759000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Jul 10 01:13:03.760000 audit[2524]: NETFILTER_CFG table=nat:58 family=2 entries=1 op=nft_register_chain pid=2524 subj=system_u:system_r:kernel_t:s0 comm="iptables" 
Jul 10 01:13:03.760000 audit[2524]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7fffea307d00 a2=0 a3=7fffea307cec items=0 ppid=2398 pid=2524 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:13:03.760000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Jul 10 01:13:03.763510 env[1363]: time="2025-07-10T01:13:03.763485569Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5bf8dfcb4-twgs2,Uid:9c135a1b-00bf-4e6f-87fa-9ac292c6a135,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"01443d9289a0bbe23feae26cd6280fa2fd433168d943a8fed752302c7264f2ab\"" Jul 10 01:13:03.764000 audit[2526]: NETFILTER_CFG table=nat:59 family=2 entries=1 op=nft_register_rule pid=2526 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 10 01:13:03.764000 audit[2526]: SYSCALL arch=c000003e syscall=46 success=yes exit=524 a0=3 a1=7ffe9e8b8da0 a2=0 a3=7ffe9e8b8d8c items=0 ppid=2398 pid=2526 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:13:03.764000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jul 10 01:13:03.766000 audit[2529]: NETFILTER_CFG table=nat:60 family=2 entries=1 op=nft_register_rule pid=2529 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 10 01:13:03.766000 audit[2529]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffd6db8fb30 a2=0 a3=7ffd6db8fb1c items=0 ppid=2398 pid=2529 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:13:03.766000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jul 10 01:13:03.767000 audit[2530]: NETFILTER_CFG table=nat:61 family=2 entries=1 op=nft_register_chain pid=2530 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 10 01:13:03.767000 audit[2530]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe12f36090 a2=0 a3=7ffe12f3607c items=0 ppid=2398 pid=2530 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:13:03.767000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Jul 10 01:13:03.769240 env[1363]: time="2025-07-10T01:13:03.769223490Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\"" Jul 10 01:13:03.769000 audit[2532]: NETFILTER_CFG table=nat:62 family=2 entries=1 op=nft_register_rule pid=2532 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jul 10 01:13:03.769000 audit[2532]: SYSCALL arch=c000003e syscall=46 success=yes exit=532 a0=3 a1=7ffecd78b0d0 a2=0 a3=7ffecd78b0bc items=0 ppid=2398 pid=2532 auid=4294967295 uid=0 gid=0 euid=0 
suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:13:03.769000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Jul 10 01:13:03.806000 audit[2538]: NETFILTER_CFG table=filter:63 family=2 entries=8 op=nft_register_rule pid=2538 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 10 01:13:03.806000 audit[2538]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffd74feea50 a2=0 a3=7ffd74feea3c items=0 ppid=2398 pid=2538 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:13:03.806000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 10 01:13:03.816000 audit[2538]: NETFILTER_CFG table=nat:64 family=2 entries=14 op=nft_register_chain pid=2538 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 10 01:13:03.816000 audit[2538]: SYSCALL arch=c000003e syscall=46 success=yes exit=5508 a0=3 a1=7ffd74feea50 a2=0 a3=7ffd74feea3c items=0 ppid=2398 pid=2538 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:13:03.816000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 10 01:13:03.817000 audit[2543]: NETFILTER_CFG table=filter:65 family=10 entries=1 op=nft_register_chain pid=2543 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 10 01:13:03.817000 audit[2543]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7ffd0dac8260 a2=0 a3=7ffd0dac824c items=0 ppid=2398 pid=2543 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:13:03.817000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Jul 10 01:13:03.818000 audit[2545]: NETFILTER_CFG table=filter:66 family=10 entries=2 op=nft_register_chain pid=2545 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 10 01:13:03.818000 audit[2545]: SYSCALL arch=c000003e syscall=46 success=yes exit=836 a0=3 a1=7ffd946e4380 a2=0 a3=7ffd946e436c items=0 ppid=2398 pid=2545 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:13:03.818000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C6520736572766963 Jul 10 01:13:03.821000 audit[2548]: NETFILTER_CFG table=filter:67 family=10 entries=2 op=nft_register_chain pid=2548 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 10 01:13:03.821000 audit[2548]: SYSCALL 
arch=c000003e syscall=46 success=yes exit=836 a0=3 a1=7ffe5fb4f230 a2=0 a3=7ffe5fb4f21c items=0 ppid=2398 pid=2548 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:13:03.821000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276 Jul 10 01:13:03.822000 audit[2549]: NETFILTER_CFG table=filter:68 family=10 entries=1 op=nft_register_chain pid=2549 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 10 01:13:03.822000 audit[2549]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fffa91ff990 a2=0 a3=7fffa91ff97c items=0 ppid=2398 pid=2549 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:13:03.822000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Jul 10 01:13:03.823000 audit[2551]: NETFILTER_CFG table=filter:69 family=10 entries=1 op=nft_register_rule pid=2551 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 10 01:13:03.823000 audit[2551]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffc7a7c81e0 a2=0 a3=7ffc7a7c81cc items=0 ppid=2398 pid=2551 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:13:03.823000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Jul 10 01:13:03.824000 audit[2552]: NETFILTER_CFG table=filter:70 family=10 entries=1 op=nft_register_chain pid=2552 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 10 01:13:03.824000 audit[2552]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffeea18c490 a2=0 a3=7ffeea18c47c items=0 ppid=2398 pid=2552 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:13:03.824000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Jul 10 01:13:03.826000 audit[2554]: NETFILTER_CFG table=filter:71 family=10 entries=1 op=nft_register_rule pid=2554 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 10 01:13:03.826000 audit[2554]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffc993c8930 a2=0 a3=7ffc993c891c items=0 ppid=2398 pid=2554 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:13:03.826000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B554245 Jul 10 01:13:03.828000 audit[2557]: NETFILTER_CFG table=filter:72 family=10 entries=2 op=nft_register_chain pid=2557 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 10 01:13:03.828000 audit[2557]: SYSCALL arch=c000003e syscall=46 success=yes exit=828 a0=3 a1=7ffdcef6fd30 a2=0 a3=7ffdcef6fd1c items=0 ppid=2398 pid=2557 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:13:03.828000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Jul 10 01:13:03.829000 audit[2558]: NETFILTER_CFG table=filter:73 family=10 entries=1 op=nft_register_chain pid=2558 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 10 01:13:03.829000 audit[2558]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff2d850b40 a2=0 a3=7fff2d850b2c items=0 ppid=2398 pid=2558 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:13:03.829000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Jul 10 01:13:03.830000 audit[2560]: NETFILTER_CFG table=filter:74 family=10 entries=1 op=nft_register_rule pid=2560 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 10 01:13:03.830000 audit[2560]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffeaaef0e80 a2=0 a3=7ffeaaef0e6c items=0 ppid=2398 pid=2560 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:13:03.830000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Jul 10 01:13:03.831000 audit[2561]: NETFILTER_CFG table=filter:75 family=10 entries=1 op=nft_register_chain pid=2561 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 10 01:13:03.831000 audit[2561]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffc6064d710 a2=0 a3=7ffc6064d6fc items=0 ppid=2398 pid=2561 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:13:03.831000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Jul 10 01:13:03.833000 audit[2563]: NETFILTER_CFG table=filter:76 family=10 entries=1 op=nft_register_rule pid=2563 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 10 01:13:03.833000 audit[2563]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7fffc9d876b0 a2=0 a3=7fffc9d8769c 
items=0 ppid=2398 pid=2563 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:13:03.833000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Jul 10 01:13:03.836000 audit[2566]: NETFILTER_CFG table=filter:77 family=10 entries=1 op=nft_register_rule pid=2566 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 10 01:13:03.836000 audit[2566]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffc77289b60 a2=0 a3=7ffc77289b4c items=0 ppid=2398 pid=2566 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:13:03.836000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Jul 10 01:13:03.838000 audit[2569]: NETFILTER_CFG table=filter:78 family=10 entries=1 op=nft_register_rule pid=2569 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 10 01:13:03.838000 audit[2569]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffd68389050 a2=0 a3=7ffd6838903c items=0 ppid=2398 pid=2569 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:13:03.838000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C Jul 10 01:13:03.839000 audit[2570]: NETFILTER_CFG table=nat:79 family=10 entries=1 op=nft_register_chain pid=2570 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 10 01:13:03.839000 audit[2570]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffd8d66b7e0 a2=0 a3=7ffd8d66b7cc items=0 ppid=2398 pid=2570 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:13:03.839000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Jul 10 01:13:03.840000 audit[2572]: NETFILTER_CFG table=nat:80 family=10 entries=2 op=nft_register_chain pid=2572 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 10 01:13:03.840000 audit[2572]: SYSCALL arch=c000003e syscall=46 success=yes exit=600 a0=3 a1=7ffee15e8580 a2=0 a3=7ffee15e856c items=0 ppid=2398 pid=2572 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:13:03.840000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jul 10 01:13:03.842000 audit[2575]: NETFILTER_CFG table=nat:81 family=10 entries=2 op=nft_register_chain pid=2575 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 10 01:13:03.842000 audit[2575]: SYSCALL arch=c000003e syscall=46 success=yes exit=608 a0=3 a1=7fff2333c830 a2=0 a3=7fff2333c81c items=0 ppid=2398 pid=2575 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:13:03.842000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jul 10 01:13:03.843000 audit[2576]: NETFILTER_CFG table=nat:82 family=10 entries=1 op=nft_register_chain pid=2576 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 10 01:13:03.843000 audit[2576]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffcb60e40a0 a2=0 a3=7ffcb60e408c items=0 ppid=2398 pid=2576 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:13:03.843000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Jul 10 01:13:03.846000 audit[2578]: NETFILTER_CFG table=nat:83 family=10 entries=2 op=nft_register_chain pid=2578 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 10 01:13:03.846000 audit[2578]: SYSCALL arch=c000003e syscall=46 success=yes exit=612 a0=3 a1=7ffc3b170440 a2=0 a3=7ffc3b17042c items=0 ppid=2398 pid=2578 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:13:03.846000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Jul 10 01:13:03.847000 audit[2579]: NETFILTER_CFG table=filter:84 family=10 entries=1 op=nft_register_chain pid=2579 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 10 01:13:03.847000 audit[2579]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffde703c490 a2=0 a3=7ffde703c47c items=0 ppid=2398 pid=2579 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:13:03.847000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Jul 10 01:13:03.849000 audit[2581]: NETFILTER_CFG table=filter:85 family=10 entries=1 op=nft_register_rule pid=2581 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 10 01:13:03.849000 audit[2581]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffc8643f320 a2=0 a3=7ffc8643f30c items=0 ppid=2398 pid=2581 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 
sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:13:03.849000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jul 10 01:13:03.851000 audit[2584]: NETFILTER_CFG table=filter:86 family=10 entries=1 op=nft_register_rule pid=2584 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jul 10 01:13:03.851000 audit[2584]: SYSCALL arch=c000003e syscall=46 success=yes exit=228 a0=3 a1=7ffe504c3da0 a2=0 a3=7ffe504c3d8c items=0 ppid=2398 pid=2584 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:13:03.851000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jul 10 01:13:03.854000 audit[2586]: NETFILTER_CFG table=filter:87 family=10 entries=3 op=nft_register_rule pid=2586 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Jul 10 01:13:03.854000 audit[2586]: SYSCALL arch=c000003e syscall=46 success=yes exit=2088 a0=3 a1=7ffcf691eb80 a2=0 a3=7ffcf691eb6c items=0 ppid=2398 pid=2586 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:13:03.854000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 10 01:13:03.854000 audit[2586]: NETFILTER_CFG table=nat:88 family=10 entries=7 op=nft_register_chain pid=2586 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Jul 10 01:13:03.854000 audit[2586]: SYSCALL arch=c000003e syscall=46 success=yes exit=2056 a0=3 a1=7ffcf691eb80 a2=0 a3=7ffcf691eb6c items=0 ppid=2398 pid=2586 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:13:03.854000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 10 01:13:04.007017 systemd[1]: run-containerd-runc-k8s.io-01443d9289a0bbe23feae26cd6280fa2fd433168d943a8fed752302c7264f2ab-runc.TSOzkN.mount: Deactivated successfully. Jul 10 01:13:05.374204 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount629200504.mount: Deactivated successfully. 
Jul 10 01:13:06.037070 env[1363]: time="2025-07-10T01:13:06.037032399Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/tigera/operator:v1.38.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 01:13:06.041209 env[1363]: time="2025-07-10T01:13:06.041186968Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 01:13:06.043685 env[1363]: time="2025-07-10T01:13:06.043664004Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/tigera/operator:v1.38.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 01:13:06.047827 env[1363]: time="2025-07-10T01:13:06.047809790Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 01:13:06.048151 env[1363]: time="2025-07-10T01:13:06.048133822Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\" returns image reference \"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\"" Jul 10 01:13:06.051294 env[1363]: time="2025-07-10T01:13:06.051256212Z" level=info msg="CreateContainer within sandbox \"01443d9289a0bbe23feae26cd6280fa2fd433168d943a8fed752302c7264f2ab\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jul 10 01:13:06.069149 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3521177278.mount: Deactivated successfully. Jul 10 01:13:06.083325 env[1363]: time="2025-07-10T01:13:06.083296800Z" level=info msg="CreateContainer within sandbox \"01443d9289a0bbe23feae26cd6280fa2fd433168d943a8fed752302c7264f2ab\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"1d85ec74d241860eeadf05dad7e3fcac3b836bb5b8e411f5de5ce4e21f282532\"" Jul 10 01:13:06.083860 env[1363]: time="2025-07-10T01:13:06.083845225Z" level=info msg="StartContainer for \"1d85ec74d241860eeadf05dad7e3fcac3b836bb5b8e411f5de5ce4e21f282532\"" Jul 10 01:13:06.128386 env[1363]: time="2025-07-10T01:13:06.128357500Z" level=info msg="StartContainer for \"1d85ec74d241860eeadf05dad7e3fcac3b836bb5b8e411f5de5ce4e21f282532\" returns successfully" Jul 10 01:13:06.336189 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3121950832.mount: Deactivated successfully. Jul 10 01:13:06.557424 kubelet[2299]: I0710 01:13:06.557312 2299 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-5bf8dfcb4-twgs2" podStartSLOduration=3.272395334 podStartE2EDuration="5.557287318s" podCreationTimestamp="2025-07-10 01:13:01 +0000 UTC" firstStartedPulling="2025-07-10 01:13:03.76419017 +0000 UTC m=+3.632617052" lastFinishedPulling="2025-07-10 01:13:06.04908215 +0000 UTC m=+5.917509036" observedRunningTime="2025-07-10 01:13:06.557043324 +0000 UTC m=+6.425470225" watchObservedRunningTime="2025-07-10 01:13:06.557287318 +0000 UTC m=+6.425714216" Jul 10 01:13:11.592000 audit[1640]: USER_END pid=1640 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Jul 10 01:13:11.600240 kernel: kauditd_printk_skb: 143 callbacks suppressed Jul 10 01:13:11.600286 kernel: audit: type=1106 audit(1752109991.592:280): pid=1640 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jul 10 01:13:11.603408 kernel: audit: type=1104 audit(1752109991.592:281): pid=1640 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jul 10 01:13:11.592000 audit[1640]: CRED_DISP pid=1640 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jul 10 01:13:11.593613 sudo[1640]: pam_unix(sudo:session): session closed for user root Jul 10 01:13:11.616381 sshd[1634]: pam_unix(sshd:session): session closed for user core Jul 10 01:13:11.619000 audit[1634]: USER_END pid=1634 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:13:11.620000 audit[1634]: CRED_DISP pid=1634 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:13:11.630214 kernel: audit: type=1106 audit(1752109991.619:282): pid=1634 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:13:11.630283 kernel: audit: type=1104 audit(1752109991.620:283): pid=1634 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:13:11.636754 systemd[1]: sshd@6-139.178.70.102:22-139.178.68.195:56310.service: Deactivated successfully. Jul 10 01:13:11.637719 systemd[1]: session-9.scope: Deactivated successfully. Jul 10 01:13:11.638001 systemd-logind[1351]: Session 9 logged out. Waiting for processes to exit. Jul 10 01:13:11.635000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-139.178.70.102:22-139.178.68.195:56310 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 01:13:11.642733 kernel: audit: type=1131 audit(1752109991.635:284): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-139.178.70.102:22-139.178.68.195:56310 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 01:13:11.642732 systemd-logind[1351]: Removed session 9. 
Jul 10 01:13:12.142000 audit[2670]: NETFILTER_CFG table=filter:89 family=2 entries=15 op=nft_register_rule pid=2670 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 10 01:13:12.142000 audit[2670]: SYSCALL arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7ffe8d432570 a2=0 a3=7ffe8d43255c items=0 ppid=2398 pid=2670 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:13:12.150916 kernel: audit: type=1325 audit(1752109992.142:285): table=filter:89 family=2 entries=15 op=nft_register_rule pid=2670 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 10 01:13:12.150976 kernel: audit: type=1300 audit(1752109992.142:285): arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7ffe8d432570 a2=0 a3=7ffe8d43255c items=0 ppid=2398 pid=2670 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:13:12.142000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 10 01:13:12.153266 kernel: audit: type=1327 audit(1752109992.142:285): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 10 01:13:12.152000 audit[2670]: NETFILTER_CFG table=nat:90 family=2 entries=12 op=nft_register_rule pid=2670 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 10 01:13:12.158439 kernel: audit: type=1325 audit(1752109992.152:286): table=nat:90 family=2 entries=12 op=nft_register_rule pid=2670 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 10 01:13:12.152000 audit[2670]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffe8d432570 a2=0 a3=0 items=0 ppid=2398 pid=2670 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:13:12.152000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 10 01:13:12.162691 kernel: audit: type=1300 audit(1752109992.152:286): arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffe8d432570 a2=0 a3=0 items=0 ppid=2398 pid=2670 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:13:12.199000 audit[2672]: NETFILTER_CFG table=filter:91 family=2 entries=16 op=nft_register_rule pid=2672 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 10 01:13:12.199000 audit[2672]: SYSCALL arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7ffe1541cd70 a2=0 a3=7ffe1541cd5c items=0 ppid=2398 pid=2672 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:13:12.199000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 10 01:13:12.204000 audit[2672]: NETFILTER_CFG table=nat:92 family=2 entries=12 op=nft_register_rule pid=2672 
subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 10 01:13:12.204000 audit[2672]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffe1541cd70 a2=0 a3=0 items=0 ppid=2398 pid=2672 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:13:12.204000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 10 01:13:13.716000 audit[2674]: NETFILTER_CFG table=filter:93 family=2 entries=17 op=nft_register_rule pid=2674 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 10 01:13:13.716000 audit[2674]: SYSCALL arch=c000003e syscall=46 success=yes exit=6736 a0=3 a1=7ffe905d2d40 a2=0 a3=7ffe905d2d2c items=0 ppid=2398 pid=2674 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:13:13.716000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 10 01:13:13.721000 audit[2674]: NETFILTER_CFG table=nat:94 family=2 entries=12 op=nft_register_rule pid=2674 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 10 01:13:13.721000 audit[2674]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffe905d2d40 a2=0 a3=0 items=0 ppid=2398 pid=2674 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:13:13.721000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 10 01:13:14.117000 audit[2676]: NETFILTER_CFG table=filter:95 family=2 entries=20 op=nft_register_rule pid=2676 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 10 01:13:14.117000 audit[2676]: SYSCALL arch=c000003e syscall=46 success=yes exit=8224 a0=3 a1=7ffc29627a30 a2=0 a3=7ffc29627a1c items=0 ppid=2398 pid=2676 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:13:14.117000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 10 01:13:14.122000 audit[2676]: NETFILTER_CFG table=nat:96 family=2 entries=12 op=nft_register_rule pid=2676 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 10 01:13:14.122000 audit[2676]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffc29627a30 a2=0 a3=0 items=0 ppid=2398 pid=2676 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:13:14.122000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 10 01:13:14.166969 kubelet[2299]: I0710 01:13:14.166935 2299 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/bb9848ea-740a-453f-b511-e75cc1983690-tigera-ca-bundle\") pod \"calico-typha-66ddcf689b-z7vqm\" (UID: \"bb9848ea-740a-453f-b511-e75cc1983690\") " pod="calico-system/calico-typha-66ddcf689b-z7vqm" Jul 10 01:13:14.167309 kubelet[2299]: I0710 01:13:14.166977 2299 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/bb9848ea-740a-453f-b511-e75cc1983690-typha-certs\") pod \"calico-typha-66ddcf689b-z7vqm\" (UID: \"bb9848ea-740a-453f-b511-e75cc1983690\") " pod="calico-system/calico-typha-66ddcf689b-z7vqm" Jul 10 01:13:14.167309 kubelet[2299]: I0710 01:13:14.166992 2299 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-988cw\" (UniqueName: \"kubernetes.io/projected/bb9848ea-740a-453f-b511-e75cc1983690-kube-api-access-988cw\") pod \"calico-typha-66ddcf689b-z7vqm\" (UID: \"bb9848ea-740a-453f-b511-e75cc1983690\") " pod="calico-system/calico-typha-66ddcf689b-z7vqm" Jul 10 01:13:14.450245 env[1363]: time="2025-07-10T01:13:14.450178927Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-66ddcf689b-z7vqm,Uid:bb9848ea-740a-453f-b511-e75cc1983690,Namespace:calico-system,Attempt:0,}" Jul 10 01:13:14.466700 env[1363]: time="2025-07-10T01:13:14.466564333Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 10 01:13:14.466700 env[1363]: time="2025-07-10T01:13:14.466592370Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 10 01:13:14.466700 env[1363]: time="2025-07-10T01:13:14.466599531Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 10 01:13:14.466886 env[1363]: time="2025-07-10T01:13:14.466714018Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/fbd7787e3ba7347a2dbc28cc06ce79390bd9946b44636e633481dc0bb5ca8f11 pid=2686 runtime=io.containerd.runc.v2 Jul 10 01:13:14.555966 env[1363]: time="2025-07-10T01:13:14.555942386Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-66ddcf689b-z7vqm,Uid:bb9848ea-740a-453f-b511-e75cc1983690,Namespace:calico-system,Attempt:0,} returns sandbox id \"fbd7787e3ba7347a2dbc28cc06ce79390bd9946b44636e633481dc0bb5ca8f11\"" Jul 10 01:13:14.556925 env[1363]: time="2025-07-10T01:13:14.556910378Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\"" Jul 10 01:13:14.570511 kubelet[2299]: I0710 01:13:14.570478 2299 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/6367e512-6f46-407d-94e1-a5c573185269-flexvol-driver-host\") pod \"calico-node-2k6z4\" (UID: \"6367e512-6f46-407d-94e1-a5c573185269\") " pod="calico-system/calico-node-2k6z4" Jul 10 01:13:14.570511 kubelet[2299]: I0710 01:13:14.570507 2299 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/6367e512-6f46-407d-94e1-a5c573185269-var-lib-calico\") pod \"calico-node-2k6z4\" (UID: \"6367e512-6f46-407d-94e1-a5c573185269\") " pod="calico-system/calico-node-2k6z4" Jul 10 01:13:14.570651 kubelet[2299]: I0710 01:13:14.570522 2299 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6367e512-6f46-407d-94e1-a5c573185269-lib-modules\") pod \"calico-node-2k6z4\" (UID: \"6367e512-6f46-407d-94e1-a5c573185269\") " pod="calico-system/calico-node-2k6z4" Jul 10 01:13:14.570651 kubelet[2299]: I0710 01:13:14.570533 2299 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6367e512-6f46-407d-94e1-a5c573185269-xtables-lock\") pod \"calico-node-2k6z4\" (UID: \"6367e512-6f46-407d-94e1-a5c573185269\") " pod="calico-system/calico-node-2k6z4" Jul 10 01:13:14.570651 kubelet[2299]: I0710 01:13:14.570543 2299 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8p7c2\" (UniqueName: \"kubernetes.io/projected/6367e512-6f46-407d-94e1-a5c573185269-kube-api-access-8p7c2\") pod \"calico-node-2k6z4\" (UID: \"6367e512-6f46-407d-94e1-a5c573185269\") " pod="calico-system/calico-node-2k6z4" Jul 10 01:13:14.570651 kubelet[2299]: I0710 01:13:14.570551 2299 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/6367e512-6f46-407d-94e1-a5c573185269-var-run-calico\") pod \"calico-node-2k6z4\" (UID: \"6367e512-6f46-407d-94e1-a5c573185269\") " pod="calico-system/calico-node-2k6z4" Jul 10 01:13:14.570651 kubelet[2299]: I0710 01:13:14.570561 2299 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/6367e512-6f46-407d-94e1-a5c573185269-cni-log-dir\") pod \"calico-node-2k6z4\" (UID: \"6367e512-6f46-407d-94e1-a5c573185269\") " pod="calico-system/calico-node-2k6z4" Jul 10 01:13:14.570776 
kubelet[2299]: I0710 01:13:14.570572 2299 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/6367e512-6f46-407d-94e1-a5c573185269-cni-net-dir\") pod \"calico-node-2k6z4\" (UID: \"6367e512-6f46-407d-94e1-a5c573185269\") " pod="calico-system/calico-node-2k6z4" Jul 10 01:13:14.570776 kubelet[2299]: I0710 01:13:14.570582 2299 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/6367e512-6f46-407d-94e1-a5c573185269-node-certs\") pod \"calico-node-2k6z4\" (UID: \"6367e512-6f46-407d-94e1-a5c573185269\") " pod="calico-system/calico-node-2k6z4" Jul 10 01:13:14.570776 kubelet[2299]: I0710 01:13:14.570591 2299 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6367e512-6f46-407d-94e1-a5c573185269-tigera-ca-bundle\") pod \"calico-node-2k6z4\" (UID: \"6367e512-6f46-407d-94e1-a5c573185269\") " pod="calico-system/calico-node-2k6z4" Jul 10 01:13:14.570776 kubelet[2299]: I0710 01:13:14.570599 2299 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/6367e512-6f46-407d-94e1-a5c573185269-cni-bin-dir\") pod \"calico-node-2k6z4\" (UID: \"6367e512-6f46-407d-94e1-a5c573185269\") " pod="calico-system/calico-node-2k6z4" Jul 10 01:13:14.570776 kubelet[2299]: I0710 01:13:14.570608 2299 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/6367e512-6f46-407d-94e1-a5c573185269-policysync\") pod \"calico-node-2k6z4\" (UID: \"6367e512-6f46-407d-94e1-a5c573185269\") " pod="calico-system/calico-node-2k6z4" Jul 10 01:13:14.677708 kubelet[2299]: E0710 01:13:14.677684 2299 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 01:13:14.677809 kubelet[2299]: W0710 01:13:14.677702 2299 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 01:13:14.677809 kubelet[2299]: E0710 01:13:14.677730 2299 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 01:13:14.744190 kubelet[2299]: E0710 01:13:14.744113 2299 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-b48c6" podUID="c15a8f19-7056-4133-9713-c590210e2422" Jul 10 01:13:14.766961 kubelet[2299]: E0710 01:13:14.766938 2299 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 01:13:14.767065 kubelet[2299]: W0710 01:13:14.766954 2299 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 01:13:14.767065 kubelet[2299]: E0710 01:13:14.766984 2299 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 01:13:14.767192 kubelet[2299]: E0710 01:13:14.767183 2299 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 01:13:14.767234 kubelet[2299]: W0710 01:13:14.767199 2299 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 01:13:14.767234 kubelet[2299]: E0710 01:13:14.767208 2299 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 01:13:14.767351 kubelet[2299]: E0710 01:13:14.767342 2299 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 01:13:14.767383 kubelet[2299]: W0710 01:13:14.767359 2299 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 01:13:14.767383 kubelet[2299]: E0710 01:13:14.767366 2299 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 01:13:14.767496 kubelet[2299]: E0710 01:13:14.767487 2299 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 01:13:14.767496 kubelet[2299]: W0710 01:13:14.767493 2299 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 01:13:14.767553 kubelet[2299]: E0710 01:13:14.767499 2299 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 01:13:14.767659 kubelet[2299]: E0710 01:13:14.767650 2299 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 01:13:14.767659 kubelet[2299]: W0710 01:13:14.767658 2299 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 01:13:14.767719 kubelet[2299]: E0710 01:13:14.767663 2299 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 01:13:14.767816 kubelet[2299]: E0710 01:13:14.767807 2299 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 01:13:14.767816 kubelet[2299]: W0710 01:13:14.767813 2299 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 01:13:14.767876 kubelet[2299]: E0710 01:13:14.767819 2299 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 01:13:14.767984 kubelet[2299]: E0710 01:13:14.767974 2299 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 01:13:14.767984 kubelet[2299]: W0710 01:13:14.767981 2299 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 01:13:14.768059 kubelet[2299]: E0710 01:13:14.767986 2299 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 01:13:14.768160 kubelet[2299]: E0710 01:13:14.768151 2299 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 01:13:14.768160 kubelet[2299]: W0710 01:13:14.768157 2299 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 01:13:14.768218 kubelet[2299]: E0710 01:13:14.768163 2299 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 01:13:14.768355 kubelet[2299]: E0710 01:13:14.768344 2299 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 01:13:14.768355 kubelet[2299]: W0710 01:13:14.768352 2299 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 01:13:14.768418 kubelet[2299]: E0710 01:13:14.768358 2299 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 01:13:14.768511 kubelet[2299]: E0710 01:13:14.768502 2299 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 01:13:14.768511 kubelet[2299]: W0710 01:13:14.768508 2299 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 01:13:14.768568 kubelet[2299]: E0710 01:13:14.768513 2299 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 01:13:14.768682 kubelet[2299]: E0710 01:13:14.768671 2299 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 01:13:14.768720 kubelet[2299]: W0710 01:13:14.768681 2299 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 01:13:14.768720 kubelet[2299]: E0710 01:13:14.768689 2299 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 01:13:14.768849 kubelet[2299]: E0710 01:13:14.768837 2299 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 01:13:14.768849 kubelet[2299]: W0710 01:13:14.768844 2299 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 01:13:14.768849 kubelet[2299]: E0710 01:13:14.768849 2299 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 01:13:14.769009 kubelet[2299]: E0710 01:13:14.769000 2299 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 01:13:14.769009 kubelet[2299]: W0710 01:13:14.769007 2299 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 01:13:14.770522 kubelet[2299]: E0710 01:13:14.769012 2299 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 01:13:14.770522 kubelet[2299]: E0710 01:13:14.769205 2299 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 01:13:14.770522 kubelet[2299]: W0710 01:13:14.769220 2299 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 01:13:14.770522 kubelet[2299]: E0710 01:13:14.769227 2299 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 01:13:14.770522 kubelet[2299]: E0710 01:13:14.769489 2299 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 01:13:14.770522 kubelet[2299]: W0710 01:13:14.769497 2299 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 01:13:14.770522 kubelet[2299]: E0710 01:13:14.769505 2299 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 01:13:14.770522 kubelet[2299]: E0710 01:13:14.769616 2299 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 01:13:14.770522 kubelet[2299]: W0710 01:13:14.769629 2299 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 01:13:14.770522 kubelet[2299]: E0710 01:13:14.769653 2299 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 01:13:14.772115 kubelet[2299]: E0710 01:13:14.769784 2299 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 01:13:14.772115 kubelet[2299]: W0710 01:13:14.769789 2299 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 01:13:14.772115 kubelet[2299]: E0710 01:13:14.769794 2299 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 01:13:14.772115 kubelet[2299]: E0710 01:13:14.769903 2299 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 01:13:14.772115 kubelet[2299]: W0710 01:13:14.769908 2299 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 01:13:14.772115 kubelet[2299]: E0710 01:13:14.769914 2299 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 01:13:14.772115 kubelet[2299]: E0710 01:13:14.770018 2299 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 01:13:14.772115 kubelet[2299]: W0710 01:13:14.770023 2299 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 01:13:14.772115 kubelet[2299]: E0710 01:13:14.770040 2299 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 01:13:14.772115 kubelet[2299]: E0710 01:13:14.770141 2299 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 01:13:14.772297 kubelet[2299]: W0710 01:13:14.770149 2299 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 01:13:14.772297 kubelet[2299]: E0710 01:13:14.770156 2299 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 01:13:14.772297 kubelet[2299]: E0710 01:13:14.771314 2299 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 01:13:14.772297 kubelet[2299]: W0710 01:13:14.771323 2299 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 01:13:14.772297 kubelet[2299]: E0710 01:13:14.771331 2299 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 01:13:14.772297 kubelet[2299]: I0710 01:13:14.771348 2299 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c15a8f19-7056-4133-9713-c590210e2422-kubelet-dir\") pod \"csi-node-driver-b48c6\" (UID: \"c15a8f19-7056-4133-9713-c590210e2422\") " pod="calico-system/csi-node-driver-b48c6" Jul 10 01:13:14.772297 kubelet[2299]: E0710 01:13:14.771453 2299 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 01:13:14.772297 kubelet[2299]: W0710 01:13:14.771466 2299 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 01:13:14.772297 kubelet[2299]: E0710 01:13:14.771474 2299 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 01:13:14.772464 kubelet[2299]: I0710 01:13:14.771485 2299 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/c15a8f19-7056-4133-9713-c590210e2422-varrun\") pod \"csi-node-driver-b48c6\" (UID: \"c15a8f19-7056-4133-9713-c590210e2422\") " pod="calico-system/csi-node-driver-b48c6" Jul 10 01:13:14.772464 kubelet[2299]: E0710 01:13:14.771689 2299 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 01:13:14.772464 kubelet[2299]: W0710 01:13:14.771697 2299 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 01:13:14.772464 kubelet[2299]: E0710 01:13:14.771712 2299 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 01:13:14.772464 kubelet[2299]: I0710 01:13:14.771731 2299 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jj7lc\" (UniqueName: \"kubernetes.io/projected/c15a8f19-7056-4133-9713-c590210e2422-kube-api-access-jj7lc\") pod \"csi-node-driver-b48c6\" (UID: \"c15a8f19-7056-4133-9713-c590210e2422\") " pod="calico-system/csi-node-driver-b48c6" Jul 10 01:13:14.772647 kubelet[2299]: E0710 01:13:14.772627 2299 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 01:13:14.772688 kubelet[2299]: W0710 01:13:14.772647 2299 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 01:13:14.772688 kubelet[2299]: E0710 01:13:14.772660 2299 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 01:13:14.773468 kubelet[2299]: E0710 01:13:14.773456 2299 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 01:13:14.773468 kubelet[2299]: W0710 01:13:14.773467 2299 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 01:13:14.773567 kubelet[2299]: E0710 01:13:14.773554 2299 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 01:13:14.773723 kubelet[2299]: E0710 01:13:14.773712 2299 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 01:13:14.773779 kubelet[2299]: W0710 01:13:14.773721 2299 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 01:13:14.773825 kubelet[2299]: E0710 01:13:14.773815 2299 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 01:13:14.773940 kubelet[2299]: E0710 01:13:14.773926 2299 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 01:13:14.773940 kubelet[2299]: W0710 01:13:14.773938 2299 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 01:13:14.774033 kubelet[2299]: E0710 01:13:14.774024 2299 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 01:13:14.774088 kubelet[2299]: I0710 01:13:14.774080 2299 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/c15a8f19-7056-4133-9713-c590210e2422-registration-dir\") pod \"csi-node-driver-b48c6\" (UID: \"c15a8f19-7056-4133-9713-c590210e2422\") " pod="calico-system/csi-node-driver-b48c6" Jul 10 01:13:14.774188 kubelet[2299]: E0710 01:13:14.774179 2299 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 01:13:14.774188 kubelet[2299]: W0710 01:13:14.774187 2299 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 01:13:14.774265 kubelet[2299]: E0710 01:13:14.774256 2299 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 01:13:14.774372 kubelet[2299]: E0710 01:13:14.774364 2299 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 01:13:14.774372 kubelet[2299]: W0710 01:13:14.774370 2299 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 01:13:14.774433 kubelet[2299]: E0710 01:13:14.774376 2299 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 01:13:14.774697 kubelet[2299]: E0710 01:13:14.774686 2299 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 01:13:14.774736 kubelet[2299]: W0710 01:13:14.774697 2299 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 01:13:14.774736 kubelet[2299]: E0710 01:13:14.774709 2299 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 01:13:14.774736 kubelet[2299]: I0710 01:13:14.774720 2299 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/c15a8f19-7056-4133-9713-c590210e2422-socket-dir\") pod \"csi-node-driver-b48c6\" (UID: \"c15a8f19-7056-4133-9713-c590210e2422\") " pod="calico-system/csi-node-driver-b48c6" Jul 10 01:13:14.774897 kubelet[2299]: E0710 01:13:14.774884 2299 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 01:13:14.774940 kubelet[2299]: W0710 01:13:14.774895 2299 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 01:13:14.774940 kubelet[2299]: E0710 01:13:14.774907 2299 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 01:13:14.775093 kubelet[2299]: E0710 01:13:14.775081 2299 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 01:13:14.775093 kubelet[2299]: W0710 01:13:14.775089 2299 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 01:13:14.775183 kubelet[2299]: E0710 01:13:14.775097 2299 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 01:13:14.775579 kubelet[2299]: E0710 01:13:14.775558 2299 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 01:13:14.775579 kubelet[2299]: W0710 01:13:14.775566 2299 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 01:13:14.775579 kubelet[2299]: E0710 01:13:14.775574 2299 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 01:13:14.775926 kubelet[2299]: E0710 01:13:14.775905 2299 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 01:13:14.775926 kubelet[2299]: W0710 01:13:14.775915 2299 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 01:13:14.775926 kubelet[2299]: E0710 01:13:14.775924 2299 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 01:13:14.776945 kubelet[2299]: E0710 01:13:14.776604 2299 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 01:13:14.776945 kubelet[2299]: W0710 01:13:14.776612 2299 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 01:13:14.776945 kubelet[2299]: E0710 01:13:14.776633 2299 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 01:13:14.821858 env[1363]: time="2025-07-10T01:13:14.821826028Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-2k6z4,Uid:6367e512-6f46-407d-94e1-a5c573185269,Namespace:calico-system,Attempt:0,}" Jul 10 01:13:14.841946 env[1363]: time="2025-07-10T01:13:14.841684655Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 10 01:13:14.841946 env[1363]: time="2025-07-10T01:13:14.841739288Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 10 01:13:14.841946 env[1363]: time="2025-07-10T01:13:14.841748084Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 10 01:13:14.842132 env[1363]: time="2025-07-10T01:13:14.841985032Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/9dc4577f9ef3039e231d6f8c765d532b1f3c07ac6e787523cfb69a78230909e1 pid=2776 runtime=io.containerd.runc.v2 Jul 10 01:13:14.866947 env[1363]: time="2025-07-10T01:13:14.866915778Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-2k6z4,Uid:6367e512-6f46-407d-94e1-a5c573185269,Namespace:calico-system,Attempt:0,} returns sandbox id \"9dc4577f9ef3039e231d6f8c765d532b1f3c07ac6e787523cfb69a78230909e1\"" Jul 10 01:13:14.877959 kubelet[2299]: E0710 01:13:14.877938 2299 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 01:13:14.877959 kubelet[2299]: W0710 01:13:14.877953 2299 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 01:13:14.878094 kubelet[2299]: E0710 01:13:14.877967 2299 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 01:13:14.878094 kubelet[2299]: E0710 01:13:14.878066 2299 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 01:13:14.878094 kubelet[2299]: W0710 01:13:14.878071 2299 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 01:13:14.878094 kubelet[2299]: E0710 01:13:14.878077 2299 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 01:13:14.878177 kubelet[2299]: E0710 01:13:14.878156 2299 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 01:13:14.878177 kubelet[2299]: W0710 01:13:14.878161 2299 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 01:13:14.878177 kubelet[2299]: E0710 01:13:14.878165 2299 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 01:13:14.878251 kubelet[2299]: E0710 01:13:14.878240 2299 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 01:13:14.878251 kubelet[2299]: W0710 01:13:14.878247 2299 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 01:13:14.878303 kubelet[2299]: E0710 01:13:14.878252 2299 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 01:13:14.878351 kubelet[2299]: E0710 01:13:14.878341 2299 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 01:13:14.878351 kubelet[2299]: W0710 01:13:14.878348 2299 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 01:13:14.878403 kubelet[2299]: E0710 01:13:14.878354 2299 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 01:13:14.878445 kubelet[2299]: E0710 01:13:14.878436 2299 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 01:13:14.878445 kubelet[2299]: W0710 01:13:14.878442 2299 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 01:13:14.878502 kubelet[2299]: E0710 01:13:14.878448 2299 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 01:13:14.878526 kubelet[2299]: E0710 01:13:14.878516 2299 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 01:13:14.878526 kubelet[2299]: W0710 01:13:14.878520 2299 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 01:13:14.878526 kubelet[2299]: E0710 01:13:14.878524 2299 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 01:13:14.878599 kubelet[2299]: E0710 01:13:14.878589 2299 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 01:13:14.878599 kubelet[2299]: W0710 01:13:14.878595 2299 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 01:13:14.878599 kubelet[2299]: E0710 01:13:14.878601 2299 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 01:13:14.878825 kubelet[2299]: E0710 01:13:14.878815 2299 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 01:13:14.878879 kubelet[2299]: W0710 01:13:14.878870 2299 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 01:13:14.878938 kubelet[2299]: E0710 01:13:14.878929 2299 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 01:13:14.879069 kubelet[2299]: E0710 01:13:14.879063 2299 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 01:13:14.879123 kubelet[2299]: W0710 01:13:14.879115 2299 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 01:13:14.879175 kubelet[2299]: E0710 01:13:14.879165 2299 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 01:13:14.879305 kubelet[2299]: E0710 01:13:14.879298 2299 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 01:13:14.879352 kubelet[2299]: W0710 01:13:14.879343 2299 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 01:13:14.879405 kubelet[2299]: E0710 01:13:14.879397 2299 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 01:13:14.879561 kubelet[2299]: E0710 01:13:14.879554 2299 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 01:13:14.879605 kubelet[2299]: W0710 01:13:14.879597 2299 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 01:13:14.879675 kubelet[2299]: E0710 01:13:14.879667 2299 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 01:13:14.879841 kubelet[2299]: E0710 01:13:14.879832 2299 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 01:13:14.879893 kubelet[2299]: W0710 01:13:14.879883 2299 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 01:13:14.884570 kubelet[2299]: E0710 01:13:14.879956 2299 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 01:13:14.884570 kubelet[2299]: E0710 01:13:14.880065 2299 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 01:13:14.884570 kubelet[2299]: W0710 01:13:14.880072 2299 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 01:13:14.884570 kubelet[2299]: E0710 01:13:14.880080 2299 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 01:13:14.884570 kubelet[2299]: E0710 01:13:14.880168 2299 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 01:13:14.884570 kubelet[2299]: W0710 01:13:14.880173 2299 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 01:13:14.884570 kubelet[2299]: E0710 01:13:14.880178 2299 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 01:13:14.884570 kubelet[2299]: E0710 01:13:14.880294 2299 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 01:13:14.884570 kubelet[2299]: W0710 01:13:14.880299 2299 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 01:13:14.884570 kubelet[2299]: E0710 01:13:14.880304 2299 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 01:13:14.884851 kubelet[2299]: E0710 01:13:14.880391 2299 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 01:13:14.884851 kubelet[2299]: W0710 01:13:14.880395 2299 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 01:13:14.884851 kubelet[2299]: E0710 01:13:14.880401 2299 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 01:13:14.884851 kubelet[2299]: E0710 01:13:14.880474 2299 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 01:13:14.884851 kubelet[2299]: W0710 01:13:14.880479 2299 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 01:13:14.884851 kubelet[2299]: E0710 01:13:14.880483 2299 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 01:13:14.884851 kubelet[2299]: E0710 01:13:14.880561 2299 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 01:13:14.884851 kubelet[2299]: W0710 01:13:14.880565 2299 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 01:13:14.884851 kubelet[2299]: E0710 01:13:14.880569 2299 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 01:13:14.884851 kubelet[2299]: E0710 01:13:14.880679 2299 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 01:13:14.885042 kubelet[2299]: W0710 01:13:14.880684 2299 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 01:13:14.885042 kubelet[2299]: E0710 01:13:14.880689 2299 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 01:13:14.885042 kubelet[2299]: E0710 01:13:14.880882 2299 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 01:13:14.885042 kubelet[2299]: W0710 01:13:14.880887 2299 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 01:13:14.885042 kubelet[2299]: E0710 01:13:14.880893 2299 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 01:13:14.885042 kubelet[2299]: E0710 01:13:14.880963 2299 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 01:13:14.885042 kubelet[2299]: W0710 01:13:14.880968 2299 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 01:13:14.885042 kubelet[2299]: E0710 01:13:14.880973 2299 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 01:13:14.885042 kubelet[2299]: E0710 01:13:14.881043 2299 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 01:13:14.885042 kubelet[2299]: W0710 01:13:14.881047 2299 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 01:13:14.885238 kubelet[2299]: E0710 01:13:14.881052 2299 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 01:13:14.885238 kubelet[2299]: E0710 01:13:14.881136 2299 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 01:13:14.885238 kubelet[2299]: W0710 01:13:14.881141 2299 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 01:13:14.885238 kubelet[2299]: E0710 01:13:14.881146 2299 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 01:13:14.886126 kubelet[2299]: E0710 01:13:14.886118 2299 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 01:13:14.886182 kubelet[2299]: W0710 01:13:14.886172 2299 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 01:13:14.886234 kubelet[2299]: E0710 01:13:14.886225 2299 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 01:13:14.887295 kubelet[2299]: E0710 01:13:14.887288 2299 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 01:13:14.887345 kubelet[2299]: W0710 01:13:14.887336 2299 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 01:13:14.887407 kubelet[2299]: E0710 01:13:14.887399 2299 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 01:13:15.133000 audit[2838]: NETFILTER_CFG table=filter:97 family=2 entries=22 op=nft_register_rule pid=2838 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 10 01:13:15.133000 audit[2838]: SYSCALL arch=c000003e syscall=46 success=yes exit=8224 a0=3 a1=7ffd527c5df0 a2=0 a3=7ffd527c5ddc items=0 ppid=2398 pid=2838 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:13:15.133000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 10 01:13:15.135000 audit[2838]: NETFILTER_CFG table=nat:98 family=2 entries=12 op=nft_register_rule pid=2838 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 10 01:13:15.135000 audit[2838]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffd527c5df0 a2=0 a3=0 items=0 ppid=2398 pid=2838 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:13:15.135000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 10 01:13:15.273465 systemd[1]: run-containerd-runc-k8s.io-fbd7787e3ba7347a2dbc28cc06ce79390bd9946b44636e633481dc0bb5ca8f11-runc.vAD1Vr.mount: Deactivated successfully. Jul 10 01:13:16.375121 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1951583504.mount: Deactivated successfully. 
Jul 10 01:13:16.495133 kubelet[2299]: E0710 01:13:16.494741 2299 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-b48c6" podUID="c15a8f19-7056-4133-9713-c590210e2422" Jul 10 01:13:17.351383 env[1363]: time="2025-07-10T01:13:17.351341067Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/typha:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 01:13:17.364600 env[1363]: time="2025-07-10T01:13:17.364567645Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 01:13:17.370500 env[1363]: time="2025-07-10T01:13:17.370479498Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/typha:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 01:13:17.378493 env[1363]: time="2025-07-10T01:13:17.378470630Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 01:13:17.386909 env[1363]: time="2025-07-10T01:13:17.378734559Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\" returns image reference \"sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54\"" Jul 10 01:13:17.386909 env[1363]: time="2025-07-10T01:13:17.379942732Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\"" Jul 10 01:13:17.393387 env[1363]: time="2025-07-10T01:13:17.393360284Z" level=info msg="CreateContainer within sandbox \"fbd7787e3ba7347a2dbc28cc06ce79390bd9946b44636e633481dc0bb5ca8f11\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jul 10 01:13:17.420111 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount219180602.mount: Deactivated successfully. 
Jul 10 01:13:17.429094 env[1363]: time="2025-07-10T01:13:17.429067700Z" level=info msg="CreateContainer within sandbox \"fbd7787e3ba7347a2dbc28cc06ce79390bd9946b44636e633481dc0bb5ca8f11\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"9a5d4a598e938ac14cd5303eac5f5d043b801c05fe04056375ed6661f862bc21\"" Jul 10 01:13:17.429683 env[1363]: time="2025-07-10T01:13:17.429665973Z" level=info msg="StartContainer for \"9a5d4a598e938ac14cd5303eac5f5d043b801c05fe04056375ed6661f862bc21\"" Jul 10 01:13:17.531485 env[1363]: time="2025-07-10T01:13:17.531458430Z" level=info msg="StartContainer for \"9a5d4a598e938ac14cd5303eac5f5d043b801c05fe04056375ed6661f862bc21\" returns successfully" Jul 10 01:13:17.589007 kubelet[2299]: E0710 01:13:17.588698 2299 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 01:13:17.589007 kubelet[2299]: W0710 01:13:17.588714 2299 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 01:13:17.589007 kubelet[2299]: E0710 01:13:17.588737 2299 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 01:13:17.589007 kubelet[2299]: E0710 01:13:17.588856 2299 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 01:13:17.589007 kubelet[2299]: W0710 01:13:17.588861 2299 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 01:13:17.589007 kubelet[2299]: E0710 01:13:17.588866 2299 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 01:13:17.589007 kubelet[2299]: E0710 01:13:17.588951 2299 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 01:13:17.589007 kubelet[2299]: W0710 01:13:17.588955 2299 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 01:13:17.589007 kubelet[2299]: E0710 01:13:17.588960 2299 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 01:13:17.589892 kubelet[2299]: E0710 01:13:17.589512 2299 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 01:13:17.589892 kubelet[2299]: W0710 01:13:17.589518 2299 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 01:13:17.589892 kubelet[2299]: E0710 01:13:17.589525 2299 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 01:13:17.589892 kubelet[2299]: E0710 01:13:17.589619 2299 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 01:13:17.589892 kubelet[2299]: W0710 01:13:17.589623 2299 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 01:13:17.589892 kubelet[2299]: E0710 01:13:17.589629 2299 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 01:13:17.589892 kubelet[2299]: E0710 01:13:17.589715 2299 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 01:13:17.589892 kubelet[2299]: W0710 01:13:17.589719 2299 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 01:13:17.589892 kubelet[2299]: E0710 01:13:17.589724 2299 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 01:13:17.589892 kubelet[2299]: E0710 01:13:17.589806 2299 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 01:13:17.590116 kubelet[2299]: W0710 01:13:17.589811 2299 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 01:13:17.590116 kubelet[2299]: E0710 01:13:17.589816 2299 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 01:13:17.590438 kubelet[2299]: E0710 01:13:17.590187 2299 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 01:13:17.590438 kubelet[2299]: W0710 01:13:17.590194 2299 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 01:13:17.590438 kubelet[2299]: E0710 01:13:17.590199 2299 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 01:13:17.590438 kubelet[2299]: E0710 01:13:17.590291 2299 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 01:13:17.590438 kubelet[2299]: W0710 01:13:17.590296 2299 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 01:13:17.590438 kubelet[2299]: E0710 01:13:17.590301 2299 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 01:13:17.590438 kubelet[2299]: E0710 01:13:17.590385 2299 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 01:13:17.590438 kubelet[2299]: W0710 01:13:17.590389 2299 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 01:13:17.590438 kubelet[2299]: E0710 01:13:17.590394 2299 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 01:13:17.590848 kubelet[2299]: E0710 01:13:17.590696 2299 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 01:13:17.590848 kubelet[2299]: W0710 01:13:17.590702 2299 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 01:13:17.590848 kubelet[2299]: E0710 01:13:17.590713 2299 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 01:13:17.590848 kubelet[2299]: E0710 01:13:17.590794 2299 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 01:13:17.590848 kubelet[2299]: W0710 01:13:17.590799 2299 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 01:13:17.590848 kubelet[2299]: E0710 01:13:17.590804 2299 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 01:13:17.591114 kubelet[2299]: E0710 01:13:17.591034 2299 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 01:13:17.591114 kubelet[2299]: W0710 01:13:17.591040 2299 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 01:13:17.591114 kubelet[2299]: E0710 01:13:17.591045 2299 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 01:13:17.591276 kubelet[2299]: E0710 01:13:17.591230 2299 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 01:13:17.591276 kubelet[2299]: W0710 01:13:17.591236 2299 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 01:13:17.591276 kubelet[2299]: E0710 01:13:17.591241 2299 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 01:13:17.591452 kubelet[2299]: E0710 01:13:17.591406 2299 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 01:13:17.591452 kubelet[2299]: W0710 01:13:17.591412 2299 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 01:13:17.591452 kubelet[2299]: E0710 01:13:17.591417 2299 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 01:13:17.597602 kubelet[2299]: E0710 01:13:17.597587 2299 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 01:13:17.597720 kubelet[2299]: W0710 01:13:17.597709 2299 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 01:13:17.597787 kubelet[2299]: E0710 01:13:17.597777 2299 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 01:13:17.598009 kubelet[2299]: E0710 01:13:17.598002 2299 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 01:13:17.598066 kubelet[2299]: W0710 01:13:17.598058 2299 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 01:13:17.598126 kubelet[2299]: E0710 01:13:17.598112 2299 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 01:13:17.598273 kubelet[2299]: E0710 01:13:17.598266 2299 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 01:13:17.598324 kubelet[2299]: W0710 01:13:17.598315 2299 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 01:13:17.598388 kubelet[2299]: E0710 01:13:17.598379 2299 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 01:13:17.598513 kubelet[2299]: E0710 01:13:17.598502 2299 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 01:13:17.598513 kubelet[2299]: W0710 01:13:17.598511 2299 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 01:13:17.598586 kubelet[2299]: E0710 01:13:17.598522 2299 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 01:13:17.598701 kubelet[2299]: E0710 01:13:17.598691 2299 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 01:13:17.598740 kubelet[2299]: W0710 01:13:17.598700 2299 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 01:13:17.598740 kubelet[2299]: E0710 01:13:17.598713 2299 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 01:13:17.598816 kubelet[2299]: E0710 01:13:17.598808 2299 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 01:13:17.598816 kubelet[2299]: W0710 01:13:17.598814 2299 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 01:13:17.598878 kubelet[2299]: E0710 01:13:17.598822 2299 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 01:13:17.598976 kubelet[2299]: E0710 01:13:17.598966 2299 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 01:13:17.598976 kubelet[2299]: W0710 01:13:17.598974 2299 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 01:13:17.599038 kubelet[2299]: E0710 01:13:17.598980 2299 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 01:13:17.609358 kubelet[2299]: E0710 01:13:17.609291 2299 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 01:13:17.609358 kubelet[2299]: W0710 01:13:17.609309 2299 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 01:13:17.609358 kubelet[2299]: E0710 01:13:17.609329 2299 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 01:13:17.609606 kubelet[2299]: E0710 01:13:17.609599 2299 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 01:13:17.609661 kubelet[2299]: W0710 01:13:17.609652 2299 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 01:13:17.609751 kubelet[2299]: E0710 01:13:17.609708 2299 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 01:13:17.609860 kubelet[2299]: E0710 01:13:17.609853 2299 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 01:13:17.609910 kubelet[2299]: W0710 01:13:17.609902 2299 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 01:13:17.609990 kubelet[2299]: E0710 01:13:17.609984 2299 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 01:13:17.610083 kubelet[2299]: E0710 01:13:17.610077 2299 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 01:13:17.610127 kubelet[2299]: W0710 01:13:17.610119 2299 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 01:13:17.610209 kubelet[2299]: E0710 01:13:17.610201 2299 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 01:13:17.610304 kubelet[2299]: E0710 01:13:17.610298 2299 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 01:13:17.610350 kubelet[2299]: W0710 01:13:17.610342 2299 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 01:13:17.610400 kubelet[2299]: E0710 01:13:17.610393 2299 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 01:13:17.610530 kubelet[2299]: E0710 01:13:17.610524 2299 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 01:13:17.610577 kubelet[2299]: W0710 01:13:17.610569 2299 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 01:13:17.610627 kubelet[2299]: E0710 01:13:17.610619 2299 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 01:13:17.610762 kubelet[2299]: E0710 01:13:17.610756 2299 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 01:13:17.610810 kubelet[2299]: W0710 01:13:17.610802 2299 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 01:13:17.610857 kubelet[2299]: E0710 01:13:17.610850 2299 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 01:13:17.610993 kubelet[2299]: E0710 01:13:17.610987 2299 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 01:13:17.611041 kubelet[2299]: W0710 01:13:17.611032 2299 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 01:13:17.611088 kubelet[2299]: E0710 01:13:17.611080 2299 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 01:13:17.612582 kubelet[2299]: E0710 01:13:17.612575 2299 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 01:13:17.612637 kubelet[2299]: W0710 01:13:17.612628 2299 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 01:13:17.612699 kubelet[2299]: E0710 01:13:17.612691 2299 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 01:13:17.612898 kubelet[2299]: E0710 01:13:17.612891 2299 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 01:13:17.612948 kubelet[2299]: W0710 01:13:17.612939 2299 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 01:13:17.613001 kubelet[2299]: E0710 01:13:17.612992 2299 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 01:13:17.613147 kubelet[2299]: E0710 01:13:17.613141 2299 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 01:13:17.613197 kubelet[2299]: W0710 01:13:17.613187 2299 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 01:13:17.613246 kubelet[2299]: E0710 01:13:17.613238 2299 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 01:13:17.628806 kubelet[2299]: I0710 01:13:17.628764 2299 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-66ddcf689b-z7vqm" podStartSLOduration=0.806281768 podStartE2EDuration="3.628753356s" podCreationTimestamp="2025-07-10 01:13:14 +0000 UTC" firstStartedPulling="2025-07-10 01:13:14.556775256 +0000 UTC m=+14.425202135" lastFinishedPulling="2025-07-10 01:13:17.379246834 +0000 UTC m=+17.247673723" observedRunningTime="2025-07-10 01:13:17.626624309 +0000 UTC m=+17.495051200" watchObservedRunningTime="2025-07-10 01:13:17.628753356 +0000 UTC m=+17.497180243" Jul 10 01:13:18.495229 kubelet[2299]: E0710 01:13:18.495144 2299 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-b48c6" podUID="c15a8f19-7056-4133-9713-c590210e2422" Jul 10 01:13:18.569018 kubelet[2299]: I0710 01:13:18.568991 2299 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 10 01:13:18.595782 kubelet[2299]: E0710 01:13:18.595756 2299 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 01:13:18.595782 kubelet[2299]: W0710 01:13:18.595774 2299 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 01:13:18.596159 kubelet[2299]: E0710 01:13:18.595791 2299 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 01:13:18.596159 kubelet[2299]: E0710 01:13:18.595959 2299 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 01:13:18.596159 kubelet[2299]: W0710 01:13:18.595966 2299 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 01:13:18.596159 kubelet[2299]: E0710 01:13:18.595975 2299 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 01:13:18.596159 kubelet[2299]: E0710 01:13:18.596079 2299 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 01:13:18.596159 kubelet[2299]: W0710 01:13:18.596085 2299 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 01:13:18.596159 kubelet[2299]: E0710 01:13:18.596098 2299 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 01:13:18.596456 kubelet[2299]: E0710 01:13:18.596209 2299 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 01:13:18.596456 kubelet[2299]: W0710 01:13:18.596222 2299 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 01:13:18.596456 kubelet[2299]: E0710 01:13:18.596231 2299 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 01:13:18.596456 kubelet[2299]: E0710 01:13:18.596343 2299 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 01:13:18.596456 kubelet[2299]: W0710 01:13:18.596349 2299 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 01:13:18.596456 kubelet[2299]: E0710 01:13:18.596363 2299 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 01:13:18.596675 kubelet[2299]: E0710 01:13:18.596468 2299 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 01:13:18.596675 kubelet[2299]: W0710 01:13:18.596474 2299 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 01:13:18.596675 kubelet[2299]: E0710 01:13:18.596488 2299 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 01:13:18.596675 kubelet[2299]: E0710 01:13:18.596591 2299 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 01:13:18.596675 kubelet[2299]: W0710 01:13:18.596598 2299 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 01:13:18.596675 kubelet[2299]: E0710 01:13:18.596611 2299 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 01:13:18.596879 kubelet[2299]: E0710 01:13:18.596731 2299 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 01:13:18.596879 kubelet[2299]: W0710 01:13:18.596738 2299 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 01:13:18.596879 kubelet[2299]: E0710 01:13:18.596753 2299 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 01:13:18.596879 kubelet[2299]: E0710 01:13:18.596874 2299 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 01:13:18.597023 kubelet[2299]: W0710 01:13:18.596882 2299 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 01:13:18.597023 kubelet[2299]: E0710 01:13:18.596897 2299 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 01:13:18.597023 kubelet[2299]: E0710 01:13:18.597009 2299 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 01:13:18.597127 kubelet[2299]: W0710 01:13:18.597022 2299 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 01:13:18.597127 kubelet[2299]: E0710 01:13:18.597032 2299 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 01:13:18.597201 kubelet[2299]: E0710 01:13:18.597136 2299 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 01:13:18.597201 kubelet[2299]: W0710 01:13:18.597143 2299 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 01:13:18.597201 kubelet[2299]: E0710 01:13:18.597157 2299 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 01:13:18.597301 kubelet[2299]: E0710 01:13:18.597260 2299 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 01:13:18.597301 kubelet[2299]: W0710 01:13:18.597266 2299 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 01:13:18.597301 kubelet[2299]: E0710 01:13:18.597279 2299 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 01:13:18.597400 kubelet[2299]: E0710 01:13:18.597389 2299 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 01:13:18.597400 kubelet[2299]: W0710 01:13:18.597395 2299 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 01:13:18.597467 kubelet[2299]: E0710 01:13:18.597408 2299 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 01:13:18.597526 kubelet[2299]: E0710 01:13:18.597511 2299 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 01:13:18.597526 kubelet[2299]: W0710 01:13:18.597519 2299 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 01:13:18.597604 kubelet[2299]: E0710 01:13:18.597532 2299 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 01:13:18.597649 kubelet[2299]: E0710 01:13:18.597637 2299 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 01:13:18.597689 kubelet[2299]: W0710 01:13:18.597652 2299 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 01:13:18.597689 kubelet[2299]: E0710 01:13:18.597661 2299 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 01:13:18.605056 kubelet[2299]: E0710 01:13:18.605032 2299 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 01:13:18.605056 kubelet[2299]: W0710 01:13:18.605052 2299 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 01:13:18.605175 kubelet[2299]: E0710 01:13:18.605069 2299 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 01:13:18.605260 kubelet[2299]: E0710 01:13:18.605247 2299 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 01:13:18.605260 kubelet[2299]: W0710 01:13:18.605264 2299 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 01:13:18.605318 kubelet[2299]: E0710 01:13:18.605275 2299 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 01:13:18.605411 kubelet[2299]: E0710 01:13:18.605397 2299 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 01:13:18.605448 kubelet[2299]: W0710 01:13:18.605415 2299 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 01:13:18.605448 kubelet[2299]: E0710 01:13:18.605425 2299 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 01:13:18.605568 kubelet[2299]: E0710 01:13:18.605557 2299 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 01:13:18.605600 kubelet[2299]: W0710 01:13:18.605575 2299 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 01:13:18.605600 kubelet[2299]: E0710 01:13:18.605587 2299 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 01:13:18.605726 kubelet[2299]: E0710 01:13:18.605714 2299 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 01:13:18.605726 kubelet[2299]: W0710 01:13:18.605725 2299 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 01:13:18.605781 kubelet[2299]: E0710 01:13:18.605732 2299 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 01:13:18.605864 kubelet[2299]: E0710 01:13:18.605848 2299 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 01:13:18.605901 kubelet[2299]: W0710 01:13:18.605864 2299 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 01:13:18.605901 kubelet[2299]: E0710 01:13:18.605875 2299 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 01:13:18.606012 kubelet[2299]: E0710 01:13:18.605999 2299 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 01:13:18.606012 kubelet[2299]: W0710 01:13:18.606008 2299 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 01:13:18.606062 kubelet[2299]: E0710 01:13:18.606021 2299 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 01:13:18.606190 kubelet[2299]: E0710 01:13:18.606180 2299 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 01:13:18.606236 kubelet[2299]: W0710 01:13:18.606227 2299 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 01:13:18.606291 kubelet[2299]: E0710 01:13:18.606282 2299 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 01:13:18.606412 kubelet[2299]: E0710 01:13:18.606397 2299 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 01:13:18.606448 kubelet[2299]: W0710 01:13:18.606415 2299 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 01:13:18.606448 kubelet[2299]: E0710 01:13:18.606429 2299 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 01:13:18.606551 kubelet[2299]: E0710 01:13:18.606541 2299 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 01:13:18.606581 kubelet[2299]: W0710 01:13:18.606557 2299 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 01:13:18.606581 kubelet[2299]: E0710 01:13:18.606570 2299 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 01:13:18.606714 kubelet[2299]: E0710 01:13:18.606701 2299 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 01:13:18.606747 kubelet[2299]: W0710 01:13:18.606713 2299 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 01:13:18.606747 kubelet[2299]: E0710 01:13:18.606725 2299 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 01:13:18.606869 kubelet[2299]: E0710 01:13:18.606856 2299 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 01:13:18.606903 kubelet[2299]: W0710 01:13:18.606865 2299 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 01:13:18.606903 kubelet[2299]: E0710 01:13:18.606880 2299 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 01:13:18.607036 kubelet[2299]: E0710 01:13:18.607029 2299 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 01:13:18.607079 kubelet[2299]: W0710 01:13:18.607071 2299 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 01:13:18.607133 kubelet[2299]: E0710 01:13:18.607125 2299 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 01:13:18.607256 kubelet[2299]: E0710 01:13:18.607240 2299 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 01:13:18.607256 kubelet[2299]: W0710 01:13:18.607255 2299 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 01:13:18.607312 kubelet[2299]: E0710 01:13:18.607266 2299 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 01:13:18.607390 kubelet[2299]: E0710 01:13:18.607378 2299 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 01:13:18.607390 kubelet[2299]: W0710 01:13:18.607388 2299 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 01:13:18.607441 kubelet[2299]: E0710 01:13:18.607397 2299 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 01:13:18.607533 kubelet[2299]: E0710 01:13:18.607521 2299 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 01:13:18.607561 kubelet[2299]: W0710 01:13:18.607531 2299 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 01:13:18.607561 kubelet[2299]: E0710 01:13:18.607547 2299 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 01:13:18.607700 kubelet[2299]: E0710 01:13:18.607693 2299 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 01:13:18.609078 kubelet[2299]: W0710 01:13:18.607775 2299 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 01:13:18.609078 kubelet[2299]: E0710 01:13:18.607788 2299 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 10 01:13:18.609078 kubelet[2299]: E0710 01:13:18.607911 2299 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 10 01:13:18.609078 kubelet[2299]: W0710 01:13:18.607916 2299 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 10 01:13:18.609078 kubelet[2299]: E0710 01:13:18.607922 2299 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 10 01:13:19.589727 env[1363]: time="2025-07-10T01:13:19.589699353Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 01:13:19.594102 env[1363]: time="2025-07-10T01:13:19.594078959Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 01:13:19.597062 env[1363]: time="2025-07-10T01:13:19.597031729Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 01:13:19.600136 env[1363]: time="2025-07-10T01:13:19.600116904Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 01:13:19.600376 env[1363]: time="2025-07-10T01:13:19.600360131Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" returns image reference \"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\"" Jul 10 01:13:19.603053 env[1363]: time="2025-07-10T01:13:19.603029597Z" level=info msg="CreateContainer within sandbox \"9dc4577f9ef3039e231d6f8c765d532b1f3c07ac6e787523cfb69a78230909e1\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jul 10 01:13:19.617336 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2831423613.mount: Deactivated successfully. Jul 10 01:13:19.627904 env[1363]: time="2025-07-10T01:13:19.627866034Z" level=info msg="CreateContainer within sandbox \"9dc4577f9ef3039e231d6f8c765d532b1f3c07ac6e787523cfb69a78230909e1\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"948f8200ab9e75880b131dff9f8134fc1cc78439f9e9f2bde2f0a6769d95f420\"" Jul 10 01:13:19.629796 env[1363]: time="2025-07-10T01:13:19.629777329Z" level=info msg="StartContainer for \"948f8200ab9e75880b131dff9f8134fc1cc78439f9e9f2bde2f0a6769d95f420\"" Jul 10 01:13:19.663227 systemd[1]: run-containerd-runc-k8s.io-948f8200ab9e75880b131dff9f8134fc1cc78439f9e9f2bde2f0a6769d95f420-runc.dL4KdI.mount: Deactivated successfully. 
Jul 10 01:13:19.699215 env[1363]: time="2025-07-10T01:13:19.699190258Z" level=info msg="StartContainer for \"948f8200ab9e75880b131dff9f8134fc1cc78439f9e9f2bde2f0a6769d95f420\" returns successfully" Jul 10 01:13:19.854746 env[1363]: time="2025-07-10T01:13:19.854653762Z" level=info msg="shim disconnected" id=948f8200ab9e75880b131dff9f8134fc1cc78439f9e9f2bde2f0a6769d95f420 Jul 10 01:13:19.854746 env[1363]: time="2025-07-10T01:13:19.854691813Z" level=warning msg="cleaning up after shim disconnected" id=948f8200ab9e75880b131dff9f8134fc1cc78439f9e9f2bde2f0a6769d95f420 namespace=k8s.io Jul 10 01:13:19.854746 env[1363]: time="2025-07-10T01:13:19.854699326Z" level=info msg="cleaning up dead shim" Jul 10 01:13:19.860447 env[1363]: time="2025-07-10T01:13:19.860415994Z" level=warning msg="cleanup warnings time=\"2025-07-10T01:13:19Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2996 runtime=io.containerd.runc.v2\n" Jul 10 01:13:20.495238 kubelet[2299]: E0710 01:13:20.495217 2299 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-b48c6" podUID="c15a8f19-7056-4133-9713-c590210e2422" Jul 10 01:13:20.573482 env[1363]: time="2025-07-10T01:13:20.572801819Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\"" Jul 10 01:13:20.616050 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-948f8200ab9e75880b131dff9f8134fc1cc78439f9e9f2bde2f0a6769d95f420-rootfs.mount: Deactivated successfully. Jul 10 01:13:22.495030 kubelet[2299]: E0710 01:13:22.494802 2299 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-b48c6" podUID="c15a8f19-7056-4133-9713-c590210e2422" Jul 10 01:13:24.495459 kubelet[2299]: E0710 01:13:24.495242 2299 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-b48c6" podUID="c15a8f19-7056-4133-9713-c590210e2422" Jul 10 01:13:25.480122 env[1363]: time="2025-07-10T01:13:25.480090235Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/cni:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 01:13:25.509467 env[1363]: time="2025-07-10T01:13:25.509416592Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 01:13:25.529666 env[1363]: time="2025-07-10T01:13:25.529630710Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/cni:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 01:13:25.549329 env[1363]: time="2025-07-10T01:13:25.549293125Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 01:13:25.549740 env[1363]: time="2025-07-10T01:13:25.549716755Z" 
level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\" returns image reference \"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\"" Jul 10 01:13:25.703333 env[1363]: time="2025-07-10T01:13:25.703301852Z" level=info msg="CreateContainer within sandbox \"9dc4577f9ef3039e231d6f8c765d532b1f3c07ac6e787523cfb69a78230909e1\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jul 10 01:13:25.743295 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1732276213.mount: Deactivated successfully. Jul 10 01:13:25.775157 env[1363]: time="2025-07-10T01:13:25.775123090Z" level=info msg="CreateContainer within sandbox \"9dc4577f9ef3039e231d6f8c765d532b1f3c07ac6e787523cfb69a78230909e1\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"b21589bd2875991e61240fe06b2212cdd85ba4f4ea5f019f5755442838e529dd\"" Jul 10 01:13:25.776086 env[1363]: time="2025-07-10T01:13:25.776070507Z" level=info msg="StartContainer for \"b21589bd2875991e61240fe06b2212cdd85ba4f4ea5f019f5755442838e529dd\"" Jul 10 01:13:25.828099 env[1363]: time="2025-07-10T01:13:25.828068378Z" level=info msg="StartContainer for \"b21589bd2875991e61240fe06b2212cdd85ba4f4ea5f019f5755442838e529dd\" returns successfully" Jul 10 01:13:26.494521 kubelet[2299]: E0710 01:13:26.494485 2299 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-b48c6" podUID="c15a8f19-7056-4133-9713-c590210e2422" Jul 10 01:13:28.494911 kubelet[2299]: E0710 01:13:28.494875 2299 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-b48c6" podUID="c15a8f19-7056-4133-9713-c590210e2422" Jul 10 01:13:29.149851 env[1363]: time="2025-07-10T01:13:29.149798346Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/calico-kubeconfig\": WRITE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 10 01:13:29.165615 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b21589bd2875991e61240fe06b2212cdd85ba4f4ea5f019f5755442838e529dd-rootfs.mount: Deactivated successfully. 
Jul 10 01:13:29.186238 kubelet[2299]: I0710 01:13:29.179775 2299 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Jul 10 01:13:29.189318 env[1363]: time="2025-07-10T01:13:29.189289189Z" level=info msg="shim disconnected" id=b21589bd2875991e61240fe06b2212cdd85ba4f4ea5f019f5755442838e529dd Jul 10 01:13:29.189434 env[1363]: time="2025-07-10T01:13:29.189420730Z" level=warning msg="cleaning up after shim disconnected" id=b21589bd2875991e61240fe06b2212cdd85ba4f4ea5f019f5755442838e529dd namespace=k8s.io Jul 10 01:13:29.189489 env[1363]: time="2025-07-10T01:13:29.189477238Z" level=info msg="cleaning up dead shim" Jul 10 01:13:29.198735 env[1363]: time="2025-07-10T01:13:29.198707193Z" level=warning msg="cleanup warnings time=\"2025-07-10T01:13:29Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3066 runtime=io.containerd.runc.v2\n" Jul 10 01:13:29.372945 kubelet[2299]: I0710 01:13:29.372918 2299 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rpnqs\" (UniqueName: \"kubernetes.io/projected/5f01bcaa-ff1c-4bd5-988b-d3c60c6abdcc-kube-api-access-rpnqs\") pod \"calico-kube-controllers-5477ff879d-j2p5q\" (UID: \"5f01bcaa-ff1c-4bd5-988b-d3c60c6abdcc\") " pod="calico-system/calico-kube-controllers-5477ff879d-j2p5q" Jul 10 01:13:29.373110 kubelet[2299]: I0710 01:13:29.373095 2299 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/8e8146e9-6407-49b7-8cef-e26dac385734-calico-apiserver-certs\") pod \"calico-apiserver-6d44674bc4-w2f48\" (UID: \"8e8146e9-6407-49b7-8cef-e26dac385734\") " pod="calico-apiserver/calico-apiserver-6d44674bc4-w2f48" Jul 10 01:13:29.373212 kubelet[2299]: I0710 01:13:29.373198 2299 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5f01bcaa-ff1c-4bd5-988b-d3c60c6abdcc-tigera-ca-bundle\") pod \"calico-kube-controllers-5477ff879d-j2p5q\" (UID: \"5f01bcaa-ff1c-4bd5-988b-d3c60c6abdcc\") " pod="calico-system/calico-kube-controllers-5477ff879d-j2p5q" Jul 10 01:13:29.373281 kubelet[2299]: I0710 01:13:29.373270 2299 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r5vvj\" (UniqueName: \"kubernetes.io/projected/8e8146e9-6407-49b7-8cef-e26dac385734-kube-api-access-r5vvj\") pod \"calico-apiserver-6d44674bc4-w2f48\" (UID: \"8e8146e9-6407-49b7-8cef-e26dac385734\") " pod="calico-apiserver/calico-apiserver-6d44674bc4-w2f48" Jul 10 01:13:29.479004 kubelet[2299]: I0710 01:13:29.478938 2299 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-47zqf\" (UniqueName: \"kubernetes.io/projected/74cf1bc5-5d5a-4dc7-850a-71013984af05-kube-api-access-47zqf\") pod \"calico-apiserver-6d44674bc4-b2wqb\" (UID: \"74cf1bc5-5d5a-4dc7-850a-71013984af05\") " pod="calico-apiserver/calico-apiserver-6d44674bc4-b2wqb" Jul 10 01:13:29.479139 kubelet[2299]: I0710 01:13:29.479127 2299 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/389b9b23-8476-4f37-b3d0-fe7b86da7d65-whisker-backend-key-pair\") pod \"whisker-66c5d4d86b-jc5cs\" (UID: \"389b9b23-8476-4f37-b3d0-fe7b86da7d65\") " pod="calico-system/whisker-66c5d4d86b-jc5cs" Jul 10 01:13:29.479228 kubelet[2299]: I0710 01:13:29.479216 2299 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ced04dc5-79ee-4a07-a568-b0fd4007f64c-goldmane-ca-bundle\") pod \"goldmane-58fd7646b9-zxwst\" (UID: \"ced04dc5-79ee-4a07-a568-b0fd4007f64c\") " pod="calico-system/goldmane-58fd7646b9-zxwst" Jul 10 01:13:29.483155 kubelet[2299]: I0710 01:13:29.479310 2299 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k6qsp\" (UniqueName: \"kubernetes.io/projected/ced04dc5-79ee-4a07-a568-b0fd4007f64c-kube-api-access-k6qsp\") pod \"goldmane-58fd7646b9-zxwst\" (UID: \"ced04dc5-79ee-4a07-a568-b0fd4007f64c\") " pod="calico-system/goldmane-58fd7646b9-zxwst" Jul 10 01:13:29.483155 kubelet[2299]: I0710 01:13:29.479324 2299 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4bl2z\" (UniqueName: \"kubernetes.io/projected/a29ef6dc-4246-436d-87dd-9c8e96247aeb-kube-api-access-4bl2z\") pod \"coredns-7c65d6cfc9-4k5ld\" (UID: \"a29ef6dc-4246-436d-87dd-9c8e96247aeb\") " pod="kube-system/coredns-7c65d6cfc9-4k5ld" Jul 10 01:13:29.483155 kubelet[2299]: I0710 01:13:29.479337 2299 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pwvqb\" (UniqueName: \"kubernetes.io/projected/3459c244-a1ae-43bc-ad86-239a6e665757-kube-api-access-pwvqb\") pod \"coredns-7c65d6cfc9-snhl5\" (UID: \"3459c244-a1ae-43bc-ad86-239a6e665757\") " pod="kube-system/coredns-7c65d6cfc9-snhl5" Jul 10 01:13:29.483155 kubelet[2299]: I0710 01:13:29.479347 2299 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/74cf1bc5-5d5a-4dc7-850a-71013984af05-calico-apiserver-certs\") pod \"calico-apiserver-6d44674bc4-b2wqb\" (UID: \"74cf1bc5-5d5a-4dc7-850a-71013984af05\") " pod="calico-apiserver/calico-apiserver-6d44674bc4-b2wqb" Jul 10 01:13:29.483155 kubelet[2299]: I0710 01:13:29.479362 2299 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/389b9b23-8476-4f37-b3d0-fe7b86da7d65-whisker-ca-bundle\") pod \"whisker-66c5d4d86b-jc5cs\" (UID: \"389b9b23-8476-4f37-b3d0-fe7b86da7d65\") " pod="calico-system/whisker-66c5d4d86b-jc5cs" Jul 10 01:13:29.483345 kubelet[2299]: I0710 01:13:29.479381 2299 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3459c244-a1ae-43bc-ad86-239a6e665757-config-volume\") pod \"coredns-7c65d6cfc9-snhl5\" (UID: \"3459c244-a1ae-43bc-ad86-239a6e665757\") " pod="kube-system/coredns-7c65d6cfc9-snhl5" Jul 10 01:13:29.483345 kubelet[2299]: I0710 01:13:29.479394 2299 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a29ef6dc-4246-436d-87dd-9c8e96247aeb-config-volume\") pod \"coredns-7c65d6cfc9-4k5ld\" (UID: \"a29ef6dc-4246-436d-87dd-9c8e96247aeb\") " pod="kube-system/coredns-7c65d6cfc9-4k5ld" Jul 10 01:13:29.483345 kubelet[2299]: I0710 01:13:29.479416 2299 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ced04dc5-79ee-4a07-a568-b0fd4007f64c-config\") pod \"goldmane-58fd7646b9-zxwst\" (UID: 
\"ced04dc5-79ee-4a07-a568-b0fd4007f64c\") " pod="calico-system/goldmane-58fd7646b9-zxwst" Jul 10 01:13:29.483345 kubelet[2299]: I0710 01:13:29.479428 2299 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2v5rc\" (UniqueName: \"kubernetes.io/projected/389b9b23-8476-4f37-b3d0-fe7b86da7d65-kube-api-access-2v5rc\") pod \"whisker-66c5d4d86b-jc5cs\" (UID: \"389b9b23-8476-4f37-b3d0-fe7b86da7d65\") " pod="calico-system/whisker-66c5d4d86b-jc5cs" Jul 10 01:13:29.483345 kubelet[2299]: I0710 01:13:29.479440 2299 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/ced04dc5-79ee-4a07-a568-b0fd4007f64c-goldmane-key-pair\") pod \"goldmane-58fd7646b9-zxwst\" (UID: \"ced04dc5-79ee-4a07-a568-b0fd4007f64c\") " pod="calico-system/goldmane-58fd7646b9-zxwst" Jul 10 01:13:29.604028 env[1363]: time="2025-07-10T01:13:29.604002323Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\"" Jul 10 01:13:29.801027 env[1363]: time="2025-07-10T01:13:29.800854503Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5477ff879d-j2p5q,Uid:5f01bcaa-ff1c-4bd5-988b-d3c60c6abdcc,Namespace:calico-system,Attempt:0,}" Jul 10 01:13:29.801027 env[1363]: time="2025-07-10T01:13:29.800892328Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-4k5ld,Uid:a29ef6dc-4246-436d-87dd-9c8e96247aeb,Namespace:kube-system,Attempt:0,}" Jul 10 01:13:29.801340 env[1363]: time="2025-07-10T01:13:29.801257677Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6d44674bc4-w2f48,Uid:8e8146e9-6407-49b7-8cef-e26dac385734,Namespace:calico-apiserver,Attempt:0,}" Jul 10 01:13:29.804807 env[1363]: time="2025-07-10T01:13:29.804718355Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6d44674bc4-b2wqb,Uid:74cf1bc5-5d5a-4dc7-850a-71013984af05,Namespace:calico-apiserver,Attempt:0,}" Jul 10 01:13:29.806245 env[1363]: time="2025-07-10T01:13:29.806149847Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-66c5d4d86b-jc5cs,Uid:389b9b23-8476-4f37-b3d0-fe7b86da7d65,Namespace:calico-system,Attempt:0,}" Jul 10 01:13:29.830054 env[1363]: time="2025-07-10T01:13:29.829904760Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-snhl5,Uid:3459c244-a1ae-43bc-ad86-239a6e665757,Namespace:kube-system,Attempt:0,}" Jul 10 01:13:29.839686 env[1363]: time="2025-07-10T01:13:29.839658857Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-zxwst,Uid:ced04dc5-79ee-4a07-a568-b0fd4007f64c,Namespace:calico-system,Attempt:0,}" Jul 10 01:13:30.496124 env[1363]: time="2025-07-10T01:13:30.496097572Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-b48c6,Uid:c15a8f19-7056-4133-9713-c590210e2422,Namespace:calico-system,Attempt:0,}" Jul 10 01:13:31.145877 env[1363]: time="2025-07-10T01:13:31.145824215Z" level=error msg="Failed to destroy network for sandbox \"d08209f28426fb10a90356c7c2f30ce87cefef9ed58d075482b630394972a62b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 01:13:31.146230 env[1363]: time="2025-07-10T01:13:31.146210599Z" level=error msg="encountered an error cleaning up failed sandbox \"d08209f28426fb10a90356c7c2f30ce87cefef9ed58d075482b630394972a62b\", marking 
sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 01:13:31.146323 env[1363]: time="2025-07-10T01:13:31.146305424Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-4k5ld,Uid:a29ef6dc-4246-436d-87dd-9c8e96247aeb,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"d08209f28426fb10a90356c7c2f30ce87cefef9ed58d075482b630394972a62b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 01:13:31.147617 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-d08209f28426fb10a90356c7c2f30ce87cefef9ed58d075482b630394972a62b-shm.mount: Deactivated successfully. Jul 10 01:13:31.153585 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-aea5f2eb698db2d51d4b0d03a6e4b3fb312a638bd55c00688360d004a661efd9-shm.mount: Deactivated successfully. Jul 10 01:13:31.159519 env[1363]: time="2025-07-10T01:13:31.150784076Z" level=error msg="Failed to destroy network for sandbox \"aea5f2eb698db2d51d4b0d03a6e4b3fb312a638bd55c00688360d004a661efd9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 01:13:31.159519 env[1363]: time="2025-07-10T01:13:31.150997567Z" level=error msg="encountered an error cleaning up failed sandbox \"aea5f2eb698db2d51d4b0d03a6e4b3fb312a638bd55c00688360d004a661efd9\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 01:13:31.159519 env[1363]: time="2025-07-10T01:13:31.151024776Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-66c5d4d86b-jc5cs,Uid:389b9b23-8476-4f37-b3d0-fe7b86da7d65,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"aea5f2eb698db2d51d4b0d03a6e4b3fb312a638bd55c00688360d004a661efd9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 01:13:31.159519 env[1363]: time="2025-07-10T01:13:31.155730683Z" level=error msg="Failed to destroy network for sandbox \"a4581136627fe17a05f5104c5de93fa80c47188a9b894aba2bdf6734c99e3096\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 01:13:31.159519 env[1363]: time="2025-07-10T01:13:31.155973822Z" level=error msg="encountered an error cleaning up failed sandbox \"a4581136627fe17a05f5104c5de93fa80c47188a9b894aba2bdf6734c99e3096\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 01:13:31.159519 env[1363]: time="2025-07-10T01:13:31.156007933Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-b48c6,Uid:c15a8f19-7056-4133-9713-c590210e2422,Namespace:calico-system,Attempt:0,} failed, 
error" error="failed to setup network for sandbox \"a4581136627fe17a05f5104c5de93fa80c47188a9b894aba2bdf6734c99e3096\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 01:13:31.162477 env[1363]: time="2025-07-10T01:13:31.162449697Z" level=error msg="Failed to destroy network for sandbox \"37d614c8da7410e503f5faf653a41dc5309991646c01cee8381d58fe5a81a5a4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 01:13:31.162753 env[1363]: time="2025-07-10T01:13:31.162735667Z" level=error msg="encountered an error cleaning up failed sandbox \"37d614c8da7410e503f5faf653a41dc5309991646c01cee8381d58fe5a81a5a4\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 01:13:31.162829 env[1363]: time="2025-07-10T01:13:31.162810730Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6d44674bc4-b2wqb,Uid:74cf1bc5-5d5a-4dc7-850a-71013984af05,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"37d614c8da7410e503f5faf653a41dc5309991646c01cee8381d58fe5a81a5a4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 01:13:31.163417 env[1363]: time="2025-07-10T01:13:31.163387337Z" level=error msg="Failed to destroy network for sandbox \"604f17610fdd074a3911c340cf576eef8419f4b26a87fad8a6a4345c0cd39943\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 01:13:31.163604 env[1363]: time="2025-07-10T01:13:31.163583065Z" level=error msg="encountered an error cleaning up failed sandbox \"604f17610fdd074a3911c340cf576eef8419f4b26a87fad8a6a4345c0cd39943\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 01:13:31.169168 env[1363]: time="2025-07-10T01:13:31.163608159Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6d44674bc4-w2f48,Uid:8e8146e9-6407-49b7-8cef-e26dac385734,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"604f17610fdd074a3911c340cf576eef8419f4b26a87fad8a6a4345c0cd39943\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 01:13:31.166589 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a4581136627fe17a05f5104c5de93fa80c47188a9b894aba2bdf6734c99e3096-shm.mount: Deactivated successfully. Jul 10 01:13:31.166683 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-604f17610fdd074a3911c340cf576eef8419f4b26a87fad8a6a4345c0cd39943-shm.mount: Deactivated successfully. 
Jul 10 01:13:31.178881 env[1363]: time="2025-07-10T01:13:31.173337429Z" level=error msg="Failed to destroy network for sandbox \"eb3cfca5f1219fd7b024127b39867c352bbcd18cadb3f774b2a9b88ac71868e6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 01:13:31.178881 env[1363]: time="2025-07-10T01:13:31.173604257Z" level=error msg="encountered an error cleaning up failed sandbox \"eb3cfca5f1219fd7b024127b39867c352bbcd18cadb3f774b2a9b88ac71868e6\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 01:13:31.178881 env[1363]: time="2025-07-10T01:13:31.173634412Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5477ff879d-j2p5q,Uid:5f01bcaa-ff1c-4bd5-988b-d3c60c6abdcc,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"eb3cfca5f1219fd7b024127b39867c352bbcd18cadb3f774b2a9b88ac71868e6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 01:13:31.178881 env[1363]: time="2025-07-10T01:13:31.175693434Z" level=error msg="Failed to destroy network for sandbox \"14835a1d23b75e28ba1fef0944ee52d74bf2ce2c1e062de723f2121f6c8271e5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 01:13:31.178881 env[1363]: time="2025-07-10T01:13:31.177183639Z" level=error msg="encountered an error cleaning up failed sandbox \"14835a1d23b75e28ba1fef0944ee52d74bf2ce2c1e062de723f2121f6c8271e5\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 01:13:31.178881 env[1363]: time="2025-07-10T01:13:31.177214079Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-snhl5,Uid:3459c244-a1ae-43bc-ad86-239a6e665757,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"14835a1d23b75e28ba1fef0944ee52d74bf2ce2c1e062de723f2121f6c8271e5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 01:13:31.166740 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-37d614c8da7410e503f5faf653a41dc5309991646c01cee8381d58fe5a81a5a4-shm.mount: Deactivated successfully. Jul 10 01:13:31.176104 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-eb3cfca5f1219fd7b024127b39867c352bbcd18cadb3f774b2a9b88ac71868e6-shm.mount: Deactivated successfully. Jul 10 01:13:31.177952 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-14835a1d23b75e28ba1fef0944ee52d74bf2ce2c1e062de723f2121f6c8271e5-shm.mount: Deactivated successfully. 
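The run-containerd-...-shm.mount units being cleaned up here are systemd mount units whose names encode the mounted path: '-' stands in for '/', and a literal '-' inside a path component is escaped as '\x2d' (as in the var-lib-containerd-tmpmounts unit further down in this log). A small helper, written here only for illustration, that maps such a unit name back to its path:

    #!/usr/bin/env python3
    # Illustrative helper: turn a systemd mount-unit name back into the path it mounts.
    # In unit names '-' encodes '/', and '\xNN' encodes the byte 0xNN (so '\x2d' is a literal '-').
    # Good enough for the unit names appearing in this journal; not a full systemd unescaper.

    def unit_to_path(unit: str) -> str:
        name = unit[:-len(".mount")] if unit.endswith(".mount") else unit
        out, i = [], 0
        while i < len(name):
            if name[i] == "-":
                out.append("/")
                i += 1
            elif name.startswith("\\x", i) and i + 4 <= len(name):
                out.append(chr(int(name[i + 2:i + 4], 16)))
                i += 4
            else:
                out.append(name[i])
                i += 1
        return "/" + "".join(out)

    if __name__ == "__main__":
        # Unit name taken from a later entry in this log.
        print(unit_to_path("var-lib-containerd-tmpmounts-containerd\\x2dmount1813845069.mount"))
        # -> /var/lib/containerd/tmpmounts/containerd-mount1813845069

Applied to the shm units above it yields paths of the form /run/containerd/io.containerd.grpc.v1.cri/sandboxes/<id>/shm, the per-sandbox /dev/shm mount that containerd tears down when the failed sandbox is cleaned up.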
Jul 10 01:13:31.201040 env[1363]: time="2025-07-10T01:13:31.185490730Z" level=error msg="Failed to destroy network for sandbox \"3e37249528bb3e0be92befd65b6647a34c4c854d8942b3cdda871096eeadbddb\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 01:13:31.201040 env[1363]: time="2025-07-10T01:13:31.187195926Z" level=error msg="encountered an error cleaning up failed sandbox \"3e37249528bb3e0be92befd65b6647a34c4c854d8942b3cdda871096eeadbddb\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 01:13:31.201040 env[1363]: time="2025-07-10T01:13:31.187225936Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-zxwst,Uid:ced04dc5-79ee-4a07-a568-b0fd4007f64c,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"3e37249528bb3e0be92befd65b6647a34c4c854d8942b3cdda871096eeadbddb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 01:13:31.201258 kubelet[2299]: E0710 01:13:31.179436 2299 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"14835a1d23b75e28ba1fef0944ee52d74bf2ce2c1e062de723f2121f6c8271e5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 01:13:31.201258 kubelet[2299]: E0710 01:13:31.191755 2299 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"14835a1d23b75e28ba1fef0944ee52d74bf2ce2c1e062de723f2121f6c8271e5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-snhl5" Jul 10 01:13:31.187057 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-3e37249528bb3e0be92befd65b6647a34c4c854d8942b3cdda871096eeadbddb-shm.mount: Deactivated successfully. 
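From here the single containerd error fans out through the kubelet: log.go records the failed RunPodSandbox RPC, kuberuntime_sandbox.go and kuberuntime_manager.go wrap it as a CreatePodSandboxError, and pod_workers.go abandons the sync attempt with "Error syncing pod, skipping", once per affected pod. A throwaway triage sketch (the regexes assume this journal's formatting and nothing more) that groups those entries by pod and counts the distinct sandbox IDs involved:

    #!/usr/bin/env python3
    # Illustrative log triage: group the "failed to setup network for sandbox" entries by pod.
    # The regexes assume the escaping seen in this journal; adjust them for other formats.
    import collections
    import re
    import sys

    SANDBOX_RE = re.compile(r'failed to setup network for sandbox \\?"([0-9a-f]{64})\\?"')
    POD_RE = re.compile(r'pod="([^"]+)"')

    def summarize(lines):
        failures = collections.defaultdict(set)
        for line in lines:
            pod = POD_RE.search(line)
            sandbox = SANDBOX_RE.search(line)
            if pod and sandbox:
                failures[pod.group(1)].add(sandbox.group(1))
        return failures

    if __name__ == "__main__":
        for pod, sandboxes in sorted(summarize(sys.stdin).items()):
            print(f"{pod}: {len(sandboxes)} failed sandbox(es)")

Fed this section of the journal it reports one failed sandbox per pod, matching the single RunPodSandbox attempt each pod has made at this point in the log.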
Jul 10 01:13:31.208046 kubelet[2299]: E0710 01:13:31.208010 2299 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a4581136627fe17a05f5104c5de93fa80c47188a9b894aba2bdf6734c99e3096\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 01:13:31.208161 kubelet[2299]: E0710 01:13:31.208059 2299 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a4581136627fe17a05f5104c5de93fa80c47188a9b894aba2bdf6734c99e3096\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-b48c6" Jul 10 01:13:31.208161 kubelet[2299]: E0710 01:13:31.208092 2299 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a4581136627fe17a05f5104c5de93fa80c47188a9b894aba2bdf6734c99e3096\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-b48c6" Jul 10 01:13:31.208161 kubelet[2299]: E0710 01:13:31.208127 2299 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-b48c6_calico-system(c15a8f19-7056-4133-9713-c590210e2422)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-b48c6_calico-system(c15a8f19-7056-4133-9713-c590210e2422)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a4581136627fe17a05f5104c5de93fa80c47188a9b894aba2bdf6734c99e3096\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-b48c6" podUID="c15a8f19-7056-4133-9713-c590210e2422" Jul 10 01:13:31.208254 kubelet[2299]: E0710 01:13:31.208167 2299 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"37d614c8da7410e503f5faf653a41dc5309991646c01cee8381d58fe5a81a5a4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 01:13:31.208254 kubelet[2299]: E0710 01:13:31.208180 2299 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"37d614c8da7410e503f5faf653a41dc5309991646c01cee8381d58fe5a81a5a4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6d44674bc4-b2wqb" Jul 10 01:13:31.208254 kubelet[2299]: E0710 01:13:31.208189 2299 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"37d614c8da7410e503f5faf653a41dc5309991646c01cee8381d58fe5a81a5a4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6d44674bc4-b2wqb" Jul 10 01:13:31.208320 kubelet[2299]: E0710 01:13:31.208212 2299 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6d44674bc4-b2wqb_calico-apiserver(74cf1bc5-5d5a-4dc7-850a-71013984af05)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6d44674bc4-b2wqb_calico-apiserver(74cf1bc5-5d5a-4dc7-850a-71013984af05)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"37d614c8da7410e503f5faf653a41dc5309991646c01cee8381d58fe5a81a5a4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6d44674bc4-b2wqb" podUID="74cf1bc5-5d5a-4dc7-850a-71013984af05" Jul 10 01:13:31.208320 kubelet[2299]: E0710 01:13:31.208234 2299 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"604f17610fdd074a3911c340cf576eef8419f4b26a87fad8a6a4345c0cd39943\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 01:13:31.208320 kubelet[2299]: E0710 01:13:31.208247 2299 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"604f17610fdd074a3911c340cf576eef8419f4b26a87fad8a6a4345c0cd39943\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6d44674bc4-w2f48" Jul 10 01:13:31.214783 kubelet[2299]: E0710 01:13:31.208256 2299 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"604f17610fdd074a3911c340cf576eef8419f4b26a87fad8a6a4345c0cd39943\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6d44674bc4-w2f48" Jul 10 01:13:31.214783 kubelet[2299]: E0710 01:13:31.208271 2299 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6d44674bc4-w2f48_calico-apiserver(8e8146e9-6407-49b7-8cef-e26dac385734)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6d44674bc4-w2f48_calico-apiserver(8e8146e9-6407-49b7-8cef-e26dac385734)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"604f17610fdd074a3911c340cf576eef8419f4b26a87fad8a6a4345c0cd39943\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6d44674bc4-w2f48" podUID="8e8146e9-6407-49b7-8cef-e26dac385734" Jul 10 01:13:31.214783 kubelet[2299]: E0710 01:13:31.208287 2299 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"eb3cfca5f1219fd7b024127b39867c352bbcd18cadb3f774b2a9b88ac71868e6\": plugin type=\"calico\" failed (add): 
stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 01:13:31.222292 kubelet[2299]: E0710 01:13:31.208296 2299 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"eb3cfca5f1219fd7b024127b39867c352bbcd18cadb3f774b2a9b88ac71868e6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5477ff879d-j2p5q" Jul 10 01:13:31.222292 kubelet[2299]: E0710 01:13:31.208304 2299 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"eb3cfca5f1219fd7b024127b39867c352bbcd18cadb3f774b2a9b88ac71868e6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5477ff879d-j2p5q" Jul 10 01:13:31.222292 kubelet[2299]: E0710 01:13:31.208315 2299 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-5477ff879d-j2p5q_calico-system(5f01bcaa-ff1c-4bd5-988b-d3c60c6abdcc)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-5477ff879d-j2p5q_calico-system(5f01bcaa-ff1c-4bd5-988b-d3c60c6abdcc)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"eb3cfca5f1219fd7b024127b39867c352bbcd18cadb3f774b2a9b88ac71868e6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5477ff879d-j2p5q" podUID="5f01bcaa-ff1c-4bd5-988b-d3c60c6abdcc" Jul 10 01:13:31.222520 kubelet[2299]: E0710 01:13:31.208431 2299 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"aea5f2eb698db2d51d4b0d03a6e4b3fb312a638bd55c00688360d004a661efd9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 01:13:31.222520 kubelet[2299]: E0710 01:13:31.208443 2299 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"aea5f2eb698db2d51d4b0d03a6e4b3fb312a638bd55c00688360d004a661efd9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-66c5d4d86b-jc5cs" Jul 10 01:13:31.222520 kubelet[2299]: E0710 01:13:31.208450 2299 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"aea5f2eb698db2d51d4b0d03a6e4b3fb312a638bd55c00688360d004a661efd9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-66c5d4d86b-jc5cs" Jul 10 01:13:31.222637 kubelet[2299]: E0710 01:13:31.208474 2299 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"CreatePodSandbox\" for \"whisker-66c5d4d86b-jc5cs_calico-system(389b9b23-8476-4f37-b3d0-fe7b86da7d65)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-66c5d4d86b-jc5cs_calico-system(389b9b23-8476-4f37-b3d0-fe7b86da7d65)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"aea5f2eb698db2d51d4b0d03a6e4b3fb312a638bd55c00688360d004a661efd9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-66c5d4d86b-jc5cs" podUID="389b9b23-8476-4f37-b3d0-fe7b86da7d65" Jul 10 01:13:31.222637 kubelet[2299]: E0710 01:13:31.208537 2299 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3e37249528bb3e0be92befd65b6647a34c4c854d8942b3cdda871096eeadbddb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 01:13:31.225850 kubelet[2299]: E0710 01:13:31.208564 2299 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3e37249528bb3e0be92befd65b6647a34c4c854d8942b3cdda871096eeadbddb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-58fd7646b9-zxwst" Jul 10 01:13:31.225904 kubelet[2299]: E0710 01:13:31.225865 2299 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3e37249528bb3e0be92befd65b6647a34c4c854d8942b3cdda871096eeadbddb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-58fd7646b9-zxwst" Jul 10 01:13:31.225948 kubelet[2299]: E0710 01:13:31.225911 2299 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-58fd7646b9-zxwst_calico-system(ced04dc5-79ee-4a07-a568-b0fd4007f64c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-58fd7646b9-zxwst_calico-system(ced04dc5-79ee-4a07-a568-b0fd4007f64c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3e37249528bb3e0be92befd65b6647a34c4c854d8942b3cdda871096eeadbddb\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-58fd7646b9-zxwst" podUID="ced04dc5-79ee-4a07-a568-b0fd4007f64c" Jul 10 01:13:31.225948 kubelet[2299]: E0710 01:13:31.225943 2299 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"14835a1d23b75e28ba1fef0944ee52d74bf2ce2c1e062de723f2121f6c8271e5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-snhl5" Jul 10 01:13:31.226043 kubelet[2299]: E0710 01:13:31.225959 2299 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"CreatePodSandbox\" for \"coredns-7c65d6cfc9-snhl5_kube-system(3459c244-a1ae-43bc-ad86-239a6e665757)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-snhl5_kube-system(3459c244-a1ae-43bc-ad86-239a6e665757)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"14835a1d23b75e28ba1fef0944ee52d74bf2ce2c1e062de723f2121f6c8271e5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-snhl5" podUID="3459c244-a1ae-43bc-ad86-239a6e665757" Jul 10 01:13:31.236926 kubelet[2299]: E0710 01:13:31.236894 2299 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d08209f28426fb10a90356c7c2f30ce87cefef9ed58d075482b630394972a62b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 01:13:31.237065 kubelet[2299]: E0710 01:13:31.236945 2299 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d08209f28426fb10a90356c7c2f30ce87cefef9ed58d075482b630394972a62b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-4k5ld" Jul 10 01:13:31.237065 kubelet[2299]: E0710 01:13:31.236960 2299 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d08209f28426fb10a90356c7c2f30ce87cefef9ed58d075482b630394972a62b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-4k5ld" Jul 10 01:13:31.237065 kubelet[2299]: E0710 01:13:31.236987 2299 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-4k5ld_kube-system(a29ef6dc-4246-436d-87dd-9c8e96247aeb)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-4k5ld_kube-system(a29ef6dc-4246-436d-87dd-9c8e96247aeb)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d08209f28426fb10a90356c7c2f30ce87cefef9ed58d075482b630394972a62b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-4k5ld" podUID="a29ef6dc-4246-436d-87dd-9c8e96247aeb" Jul 10 01:13:31.608660 kubelet[2299]: I0710 01:13:31.608448 2299 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="604f17610fdd074a3911c340cf576eef8419f4b26a87fad8a6a4345c0cd39943" Jul 10 01:13:31.610808 kubelet[2299]: I0710 01:13:31.610760 2299 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="eb3cfca5f1219fd7b024127b39867c352bbcd18cadb3f774b2a9b88ac71868e6" Jul 10 01:13:31.656128 kubelet[2299]: I0710 01:13:31.656102 2299 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3e37249528bb3e0be92befd65b6647a34c4c854d8942b3cdda871096eeadbddb" Jul 10 01:13:31.663623 
env[1363]: time="2025-07-10T01:13:31.663588567Z" level=info msg="StopPodSandbox for \"604f17610fdd074a3911c340cf576eef8419f4b26a87fad8a6a4345c0cd39943\"" Jul 10 01:13:31.664939 env[1363]: time="2025-07-10T01:13:31.664310218Z" level=info msg="StopPodSandbox for \"eb3cfca5f1219fd7b024127b39867c352bbcd18cadb3f774b2a9b88ac71868e6\"" Jul 10 01:13:31.673137 env[1363]: time="2025-07-10T01:13:31.673085166Z" level=info msg="StopPodSandbox for \"3e37249528bb3e0be92befd65b6647a34c4c854d8942b3cdda871096eeadbddb\"" Jul 10 01:13:31.677598 kubelet[2299]: I0710 01:13:31.677225 2299 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="14835a1d23b75e28ba1fef0944ee52d74bf2ce2c1e062de723f2121f6c8271e5" Jul 10 01:13:31.680379 env[1363]: time="2025-07-10T01:13:31.680347871Z" level=info msg="StopPodSandbox for \"14835a1d23b75e28ba1fef0944ee52d74bf2ce2c1e062de723f2121f6c8271e5\"" Jul 10 01:13:31.685678 kubelet[2299]: I0710 01:13:31.685618 2299 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="37d614c8da7410e503f5faf653a41dc5309991646c01cee8381d58fe5a81a5a4" Jul 10 01:13:31.687536 env[1363]: time="2025-07-10T01:13:31.687486537Z" level=info msg="StopPodSandbox for \"37d614c8da7410e503f5faf653a41dc5309991646c01cee8381d58fe5a81a5a4\"" Jul 10 01:13:31.688785 kubelet[2299]: I0710 01:13:31.688365 2299 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a4581136627fe17a05f5104c5de93fa80c47188a9b894aba2bdf6734c99e3096" Jul 10 01:13:31.688873 env[1363]: time="2025-07-10T01:13:31.688844904Z" level=info msg="StopPodSandbox for \"a4581136627fe17a05f5104c5de93fa80c47188a9b894aba2bdf6734c99e3096\"" Jul 10 01:13:31.691878 kubelet[2299]: I0710 01:13:31.691397 2299 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="aea5f2eb698db2d51d4b0d03a6e4b3fb312a638bd55c00688360d004a661efd9" Jul 10 01:13:31.692465 env[1363]: time="2025-07-10T01:13:31.692423909Z" level=info msg="StopPodSandbox for \"aea5f2eb698db2d51d4b0d03a6e4b3fb312a638bd55c00688360d004a661efd9\"" Jul 10 01:13:31.694250 kubelet[2299]: I0710 01:13:31.693856 2299 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d08209f28426fb10a90356c7c2f30ce87cefef9ed58d075482b630394972a62b" Jul 10 01:13:31.695754 env[1363]: time="2025-07-10T01:13:31.694292811Z" level=info msg="StopPodSandbox for \"d08209f28426fb10a90356c7c2f30ce87cefef9ed58d075482b630394972a62b\"" Jul 10 01:13:31.765313 env[1363]: time="2025-07-10T01:13:31.765271373Z" level=error msg="StopPodSandbox for \"3e37249528bb3e0be92befd65b6647a34c4c854d8942b3cdda871096eeadbddb\" failed" error="failed to destroy network for sandbox \"3e37249528bb3e0be92befd65b6647a34c4c854d8942b3cdda871096eeadbddb\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 01:13:31.765747 kubelet[2299]: E0710 01:13:31.765632 2299 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"3e37249528bb3e0be92befd65b6647a34c4c854d8942b3cdda871096eeadbddb\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="3e37249528bb3e0be92befd65b6647a34c4c854d8942b3cdda871096eeadbddb" Jul 10 01:13:31.781897 env[1363]: time="2025-07-10T01:13:31.781850073Z" 
level=error msg="StopPodSandbox for \"eb3cfca5f1219fd7b024127b39867c352bbcd18cadb3f774b2a9b88ac71868e6\" failed" error="failed to destroy network for sandbox \"eb3cfca5f1219fd7b024127b39867c352bbcd18cadb3f774b2a9b88ac71868e6\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 01:13:31.783177 kubelet[2299]: E0710 01:13:31.783146 2299 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"eb3cfca5f1219fd7b024127b39867c352bbcd18cadb3f774b2a9b88ac71868e6\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="eb3cfca5f1219fd7b024127b39867c352bbcd18cadb3f774b2a9b88ac71868e6" Jul 10 01:13:31.783246 env[1363]: time="2025-07-10T01:13:31.783225056Z" level=error msg="StopPodSandbox for \"14835a1d23b75e28ba1fef0944ee52d74bf2ce2c1e062de723f2121f6c8271e5\" failed" error="failed to destroy network for sandbox \"14835a1d23b75e28ba1fef0944ee52d74bf2ce2c1e062de723f2121f6c8271e5\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 01:13:31.800674 env[1363]: time="2025-07-10T01:13:31.800613385Z" level=error msg="StopPodSandbox for \"604f17610fdd074a3911c340cf576eef8419f4b26a87fad8a6a4345c0cd39943\" failed" error="failed to destroy network for sandbox \"604f17610fdd074a3911c340cf576eef8419f4b26a87fad8a6a4345c0cd39943\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 01:13:31.805584 kubelet[2299]: E0710 01:13:31.805338 2299 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"604f17610fdd074a3911c340cf576eef8419f4b26a87fad8a6a4345c0cd39943\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="604f17610fdd074a3911c340cf576eef8419f4b26a87fad8a6a4345c0cd39943" Jul 10 01:13:31.805584 kubelet[2299]: E0710 01:13:31.805376 2299 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"604f17610fdd074a3911c340cf576eef8419f4b26a87fad8a6a4345c0cd39943"} Jul 10 01:13:31.805584 kubelet[2299]: E0710 01:13:31.805423 2299 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"8e8146e9-6407-49b7-8cef-e26dac385734\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"604f17610fdd074a3911c340cf576eef8419f4b26a87fad8a6a4345c0cd39943\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 10 01:13:31.805584 kubelet[2299]: E0710 01:13:31.805442 2299 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"8e8146e9-6407-49b7-8cef-e26dac385734\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox 
\\\"604f17610fdd074a3911c340cf576eef8419f4b26a87fad8a6a4345c0cd39943\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6d44674bc4-w2f48" podUID="8e8146e9-6407-49b7-8cef-e26dac385734" Jul 10 01:13:31.805584 kubelet[2299]: E0710 01:13:31.774668 2299 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"3e37249528bb3e0be92befd65b6647a34c4c854d8942b3cdda871096eeadbddb"} Jul 10 01:13:31.805902 kubelet[2299]: E0710 01:13:31.805475 2299 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"ced04dc5-79ee-4a07-a568-b0fd4007f64c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3e37249528bb3e0be92befd65b6647a34c4c854d8942b3cdda871096eeadbddb\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 10 01:13:31.805902 kubelet[2299]: E0710 01:13:31.783187 2299 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"eb3cfca5f1219fd7b024127b39867c352bbcd18cadb3f774b2a9b88ac71868e6"} Jul 10 01:13:31.805902 kubelet[2299]: E0710 01:13:31.805489 2299 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"ced04dc5-79ee-4a07-a568-b0fd4007f64c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3e37249528bb3e0be92befd65b6647a34c4c854d8942b3cdda871096eeadbddb\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-58fd7646b9-zxwst" podUID="ced04dc5-79ee-4a07-a568-b0fd4007f64c" Jul 10 01:13:31.805902 kubelet[2299]: E0710 01:13:31.805509 2299 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"14835a1d23b75e28ba1fef0944ee52d74bf2ce2c1e062de723f2121f6c8271e5\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="14835a1d23b75e28ba1fef0944ee52d74bf2ce2c1e062de723f2121f6c8271e5" Jul 10 01:13:31.806074 kubelet[2299]: E0710 01:13:31.805518 2299 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"5f01bcaa-ff1c-4bd5-988b-d3c60c6abdcc\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"eb3cfca5f1219fd7b024127b39867c352bbcd18cadb3f774b2a9b88ac71868e6\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 10 01:13:31.806074 kubelet[2299]: E0710 01:13:31.805526 2299 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"14835a1d23b75e28ba1fef0944ee52d74bf2ce2c1e062de723f2121f6c8271e5"} Jul 10 01:13:31.806074 kubelet[2299]: E0710 01:13:31.805544 2299 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"3459c244-a1ae-43bc-ad86-239a6e665757\" with 
KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"14835a1d23b75e28ba1fef0944ee52d74bf2ce2c1e062de723f2121f6c8271e5\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 10 01:13:31.806074 kubelet[2299]: E0710 01:13:31.805559 2299 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"3459c244-a1ae-43bc-ad86-239a6e665757\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"14835a1d23b75e28ba1fef0944ee52d74bf2ce2c1e062de723f2121f6c8271e5\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-snhl5" podUID="3459c244-a1ae-43bc-ad86-239a6e665757" Jul 10 01:13:31.806258 kubelet[2299]: E0710 01:13:31.805537 2299 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"5f01bcaa-ff1c-4bd5-988b-d3c60c6abdcc\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"eb3cfca5f1219fd7b024127b39867c352bbcd18cadb3f774b2a9b88ac71868e6\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5477ff879d-j2p5q" podUID="5f01bcaa-ff1c-4bd5-988b-d3c60c6abdcc" Jul 10 01:13:31.820890 env[1363]: time="2025-07-10T01:13:31.820823533Z" level=error msg="StopPodSandbox for \"a4581136627fe17a05f5104c5de93fa80c47188a9b894aba2bdf6734c99e3096\" failed" error="failed to destroy network for sandbox \"a4581136627fe17a05f5104c5de93fa80c47188a9b894aba2bdf6734c99e3096\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 01:13:31.821047 kubelet[2299]: E0710 01:13:31.821016 2299 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"a4581136627fe17a05f5104c5de93fa80c47188a9b894aba2bdf6734c99e3096\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="a4581136627fe17a05f5104c5de93fa80c47188a9b894aba2bdf6734c99e3096" Jul 10 01:13:31.821110 kubelet[2299]: E0710 01:13:31.821058 2299 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"a4581136627fe17a05f5104c5de93fa80c47188a9b894aba2bdf6734c99e3096"} Jul 10 01:13:31.821110 kubelet[2299]: E0710 01:13:31.821088 2299 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"c15a8f19-7056-4133-9713-c590210e2422\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a4581136627fe17a05f5104c5de93fa80c47188a9b894aba2bdf6734c99e3096\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 10 01:13:31.821215 kubelet[2299]: E0710 01:13:31.821108 2299 pod_workers.go:1301] "Error syncing 
pod, skipping" err="failed to \"KillPodSandbox\" for \"c15a8f19-7056-4133-9713-c590210e2422\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a4581136627fe17a05f5104c5de93fa80c47188a9b894aba2bdf6734c99e3096\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-b48c6" podUID="c15a8f19-7056-4133-9713-c590210e2422" Jul 10 01:13:31.835077 env[1363]: time="2025-07-10T01:13:31.835000396Z" level=error msg="StopPodSandbox for \"aea5f2eb698db2d51d4b0d03a6e4b3fb312a638bd55c00688360d004a661efd9\" failed" error="failed to destroy network for sandbox \"aea5f2eb698db2d51d4b0d03a6e4b3fb312a638bd55c00688360d004a661efd9\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 01:13:31.835357 kubelet[2299]: E0710 01:13:31.835313 2299 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"aea5f2eb698db2d51d4b0d03a6e4b3fb312a638bd55c00688360d004a661efd9\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="aea5f2eb698db2d51d4b0d03a6e4b3fb312a638bd55c00688360d004a661efd9" Jul 10 01:13:31.835415 kubelet[2299]: E0710 01:13:31.835370 2299 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"aea5f2eb698db2d51d4b0d03a6e4b3fb312a638bd55c00688360d004a661efd9"} Jul 10 01:13:31.835415 kubelet[2299]: E0710 01:13:31.835395 2299 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"389b9b23-8476-4f37-b3d0-fe7b86da7d65\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"aea5f2eb698db2d51d4b0d03a6e4b3fb312a638bd55c00688360d004a661efd9\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 10 01:13:31.835415 kubelet[2299]: E0710 01:13:31.835409 2299 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"389b9b23-8476-4f37-b3d0-fe7b86da7d65\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"aea5f2eb698db2d51d4b0d03a6e4b3fb312a638bd55c00688360d004a661efd9\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-66c5d4d86b-jc5cs" podUID="389b9b23-8476-4f37-b3d0-fe7b86da7d65" Jul 10 01:13:31.846132 kubelet[2299]: E0710 01:13:31.841396 2299 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"37d614c8da7410e503f5faf653a41dc5309991646c01cee8381d58fe5a81a5a4\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="37d614c8da7410e503f5faf653a41dc5309991646c01cee8381d58fe5a81a5a4" Jul 10 01:13:31.846132 kubelet[2299]: E0710 
01:13:31.841423 2299 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"37d614c8da7410e503f5faf653a41dc5309991646c01cee8381d58fe5a81a5a4"} Jul 10 01:13:31.846132 kubelet[2299]: E0710 01:13:31.841457 2299 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"74cf1bc5-5d5a-4dc7-850a-71013984af05\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"37d614c8da7410e503f5faf653a41dc5309991646c01cee8381d58fe5a81a5a4\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 10 01:13:31.846132 kubelet[2299]: E0710 01:13:31.841472 2299 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"74cf1bc5-5d5a-4dc7-850a-71013984af05\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"37d614c8da7410e503f5faf653a41dc5309991646c01cee8381d58fe5a81a5a4\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6d44674bc4-b2wqb" podUID="74cf1bc5-5d5a-4dc7-850a-71013984af05" Jul 10 01:13:31.846404 env[1363]: time="2025-07-10T01:13:31.841267965Z" level=error msg="StopPodSandbox for \"37d614c8da7410e503f5faf653a41dc5309991646c01cee8381d58fe5a81a5a4\" failed" error="failed to destroy network for sandbox \"37d614c8da7410e503f5faf653a41dc5309991646c01cee8381d58fe5a81a5a4\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 01:13:31.846553 env[1363]: time="2025-07-10T01:13:31.846526822Z" level=error msg="StopPodSandbox for \"d08209f28426fb10a90356c7c2f30ce87cefef9ed58d075482b630394972a62b\" failed" error="failed to destroy network for sandbox \"d08209f28426fb10a90356c7c2f30ce87cefef9ed58d075482b630394972a62b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 10 01:13:31.846818 kubelet[2299]: E0710 01:13:31.846713 2299 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"d08209f28426fb10a90356c7c2f30ce87cefef9ed58d075482b630394972a62b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="d08209f28426fb10a90356c7c2f30ce87cefef9ed58d075482b630394972a62b" Jul 10 01:13:31.846818 kubelet[2299]: E0710 01:13:31.846754 2299 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"d08209f28426fb10a90356c7c2f30ce87cefef9ed58d075482b630394972a62b"} Jul 10 01:13:31.846818 kubelet[2299]: E0710 01:13:31.846777 2299 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"a29ef6dc-4246-436d-87dd-9c8e96247aeb\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d08209f28426fb10a90356c7c2f30ce87cefef9ed58d075482b630394972a62b\\\": plugin type=\\\"calico\\\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 10 01:13:31.846818 kubelet[2299]: E0710 01:13:31.846794 2299 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"a29ef6dc-4246-436d-87dd-9c8e96247aeb\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d08209f28426fb10a90356c7c2f30ce87cefef9ed58d075482b630394972a62b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-4k5ld" podUID="a29ef6dc-4246-436d-87dd-9c8e96247aeb" Jul 10 01:13:37.442198 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1813845069.mount: Deactivated successfully. Jul 10 01:13:37.516451 env[1363]: time="2025-07-10T01:13:37.516401895Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 01:13:37.527954 env[1363]: time="2025-07-10T01:13:37.527914164Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 01:13:37.530281 env[1363]: time="2025-07-10T01:13:37.530258801Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/node:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 01:13:37.532170 env[1363]: time="2025-07-10T01:13:37.532147257Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 01:13:37.532367 env[1363]: time="2025-07-10T01:13:37.532342614Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\" returns image reference \"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\"" Jul 10 01:13:37.694687 env[1363]: time="2025-07-10T01:13:37.694345515Z" level=info msg="CreateContainer within sandbox \"9dc4577f9ef3039e231d6f8c765d532b1f3c07ac6e787523cfb69a78230909e1\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jul 10 01:13:37.715664 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1527582404.mount: Deactivated successfully. 
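The audit records that follow carry the triggering command line as a hex-encoded, NUL-separated PROCTITLE field. Decoding it is mechanical; the first proctitle value below, for instance, comes out as iptables-restore -w 5 -W 100000 --noflush --counters. A tiny decoder, included purely as an illustration:

    #!/usr/bin/env python3
    # Illustrative: decode the hex-encoded PROCTITLE field of an audit record into its argv list.

    def decode_proctitle(hexstr: str) -> list:
        return bytes.fromhex(hexstr).decode("utf-8", "replace").split("\x00")

    if __name__ == "__main__":
        # Value taken verbatim from the first audit PROCTITLE record below.
        print(decode_proctitle(
            "69707461626C65732D726573746F7265002D770035002D5700313030303030"
            "002D2D6E6F666C757368002D2D636F756E74657273"
        ))
        # -> ['iptables-restore', '-w', '5', '-W', '100000', '--noflush', '--counters']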
Jul 10 01:13:37.722007 env[1363]: time="2025-07-10T01:13:37.719530408Z" level=info msg="CreateContainer within sandbox \"9dc4577f9ef3039e231d6f8c765d532b1f3c07ac6e787523cfb69a78230909e1\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"dc40952d28006045e942aa22b5bc381b2f7d35d15ba79973f504ec8ad17ec2d9\"" Jul 10 01:13:37.722007 env[1363]: time="2025-07-10T01:13:37.719917314Z" level=info msg="StartContainer for \"dc40952d28006045e942aa22b5bc381b2f7d35d15ba79973f504ec8ad17ec2d9\"" Jul 10 01:13:37.775524 env[1363]: time="2025-07-10T01:13:37.775492794Z" level=info msg="StartContainer for \"dc40952d28006045e942aa22b5bc381b2f7d35d15ba79973f504ec8ad17ec2d9\" returns successfully" Jul 10 01:13:38.236310 kubelet[2299]: I0710 01:13:38.236273 2299 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 10 01:13:38.835159 systemd[1]: run-containerd-runc-k8s.io-dc40952d28006045e942aa22b5bc381b2f7d35d15ba79973f504ec8ad17ec2d9-runc.qidaT3.mount: Deactivated successfully. Jul 10 01:13:38.880279 kubelet[2299]: I0710 01:13:38.863860 2299 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-2k6z4" podStartSLOduration=2.183542989 podStartE2EDuration="24.849277102s" podCreationTimestamp="2025-07-10 01:13:14 +0000 UTC" firstStartedPulling="2025-07-10 01:13:14.867572436 +0000 UTC m=+14.735999315" lastFinishedPulling="2025-07-10 01:13:37.53330654 +0000 UTC m=+37.401733428" observedRunningTime="2025-07-10 01:13:38.815408255 +0000 UTC m=+38.683835145" watchObservedRunningTime="2025-07-10 01:13:38.849277102 +0000 UTC m=+38.717703992" Jul 10 01:13:39.321214 kernel: kauditd_printk_skb: 25 callbacks suppressed Jul 10 01:13:39.336727 kernel: audit: type=1325 audit(1752110019.315:295): table=filter:99 family=2 entries=21 op=nft_register_rule pid=3493 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 10 01:13:39.352929 kernel: audit: type=1300 audit(1752110019.315:295): arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7fff084e3700 a2=0 a3=7fff084e36ec items=0 ppid=2398 pid=3493 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:13:39.352991 kernel: audit: type=1327 audit(1752110019.315:295): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 10 01:13:39.353013 kernel: audit: type=1325 audit(1752110019.328:296): table=nat:100 family=2 entries=19 op=nft_register_chain pid=3493 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 10 01:13:39.359070 kernel: audit: type=1300 audit(1752110019.328:296): arch=c000003e syscall=46 success=yes exit=6276 a0=3 a1=7fff084e3700 a2=0 a3=7fff084e36ec items=0 ppid=2398 pid=3493 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:13:39.362329 kernel: audit: type=1327 audit(1752110019.328:296): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 10 01:13:39.315000 audit[3493]: NETFILTER_CFG table=filter:99 family=2 entries=21 op=nft_register_rule pid=3493 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 10 01:13:39.315000 audit[3493]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7fff084e3700 a2=0 
a3=7fff084e36ec items=0 ppid=2398 pid=3493 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:13:39.315000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 10 01:13:39.328000 audit[3493]: NETFILTER_CFG table=nat:100 family=2 entries=19 op=nft_register_chain pid=3493 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 10 01:13:39.328000 audit[3493]: SYSCALL arch=c000003e syscall=46 success=yes exit=6276 a0=3 a1=7fff084e3700 a2=0 a3=7fff084e36ec items=0 ppid=2398 pid=3493 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:13:39.328000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 10 01:13:39.525357 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jul 10 01:13:39.528021 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Jul 10 01:13:39.733704 systemd[1]: run-containerd-runc-k8s.io-dc40952d28006045e942aa22b5bc381b2f7d35d15ba79973f504ec8ad17ec2d9-runc.YkA7VY.mount: Deactivated successfully. Jul 10 01:13:40.206626 env[1363]: time="2025-07-10T01:13:40.206596094Z" level=info msg="StopPodSandbox for \"aea5f2eb698db2d51d4b0d03a6e4b3fb312a638bd55c00688360d004a661efd9\"" Jul 10 01:13:41.218000 audit[3624]: AVC avc: denied { write } for pid=3624 comm="tee" name="fd" dev="proc" ino=36439 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Jul 10 01:13:41.229548 kernel: audit: type=1400 audit(1752110021.218:297): avc: denied { write } for pid=3624 comm="tee" name="fd" dev="proc" ino=36439 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Jul 10 01:13:41.229606 kernel: audit: type=1400 audit(1752110021.218:298): avc: denied { write } for pid=3619 comm="tee" name="fd" dev="proc" ino=37203 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Jul 10 01:13:41.229626 kernel: audit: type=1300 audit(1752110021.218:298): arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffc3e17c7dd a2=241 a3=1b6 items=1 ppid=3586 pid=3619 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:13:41.218000 audit[3619]: AVC avc: denied { write } for pid=3619 comm="tee" name="fd" dev="proc" ino=37203 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Jul 10 01:13:41.218000 audit[3619]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffc3e17c7dd a2=241 a3=1b6 items=1 ppid=3586 pid=3619 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:13:41.218000 audit: CWD cwd="/etc/service/enabled/allocate-tunnel-addrs/log" Jul 10 01:13:41.237656 kernel: audit: type=1307 audit(1752110021.218:298): cwd="/etc/service/enabled/allocate-tunnel-addrs/log" Jul 10 01:13:41.218000 audit: PATH 
item=0 name="/dev/fd/63" inode=36427 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 01:13:41.218000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Jul 10 01:13:41.218000 audit[3624]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7fffe69ec7ed a2=241 a3=1b6 items=1 ppid=3585 pid=3624 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:13:41.218000 audit: CWD cwd="/etc/service/enabled/felix/log" Jul 10 01:13:41.218000 audit: PATH item=0 name="/dev/fd/63" inode=37194 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 01:13:41.218000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Jul 10 01:13:41.253000 audit[3629]: AVC avc: denied { write } for pid=3629 comm="tee" name="fd" dev="proc" ino=37211 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Jul 10 01:13:41.253000 audit[3629]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffd37a067ed a2=241 a3=1b6 items=1 ppid=3580 pid=3629 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:13:41.253000 audit: CWD cwd="/etc/service/enabled/bird6/log" Jul 10 01:13:41.253000 audit: PATH item=0 name="/dev/fd/63" inode=37195 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 01:13:41.253000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Jul 10 01:13:41.261000 audit[3632]: AVC avc: denied { write } for pid=3632 comm="tee" name="fd" dev="proc" ino=37215 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Jul 10 01:13:41.261000 audit[3632]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffe37c7b7ef a2=241 a3=1b6 items=1 ppid=3577 pid=3632 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:13:41.261000 audit: CWD cwd="/etc/service/enabled/cni/log" Jul 10 01:13:41.261000 audit: PATH item=0 name="/dev/fd/63" inode=36430 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 01:13:41.261000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Jul 10 01:13:41.262000 audit[3634]: AVC avc: denied { write } for pid=3634 comm="tee" name="fd" dev="proc" ino=37219 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Jul 10 01:13:41.262000 audit[3634]: SYSCALL 
arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffec01aa7ed a2=241 a3=1b6 items=1 ppid=3582 pid=3634 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:13:41.262000 audit: CWD cwd="/etc/service/enabled/confd/log" Jul 10 01:13:41.262000 audit: PATH item=0 name="/dev/fd/63" inode=37199 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 01:13:41.262000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Jul 10 01:13:41.272000 audit[3638]: AVC avc: denied { write } for pid=3638 comm="tee" name="fd" dev="proc" ino=37223 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Jul 10 01:13:41.272000 audit[3638]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7fff044467ee a2=241 a3=1b6 items=1 ppid=3595 pid=3638 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:13:41.272000 audit: CWD cwd="/etc/service/enabled/bird/log" Jul 10 01:13:41.272000 audit: PATH item=0 name="/dev/fd/63" inode=37200 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 01:13:41.272000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Jul 10 01:13:41.288000 audit[3650]: AVC avc: denied { write } for pid=3650 comm="tee" name="fd" dev="proc" ino=36450 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Jul 10 01:13:41.288000 audit[3650]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffd3a3747de a2=241 a3=1b6 items=1 ppid=3579 pid=3650 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:13:41.288000 audit: CWD cwd="/etc/service/enabled/node-status-reporter/log" Jul 10 01:13:41.288000 audit: PATH item=0 name="/dev/fd/63" inode=36445 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 01:13:41.288000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Jul 10 01:13:41.512000 audit[3667]: AVC avc: denied { bpf } for pid=3667 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:13:41.512000 audit[3667]: AVC avc: denied { bpf } for pid=3667 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:13:41.512000 audit[3667]: AVC avc: denied { perfmon } for pid=3667 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:13:41.512000 audit[3667]: 
AVC avc: denied { perfmon } for pid=3667 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:13:41.512000 audit[3667]: AVC avc: denied { perfmon } for pid=3667 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:13:41.512000 audit[3667]: AVC avc: denied { perfmon } for pid=3667 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:13:41.512000 audit[3667]: AVC avc: denied { perfmon } for pid=3667 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:13:41.512000 audit[3667]: AVC avc: denied { bpf } for pid=3667 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:13:41.512000 audit[3667]: AVC avc: denied { bpf } for pid=3667 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:13:41.512000 audit: BPF prog-id=10 op=LOAD Jul 10 01:13:41.512000 audit[3667]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffc2cb5c540 a2=98 a3=1fffffffffffffff items=0 ppid=3587 pid=3667 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:13:41.512000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jul 10 01:13:41.513000 audit: BPF prog-id=10 op=UNLOAD Jul 10 01:13:41.520000 audit[3667]: AVC avc: denied { bpf } for pid=3667 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:13:41.520000 audit[3667]: AVC avc: denied { bpf } for pid=3667 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:13:41.520000 audit[3667]: AVC avc: denied { perfmon } for pid=3667 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:13:41.520000 audit[3667]: AVC avc: denied { perfmon } for pid=3667 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:13:41.520000 audit[3667]: AVC avc: denied { perfmon } for pid=3667 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:13:41.520000 audit[3667]: AVC avc: denied { perfmon } for pid=3667 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:13:41.520000 audit[3667]: AVC avc: denied { perfmon } for pid=3667 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:13:41.520000 audit[3667]: AVC 
avc: denied { bpf } for pid=3667 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:13:41.520000 audit[3667]: AVC avc: denied { bpf } for pid=3667 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:13:41.520000 audit: BPF prog-id=11 op=LOAD Jul 10 01:13:41.520000 audit[3667]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffc2cb5c420 a2=94 a3=3 items=0 ppid=3587 pid=3667 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:13:41.520000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jul 10 01:13:41.536000 audit: BPF prog-id=11 op=UNLOAD Jul 10 01:13:41.536000 audit[3667]: AVC avc: denied { bpf } for pid=3667 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:13:41.536000 audit[3667]: AVC avc: denied { bpf } for pid=3667 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:13:41.536000 audit[3667]: AVC avc: denied { perfmon } for pid=3667 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:13:41.536000 audit[3667]: AVC avc: denied { perfmon } for pid=3667 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:13:41.536000 audit[3667]: AVC avc: denied { perfmon } for pid=3667 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:13:41.536000 audit[3667]: AVC avc: denied { perfmon } for pid=3667 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:13:41.536000 audit[3667]: AVC avc: denied { perfmon } for pid=3667 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:13:41.536000 audit[3667]: AVC avc: denied { bpf } for pid=3667 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:13:41.536000 audit[3667]: AVC avc: denied { bpf } for pid=3667 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:13:41.536000 audit: BPF prog-id=12 op=LOAD Jul 10 01:13:41.536000 audit[3667]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffc2cb5c460 a2=94 a3=7ffc2cb5c640 items=0 ppid=3587 pid=3667 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:13:41.536000 audit: PROCTITLE 
proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jul 10 01:13:41.536000 audit: BPF prog-id=12 op=UNLOAD Jul 10 01:13:41.536000 audit[3667]: AVC avc: denied { perfmon } for pid=3667 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:13:41.536000 audit[3667]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=0 a1=7ffc2cb5c530 a2=50 a3=a000000085 items=0 ppid=3587 pid=3667 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:13:41.536000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jul 10 01:13:41.555000 audit[3669]: AVC avc: denied { bpf } for pid=3669 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:13:41.555000 audit[3669]: AVC avc: denied { bpf } for pid=3669 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:13:41.555000 audit[3669]: AVC avc: denied { perfmon } for pid=3669 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:13:41.555000 audit[3669]: AVC avc: denied { perfmon } for pid=3669 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:13:41.555000 audit[3669]: AVC avc: denied { perfmon } for pid=3669 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:13:41.555000 audit[3669]: AVC avc: denied { perfmon } for pid=3669 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:13:41.555000 audit[3669]: AVC avc: denied { perfmon } for pid=3669 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:13:41.555000 audit[3669]: AVC avc: denied { bpf } for pid=3669 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:13:41.555000 audit[3669]: AVC avc: denied { bpf } for pid=3669 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:13:41.555000 audit: BPF prog-id=13 op=LOAD Jul 10 01:13:41.555000 audit[3669]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffc2428b590 a2=98 a3=3 items=0 ppid=3587 pid=3669 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:13:41.555000 audit: PROCTITLE 
proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jul 10 01:13:41.555000 audit: BPF prog-id=13 op=UNLOAD Jul 10 01:13:41.562000 audit[3669]: AVC avc: denied { bpf } for pid=3669 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:13:41.562000 audit[3669]: AVC avc: denied { bpf } for pid=3669 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:13:41.562000 audit[3669]: AVC avc: denied { perfmon } for pid=3669 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:13:41.562000 audit[3669]: AVC avc: denied { perfmon } for pid=3669 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:13:41.562000 audit[3669]: AVC avc: denied { perfmon } for pid=3669 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:13:41.562000 audit[3669]: AVC avc: denied { perfmon } for pid=3669 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:13:41.562000 audit[3669]: AVC avc: denied { perfmon } for pid=3669 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:13:41.562000 audit[3669]: AVC avc: denied { bpf } for pid=3669 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:13:41.562000 audit[3669]: AVC avc: denied { bpf } for pid=3669 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:13:41.562000 audit: BPF prog-id=14 op=LOAD Jul 10 01:13:41.562000 audit[3669]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffc2428b380 a2=94 a3=54428f items=0 ppid=3587 pid=3669 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:13:41.562000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jul 10 01:13:41.565000 audit: BPF prog-id=14 op=UNLOAD Jul 10 01:13:41.565000 audit[3669]: AVC avc: denied { bpf } for pid=3669 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:13:41.565000 audit[3669]: AVC avc: denied { bpf } for pid=3669 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:13:41.565000 audit[3669]: AVC avc: denied { perfmon } for pid=3669 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:13:41.565000 audit[3669]: AVC avc: denied { perfmon } for pid=3669 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:13:41.565000 audit[3669]: AVC avc: denied { perfmon } for pid=3669 comm="bpftool" capability=38 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:13:41.565000 audit[3669]: AVC avc: denied { perfmon } for pid=3669 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:13:41.565000 audit[3669]: AVC avc: denied { perfmon } for pid=3669 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:13:41.565000 audit[3669]: AVC avc: denied { bpf } for pid=3669 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:13:41.565000 audit[3669]: AVC avc: denied { bpf } for pid=3669 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:13:41.565000 audit: BPF prog-id=15 op=LOAD Jul 10 01:13:41.565000 audit[3669]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffc2428b3b0 a2=94 a3=2 items=0 ppid=3587 pid=3669 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:13:41.565000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jul 10 01:13:41.565000 audit: BPF prog-id=15 op=UNLOAD Jul 10 01:13:41.617456 env[1363]: 2025-07-10 01:13:40.467 [INFO][3556] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="aea5f2eb698db2d51d4b0d03a6e4b3fb312a638bd55c00688360d004a661efd9" Jul 10 01:13:41.617456 env[1363]: 2025-07-10 01:13:40.469 [INFO][3556] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="aea5f2eb698db2d51d4b0d03a6e4b3fb312a638bd55c00688360d004a661efd9" iface="eth0" netns="/var/run/netns/cni-b5acd437-4ec6-383c-eadc-855784c16fd8" Jul 10 01:13:41.617456 env[1363]: 2025-07-10 01:13:40.469 [INFO][3556] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="aea5f2eb698db2d51d4b0d03a6e4b3fb312a638bd55c00688360d004a661efd9" iface="eth0" netns="/var/run/netns/cni-b5acd437-4ec6-383c-eadc-855784c16fd8" Jul 10 01:13:41.617456 env[1363]: 2025-07-10 01:13:40.473 [INFO][3556] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="aea5f2eb698db2d51d4b0d03a6e4b3fb312a638bd55c00688360d004a661efd9" iface="eth0" netns="/var/run/netns/cni-b5acd437-4ec6-383c-eadc-855784c16fd8" Jul 10 01:13:41.617456 env[1363]: 2025-07-10 01:13:40.473 [INFO][3556] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="aea5f2eb698db2d51d4b0d03a6e4b3fb312a638bd55c00688360d004a661efd9" Jul 10 01:13:41.617456 env[1363]: 2025-07-10 01:13:40.473 [INFO][3556] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="aea5f2eb698db2d51d4b0d03a6e4b3fb312a638bd55c00688360d004a661efd9" Jul 10 01:13:41.617456 env[1363]: 2025-07-10 01:13:41.546 [INFO][3564] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="aea5f2eb698db2d51d4b0d03a6e4b3fb312a638bd55c00688360d004a661efd9" HandleID="k8s-pod-network.aea5f2eb698db2d51d4b0d03a6e4b3fb312a638bd55c00688360d004a661efd9" Workload="localhost-k8s-whisker--66c5d4d86b--jc5cs-eth0" Jul 10 01:13:41.617456 env[1363]: 2025-07-10 01:13:41.557 [INFO][3564] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
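The audit records throughout this stretch of the log carry the triggering command line in their PROCTITLE field as a hex string with NUL-separated arguments (auditd hex-encodes the field whenever it contains non-printable bytes). A small sketch for recovering the original argv from one of those values; the sample below is copied from the iptables-restore NETFILTER_CFG records earlier in this section, and the decoded output is shown in a comment:

    # Sketch only: decode an audit PROCTITLE value back into the command line.
    # The argv elements inside the hex-encoded field are separated by 0x00 bytes.
    def decode_proctitle(hexstr):
        raw = bytes.fromhex(hexstr)
        return [arg.decode("utf-8", "replace") for arg in raw.split(b"\x00")]

    if __name__ == "__main__":
        # Value copied from the NETFILTER_CFG records above.
        sample = ("69707461626C65732D726573746F7265002D770035002D5700313030303030"
                  "002D2D6E6F666C757368002D2D636F756E74657273")
        print(decode_proctitle(sample))
        # -> ['iptables-restore', '-w', '5', '-W', '100000', '--noflush', '--counters']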
Jul 10 01:13:41.617456 env[1363]: 2025-07-10 01:13:41.558 [INFO][3564] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 10 01:13:41.617456 env[1363]: 2025-07-10 01:13:41.611 [WARNING][3564] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="aea5f2eb698db2d51d4b0d03a6e4b3fb312a638bd55c00688360d004a661efd9" HandleID="k8s-pod-network.aea5f2eb698db2d51d4b0d03a6e4b3fb312a638bd55c00688360d004a661efd9" Workload="localhost-k8s-whisker--66c5d4d86b--jc5cs-eth0" Jul 10 01:13:41.617456 env[1363]: 2025-07-10 01:13:41.613 [INFO][3564] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="aea5f2eb698db2d51d4b0d03a6e4b3fb312a638bd55c00688360d004a661efd9" HandleID="k8s-pod-network.aea5f2eb698db2d51d4b0d03a6e4b3fb312a638bd55c00688360d004a661efd9" Workload="localhost-k8s-whisker--66c5d4d86b--jc5cs-eth0" Jul 10 01:13:41.617456 env[1363]: 2025-07-10 01:13:41.614 [INFO][3564] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 10 01:13:41.617456 env[1363]: 2025-07-10 01:13:41.615 [INFO][3556] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="aea5f2eb698db2d51d4b0d03a6e4b3fb312a638bd55c00688360d004a661efd9" Jul 10 01:13:41.622456 systemd[1]: run-netns-cni\x2db5acd437\x2d4ec6\x2d383c\x2deadc\x2d855784c16fd8.mount: Deactivated successfully. Jul 10 01:13:41.623230 env[1363]: time="2025-07-10T01:13:41.622961062Z" level=info msg="TearDown network for sandbox \"aea5f2eb698db2d51d4b0d03a6e4b3fb312a638bd55c00688360d004a661efd9\" successfully" Jul 10 01:13:41.623230 env[1363]: time="2025-07-10T01:13:41.622985018Z" level=info msg="StopPodSandbox for \"aea5f2eb698db2d51d4b0d03a6e4b3fb312a638bd55c00688360d004a661efd9\" returns successfully" Jul 10 01:13:41.660000 audit[3669]: AVC avc: denied { bpf } for pid=3669 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:13:41.660000 audit[3669]: AVC avc: denied { bpf } for pid=3669 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:13:41.660000 audit[3669]: AVC avc: denied { perfmon } for pid=3669 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:13:41.660000 audit[3669]: AVC avc: denied { perfmon } for pid=3669 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:13:41.660000 audit[3669]: AVC avc: denied { perfmon } for pid=3669 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:13:41.660000 audit[3669]: AVC avc: denied { perfmon } for pid=3669 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:13:41.660000 audit[3669]: AVC avc: denied { perfmon } for pid=3669 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:13:41.660000 audit[3669]: AVC avc: denied { bpf } for pid=3669 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:13:41.660000 audit[3669]: AVC avc: denied { bpf } for pid=3669 comm="bpftool" capability=39 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:13:41.660000 audit: BPF prog-id=16 op=LOAD Jul 10 01:13:41.660000 audit[3669]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffc2428b270 a2=94 a3=1 items=0 ppid=3587 pid=3669 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:13:41.660000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jul 10 01:13:41.661000 audit: BPF prog-id=16 op=UNLOAD Jul 10 01:13:41.661000 audit[3669]: AVC avc: denied { perfmon } for pid=3669 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:13:41.661000 audit[3669]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=0 a1=7ffc2428b340 a2=50 a3=7ffc2428b420 items=0 ppid=3587 pid=3669 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:13:41.661000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jul 10 01:13:41.668000 audit[3669]: AVC avc: denied { bpf } for pid=3669 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:13:41.668000 audit[3669]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffc2428b280 a2=28 a3=0 items=0 ppid=3587 pid=3669 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:13:41.668000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jul 10 01:13:41.668000 audit[3669]: AVC avc: denied { bpf } for pid=3669 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:13:41.668000 audit[3669]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffc2428b2b0 a2=28 a3=0 items=0 ppid=3587 pid=3669 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:13:41.668000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jul 10 01:13:41.668000 audit[3669]: AVC avc: denied { bpf } for pid=3669 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:13:41.668000 audit[3669]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffc2428b1c0 a2=28 a3=0 items=0 ppid=3587 pid=3669 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:13:41.668000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jul 10 01:13:41.668000 audit[3669]: AVC avc: denied { bpf } for pid=3669 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:13:41.668000 audit[3669]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffc2428b2d0 a2=28 a3=0 
items=0 ppid=3587 pid=3669 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:13:41.668000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jul 10 01:13:41.668000 audit[3669]: AVC avc: denied { bpf } for pid=3669 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:13:41.668000 audit[3669]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffc2428b2b0 a2=28 a3=0 items=0 ppid=3587 pid=3669 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:13:41.668000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jul 10 01:13:41.668000 audit[3669]: AVC avc: denied { bpf } for pid=3669 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:13:41.668000 audit[3669]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffc2428b2a0 a2=28 a3=0 items=0 ppid=3587 pid=3669 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:13:41.668000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jul 10 01:13:41.668000 audit[3669]: AVC avc: denied { bpf } for pid=3669 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:13:41.668000 audit[3669]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffc2428b2d0 a2=28 a3=0 items=0 ppid=3587 pid=3669 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:13:41.668000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jul 10 01:13:41.668000 audit[3669]: AVC avc: denied { bpf } for pid=3669 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:13:41.668000 audit[3669]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffc2428b2b0 a2=28 a3=0 items=0 ppid=3587 pid=3669 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:13:41.668000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jul 10 01:13:41.668000 audit[3669]: AVC avc: denied { bpf } for pid=3669 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:13:41.668000 audit[3669]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffc2428b2d0 a2=28 a3=0 items=0 ppid=3587 pid=3669 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:13:41.668000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jul 10 01:13:41.668000 audit[3669]: AVC avc: 
denied { bpf } for pid=3669 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:13:41.668000 audit[3669]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffc2428b2a0 a2=28 a3=0 items=0 ppid=3587 pid=3669 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:13:41.668000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jul 10 01:13:41.668000 audit[3669]: AVC avc: denied { bpf } for pid=3669 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:13:41.668000 audit[3669]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffc2428b310 a2=28 a3=0 items=0 ppid=3587 pid=3669 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:13:41.668000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jul 10 01:13:41.668000 audit[3669]: AVC avc: denied { perfmon } for pid=3669 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:13:41.668000 audit[3669]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=0 a1=7ffc2428b0c0 a2=50 a3=1 items=0 ppid=3587 pid=3669 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:13:41.668000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jul 10 01:13:41.668000 audit[3669]: AVC avc: denied { bpf } for pid=3669 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:13:41.668000 audit[3669]: AVC avc: denied { bpf } for pid=3669 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:13:41.668000 audit[3669]: AVC avc: denied { perfmon } for pid=3669 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:13:41.668000 audit[3669]: AVC avc: denied { perfmon } for pid=3669 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:13:41.668000 audit[3669]: AVC avc: denied { perfmon } for pid=3669 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:13:41.668000 audit[3669]: AVC avc: denied { perfmon } for pid=3669 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:13:41.668000 audit[3669]: AVC avc: denied { perfmon } for pid=3669 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:13:41.668000 audit[3669]: AVC avc: denied { bpf } for pid=3669 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:13:41.668000 audit[3669]: AVC avc: denied { bpf } for pid=3669 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:13:41.668000 audit: BPF prog-id=17 op=LOAD Jul 10 01:13:41.668000 audit[3669]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffc2428b0c0 a2=94 a3=5 items=0 ppid=3587 pid=3669 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:13:41.668000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jul 10 01:13:41.669000 audit: BPF prog-id=17 op=UNLOAD Jul 10 01:13:41.669000 audit[3669]: AVC avc: denied { perfmon } for pid=3669 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:13:41.669000 audit[3669]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=0 a1=7ffc2428b170 a2=50 a3=1 items=0 ppid=3587 pid=3669 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:13:41.669000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jul 10 01:13:41.669000 audit[3669]: AVC avc: denied { bpf } for pid=3669 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:13:41.669000 audit[3669]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=16 a1=7ffc2428b290 a2=4 a3=38 items=0 ppid=3587 pid=3669 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:13:41.669000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jul 10 01:13:41.669000 audit[3669]: AVC avc: denied { bpf } for pid=3669 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:13:41.669000 audit[3669]: AVC avc: denied { bpf } for pid=3669 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:13:41.669000 audit[3669]: AVC avc: denied { perfmon } for pid=3669 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:13:41.669000 audit[3669]: AVC avc: denied { bpf } for pid=3669 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:13:41.669000 audit[3669]: AVC avc: denied { perfmon } for pid=3669 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:13:41.669000 audit[3669]: AVC avc: denied { perfmon } for pid=3669 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:13:41.669000 audit[3669]: AVC avc: denied { perfmon } for pid=3669 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:13:41.669000 audit[3669]: AVC avc: denied { perfmon } for pid=3669 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:13:41.669000 audit[3669]: AVC avc: denied { perfmon } for pid=3669 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:13:41.669000 audit[3669]: AVC avc: denied { bpf } for pid=3669 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:13:41.669000 audit[3669]: AVC avc: denied { confidentiality } for pid=3669 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Jul 10 01:13:41.669000 audit[3669]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=5 a1=7ffc2428b2e0 a2=94 a3=6 items=0 ppid=3587 pid=3669 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:13:41.669000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jul 10 01:13:41.669000 audit[3669]: AVC avc: denied { bpf } for pid=3669 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:13:41.669000 audit[3669]: AVC avc: denied { bpf } for pid=3669 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:13:41.669000 audit[3669]: AVC avc: denied { perfmon } for pid=3669 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:13:41.669000 audit[3669]: AVC avc: denied { bpf } for pid=3669 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:13:41.669000 audit[3669]: AVC avc: denied { perfmon } for pid=3669 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:13:41.669000 audit[3669]: AVC avc: denied { perfmon } for pid=3669 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:13:41.669000 audit[3669]: AVC avc: denied { perfmon } for pid=3669 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:13:41.669000 audit[3669]: AVC avc: denied { perfmon } for pid=3669 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:13:41.669000 audit[3669]: AVC avc: denied { perfmon } for pid=3669 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:13:41.669000 audit[3669]: AVC avc: denied { bpf } for pid=3669 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 
01:13:41.669000 audit[3669]: AVC avc: denied { confidentiality } for pid=3669 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Jul 10 01:13:41.669000 audit[3669]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=5 a1=7ffc2428aa90 a2=94 a3=88 items=0 ppid=3587 pid=3669 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:13:41.669000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jul 10 01:13:41.669000 audit[3669]: AVC avc: denied { bpf } for pid=3669 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:13:41.669000 audit[3669]: AVC avc: denied { bpf } for pid=3669 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:13:41.669000 audit[3669]: AVC avc: denied { perfmon } for pid=3669 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:13:41.669000 audit[3669]: AVC avc: denied { bpf } for pid=3669 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:13:41.669000 audit[3669]: AVC avc: denied { perfmon } for pid=3669 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:13:41.669000 audit[3669]: AVC avc: denied { perfmon } for pid=3669 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:13:41.669000 audit[3669]: AVC avc: denied { perfmon } for pid=3669 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:13:41.669000 audit[3669]: AVC avc: denied { perfmon } for pid=3669 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:13:41.669000 audit[3669]: AVC avc: denied { perfmon } for pid=3669 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:13:41.669000 audit[3669]: AVC avc: denied { bpf } for pid=3669 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:13:41.669000 audit[3669]: AVC avc: denied { confidentiality } for pid=3669 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Jul 10 01:13:41.669000 audit[3669]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=5 a1=7ffc2428aa90 a2=94 a3=88 items=0 ppid=3587 pid=3669 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:13:41.669000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jul 10 01:13:41.734000 audit[3694]: AVC avc: 
denied { bpf } for pid=3694 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:13:41.734000 audit[3694]: AVC avc: denied { bpf } for pid=3694 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:13:41.734000 audit[3694]: AVC avc: denied { perfmon } for pid=3694 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:13:41.734000 audit[3694]: AVC avc: denied { perfmon } for pid=3694 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:13:41.734000 audit[3694]: AVC avc: denied { perfmon } for pid=3694 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:13:41.734000 audit[3694]: AVC avc: denied { perfmon } for pid=3694 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:13:41.734000 audit[3694]: AVC avc: denied { perfmon } for pid=3694 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:13:41.734000 audit[3694]: AVC avc: denied { bpf } for pid=3694 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:13:41.734000 audit[3694]: AVC avc: denied { bpf } for pid=3694 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:13:41.734000 audit: BPF prog-id=18 op=LOAD Jul 10 01:13:41.734000 audit[3694]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffc6b454480 a2=98 a3=1999999999999999 items=0 ppid=3587 pid=3694 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:13:41.734000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jul 10 01:13:41.736000 audit: BPF prog-id=18 op=UNLOAD Jul 10 01:13:41.736000 audit[3694]: AVC avc: denied { bpf } for pid=3694 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:13:41.736000 audit[3694]: AVC avc: denied { bpf } for pid=3694 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:13:41.736000 audit[3694]: AVC avc: denied { perfmon } for pid=3694 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:13:41.736000 audit[3694]: AVC avc: denied { perfmon } for pid=3694 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:13:41.736000 audit[3694]: AVC avc: denied { 
perfmon } for pid=3694 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:13:41.736000 audit[3694]: AVC avc: denied { perfmon } for pid=3694 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:13:41.736000 audit[3694]: AVC avc: denied { perfmon } for pid=3694 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:13:41.736000 audit[3694]: AVC avc: denied { bpf } for pid=3694 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:13:41.736000 audit[3694]: AVC avc: denied { bpf } for pid=3694 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:13:41.736000 audit: BPF prog-id=19 op=LOAD Jul 10 01:13:41.736000 audit[3694]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffc6b454360 a2=94 a3=ffff items=0 ppid=3587 pid=3694 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:13:41.736000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jul 10 01:13:41.736000 audit: BPF prog-id=19 op=UNLOAD Jul 10 01:13:41.736000 audit[3694]: AVC avc: denied { bpf } for pid=3694 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:13:41.736000 audit[3694]: AVC avc: denied { bpf } for pid=3694 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:13:41.736000 audit[3694]: AVC avc: denied { perfmon } for pid=3694 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:13:41.736000 audit[3694]: AVC avc: denied { perfmon } for pid=3694 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:13:41.736000 audit[3694]: AVC avc: denied { perfmon } for pid=3694 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:13:41.736000 audit[3694]: AVC avc: denied { perfmon } for pid=3694 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:13:41.736000 audit[3694]: AVC avc: denied { perfmon } for pid=3694 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:13:41.736000 audit[3694]: AVC avc: denied { bpf } for pid=3694 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:13:41.736000 audit[3694]: AVC avc: denied { bpf } for pid=3694 
comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:13:41.736000 audit: BPF prog-id=20 op=LOAD Jul 10 01:13:41.736000 audit[3694]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffc6b4543a0 a2=94 a3=7ffc6b454580 items=0 ppid=3587 pid=3694 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:13:41.736000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jul 10 01:13:41.736000 audit: BPF prog-id=20 op=UNLOAD Jul 10 01:13:41.744558 kubelet[2299]: I0710 01:13:41.740002 2299 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/389b9b23-8476-4f37-b3d0-fe7b86da7d65-whisker-backend-key-pair\") pod \"389b9b23-8476-4f37-b3d0-fe7b86da7d65\" (UID: \"389b9b23-8476-4f37-b3d0-fe7b86da7d65\") " Jul 10 01:13:41.744558 kubelet[2299]: I0710 01:13:41.740055 2299 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2v5rc\" (UniqueName: \"kubernetes.io/projected/389b9b23-8476-4f37-b3d0-fe7b86da7d65-kube-api-access-2v5rc\") pod \"389b9b23-8476-4f37-b3d0-fe7b86da7d65\" (UID: \"389b9b23-8476-4f37-b3d0-fe7b86da7d65\") " Jul 10 01:13:41.745918 kubelet[2299]: I0710 01:13:41.745901 2299 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/389b9b23-8476-4f37-b3d0-fe7b86da7d65-whisker-ca-bundle\") pod \"389b9b23-8476-4f37-b3d0-fe7b86da7d65\" (UID: \"389b9b23-8476-4f37-b3d0-fe7b86da7d65\") " Jul 10 01:13:41.805871 kubelet[2299]: I0710 01:13:41.800482 2299 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/389b9b23-8476-4f37-b3d0-fe7b86da7d65-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "389b9b23-8476-4f37-b3d0-fe7b86da7d65" (UID: "389b9b23-8476-4f37-b3d0-fe7b86da7d65"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jul 10 01:13:41.810589 systemd[1]: var-lib-kubelet-pods-389b9b23\x2d8476\x2d4f37\x2db3d0\x2dfe7b86da7d65-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d2v5rc.mount: Deactivated successfully. Jul 10 01:13:41.810695 systemd[1]: var-lib-kubelet-pods-389b9b23\x2d8476\x2d4f37\x2db3d0\x2dfe7b86da7d65-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Jul 10 01:13:41.813785 kubelet[2299]: I0710 01:13:41.813725 2299 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/389b9b23-8476-4f37-b3d0-fe7b86da7d65-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "389b9b23-8476-4f37-b3d0-fe7b86da7d65" (UID: "389b9b23-8476-4f37-b3d0-fe7b86da7d65"). InnerVolumeSpecName "whisker-backend-key-pair". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jul 10 01:13:41.814869 kubelet[2299]: I0710 01:13:41.814826 2299 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/389b9b23-8476-4f37-b3d0-fe7b86da7d65-kube-api-access-2v5rc" (OuterVolumeSpecName: "kube-api-access-2v5rc") pod "389b9b23-8476-4f37-b3d0-fe7b86da7d65" (UID: "389b9b23-8476-4f37-b3d0-fe7b86da7d65"). InnerVolumeSpecName "kube-api-access-2v5rc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jul 10 01:13:41.835416 systemd-networkd[1114]: vxlan.calico: Link UP Jul 10 01:13:41.835423 systemd-networkd[1114]: vxlan.calico: Gained carrier Jul 10 01:13:41.837000 audit[3722]: AVC avc: denied { bpf } for pid=3722 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:13:41.837000 audit[3722]: AVC avc: denied { bpf } for pid=3722 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:13:41.837000 audit[3722]: AVC avc: denied { perfmon } for pid=3722 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:13:41.837000 audit[3722]: AVC avc: denied { perfmon } for pid=3722 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:13:41.837000 audit[3722]: AVC avc: denied { perfmon } for pid=3722 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:13:41.837000 audit[3722]: AVC avc: denied { perfmon } for pid=3722 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:13:41.837000 audit[3722]: AVC avc: denied { perfmon } for pid=3722 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:13:41.837000 audit[3722]: AVC avc: denied { bpf } for pid=3722 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:13:41.837000 audit[3722]: AVC avc: denied { bpf } for pid=3722 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:13:41.837000 audit: BPF prog-id=21 op=LOAD Jul 10 01:13:41.837000 audit[3722]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffc83a98510 a2=98 a3=0 items=0 ppid=3587 pid=3722 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:13:41.837000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jul 10 01:13:41.838000 audit: BPF prog-id=21 op=UNLOAD Jul 10 01:13:41.839000 audit[3722]: AVC avc: denied { bpf } for pid=3722 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:13:41.839000 audit[3722]: AVC avc: denied { bpf } 
for pid=3722 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:13:41.839000 audit[3722]: AVC avc: denied { perfmon } for pid=3722 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:13:41.839000 audit[3722]: AVC avc: denied { perfmon } for pid=3722 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:13:41.839000 audit[3722]: AVC avc: denied { perfmon } for pid=3722 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:13:41.839000 audit[3722]: AVC avc: denied { perfmon } for pid=3722 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:13:41.839000 audit[3722]: AVC avc: denied { perfmon } for pid=3722 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:13:41.839000 audit[3722]: AVC avc: denied { bpf } for pid=3722 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:13:41.839000 audit[3722]: AVC avc: denied { bpf } for pid=3722 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:13:41.839000 audit: BPF prog-id=22 op=LOAD Jul 10 01:13:41.839000 audit[3722]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffc83a98320 a2=94 a3=54428f items=0 ppid=3587 pid=3722 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:13:41.839000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jul 10 01:13:41.839000 audit: BPF prog-id=22 op=UNLOAD Jul 10 01:13:41.839000 audit[3722]: AVC avc: denied { bpf } for pid=3722 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:13:41.839000 audit[3722]: AVC avc: denied { bpf } for pid=3722 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:13:41.839000 audit[3722]: AVC avc: denied { perfmon } for pid=3722 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:13:41.839000 audit[3722]: AVC avc: denied { perfmon } for pid=3722 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:13:41.839000 audit[3722]: AVC avc: denied { perfmon } for pid=3722 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:13:41.839000 audit[3722]: AVC avc: denied { perfmon } for pid=3722 comm="bpftool" capability=38 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:13:41.839000 audit[3722]: AVC avc: denied { perfmon } for pid=3722 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:13:41.839000 audit[3722]: AVC avc: denied { bpf } for pid=3722 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:13:41.839000 audit[3722]: AVC avc: denied { bpf } for pid=3722 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:13:41.839000 audit: BPF prog-id=23 op=LOAD Jul 10 01:13:41.839000 audit[3722]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffc83a98350 a2=94 a3=2 items=0 ppid=3587 pid=3722 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:13:41.839000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jul 10 01:13:41.839000 audit: BPF prog-id=23 op=UNLOAD Jul 10 01:13:41.839000 audit[3722]: AVC avc: denied { bpf } for pid=3722 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:13:41.839000 audit[3722]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=12 a1=7ffc83a98220 a2=28 a3=0 items=0 ppid=3587 pid=3722 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:13:41.839000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jul 10 01:13:41.839000 audit[3722]: AVC avc: denied { bpf } for pid=3722 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:13:41.839000 audit[3722]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffc83a98250 a2=28 a3=0 items=0 ppid=3587 pid=3722 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:13:41.839000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jul 10 01:13:41.839000 audit[3722]: AVC avc: denied { bpf } for pid=3722 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:13:41.839000 audit[3722]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffc83a98160 a2=28 a3=0 items=0 ppid=3587 pid=3722 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:13:41.839000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jul 10 01:13:41.839000 audit[3722]: AVC avc: denied { bpf } for pid=3722 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:13:41.839000 audit[3722]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=12 a1=7ffc83a98270 a2=28 a3=0 items=0 ppid=3587 pid=3722 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:13:41.839000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jul 10 01:13:41.839000 audit[3722]: AVC avc: denied { bpf } for pid=3722 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:13:41.839000 audit[3722]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=12 a1=7ffc83a98250 a2=28 a3=0 items=0 ppid=3587 pid=3722 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:13:41.839000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jul 10 01:13:41.839000 audit[3722]: AVC avc: denied { bpf } for pid=3722 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:13:41.839000 audit[3722]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=12 a1=7ffc83a98240 a2=28 a3=0 items=0 ppid=3587 pid=3722 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:13:41.839000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jul 10 01:13:41.839000 audit[3722]: AVC avc: denied { bpf } for pid=3722 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:13:41.839000 audit[3722]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=12 a1=7ffc83a98270 a2=28 a3=0 items=0 ppid=3587 pid=3722 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:13:41.839000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jul 10 01:13:41.839000 audit[3722]: AVC 
avc: denied { bpf } for pid=3722 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:13:41.839000 audit[3722]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffc83a98250 a2=28 a3=0 items=0 ppid=3587 pid=3722 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:13:41.839000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jul 10 01:13:41.839000 audit[3722]: AVC avc: denied { bpf } for pid=3722 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:13:41.839000 audit[3722]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffc83a98270 a2=28 a3=0 items=0 ppid=3587 pid=3722 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:13:41.839000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jul 10 01:13:41.839000 audit[3722]: AVC avc: denied { bpf } for pid=3722 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:13:41.839000 audit[3722]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffc83a98240 a2=28 a3=0 items=0 ppid=3587 pid=3722 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:13:41.839000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jul 10 01:13:41.839000 audit[3722]: AVC avc: denied { bpf } for pid=3722 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:13:41.839000 audit[3722]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=12 a1=7ffc83a982b0 a2=28 a3=0 items=0 ppid=3587 pid=3722 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:13:41.839000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jul 10 01:13:41.839000 audit[3722]: AVC avc: denied { bpf } for pid=3722 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:13:41.839000 audit[3722]: AVC avc: denied { bpf } for pid=3722 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:13:41.839000 audit[3722]: AVC avc: denied { perfmon } for pid=3722 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:13:41.839000 audit[3722]: AVC avc: denied { perfmon } for pid=3722 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:13:41.839000 audit[3722]: AVC avc: denied { perfmon } for pid=3722 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:13:41.839000 audit[3722]: AVC avc: denied { perfmon } for pid=3722 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:13:41.839000 audit[3722]: AVC avc: denied { perfmon } for pid=3722 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:13:41.839000 audit[3722]: AVC avc: denied { bpf } for pid=3722 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:13:41.839000 audit[3722]: AVC avc: denied { bpf } for pid=3722 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:13:41.839000 audit: BPF prog-id=24 op=LOAD Jul 10 01:13:41.839000 audit[3722]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffc83a98120 a2=94 a3=0 items=0 ppid=3587 pid=3722 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:13:41.839000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jul 10 01:13:41.839000 audit: BPF prog-id=24 op=UNLOAD Jul 10 01:13:41.843000 audit[3722]: AVC avc: denied { bpf } for pid=3722 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:13:41.843000 audit[3722]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=0 a1=7ffc83a98110 a2=50 a3=2800 items=0 ppid=3587 pid=3722 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:13:41.843000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jul 10 01:13:41.843000 audit[3722]: AVC avc: denied { bpf } for pid=3722 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:13:41.843000 audit[3722]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=0 a1=7ffc83a98110 a2=50 a3=2800 items=0 ppid=3587 pid=3722 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" 
exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:13:41.843000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jul 10 01:13:41.843000 audit[3722]: AVC avc: denied { bpf } for pid=3722 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:13:41.843000 audit[3722]: AVC avc: denied { bpf } for pid=3722 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:13:41.843000 audit[3722]: AVC avc: denied { bpf } for pid=3722 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:13:41.843000 audit[3722]: AVC avc: denied { perfmon } for pid=3722 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:13:41.843000 audit[3722]: AVC avc: denied { perfmon } for pid=3722 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:13:41.843000 audit[3722]: AVC avc: denied { perfmon } for pid=3722 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:13:41.843000 audit[3722]: AVC avc: denied { perfmon } for pid=3722 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:13:41.843000 audit[3722]: AVC avc: denied { perfmon } for pid=3722 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:13:41.843000 audit[3722]: AVC avc: denied { bpf } for pid=3722 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:13:41.843000 audit[3722]: AVC avc: denied { bpf } for pid=3722 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:13:41.843000 audit: BPF prog-id=25 op=LOAD Jul 10 01:13:41.843000 audit[3722]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffc83a97930 a2=94 a3=2 items=0 ppid=3587 pid=3722 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:13:41.843000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jul 10 01:13:41.843000 audit: BPF prog-id=25 op=UNLOAD Jul 10 01:13:41.843000 audit[3722]: AVC avc: denied { bpf } for pid=3722 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:13:41.843000 audit[3722]: AVC avc: denied { bpf } for pid=3722 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:13:41.843000 audit[3722]: AVC avc: denied { bpf } for pid=3722 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:13:41.843000 audit[3722]: AVC avc: denied { perfmon } for pid=3722 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:13:41.843000 audit[3722]: AVC avc: denied { perfmon } for pid=3722 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:13:41.843000 audit[3722]: AVC avc: denied { perfmon } for pid=3722 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:13:41.843000 audit[3722]: AVC avc: denied { perfmon } for pid=3722 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:13:41.843000 audit[3722]: AVC avc: denied { perfmon } for pid=3722 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:13:41.843000 audit[3722]: AVC avc: denied { bpf } for pid=3722 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:13:41.843000 audit[3722]: AVC avc: denied { bpf } for pid=3722 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:13:41.843000 audit: BPF prog-id=26 op=LOAD Jul 10 01:13:41.843000 audit[3722]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffc83a97a30 a2=94 a3=30 items=0 ppid=3587 pid=3722 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:13:41.843000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jul 10 01:13:41.849906 kubelet[2299]: I0710 01:13:41.848329 2299 reconciler_common.go:293] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/389b9b23-8476-4f37-b3d0-fe7b86da7d65-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Jul 10 01:13:41.849906 kubelet[2299]: I0710 01:13:41.848346 2299 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2v5rc\" (UniqueName: \"kubernetes.io/projected/389b9b23-8476-4f37-b3d0-fe7b86da7d65-kube-api-access-2v5rc\") on node \"localhost\" DevicePath \"\"" Jul 10 01:13:41.849906 kubelet[2299]: I0710 01:13:41.848352 2299 reconciler_common.go:293] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/389b9b23-8476-4f37-b3d0-fe7b86da7d65-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Jul 10 01:13:41.850000 audit[3727]: AVC avc: denied { bpf } for pid=3727 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:13:41.850000 audit[3727]: AVC avc: denied { bpf } 
for pid=3727 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:13:41.850000 audit[3727]: AVC avc: denied { perfmon } for pid=3727 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:13:41.850000 audit[3727]: AVC avc: denied { perfmon } for pid=3727 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:13:41.850000 audit[3727]: AVC avc: denied { perfmon } for pid=3727 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:13:41.850000 audit[3727]: AVC avc: denied { perfmon } for pid=3727 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:13:41.850000 audit[3727]: AVC avc: denied { perfmon } for pid=3727 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:13:41.850000 audit[3727]: AVC avc: denied { bpf } for pid=3727 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:13:41.850000 audit[3727]: AVC avc: denied { bpf } for pid=3727 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:13:41.850000 audit: BPF prog-id=27 op=LOAD Jul 10 01:13:41.850000 audit[3727]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffe74db2bf0 a2=98 a3=0 items=0 ppid=3587 pid=3727 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:13:41.850000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jul 10 01:13:41.850000 audit: BPF prog-id=27 op=UNLOAD Jul 10 01:13:41.851000 audit[3727]: AVC avc: denied { bpf } for pid=3727 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:13:41.851000 audit[3727]: AVC avc: denied { bpf } for pid=3727 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:13:41.851000 audit[3727]: AVC avc: denied { perfmon } for pid=3727 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:13:41.851000 audit[3727]: AVC avc: denied { perfmon } for pid=3727 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:13:41.851000 audit[3727]: AVC avc: denied { perfmon } for pid=3727 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:13:41.851000 audit[3727]: AVC avc: denied { perfmon } for pid=3727 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:13:41.851000 audit[3727]: AVC avc: denied { perfmon } for pid=3727 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:13:41.851000 audit[3727]: AVC avc: denied { bpf } for pid=3727 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:13:41.851000 audit[3727]: AVC avc: denied { bpf } for pid=3727 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:13:41.851000 audit: BPF prog-id=28 op=LOAD Jul 10 01:13:41.851000 audit[3727]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffe74db29e0 a2=94 a3=54428f items=0 ppid=3587 pid=3727 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:13:41.851000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jul 10 01:13:41.851000 audit: BPF prog-id=28 op=UNLOAD Jul 10 01:13:41.851000 audit[3727]: AVC avc: denied { bpf } for pid=3727 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:13:41.851000 audit[3727]: AVC avc: denied { bpf } for pid=3727 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:13:41.851000 audit[3727]: AVC avc: denied { perfmon } for pid=3727 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:13:41.851000 audit[3727]: AVC avc: denied { perfmon } for pid=3727 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:13:41.851000 audit[3727]: AVC avc: denied { perfmon } for pid=3727 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:13:41.851000 audit[3727]: AVC avc: denied { perfmon } for pid=3727 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:13:41.851000 audit[3727]: AVC avc: denied { perfmon } for pid=3727 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:13:41.851000 audit[3727]: AVC avc: denied { bpf } for pid=3727 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:13:41.851000 audit[3727]: AVC avc: denied { bpf } for pid=3727 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:13:41.851000 audit: BPF prog-id=29 op=LOAD Jul 10 01:13:41.851000 audit[3727]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffe74db2a10 a2=94 a3=2 items=0 ppid=3587 pid=3727 auid=4294967295 uid=0 gid=0 euid=0 suid=0 
fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:13:41.851000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jul 10 01:13:41.851000 audit: BPF prog-id=29 op=UNLOAD Jul 10 01:13:41.927000 audit[3727]: AVC avc: denied { bpf } for pid=3727 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:13:41.927000 audit[3727]: AVC avc: denied { bpf } for pid=3727 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:13:41.927000 audit[3727]: AVC avc: denied { perfmon } for pid=3727 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:13:41.927000 audit[3727]: AVC avc: denied { perfmon } for pid=3727 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:13:41.927000 audit[3727]: AVC avc: denied { perfmon } for pid=3727 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:13:41.927000 audit[3727]: AVC avc: denied { perfmon } for pid=3727 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:13:41.927000 audit[3727]: AVC avc: denied { perfmon } for pid=3727 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:13:41.927000 audit[3727]: AVC avc: denied { bpf } for pid=3727 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:13:41.927000 audit[3727]: AVC avc: denied { bpf } for pid=3727 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:13:41.927000 audit: BPF prog-id=30 op=LOAD Jul 10 01:13:41.927000 audit[3727]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffe74db28d0 a2=94 a3=1 items=0 ppid=3587 pid=3727 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:13:41.927000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jul 10 01:13:41.927000 audit: BPF prog-id=30 op=UNLOAD Jul 10 01:13:41.927000 audit[3727]: AVC avc: denied { perfmon } for pid=3727 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:13:41.927000 audit[3727]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=0 a1=7ffe74db29a0 a2=50 a3=7ffe74db2a80 items=0 ppid=3587 pid=3727 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:13:41.927000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jul 10 01:13:41.934000 audit[3727]: AVC avc: denied { bpf } for pid=3727 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:13:41.934000 audit[3727]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffe74db28e0 a2=28 a3=0 items=0 ppid=3587 pid=3727 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:13:41.934000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jul 10 01:13:41.934000 audit[3727]: AVC avc: denied { bpf } for pid=3727 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:13:41.934000 audit[3727]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffe74db2910 a2=28 a3=0 items=0 ppid=3587 pid=3727 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:13:41.934000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jul 10 01:13:41.934000 audit[3727]: AVC avc: denied { bpf } for pid=3727 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:13:41.934000 audit[3727]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffe74db2820 a2=28 a3=0 items=0 ppid=3587 pid=3727 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:13:41.934000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jul 10 01:13:41.934000 audit[3727]: AVC avc: denied { bpf } for pid=3727 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:13:41.934000 audit[3727]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffe74db2930 a2=28 a3=0 items=0 ppid=3587 pid=3727 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:13:41.934000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jul 10 01:13:41.934000 audit[3727]: AVC avc: denied { bpf } for pid=3727 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 
tclass=capability2 permissive=0 Jul 10 01:13:41.934000 audit[3727]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffe74db2910 a2=28 a3=0 items=0 ppid=3587 pid=3727 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:13:41.934000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jul 10 01:13:41.934000 audit[3727]: AVC avc: denied { bpf } for pid=3727 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:13:41.934000 audit[3727]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffe74db2900 a2=28 a3=0 items=0 ppid=3587 pid=3727 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:13:41.934000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jul 10 01:13:41.934000 audit[3727]: AVC avc: denied { bpf } for pid=3727 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:13:41.934000 audit[3727]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffe74db2930 a2=28 a3=0 items=0 ppid=3587 pid=3727 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:13:41.934000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jul 10 01:13:41.934000 audit[3727]: AVC avc: denied { bpf } for pid=3727 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:13:41.934000 audit[3727]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffe74db2910 a2=28 a3=0 items=0 ppid=3587 pid=3727 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:13:41.934000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jul 10 01:13:41.934000 audit[3727]: AVC avc: denied { bpf } for pid=3727 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:13:41.934000 audit[3727]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffe74db2930 a2=28 a3=0 items=0 ppid=3587 pid=3727 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:13:41.934000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jul 10 01:13:41.934000 audit[3727]: AVC avc: denied { bpf } for pid=3727 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:13:41.934000 audit[3727]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffe74db2900 a2=28 a3=0 items=0 ppid=3587 pid=3727 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:13:41.934000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jul 10 01:13:41.934000 audit[3727]: AVC avc: denied { bpf } for pid=3727 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:13:41.934000 audit[3727]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffe74db2970 a2=28 a3=0 items=0 ppid=3587 pid=3727 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:13:41.934000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jul 10 01:13:41.934000 audit[3727]: AVC avc: denied { perfmon } for pid=3727 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:13:41.934000 audit[3727]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=0 a1=7ffe74db2720 a2=50 a3=1 items=0 ppid=3587 pid=3727 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:13:41.934000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jul 10 01:13:41.934000 audit[3727]: AVC avc: denied { bpf } for pid=3727 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:13:41.934000 audit[3727]: AVC avc: denied { bpf } for pid=3727 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:13:41.934000 audit[3727]: AVC avc: denied { perfmon } for pid=3727 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:13:41.934000 audit[3727]: AVC avc: denied { perfmon } for pid=3727 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:13:41.934000 audit[3727]: AVC avc: denied { perfmon } for pid=3727 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 
tclass=capability2 permissive=0 Jul 10 01:13:41.934000 audit[3727]: AVC avc: denied { perfmon } for pid=3727 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:13:41.934000 audit[3727]: AVC avc: denied { perfmon } for pid=3727 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:13:41.934000 audit[3727]: AVC avc: denied { bpf } for pid=3727 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:13:41.934000 audit[3727]: AVC avc: denied { bpf } for pid=3727 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:13:41.934000 audit: BPF prog-id=31 op=LOAD Jul 10 01:13:41.934000 audit[3727]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffe74db2720 a2=94 a3=5 items=0 ppid=3587 pid=3727 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:13:41.934000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jul 10 01:13:41.934000 audit: BPF prog-id=31 op=UNLOAD Jul 10 01:13:41.934000 audit[3727]: AVC avc: denied { perfmon } for pid=3727 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:13:41.934000 audit[3727]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=0 a1=7ffe74db27d0 a2=50 a3=1 items=0 ppid=3587 pid=3727 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:13:41.934000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jul 10 01:13:41.934000 audit[3727]: AVC avc: denied { bpf } for pid=3727 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:13:41.934000 audit[3727]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=16 a1=7ffe74db28f0 a2=4 a3=38 items=0 ppid=3587 pid=3727 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:13:41.934000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jul 10 01:13:41.934000 audit[3727]: AVC avc: denied { bpf } for pid=3727 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:13:41.934000 audit[3727]: AVC avc: denied { bpf } for pid=3727 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:13:41.934000 
audit[3727]: AVC avc: denied { perfmon } for pid=3727 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:13:41.934000 audit[3727]: AVC avc: denied { bpf } for pid=3727 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:13:41.934000 audit[3727]: AVC avc: denied { perfmon } for pid=3727 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:13:41.934000 audit[3727]: AVC avc: denied { perfmon } for pid=3727 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:13:41.934000 audit[3727]: AVC avc: denied { perfmon } for pid=3727 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:13:41.934000 audit[3727]: AVC avc: denied { perfmon } for pid=3727 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:13:41.934000 audit[3727]: AVC avc: denied { perfmon } for pid=3727 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:13:41.934000 audit[3727]: AVC avc: denied { bpf } for pid=3727 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:13:41.934000 audit[3727]: AVC avc: denied { confidentiality } for pid=3727 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Jul 10 01:13:41.934000 audit[3727]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=5 a1=7ffe74db2940 a2=94 a3=6 items=0 ppid=3587 pid=3727 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:13:41.934000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jul 10 01:13:41.935000 audit[3727]: AVC avc: denied { bpf } for pid=3727 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:13:41.935000 audit[3727]: AVC avc: denied { bpf } for pid=3727 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:13:41.935000 audit[3727]: AVC avc: denied { perfmon } for pid=3727 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:13:41.935000 audit[3727]: AVC avc: denied { bpf } for pid=3727 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:13:41.935000 audit[3727]: AVC avc: denied { perfmon } for pid=3727 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 
tclass=capability2 permissive=0 Jul 10 01:13:41.935000 audit[3727]: AVC avc: denied { perfmon } for pid=3727 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:13:41.935000 audit[3727]: AVC avc: denied { perfmon } for pid=3727 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:13:41.935000 audit[3727]: AVC avc: denied { perfmon } for pid=3727 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:13:41.935000 audit[3727]: AVC avc: denied { perfmon } for pid=3727 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:13:41.935000 audit[3727]: AVC avc: denied { bpf } for pid=3727 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:13:41.935000 audit[3727]: AVC avc: denied { confidentiality } for pid=3727 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Jul 10 01:13:41.935000 audit[3727]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=5 a1=7ffe74db20f0 a2=94 a3=88 items=0 ppid=3587 pid=3727 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:13:41.935000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jul 10 01:13:41.935000 audit[3727]: AVC avc: denied { bpf } for pid=3727 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:13:41.935000 audit[3727]: AVC avc: denied { bpf } for pid=3727 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:13:41.935000 audit[3727]: AVC avc: denied { perfmon } for pid=3727 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:13:41.935000 audit[3727]: AVC avc: denied { bpf } for pid=3727 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:13:41.935000 audit[3727]: AVC avc: denied { perfmon } for pid=3727 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:13:41.935000 audit[3727]: AVC avc: denied { perfmon } for pid=3727 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:13:41.935000 audit[3727]: AVC avc: denied { perfmon } for pid=3727 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:13:41.935000 audit[3727]: AVC avc: denied { perfmon } for pid=3727 comm="bpftool" capability=38 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:13:41.935000 audit[3727]: AVC avc: denied { perfmon } for pid=3727 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:13:41.935000 audit[3727]: AVC avc: denied { bpf } for pid=3727 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:13:41.935000 audit[3727]: AVC avc: denied { confidentiality } for pid=3727 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Jul 10 01:13:41.935000 audit[3727]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=5 a1=7ffe74db20f0 a2=94 a3=88 items=0 ppid=3587 pid=3727 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:13:41.935000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jul 10 01:13:41.935000 audit[3727]: AVC avc: denied { bpf } for pid=3727 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:13:41.935000 audit[3727]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=7ffe74db3b20 a2=10 a3=f8f00800 items=0 ppid=3587 pid=3727 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:13:41.935000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jul 10 01:13:41.935000 audit[3727]: AVC avc: denied { bpf } for pid=3727 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:13:41.935000 audit[3727]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=7ffe74db39c0 a2=10 a3=3 items=0 ppid=3587 pid=3727 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:13:41.935000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jul 10 01:13:41.935000 audit[3727]: AVC avc: denied { bpf } for pid=3727 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:13:41.935000 audit[3727]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=7ffe74db3960 a2=10 a3=3 items=0 ppid=3587 pid=3727 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:13:41.935000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jul 10 01:13:41.935000 audit[3727]: AVC avc: denied { bpf } for pid=3727 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:13:41.935000 audit[3727]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=7ffe74db3960 a2=10 a3=7 items=0 ppid=3587 pid=3727 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:13:41.935000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jul 10 01:13:41.959000 audit: BPF prog-id=26 op=UNLOAD Jul 10 01:13:42.092000 audit[3758]: NETFILTER_CFG table=mangle:101 family=2 entries=16 op=nft_register_chain pid=3758 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jul 10 01:13:42.092000 audit[3758]: SYSCALL arch=c000003e syscall=46 success=yes exit=6868 a0=3 a1=7ffe9514f850 a2=0 a3=7ffe9514f83c items=0 ppid=3587 pid=3758 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:13:42.092000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jul 10 01:13:42.101000 audit[3760]: NETFILTER_CFG table=nat:102 family=2 entries=15 op=nft_register_chain pid=3760 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jul 10 01:13:42.101000 audit[3760]: SYSCALL arch=c000003e syscall=46 success=yes exit=5084 a0=3 a1=7ffe3f274c10 a2=0 a3=7ffe3f274bfc items=0 ppid=3587 pid=3760 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:13:42.101000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jul 10 01:13:42.106000 audit[3759]: NETFILTER_CFG table=filter:103 family=2 entries=39 op=nft_register_chain pid=3759 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jul 10 01:13:42.106000 audit[3759]: SYSCALL arch=c000003e syscall=46 success=yes exit=18968 a0=3 a1=7fff0a8e50e0 a2=0 a3=7fff0a8e50cc items=0 ppid=3587 pid=3759 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:13:42.106000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jul 10 01:13:42.111000 audit[3757]: NETFILTER_CFG table=raw:104 family=2 entries=21 op=nft_register_chain pid=3757 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jul 10 01:13:42.111000 audit[3757]: SYSCALL arch=c000003e syscall=46 success=yes exit=8452 a0=3 a1=7fffe3254e90 a2=0 a3=7fffe3254e7c items=0 ppid=3587 pid=3757 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:13:42.111000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jul 10 01:13:42.375196 kubelet[2299]: I0710 01:13:42.375120 2299 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/c3f9faf5-cc25-4483-beb9-5dea29a71367-whisker-backend-key-pair\") pod \"whisker-5bc4d9bd7d-nwwj6\" (UID: \"c3f9faf5-cc25-4483-beb9-5dea29a71367\") " pod="calico-system/whisker-5bc4d9bd7d-nwwj6" Jul 10 01:13:42.375196 kubelet[2299]: I0710 01:13:42.375165 2299 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c3f9faf5-cc25-4483-beb9-5dea29a71367-whisker-ca-bundle\") pod \"whisker-5bc4d9bd7d-nwwj6\" (UID: \"c3f9faf5-cc25-4483-beb9-5dea29a71367\") " pod="calico-system/whisker-5bc4d9bd7d-nwwj6" Jul 10 01:13:42.375196 kubelet[2299]: I0710 01:13:42.375183 2299 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9dmsf\" (UniqueName: \"kubernetes.io/projected/c3f9faf5-cc25-4483-beb9-5dea29a71367-kube-api-access-9dmsf\") pod \"whisker-5bc4d9bd7d-nwwj6\" (UID: \"c3f9faf5-cc25-4483-beb9-5dea29a71367\") " pod="calico-system/whisker-5bc4d9bd7d-nwwj6" Jul 10 01:13:42.502356 kubelet[2299]: I0710 01:13:42.502323 2299 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="389b9b23-8476-4f37-b3d0-fe7b86da7d65" path="/var/lib/kubelet/pods/389b9b23-8476-4f37-b3d0-fe7b86da7d65/volumes" Jul 10 01:13:42.572528 env[1363]: time="2025-07-10T01:13:42.572500903Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5bc4d9bd7d-nwwj6,Uid:c3f9faf5-cc25-4483-beb9-5dea29a71367,Namespace:calico-system,Attempt:0,}" Jul 10 01:13:42.745589 systemd-networkd[1114]: cali7a4d6dda698: Link UP Jul 10 01:13:42.746796 systemd-networkd[1114]: cali7a4d6dda698: Gained carrier Jul 10 01:13:42.747654 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali7a4d6dda698: link becomes ready Jul 10 01:13:42.760185 env[1363]: 2025-07-10 01:13:42.645 [INFO][3771] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--5bc4d9bd7d--nwwj6-eth0 whisker-5bc4d9bd7d- calico-system c3f9faf5-cc25-4483-beb9-5dea29a71367 927 0 2025-07-10 01:13:42 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:5bc4d9bd7d projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-5bc4d9bd7d-nwwj6 eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali7a4d6dda698 [] [] }} ContainerID="47772743ab806984f8c08f88def502ffe4f7fc6e574fb3f0d5b58c702f3e79ff" Namespace="calico-system" Pod="whisker-5bc4d9bd7d-nwwj6" WorkloadEndpoint="localhost-k8s-whisker--5bc4d9bd7d--nwwj6-" Jul 10 01:13:42.760185 env[1363]: 2025-07-10 01:13:42.647 [INFO][3771] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="47772743ab806984f8c08f88def502ffe4f7fc6e574fb3f0d5b58c702f3e79ff" Namespace="calico-system" Pod="whisker-5bc4d9bd7d-nwwj6" 
WorkloadEndpoint="localhost-k8s-whisker--5bc4d9bd7d--nwwj6-eth0" Jul 10 01:13:42.760185 env[1363]: 2025-07-10 01:13:42.681 [INFO][3785] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="47772743ab806984f8c08f88def502ffe4f7fc6e574fb3f0d5b58c702f3e79ff" HandleID="k8s-pod-network.47772743ab806984f8c08f88def502ffe4f7fc6e574fb3f0d5b58c702f3e79ff" Workload="localhost-k8s-whisker--5bc4d9bd7d--nwwj6-eth0" Jul 10 01:13:42.760185 env[1363]: 2025-07-10 01:13:42.681 [INFO][3785] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="47772743ab806984f8c08f88def502ffe4f7fc6e574fb3f0d5b58c702f3e79ff" HandleID="k8s-pod-network.47772743ab806984f8c08f88def502ffe4f7fc6e574fb3f0d5b58c702f3e79ff" Workload="localhost-k8s-whisker--5bc4d9bd7d--nwwj6-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d10a0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-5bc4d9bd7d-nwwj6", "timestamp":"2025-07-10 01:13:42.681174983 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 10 01:13:42.760185 env[1363]: 2025-07-10 01:13:42.681 [INFO][3785] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 10 01:13:42.760185 env[1363]: 2025-07-10 01:13:42.681 [INFO][3785] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 10 01:13:42.760185 env[1363]: 2025-07-10 01:13:42.681 [INFO][3785] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 10 01:13:42.760185 env[1363]: 2025-07-10 01:13:42.693 [INFO][3785] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.47772743ab806984f8c08f88def502ffe4f7fc6e574fb3f0d5b58c702f3e79ff" host="localhost" Jul 10 01:13:42.760185 env[1363]: 2025-07-10 01:13:42.708 [INFO][3785] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 10 01:13:42.760185 env[1363]: 2025-07-10 01:13:42.711 [INFO][3785] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 10 01:13:42.760185 env[1363]: 2025-07-10 01:13:42.713 [INFO][3785] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 10 01:13:42.760185 env[1363]: 2025-07-10 01:13:42.715 [INFO][3785] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 10 01:13:42.760185 env[1363]: 2025-07-10 01:13:42.715 [INFO][3785] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.47772743ab806984f8c08f88def502ffe4f7fc6e574fb3f0d5b58c702f3e79ff" host="localhost" Jul 10 01:13:42.760185 env[1363]: 2025-07-10 01:13:42.716 [INFO][3785] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.47772743ab806984f8c08f88def502ffe4f7fc6e574fb3f0d5b58c702f3e79ff Jul 10 01:13:42.760185 env[1363]: 2025-07-10 01:13:42.719 [INFO][3785] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.47772743ab806984f8c08f88def502ffe4f7fc6e574fb3f0d5b58c702f3e79ff" host="localhost" Jul 10 01:13:42.760185 env[1363]: 2025-07-10 01:13:42.725 [INFO][3785] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.47772743ab806984f8c08f88def502ffe4f7fc6e574fb3f0d5b58c702f3e79ff" host="localhost" Jul 10 01:13:42.760185 env[1363]: 2025-07-10 01:13:42.725 [INFO][3785] 
ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.47772743ab806984f8c08f88def502ffe4f7fc6e574fb3f0d5b58c702f3e79ff" host="localhost" Jul 10 01:13:42.760185 env[1363]: 2025-07-10 01:13:42.725 [INFO][3785] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 10 01:13:42.760185 env[1363]: 2025-07-10 01:13:42.725 [INFO][3785] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="47772743ab806984f8c08f88def502ffe4f7fc6e574fb3f0d5b58c702f3e79ff" HandleID="k8s-pod-network.47772743ab806984f8c08f88def502ffe4f7fc6e574fb3f0d5b58c702f3e79ff" Workload="localhost-k8s-whisker--5bc4d9bd7d--nwwj6-eth0" Jul 10 01:13:42.761950 env[1363]: 2025-07-10 01:13:42.734 [INFO][3771] cni-plugin/k8s.go 418: Populated endpoint ContainerID="47772743ab806984f8c08f88def502ffe4f7fc6e574fb3f0d5b58c702f3e79ff" Namespace="calico-system" Pod="whisker-5bc4d9bd7d-nwwj6" WorkloadEndpoint="localhost-k8s-whisker--5bc4d9bd7d--nwwj6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--5bc4d9bd7d--nwwj6-eth0", GenerateName:"whisker-5bc4d9bd7d-", Namespace:"calico-system", SelfLink:"", UID:"c3f9faf5-cc25-4483-beb9-5dea29a71367", ResourceVersion:"927", Generation:0, CreationTimestamp:time.Date(2025, time.July, 10, 1, 13, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"5bc4d9bd7d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-5bc4d9bd7d-nwwj6", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali7a4d6dda698", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 10 01:13:42.761950 env[1363]: 2025-07-10 01:13:42.734 [INFO][3771] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="47772743ab806984f8c08f88def502ffe4f7fc6e574fb3f0d5b58c702f3e79ff" Namespace="calico-system" Pod="whisker-5bc4d9bd7d-nwwj6" WorkloadEndpoint="localhost-k8s-whisker--5bc4d9bd7d--nwwj6-eth0" Jul 10 01:13:42.761950 env[1363]: 2025-07-10 01:13:42.734 [INFO][3771] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7a4d6dda698 ContainerID="47772743ab806984f8c08f88def502ffe4f7fc6e574fb3f0d5b58c702f3e79ff" Namespace="calico-system" Pod="whisker-5bc4d9bd7d-nwwj6" WorkloadEndpoint="localhost-k8s-whisker--5bc4d9bd7d--nwwj6-eth0" Jul 10 01:13:42.761950 env[1363]: 2025-07-10 01:13:42.750 [INFO][3771] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="47772743ab806984f8c08f88def502ffe4f7fc6e574fb3f0d5b58c702f3e79ff" Namespace="calico-system" Pod="whisker-5bc4d9bd7d-nwwj6" WorkloadEndpoint="localhost-k8s-whisker--5bc4d9bd7d--nwwj6-eth0" Jul 10 01:13:42.761950 env[1363]: 2025-07-10 01:13:42.750 [INFO][3771] cni-plugin/k8s.go 446: Added Mac, interface name, and active 
container ID to endpoint ContainerID="47772743ab806984f8c08f88def502ffe4f7fc6e574fb3f0d5b58c702f3e79ff" Namespace="calico-system" Pod="whisker-5bc4d9bd7d-nwwj6" WorkloadEndpoint="localhost-k8s-whisker--5bc4d9bd7d--nwwj6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--5bc4d9bd7d--nwwj6-eth0", GenerateName:"whisker-5bc4d9bd7d-", Namespace:"calico-system", SelfLink:"", UID:"c3f9faf5-cc25-4483-beb9-5dea29a71367", ResourceVersion:"927", Generation:0, CreationTimestamp:time.Date(2025, time.July, 10, 1, 13, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"5bc4d9bd7d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"47772743ab806984f8c08f88def502ffe4f7fc6e574fb3f0d5b58c702f3e79ff", Pod:"whisker-5bc4d9bd7d-nwwj6", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali7a4d6dda698", MAC:"46:b3:e5:fd:be:23", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 10 01:13:42.761950 env[1363]: 2025-07-10 01:13:42.758 [INFO][3771] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="47772743ab806984f8c08f88def502ffe4f7fc6e574fb3f0d5b58c702f3e79ff" Namespace="calico-system" Pod="whisker-5bc4d9bd7d-nwwj6" WorkloadEndpoint="localhost-k8s-whisker--5bc4d9bd7d--nwwj6-eth0" Jul 10 01:13:42.772561 env[1363]: time="2025-07-10T01:13:42.772506019Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 10 01:13:42.772561 env[1363]: time="2025-07-10T01:13:42.772540230Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 10 01:13:42.772714 env[1363]: time="2025-07-10T01:13:42.772547628Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 10 01:13:42.772882 env[1363]: time="2025-07-10T01:13:42.772858916Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/47772743ab806984f8c08f88def502ffe4f7fc6e574fb3f0d5b58c702f3e79ff pid=3808 runtime=io.containerd.runc.v2 Jul 10 01:13:42.792630 systemd[1]: run-containerd-runc-k8s.io-47772743ab806984f8c08f88def502ffe4f7fc6e574fb3f0d5b58c702f3e79ff-runc.aeyGGu.mount: Deactivated successfully. 
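Note: in the audit records above, syscall=321 is bpf(2) on x86_64, capability=38 and capability=39 are CAP_PERFMON and CAP_BPF, and exit=-22 is -EINVAL; the PROCTITLE fields are the invoked command lines, hex-encoded with NUL-separated arguments. A minimal decoding sketch (Python, illustrative only; the hex values are copied from the records above, split at argument boundaries for readability):

    # Decode audit PROCTITLE hex into the original command line.
    # Argument boundaries inside the value are NUL (0x00) bytes.
    proctitles = [
        # bpftool invocation from the records above
        "627066746F6F6C00" "2D2D6A736F6E00" "2D2D70726574747900"
        "70726F6700" "73686F7700" "70696E6E656400"
        "2F7379732F66732F6270662F63616C69636F2F7864702F"
        "70726566696C7465725F76315F63616C69636F5F746D705F41",
        # iptables-nft-restore invocation from the records above
        "69707461626C65732D6E66742D726573746F726500" "2D2D6E6F666C75736800"
        "2D2D766572626F736500" "2D2D7761697400" "313000"
        "2D2D776169742D696E74657276616C00" "3530303030",
    ]
    for hexval in proctitles:
        argv = bytes.fromhex(hexval).split(b"\x00")
        print(" ".join(arg.decode() for arg in argv))
    # -> bpftool --json --pretty prog show pinned /sys/fs/bpf/calico/xdp/prefilter_v1_calico_tmp_A
    # -> iptables-nft-restore --noflush --verbose --wait 10 --wait-interval 50000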
Jul 10 01:13:42.803207 systemd-resolved[1303]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 10 01:13:42.781000 audit[3813]: NETFILTER_CFG table=filter:105 family=2 entries=59 op=nft_register_chain pid=3813 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jul 10 01:13:42.781000 audit[3813]: SYSCALL arch=c000003e syscall=46 success=yes exit=35860 a0=3 a1=7fff5b896040 a2=0 a3=7fff5b89602c items=0 ppid=3587 pid=3813 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:13:42.781000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jul 10 01:13:42.827492 env[1363]: time="2025-07-10T01:13:42.827467113Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5bc4d9bd7d-nwwj6,Uid:c3f9faf5-cc25-4483-beb9-5dea29a71367,Namespace:calico-system,Attempt:0,} returns sandbox id \"47772743ab806984f8c08f88def502ffe4f7fc6e574fb3f0d5b58c702f3e79ff\"" Jul 10 01:13:42.840290 env[1363]: time="2025-07-10T01:13:42.840267178Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\"" Jul 10 01:13:43.494856 env[1363]: time="2025-07-10T01:13:43.494828944Z" level=info msg="StopPodSandbox for \"d08209f28426fb10a90356c7c2f30ce87cefef9ed58d075482b630394972a62b\"" Jul 10 01:13:43.556690 env[1363]: 2025-07-10 01:13:43.534 [INFO][3860] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d08209f28426fb10a90356c7c2f30ce87cefef9ed58d075482b630394972a62b" Jul 10 01:13:43.556690 env[1363]: 2025-07-10 01:13:43.534 [INFO][3860] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="d08209f28426fb10a90356c7c2f30ce87cefef9ed58d075482b630394972a62b" iface="eth0" netns="/var/run/netns/cni-302055b3-55b6-e24a-d72d-bcf234858579" Jul 10 01:13:43.556690 env[1363]: 2025-07-10 01:13:43.534 [INFO][3860] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="d08209f28426fb10a90356c7c2f30ce87cefef9ed58d075482b630394972a62b" iface="eth0" netns="/var/run/netns/cni-302055b3-55b6-e24a-d72d-bcf234858579" Jul 10 01:13:43.556690 env[1363]: 2025-07-10 01:13:43.534 [INFO][3860] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="d08209f28426fb10a90356c7c2f30ce87cefef9ed58d075482b630394972a62b" iface="eth0" netns="/var/run/netns/cni-302055b3-55b6-e24a-d72d-bcf234858579" Jul 10 01:13:43.556690 env[1363]: 2025-07-10 01:13:43.534 [INFO][3860] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d08209f28426fb10a90356c7c2f30ce87cefef9ed58d075482b630394972a62b" Jul 10 01:13:43.556690 env[1363]: 2025-07-10 01:13:43.534 [INFO][3860] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d08209f28426fb10a90356c7c2f30ce87cefef9ed58d075482b630394972a62b" Jul 10 01:13:43.556690 env[1363]: 2025-07-10 01:13:43.549 [INFO][3867] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d08209f28426fb10a90356c7c2f30ce87cefef9ed58d075482b630394972a62b" HandleID="k8s-pod-network.d08209f28426fb10a90356c7c2f30ce87cefef9ed58d075482b630394972a62b" Workload="localhost-k8s-coredns--7c65d6cfc9--4k5ld-eth0" Jul 10 01:13:43.556690 env[1363]: 2025-07-10 01:13:43.549 [INFO][3867] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Jul 10 01:13:43.556690 env[1363]: 2025-07-10 01:13:43.549 [INFO][3867] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 10 01:13:43.556690 env[1363]: 2025-07-10 01:13:43.553 [WARNING][3867] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="d08209f28426fb10a90356c7c2f30ce87cefef9ed58d075482b630394972a62b" HandleID="k8s-pod-network.d08209f28426fb10a90356c7c2f30ce87cefef9ed58d075482b630394972a62b" Workload="localhost-k8s-coredns--7c65d6cfc9--4k5ld-eth0" Jul 10 01:13:43.556690 env[1363]: 2025-07-10 01:13:43.553 [INFO][3867] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d08209f28426fb10a90356c7c2f30ce87cefef9ed58d075482b630394972a62b" HandleID="k8s-pod-network.d08209f28426fb10a90356c7c2f30ce87cefef9ed58d075482b630394972a62b" Workload="localhost-k8s-coredns--7c65d6cfc9--4k5ld-eth0" Jul 10 01:13:43.556690 env[1363]: 2025-07-10 01:13:43.554 [INFO][3867] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 10 01:13:43.556690 env[1363]: 2025-07-10 01:13:43.555 [INFO][3860] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="d08209f28426fb10a90356c7c2f30ce87cefef9ed58d075482b630394972a62b" Jul 10 01:13:43.557254 env[1363]: time="2025-07-10T01:13:43.557224519Z" level=info msg="TearDown network for sandbox \"d08209f28426fb10a90356c7c2f30ce87cefef9ed58d075482b630394972a62b\" successfully" Jul 10 01:13:43.557314 env[1363]: time="2025-07-10T01:13:43.557302627Z" level=info msg="StopPodSandbox for \"d08209f28426fb10a90356c7c2f30ce87cefef9ed58d075482b630394972a62b\" returns successfully" Jul 10 01:13:43.557804 env[1363]: time="2025-07-10T01:13:43.557785011Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-4k5ld,Uid:a29ef6dc-4246-436d-87dd-9c8e96247aeb,Namespace:kube-system,Attempt:1,}" Jul 10 01:13:43.558940 systemd-networkd[1114]: vxlan.calico: Gained IPv6LL Jul 10 01:13:43.623246 systemd[1]: run-netns-cni\x2d302055b3\x2d55b6\x2de24a\x2dd72d\x2dbcf234858579.mount: Deactivated successfully. 
Jul 10 01:13:43.639469 systemd-networkd[1114]: cali7006602a141: Link UP Jul 10 01:13:43.641753 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Jul 10 01:13:43.641808 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali7006602a141: link becomes ready Jul 10 01:13:43.641887 systemd-networkd[1114]: cali7006602a141: Gained carrier Jul 10 01:13:43.652842 env[1363]: 2025-07-10 01:13:43.590 [INFO][3874] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7c65d6cfc9--4k5ld-eth0 coredns-7c65d6cfc9- kube-system a29ef6dc-4246-436d-87dd-9c8e96247aeb 934 0 2025-07-10 01:13:01 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7c65d6cfc9-4k5ld eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali7006602a141 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="5e9aedbb1d15e1d7bd8b79126017424346117b11833100260ee33d8092673319" Namespace="kube-system" Pod="coredns-7c65d6cfc9-4k5ld" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--4k5ld-" Jul 10 01:13:43.652842 env[1363]: 2025-07-10 01:13:43.590 [INFO][3874] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="5e9aedbb1d15e1d7bd8b79126017424346117b11833100260ee33d8092673319" Namespace="kube-system" Pod="coredns-7c65d6cfc9-4k5ld" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--4k5ld-eth0" Jul 10 01:13:43.652842 env[1363]: 2025-07-10 01:13:43.608 [INFO][3887] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5e9aedbb1d15e1d7bd8b79126017424346117b11833100260ee33d8092673319" HandleID="k8s-pod-network.5e9aedbb1d15e1d7bd8b79126017424346117b11833100260ee33d8092673319" Workload="localhost-k8s-coredns--7c65d6cfc9--4k5ld-eth0" Jul 10 01:13:43.652842 env[1363]: 2025-07-10 01:13:43.608 [INFO][3887] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="5e9aedbb1d15e1d7bd8b79126017424346117b11833100260ee33d8092673319" HandleID="k8s-pod-network.5e9aedbb1d15e1d7bd8b79126017424346117b11833100260ee33d8092673319" Workload="localhost-k8s-coredns--7c65d6cfc9--4k5ld-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002c5250), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7c65d6cfc9-4k5ld", "timestamp":"2025-07-10 01:13:43.608746859 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 10 01:13:43.652842 env[1363]: 2025-07-10 01:13:43.608 [INFO][3887] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 10 01:13:43.652842 env[1363]: 2025-07-10 01:13:43.608 [INFO][3887] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 10 01:13:43.652842 env[1363]: 2025-07-10 01:13:43.608 [INFO][3887] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 10 01:13:43.652842 env[1363]: 2025-07-10 01:13:43.613 [INFO][3887] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.5e9aedbb1d15e1d7bd8b79126017424346117b11833100260ee33d8092673319" host="localhost" Jul 10 01:13:43.652842 env[1363]: 2025-07-10 01:13:43.619 [INFO][3887] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 10 01:13:43.652842 env[1363]: 2025-07-10 01:13:43.625 [INFO][3887] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 10 01:13:43.652842 env[1363]: 2025-07-10 01:13:43.626 [INFO][3887] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 10 01:13:43.652842 env[1363]: 2025-07-10 01:13:43.627 [INFO][3887] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 10 01:13:43.652842 env[1363]: 2025-07-10 01:13:43.627 [INFO][3887] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.5e9aedbb1d15e1d7bd8b79126017424346117b11833100260ee33d8092673319" host="localhost" Jul 10 01:13:43.652842 env[1363]: 2025-07-10 01:13:43.628 [INFO][3887] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.5e9aedbb1d15e1d7bd8b79126017424346117b11833100260ee33d8092673319 Jul 10 01:13:43.652842 env[1363]: 2025-07-10 01:13:43.630 [INFO][3887] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.5e9aedbb1d15e1d7bd8b79126017424346117b11833100260ee33d8092673319" host="localhost" Jul 10 01:13:43.652842 env[1363]: 2025-07-10 01:13:43.634 [INFO][3887] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.5e9aedbb1d15e1d7bd8b79126017424346117b11833100260ee33d8092673319" host="localhost" Jul 10 01:13:43.652842 env[1363]: 2025-07-10 01:13:43.634 [INFO][3887] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.5e9aedbb1d15e1d7bd8b79126017424346117b11833100260ee33d8092673319" host="localhost" Jul 10 01:13:43.652842 env[1363]: 2025-07-10 01:13:43.635 [INFO][3887] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 10 01:13:43.652842 env[1363]: 2025-07-10 01:13:43.635 [INFO][3887] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="5e9aedbb1d15e1d7bd8b79126017424346117b11833100260ee33d8092673319" HandleID="k8s-pod-network.5e9aedbb1d15e1d7bd8b79126017424346117b11833100260ee33d8092673319" Workload="localhost-k8s-coredns--7c65d6cfc9--4k5ld-eth0" Jul 10 01:13:43.654596 env[1363]: 2025-07-10 01:13:43.636 [INFO][3874] cni-plugin/k8s.go 418: Populated endpoint ContainerID="5e9aedbb1d15e1d7bd8b79126017424346117b11833100260ee33d8092673319" Namespace="kube-system" Pod="coredns-7c65d6cfc9-4k5ld" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--4k5ld-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--4k5ld-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"a29ef6dc-4246-436d-87dd-9c8e96247aeb", ResourceVersion:"934", Generation:0, CreationTimestamp:time.Date(2025, time.July, 10, 1, 13, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7c65d6cfc9-4k5ld", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali7006602a141", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 10 01:13:43.654596 env[1363]: 2025-07-10 01:13:43.636 [INFO][3874] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="5e9aedbb1d15e1d7bd8b79126017424346117b11833100260ee33d8092673319" Namespace="kube-system" Pod="coredns-7c65d6cfc9-4k5ld" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--4k5ld-eth0" Jul 10 01:13:43.654596 env[1363]: 2025-07-10 01:13:43.636 [INFO][3874] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7006602a141 ContainerID="5e9aedbb1d15e1d7bd8b79126017424346117b11833100260ee33d8092673319" Namespace="kube-system" Pod="coredns-7c65d6cfc9-4k5ld" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--4k5ld-eth0" Jul 10 01:13:43.654596 env[1363]: 2025-07-10 01:13:43.642 [INFO][3874] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="5e9aedbb1d15e1d7bd8b79126017424346117b11833100260ee33d8092673319" Namespace="kube-system" Pod="coredns-7c65d6cfc9-4k5ld" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--4k5ld-eth0" Jul 10 01:13:43.654596 env[1363]: 2025-07-10 01:13:43.644 
[INFO][3874] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="5e9aedbb1d15e1d7bd8b79126017424346117b11833100260ee33d8092673319" Namespace="kube-system" Pod="coredns-7c65d6cfc9-4k5ld" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--4k5ld-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--4k5ld-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"a29ef6dc-4246-436d-87dd-9c8e96247aeb", ResourceVersion:"934", Generation:0, CreationTimestamp:time.Date(2025, time.July, 10, 1, 13, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"5e9aedbb1d15e1d7bd8b79126017424346117b11833100260ee33d8092673319", Pod:"coredns-7c65d6cfc9-4k5ld", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali7006602a141", MAC:"12:40:37:57:3b:91", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 10 01:13:43.654596 env[1363]: 2025-07-10 01:13:43.651 [INFO][3874] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="5e9aedbb1d15e1d7bd8b79126017424346117b11833100260ee33d8092673319" Namespace="kube-system" Pod="coredns-7c65d6cfc9-4k5ld" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--4k5ld-eth0" Jul 10 01:13:43.659815 env[1363]: time="2025-07-10T01:13:43.659690832Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 10 01:13:43.659815 env[1363]: time="2025-07-10T01:13:43.659712898Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 10 01:13:43.659815 env[1363]: time="2025-07-10T01:13:43.659723271Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 10 01:13:43.660079 env[1363]: time="2025-07-10T01:13:43.659859230Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/5e9aedbb1d15e1d7bd8b79126017424346117b11833100260ee33d8092673319 pid=3909 runtime=io.containerd.runc.v2 Jul 10 01:13:43.691072 systemd-resolved[1303]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 10 01:13:43.690000 audit[3938]: NETFILTER_CFG table=filter:106 family=2 entries=42 op=nft_register_chain pid=3938 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jul 10 01:13:43.690000 audit[3938]: SYSCALL arch=c000003e syscall=46 success=yes exit=22552 a0=3 a1=7ffd60045c60 a2=0 a3=7ffd60045c4c items=0 ppid=3587 pid=3938 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:13:43.690000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jul 10 01:13:43.713483 env[1363]: time="2025-07-10T01:13:43.713457530Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-4k5ld,Uid:a29ef6dc-4246-436d-87dd-9c8e96247aeb,Namespace:kube-system,Attempt:1,} returns sandbox id \"5e9aedbb1d15e1d7bd8b79126017424346117b11833100260ee33d8092673319\"" Jul 10 01:13:43.716267 env[1363]: time="2025-07-10T01:13:43.716245036Z" level=info msg="CreateContainer within sandbox \"5e9aedbb1d15e1d7bd8b79126017424346117b11833100260ee33d8092673319\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 10 01:13:43.740936 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1070267660.mount: Deactivated successfully. 
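Note: the ipam/ipam.go messages above (acquire host-wide IPAM lock, look up host affinities, try affinity for 192.168.88.128/26, load the block, assign from it, create a handle, write the block back) trace Calico's block-affinity allocation. The following is a simplified sketch of that flow, assuming a toy in-memory block rather than Calico's actual datastore or Go types:

    import ipaddress
    import threading

    # Toy model of one affine IPAM block (192.168.88.128/26 in the log above).
    class Block:
        def __init__(self, cidr):
            self.cidr = ipaddress.ip_network(cidr)
            self.allocations = {}          # ip -> handle

    ipam_lock = threading.Lock()           # stands in for the host-wide IPAM lock
    block = Block("192.168.88.128/26")

    def auto_assign(handle_id):
        """Assign one IPv4 address from the host's affine block."""
        with ipam_lock:                    # "Acquired host-wide IPAM lock"
            # Affinity for the block is assumed confirmed and the block loaded.
            for ip in block.cidr.hosts():  # "Attempting to assign 1 addresses from block"
                if ip not in block.allocations:
                    block.allocations[ip] = handle_id   # create handle, write block to claim the IP
                    return f"{ip}/26"      # "Successfully claimed IPs"
            return None                    # block exhausted; real Calico would look for another block

    print(auto_assign("k8s-pod-network.47772743ab80..."))  # first call yields 192.168.88.129/26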
Jul 10 01:13:43.744332 env[1363]: time="2025-07-10T01:13:43.744310722Z" level=info msg="CreateContainer within sandbox \"5e9aedbb1d15e1d7bd8b79126017424346117b11833100260ee33d8092673319\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"9f554c7a1a1192bf8f33530ae0b697d908ab3fedeb5044bf3f3dc34eb3189402\"" Jul 10 01:13:43.745279 env[1363]: time="2025-07-10T01:13:43.745240993Z" level=info msg="StartContainer for \"9f554c7a1a1192bf8f33530ae0b697d908ab3fedeb5044bf3f3dc34eb3189402\"" Jul 10 01:13:43.781821 env[1363]: time="2025-07-10T01:13:43.781798059Z" level=info msg="StartContainer for \"9f554c7a1a1192bf8f33530ae0b697d908ab3fedeb5044bf3f3dc34eb3189402\" returns successfully" Jul 10 01:13:44.429738 env[1363]: time="2025-07-10T01:13:44.429704669Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/whisker:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 01:13:44.431035 env[1363]: time="2025-07-10T01:13:44.431017631Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:eb8f512acf9402730da120a7b0d47d3d9d451b56e6e5eb8bad53ab24f926f954,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 01:13:44.431948 env[1363]: time="2025-07-10T01:13:44.431931118Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/whisker:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 01:13:44.434241 env[1363]: time="2025-07-10T01:13:44.432667472Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 01:13:44.434241 env[1363]: time="2025-07-10T01:13:44.433084812Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\" returns image reference \"sha256:eb8f512acf9402730da120a7b0d47d3d9d451b56e6e5eb8bad53ab24f926f954\"" Jul 10 01:13:44.437308 env[1363]: time="2025-07-10T01:13:44.437284674Z" level=info msg="CreateContainer within sandbox \"47772743ab806984f8c08f88def502ffe4f7fc6e574fb3f0d5b58c702f3e79ff\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Jul 10 01:13:44.453912 env[1363]: time="2025-07-10T01:13:44.453867918Z" level=info msg="CreateContainer within sandbox \"47772743ab806984f8c08f88def502ffe4f7fc6e574fb3f0d5b58c702f3e79ff\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"846639043e3e3375edb5ca693ab2e7bd950e8cc7bfe6f9bd5620ee47769cd79c\"" Jul 10 01:13:44.457835 env[1363]: time="2025-07-10T01:13:44.457807970Z" level=info msg="StartContainer for \"846639043e3e3375edb5ca693ab2e7bd950e8cc7bfe6f9bd5620ee47769cd79c\"" Jul 10 01:13:44.495348 env[1363]: time="2025-07-10T01:13:44.495325063Z" level=info msg="StopPodSandbox for \"14835a1d23b75e28ba1fef0944ee52d74bf2ce2c1e062de723f2121f6c8271e5\"" Jul 10 01:13:44.496841 env[1363]: time="2025-07-10T01:13:44.496822590Z" level=info msg="StopPodSandbox for \"a4581136627fe17a05f5104c5de93fa80c47188a9b894aba2bdf6734c99e3096\"" Jul 10 01:13:44.517603 env[1363]: time="2025-07-10T01:13:44.517579040Z" level=info msg="StartContainer for \"846639043e3e3375edb5ca693ab2e7bd950e8cc7bfe6f9bd5620ee47769cd79c\" returns successfully" Jul 10 01:13:44.523024 env[1363]: time="2025-07-10T01:13:44.522998325Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\"" Jul 10 01:13:44.625556 env[1363]: 2025-07-10 01:13:44.568 [INFO][4034] 
cni-plugin/k8s.go 640: Cleaning up netns ContainerID="14835a1d23b75e28ba1fef0944ee52d74bf2ce2c1e062de723f2121f6c8271e5" Jul 10 01:13:44.625556 env[1363]: 2025-07-10 01:13:44.568 [INFO][4034] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="14835a1d23b75e28ba1fef0944ee52d74bf2ce2c1e062de723f2121f6c8271e5" iface="eth0" netns="/var/run/netns/cni-761fbcd0-c4a0-20c0-0c7c-dba917fd852f" Jul 10 01:13:44.625556 env[1363]: 2025-07-10 01:13:44.568 [INFO][4034] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="14835a1d23b75e28ba1fef0944ee52d74bf2ce2c1e062de723f2121f6c8271e5" iface="eth0" netns="/var/run/netns/cni-761fbcd0-c4a0-20c0-0c7c-dba917fd852f" Jul 10 01:13:44.625556 env[1363]: 2025-07-10 01:13:44.569 [INFO][4034] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="14835a1d23b75e28ba1fef0944ee52d74bf2ce2c1e062de723f2121f6c8271e5" iface="eth0" netns="/var/run/netns/cni-761fbcd0-c4a0-20c0-0c7c-dba917fd852f" Jul 10 01:13:44.625556 env[1363]: 2025-07-10 01:13:44.569 [INFO][4034] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="14835a1d23b75e28ba1fef0944ee52d74bf2ce2c1e062de723f2121f6c8271e5" Jul 10 01:13:44.625556 env[1363]: 2025-07-10 01:13:44.569 [INFO][4034] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="14835a1d23b75e28ba1fef0944ee52d74bf2ce2c1e062de723f2121f6c8271e5" Jul 10 01:13:44.625556 env[1363]: 2025-07-10 01:13:44.612 [INFO][4052] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="14835a1d23b75e28ba1fef0944ee52d74bf2ce2c1e062de723f2121f6c8271e5" HandleID="k8s-pod-network.14835a1d23b75e28ba1fef0944ee52d74bf2ce2c1e062de723f2121f6c8271e5" Workload="localhost-k8s-coredns--7c65d6cfc9--snhl5-eth0" Jul 10 01:13:44.625556 env[1363]: 2025-07-10 01:13:44.612 [INFO][4052] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 10 01:13:44.625556 env[1363]: 2025-07-10 01:13:44.612 [INFO][4052] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 10 01:13:44.625556 env[1363]: 2025-07-10 01:13:44.616 [WARNING][4052] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="14835a1d23b75e28ba1fef0944ee52d74bf2ce2c1e062de723f2121f6c8271e5" HandleID="k8s-pod-network.14835a1d23b75e28ba1fef0944ee52d74bf2ce2c1e062de723f2121f6c8271e5" Workload="localhost-k8s-coredns--7c65d6cfc9--snhl5-eth0" Jul 10 01:13:44.625556 env[1363]: 2025-07-10 01:13:44.616 [INFO][4052] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="14835a1d23b75e28ba1fef0944ee52d74bf2ce2c1e062de723f2121f6c8271e5" HandleID="k8s-pod-network.14835a1d23b75e28ba1fef0944ee52d74bf2ce2c1e062de723f2121f6c8271e5" Workload="localhost-k8s-coredns--7c65d6cfc9--snhl5-eth0" Jul 10 01:13:44.625556 env[1363]: 2025-07-10 01:13:44.617 [INFO][4052] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 10 01:13:44.625556 env[1363]: 2025-07-10 01:13:44.620 [INFO][4034] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="14835a1d23b75e28ba1fef0944ee52d74bf2ce2c1e062de723f2121f6c8271e5" Jul 10 01:13:44.627497 systemd[1]: run-netns-cni\x2d761fbcd0\x2dc4a0\x2d20c0\x2d0c7c\x2ddba917fd852f.mount: Deactivated successfully. 
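Note: each "Calico CNI IPAM assigned addresses" message (ipam_plugin.go 283) in this journal carries the workload endpoint together with the address it received. A small sketch for pulling those pairs out of a dump like this one; the file name and helper are hypothetical, not part of the log:

    import re

    # Matches e.g.: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[]
    #               ContainerID="..." HandleID="..." Workload="localhost-k8s-coredns--7c65d6cfc9--4k5ld-eth0"
    ASSIGN_RE = re.compile(
        r'Calico CNI IPAM assigned addresses IPv4=\[([^\]]*)\].*?Workload="([^"]+)"'
    )

    def ip_assignments(journal_text):
        """Yield (workload endpoint, IPv4 list) pairs found in a journal dump."""
        for match in ASSIGN_RE.finditer(journal_text):
            ips, workload = match.group(1), match.group(2)
            yield workload, [ip for ip in ips.split(",") if ip]

    # Example usage (hypothetical file name):
    # with open("node.journal.txt") as fh:
    #     for workload, ips in ip_assignments(fh.read()):
    #         print(workload, ips)
    # -> localhost-k8s-whisker--5bc4d9bd7d--nwwj6-eth0 ['192.168.88.129/26']
    # -> localhost-k8s-coredns--7c65d6cfc9--4k5ld-eth0 ['192.168.88.130/26']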
Jul 10 01:13:44.627845 env[1363]: time="2025-07-10T01:13:44.627823913Z" level=info msg="TearDown network for sandbox \"14835a1d23b75e28ba1fef0944ee52d74bf2ce2c1e062de723f2121f6c8271e5\" successfully" Jul 10 01:13:44.628049 env[1363]: time="2025-07-10T01:13:44.628038419Z" level=info msg="StopPodSandbox for \"14835a1d23b75e28ba1fef0944ee52d74bf2ce2c1e062de723f2121f6c8271e5\" returns successfully" Jul 10 01:13:44.628540 env[1363]: time="2025-07-10T01:13:44.628522478Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-snhl5,Uid:3459c244-a1ae-43bc-ad86-239a6e665757,Namespace:kube-system,Attempt:1,}" Jul 10 01:13:44.633583 env[1363]: 2025-07-10 01:13:44.583 [INFO][4035] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a4581136627fe17a05f5104c5de93fa80c47188a9b894aba2bdf6734c99e3096" Jul 10 01:13:44.633583 env[1363]: 2025-07-10 01:13:44.584 [INFO][4035] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="a4581136627fe17a05f5104c5de93fa80c47188a9b894aba2bdf6734c99e3096" iface="eth0" netns="/var/run/netns/cni-dec1ce6b-afcb-7f68-4a59-5a2f18b7b3fc" Jul 10 01:13:44.633583 env[1363]: 2025-07-10 01:13:44.584 [INFO][4035] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="a4581136627fe17a05f5104c5de93fa80c47188a9b894aba2bdf6734c99e3096" iface="eth0" netns="/var/run/netns/cni-dec1ce6b-afcb-7f68-4a59-5a2f18b7b3fc" Jul 10 01:13:44.633583 env[1363]: 2025-07-10 01:13:44.586 [INFO][4035] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="a4581136627fe17a05f5104c5de93fa80c47188a9b894aba2bdf6734c99e3096" iface="eth0" netns="/var/run/netns/cni-dec1ce6b-afcb-7f68-4a59-5a2f18b7b3fc" Jul 10 01:13:44.633583 env[1363]: 2025-07-10 01:13:44.586 [INFO][4035] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a4581136627fe17a05f5104c5de93fa80c47188a9b894aba2bdf6734c99e3096" Jul 10 01:13:44.633583 env[1363]: 2025-07-10 01:13:44.586 [INFO][4035] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a4581136627fe17a05f5104c5de93fa80c47188a9b894aba2bdf6734c99e3096" Jul 10 01:13:44.633583 env[1363]: 2025-07-10 01:13:44.617 [INFO][4059] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a4581136627fe17a05f5104c5de93fa80c47188a9b894aba2bdf6734c99e3096" HandleID="k8s-pod-network.a4581136627fe17a05f5104c5de93fa80c47188a9b894aba2bdf6734c99e3096" Workload="localhost-k8s-csi--node--driver--b48c6-eth0" Jul 10 01:13:44.633583 env[1363]: 2025-07-10 01:13:44.617 [INFO][4059] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 10 01:13:44.633583 env[1363]: 2025-07-10 01:13:44.617 [INFO][4059] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 10 01:13:44.633583 env[1363]: 2025-07-10 01:13:44.630 [WARNING][4059] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a4581136627fe17a05f5104c5de93fa80c47188a9b894aba2bdf6734c99e3096" HandleID="k8s-pod-network.a4581136627fe17a05f5104c5de93fa80c47188a9b894aba2bdf6734c99e3096" Workload="localhost-k8s-csi--node--driver--b48c6-eth0" Jul 10 01:13:44.633583 env[1363]: 2025-07-10 01:13:44.630 [INFO][4059] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a4581136627fe17a05f5104c5de93fa80c47188a9b894aba2bdf6734c99e3096" HandleID="k8s-pod-network.a4581136627fe17a05f5104c5de93fa80c47188a9b894aba2bdf6734c99e3096" Workload="localhost-k8s-csi--node--driver--b48c6-eth0" Jul 10 01:13:44.633583 env[1363]: 2025-07-10 01:13:44.631 [INFO][4059] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 10 01:13:44.633583 env[1363]: 2025-07-10 01:13:44.632 [INFO][4035] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="a4581136627fe17a05f5104c5de93fa80c47188a9b894aba2bdf6734c99e3096" Jul 10 01:13:44.638468 systemd[1]: run-netns-cni\x2ddec1ce6b\x2dafcb\x2d7f68\x2d4a59\x2d5a2f18b7b3fc.mount: Deactivated successfully. Jul 10 01:13:44.642172 env[1363]: time="2025-07-10T01:13:44.642130403Z" level=info msg="TearDown network for sandbox \"a4581136627fe17a05f5104c5de93fa80c47188a9b894aba2bdf6734c99e3096\" successfully" Jul 10 01:13:44.642260 env[1363]: time="2025-07-10T01:13:44.642237805Z" level=info msg="StopPodSandbox for \"a4581136627fe17a05f5104c5de93fa80c47188a9b894aba2bdf6734c99e3096\" returns successfully" Jul 10 01:13:44.642881 env[1363]: time="2025-07-10T01:13:44.642867868Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-b48c6,Uid:c15a8f19-7056-4133-9713-c590210e2422,Namespace:calico-system,Attempt:1,}" Jul 10 01:13:44.646949 systemd-networkd[1114]: cali7a4d6dda698: Gained IPv6LL Jul 10 01:13:44.728575 systemd-networkd[1114]: calie0bf60675d7: Link UP Jul 10 01:13:44.731812 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Jul 10 01:13:44.731871 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): calie0bf60675d7: link becomes ready Jul 10 01:13:44.732006 systemd-networkd[1114]: calie0bf60675d7: Gained carrier Jul 10 01:13:44.742417 env[1363]: 2025-07-10 01:13:44.666 [INFO][4067] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7c65d6cfc9--snhl5-eth0 coredns-7c65d6cfc9- kube-system 3459c244-a1ae-43bc-ad86-239a6e665757 950 0 2025-07-10 01:13:01 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7c65d6cfc9-snhl5 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calie0bf60675d7 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="47b065192ffd0b7504649af3406bb653c598c34d33430dd9e03fcdcb34aca714" Namespace="kube-system" Pod="coredns-7c65d6cfc9-snhl5" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--snhl5-" Jul 10 01:13:44.742417 env[1363]: 2025-07-10 01:13:44.666 [INFO][4067] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="47b065192ffd0b7504649af3406bb653c598c34d33430dd9e03fcdcb34aca714" Namespace="kube-system" Pod="coredns-7c65d6cfc9-snhl5" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--snhl5-eth0" Jul 10 01:13:44.742417 env[1363]: 2025-07-10 01:13:44.691 [INFO][4092] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="47b065192ffd0b7504649af3406bb653c598c34d33430dd9e03fcdcb34aca714" 
HandleID="k8s-pod-network.47b065192ffd0b7504649af3406bb653c598c34d33430dd9e03fcdcb34aca714" Workload="localhost-k8s-coredns--7c65d6cfc9--snhl5-eth0" Jul 10 01:13:44.742417 env[1363]: 2025-07-10 01:13:44.691 [INFO][4092] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="47b065192ffd0b7504649af3406bb653c598c34d33430dd9e03fcdcb34aca714" HandleID="k8s-pod-network.47b065192ffd0b7504649af3406bb653c598c34d33430dd9e03fcdcb34aca714" Workload="localhost-k8s-coredns--7c65d6cfc9--snhl5-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5600), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7c65d6cfc9-snhl5", "timestamp":"2025-07-10 01:13:44.691752691 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 10 01:13:44.742417 env[1363]: 2025-07-10 01:13:44.691 [INFO][4092] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 10 01:13:44.742417 env[1363]: 2025-07-10 01:13:44.691 [INFO][4092] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 10 01:13:44.742417 env[1363]: 2025-07-10 01:13:44.692 [INFO][4092] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 10 01:13:44.742417 env[1363]: 2025-07-10 01:13:44.696 [INFO][4092] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.47b065192ffd0b7504649af3406bb653c598c34d33430dd9e03fcdcb34aca714" host="localhost" Jul 10 01:13:44.742417 env[1363]: 2025-07-10 01:13:44.699 [INFO][4092] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 10 01:13:44.742417 env[1363]: 2025-07-10 01:13:44.709 [INFO][4092] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 10 01:13:44.742417 env[1363]: 2025-07-10 01:13:44.711 [INFO][4092] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 10 01:13:44.742417 env[1363]: 2025-07-10 01:13:44.712 [INFO][4092] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 10 01:13:44.742417 env[1363]: 2025-07-10 01:13:44.712 [INFO][4092] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.47b065192ffd0b7504649af3406bb653c598c34d33430dd9e03fcdcb34aca714" host="localhost" Jul 10 01:13:44.742417 env[1363]: 2025-07-10 01:13:44.713 [INFO][4092] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.47b065192ffd0b7504649af3406bb653c598c34d33430dd9e03fcdcb34aca714 Jul 10 01:13:44.742417 env[1363]: 2025-07-10 01:13:44.716 [INFO][4092] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.47b065192ffd0b7504649af3406bb653c598c34d33430dd9e03fcdcb34aca714" host="localhost" Jul 10 01:13:44.742417 env[1363]: 2025-07-10 01:13:44.720 [INFO][4092] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.47b065192ffd0b7504649af3406bb653c598c34d33430dd9e03fcdcb34aca714" host="localhost" Jul 10 01:13:44.742417 env[1363]: 2025-07-10 01:13:44.720 [INFO][4092] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.47b065192ffd0b7504649af3406bb653c598c34d33430dd9e03fcdcb34aca714" host="localhost" Jul 10 01:13:44.742417 env[1363]: 2025-07-10 01:13:44.720 [INFO][4092] ipam/ipam_plugin.go 374: Released 
host-wide IPAM lock. Jul 10 01:13:44.742417 env[1363]: 2025-07-10 01:13:44.720 [INFO][4092] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="47b065192ffd0b7504649af3406bb653c598c34d33430dd9e03fcdcb34aca714" HandleID="k8s-pod-network.47b065192ffd0b7504649af3406bb653c598c34d33430dd9e03fcdcb34aca714" Workload="localhost-k8s-coredns--7c65d6cfc9--snhl5-eth0" Jul 10 01:13:44.745057 env[1363]: 2025-07-10 01:13:44.725 [INFO][4067] cni-plugin/k8s.go 418: Populated endpoint ContainerID="47b065192ffd0b7504649af3406bb653c598c34d33430dd9e03fcdcb34aca714" Namespace="kube-system" Pod="coredns-7c65d6cfc9-snhl5" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--snhl5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--snhl5-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"3459c244-a1ae-43bc-ad86-239a6e665757", ResourceVersion:"950", Generation:0, CreationTimestamp:time.Date(2025, time.July, 10, 1, 13, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7c65d6cfc9-snhl5", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie0bf60675d7", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 10 01:13:44.745057 env[1363]: 2025-07-10 01:13:44.725 [INFO][4067] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="47b065192ffd0b7504649af3406bb653c598c34d33430dd9e03fcdcb34aca714" Namespace="kube-system" Pod="coredns-7c65d6cfc9-snhl5" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--snhl5-eth0" Jul 10 01:13:44.745057 env[1363]: 2025-07-10 01:13:44.725 [INFO][4067] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie0bf60675d7 ContainerID="47b065192ffd0b7504649af3406bb653c598c34d33430dd9e03fcdcb34aca714" Namespace="kube-system" Pod="coredns-7c65d6cfc9-snhl5" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--snhl5-eth0" Jul 10 01:13:44.745057 env[1363]: 2025-07-10 01:13:44.732 [INFO][4067] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="47b065192ffd0b7504649af3406bb653c598c34d33430dd9e03fcdcb34aca714" Namespace="kube-system" Pod="coredns-7c65d6cfc9-snhl5" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--snhl5-eth0" Jul 10 01:13:44.745057 env[1363]: 2025-07-10 
01:13:44.733 [INFO][4067] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="47b065192ffd0b7504649af3406bb653c598c34d33430dd9e03fcdcb34aca714" Namespace="kube-system" Pod="coredns-7c65d6cfc9-snhl5" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--snhl5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--snhl5-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"3459c244-a1ae-43bc-ad86-239a6e665757", ResourceVersion:"950", Generation:0, CreationTimestamp:time.Date(2025, time.July, 10, 1, 13, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"47b065192ffd0b7504649af3406bb653c598c34d33430dd9e03fcdcb34aca714", Pod:"coredns-7c65d6cfc9-snhl5", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie0bf60675d7", MAC:"2a:d3:17:11:f3:4c", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 10 01:13:44.745057 env[1363]: 2025-07-10 01:13:44.738 [INFO][4067] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="47b065192ffd0b7504649af3406bb653c598c34d33430dd9e03fcdcb34aca714" Namespace="kube-system" Pod="coredns-7c65d6cfc9-snhl5" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--snhl5-eth0" Jul 10 01:13:44.755332 env[1363]: time="2025-07-10T01:13:44.755289775Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 10 01:13:44.755465 env[1363]: time="2025-07-10T01:13:44.755450363Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 10 01:13:44.755540 env[1363]: time="2025-07-10T01:13:44.755527059Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 10 01:13:44.755905 env[1363]: time="2025-07-10T01:13:44.755863944Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/47b065192ffd0b7504649af3406bb653c598c34d33430dd9e03fcdcb34aca714 pid=4121 runtime=io.containerd.runc.v2 Jul 10 01:13:44.763924 kernel: kauditd_printk_skb: 559 callbacks suppressed Jul 10 01:13:44.768077 kernel: audit: type=1325 audit(1752110024.758:408): table=filter:107 family=2 entries=42 op=nft_register_chain pid=4134 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jul 10 01:13:44.768129 kernel: audit: type=1300 audit(1752110024.758:408): arch=c000003e syscall=46 success=yes exit=22008 a0=3 a1=7ffd7ffa7df0 a2=0 a3=7ffd7ffa7ddc items=0 ppid=3587 pid=4134 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:13:44.758000 audit[4134]: NETFILTER_CFG table=filter:107 family=2 entries=42 op=nft_register_chain pid=4134 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jul 10 01:13:44.758000 audit[4134]: SYSCALL arch=c000003e syscall=46 success=yes exit=22008 a0=3 a1=7ffd7ffa7df0 a2=0 a3=7ffd7ffa7ddc items=0 ppid=3587 pid=4134 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:13:44.758000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jul 10 01:13:44.772044 kernel: audit: type=1327 audit(1752110024.758:408): proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jul 10 01:13:44.778866 systemd-resolved[1303]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 10 01:13:44.829717 kubelet[2299]: I0710 01:13:44.819387 2299 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-4k5ld" podStartSLOduration=43.813848159 podStartE2EDuration="43.813848159s" podCreationTimestamp="2025-07-10 01:13:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-10 01:13:44.797146344 +0000 UTC m=+44.665573235" watchObservedRunningTime="2025-07-10 01:13:44.813848159 +0000 UTC m=+44.682275044" Jul 10 01:13:44.847123 env[1363]: time="2025-07-10T01:13:44.846086687Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-snhl5,Uid:3459c244-a1ae-43bc-ad86-239a6e665757,Namespace:kube-system,Attempt:1,} returns sandbox id \"47b065192ffd0b7504649af3406bb653c598c34d33430dd9e03fcdcb34aca714\"" Jul 10 01:13:44.853341 env[1363]: time="2025-07-10T01:13:44.853308320Z" level=info msg="CreateContainer within sandbox \"47b065192ffd0b7504649af3406bb653c598c34d33430dd9e03fcdcb34aca714\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 10 01:13:44.859787 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali8f5594511f5: link becomes ready Jul 10 01:13:44.858563 systemd-networkd[1114]: cali8f5594511f5: Link UP Jul 10 01:13:44.860397 systemd-networkd[1114]: cali8f5594511f5: Gained carrier Jul 10 01:13:44.865000 audit[4159]: NETFILTER_CFG 
table=filter:108 family=2 entries=17 op=nft_register_rule pid=4159 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 10 01:13:44.865000 audit[4159]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffd6d84ddc0 a2=0 a3=7ffd6d84ddac items=0 ppid=2398 pid=4159 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:13:44.873916 kernel: audit: type=1325 audit(1752110024.865:409): table=filter:108 family=2 entries=17 op=nft_register_rule pid=4159 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 10 01:13:44.874034 kernel: audit: type=1300 audit(1752110024.865:409): arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffd6d84ddc0 a2=0 a3=7ffd6d84ddac items=0 ppid=2398 pid=4159 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:13:44.874064 kernel: audit: type=1327 audit(1752110024.865:409): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 10 01:13:44.865000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 10 01:13:44.876000 audit[4159]: NETFILTER_CFG table=nat:109 family=2 entries=35 op=nft_register_chain pid=4159 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 10 01:13:44.880719 kernel: audit: type=1325 audit(1752110024.876:410): table=nat:109 family=2 entries=35 op=nft_register_chain pid=4159 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 10 01:13:44.881013 env[1363]: time="2025-07-10T01:13:44.880971341Z" level=info msg="CreateContainer within sandbox \"47b065192ffd0b7504649af3406bb653c598c34d33430dd9e03fcdcb34aca714\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"2c9a852303586e6248b136709c2283dd38b0cb347056e0f9d8aa77a5eb662d30\"" Jul 10 01:13:44.887243 kernel: audit: type=1300 audit(1752110024.876:410): arch=c000003e syscall=46 success=yes exit=14196 a0=3 a1=7ffd6d84ddc0 a2=0 a3=7ffd6d84ddac items=0 ppid=2398 pid=4159 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:13:44.891957 kernel: audit: type=1327 audit(1752110024.876:410): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 10 01:13:44.876000 audit[4159]: SYSCALL arch=c000003e syscall=46 success=yes exit=14196 a0=3 a1=7ffd6d84ddc0 a2=0 a3=7ffd6d84ddac items=0 ppid=2398 pid=4159 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:13:44.876000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 10 01:13:44.892898 env[1363]: time="2025-07-10T01:13:44.881592210Z" level=info msg="StartContainer for \"2c9a852303586e6248b136709c2283dd38b0cb347056e0f9d8aa77a5eb662d30\"" Jul 10 01:13:44.892898 env[1363]: 2025-07-10 01:13:44.676 [INFO][4076] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint 
projectcalico.org/v3} {localhost-k8s-csi--node--driver--b48c6-eth0 csi-node-driver- calico-system c15a8f19-7056-4133-9713-c590210e2422 951 0 2025-07-10 01:13:14 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:57bd658777 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-b48c6 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali8f5594511f5 [] [] }} ContainerID="14c2ee1c94348d21a299e1321eccbc9c20d2f650419a9d6496db1ff04cd68bc4" Namespace="calico-system" Pod="csi-node-driver-b48c6" WorkloadEndpoint="localhost-k8s-csi--node--driver--b48c6-" Jul 10 01:13:44.892898 env[1363]: 2025-07-10 01:13:44.676 [INFO][4076] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="14c2ee1c94348d21a299e1321eccbc9c20d2f650419a9d6496db1ff04cd68bc4" Namespace="calico-system" Pod="csi-node-driver-b48c6" WorkloadEndpoint="localhost-k8s-csi--node--driver--b48c6-eth0" Jul 10 01:13:44.892898 env[1363]: 2025-07-10 01:13:44.713 [INFO][4097] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="14c2ee1c94348d21a299e1321eccbc9c20d2f650419a9d6496db1ff04cd68bc4" HandleID="k8s-pod-network.14c2ee1c94348d21a299e1321eccbc9c20d2f650419a9d6496db1ff04cd68bc4" Workload="localhost-k8s-csi--node--driver--b48c6-eth0" Jul 10 01:13:44.892898 env[1363]: 2025-07-10 01:13:44.714 [INFO][4097] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="14c2ee1c94348d21a299e1321eccbc9c20d2f650419a9d6496db1ff04cd68bc4" HandleID="k8s-pod-network.14c2ee1c94348d21a299e1321eccbc9c20d2f650419a9d6496db1ff04cd68bc4" Workload="localhost-k8s-csi--node--driver--b48c6-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003254a0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-b48c6", "timestamp":"2025-07-10 01:13:44.713953723 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 10 01:13:44.892898 env[1363]: 2025-07-10 01:13:44.714 [INFO][4097] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 10 01:13:44.892898 env[1363]: 2025-07-10 01:13:44.720 [INFO][4097] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 10 01:13:44.892898 env[1363]: 2025-07-10 01:13:44.720 [INFO][4097] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 10 01:13:44.892898 env[1363]: 2025-07-10 01:13:44.798 [INFO][4097] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.14c2ee1c94348d21a299e1321eccbc9c20d2f650419a9d6496db1ff04cd68bc4" host="localhost" Jul 10 01:13:44.892898 env[1363]: 2025-07-10 01:13:44.806 [INFO][4097] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 10 01:13:44.892898 env[1363]: 2025-07-10 01:13:44.830 [INFO][4097] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 10 01:13:44.892898 env[1363]: 2025-07-10 01:13:44.835 [INFO][4097] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 10 01:13:44.892898 env[1363]: 2025-07-10 01:13:44.839 [INFO][4097] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 10 01:13:44.892898 env[1363]: 2025-07-10 01:13:44.839 [INFO][4097] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.14c2ee1c94348d21a299e1321eccbc9c20d2f650419a9d6496db1ff04cd68bc4" host="localhost" Jul 10 01:13:44.892898 env[1363]: 2025-07-10 01:13:44.841 [INFO][4097] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.14c2ee1c94348d21a299e1321eccbc9c20d2f650419a9d6496db1ff04cd68bc4 Jul 10 01:13:44.892898 env[1363]: 2025-07-10 01:13:44.844 [INFO][4097] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.14c2ee1c94348d21a299e1321eccbc9c20d2f650419a9d6496db1ff04cd68bc4" host="localhost" Jul 10 01:13:44.892898 env[1363]: 2025-07-10 01:13:44.851 [INFO][4097] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.14c2ee1c94348d21a299e1321eccbc9c20d2f650419a9d6496db1ff04cd68bc4" host="localhost" Jul 10 01:13:44.892898 env[1363]: 2025-07-10 01:13:44.851 [INFO][4097] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.14c2ee1c94348d21a299e1321eccbc9c20d2f650419a9d6496db1ff04cd68bc4" host="localhost" Jul 10 01:13:44.892898 env[1363]: 2025-07-10 01:13:44.851 [INFO][4097] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 10 01:13:44.892898 env[1363]: 2025-07-10 01:13:44.851 [INFO][4097] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="14c2ee1c94348d21a299e1321eccbc9c20d2f650419a9d6496db1ff04cd68bc4" HandleID="k8s-pod-network.14c2ee1c94348d21a299e1321eccbc9c20d2f650419a9d6496db1ff04cd68bc4" Workload="localhost-k8s-csi--node--driver--b48c6-eth0" Jul 10 01:13:44.894424 env[1363]: 2025-07-10 01:13:44.853 [INFO][4076] cni-plugin/k8s.go 418: Populated endpoint ContainerID="14c2ee1c94348d21a299e1321eccbc9c20d2f650419a9d6496db1ff04cd68bc4" Namespace="calico-system" Pod="csi-node-driver-b48c6" WorkloadEndpoint="localhost-k8s-csi--node--driver--b48c6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--b48c6-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"c15a8f19-7056-4133-9713-c590210e2422", ResourceVersion:"951", Generation:0, CreationTimestamp:time.Date(2025, time.July, 10, 1, 13, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"57bd658777", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-b48c6", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali8f5594511f5", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 10 01:13:44.894424 env[1363]: 2025-07-10 01:13:44.853 [INFO][4076] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="14c2ee1c94348d21a299e1321eccbc9c20d2f650419a9d6496db1ff04cd68bc4" Namespace="calico-system" Pod="csi-node-driver-b48c6" WorkloadEndpoint="localhost-k8s-csi--node--driver--b48c6-eth0" Jul 10 01:13:44.894424 env[1363]: 2025-07-10 01:13:44.853 [INFO][4076] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8f5594511f5 ContainerID="14c2ee1c94348d21a299e1321eccbc9c20d2f650419a9d6496db1ff04cd68bc4" Namespace="calico-system" Pod="csi-node-driver-b48c6" WorkloadEndpoint="localhost-k8s-csi--node--driver--b48c6-eth0" Jul 10 01:13:44.894424 env[1363]: 2025-07-10 01:13:44.859 [INFO][4076] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="14c2ee1c94348d21a299e1321eccbc9c20d2f650419a9d6496db1ff04cd68bc4" Namespace="calico-system" Pod="csi-node-driver-b48c6" WorkloadEndpoint="localhost-k8s-csi--node--driver--b48c6-eth0" Jul 10 01:13:44.894424 env[1363]: 2025-07-10 01:13:44.876 [INFO][4076] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="14c2ee1c94348d21a299e1321eccbc9c20d2f650419a9d6496db1ff04cd68bc4" Namespace="calico-system" Pod="csi-node-driver-b48c6" WorkloadEndpoint="localhost-k8s-csi--node--driver--b48c6-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--b48c6-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"c15a8f19-7056-4133-9713-c590210e2422", ResourceVersion:"951", Generation:0, CreationTimestamp:time.Date(2025, time.July, 10, 1, 13, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"57bd658777", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"14c2ee1c94348d21a299e1321eccbc9c20d2f650419a9d6496db1ff04cd68bc4", Pod:"csi-node-driver-b48c6", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali8f5594511f5", MAC:"36:6e:48:c2:f4:5e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 10 01:13:44.894424 env[1363]: 2025-07-10 01:13:44.888 [INFO][4076] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="14c2ee1c94348d21a299e1321eccbc9c20d2f650419a9d6496db1ff04cd68bc4" Namespace="calico-system" Pod="csi-node-driver-b48c6" WorkloadEndpoint="localhost-k8s-csi--node--driver--b48c6-eth0" Jul 10 01:13:44.908000 audit[4193]: NETFILTER_CFG table=filter:110 family=2 entries=40 op=nft_register_chain pid=4193 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jul 10 01:13:44.915410 kernel: audit: type=1325 audit(1752110024.908:411): table=filter:110 family=2 entries=40 op=nft_register_chain pid=4193 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jul 10 01:13:44.908000 audit[4193]: SYSCALL arch=c000003e syscall=46 success=yes exit=20748 a0=3 a1=7fff4a5b1bd0 a2=0 a3=7fff4a5b1bbc items=0 ppid=3587 pid=4193 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:13:44.908000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jul 10 01:13:44.921956 env[1363]: time="2025-07-10T01:13:44.919912966Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 10 01:13:44.921956 env[1363]: time="2025-07-10T01:13:44.919940561Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 10 01:13:44.921956 env[1363]: time="2025-07-10T01:13:44.919947784Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 10 01:13:44.921956 env[1363]: time="2025-07-10T01:13:44.920044440Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/14c2ee1c94348d21a299e1321eccbc9c20d2f650419a9d6496db1ff04cd68bc4 pid=4191 runtime=io.containerd.runc.v2 Jul 10 01:13:44.923000 audit[4216]: NETFILTER_CFG table=filter:111 family=2 entries=14 op=nft_register_rule pid=4216 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 10 01:13:44.923000 audit[4216]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffde23edb90 a2=0 a3=7ffde23edb7c items=0 ppid=2398 pid=4216 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:13:44.923000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 10 01:13:44.929000 audit[4216]: NETFILTER_CFG table=nat:112 family=2 entries=20 op=nft_register_rule pid=4216 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 10 01:13:44.929000 audit[4216]: SYSCALL arch=c000003e syscall=46 success=yes exit=5772 a0=3 a1=7ffde23edb90 a2=0 a3=7ffde23edb7c items=0 ppid=2398 pid=4216 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:13:44.929000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 10 01:13:44.935368 env[1363]: time="2025-07-10T01:13:44.935316365Z" level=info msg="StartContainer for \"2c9a852303586e6248b136709c2283dd38b0cb347056e0f9d8aa77a5eb662d30\" returns successfully" Jul 10 01:13:44.944793 systemd-resolved[1303]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 10 01:13:44.959197 env[1363]: time="2025-07-10T01:13:44.954970508Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-b48c6,Uid:c15a8f19-7056-4133-9713-c590210e2422,Namespace:calico-system,Attempt:1,} returns sandbox id \"14c2ee1c94348d21a299e1321eccbc9c20d2f650419a9d6496db1ff04cd68bc4\"" Jul 10 01:13:45.350809 systemd-networkd[1114]: cali7006602a141: Gained IPv6LL Jul 10 01:13:45.498157 env[1363]: time="2025-07-10T01:13:45.498125525Z" level=info msg="StopPodSandbox for \"eb3cfca5f1219fd7b024127b39867c352bbcd18cadb3f774b2a9b88ac71868e6\"" Jul 10 01:13:45.499501 env[1363]: time="2025-07-10T01:13:45.498374182Z" level=info msg="StopPodSandbox for \"604f17610fdd074a3911c340cf576eef8419f4b26a87fad8a6a4345c0cd39943\"" Jul 10 01:13:45.580409 env[1363]: 2025-07-10 01:13:45.539 [INFO][4267] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="604f17610fdd074a3911c340cf576eef8419f4b26a87fad8a6a4345c0cd39943" Jul 10 01:13:45.580409 env[1363]: 2025-07-10 01:13:45.539 [INFO][4267] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="604f17610fdd074a3911c340cf576eef8419f4b26a87fad8a6a4345c0cd39943" iface="eth0" netns="/var/run/netns/cni-79f8129d-243d-95c7-5742-b2f2b0c85ae0" Jul 10 01:13:45.580409 env[1363]: 2025-07-10 01:13:45.539 [INFO][4267] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="604f17610fdd074a3911c340cf576eef8419f4b26a87fad8a6a4345c0cd39943" iface="eth0" netns="/var/run/netns/cni-79f8129d-243d-95c7-5742-b2f2b0c85ae0" Jul 10 01:13:45.580409 env[1363]: 2025-07-10 01:13:45.539 [INFO][4267] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="604f17610fdd074a3911c340cf576eef8419f4b26a87fad8a6a4345c0cd39943" iface="eth0" netns="/var/run/netns/cni-79f8129d-243d-95c7-5742-b2f2b0c85ae0" Jul 10 01:13:45.580409 env[1363]: 2025-07-10 01:13:45.539 [INFO][4267] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="604f17610fdd074a3911c340cf576eef8419f4b26a87fad8a6a4345c0cd39943" Jul 10 01:13:45.580409 env[1363]: 2025-07-10 01:13:45.539 [INFO][4267] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="604f17610fdd074a3911c340cf576eef8419f4b26a87fad8a6a4345c0cd39943" Jul 10 01:13:45.580409 env[1363]: 2025-07-10 01:13:45.567 [INFO][4280] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="604f17610fdd074a3911c340cf576eef8419f4b26a87fad8a6a4345c0cd39943" HandleID="k8s-pod-network.604f17610fdd074a3911c340cf576eef8419f4b26a87fad8a6a4345c0cd39943" Workload="localhost-k8s-calico--apiserver--6d44674bc4--w2f48-eth0" Jul 10 01:13:45.580409 env[1363]: 2025-07-10 01:13:45.567 [INFO][4280] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 10 01:13:45.580409 env[1363]: 2025-07-10 01:13:45.567 [INFO][4280] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 10 01:13:45.580409 env[1363]: 2025-07-10 01:13:45.574 [WARNING][4280] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="604f17610fdd074a3911c340cf576eef8419f4b26a87fad8a6a4345c0cd39943" HandleID="k8s-pod-network.604f17610fdd074a3911c340cf576eef8419f4b26a87fad8a6a4345c0cd39943" Workload="localhost-k8s-calico--apiserver--6d44674bc4--w2f48-eth0" Jul 10 01:13:45.580409 env[1363]: 2025-07-10 01:13:45.574 [INFO][4280] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="604f17610fdd074a3911c340cf576eef8419f4b26a87fad8a6a4345c0cd39943" HandleID="k8s-pod-network.604f17610fdd074a3911c340cf576eef8419f4b26a87fad8a6a4345c0cd39943" Workload="localhost-k8s-calico--apiserver--6d44674bc4--w2f48-eth0" Jul 10 01:13:45.580409 env[1363]: 2025-07-10 01:13:45.575 [INFO][4280] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 10 01:13:45.580409 env[1363]: 2025-07-10 01:13:45.578 [INFO][4267] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="604f17610fdd074a3911c340cf576eef8419f4b26a87fad8a6a4345c0cd39943" Jul 10 01:13:45.581839 env[1363]: time="2025-07-10T01:13:45.581152133Z" level=info msg="TearDown network for sandbox \"604f17610fdd074a3911c340cf576eef8419f4b26a87fad8a6a4345c0cd39943\" successfully" Jul 10 01:13:45.581839 env[1363]: time="2025-07-10T01:13:45.581173521Z" level=info msg="StopPodSandbox for \"604f17610fdd074a3911c340cf576eef8419f4b26a87fad8a6a4345c0cd39943\" returns successfully" Jul 10 01:13:45.582595 env[1363]: time="2025-07-10T01:13:45.582576426Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6d44674bc4-w2f48,Uid:8e8146e9-6407-49b7-8cef-e26dac385734,Namespace:calico-apiserver,Attempt:1,}" Jul 10 01:13:45.610565 env[1363]: 2025-07-10 01:13:45.547 [INFO][4268] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="eb3cfca5f1219fd7b024127b39867c352bbcd18cadb3f774b2a9b88ac71868e6" Jul 10 01:13:45.610565 env[1363]: 2025-07-10 01:13:45.547 [INFO][4268] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="eb3cfca5f1219fd7b024127b39867c352bbcd18cadb3f774b2a9b88ac71868e6" iface="eth0" netns="/var/run/netns/cni-e90f30c7-8ac0-c814-d0a0-135f0578a0bf" Jul 10 01:13:45.610565 env[1363]: 2025-07-10 01:13:45.547 [INFO][4268] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="eb3cfca5f1219fd7b024127b39867c352bbcd18cadb3f774b2a9b88ac71868e6" iface="eth0" netns="/var/run/netns/cni-e90f30c7-8ac0-c814-d0a0-135f0578a0bf" Jul 10 01:13:45.610565 env[1363]: 2025-07-10 01:13:45.547 [INFO][4268] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="eb3cfca5f1219fd7b024127b39867c352bbcd18cadb3f774b2a9b88ac71868e6" iface="eth0" netns="/var/run/netns/cni-e90f30c7-8ac0-c814-d0a0-135f0578a0bf" Jul 10 01:13:45.610565 env[1363]: 2025-07-10 01:13:45.547 [INFO][4268] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="eb3cfca5f1219fd7b024127b39867c352bbcd18cadb3f774b2a9b88ac71868e6" Jul 10 01:13:45.610565 env[1363]: 2025-07-10 01:13:45.547 [INFO][4268] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="eb3cfca5f1219fd7b024127b39867c352bbcd18cadb3f774b2a9b88ac71868e6" Jul 10 01:13:45.610565 env[1363]: 2025-07-10 01:13:45.598 [INFO][4286] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="eb3cfca5f1219fd7b024127b39867c352bbcd18cadb3f774b2a9b88ac71868e6" HandleID="k8s-pod-network.eb3cfca5f1219fd7b024127b39867c352bbcd18cadb3f774b2a9b88ac71868e6" Workload="localhost-k8s-calico--kube--controllers--5477ff879d--j2p5q-eth0" Jul 10 01:13:45.610565 env[1363]: 2025-07-10 01:13:45.598 [INFO][4286] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 10 01:13:45.610565 env[1363]: 2025-07-10 01:13:45.598 [INFO][4286] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 10 01:13:45.610565 env[1363]: 2025-07-10 01:13:45.604 [WARNING][4286] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="eb3cfca5f1219fd7b024127b39867c352bbcd18cadb3f774b2a9b88ac71868e6" HandleID="k8s-pod-network.eb3cfca5f1219fd7b024127b39867c352bbcd18cadb3f774b2a9b88ac71868e6" Workload="localhost-k8s-calico--kube--controllers--5477ff879d--j2p5q-eth0" Jul 10 01:13:45.610565 env[1363]: 2025-07-10 01:13:45.604 [INFO][4286] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="eb3cfca5f1219fd7b024127b39867c352bbcd18cadb3f774b2a9b88ac71868e6" HandleID="k8s-pod-network.eb3cfca5f1219fd7b024127b39867c352bbcd18cadb3f774b2a9b88ac71868e6" Workload="localhost-k8s-calico--kube--controllers--5477ff879d--j2p5q-eth0" Jul 10 01:13:45.610565 env[1363]: 2025-07-10 01:13:45.606 [INFO][4286] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 10 01:13:45.610565 env[1363]: 2025-07-10 01:13:45.608 [INFO][4268] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="eb3cfca5f1219fd7b024127b39867c352bbcd18cadb3f774b2a9b88ac71868e6" Jul 10 01:13:45.611019 env[1363]: time="2025-07-10T01:13:45.610996312Z" level=info msg="TearDown network for sandbox \"eb3cfca5f1219fd7b024127b39867c352bbcd18cadb3f774b2a9b88ac71868e6\" successfully" Jul 10 01:13:45.612248 env[1363]: time="2025-07-10T01:13:45.611082363Z" level=info msg="StopPodSandbox for \"eb3cfca5f1219fd7b024127b39867c352bbcd18cadb3f774b2a9b88ac71868e6\" returns successfully" Jul 10 01:13:45.617427 env[1363]: time="2025-07-10T01:13:45.617397151Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5477ff879d-j2p5q,Uid:5f01bcaa-ff1c-4bd5-988b-d3c60c6abdcc,Namespace:calico-system,Attempt:1,}" Jul 10 01:13:45.625409 systemd[1]: run-netns-cni\x2d79f8129d\x2d243d\x2d95c7\x2d5742\x2db2f2b0c85ae0.mount: Deactivated successfully. Jul 10 01:13:45.625485 systemd[1]: run-netns-cni\x2de90f30c7\x2d8ac0\x2dc814\x2dd0a0\x2d135f0578a0bf.mount: Deactivated successfully. 
Jul 10 01:13:45.741186 systemd-networkd[1114]: cali96674cf1f80: Link UP Jul 10 01:13:45.743535 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Jul 10 01:13:45.743723 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali96674cf1f80: link becomes ready Jul 10 01:13:45.743868 systemd-networkd[1114]: cali96674cf1f80: Gained carrier Jul 10 01:13:45.756956 env[1363]: 2025-07-10 01:13:45.662 [INFO][4294] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--6d44674bc4--w2f48-eth0 calico-apiserver-6d44674bc4- calico-apiserver 8e8146e9-6407-49b7-8cef-e26dac385734 972 0 2025-07-10 01:13:12 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6d44674bc4 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-6d44674bc4-w2f48 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali96674cf1f80 [] [] }} ContainerID="131d31244e534a733a530103ddea3666cd2eb72fb0933d89a095d6d044cd52d3" Namespace="calico-apiserver" Pod="calico-apiserver-6d44674bc4-w2f48" WorkloadEndpoint="localhost-k8s-calico--apiserver--6d44674bc4--w2f48-" Jul 10 01:13:45.756956 env[1363]: 2025-07-10 01:13:45.663 [INFO][4294] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="131d31244e534a733a530103ddea3666cd2eb72fb0933d89a095d6d044cd52d3" Namespace="calico-apiserver" Pod="calico-apiserver-6d44674bc4-w2f48" WorkloadEndpoint="localhost-k8s-calico--apiserver--6d44674bc4--w2f48-eth0" Jul 10 01:13:45.756956 env[1363]: 2025-07-10 01:13:45.699 [INFO][4317] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="131d31244e534a733a530103ddea3666cd2eb72fb0933d89a095d6d044cd52d3" HandleID="k8s-pod-network.131d31244e534a733a530103ddea3666cd2eb72fb0933d89a095d6d044cd52d3" Workload="localhost-k8s-calico--apiserver--6d44674bc4--w2f48-eth0" Jul 10 01:13:45.756956 env[1363]: 2025-07-10 01:13:45.699 [INFO][4317] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="131d31244e534a733a530103ddea3666cd2eb72fb0933d89a095d6d044cd52d3" HandleID="k8s-pod-network.131d31244e534a733a530103ddea3666cd2eb72fb0933d89a095d6d044cd52d3" Workload="localhost-k8s-calico--apiserver--6d44674bc4--w2f48-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002c8ff0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-6d44674bc4-w2f48", "timestamp":"2025-07-10 01:13:45.69952697 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 10 01:13:45.756956 env[1363]: 2025-07-10 01:13:45.699 [INFO][4317] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 10 01:13:45.756956 env[1363]: 2025-07-10 01:13:45.699 [INFO][4317] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 10 01:13:45.756956 env[1363]: 2025-07-10 01:13:45.699 [INFO][4317] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 10 01:13:45.756956 env[1363]: 2025-07-10 01:13:45.708 [INFO][4317] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.131d31244e534a733a530103ddea3666cd2eb72fb0933d89a095d6d044cd52d3" host="localhost" Jul 10 01:13:45.756956 env[1363]: 2025-07-10 01:13:45.711 [INFO][4317] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 10 01:13:45.756956 env[1363]: 2025-07-10 01:13:45.714 [INFO][4317] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 10 01:13:45.756956 env[1363]: 2025-07-10 01:13:45.716 [INFO][4317] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 10 01:13:45.756956 env[1363]: 2025-07-10 01:13:45.719 [INFO][4317] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 10 01:13:45.756956 env[1363]: 2025-07-10 01:13:45.719 [INFO][4317] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.131d31244e534a733a530103ddea3666cd2eb72fb0933d89a095d6d044cd52d3" host="localhost" Jul 10 01:13:45.756956 env[1363]: 2025-07-10 01:13:45.720 [INFO][4317] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.131d31244e534a733a530103ddea3666cd2eb72fb0933d89a095d6d044cd52d3 Jul 10 01:13:45.756956 env[1363]: 2025-07-10 01:13:45.723 [INFO][4317] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.131d31244e534a733a530103ddea3666cd2eb72fb0933d89a095d6d044cd52d3" host="localhost" Jul 10 01:13:45.756956 env[1363]: 2025-07-10 01:13:45.736 [INFO][4317] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.131d31244e534a733a530103ddea3666cd2eb72fb0933d89a095d6d044cd52d3" host="localhost" Jul 10 01:13:45.756956 env[1363]: 2025-07-10 01:13:45.736 [INFO][4317] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.131d31244e534a733a530103ddea3666cd2eb72fb0933d89a095d6d044cd52d3" host="localhost" Jul 10 01:13:45.756956 env[1363]: 2025-07-10 01:13:45.736 [INFO][4317] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 10 01:13:45.756956 env[1363]: 2025-07-10 01:13:45.736 [INFO][4317] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="131d31244e534a733a530103ddea3666cd2eb72fb0933d89a095d6d044cd52d3" HandleID="k8s-pod-network.131d31244e534a733a530103ddea3666cd2eb72fb0933d89a095d6d044cd52d3" Workload="localhost-k8s-calico--apiserver--6d44674bc4--w2f48-eth0" Jul 10 01:13:45.757433 env[1363]: 2025-07-10 01:13:45.738 [INFO][4294] cni-plugin/k8s.go 418: Populated endpoint ContainerID="131d31244e534a733a530103ddea3666cd2eb72fb0933d89a095d6d044cd52d3" Namespace="calico-apiserver" Pod="calico-apiserver-6d44674bc4-w2f48" WorkloadEndpoint="localhost-k8s-calico--apiserver--6d44674bc4--w2f48-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6d44674bc4--w2f48-eth0", GenerateName:"calico-apiserver-6d44674bc4-", Namespace:"calico-apiserver", SelfLink:"", UID:"8e8146e9-6407-49b7-8cef-e26dac385734", ResourceVersion:"972", Generation:0, CreationTimestamp:time.Date(2025, time.July, 10, 1, 13, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6d44674bc4", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-6d44674bc4-w2f48", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali96674cf1f80", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 10 01:13:45.757433 env[1363]: 2025-07-10 01:13:45.738 [INFO][4294] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="131d31244e534a733a530103ddea3666cd2eb72fb0933d89a095d6d044cd52d3" Namespace="calico-apiserver" Pod="calico-apiserver-6d44674bc4-w2f48" WorkloadEndpoint="localhost-k8s-calico--apiserver--6d44674bc4--w2f48-eth0" Jul 10 01:13:45.757433 env[1363]: 2025-07-10 01:13:45.738 [INFO][4294] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali96674cf1f80 ContainerID="131d31244e534a733a530103ddea3666cd2eb72fb0933d89a095d6d044cd52d3" Namespace="calico-apiserver" Pod="calico-apiserver-6d44674bc4-w2f48" WorkloadEndpoint="localhost-k8s-calico--apiserver--6d44674bc4--w2f48-eth0" Jul 10 01:13:45.757433 env[1363]: 2025-07-10 01:13:45.743 [INFO][4294] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="131d31244e534a733a530103ddea3666cd2eb72fb0933d89a095d6d044cd52d3" Namespace="calico-apiserver" Pod="calico-apiserver-6d44674bc4-w2f48" WorkloadEndpoint="localhost-k8s-calico--apiserver--6d44674bc4--w2f48-eth0" Jul 10 01:13:45.757433 env[1363]: 2025-07-10 01:13:45.744 [INFO][4294] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="131d31244e534a733a530103ddea3666cd2eb72fb0933d89a095d6d044cd52d3" Namespace="calico-apiserver" Pod="calico-apiserver-6d44674bc4-w2f48" WorkloadEndpoint="localhost-k8s-calico--apiserver--6d44674bc4--w2f48-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6d44674bc4--w2f48-eth0", GenerateName:"calico-apiserver-6d44674bc4-", Namespace:"calico-apiserver", SelfLink:"", UID:"8e8146e9-6407-49b7-8cef-e26dac385734", ResourceVersion:"972", Generation:0, CreationTimestamp:time.Date(2025, time.July, 10, 1, 13, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6d44674bc4", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"131d31244e534a733a530103ddea3666cd2eb72fb0933d89a095d6d044cd52d3", Pod:"calico-apiserver-6d44674bc4-w2f48", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali96674cf1f80", MAC:"be:3c:d8:1f:6f:98", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 10 01:13:45.757433 env[1363]: 2025-07-10 01:13:45.754 [INFO][4294] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="131d31244e534a733a530103ddea3666cd2eb72fb0933d89a095d6d044cd52d3" Namespace="calico-apiserver" Pod="calico-apiserver-6d44674bc4-w2f48" WorkloadEndpoint="localhost-k8s-calico--apiserver--6d44674bc4--w2f48-eth0" Jul 10 01:13:45.768927 env[1363]: time="2025-07-10T01:13:45.768790316Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 10 01:13:45.768927 env[1363]: time="2025-07-10T01:13:45.768829903Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 10 01:13:45.768927 env[1363]: time="2025-07-10T01:13:45.768837511Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 10 01:13:45.769166 env[1363]: time="2025-07-10T01:13:45.769092104Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/131d31244e534a733a530103ddea3666cd2eb72fb0933d89a095d6d044cd52d3 pid=4351 runtime=io.containerd.runc.v2 Jul 10 01:13:45.779000 audit[4368]: NETFILTER_CFG table=filter:113 family=2 entries=58 op=nft_register_chain pid=4368 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jul 10 01:13:45.779000 audit[4368]: SYSCALL arch=c000003e syscall=46 success=yes exit=30568 a0=3 a1=7ffeb7e324b0 a2=0 a3=7ffeb7e3249c items=0 ppid=3587 pid=4368 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:13:45.779000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jul 10 01:13:45.807297 systemd-resolved[1303]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 10 01:13:45.846124 env[1363]: time="2025-07-10T01:13:45.846098134Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6d44674bc4-w2f48,Uid:8e8146e9-6407-49b7-8cef-e26dac385734,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"131d31244e534a733a530103ddea3666cd2eb72fb0933d89a095d6d044cd52d3\"" Jul 10 01:13:45.899751 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): calif65d54f8885: link becomes ready Jul 10 01:13:45.900850 systemd-networkd[1114]: calif65d54f8885: Link UP Jul 10 01:13:45.901007 systemd-networkd[1114]: calif65d54f8885: Gained carrier Jul 10 01:13:45.915347 env[1363]: 2025-07-10 01:13:45.687 [INFO][4310] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--5477ff879d--j2p5q-eth0 calico-kube-controllers-5477ff879d- calico-system 5f01bcaa-ff1c-4bd5-988b-d3c60c6abdcc 973 0 2025-07-10 01:13:14 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:5477ff879d projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-5477ff879d-j2p5q eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calif65d54f8885 [] [] }} ContainerID="6503e247e079e9b1040ac4f9c23ba0f9f2bd42e5328355dba03928c27dd6e73b" Namespace="calico-system" Pod="calico-kube-controllers-5477ff879d-j2p5q" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5477ff879d--j2p5q-" Jul 10 01:13:45.915347 env[1363]: 2025-07-10 01:13:45.687 [INFO][4310] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="6503e247e079e9b1040ac4f9c23ba0f9f2bd42e5328355dba03928c27dd6e73b" Namespace="calico-system" Pod="calico-kube-controllers-5477ff879d-j2p5q" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5477ff879d--j2p5q-eth0" Jul 10 01:13:45.915347 env[1363]: 2025-07-10 01:13:45.719 [INFO][4326] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6503e247e079e9b1040ac4f9c23ba0f9f2bd42e5328355dba03928c27dd6e73b" HandleID="k8s-pod-network.6503e247e079e9b1040ac4f9c23ba0f9f2bd42e5328355dba03928c27dd6e73b" 
Workload="localhost-k8s-calico--kube--controllers--5477ff879d--j2p5q-eth0" Jul 10 01:13:45.915347 env[1363]: 2025-07-10 01:13:45.719 [INFO][4326] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="6503e247e079e9b1040ac4f9c23ba0f9f2bd42e5328355dba03928c27dd6e73b" HandleID="k8s-pod-network.6503e247e079e9b1040ac4f9c23ba0f9f2bd42e5328355dba03928c27dd6e73b" Workload="localhost-k8s-calico--kube--controllers--5477ff879d--j2p5q-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d4ff0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-5477ff879d-j2p5q", "timestamp":"2025-07-10 01:13:45.719208499 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 10 01:13:45.915347 env[1363]: 2025-07-10 01:13:45.719 [INFO][4326] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 10 01:13:45.915347 env[1363]: 2025-07-10 01:13:45.736 [INFO][4326] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 10 01:13:45.915347 env[1363]: 2025-07-10 01:13:45.736 [INFO][4326] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 10 01:13:45.915347 env[1363]: 2025-07-10 01:13:45.810 [INFO][4326] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.6503e247e079e9b1040ac4f9c23ba0f9f2bd42e5328355dba03928c27dd6e73b" host="localhost" Jul 10 01:13:45.915347 env[1363]: 2025-07-10 01:13:45.822 [INFO][4326] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 10 01:13:45.915347 env[1363]: 2025-07-10 01:13:45.830 [INFO][4326] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 10 01:13:45.915347 env[1363]: 2025-07-10 01:13:45.831 [INFO][4326] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 10 01:13:45.915347 env[1363]: 2025-07-10 01:13:45.833 [INFO][4326] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 10 01:13:45.915347 env[1363]: 2025-07-10 01:13:45.833 [INFO][4326] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.6503e247e079e9b1040ac4f9c23ba0f9f2bd42e5328355dba03928c27dd6e73b" host="localhost" Jul 10 01:13:45.915347 env[1363]: 2025-07-10 01:13:45.847 [INFO][4326] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.6503e247e079e9b1040ac4f9c23ba0f9f2bd42e5328355dba03928c27dd6e73b Jul 10 01:13:45.915347 env[1363]: 2025-07-10 01:13:45.851 [INFO][4326] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.6503e247e079e9b1040ac4f9c23ba0f9f2bd42e5328355dba03928c27dd6e73b" host="localhost" Jul 10 01:13:45.915347 env[1363]: 2025-07-10 01:13:45.893 [INFO][4326] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.6503e247e079e9b1040ac4f9c23ba0f9f2bd42e5328355dba03928c27dd6e73b" host="localhost" Jul 10 01:13:45.915347 env[1363]: 2025-07-10 01:13:45.893 [INFO][4326] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.6503e247e079e9b1040ac4f9c23ba0f9f2bd42e5328355dba03928c27dd6e73b" host="localhost" Jul 10 01:13:45.915347 env[1363]: 2025-07-10 01:13:45.893 [INFO][4326] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 10 01:13:45.915347 env[1363]: 2025-07-10 01:13:45.893 [INFO][4326] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="6503e247e079e9b1040ac4f9c23ba0f9f2bd42e5328355dba03928c27dd6e73b" HandleID="k8s-pod-network.6503e247e079e9b1040ac4f9c23ba0f9f2bd42e5328355dba03928c27dd6e73b" Workload="localhost-k8s-calico--kube--controllers--5477ff879d--j2p5q-eth0" Jul 10 01:13:45.918919 env[1363]: 2025-07-10 01:13:45.896 [INFO][4310] cni-plugin/k8s.go 418: Populated endpoint ContainerID="6503e247e079e9b1040ac4f9c23ba0f9f2bd42e5328355dba03928c27dd6e73b" Namespace="calico-system" Pod="calico-kube-controllers-5477ff879d-j2p5q" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5477ff879d--j2p5q-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--5477ff879d--j2p5q-eth0", GenerateName:"calico-kube-controllers-5477ff879d-", Namespace:"calico-system", SelfLink:"", UID:"5f01bcaa-ff1c-4bd5-988b-d3c60c6abdcc", ResourceVersion:"973", Generation:0, CreationTimestamp:time.Date(2025, time.July, 10, 1, 13, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5477ff879d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-5477ff879d-j2p5q", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calif65d54f8885", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 10 01:13:45.918919 env[1363]: 2025-07-10 01:13:45.896 [INFO][4310] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="6503e247e079e9b1040ac4f9c23ba0f9f2bd42e5328355dba03928c27dd6e73b" Namespace="calico-system" Pod="calico-kube-controllers-5477ff879d-j2p5q" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5477ff879d--j2p5q-eth0" Jul 10 01:13:45.918919 env[1363]: 2025-07-10 01:13:45.896 [INFO][4310] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif65d54f8885 ContainerID="6503e247e079e9b1040ac4f9c23ba0f9f2bd42e5328355dba03928c27dd6e73b" Namespace="calico-system" Pod="calico-kube-controllers-5477ff879d-j2p5q" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5477ff879d--j2p5q-eth0" Jul 10 01:13:45.918919 env[1363]: 2025-07-10 01:13:45.901 [INFO][4310] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6503e247e079e9b1040ac4f9c23ba0f9f2bd42e5328355dba03928c27dd6e73b" Namespace="calico-system" Pod="calico-kube-controllers-5477ff879d-j2p5q" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5477ff879d--j2p5q-eth0" Jul 10 01:13:45.918919 env[1363]: 2025-07-10 01:13:45.902 [INFO][4310] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to 
endpoint ContainerID="6503e247e079e9b1040ac4f9c23ba0f9f2bd42e5328355dba03928c27dd6e73b" Namespace="calico-system" Pod="calico-kube-controllers-5477ff879d-j2p5q" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5477ff879d--j2p5q-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--5477ff879d--j2p5q-eth0", GenerateName:"calico-kube-controllers-5477ff879d-", Namespace:"calico-system", SelfLink:"", UID:"5f01bcaa-ff1c-4bd5-988b-d3c60c6abdcc", ResourceVersion:"973", Generation:0, CreationTimestamp:time.Date(2025, time.July, 10, 1, 13, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5477ff879d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"6503e247e079e9b1040ac4f9c23ba0f9f2bd42e5328355dba03928c27dd6e73b", Pod:"calico-kube-controllers-5477ff879d-j2p5q", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calif65d54f8885", MAC:"d2:37:0c:72:83:57", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 10 01:13:45.918919 env[1363]: 2025-07-10 01:13:45.911 [INFO][4310] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="6503e247e079e9b1040ac4f9c23ba0f9f2bd42e5328355dba03928c27dd6e73b" Namespace="calico-system" Pod="calico-kube-controllers-5477ff879d-j2p5q" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5477ff879d--j2p5q-eth0" Jul 10 01:13:45.924000 audit[4396]: NETFILTER_CFG table=filter:114 family=2 entries=48 op=nft_register_chain pid=4396 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jul 10 01:13:45.924000 audit[4396]: SYSCALL arch=c000003e syscall=46 success=yes exit=23124 a0=3 a1=7fff4d42e640 a2=0 a3=7fff4d42e62c items=0 ppid=3587 pid=4396 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:13:45.924000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jul 10 01:13:45.931781 env[1363]: time="2025-07-10T01:13:45.930264057Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 10 01:13:45.931781 env[1363]: time="2025-07-10T01:13:45.930344479Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 10 01:13:45.931781 env[1363]: time="2025-07-10T01:13:45.930361466Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 10 01:13:45.931781 env[1363]: time="2025-07-10T01:13:45.930597515Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/6503e247e079e9b1040ac4f9c23ba0f9f2bd42e5328355dba03928c27dd6e73b pid=4402 runtime=io.containerd.runc.v2 Jul 10 01:13:45.950785 systemd-resolved[1303]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 10 01:13:45.982802 env[1363]: time="2025-07-10T01:13:45.982772607Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5477ff879d-j2p5q,Uid:5f01bcaa-ff1c-4bd5-988b-d3c60c6abdcc,Namespace:calico-system,Attempt:1,} returns sandbox id \"6503e247e079e9b1040ac4f9c23ba0f9f2bd42e5328355dba03928c27dd6e73b\"" Jul 10 01:13:46.018738 kubelet[2299]: I0710 01:13:46.018678 2299 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-snhl5" podStartSLOduration=45.018657247 podStartE2EDuration="45.018657247s" podCreationTimestamp="2025-07-10 01:13:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-10 01:13:46.010437598 +0000 UTC m=+45.878864489" watchObservedRunningTime="2025-07-10 01:13:46.018657247 +0000 UTC m=+45.887084134" Jul 10 01:13:46.052000 audit[4434]: NETFILTER_CFG table=filter:115 family=2 entries=14 op=nft_register_rule pid=4434 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 10 01:13:46.052000 audit[4434]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffed39fc390 a2=0 a3=7ffed39fc37c items=0 ppid=2398 pid=4434 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:13:46.052000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 10 01:13:46.058000 audit[4434]: NETFILTER_CFG table=nat:116 family=2 entries=44 op=nft_register_rule pid=4434 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 10 01:13:46.058000 audit[4434]: SYSCALL arch=c000003e syscall=46 success=yes exit=14196 a0=3 a1=7ffed39fc390 a2=0 a3=7ffed39fc37c items=0 ppid=2398 pid=4434 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:13:46.058000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 10 01:13:46.118879 systemd-networkd[1114]: calie0bf60675d7: Gained IPv6LL Jul 10 01:13:46.310819 systemd-networkd[1114]: cali8f5594511f5: Gained IPv6LL Jul 10 01:13:46.524225 env[1363]: time="2025-07-10T01:13:46.524193873Z" level=info msg="StopPodSandbox for \"3e37249528bb3e0be92befd65b6647a34c4c854d8942b3cdda871096eeadbddb\"" Jul 10 01:13:46.614327 env[1363]: 2025-07-10 01:13:46.579 [INFO][4445] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="3e37249528bb3e0be92befd65b6647a34c4c854d8942b3cdda871096eeadbddb" Jul 10 01:13:46.614327 env[1363]: 2025-07-10 01:13:46.579 [INFO][4445] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="3e37249528bb3e0be92befd65b6647a34c4c854d8942b3cdda871096eeadbddb" iface="eth0" netns="/var/run/netns/cni-377a8694-c707-0d0a-90ef-0efeb4e8c87a" Jul 10 01:13:46.614327 env[1363]: 2025-07-10 01:13:46.579 [INFO][4445] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="3e37249528bb3e0be92befd65b6647a34c4c854d8942b3cdda871096eeadbddb" iface="eth0" netns="/var/run/netns/cni-377a8694-c707-0d0a-90ef-0efeb4e8c87a" Jul 10 01:13:46.614327 env[1363]: 2025-07-10 01:13:46.581 [INFO][4445] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="3e37249528bb3e0be92befd65b6647a34c4c854d8942b3cdda871096eeadbddb" iface="eth0" netns="/var/run/netns/cni-377a8694-c707-0d0a-90ef-0efeb4e8c87a" Jul 10 01:13:46.614327 env[1363]: 2025-07-10 01:13:46.581 [INFO][4445] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="3e37249528bb3e0be92befd65b6647a34c4c854d8942b3cdda871096eeadbddb" Jul 10 01:13:46.614327 env[1363]: 2025-07-10 01:13:46.581 [INFO][4445] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3e37249528bb3e0be92befd65b6647a34c4c854d8942b3cdda871096eeadbddb" Jul 10 01:13:46.614327 env[1363]: 2025-07-10 01:13:46.607 [INFO][4452] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3e37249528bb3e0be92befd65b6647a34c4c854d8942b3cdda871096eeadbddb" HandleID="k8s-pod-network.3e37249528bb3e0be92befd65b6647a34c4c854d8942b3cdda871096eeadbddb" Workload="localhost-k8s-goldmane--58fd7646b9--zxwst-eth0" Jul 10 01:13:46.614327 env[1363]: 2025-07-10 01:13:46.607 [INFO][4452] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 10 01:13:46.614327 env[1363]: 2025-07-10 01:13:46.607 [INFO][4452] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 10 01:13:46.614327 env[1363]: 2025-07-10 01:13:46.610 [WARNING][4452] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="3e37249528bb3e0be92befd65b6647a34c4c854d8942b3cdda871096eeadbddb" HandleID="k8s-pod-network.3e37249528bb3e0be92befd65b6647a34c4c854d8942b3cdda871096eeadbddb" Workload="localhost-k8s-goldmane--58fd7646b9--zxwst-eth0" Jul 10 01:13:46.614327 env[1363]: 2025-07-10 01:13:46.610 [INFO][4452] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3e37249528bb3e0be92befd65b6647a34c4c854d8942b3cdda871096eeadbddb" HandleID="k8s-pod-network.3e37249528bb3e0be92befd65b6647a34c4c854d8942b3cdda871096eeadbddb" Workload="localhost-k8s-goldmane--58fd7646b9--zxwst-eth0" Jul 10 01:13:46.614327 env[1363]: 2025-07-10 01:13:46.611 [INFO][4452] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 10 01:13:46.614327 env[1363]: 2025-07-10 01:13:46.612 [INFO][4445] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="3e37249528bb3e0be92befd65b6647a34c4c854d8942b3cdda871096eeadbddb" Jul 10 01:13:46.636477 env[1363]: time="2025-07-10T01:13:46.614634380Z" level=info msg="TearDown network for sandbox \"3e37249528bb3e0be92befd65b6647a34c4c854d8942b3cdda871096eeadbddb\" successfully" Jul 10 01:13:46.636477 env[1363]: time="2025-07-10T01:13:46.614665393Z" level=info msg="StopPodSandbox for \"3e37249528bb3e0be92befd65b6647a34c4c854d8942b3cdda871096eeadbddb\" returns successfully" Jul 10 01:13:46.623971 systemd[1]: run-netns-cni\x2d377a8694\x2dc707\x2d0d0a\x2d90ef\x2d0efeb4e8c87a.mount: Deactivated successfully. 
Jul 10 01:13:46.680081 env[1363]: time="2025-07-10T01:13:46.680055468Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-zxwst,Uid:ced04dc5-79ee-4a07-a568-b0fd4007f64c,Namespace:calico-system,Attempt:1,}" Jul 10 01:13:46.797715 systemd-networkd[1114]: calida2f92a11f8: Link UP Jul 10 01:13:46.799981 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Jul 10 01:13:46.800500 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): calida2f92a11f8: link becomes ready Jul 10 01:13:46.800226 systemd-networkd[1114]: calida2f92a11f8: Gained carrier Jul 10 01:13:46.816457 env[1363]: 2025-07-10 01:13:46.728 [INFO][4458] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--58fd7646b9--zxwst-eth0 goldmane-58fd7646b9- calico-system ced04dc5-79ee-4a07-a568-b0fd4007f64c 995 0 2025-07-10 01:13:13 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:58fd7646b9 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-58fd7646b9-zxwst eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] calida2f92a11f8 [] [] }} ContainerID="d50fd4405e1f03ed2cdfbef802c2261b6b6ef77dbd652ba6fa35f73abffba742" Namespace="calico-system" Pod="goldmane-58fd7646b9-zxwst" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--zxwst-" Jul 10 01:13:46.816457 env[1363]: 2025-07-10 01:13:46.728 [INFO][4458] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="d50fd4405e1f03ed2cdfbef802c2261b6b6ef77dbd652ba6fa35f73abffba742" Namespace="calico-system" Pod="goldmane-58fd7646b9-zxwst" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--zxwst-eth0" Jul 10 01:13:46.816457 env[1363]: 2025-07-10 01:13:46.753 [INFO][4471] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d50fd4405e1f03ed2cdfbef802c2261b6b6ef77dbd652ba6fa35f73abffba742" HandleID="k8s-pod-network.d50fd4405e1f03ed2cdfbef802c2261b6b6ef77dbd652ba6fa35f73abffba742" Workload="localhost-k8s-goldmane--58fd7646b9--zxwst-eth0" Jul 10 01:13:46.816457 env[1363]: 2025-07-10 01:13:46.753 [INFO][4471] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="d50fd4405e1f03ed2cdfbef802c2261b6b6ef77dbd652ba6fa35f73abffba742" HandleID="k8s-pod-network.d50fd4405e1f03ed2cdfbef802c2261b6b6ef77dbd652ba6fa35f73abffba742" Workload="localhost-k8s-goldmane--58fd7646b9--zxwst-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00025aff0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-58fd7646b9-zxwst", "timestamp":"2025-07-10 01:13:46.753429668 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 10 01:13:46.816457 env[1363]: 2025-07-10 01:13:46.753 [INFO][4471] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 10 01:13:46.816457 env[1363]: 2025-07-10 01:13:46.753 [INFO][4471] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 10 01:13:46.816457 env[1363]: 2025-07-10 01:13:46.753 [INFO][4471] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 10 01:13:46.816457 env[1363]: 2025-07-10 01:13:46.759 [INFO][4471] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.d50fd4405e1f03ed2cdfbef802c2261b6b6ef77dbd652ba6fa35f73abffba742" host="localhost" Jul 10 01:13:46.816457 env[1363]: 2025-07-10 01:13:46.762 [INFO][4471] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 10 01:13:46.816457 env[1363]: 2025-07-10 01:13:46.768 [INFO][4471] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 10 01:13:46.816457 env[1363]: 2025-07-10 01:13:46.770 [INFO][4471] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 10 01:13:46.816457 env[1363]: 2025-07-10 01:13:46.771 [INFO][4471] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 10 01:13:46.816457 env[1363]: 2025-07-10 01:13:46.771 [INFO][4471] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.d50fd4405e1f03ed2cdfbef802c2261b6b6ef77dbd652ba6fa35f73abffba742" host="localhost" Jul 10 01:13:46.816457 env[1363]: 2025-07-10 01:13:46.772 [INFO][4471] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.d50fd4405e1f03ed2cdfbef802c2261b6b6ef77dbd652ba6fa35f73abffba742 Jul 10 01:13:46.816457 env[1363]: 2025-07-10 01:13:46.781 [INFO][4471] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.d50fd4405e1f03ed2cdfbef802c2261b6b6ef77dbd652ba6fa35f73abffba742" host="localhost" Jul 10 01:13:46.816457 env[1363]: 2025-07-10 01:13:46.792 [INFO][4471] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.d50fd4405e1f03ed2cdfbef802c2261b6b6ef77dbd652ba6fa35f73abffba742" host="localhost" Jul 10 01:13:46.816457 env[1363]: 2025-07-10 01:13:46.793 [INFO][4471] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.d50fd4405e1f03ed2cdfbef802c2261b6b6ef77dbd652ba6fa35f73abffba742" host="localhost" Jul 10 01:13:46.816457 env[1363]: 2025-07-10 01:13:46.793 [INFO][4471] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 10 01:13:46.816457 env[1363]: 2025-07-10 01:13:46.793 [INFO][4471] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="d50fd4405e1f03ed2cdfbef802c2261b6b6ef77dbd652ba6fa35f73abffba742" HandleID="k8s-pod-network.d50fd4405e1f03ed2cdfbef802c2261b6b6ef77dbd652ba6fa35f73abffba742" Workload="localhost-k8s-goldmane--58fd7646b9--zxwst-eth0" Jul 10 01:13:46.817108 env[1363]: 2025-07-10 01:13:46.794 [INFO][4458] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d50fd4405e1f03ed2cdfbef802c2261b6b6ef77dbd652ba6fa35f73abffba742" Namespace="calico-system" Pod="goldmane-58fd7646b9-zxwst" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--zxwst-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--58fd7646b9--zxwst-eth0", GenerateName:"goldmane-58fd7646b9-", Namespace:"calico-system", SelfLink:"", UID:"ced04dc5-79ee-4a07-a568-b0fd4007f64c", ResourceVersion:"995", Generation:0, CreationTimestamp:time.Date(2025, time.July, 10, 1, 13, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"58fd7646b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-58fd7646b9-zxwst", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calida2f92a11f8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 10 01:13:46.817108 env[1363]: 2025-07-10 01:13:46.794 [INFO][4458] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="d50fd4405e1f03ed2cdfbef802c2261b6b6ef77dbd652ba6fa35f73abffba742" Namespace="calico-system" Pod="goldmane-58fd7646b9-zxwst" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--zxwst-eth0" Jul 10 01:13:46.817108 env[1363]: 2025-07-10 01:13:46.794 [INFO][4458] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calida2f92a11f8 ContainerID="d50fd4405e1f03ed2cdfbef802c2261b6b6ef77dbd652ba6fa35f73abffba742" Namespace="calico-system" Pod="goldmane-58fd7646b9-zxwst" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--zxwst-eth0" Jul 10 01:13:46.817108 env[1363]: 2025-07-10 01:13:46.800 [INFO][4458] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d50fd4405e1f03ed2cdfbef802c2261b6b6ef77dbd652ba6fa35f73abffba742" Namespace="calico-system" Pod="goldmane-58fd7646b9-zxwst" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--zxwst-eth0" Jul 10 01:13:46.817108 env[1363]: 2025-07-10 01:13:46.800 [INFO][4458] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="d50fd4405e1f03ed2cdfbef802c2261b6b6ef77dbd652ba6fa35f73abffba742" Namespace="calico-system" Pod="goldmane-58fd7646b9-zxwst" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--zxwst-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--58fd7646b9--zxwst-eth0", GenerateName:"goldmane-58fd7646b9-", Namespace:"calico-system", SelfLink:"", UID:"ced04dc5-79ee-4a07-a568-b0fd4007f64c", ResourceVersion:"995", Generation:0, CreationTimestamp:time.Date(2025, time.July, 10, 1, 13, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"58fd7646b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d50fd4405e1f03ed2cdfbef802c2261b6b6ef77dbd652ba6fa35f73abffba742", Pod:"goldmane-58fd7646b9-zxwst", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calida2f92a11f8", MAC:"56:fe:68:2b:40:36", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 10 01:13:46.817108 env[1363]: 2025-07-10 01:13:46.814 [INFO][4458] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="d50fd4405e1f03ed2cdfbef802c2261b6b6ef77dbd652ba6fa35f73abffba742" Namespace="calico-system" Pod="goldmane-58fd7646b9-zxwst" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--zxwst-eth0" Jul 10 01:13:46.853275 env[1363]: time="2025-07-10T01:13:46.853237143Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 10 01:13:46.853375 env[1363]: time="2025-07-10T01:13:46.853263436Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 10 01:13:46.853375 env[1363]: time="2025-07-10T01:13:46.853276469Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 10 01:13:46.853439 env[1363]: time="2025-07-10T01:13:46.853366382Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/d50fd4405e1f03ed2cdfbef802c2261b6b6ef77dbd652ba6fa35f73abffba742 pid=4495 runtime=io.containerd.runc.v2 Jul 10 01:13:46.861000 audit[4504]: NETFILTER_CFG table=filter:117 family=2 entries=60 op=nft_register_chain pid=4504 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jul 10 01:13:46.861000 audit[4504]: SYSCALL arch=c000003e syscall=46 success=yes exit=29916 a0=3 a1=7ffc28976230 a2=0 a3=7ffc2897621c items=0 ppid=3587 pid=4504 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:13:46.861000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jul 10 01:13:46.886877 systemd-resolved[1303]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 10 01:13:46.912636 env[1363]: time="2025-07-10T01:13:46.912607746Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-zxwst,Uid:ced04dc5-79ee-4a07-a568-b0fd4007f64c,Namespace:calico-system,Attempt:1,} returns sandbox id \"d50fd4405e1f03ed2cdfbef802c2261b6b6ef77dbd652ba6fa35f73abffba742\"" Jul 10 01:13:47.080000 audit[4529]: NETFILTER_CFG table=filter:118 family=2 entries=14 op=nft_register_rule pid=4529 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 10 01:13:47.080000 audit[4529]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffe30b2d0c0 a2=0 a3=7ffe30b2d0ac items=0 ppid=2398 pid=4529 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:13:47.080000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 10 01:13:47.206787 systemd-networkd[1114]: cali96674cf1f80: Gained IPv6LL Jul 10 01:13:47.223000 audit[4529]: NETFILTER_CFG table=nat:119 family=2 entries=56 op=nft_register_chain pid=4529 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 10 01:13:47.223000 audit[4529]: SYSCALL arch=c000003e syscall=46 success=yes exit=19860 a0=3 a1=7ffe30b2d0c0 a2=0 a3=7ffe30b2d0ac items=0 ppid=2398 pid=4529 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:13:47.223000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 10 01:13:47.236341 env[1363]: time="2025-07-10T01:13:47.236305116Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/whisker-backend:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 01:13:47.241068 env[1363]: time="2025-07-10T01:13:47.241035670Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6ba7e39edcd8be6d32dfccbfdb65533a727b14a19173515e91607d4259f8ee7f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 
01:13:47.242527 env[1363]: time="2025-07-10T01:13:47.242505313Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/whisker-backend:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 01:13:47.243625 env[1363]: time="2025-07-10T01:13:47.243604711Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 01:13:47.244764 env[1363]: time="2025-07-10T01:13:47.244735001Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" returns image reference \"sha256:6ba7e39edcd8be6d32dfccbfdb65533a727b14a19173515e91607d4259f8ee7f\"" Jul 10 01:13:47.270791 systemd-networkd[1114]: calif65d54f8885: Gained IPv6LL Jul 10 01:13:47.275159 env[1363]: time="2025-07-10T01:13:47.275132200Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\"" Jul 10 01:13:47.293980 env[1363]: time="2025-07-10T01:13:47.293951779Z" level=info msg="CreateContainer within sandbox \"47772743ab806984f8c08f88def502ffe4f7fc6e574fb3f0d5b58c702f3e79ff\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Jul 10 01:13:47.309283 env[1363]: time="2025-07-10T01:13:47.309250082Z" level=info msg="CreateContainer within sandbox \"47772743ab806984f8c08f88def502ffe4f7fc6e574fb3f0d5b58c702f3e79ff\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"9613f2d808f30fc610330008107d20687f76b9c5168e9ed86b6bbe227c241755\"" Jul 10 01:13:47.316632 env[1363]: time="2025-07-10T01:13:47.315815570Z" level=info msg="StartContainer for \"9613f2d808f30fc610330008107d20687f76b9c5168e9ed86b6bbe227c241755\"" Jul 10 01:13:47.384111 env[1363]: time="2025-07-10T01:13:47.384054398Z" level=info msg="StartContainer for \"9613f2d808f30fc610330008107d20687f76b9c5168e9ed86b6bbe227c241755\" returns successfully" Jul 10 01:13:47.511444 env[1363]: time="2025-07-10T01:13:47.511376503Z" level=info msg="StopPodSandbox for \"37d614c8da7410e503f5faf653a41dc5309991646c01cee8381d58fe5a81a5a4\"" Jul 10 01:13:47.604122 env[1363]: 2025-07-10 01:13:47.566 [INFO][4582] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="37d614c8da7410e503f5faf653a41dc5309991646c01cee8381d58fe5a81a5a4" Jul 10 01:13:47.604122 env[1363]: 2025-07-10 01:13:47.566 [INFO][4582] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="37d614c8da7410e503f5faf653a41dc5309991646c01cee8381d58fe5a81a5a4" iface="eth0" netns="/var/run/netns/cni-52827e0f-2254-39d3-2914-c44cc26565ac" Jul 10 01:13:47.604122 env[1363]: 2025-07-10 01:13:47.567 [INFO][4582] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="37d614c8da7410e503f5faf653a41dc5309991646c01cee8381d58fe5a81a5a4" iface="eth0" netns="/var/run/netns/cni-52827e0f-2254-39d3-2914-c44cc26565ac" Jul 10 01:13:47.604122 env[1363]: 2025-07-10 01:13:47.567 [INFO][4582] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="37d614c8da7410e503f5faf653a41dc5309991646c01cee8381d58fe5a81a5a4" iface="eth0" netns="/var/run/netns/cni-52827e0f-2254-39d3-2914-c44cc26565ac" Jul 10 01:13:47.604122 env[1363]: 2025-07-10 01:13:47.567 [INFO][4582] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="37d614c8da7410e503f5faf653a41dc5309991646c01cee8381d58fe5a81a5a4" Jul 10 01:13:47.604122 env[1363]: 2025-07-10 01:13:47.567 [INFO][4582] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="37d614c8da7410e503f5faf653a41dc5309991646c01cee8381d58fe5a81a5a4" Jul 10 01:13:47.604122 env[1363]: 2025-07-10 01:13:47.591 [INFO][4590] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="37d614c8da7410e503f5faf653a41dc5309991646c01cee8381d58fe5a81a5a4" HandleID="k8s-pod-network.37d614c8da7410e503f5faf653a41dc5309991646c01cee8381d58fe5a81a5a4" Workload="localhost-k8s-calico--apiserver--6d44674bc4--b2wqb-eth0" Jul 10 01:13:47.604122 env[1363]: 2025-07-10 01:13:47.591 [INFO][4590] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 10 01:13:47.604122 env[1363]: 2025-07-10 01:13:47.592 [INFO][4590] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 10 01:13:47.604122 env[1363]: 2025-07-10 01:13:47.599 [WARNING][4590] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="37d614c8da7410e503f5faf653a41dc5309991646c01cee8381d58fe5a81a5a4" HandleID="k8s-pod-network.37d614c8da7410e503f5faf653a41dc5309991646c01cee8381d58fe5a81a5a4" Workload="localhost-k8s-calico--apiserver--6d44674bc4--b2wqb-eth0" Jul 10 01:13:47.604122 env[1363]: 2025-07-10 01:13:47.599 [INFO][4590] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="37d614c8da7410e503f5faf653a41dc5309991646c01cee8381d58fe5a81a5a4" HandleID="k8s-pod-network.37d614c8da7410e503f5faf653a41dc5309991646c01cee8381d58fe5a81a5a4" Workload="localhost-k8s-calico--apiserver--6d44674bc4--b2wqb-eth0" Jul 10 01:13:47.604122 env[1363]: 2025-07-10 01:13:47.600 [INFO][4590] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 10 01:13:47.604122 env[1363]: 2025-07-10 01:13:47.602 [INFO][4582] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="37d614c8da7410e503f5faf653a41dc5309991646c01cee8381d58fe5a81a5a4" Jul 10 01:13:47.610458 env[1363]: time="2025-07-10T01:13:47.604556529Z" level=info msg="TearDown network for sandbox \"37d614c8da7410e503f5faf653a41dc5309991646c01cee8381d58fe5a81a5a4\" successfully" Jul 10 01:13:47.610458 env[1363]: time="2025-07-10T01:13:47.604582709Z" level=info msg="StopPodSandbox for \"37d614c8da7410e503f5faf653a41dc5309991646c01cee8381d58fe5a81a5a4\" returns successfully" Jul 10 01:13:47.610458 env[1363]: time="2025-07-10T01:13:47.606242210Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6d44674bc4-b2wqb,Uid:74cf1bc5-5d5a-4dc7-850a-71013984af05,Namespace:calico-apiserver,Attempt:1,}" Jul 10 01:13:47.624044 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount654958696.mount: Deactivated successfully. Jul 10 01:13:47.624128 systemd[1]: run-netns-cni\x2d52827e0f\x2d2254\x2d39d3\x2d2914\x2dc44cc26565ac.mount: Deactivated successfully. 
Jul 10 01:13:47.829538 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Jul 10 01:13:47.829622 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali8a6829b181a: link becomes ready Jul 10 01:13:47.828219 systemd-networkd[1114]: cali8a6829b181a: Link UP Jul 10 01:13:47.829679 systemd-networkd[1114]: cali8a6829b181a: Gained carrier Jul 10 01:13:47.840493 env[1363]: 2025-07-10 01:13:47.763 [INFO][4596] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--6d44674bc4--b2wqb-eth0 calico-apiserver-6d44674bc4- calico-apiserver 74cf1bc5-5d5a-4dc7-850a-71013984af05 1005 0 2025-07-10 01:13:12 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6d44674bc4 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-6d44674bc4-b2wqb eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali8a6829b181a [] [] }} ContainerID="faf470fc452c2f07757eeeb2a3a0f4d17d9a92da7cefb8e597308394b6823856" Namespace="calico-apiserver" Pod="calico-apiserver-6d44674bc4-b2wqb" WorkloadEndpoint="localhost-k8s-calico--apiserver--6d44674bc4--b2wqb-" Jul 10 01:13:47.840493 env[1363]: 2025-07-10 01:13:47.763 [INFO][4596] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="faf470fc452c2f07757eeeb2a3a0f4d17d9a92da7cefb8e597308394b6823856" Namespace="calico-apiserver" Pod="calico-apiserver-6d44674bc4-b2wqb" WorkloadEndpoint="localhost-k8s-calico--apiserver--6d44674bc4--b2wqb-eth0" Jul 10 01:13:47.840493 env[1363]: 2025-07-10 01:13:47.795 [INFO][4610] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="faf470fc452c2f07757eeeb2a3a0f4d17d9a92da7cefb8e597308394b6823856" HandleID="k8s-pod-network.faf470fc452c2f07757eeeb2a3a0f4d17d9a92da7cefb8e597308394b6823856" Workload="localhost-k8s-calico--apiserver--6d44674bc4--b2wqb-eth0" Jul 10 01:13:47.840493 env[1363]: 2025-07-10 01:13:47.795 [INFO][4610] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="faf470fc452c2f07757eeeb2a3a0f4d17d9a92da7cefb8e597308394b6823856" HandleID="k8s-pod-network.faf470fc452c2f07757eeeb2a3a0f4d17d9a92da7cefb8e597308394b6823856" Workload="localhost-k8s-calico--apiserver--6d44674bc4--b2wqb-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002c4ff0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-6d44674bc4-b2wqb", "timestamp":"2025-07-10 01:13:47.795813343 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 10 01:13:47.840493 env[1363]: 2025-07-10 01:13:47.796 [INFO][4610] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 10 01:13:47.840493 env[1363]: 2025-07-10 01:13:47.796 [INFO][4610] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 10 01:13:47.840493 env[1363]: 2025-07-10 01:13:47.796 [INFO][4610] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 10 01:13:47.840493 env[1363]: 2025-07-10 01:13:47.800 [INFO][4610] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.faf470fc452c2f07757eeeb2a3a0f4d17d9a92da7cefb8e597308394b6823856" host="localhost" Jul 10 01:13:47.840493 env[1363]: 2025-07-10 01:13:47.803 [INFO][4610] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 10 01:13:47.840493 env[1363]: 2025-07-10 01:13:47.805 [INFO][4610] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 10 01:13:47.840493 env[1363]: 2025-07-10 01:13:47.810 [INFO][4610] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 10 01:13:47.840493 env[1363]: 2025-07-10 01:13:47.811 [INFO][4610] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 10 01:13:47.840493 env[1363]: 2025-07-10 01:13:47.811 [INFO][4610] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.faf470fc452c2f07757eeeb2a3a0f4d17d9a92da7cefb8e597308394b6823856" host="localhost" Jul 10 01:13:47.840493 env[1363]: 2025-07-10 01:13:47.812 [INFO][4610] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.faf470fc452c2f07757eeeb2a3a0f4d17d9a92da7cefb8e597308394b6823856 Jul 10 01:13:47.840493 env[1363]: 2025-07-10 01:13:47.816 [INFO][4610] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.faf470fc452c2f07757eeeb2a3a0f4d17d9a92da7cefb8e597308394b6823856" host="localhost" Jul 10 01:13:47.840493 env[1363]: 2025-07-10 01:13:47.823 [INFO][4610] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.faf470fc452c2f07757eeeb2a3a0f4d17d9a92da7cefb8e597308394b6823856" host="localhost" Jul 10 01:13:47.840493 env[1363]: 2025-07-10 01:13:47.823 [INFO][4610] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.faf470fc452c2f07757eeeb2a3a0f4d17d9a92da7cefb8e597308394b6823856" host="localhost" Jul 10 01:13:47.840493 env[1363]: 2025-07-10 01:13:47.823 [INFO][4610] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 10 01:13:47.840493 env[1363]: 2025-07-10 01:13:47.823 [INFO][4610] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="faf470fc452c2f07757eeeb2a3a0f4d17d9a92da7cefb8e597308394b6823856" HandleID="k8s-pod-network.faf470fc452c2f07757eeeb2a3a0f4d17d9a92da7cefb8e597308394b6823856" Workload="localhost-k8s-calico--apiserver--6d44674bc4--b2wqb-eth0" Jul 10 01:13:47.848453 env[1363]: 2025-07-10 01:13:47.826 [INFO][4596] cni-plugin/k8s.go 418: Populated endpoint ContainerID="faf470fc452c2f07757eeeb2a3a0f4d17d9a92da7cefb8e597308394b6823856" Namespace="calico-apiserver" Pod="calico-apiserver-6d44674bc4-b2wqb" WorkloadEndpoint="localhost-k8s-calico--apiserver--6d44674bc4--b2wqb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6d44674bc4--b2wqb-eth0", GenerateName:"calico-apiserver-6d44674bc4-", Namespace:"calico-apiserver", SelfLink:"", UID:"74cf1bc5-5d5a-4dc7-850a-71013984af05", ResourceVersion:"1005", Generation:0, CreationTimestamp:time.Date(2025, time.July, 10, 1, 13, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6d44674bc4", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-6d44674bc4-b2wqb", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali8a6829b181a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 10 01:13:47.848453 env[1363]: 2025-07-10 01:13:47.826 [INFO][4596] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="faf470fc452c2f07757eeeb2a3a0f4d17d9a92da7cefb8e597308394b6823856" Namespace="calico-apiserver" Pod="calico-apiserver-6d44674bc4-b2wqb" WorkloadEndpoint="localhost-k8s-calico--apiserver--6d44674bc4--b2wqb-eth0" Jul 10 01:13:47.848453 env[1363]: 2025-07-10 01:13:47.826 [INFO][4596] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8a6829b181a ContainerID="faf470fc452c2f07757eeeb2a3a0f4d17d9a92da7cefb8e597308394b6823856" Namespace="calico-apiserver" Pod="calico-apiserver-6d44674bc4-b2wqb" WorkloadEndpoint="localhost-k8s-calico--apiserver--6d44674bc4--b2wqb-eth0" Jul 10 01:13:47.848453 env[1363]: 2025-07-10 01:13:47.830 [INFO][4596] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="faf470fc452c2f07757eeeb2a3a0f4d17d9a92da7cefb8e597308394b6823856" Namespace="calico-apiserver" Pod="calico-apiserver-6d44674bc4-b2wqb" WorkloadEndpoint="localhost-k8s-calico--apiserver--6d44674bc4--b2wqb-eth0" Jul 10 01:13:47.848453 env[1363]: 2025-07-10 01:13:47.830 [INFO][4596] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="faf470fc452c2f07757eeeb2a3a0f4d17d9a92da7cefb8e597308394b6823856" Namespace="calico-apiserver" Pod="calico-apiserver-6d44674bc4-b2wqb" WorkloadEndpoint="localhost-k8s-calico--apiserver--6d44674bc4--b2wqb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6d44674bc4--b2wqb-eth0", GenerateName:"calico-apiserver-6d44674bc4-", Namespace:"calico-apiserver", SelfLink:"", UID:"74cf1bc5-5d5a-4dc7-850a-71013984af05", ResourceVersion:"1005", Generation:0, CreationTimestamp:time.Date(2025, time.July, 10, 1, 13, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6d44674bc4", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"faf470fc452c2f07757eeeb2a3a0f4d17d9a92da7cefb8e597308394b6823856", Pod:"calico-apiserver-6d44674bc4-b2wqb", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali8a6829b181a", MAC:"62:7a:99:65:21:a2", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 10 01:13:47.848453 env[1363]: 2025-07-10 01:13:47.838 [INFO][4596] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="faf470fc452c2f07757eeeb2a3a0f4d17d9a92da7cefb8e597308394b6823856" Namespace="calico-apiserver" Pod="calico-apiserver-6d44674bc4-b2wqb" WorkloadEndpoint="localhost-k8s-calico--apiserver--6d44674bc4--b2wqb-eth0" Jul 10 01:13:47.859323 env[1363]: time="2025-07-10T01:13:47.858223642Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 10 01:13:47.859323 env[1363]: time="2025-07-10T01:13:47.858336053Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 10 01:13:47.859323 env[1363]: time="2025-07-10T01:13:47.858343607Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 10 01:13:47.859323 env[1363]: time="2025-07-10T01:13:47.858421097Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/faf470fc452c2f07757eeeb2a3a0f4d17d9a92da7cefb8e597308394b6823856 pid=4628 runtime=io.containerd.runc.v2 Jul 10 01:13:47.876000 audit[4648]: NETFILTER_CFG table=filter:120 family=2 entries=63 op=nft_register_chain pid=4648 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jul 10 01:13:47.876000 audit[4648]: SYSCALL arch=c000003e syscall=46 success=yes exit=30664 a0=3 a1=7fff90e5c6a0 a2=0 a3=7fff90e5c68c items=0 ppid=3587 pid=4648 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:13:47.876000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jul 10 01:13:47.898553 systemd-resolved[1303]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 10 01:13:47.927035 env[1363]: time="2025-07-10T01:13:47.927010756Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6d44674bc4-b2wqb,Uid:74cf1bc5-5d5a-4dc7-850a-71013984af05,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"faf470fc452c2f07757eeeb2a3a0f4d17d9a92da7cefb8e597308394b6823856\"" Jul 10 01:13:48.172000 audit[4667]: NETFILTER_CFG table=filter:121 family=2 entries=13 op=nft_register_rule pid=4667 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 10 01:13:48.172000 audit[4667]: SYSCALL arch=c000003e syscall=46 success=yes exit=4504 a0=3 a1=7ffc6dd9eaf0 a2=0 a3=7ffc6dd9eadc items=0 ppid=2398 pid=4667 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:13:48.172000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 10 01:13:48.176000 audit[4667]: NETFILTER_CFG table=nat:122 family=2 entries=27 op=nft_register_chain pid=4667 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 10 01:13:48.176000 audit[4667]: SYSCALL arch=c000003e syscall=46 success=yes exit=9348 a0=3 a1=7ffc6dd9eaf0 a2=0 a3=7ffc6dd9eadc items=0 ppid=2398 pid=4667 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:13:48.176000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 10 01:13:48.615030 systemd-networkd[1114]: calida2f92a11f8: Gained IPv6LL Jul 10 01:13:48.739426 env[1363]: time="2025-07-10T01:13:48.739399323Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/csi:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 01:13:48.744032 env[1363]: time="2025-07-10T01:13:48.743999628Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 
01:13:48.745293 env[1363]: time="2025-07-10T01:13:48.745278614Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/csi:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 01:13:48.746854 env[1363]: time="2025-07-10T01:13:48.746835681Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 01:13:48.747197 env[1363]: time="2025-07-10T01:13:48.747175675Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\" returns image reference \"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\"" Jul 10 01:13:48.779576 env[1363]: time="2025-07-10T01:13:48.779555382Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Jul 10 01:13:48.792963 env[1363]: time="2025-07-10T01:13:48.792937511Z" level=info msg="CreateContainer within sandbox \"14c2ee1c94348d21a299e1321eccbc9c20d2f650419a9d6496db1ff04cd68bc4\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Jul 10 01:13:48.801517 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2580125541.mount: Deactivated successfully. Jul 10 01:13:48.809966 env[1363]: time="2025-07-10T01:13:48.809943514Z" level=info msg="CreateContainer within sandbox \"14c2ee1c94348d21a299e1321eccbc9c20d2f650419a9d6496db1ff04cd68bc4\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"dd94b4e6e0cc553df6c8cee01a8abf7c437d4ca7c7e14aa40ac559c4332cad97\"" Jul 10 01:13:48.813201 env[1363]: time="2025-07-10T01:13:48.813183565Z" level=info msg="StartContainer for \"dd94b4e6e0cc553df6c8cee01a8abf7c437d4ca7c7e14aa40ac559c4332cad97\"" Jul 10 01:13:48.862781 env[1363]: time="2025-07-10T01:13:48.862755706Z" level=info msg="StartContainer for \"dd94b4e6e0cc553df6c8cee01a8abf7c437d4ca7c7e14aa40ac559c4332cad97\" returns successfully" Jul 10 01:13:48.871189 systemd-networkd[1114]: cali8a6829b181a: Gained IPv6LL Jul 10 01:13:52.259323 env[1363]: time="2025-07-10T01:13:52.259136918Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/apiserver:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 01:13:52.263748 env[1363]: time="2025-07-10T01:13:52.263694815Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 01:13:52.283224 env[1363]: time="2025-07-10T01:13:52.282608618Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/apiserver:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 01:13:52.306674 env[1363]: time="2025-07-10T01:13:52.306020737Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 01:13:52.306810 env[1363]: time="2025-07-10T01:13:52.306691553Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\"" Jul 10 01:13:52.774780 env[1363]: time="2025-07-10T01:13:52.774719503Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\"" Jul 10 01:13:52.852580 env[1363]: time="2025-07-10T01:13:52.852468535Z" level=info msg="CreateContainer within sandbox \"131d31244e534a733a530103ddea3666cd2eb72fb0933d89a095d6d044cd52d3\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jul 10 01:13:52.863739 env[1363]: time="2025-07-10T01:13:52.863712199Z" level=info msg="CreateContainer within sandbox \"131d31244e534a733a530103ddea3666cd2eb72fb0933d89a095d6d044cd52d3\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"915c58c03353ee54736489abf3797867734b173634b282af0191665aad606e66\"" Jul 10 01:13:52.864127 env[1363]: time="2025-07-10T01:13:52.864115826Z" level=info msg="StartContainer for \"915c58c03353ee54736489abf3797867734b173634b282af0191665aad606e66\"" Jul 10 01:13:52.875121 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount255335461.mount: Deactivated successfully. Jul 10 01:13:52.946470 env[1363]: time="2025-07-10T01:13:52.946437033Z" level=info msg="StartContainer for \"915c58c03353ee54736489abf3797867734b173634b282af0191665aad606e66\" returns successfully" Jul 10 01:13:53.863040 systemd[1]: run-containerd-runc-k8s.io-915c58c03353ee54736489abf3797867734b173634b282af0191665aad606e66-runc.IdSBFY.mount: Deactivated successfully. Jul 10 01:13:54.016078 kubelet[2299]: I0710 01:13:54.011506 2299 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-5bc4d9bd7d-nwwj6" podStartSLOduration=7.566112313 podStartE2EDuration="12.003345328s" podCreationTimestamp="2025-07-10 01:13:42 +0000 UTC" firstStartedPulling="2025-07-10 01:13:42.8346893 +0000 UTC m=+42.703116179" lastFinishedPulling="2025-07-10 01:13:47.271922315 +0000 UTC m=+47.140349194" observedRunningTime="2025-07-10 01:13:47.970070657 +0000 UTC m=+47.838497548" watchObservedRunningTime="2025-07-10 01:13:54.003345328 +0000 UTC m=+53.871772213" Jul 10 01:13:54.026797 kubelet[2299]: I0710 01:13:54.016290 2299 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-6d44674bc4-w2f48" podStartSLOduration=35.289890485 podStartE2EDuration="42.016274855s" podCreationTimestamp="2025-07-10 01:13:12 +0000 UTC" firstStartedPulling="2025-07-10 01:13:45.989696549 +0000 UTC m=+45.858123431" lastFinishedPulling="2025-07-10 01:13:52.716080916 +0000 UTC m=+52.584507801" observedRunningTime="2025-07-10 01:13:53.979233765 +0000 UTC m=+53.847660657" watchObservedRunningTime="2025-07-10 01:13:54.016274855 +0000 UTC m=+53.884701747" Jul 10 01:13:54.092666 kernel: kauditd_printk_skb: 38 callbacks suppressed Jul 10 01:13:54.096753 kernel: audit: type=1325 audit(1752110034.088:424): table=filter:123 family=2 entries=12 op=nft_register_rule pid=4748 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 10 01:13:54.098089 kernel: audit: type=1300 audit(1752110034.088:424): arch=c000003e syscall=46 success=yes exit=4504 a0=3 a1=7ffcdf368a20 a2=0 a3=7ffcdf368a0c items=0 ppid=2398 pid=4748 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:13:54.098115 kernel: audit: type=1327 audit(1752110034.088:424): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 10 01:13:54.088000 audit[4748]: NETFILTER_CFG table=filter:123 family=2 entries=12 op=nft_register_rule pid=4748 
subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 10 01:13:54.102088 kernel: audit: type=1325 audit(1752110034.097:425): table=nat:124 family=2 entries=22 op=nft_register_rule pid=4748 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 10 01:13:54.102121 kernel: audit: type=1300 audit(1752110034.097:425): arch=c000003e syscall=46 success=yes exit=6540 a0=3 a1=7ffcdf368a20 a2=0 a3=7ffcdf368a0c items=0 ppid=2398 pid=4748 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:13:54.088000 audit[4748]: SYSCALL arch=c000003e syscall=46 success=yes exit=4504 a0=3 a1=7ffcdf368a20 a2=0 a3=7ffcdf368a0c items=0 ppid=2398 pid=4748 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:13:54.106830 kernel: audit: type=1327 audit(1752110034.097:425): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 10 01:13:54.088000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 10 01:13:54.097000 audit[4748]: NETFILTER_CFG table=nat:124 family=2 entries=22 op=nft_register_rule pid=4748 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 10 01:13:54.097000 audit[4748]: SYSCALL arch=c000003e syscall=46 success=yes exit=6540 a0=3 a1=7ffcdf368a20 a2=0 a3=7ffcdf368a0c items=0 ppid=2398 pid=4748 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:13:54.097000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 10 01:13:54.922463 kubelet[2299]: I0710 01:13:54.922434 2299 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 10 01:13:56.292924 env[1363]: time="2025-07-10T01:13:56.292889475Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/kube-controllers:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 01:13:56.305909 env[1363]: time="2025-07-10T01:13:56.305883614Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:761b294e26556b58aabc85094a3d465389e6b141b7400aee732bd13400a6124a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 01:13:56.309945 env[1363]: time="2025-07-10T01:13:56.309928015Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/kube-controllers:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 01:13:56.315091 env[1363]: time="2025-07-10T01:13:56.315069439Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 01:13:56.315466 env[1363]: time="2025-07-10T01:13:56.315449394Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" returns image reference \"sha256:761b294e26556b58aabc85094a3d465389e6b141b7400aee732bd13400a6124a\"" Jul 10 
01:13:56.319578 env[1363]: time="2025-07-10T01:13:56.319543621Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\"" Jul 10 01:13:56.457846 systemd[1]: run-containerd-runc-k8s.io-dc40952d28006045e942aa22b5bc381b2f7d35d15ba79973f504ec8ad17ec2d9-runc.9NhMo7.mount: Deactivated successfully. Jul 10 01:13:56.537857 env[1363]: time="2025-07-10T01:13:56.537829606Z" level=info msg="CreateContainer within sandbox \"6503e247e079e9b1040ac4f9c23ba0f9f2bd42e5328355dba03928c27dd6e73b\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Jul 10 01:13:56.574767 env[1363]: time="2025-07-10T01:13:56.574694568Z" level=info msg="CreateContainer within sandbox \"6503e247e079e9b1040ac4f9c23ba0f9f2bd42e5328355dba03928c27dd6e73b\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"f9259ae361e3731af85557e5b9606bd5ebae0bba6b9af22c45cecaaa08d4539c\"" Jul 10 01:13:56.577135 env[1363]: time="2025-07-10T01:13:56.577106624Z" level=info msg="StartContainer for \"f9259ae361e3731af85557e5b9606bd5ebae0bba6b9af22c45cecaaa08d4539c\"" Jul 10 01:13:56.637242 env[1363]: time="2025-07-10T01:13:56.637214996Z" level=info msg="StartContainer for \"f9259ae361e3731af85557e5b9606bd5ebae0bba6b9af22c45cecaaa08d4539c\" returns successfully" Jul 10 01:13:57.044583 kubelet[2299]: I0710 01:13:57.044548 2299 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-5477ff879d-j2p5q" podStartSLOduration=32.726387268 podStartE2EDuration="43.044521195s" podCreationTimestamp="2025-07-10 01:13:14 +0000 UTC" firstStartedPulling="2025-07-10 01:13:46.001189554 +0000 UTC m=+45.869616445" lastFinishedPulling="2025-07-10 01:13:56.319323488 +0000 UTC m=+56.187750372" observedRunningTime="2025-07-10 01:13:57.035416945 +0000 UTC m=+56.903843835" watchObservedRunningTime="2025-07-10 01:13:57.044521195 +0000 UTC m=+56.912948086" Jul 10 01:13:59.917369 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3952682702.mount: Deactivated successfully. 
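Note: the pod_startup_latency_tracker entries above report both an end-to-end figure and an SLO figure; the logged numbers are consistent with the SLO duration being the end-to-end startup time minus the image-pull window. A minimal Go sketch, assuming exactly that relationship and reusing the timestamps logged for the whisker pod earlier in this log:

package main

import (
	"fmt"
	"time"
)

// Recomputes podStartE2EDuration and podStartSLOduration from the logged
// timestamps (a sketch; the "SLO = E2E minus pull window" relation is an
// assumption that the figures above happen to satisfy).
func main() {
	parse := func(s string) time.Time {
		t, err := time.Parse("2006-01-02 15:04:05 -0700 MST", s)
		if err != nil {
			panic(err)
		}
		return t
	}
	created := parse("2025-07-10 01:13:42 +0000 UTC")             // podCreationTimestamp
	firstPull := parse("2025-07-10 01:13:42.8346893 +0000 UTC")   // firstStartedPulling
	lastPull := parse("2025-07-10 01:13:47.271922315 +0000 UTC")  // lastFinishedPulling
	observed := parse("2025-07-10 01:13:54.003345328 +0000 UTC")  // watchObservedRunningTime

	e2e := observed.Sub(created)         // 12.003345328s, matches podStartE2EDuration
	slo := e2e - lastPull.Sub(firstPull) // 7.566112313s, matches podStartSLOduration
	fmt.Println(e2e, slo)
}
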
Jul 10 01:14:01.423736 env[1363]: time="2025-07-10T01:14:01.423700901Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/goldmane:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 01:14:01.463011 env[1363]: time="2025-07-10T01:14:01.436592444Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:dc4ea8b409b85d2f118bb4677ad3d34b57e7b01d488c9f019f7073bb58b2162b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 01:14:01.463011 env[1363]: time="2025-07-10T01:14:01.441960115Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/goldmane:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 01:14:01.463011 env[1363]: time="2025-07-10T01:14:01.445615673Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 01:14:01.463011 env[1363]: time="2025-07-10T01:14:01.445796817Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" returns image reference \"sha256:dc4ea8b409b85d2f118bb4677ad3d34b57e7b01d488c9f019f7073bb58b2162b\"" Jul 10 01:14:02.000999 kubelet[2299]: E0710 01:14:02.000120 2299 kubelet.go:2512] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.043s" Jul 10 01:14:02.057414 env[1363]: time="2025-07-10T01:14:02.057353333Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Jul 10 01:14:02.091379 env[1363]: time="2025-07-10T01:14:02.091106008Z" level=info msg="StopPodSandbox for \"aea5f2eb698db2d51d4b0d03a6e4b3fb312a638bd55c00688360d004a661efd9\"" Jul 10 01:14:02.419350 env[1363]: time="2025-07-10T01:14:02.418890918Z" level=info msg="CreateContainer within sandbox \"d50fd4405e1f03ed2cdfbef802c2261b6b6ef77dbd652ba6fa35f73abffba742\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Jul 10 01:14:02.455224 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2569809631.mount: Deactivated successfully. 
Jul 10 01:14:02.502084 env[1363]: time="2025-07-10T01:14:02.463291121Z" level=info msg="CreateContainer within sandbox \"d50fd4405e1f03ed2cdfbef802c2261b6b6ef77dbd652ba6fa35f73abffba742\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"0a7b9b0ea47aa889b6d5597d41d9f5ecf3ccc392f2d5f74cd7be134b392cec28\"" Jul 10 01:14:02.502084 env[1363]: time="2025-07-10T01:14:02.480735368Z" level=info msg="StartContainer for \"0a7b9b0ea47aa889b6d5597d41d9f5ecf3ccc392f2d5f74cd7be134b392cec28\"" Jul 10 01:14:02.811296 env[1363]: time="2025-07-10T01:14:02.811265223Z" level=info msg="StartContainer for \"0a7b9b0ea47aa889b6d5597d41d9f5ecf3ccc392f2d5f74cd7be134b392cec28\" returns successfully" Jul 10 01:14:02.840249 env[1363]: time="2025-07-10T01:14:02.840217102Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/apiserver:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 01:14:02.846863 env[1363]: time="2025-07-10T01:14:02.846838230Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 01:14:02.849601 env[1363]: time="2025-07-10T01:14:02.849577310Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/apiserver:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 01:14:02.853637 env[1363]: time="2025-07-10T01:14:02.853618634Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 01:14:02.853911 env[1363]: time="2025-07-10T01:14:02.853895015Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\"" Jul 10 01:14:02.875571 env[1363]: time="2025-07-10T01:14:02.875543976Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\"" Jul 10 01:14:02.882681 env[1363]: time="2025-07-10T01:14:02.882654453Z" level=info msg="CreateContainer within sandbox \"faf470fc452c2f07757eeeb2a3a0f4d17d9a92da7cefb8e597308394b6823856\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jul 10 01:14:02.907564 env[1363]: time="2025-07-10T01:14:02.907534517Z" level=info msg="CreateContainer within sandbox \"faf470fc452c2f07757eeeb2a3a0f4d17d9a92da7cefb8e597308394b6823856\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"a200d687dc84da70963635919a56406c3ab3b1d9e93e3d78979e61b2e309695b\"" Jul 10 01:14:02.955003 env[1363]: time="2025-07-10T01:14:02.954971743Z" level=info msg="StartContainer for \"a200d687dc84da70963635919a56406c3ab3b1d9e93e3d78979e61b2e309695b\"" Jul 10 01:14:03.022717 env[1363]: time="2025-07-10T01:14:03.022690312Z" level=info msg="StartContainer for \"a200d687dc84da70963635919a56406c3ab3b1d9e93e3d78979e61b2e309695b\" returns successfully" Jul 10 01:14:03.431806 systemd[1]: run-containerd-runc-k8s.io-0a7b9b0ea47aa889b6d5597d41d9f5ecf3ccc392f2d5f74cd7be134b392cec28-runc.4dsnR0.mount: Deactivated successfully. 
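Note: the env[1363] entries above are containerd's logfmt-style output. A small Go sketch for pulling out the time, level and msg fields; the regular expression is an assumption about that layout, using one of the PullImage lines above as input:

package main

import (
	"fmt"
	"regexp"
)

// Extracts time/level/msg from a containerd log entry of the form seen above.
func main() {
	line := `time="2025-07-10T01:14:02.853895015Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\""`
	re := regexp.MustCompile(`time="([^"]+)" level=(\S+) msg="((?:[^"\\]|\\.)*)"`)
	m := re.FindStringSubmatch(line)
	if m == nil {
		fmt.Println("no match")
		return
	}
	fmt.Println("time: ", m[1])
	fmt.Println("level:", m[2])
	fmt.Println("msg:  ", m[3]) // escaped quotes are left as-is
}
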
Jul 10 01:14:05.244014 env[1363]: time="2025-07-10T01:14:05.243949921Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 01:14:05.393777 env[1363]: time="2025-07-10T01:14:05.311351043Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 01:14:05.393777 env[1363]: time="2025-07-10T01:14:05.329395156Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 01:14:05.393777 env[1363]: time="2025-07-10T01:14:05.337696448Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 10 01:14:05.393777 env[1363]: time="2025-07-10T01:14:05.338096244Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" returns image reference \"sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866\"" Jul 10 01:14:06.020062 kubelet[2299]: I0710 01:14:05.502777 2299 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 10 01:14:06.241663 env[1363]: 2025-07-10 01:14:03.645 [WARNING][4850] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="aea5f2eb698db2d51d4b0d03a6e4b3fb312a638bd55c00688360d004a661efd9" WorkloadEndpoint="localhost-k8s-whisker--66c5d4d86b--jc5cs-eth0" Jul 10 01:14:06.241663 env[1363]: 2025-07-10 01:14:03.657 [INFO][4850] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="aea5f2eb698db2d51d4b0d03a6e4b3fb312a638bd55c00688360d004a661efd9" Jul 10 01:14:06.241663 env[1363]: 2025-07-10 01:14:03.657 [INFO][4850] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="aea5f2eb698db2d51d4b0d03a6e4b3fb312a638bd55c00688360d004a661efd9" iface="eth0" netns="" Jul 10 01:14:06.241663 env[1363]: 2025-07-10 01:14:03.657 [INFO][4850] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="aea5f2eb698db2d51d4b0d03a6e4b3fb312a638bd55c00688360d004a661efd9" Jul 10 01:14:06.241663 env[1363]: 2025-07-10 01:14:03.657 [INFO][4850] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="aea5f2eb698db2d51d4b0d03a6e4b3fb312a638bd55c00688360d004a661efd9" Jul 10 01:14:06.241663 env[1363]: 2025-07-10 01:14:06.013 [INFO][4932] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="aea5f2eb698db2d51d4b0d03a6e4b3fb312a638bd55c00688360d004a661efd9" HandleID="k8s-pod-network.aea5f2eb698db2d51d4b0d03a6e4b3fb312a638bd55c00688360d004a661efd9" Workload="localhost-k8s-whisker--66c5d4d86b--jc5cs-eth0" Jul 10 01:14:06.241663 env[1363]: 2025-07-10 01:14:06.036 [INFO][4932] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 10 01:14:06.241663 env[1363]: 2025-07-10 01:14:06.036 [INFO][4932] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 10 01:14:06.241663 env[1363]: 2025-07-10 01:14:06.166 [WARNING][4932] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="aea5f2eb698db2d51d4b0d03a6e4b3fb312a638bd55c00688360d004a661efd9" HandleID="k8s-pod-network.aea5f2eb698db2d51d4b0d03a6e4b3fb312a638bd55c00688360d004a661efd9" Workload="localhost-k8s-whisker--66c5d4d86b--jc5cs-eth0" Jul 10 01:14:06.241663 env[1363]: 2025-07-10 01:14:06.166 [INFO][4932] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="aea5f2eb698db2d51d4b0d03a6e4b3fb312a638bd55c00688360d004a661efd9" HandleID="k8s-pod-network.aea5f2eb698db2d51d4b0d03a6e4b3fb312a638bd55c00688360d004a661efd9" Workload="localhost-k8s-whisker--66c5d4d86b--jc5cs-eth0" Jul 10 01:14:06.241663 env[1363]: 2025-07-10 01:14:06.169 [INFO][4932] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 10 01:14:06.241663 env[1363]: 2025-07-10 01:14:06.200 [INFO][4850] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="aea5f2eb698db2d51d4b0d03a6e4b3fb312a638bd55c00688360d004a661efd9" Jul 10 01:14:06.241663 env[1363]: time="2025-07-10T01:14:06.241570604Z" level=info msg="TearDown network for sandbox \"aea5f2eb698db2d51d4b0d03a6e4b3fb312a638bd55c00688360d004a661efd9\" successfully" Jul 10 01:14:06.241663 env[1363]: time="2025-07-10T01:14:06.241591892Z" level=info msg="StopPodSandbox for \"aea5f2eb698db2d51d4b0d03a6e4b3fb312a638bd55c00688360d004a661efd9\" returns successfully" Jul 10 01:14:07.181034 env[1363]: time="2025-07-10T01:14:07.181002356Z" level=info msg="RemovePodSandbox for \"aea5f2eb698db2d51d4b0d03a6e4b3fb312a638bd55c00688360d004a661efd9\"" Jul 10 01:14:07.186017 env[1363]: time="2025-07-10T01:14:07.181028214Z" level=info msg="Forcibly stopping sandbox \"aea5f2eb698db2d51d4b0d03a6e4b3fb312a638bd55c00688360d004a661efd9\"" Jul 10 01:14:08.480697 kubelet[2299]: E0710 01:14:08.357596 2299 kubelet.go:2512] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="3.257s" Jul 10 01:14:08.622931 env[1363]: time="2025-07-10T01:14:08.622768407Z" level=info msg="CreateContainer within sandbox \"14c2ee1c94348d21a299e1321eccbc9c20d2f650419a9d6496db1ff04cd68bc4\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Jul 10 01:14:08.741339 env[1363]: time="2025-07-10T01:14:08.739814161Z" level=info msg="CreateContainer within sandbox \"14c2ee1c94348d21a299e1321eccbc9c20d2f650419a9d6496db1ff04cd68bc4\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"bd5424355462217fb8e5d92d3eb02aad3a167c0f6d5ae601387b84761dcd9005\"" Jul 10 01:14:08.965066 env[1363]: time="2025-07-10T01:14:08.964349068Z" level=info msg="StartContainer for \"bd5424355462217fb8e5d92d3eb02aad3a167c0f6d5ae601387b84761dcd9005\"" Jul 10 01:14:09.079842 env[1363]: time="2025-07-10T01:14:09.079809826Z" level=info msg="StartContainer for \"bd5424355462217fb8e5d92d3eb02aad3a167c0f6d5ae601387b84761dcd9005\" returns successfully" Jul 10 01:14:09.788111 env[1363]: 2025-07-10 01:14:08.347 [WARNING][4977] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="aea5f2eb698db2d51d4b0d03a6e4b3fb312a638bd55c00688360d004a661efd9" WorkloadEndpoint="localhost-k8s-whisker--66c5d4d86b--jc5cs-eth0" Jul 10 01:14:09.788111 env[1363]: 2025-07-10 01:14:08.356 [INFO][4977] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="aea5f2eb698db2d51d4b0d03a6e4b3fb312a638bd55c00688360d004a661efd9" Jul 10 01:14:09.788111 env[1363]: 2025-07-10 01:14:08.356 [INFO][4977] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="aea5f2eb698db2d51d4b0d03a6e4b3fb312a638bd55c00688360d004a661efd9" iface="eth0" netns="" Jul 10 01:14:09.788111 env[1363]: 2025-07-10 01:14:08.356 [INFO][4977] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="aea5f2eb698db2d51d4b0d03a6e4b3fb312a638bd55c00688360d004a661efd9" Jul 10 01:14:09.788111 env[1363]: 2025-07-10 01:14:08.356 [INFO][4977] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="aea5f2eb698db2d51d4b0d03a6e4b3fb312a638bd55c00688360d004a661efd9" Jul 10 01:14:09.788111 env[1363]: 2025-07-10 01:14:09.224 [INFO][4987] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="aea5f2eb698db2d51d4b0d03a6e4b3fb312a638bd55c00688360d004a661efd9" HandleID="k8s-pod-network.aea5f2eb698db2d51d4b0d03a6e4b3fb312a638bd55c00688360d004a661efd9" Workload="localhost-k8s-whisker--66c5d4d86b--jc5cs-eth0" Jul 10 01:14:09.788111 env[1363]: 2025-07-10 01:14:09.252 [INFO][4987] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 10 01:14:09.788111 env[1363]: 2025-07-10 01:14:09.255 [INFO][4987] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 10 01:14:09.788111 env[1363]: 2025-07-10 01:14:09.689 [WARNING][4987] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="aea5f2eb698db2d51d4b0d03a6e4b3fb312a638bd55c00688360d004a661efd9" HandleID="k8s-pod-network.aea5f2eb698db2d51d4b0d03a6e4b3fb312a638bd55c00688360d004a661efd9" Workload="localhost-k8s-whisker--66c5d4d86b--jc5cs-eth0" Jul 10 01:14:09.788111 env[1363]: 2025-07-10 01:14:09.691 [INFO][4987] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="aea5f2eb698db2d51d4b0d03a6e4b3fb312a638bd55c00688360d004a661efd9" HandleID="k8s-pod-network.aea5f2eb698db2d51d4b0d03a6e4b3fb312a638bd55c00688360d004a661efd9" Workload="localhost-k8s-whisker--66c5d4d86b--jc5cs-eth0" Jul 10 01:14:09.788111 env[1363]: 2025-07-10 01:14:09.701 [INFO][4987] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 10 01:14:09.788111 env[1363]: 2025-07-10 01:14:09.736 [INFO][4977] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="aea5f2eb698db2d51d4b0d03a6e4b3fb312a638bd55c00688360d004a661efd9" Jul 10 01:14:09.887530 env[1363]: time="2025-07-10T01:14:09.788204250Z" level=info msg="TearDown network for sandbox \"aea5f2eb698db2d51d4b0d03a6e4b3fb312a638bd55c00688360d004a661efd9\" successfully" Jul 10 01:14:09.887530 env[1363]: time="2025-07-10T01:14:09.816613391Z" level=info msg="RemovePodSandbox \"aea5f2eb698db2d51d4b0d03a6e4b3fb312a638bd55c00688360d004a661efd9\" returns successfully" Jul 10 01:14:11.250000 audit[5048]: NETFILTER_CFG table=filter:125 family=2 entries=12 op=nft_register_rule pid=5048 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 10 01:14:11.675253 kernel: audit: type=1325 audit(1752110051.250:426): table=filter:125 family=2 entries=12 op=nft_register_rule pid=5048 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 10 01:14:11.723742 kernel: audit: type=1300 audit(1752110051.250:426): arch=c000003e syscall=46 success=yes exit=4504 a0=3 a1=7ffdaaaeaa80 a2=0 a3=7ffdaaaeaa6c items=0 ppid=2398 pid=5048 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:14:11.723793 kernel: audit: type=1327 audit(1752110051.250:426): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 10 01:14:11.723813 kernel: audit: type=1325 audit(1752110051.263:427): table=nat:126 family=2 entries=22 op=nft_register_rule pid=5048 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 10 01:14:11.723847 kernel: audit: type=1300 audit(1752110051.263:427): arch=c000003e syscall=46 success=yes exit=6540 a0=3 a1=7ffdaaaeaa80 a2=0 a3=7ffdaaaeaa6c items=0 ppid=2398 pid=5048 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:14:11.723865 kernel: audit: type=1327 audit(1752110051.263:427): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 10 01:14:11.250000 audit[5048]: SYSCALL arch=c000003e syscall=46 success=yes exit=4504 a0=3 a1=7ffdaaaeaa80 a2=0 a3=7ffdaaaeaa6c items=0 ppid=2398 pid=5048 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:14:11.250000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 10 01:14:11.263000 audit[5048]: NETFILTER_CFG table=nat:126 family=2 entries=22 op=nft_register_rule pid=5048 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 10 01:14:11.263000 audit[5048]: SYSCALL arch=c000003e syscall=46 success=yes exit=6540 a0=3 a1=7ffdaaaeaa80 a2=0 a3=7ffdaaaeaa6c items=0 ppid=2398 pid=5048 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:14:11.263000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 10 01:14:14.252701 env[1363]: time="2025-07-10T01:14:14.252440631Z" level=info msg="StopPodSandbox for 
\"a4581136627fe17a05f5104c5de93fa80c47188a9b894aba2bdf6734c99e3096\"" Jul 10 01:14:14.343831 kubelet[2299]: E0710 01:14:14.330010 2299 kubelet.go:2512] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="5.544s" Jul 10 01:14:14.661806 kubelet[2299]: I0710 01:14:14.657238 2299 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-6d44674bc4-b2wqb" podStartSLOduration=47.702756828 podStartE2EDuration="1m2.622670506s" podCreationTimestamp="2025-07-10 01:13:12 +0000 UTC" firstStartedPulling="2025-07-10 01:13:47.946582746 +0000 UTC m=+47.815009625" lastFinishedPulling="2025-07-10 01:14:02.866496423 +0000 UTC m=+62.734923303" observedRunningTime="2025-07-10 01:14:14.374068023 +0000 UTC m=+74.242494909" watchObservedRunningTime="2025-07-10 01:14:14.622670506 +0000 UTC m=+74.491097392" Jul 10 01:14:14.715204 kubelet[2299]: I0710 01:14:14.715182 2299 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 10 01:14:14.790670 kubelet[2299]: I0710 01:14:14.790360 2299 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-58fd7646b9-zxwst" podStartSLOduration=46.711814663 podStartE2EDuration="1m1.790346847s" podCreationTimestamp="2025-07-10 01:13:13 +0000 UTC" firstStartedPulling="2025-07-10 01:13:46.91846206 +0000 UTC m=+46.786888940" lastFinishedPulling="2025-07-10 01:14:01.996994229 +0000 UTC m=+61.865421124" observedRunningTime="2025-07-10 01:14:14.765611627 +0000 UTC m=+74.634038520" watchObservedRunningTime="2025-07-10 01:14:14.790346847 +0000 UTC m=+74.658773737" Jul 10 01:14:14.806617 kubelet[2299]: I0710 01:14:14.794365 2299 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Jul 10 01:14:14.868961 kubelet[2299]: I0710 01:14:14.868936 2299 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Jul 10 01:14:17.326000 audit[5082]: NETFILTER_CFG table=filter:127 family=2 entries=11 op=nft_register_rule pid=5082 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 10 01:14:17.604526 kernel: audit: type=1325 audit(1752110057.326:428): table=filter:127 family=2 entries=11 op=nft_register_rule pid=5082 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 10 01:14:17.620479 kernel: audit: type=1300 audit(1752110057.326:428): arch=c000003e syscall=46 success=yes exit=3760 a0=3 a1=7ffe098ecc50 a2=0 a3=7ffe098ecc3c items=0 ppid=2398 pid=5082 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:14:17.620545 kernel: audit: type=1327 audit(1752110057.326:428): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 10 01:14:17.620566 kernel: audit: type=1325 audit(1752110057.336:429): table=nat:128 family=2 entries=29 op=nft_register_chain pid=5082 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 10 01:14:17.620591 kernel: audit: type=1300 audit(1752110057.336:429): arch=c000003e syscall=46 success=yes exit=10116 a0=3 a1=7ffe098ecc50 a2=0 a3=7ffe098ecc3c items=0 ppid=2398 pid=5082 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:14:17.623260 kernel: audit: type=1327 audit(1752110057.336:429): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 10 01:14:17.623300 kernel: audit: type=1325 audit(1752110057.382:430): table=filter:129 family=2 entries=10 op=nft_register_rule pid=5084 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 10 01:14:17.623321 kernel: audit: type=1300 audit(1752110057.382:430): arch=c000003e syscall=46 success=yes exit=3760 a0=3 a1=7ffda975cb40 a2=0 a3=7ffda975cb2c items=0 ppid=2398 pid=5084 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:14:17.623343 kernel: audit: type=1327 audit(1752110057.382:430): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 10 01:14:17.623360 kernel: audit: type=1325 audit(1752110057.397:431): table=nat:130 family=2 entries=24 op=nft_register_rule pid=5084 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 10 01:14:17.326000 audit[5082]: SYSCALL arch=c000003e syscall=46 success=yes exit=3760 a0=3 a1=7ffe098ecc50 a2=0 a3=7ffe098ecc3c items=0 ppid=2398 pid=5082 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:14:17.326000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 10 01:14:17.336000 audit[5082]: NETFILTER_CFG table=nat:128 family=2 entries=29 op=nft_register_chain pid=5082 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 10 01:14:17.336000 audit[5082]: SYSCALL arch=c000003e syscall=46 success=yes exit=10116 a0=3 a1=7ffe098ecc50 a2=0 a3=7ffe098ecc3c items=0 ppid=2398 pid=5082 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:14:17.336000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 10 01:14:17.382000 audit[5084]: NETFILTER_CFG table=filter:129 family=2 entries=10 op=nft_register_rule pid=5084 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 10 01:14:17.382000 audit[5084]: SYSCALL arch=c000003e syscall=46 success=yes exit=3760 a0=3 a1=7ffda975cb40 a2=0 a3=7ffda975cb2c items=0 ppid=2398 pid=5084 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:14:17.382000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 10 01:14:17.397000 audit[5084]: NETFILTER_CFG table=nat:130 family=2 entries=24 op=nft_register_rule pid=5084 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 10 01:14:17.397000 audit[5084]: SYSCALL arch=c000003e syscall=46 success=yes exit=7308 a0=3 a1=7ffda975cb40 a2=0 a3=7ffda975cb2c items=0 ppid=2398 pid=5084 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 
egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:14:17.397000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 10 01:14:19.979769 env[1363]: time="2025-07-10T01:14:19.969657126Z" level=error msg="ExecSync for \"0a7b9b0ea47aa889b6d5597d41d9f5ecf3ccc392f2d5f74cd7be134b392cec28\" failed" error="rpc error: code = DeadlineExceeded desc = failed to exec in container: timeout 5s exceeded: context deadline exceeded" Jul 10 01:14:21.713947 env[1363]: 2025-07-10 01:14:18.953 [WARNING][5057] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="a4581136627fe17a05f5104c5de93fa80c47188a9b894aba2bdf6734c99e3096" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--b48c6-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"c15a8f19-7056-4133-9713-c590210e2422", ResourceVersion:"966", Generation:0, CreationTimestamp:time.Date(2025, time.July, 10, 1, 13, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"57bd658777", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"14c2ee1c94348d21a299e1321eccbc9c20d2f650419a9d6496db1ff04cd68bc4", Pod:"csi-node-driver-b48c6", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali8f5594511f5", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 10 01:14:21.713947 env[1363]: 2025-07-10 01:14:19.055 [INFO][5057] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a4581136627fe17a05f5104c5de93fa80c47188a9b894aba2bdf6734c99e3096" Jul 10 01:14:21.713947 env[1363]: 2025-07-10 01:14:19.059 [INFO][5057] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="a4581136627fe17a05f5104c5de93fa80c47188a9b894aba2bdf6734c99e3096" iface="eth0" netns="" Jul 10 01:14:21.713947 env[1363]: 2025-07-10 01:14:19.071 [INFO][5057] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a4581136627fe17a05f5104c5de93fa80c47188a9b894aba2bdf6734c99e3096" Jul 10 01:14:21.713947 env[1363]: 2025-07-10 01:14:19.071 [INFO][5057] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a4581136627fe17a05f5104c5de93fa80c47188a9b894aba2bdf6734c99e3096" Jul 10 01:14:21.713947 env[1363]: 2025-07-10 01:14:20.869 [INFO][5087] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a4581136627fe17a05f5104c5de93fa80c47188a9b894aba2bdf6734c99e3096" HandleID="k8s-pod-network.a4581136627fe17a05f5104c5de93fa80c47188a9b894aba2bdf6734c99e3096" Workload="localhost-k8s-csi--node--driver--b48c6-eth0" Jul 10 01:14:21.713947 env[1363]: 2025-07-10 01:14:20.908 [INFO][5087] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 10 01:14:21.713947 env[1363]: 2025-07-10 01:14:20.916 [INFO][5087] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 10 01:14:21.713947 env[1363]: 2025-07-10 01:14:21.545 [WARNING][5087] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="a4581136627fe17a05f5104c5de93fa80c47188a9b894aba2bdf6734c99e3096" HandleID="k8s-pod-network.a4581136627fe17a05f5104c5de93fa80c47188a9b894aba2bdf6734c99e3096" Workload="localhost-k8s-csi--node--driver--b48c6-eth0" Jul 10 01:14:21.713947 env[1363]: 2025-07-10 01:14:21.548 [INFO][5087] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a4581136627fe17a05f5104c5de93fa80c47188a9b894aba2bdf6734c99e3096" HandleID="k8s-pod-network.a4581136627fe17a05f5104c5de93fa80c47188a9b894aba2bdf6734c99e3096" Workload="localhost-k8s-csi--node--driver--b48c6-eth0" Jul 10 01:14:21.713947 env[1363]: 2025-07-10 01:14:21.569 [INFO][5087] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 10 01:14:21.713947 env[1363]: 2025-07-10 01:14:21.633 [INFO][5057] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="a4581136627fe17a05f5104c5de93fa80c47188a9b894aba2bdf6734c99e3096" Jul 10 01:14:21.713947 env[1363]: time="2025-07-10T01:14:21.702953699Z" level=info msg="TearDown network for sandbox \"a4581136627fe17a05f5104c5de93fa80c47188a9b894aba2bdf6734c99e3096\" successfully" Jul 10 01:14:21.713947 env[1363]: time="2025-07-10T01:14:21.702972977Z" level=info msg="StopPodSandbox for \"a4581136627fe17a05f5104c5de93fa80c47188a9b894aba2bdf6734c99e3096\" returns successfully" Jul 10 01:14:32.853055 systemd[1]: Started sshd@7-139.178.70.102:22-139.178.68.195:41592.service. Jul 10 01:14:33.140192 kernel: kauditd_printk_skb: 2 callbacks suppressed Jul 10 01:14:33.187269 kernel: audit: type=1130 audit(1752110072.864:432): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-139.178.70.102:22-139.178.68.195:41592 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 01:14:32.864000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-139.178.70.102:22-139.178.68.195:41592 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 10 01:14:34.099000 audit[5106]: USER_ACCT pid=5106 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:14:34.209206 kernel: audit: type=1101 audit(1752110074.099:433): pid=5106 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:14:34.223416 kernel: audit: type=1103 audit(1752110074.167:434): pid=5106 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:14:34.234106 kernel: audit: type=1006 audit(1752110074.167:435): pid=5106 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=10 res=1 Jul 10 01:14:34.235910 kernel: audit: type=1300 audit(1752110074.167:435): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd8c59cde0 a2=3 a3=0 items=0 ppid=1 pid=5106 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:14:34.235941 kernel: audit: type=1327 audit(1752110074.167:435): proctitle=737368643A20636F7265205B707269765D Jul 10 01:14:34.167000 audit[5106]: CRED_ACQ pid=5106 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:14:34.167000 audit[5106]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd8c59cde0 a2=3 a3=0 items=0 ppid=1 pid=5106 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:14:34.167000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jul 10 01:14:34.296427 sshd[5106]: Accepted publickey for core from 139.178.68.195 port 41592 ssh2: RSA SHA256:NVpdRDPpwzjVTzi6orhe1cA9BvcYymCSReGH8myOy/Q Jul 10 01:14:34.189498 sshd[5106]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 10 01:14:34.480850 systemd-logind[1351]: New session 10 of user core. Jul 10 01:14:34.491576 systemd[1]: Started session-10.scope. 
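Note: the proctitle fields in the audit records are hex-encoded argv data, with NUL bytes separating arguments. A short Go snippet decoding the two values that recur above (the iptables-restore invocation behind the NETFILTER_CFG events, and the sshd privileged process title for the "core" session):

package main

import (
	"encoding/hex"
	"fmt"
	"strings"
)

// Decodes audit proctitle hex strings into readable command lines.
func main() {
	for _, p := range []string{
		"69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273",
		"737368643A20636F7265205B707269765D",
	} {
		raw, err := hex.DecodeString(p)
		if err != nil {
			panic(err)
		}
		fmt.Println(strings.Join(strings.Split(string(raw), "\x00"), " "))
	}
	// Output:
	// iptables-restore -w 5 -W 100000 --noflush --counters
	// sshd: core [priv]
}
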
Jul 10 01:14:34.515773 kernel: audit: type=1105 audit(1752110074.508:436): pid=5106 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:14:34.508000 audit[5106]: USER_START pid=5106 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:14:34.540000 audit[5114]: CRED_ACQ pid=5114 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:14:34.544904 kernel: audit: type=1103 audit(1752110074.540:437): pid=5114 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:15:06.292000 audit[5138]: NETFILTER_CFG table=filter:131 family=2 entries=10 op=nft_register_rule pid=5138 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 10 01:15:06.413022 kernel: audit: type=1325 audit(1752110106.292:438): table=filter:131 family=2 entries=10 op=nft_register_rule pid=5138 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 10 01:15:06.421361 kernel: audit: type=1300 audit(1752110106.292:438): arch=c000003e syscall=46 success=yes exit=3760 a0=3 a1=7ffd9e8d1210 a2=0 a3=7ffd9e8d11fc items=0 ppid=2398 pid=5138 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:15:06.421413 kernel: audit: type=1327 audit(1752110106.292:438): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 10 01:15:06.421432 kernel: audit: type=1325 audit(1752110106.303:439): table=nat:132 family=2 entries=60 op=nft_unregister_chain pid=5138 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 10 01:15:06.421449 kernel: audit: type=1300 audit(1752110106.303:439): arch=c000003e syscall=46 success=yes exit=16116 a0=3 a1=7ffd9e8d1210 a2=0 a3=7ffd9e8d11fc items=0 ppid=2398 pid=5138 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:15:06.421465 kernel: audit: type=1327 audit(1752110106.303:439): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 10 01:15:06.292000 audit[5138]: SYSCALL arch=c000003e syscall=46 success=yes exit=3760 a0=3 a1=7ffd9e8d1210 a2=0 a3=7ffd9e8d11fc items=0 ppid=2398 pid=5138 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:15:06.292000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 10 
01:15:06.303000 audit[5138]: NETFILTER_CFG table=nat:132 family=2 entries=60 op=nft_unregister_chain pid=5138 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 10 01:15:06.303000 audit[5138]: SYSCALL arch=c000003e syscall=46 success=yes exit=16116 a0=3 a1=7ffd9e8d1210 a2=0 a3=7ffd9e8d11fc items=0 ppid=2398 pid=5138 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:15:06.303000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 10 01:15:07.903000 audit[5141]: NETFILTER_CFG table=filter:133 family=2 entries=11 op=nft_register_rule pid=5141 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 10 01:15:08.082968 kernel: audit: type=1325 audit(1752110107.903:440): table=filter:133 family=2 entries=11 op=nft_register_rule pid=5141 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 10 01:15:08.112795 kernel: audit: type=1300 audit(1752110107.903:440): arch=c000003e syscall=46 success=yes exit=4504 a0=3 a1=7ffc1166a660 a2=0 a3=7ffc1166a64c items=0 ppid=2398 pid=5141 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:15:08.118560 kernel: audit: type=1327 audit(1752110107.903:440): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 10 01:15:08.118607 kernel: audit: type=1325 audit(1752110107.921:441): table=nat:134 family=2 entries=29 op=nft_unregister_chain pid=5141 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 10 01:15:07.903000 audit[5141]: SYSCALL arch=c000003e syscall=46 success=yes exit=4504 a0=3 a1=7ffc1166a660 a2=0 a3=7ffc1166a64c items=0 ppid=2398 pid=5141 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:15:07.903000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 10 01:15:07.921000 audit[5141]: NETFILTER_CFG table=nat:134 family=2 entries=29 op=nft_unregister_chain pid=5141 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 10 01:15:07.921000 audit[5141]: SYSCALL arch=c000003e syscall=46 success=yes exit=6796 a0=3 a1=7ffc1166a660 a2=0 a3=7ffc1166a64c items=0 ppid=2398 pid=5141 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:15:07.921000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 10 01:15:21.308000 audit[5158]: NETFILTER_CFG table=filter:135 family=2 entries=13 op=nft_register_rule pid=5158 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 10 01:15:21.678095 kernel: kauditd_printk_skb: 2 callbacks suppressed Jul 10 01:15:21.706467 kernel: audit: type=1325 audit(1752110121.308:442): table=filter:135 family=2 entries=13 op=nft_register_rule pid=5158 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 10 01:15:21.716235 kernel: audit: type=1300 
audit(1752110121.308:442): arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffe73e45840 a2=0 a3=7ffe73e4582c items=0 ppid=2398 pid=5158 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:15:21.722239 kernel: audit: type=1327 audit(1752110121.308:442): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 10 01:15:21.732603 kernel: audit: type=1325 audit(1752110121.335:443): table=nat:136 family=2 entries=27 op=nft_unregister_chain pid=5158 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 10 01:15:21.732682 kernel: audit: type=1300 audit(1752110121.335:443): arch=c000003e syscall=46 success=yes exit=6028 a0=3 a1=7ffe73e45840 a2=0 a3=7ffe73e4582c items=0 ppid=2398 pid=5158 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:15:21.732703 kernel: audit: type=1327 audit(1752110121.335:443): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 10 01:15:21.308000 audit[5158]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7ffe73e45840 a2=0 a3=7ffe73e4582c items=0 ppid=2398 pid=5158 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:15:21.308000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 10 01:15:21.335000 audit[5158]: NETFILTER_CFG table=nat:136 family=2 entries=27 op=nft_unregister_chain pid=5158 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 10 01:15:21.335000 audit[5158]: SYSCALL arch=c000003e syscall=46 success=yes exit=6028 a0=3 a1=7ffe73e45840 a2=0 a3=7ffe73e4582c items=0 ppid=2398 pid=5158 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:15:21.335000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 10 01:15:24.974000 audit[5167]: NETFILTER_CFG table=filter:137 family=2 entries=17 op=nft_register_rule pid=5167 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 10 01:15:25.120858 kernel: audit: type=1325 audit(1752110124.974:444): table=filter:137 family=2 entries=17 op=nft_register_rule pid=5167 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 10 01:15:25.130150 kernel: audit: type=1300 audit(1752110124.974:444): arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffecda539c0 a2=0 a3=7ffecda539ac items=0 ppid=2398 pid=5167 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:15:25.130199 kernel: audit: type=1327 audit(1752110124.974:444): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 10 01:15:25.132157 kernel: audit: type=1325 audit(1752110124.984:445): 
table=nat:138 family=2 entries=35 op=nft_unregister_chain pid=5167 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 10 01:15:24.974000 audit[5167]: SYSCALL arch=c000003e syscall=46 success=yes exit=7480 a0=3 a1=7ffecda539c0 a2=0 a3=7ffecda539ac items=0 ppid=2398 pid=5167 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:15:24.974000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 10 01:15:24.984000 audit[5167]: NETFILTER_CFG table=nat:138 family=2 entries=35 op=nft_unregister_chain pid=5167 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 10 01:15:24.984000 audit[5167]: SYSCALL arch=c000003e syscall=46 success=yes exit=4236 a0=3 a1=7ffecda539c0 a2=0 a3=7ffecda539ac items=0 ppid=2398 pid=5167 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:15:24.984000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 10 01:15:28.867252 kernel: kauditd_printk_skb: 2 callbacks suppressed Jul 10 01:15:28.981449 kernel: audit: type=1325 audit(1752110128.849:446): table=filter:139 family=2 entries=21 op=nft_register_rule pid=5170 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 10 01:15:28.992798 kernel: audit: type=1300 audit(1752110128.849:446): arch=c000003e syscall=46 success=yes exit=8224 a0=3 a1=7fff9166c040 a2=0 a3=7fff9166c02c items=0 ppid=2398 pid=5170 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:15:28.992857 kernel: audit: type=1327 audit(1752110128.849:446): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 10 01:15:28.992874 kernel: audit: type=1325 audit(1752110128.864:447): table=nat:140 family=2 entries=19 op=nft_unregister_chain pid=5170 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 10 01:15:28.994057 kernel: audit: type=1300 audit(1752110128.864:447): arch=c000003e syscall=46 success=yes exit=2956 a0=3 a1=7fff9166c040 a2=0 a3=0 items=0 ppid=2398 pid=5170 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:15:28.994083 kernel: audit: type=1327 audit(1752110128.864:447): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 10 01:15:28.849000 audit[5170]: NETFILTER_CFG table=filter:139 family=2 entries=21 op=nft_register_rule pid=5170 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 10 01:15:28.849000 audit[5170]: SYSCALL arch=c000003e syscall=46 success=yes exit=8224 a0=3 a1=7fff9166c040 a2=0 a3=7fff9166c02c items=0 ppid=2398 pid=5170 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:15:28.849000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 10 01:15:28.864000 audit[5170]: NETFILTER_CFG table=nat:140 family=2 entries=19 op=nft_unregister_chain pid=5170 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 10 01:15:28.864000 audit[5170]: SYSCALL arch=c000003e syscall=46 success=yes exit=2956 a0=3 a1=7fff9166c040 a2=0 a3=0 items=0 ppid=2398 pid=5170 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:15:28.864000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 10 01:17:09.005632 sshd[5106]: pam_unix(sshd:session): session closed for user core Jul 10 01:17:09.136890 kernel: audit: type=1106 audit(1752110229.090:448): pid=5106 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:17:09.153260 kernel: audit: type=1104 audit(1752110229.098:449): pid=5106 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:17:09.153319 kernel: audit: type=1131 audit(1752110229.124:450): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-139.178.70.102:22-139.178.68.195:41592 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 01:17:09.090000 audit[5106]: USER_END pid=5106 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:17:09.098000 audit[5106]: CRED_DISP pid=5106 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:17:09.124000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-139.178.70.102:22-139.178.68.195:41592 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 01:17:09.117513 systemd[1]: sshd@7-139.178.70.102:22-139.178.68.195:41592.service: Deactivated successfully. Jul 10 01:17:09.132731 systemd[1]: session-10.scope: Deactivated successfully. Jul 10 01:17:09.133080 systemd-logind[1351]: Session 10 logged out. Waiting for processes to exit. Jul 10 01:17:09.168691 systemd-logind[1351]: Removed session 10. 
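Note: the audit(...) prefix in these records carries the event time as epoch seconds plus milliseconds, followed by the event serial number. A quick Go check, using event 448 (the session close above), that the epoch value lines up with the journal timestamp Jul 10 01:17:09.090:

package main

import (
	"fmt"
	"time"
)

// audit(1752110229.090:448) -> epoch 1752110229 s + 90 ms, serial 448.
func main() {
	t := time.Unix(1752110229, 90*int64(time.Millisecond)).UTC()
	fmt.Println(t.Format("Jan 02 15:04:05.000")) // Jul 10 01:17:09.090
}
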
Jul 10 01:17:10.991256 kubelet[2299]: E0710 01:17:10.988919 2299 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = DeadlineExceeded desc = failed to exec in container: timeout 5s exceeded: context deadline exceeded" containerID="0a7b9b0ea47aa889b6d5597d41d9f5ecf3ccc392f2d5f74cd7be134b392cec28" cmd=["/health","-ready"] Jul 10 01:17:11.123606 kubelet[2299]: E0710 01:17:11.123572 2299 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" filter="nil" Jul 10 01:17:11.158776 kubelet[2299]: E0710 01:17:11.158744 2299 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" containerID="dc40952d28006045e942aa22b5bc381b2f7d35d15ba79973f504ec8ad17ec2d9" cmd=["/bin/calico-node","-bird-ready","-felix-ready"] Jul 10 01:17:11.212753 kubelet[2299]: E0710 01:17:10.986191 2299 log.go:32] "ListContainers with filter from runtime service failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" filter="&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},}" Jul 10 01:17:11.215053 kubelet[2299]: E0710 01:17:11.215035 2299 container_log_manager.go:197] "Failed to rotate container logs" err="failed to list containers: rpc error: code = DeadlineExceeded desc = context deadline exceeded" Jul 10 01:17:11.241741 kubelet[2299]: W0710 01:17:11.095194 2299 reflector.go:484] object-"calico-apiserver"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jul 10 01:17:11.251185 kubelet[2299]: E0710 01:17:11.251159 2299 log.go:32] "Status from runtime service failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" Jul 10 01:17:11.253673 kubelet[2299]: E0710 01:17:11.253660 2299 kuberuntime_sandbox.go:305] "Failed to list pod sandboxes" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" Jul 10 01:17:11.260278 kubelet[2299]: E0710 01:17:11.260243 2299 kubelet.go:2887] "Container runtime sanity check failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" Jul 10 01:17:11.261941 kubelet[2299]: E0710 01:17:11.261926 2299 generic.go:238] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" Jul 10 01:17:11.276524 env[1363]: time="2025-07-10T01:17:11.276389040Z" level=info msg="RemovePodSandbox for \"a4581136627fe17a05f5104c5de93fa80c47188a9b894aba2bdf6734c99e3096\"" Jul 10 01:17:11.276524 env[1363]: time="2025-07-10T01:17:11.276418271Z" level=info msg="Forcibly stopping sandbox \"a4581136627fe17a05f5104c5de93fa80c47188a9b894aba2bdf6734c99e3096\"" Jul 10 01:17:11.292445 env[1363]: time="2025-07-10T01:17:11.291445081Z" level=error msg="get state for dc40952d28006045e942aa22b5bc381b2f7d35d15ba79973f504ec8ad17ec2d9" error="context canceled: unknown" Jul 10 01:17:11.292445 env[1363]: time="2025-07-10T01:17:11.291465029Z" level=warning msg="unknown status" status=0 Jul 10 01:17:11.293542 env[1363]: time="2025-07-10T01:17:11.293078950Z" level=error msg="ExecSync for \"dc40952d28006045e942aa22b5bc381b2f7d35d15ba79973f504ec8ad17ec2d9\" failed" error="failed to exec in container: failed to create exec \"e80b2e361f8fa8df5876a08d0d26719c4c6b5bf665c2e8459d74d36294fa754c\": context canceled: unknown" Jul 10 01:17:11.312604 
kubelet[2299]: W0710 01:17:11.106583 2299 reflector.go:484] object-"calico-system"/"whisker-ca-bundle": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jul 10 01:17:11.320258 kubelet[2299]: W0710 01:17:11.106598 2299 reflector.go:484] object-"calico-system"/"goldmane-key-pair": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jul 10 01:17:11.320380 kubelet[2299]: W0710 01:17:11.106613 2299 reflector.go:484] object-"calico-system"/"goldmane-ca-bundle": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jul 10 01:17:11.320436 kubelet[2299]: W0710 01:17:11.106624 2299 reflector.go:484] object-"calico-system"/"typha-certs": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jul 10 01:17:11.322016 kubelet[2299]: W0710 01:17:11.106648 2299 reflector.go:484] object-"calico-system"/"node-certs": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jul 10 01:17:11.322016 kubelet[2299]: W0710 01:17:11.106663 2299 reflector.go:484] object-"calico-system"/"cni-config": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jul 10 01:17:11.322016 kubelet[2299]: W0710 01:17:11.106672 2299 reflector.go:484] object-"kube-system"/"coredns": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jul 10 01:17:11.322016 kubelet[2299]: W0710 01:17:11.106682 2299 reflector.go:484] pkg/kubelet/config/apiserver.go:66: watch of *v1.Pod ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jul 10 01:17:11.322016 kubelet[2299]: W0710 01:17:11.106690 2299 reflector.go:484] object-"tigera-operator"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jul 10 01:17:11.322016 kubelet[2299]: W0710 01:17:11.106702 2299 reflector.go:484] object-"calico-system"/"whisker-backend-key-pair": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jul 10 01:17:11.322016 kubelet[2299]: W0710 01:17:11.106711 2299 reflector.go:484] object-"calico-system"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jul 10 01:17:11.323988 kubelet[2299]: W0710 01:17:11.106720 2299 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.Node ended with: an error on the server ("unable to decode an event from the 
watch stream: http2: client connection lost") has prevented the request from succeeding Jul 10 01:17:11.323988 kubelet[2299]: W0710 01:17:11.106729 2299 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.Service ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jul 10 01:17:11.323988 kubelet[2299]: W0710 01:17:11.106738 2299 reflector.go:484] object-"tigera-operator"/"kubernetes-services-endpoint": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jul 10 01:17:11.323988 kubelet[2299]: W0710 01:17:11.106748 2299 reflector.go:484] object-"kube-system"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jul 10 01:17:11.323988 kubelet[2299]: W0710 01:17:11.106755 2299 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.CSIDriver ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jul 10 01:17:11.323988 kubelet[2299]: W0710 01:17:11.106765 2299 reflector.go:484] object-"calico-system"/"goldmane": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jul 10 01:17:11.323988 kubelet[2299]: W0710 01:17:11.167243 2299 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.RuntimeClass ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jul 10 01:17:11.334201 kubelet[2299]: W0710 01:17:11.201491 2299 watcher.go:93] Error while processing event ("/sys/fs/cgroup/memory/system.slice/system-sshd.slice/sshd@7-139.178.70.102:22-139.178.68.195:41592.service": 0x40000100 == IN_CREATE|IN_ISDIR): readdirent /sys/fs/cgroup/memory/system.slice/system-sshd.slice/sshd@7-139.178.70.102:22-139.178.68.195:41592.service: no such file or directory Jul 10 01:17:11.334201 kubelet[2299]: W0710 01:17:11.320580 2299 watcher.go:93] Error while processing event ("/sys/fs/cgroup/pids/system.slice/system-sshd.slice/sshd@7-139.178.70.102:22-139.178.68.195:41592.service": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/pids/system.slice/system-sshd.slice/sshd@7-139.178.70.102:22-139.178.68.195:41592.service: no such file or directory Jul 10 01:17:11.334201 kubelet[2299]: W0710 01:17:11.320613 2299 watcher.go:93] Error while processing event ("/sys/fs/cgroup/memory/user.slice/user-500.slice/session-10.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/memory/user.slice/user-500.slice/session-10.scope: no such file or directory Jul 10 01:17:11.334201 kubelet[2299]: W0710 01:17:11.320626 2299 watcher.go:93] Error while processing event ("/sys/fs/cgroup/pids/user.slice/user-500.slice/session-10.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/pids/user.slice/user-500.slice/session-10.scope: no such file or directory Jul 10 01:17:11.339260 kubelet[2299]: W0710 01:17:11.215151 2299 reflector.go:484] object-"kube-system"/"kube-proxy": watch of *v1.ConfigMap ended with: an error 
on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jul 10 01:17:11.339260 kubelet[2299]: W0710 01:17:11.215201 2299 reflector.go:484] object-"calico-system"/"tigera-ca-bundle": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jul 10 01:17:11.339260 kubelet[2299]: W0710 01:17:11.240859 2299 reflector.go:484] object-"calico-apiserver"/"calico-apiserver-certs": watch of *v1.Secret ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jul 10 01:17:11.871609 kubelet[2299]: I0710 01:17:11.871570 2299 csi_client.go:182] "Error calling CSI NodeGetInfo()" err="rpc error: code = DeadlineExceeded desc = received context error while waiting for new LB policy update: context deadline exceeded" Jul 10 01:17:14.496822 systemd[1]: Started sshd@8-139.178.70.102:22-139.178.68.195:54602.service. Jul 10 01:17:14.810131 kernel: audit: type=1130 audit(1752110234.507:451): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-139.178.70.102:22-139.178.68.195:54602 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 01:17:14.507000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-139.178.70.102:22-139.178.68.195:54602 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 01:17:15.807814 kernel: audit: type=1101 audit(1752110235.803:452): pid=5341 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:17:15.863165 kernel: audit: type=1103 audit(1752110235.833:453): pid=5341 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:17:15.866856 kernel: audit: type=1006 audit(1752110235.833:454): pid=5341 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=11 res=1 Jul 10 01:17:15.871291 kernel: audit: type=1300 audit(1752110235.833:454): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe90b2caa0 a2=3 a3=0 items=0 ppid=1 pid=5341 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:17:15.878120 kernel: audit: type=1327 audit(1752110235.833:454): proctitle=737368643A20636F7265205B707269765D Jul 10 01:17:15.803000 audit[5341]: USER_ACCT pid=5341 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:17:15.833000 audit[5341]: CRED_ACQ pid=5341 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 
terminal=ssh res=success' Jul 10 01:17:15.833000 audit[5341]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe90b2caa0 a2=3 a3=0 items=0 ppid=1 pid=5341 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:17:15.833000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jul 10 01:17:16.033577 sshd[5341]: Accepted publickey for core from 139.178.68.195 port 54602 ssh2: RSA SHA256:NVpdRDPpwzjVTzi6orhe1cA9BvcYymCSReGH8myOy/Q Jul 10 01:17:15.857171 sshd[5341]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 10 01:17:16.230467 systemd-logind[1351]: New session 11 of user core. Jul 10 01:17:16.238567 systemd[1]: Started session-11.scope. Jul 10 01:17:16.289729 kernel: audit: type=1105 audit(1752110236.283:455): pid=5341 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:17:16.283000 audit[5341]: USER_START pid=5341 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:17:16.368000 audit[5345]: CRED_ACQ pid=5345 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:17:16.377826 kernel: audit: type=1103 audit(1752110236.368:456): pid=5345 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:17:16.483700 env[1363]: time="2025-07-10T01:17:16.483590648Z" level=error msg="ExecSync for \"0a7b9b0ea47aa889b6d5597d41d9f5ecf3ccc392f2d5f74cd7be134b392cec28\" failed" error="rpc error: code = DeadlineExceeded desc = failed to exec in container: timeout 5s exceeded: context deadline exceeded" Jul 10 01:17:16.507200 env[1363]: time="2025-07-10T01:17:16.485494512Z" level=error msg="ExecSync for \"0a7b9b0ea47aa889b6d5597d41d9f5ecf3ccc392f2d5f74cd7be134b392cec28\" failed" error="rpc error: code = DeadlineExceeded desc = failed to exec in container: timeout 5s exceeded: context deadline exceeded" Jul 10 01:17:21.486637 env[1363]: time="2025-07-10T01:17:21.470714396Z" level=error msg="ExecSync for \"dc40952d28006045e942aa22b5bc381b2f7d35d15ba79973f504ec8ad17ec2d9\" failed" error="rpc error: code = DeadlineExceeded desc = failed to exec in container: timeout 10s exceeded: context deadline exceeded" Jul 10 01:17:23.868523 env[1363]: 2025-07-10 01:17:22.265 [WARNING][5318] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="a4581136627fe17a05f5104c5de93fa80c47188a9b894aba2bdf6734c99e3096" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--b48c6-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"c15a8f19-7056-4133-9713-c590210e2422", ResourceVersion:"966", Generation:0, CreationTimestamp:time.Date(2025, time.July, 10, 1, 13, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"57bd658777", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"14c2ee1c94348d21a299e1321eccbc9c20d2f650419a9d6496db1ff04cd68bc4", Pod:"csi-node-driver-b48c6", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali8f5594511f5", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 10 01:17:23.868523 env[1363]: 2025-07-10 01:17:22.308 [INFO][5318] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a4581136627fe17a05f5104c5de93fa80c47188a9b894aba2bdf6734c99e3096" Jul 10 01:17:23.868523 env[1363]: 2025-07-10 01:17:22.309 [INFO][5318] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="a4581136627fe17a05f5104c5de93fa80c47188a9b894aba2bdf6734c99e3096" iface="eth0" netns="" Jul 10 01:17:23.868523 env[1363]: 2025-07-10 01:17:22.311 [INFO][5318] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a4581136627fe17a05f5104c5de93fa80c47188a9b894aba2bdf6734c99e3096" Jul 10 01:17:23.868523 env[1363]: 2025-07-10 01:17:22.311 [INFO][5318] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a4581136627fe17a05f5104c5de93fa80c47188a9b894aba2bdf6734c99e3096" Jul 10 01:17:23.868523 env[1363]: 2025-07-10 01:17:23.312 [INFO][5355] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a4581136627fe17a05f5104c5de93fa80c47188a9b894aba2bdf6734c99e3096" HandleID="k8s-pod-network.a4581136627fe17a05f5104c5de93fa80c47188a9b894aba2bdf6734c99e3096" Workload="localhost-k8s-csi--node--driver--b48c6-eth0" Jul 10 01:17:23.868523 env[1363]: 2025-07-10 01:17:23.350 [INFO][5355] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 10 01:17:23.868523 env[1363]: 2025-07-10 01:17:23.354 [INFO][5355] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 10 01:17:23.868523 env[1363]: 2025-07-10 01:17:23.703 [WARNING][5355] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a4581136627fe17a05f5104c5de93fa80c47188a9b894aba2bdf6734c99e3096" HandleID="k8s-pod-network.a4581136627fe17a05f5104c5de93fa80c47188a9b894aba2bdf6734c99e3096" Workload="localhost-k8s-csi--node--driver--b48c6-eth0" Jul 10 01:17:23.868523 env[1363]: 2025-07-10 01:17:23.703 [INFO][5355] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a4581136627fe17a05f5104c5de93fa80c47188a9b894aba2bdf6734c99e3096" HandleID="k8s-pod-network.a4581136627fe17a05f5104c5de93fa80c47188a9b894aba2bdf6734c99e3096" Workload="localhost-k8s-csi--node--driver--b48c6-eth0" Jul 10 01:17:23.868523 env[1363]: 2025-07-10 01:17:23.704 [INFO][5355] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 10 01:17:23.868523 env[1363]: 2025-07-10 01:17:23.782 [INFO][5318] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="a4581136627fe17a05f5104c5de93fa80c47188a9b894aba2bdf6734c99e3096" Jul 10 01:17:23.868523 env[1363]: time="2025-07-10T01:17:23.863498009Z" level=info msg="TearDown network for sandbox \"a4581136627fe17a05f5104c5de93fa80c47188a9b894aba2bdf6734c99e3096\" successfully" Jul 10 01:17:24.237635 env[1363]: time="2025-07-10T01:17:23.955734756Z" level=info msg="RemovePodSandbox \"a4581136627fe17a05f5104c5de93fa80c47188a9b894aba2bdf6734c99e3096\" returns successfully" Jul 10 01:18:32.906844 sshd[5341]: pam_unix(sshd:session): session closed for user core Jul 10 01:18:33.100750 kernel: audit: type=1106 audit(1752110312.986:457): pid=5341 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:18:33.104265 kernel: audit: type=1104 audit(1752110312.987:458): pid=5341 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:18:33.104311 kernel: audit: type=1131 audit(1752110313.005:459): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-139.178.70.102:22-139.178.68.195:54602 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 01:18:32.986000 audit[5341]: USER_END pid=5341 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:18:32.987000 audit[5341]: CRED_DISP pid=5341 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:18:33.005000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-139.178.70.102:22-139.178.68.195:54602 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 01:18:33.001198 systemd[1]: sshd@8-139.178.70.102:22-139.178.68.195:54602.service: Deactivated successfully. Jul 10 01:18:33.010791 systemd[1]: session-11.scope: Deactivated successfully. Jul 10 01:18:33.011548 systemd-logind[1351]: Session 11 logged out. Waiting for processes to exit. 
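The cni-plugin/ipam lines above (ending in "RemovePodSandbox ... returns successfully") trace one sandbox teardown: the plugin sees that CNI_CONTAINERID no longer matches the WorkloadEndpoint's ContainerID, so it keeps the endpoint, still releases the IP allocation by handle ID under the host-wide IPAM lock, and the release turns out to be a no-op because the address was already gone. The sketch below only mirrors that logged sequence; the class and function names are stand-ins, not Calico's actual code or data model:

```python
from contextlib import contextmanager

class Ipam:
    """Stand-in for the IPAM store referenced by the ipam_plugin.go messages."""
    def __init__(self, allocations=None):
        self.allocations = dict(allocations or {})   # handleID -> IP

    @contextmanager
    def host_wide_lock(self):
        # "About to acquire host-wide IPAM lock." / "Released host-wide IPAM lock."
        yield

    def release_by_handle(self, handle_id):
        return self.allocations.pop(handle_id, None)

def teardown(cni_container_id, wep_container_id, ipam):
    actions = []
    if wep_container_id and wep_container_id != cni_container_id:
        # "CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't
        # delete WEP." -- the endpoint already belongs to a newer sandbox.
        actions.append("keep WEP")
    handle_id = f"k8s-pod-network.{cni_container_id}"
    with ipam.host_wide_lock():
        if ipam.release_by_handle(handle_id) is None:
            # "Asked to release address but it doesn't exist. Ignoring"
            actions.append("ignore missing allocation")
    actions.append("teardown complete")   # then RemovePodSandbox returns successfully
    return actions

# Placeholder IDs, not the real sandbox hashes from the log.
print(teardown("old-sandbox-id", "current-sandbox-id", Ipam()))
```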
Jul 10 01:18:33.042242 systemd-logind[1351]: Removed session 11. Jul 10 01:18:33.389774 kubelet[2299]: W0710 01:18:33.381449 2299 reflector.go:561] object-"calico-system"/"goldmane": failed to list *v1.ConfigMap: Get "https://139.178.70.102:6443/api/v1/namespaces/calico-system/configmaps?fieldSelector=metadata.name%3Dgoldmane&resourceVersion=687": dial tcp 139.178.70.102:6443: i/o timeout Jul 10 01:18:33.401800 kubelet[2299]: W0710 01:18:33.401760 2299 reflector.go:561] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: Get "https://139.178.70.102:6443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dkube-proxy&resourceVersion=687": dial tcp 139.178.70.102:6443: i/o timeout Jul 10 01:18:33.546137 kubelet[2299]: W0710 01:18:33.546107 2299 reflector.go:561] object-"calico-system"/"cni-config": failed to list *v1.ConfigMap: Get "https://139.178.70.102:6443/api/v1/namespaces/calico-system/configmaps?fieldSelector=metadata.name%3Dcni-config&resourceVersion=687": net/http: TLS handshake timeout Jul 10 01:18:33.546244 kubelet[2299]: E0710 01:18:33.546150 2299 reflector.go:158] "Unhandled Error" err="object-\"calico-system\"/\"cni-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://139.178.70.102:6443/api/v1/namespaces/calico-system/configmaps?fieldSelector=metadata.name%3Dcni-config&resourceVersion=687\": net/http: TLS handshake timeout" logger="UnhandledError" Jul 10 01:18:33.805996 kubelet[2299]: E0710 01:18:33.796966 2299 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = DeadlineExceeded desc = failed to exec in container: timeout 10s exceeded: context deadline exceeded" containerID="dc40952d28006045e942aa22b5bc381b2f7d35d15ba79973f504ec8ad17ec2d9" cmd=["/bin/calico-node","-bird-ready","-felix-ready"] Jul 10 01:18:33.824779 kubelet[2299]: W0710 01:18:33.824738 2299 reflector.go:561] object-"tigera-operator"/"kubernetes-services-endpoint": failed to list *v1.ConfigMap: Get "https://139.178.70.102:6443/api/v1/namespaces/tigera-operator/configmaps?fieldSelector=metadata.name%3Dkubernetes-services-endpoint&resourceVersion=687": net/http: TLS handshake timeout Jul 10 01:18:35.987422 kubelet[2299]: E0710 01:18:35.980231 2299 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = DeadlineExceeded desc = failed to exec in container: timeout 5s exceeded: context deadline exceeded" containerID="0a7b9b0ea47aa889b6d5597d41d9f5ecf3ccc392f2d5f74cd7be134b392cec28" cmd=["/health","-ready"] Jul 10 01:18:36.763711 kubelet[2299]: E0710 01:18:35.952763 2299 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = DeadlineExceeded desc = failed to exec in container: timeout 5s exceeded: context deadline exceeded" containerID="0a7b9b0ea47aa889b6d5597d41d9f5ecf3ccc392f2d5f74cd7be134b392cec28" cmd=["/health","-live"] Jul 10 01:18:37.259021 kubelet[2299]: E0710 01:18:37.255939 2299 reflector.go:158] "Unhandled Error" err="object-\"calico-system\"/\"goldmane\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://139.178.70.102:6443/api/v1/namespaces/calico-system/configmaps?fieldSelector=metadata.name%3Dgoldmane&resourceVersion=687\": dial tcp 139.178.70.102:6443: i/o timeout" logger="UnhandledError" Jul 10 01:18:37.303825 env[1363]: time="2025-07-10T01:18:37.303787429Z" level=info msg="StopPodSandbox for \"37d614c8da7410e503f5faf653a41dc5309991646c01cee8381d58fe5a81a5a4\"" Jul 10 01:18:37.339091 kubelet[2299]: E0710 01:18:37.305578 2299 reflector.go:158] 
"Unhandled Error" err="object-\"kube-system\"/\"kube-proxy\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://139.178.70.102:6443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dkube-proxy&resourceVersion=687\": dial tcp 139.178.70.102:6443: i/o timeout" logger="UnhandledError" Jul 10 01:18:37.339091 kubelet[2299]: W0710 01:18:37.305668 2299 reflector.go:561] object-"calico-system"/"typha-certs": failed to list *v1.Secret: Get "https://139.178.70.102:6443/api/v1/namespaces/calico-system/secrets?fieldSelector=metadata.name%3Dtypha-certs&resourceVersion=612": dial tcp 139.178.70.102:6443: i/o timeout Jul 10 01:18:37.339091 kubelet[2299]: E0710 01:18:37.315062 2299 reflector.go:158] "Unhandled Error" err="object-\"calico-system\"/\"typha-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://139.178.70.102:6443/api/v1/namespaces/calico-system/secrets?fieldSelector=metadata.name%3Dtypha-certs&resourceVersion=612\": dial tcp 139.178.70.102:6443: i/o timeout" logger="UnhandledError" Jul 10 01:18:37.524380 kubelet[2299]: E0710 01:18:37.524311 2299 reflector.go:158] "Unhandled Error" err="object-\"tigera-operator\"/\"kubernetes-services-endpoint\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://139.178.70.102:6443/api/v1/namespaces/tigera-operator/configmaps?fieldSelector=metadata.name%3Dkubernetes-services-endpoint&resourceVersion=687\": net/http: TLS handshake timeout" logger="UnhandledError" Jul 10 01:18:38.525060 systemd[1]: Started sshd@9-139.178.70.102:22-139.178.68.195:59440.service. Jul 10 01:18:38.705630 kernel: audit: type=1130 audit(1752110318.539:460): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-139.178.70.102:22-139.178.68.195:59440 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 01:18:38.539000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-139.178.70.102:22-139.178.68.195:59440 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 10 01:18:39.575000 audit[5466]: USER_ACCT pid=5466 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:18:39.654281 kernel: audit: type=1101 audit(1752110319.575:461): pid=5466 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:18:39.686257 kernel: audit: type=1103 audit(1752110319.587:462): pid=5466 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:18:39.686319 kernel: audit: type=1006 audit(1752110319.587:463): pid=5466 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=12 res=1 Jul 10 01:18:39.686362 kernel: audit: type=1300 audit(1752110319.587:463): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffeaca53970 a2=3 a3=0 items=0 ppid=1 pid=5466 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:18:39.686382 kernel: audit: type=1327 audit(1752110319.587:463): proctitle=737368643A20636F7265205B707269765D Jul 10 01:18:39.587000 audit[5466]: CRED_ACQ pid=5466 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:18:39.587000 audit[5466]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffeaca53970 a2=3 a3=0 items=0 ppid=1 pid=5466 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:18:39.587000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jul 10 01:18:39.609991 sshd[5466]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 10 01:18:39.788015 sshd[5466]: Accepted publickey for core from 139.178.68.195 port 59440 ssh2: RSA SHA256:NVpdRDPpwzjVTzi6orhe1cA9BvcYymCSReGH8myOy/Q Jul 10 01:18:39.888157 systemd-logind[1351]: New session 12 of user core. Jul 10 01:18:39.895611 systemd[1]: Started session-12.scope. 
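The reflector failures above are ordinary LIST calls against the API server (for example GET https://139.178.70.102:6443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dkube-proxy) timing out at the TCP or TLS layer. A hedged sketch of issuing the same request by hand, assuming it runs inside a pod whose service account may read the ConfigMap and that the requests package is available; the kubelet itself authenticates with its client certificate instead, so this only illustrates the request shape:

```python
import requests  # assumption: available in the environment

APISERVER = "https://139.178.70.102:6443"
# Standard in-cluster service-account paths; only valid inside a pod.
TOKEN_PATH = "/var/run/secrets/kubernetes.io/serviceaccount/token"
CA_PATH = "/var/run/secrets/kubernetes.io/serviceaccount/ca.crt"

with open(TOKEN_PATH) as f:
    token = f.read().strip()

# The same LIST the kubelet reflector kept retrying for the kube-proxy ConfigMap.
resp = requests.get(
    f"{APISERVER}/api/v1/namespaces/kube-system/configmaps",
    params={"fieldSelector": "metadata.name=kube-proxy"},
    headers={"Authorization": f"Bearer {token}"},
    verify=CA_PATH,
    timeout=10,  # a timeout here corresponds to the i/o and TLS handshake timeouts above
)
print(resp.status_code)
```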
Jul 10 01:18:39.913000 audit[5466]: USER_START pid=5466 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:18:39.920668 kernel: audit: type=1105 audit(1752110319.913:464): pid=5466 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:18:39.924000 audit[5476]: CRED_ACQ pid=5476 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:18:39.928661 kernel: audit: type=1103 audit(1752110319.924:465): pid=5476 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:18:42.954346 env[1363]: time="2025-07-10T01:18:42.911775362Z" level=error msg="ExecSync for \"0a7b9b0ea47aa889b6d5597d41d9f5ecf3ccc392f2d5f74cd7be134b392cec28\" failed" error="rpc error: code = DeadlineExceeded desc = failed to exec in container: timeout 5s exceeded: context deadline exceeded" Jul 10 01:18:46.307113 env[1363]: 2025-07-10 01:18:44.612 [WARNING][5465] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="37d614c8da7410e503f5faf653a41dc5309991646c01cee8381d58fe5a81a5a4" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6d44674bc4--b2wqb-eth0", GenerateName:"calico-apiserver-6d44674bc4-", Namespace:"calico-apiserver", SelfLink:"", UID:"74cf1bc5-5d5a-4dc7-850a-71013984af05", ResourceVersion:"1095", Generation:0, CreationTimestamp:time.Date(2025, time.July, 10, 1, 13, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6d44674bc4", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"faf470fc452c2f07757eeeb2a3a0f4d17d9a92da7cefb8e597308394b6823856", Pod:"calico-apiserver-6d44674bc4-b2wqb", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali8a6829b181a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 10 01:18:46.307113 env[1363]: 2025-07-10 01:18:44.660 [INFO][5465] cni-plugin/k8s.go 640: Cleaning up netns 
ContainerID="37d614c8da7410e503f5faf653a41dc5309991646c01cee8381d58fe5a81a5a4" Jul 10 01:18:46.307113 env[1363]: 2025-07-10 01:18:44.662 [INFO][5465] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="37d614c8da7410e503f5faf653a41dc5309991646c01cee8381d58fe5a81a5a4" iface="eth0" netns="" Jul 10 01:18:46.307113 env[1363]: 2025-07-10 01:18:44.662 [INFO][5465] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="37d614c8da7410e503f5faf653a41dc5309991646c01cee8381d58fe5a81a5a4" Jul 10 01:18:46.307113 env[1363]: 2025-07-10 01:18:44.662 [INFO][5465] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="37d614c8da7410e503f5faf653a41dc5309991646c01cee8381d58fe5a81a5a4" Jul 10 01:18:46.307113 env[1363]: 2025-07-10 01:18:45.680 [INFO][5487] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="37d614c8da7410e503f5faf653a41dc5309991646c01cee8381d58fe5a81a5a4" HandleID="k8s-pod-network.37d614c8da7410e503f5faf653a41dc5309991646c01cee8381d58fe5a81a5a4" Workload="localhost-k8s-calico--apiserver--6d44674bc4--b2wqb-eth0" Jul 10 01:18:46.307113 env[1363]: 2025-07-10 01:18:45.716 [INFO][5487] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 10 01:18:46.307113 env[1363]: 2025-07-10 01:18:45.720 [INFO][5487] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 10 01:18:46.307113 env[1363]: 2025-07-10 01:18:46.084 [WARNING][5487] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="37d614c8da7410e503f5faf653a41dc5309991646c01cee8381d58fe5a81a5a4" HandleID="k8s-pod-network.37d614c8da7410e503f5faf653a41dc5309991646c01cee8381d58fe5a81a5a4" Workload="localhost-k8s-calico--apiserver--6d44674bc4--b2wqb-eth0" Jul 10 01:18:46.307113 env[1363]: 2025-07-10 01:18:46.084 [INFO][5487] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="37d614c8da7410e503f5faf653a41dc5309991646c01cee8381d58fe5a81a5a4" HandleID="k8s-pod-network.37d614c8da7410e503f5faf653a41dc5309991646c01cee8381d58fe5a81a5a4" Workload="localhost-k8s-calico--apiserver--6d44674bc4--b2wqb-eth0" Jul 10 01:18:46.307113 env[1363]: 2025-07-10 01:18:46.096 [INFO][5487] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 10 01:18:46.307113 env[1363]: 2025-07-10 01:18:46.183 [INFO][5465] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="37d614c8da7410e503f5faf653a41dc5309991646c01cee8381d58fe5a81a5a4" Jul 10 01:18:46.307113 env[1363]: time="2025-07-10T01:18:46.294272588Z" level=info msg="TearDown network for sandbox \"37d614c8da7410e503f5faf653a41dc5309991646c01cee8381d58fe5a81a5a4\" successfully" Jul 10 01:18:46.307113 env[1363]: time="2025-07-10T01:18:46.294299497Z" level=info msg="StopPodSandbox for \"37d614c8da7410e503f5faf653a41dc5309991646c01cee8381d58fe5a81a5a4\" returns successfully" Jul 10 01:18:56.999015 env[1363]: time="2025-07-10T01:18:56.976519658Z" level=error msg="ExecSync for \"0a7b9b0ea47aa889b6d5597d41d9f5ecf3ccc392f2d5f74cd7be134b392cec28\" failed" error="rpc error: code = DeadlineExceeded desc = failed to exec in container: timeout 5s exceeded: context deadline exceeded" Jul 10 01:19:01.971558 env[1363]: time="2025-07-10T01:19:01.948379173Z" level=error msg="ExecSync for \"dc40952d28006045e942aa22b5bc381b2f7d35d15ba79973f504ec8ad17ec2d9\" failed" error="rpc error: code = DeadlineExceeded desc = failed to exec in container: timeout 10s exceeded: context deadline exceeded" Jul 10 01:19:15.181452 update_engine[1353]: I0710 01:19:15.161598 1353 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Jul 10 01:19:15.181452 update_engine[1353]: I0710 01:19:15.171172 1353 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Jul 10 01:19:15.388362 update_engine[1353]: I0710 01:19:15.211817 1353 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Jul 10 01:19:15.388362 update_engine[1353]: I0710 01:19:15.235469 1353 omaha_request_params.cc:62] Current group set to lts Jul 10 01:19:15.388362 update_engine[1353]: I0710 01:19:15.286705 1353 update_attempter.cc:499] Already updated boot flags. Skipping. Jul 10 01:19:15.388362 update_engine[1353]: I0710 01:19:15.286720 1353 update_attempter.cc:643] Scheduling an action processor start. 
Jul 10 01:19:15.388362 update_engine[1353]: I0710 01:19:15.290074 1353 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Jul 10 01:19:15.388362 update_engine[1353]: I0710 01:19:15.297891 1353 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Jul 10 01:19:15.388362 update_engine[1353]: I0710 01:19:15.307533 1353 omaha_request_action.cc:270] Posting an Omaha request to disabled Jul 10 01:19:15.388362 update_engine[1353]: I0710 01:19:15.307547 1353 omaha_request_action.cc:271] Request: Jul 10 01:19:15.388362 update_engine[1353]: Jul 10 01:19:15.388362 update_engine[1353]: Jul 10 01:19:15.388362 update_engine[1353]: Jul 10 01:19:15.388362 update_engine[1353]: Jul 10 01:19:15.388362 update_engine[1353]: Jul 10 01:19:15.388362 update_engine[1353]: Jul 10 01:19:15.388362 update_engine[1353]: Jul 10 01:19:15.388362 update_engine[1353]: Jul 10 01:19:15.388362 update_engine[1353]: I0710 01:19:15.307550 1353 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jul 10 01:19:15.443297 locksmithd[1428]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Jul 10 01:19:15.445790 update_engine[1353]: I0710 01:19:15.444126 1353 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jul 10 01:19:15.445970 update_engine[1353]: E0710 01:19:15.445886 1353 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jul 10 01:19:15.445970 update_engine[1353]: I0710 01:19:15.445946 1353 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Jul 10 01:19:26.194351 update_engine[1353]: I0710 01:19:26.103377 1353 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jul 10 01:19:26.370784 update_engine[1353]: I0710 01:19:26.209919 1353 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jul 10 01:19:26.370784 update_engine[1353]: E0710 01:19:26.217290 1353 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jul 10 01:19:26.370784 update_engine[1353]: I0710 01:19:26.221094 1353 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Jul 10 01:19:36.144972 update_engine[1353]: I0710 01:19:36.102710 1353 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jul 10 01:19:36.367273 update_engine[1353]: I0710 01:19:36.212158 1353 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jul 10 01:19:36.367273 update_engine[1353]: E0710 01:19:36.224942 1353 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jul 10 01:19:36.367273 update_engine[1353]: I0710 01:19:36.229811 1353 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 Jul 10 01:19:46.147502 update_engine[1353]: I0710 01:19:46.130059 1353 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jul 10 01:19:46.249614 update_engine[1353]: I0710 01:19:46.166615 1353 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jul 10 01:19:46.249614 update_engine[1353]: E0710 01:19:46.171306 1353 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jul 10 01:19:46.249614 update_engine[1353]: I0710 01:19:46.171404 1353 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Jul 10 01:19:46.249614 update_engine[1353]: I0710 01:19:46.171411 1353 omaha_request_action.cc:621] Omaha request response: Jul 10 01:19:46.249614 update_engine[1353]: E0710 01:19:46.176628 1353 omaha_request_action.cc:640] Omaha request network transfer failed. 
Jul 10 01:19:46.249614 update_engine[1353]: I0710 01:19:46.176714 1353 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. Jul 10 01:19:46.249614 update_engine[1353]: I0710 01:19:46.176720 1353 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jul 10 01:19:46.249614 update_engine[1353]: I0710 01:19:46.176723 1353 update_attempter.cc:306] Processing Done. Jul 10 01:19:46.249614 update_engine[1353]: E0710 01:19:46.178081 1353 update_attempter.cc:619] Update failed. Jul 10 01:19:46.249614 update_engine[1353]: I0710 01:19:46.178104 1353 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse Jul 10 01:19:46.249614 update_engine[1353]: I0710 01:19:46.178108 1353 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) Jul 10 01:19:46.249614 update_engine[1353]: I0710 01:19:46.178110 1353 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. Jul 10 01:19:46.249614 update_engine[1353]: I0710 01:19:46.195752 1353 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Jul 10 01:19:46.249614 update_engine[1353]: I0710 01:19:46.201521 1353 omaha_request_action.cc:270] Posting an Omaha request to disabled Jul 10 01:19:46.249614 update_engine[1353]: I0710 01:19:46.201545 1353 omaha_request_action.cc:271] Request: Jul 10 01:19:46.249614 update_engine[1353]: Jul 10 01:19:46.249614 update_engine[1353]: Jul 10 01:19:46.252787 update_engine[1353]: Jul 10 01:19:46.252787 update_engine[1353]: Jul 10 01:19:46.252787 update_engine[1353]: Jul 10 01:19:46.252787 update_engine[1353]: Jul 10 01:19:46.252787 update_engine[1353]: I0710 01:19:46.201551 1353 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jul 10 01:19:46.252787 update_engine[1353]: I0710 01:19:46.201725 1353 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jul 10 01:19:46.252787 update_engine[1353]: E0710 01:19:46.201790 1353 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jul 10 01:19:46.252787 update_engine[1353]: I0710 01:19:46.201850 1353 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Jul 10 01:19:46.252787 update_engine[1353]: I0710 01:19:46.201857 1353 omaha_request_action.cc:621] Omaha request response: Jul 10 01:19:46.252787 update_engine[1353]: I0710 01:19:46.201862 1353 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jul 10 01:19:46.252787 update_engine[1353]: I0710 01:19:46.201864 1353 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jul 10 01:19:46.252787 update_engine[1353]: I0710 01:19:46.201866 1353 update_attempter.cc:306] Processing Done. Jul 10 01:19:46.252787 update_engine[1353]: I0710 01:19:46.201867 1353 update_attempter.cc:310] Error event sent. 
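In the update_engine block above the Omaha endpoint is the literal host "disabled" ("Posting an Omaha request to disabled"), so every transfer fails at DNS resolution, is retried a few times roughly ten seconds apart, and the check is then abandoned with an error event; on Flatcar this usually means the update server was deliberately pointed at a placeholder such as SERVER=disabled, which is an assumption here, not something the log states. An illustrative sketch of that retry pattern, not update_engine's actual code:

```python
import time
import urllib.error
import urllib.request

OMAHA_URL = "https://disabled/"   # the unresolvable host named in the log

def omaha_check(retries=3, delay_s=10):
    """Try the Omaha endpoint, retrying a few times before giving up."""
    for attempt in range(1, retries + 2):
        try:
            with urllib.request.urlopen(OMAHA_URL, timeout=10) as resp:
                return resp.read()
        except urllib.error.URLError as e:
            print(f"attempt {attempt}: {e.reason}")  # "Could not resolve host: disabled"
            if attempt <= retries:
                time.sleep(delay_s)                  # roughly the spacing seen in the log
    return None  # "Omaha request network transfer failed." -> error event, next check later
```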
Jul 10 01:19:46.252787 update_engine[1353]: I0710 01:19:46.203997 1353 update_check_scheduler.cc:74] Next update check in 41m13s Jul 10 01:19:46.273700 locksmithd[1428]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0 Jul 10 01:19:46.276761 locksmithd[1428]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0 Jul 10 01:19:50.554710 sshd[5466]: pam_unix(sshd:session): session closed for user core Jul 10 01:19:50.660089 kernel: audit: type=1106 audit(1752110390.604:466): pid=5466 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:19:50.666829 kernel: audit: type=1104 audit(1752110390.604:467): pid=5466 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:19:50.666887 kernel: audit: type=1131 audit(1752110390.617:468): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-139.178.70.102:22-139.178.68.195:59440 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 01:19:50.604000 audit[5466]: USER_END pid=5466 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:19:50.604000 audit[5466]: CRED_DISP pid=5466 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:19:50.617000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-139.178.70.102:22-139.178.68.195:59440 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 01:19:50.614252 systemd[1]: sshd@9-139.178.70.102:22-139.178.68.195:59440.service: Deactivated successfully. Jul 10 01:19:50.622417 systemd[1]: session-12.scope: Deactivated successfully. Jul 10 01:19:50.622755 systemd-logind[1351]: Session 12 logged out. Waiting for processes to exit. Jul 10 01:19:50.640355 systemd-logind[1351]: Removed session 12. 
Jul 10 01:19:51.630677 kubelet[2299]: W0710 01:19:51.628651 2299 reflector.go:561] object-"calico-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://139.178.70.102:6443/api/v1/namespaces/calico-system/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=687": http2: client connection lost Jul 10 01:19:51.957526 kubelet[2299]: E0710 01:19:51.599284 2299 controller.go:195] "Failed to update lease" err="context deadline exceeded" Jul 10 01:19:52.029509 kubelet[2299]: E0710 01:19:52.029456 2299 reflector.go:158] "Unhandled Error" err="object-\"calico-system\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://139.178.70.102:6443/api/v1/namespaces/calico-system/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=687\": http2: client connection lost" logger="UnhandledError" Jul 10 01:19:52.077903 kubelet[2299]: E0710 01:19:52.076819 2299 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = DeadlineExceeded desc = failed to exec in container: timeout 10s exceeded: context deadline exceeded" containerID="dc40952d28006045e942aa22b5bc381b2f7d35d15ba79973f504ec8ad17ec2d9" cmd=["/bin/calico-node","-bird-ready","-felix-ready"] Jul 10 01:19:52.088526 kubelet[2299]: I0710 01:19:51.839304 2299 setters.go:600] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-07-10T01:18:36Z","lastTransitionTime":"2025-07-10T01:18:36Z","reason":"KubeletNotReady","message":"[container runtime is down, PLEG is not healthy: pleg was last seen active 4m28.56049603s ago; threshold is 3m0s]"} Jul 10 01:19:52.336066 kubelet[2299]: E0710 01:19:52.336030 2299 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = DeadlineExceeded desc = failed to exec in container: timeout 5s exceeded: context deadline exceeded" containerID="0a7b9b0ea47aa889b6d5597d41d9f5ecf3ccc392f2d5f74cd7be134b392cec28" cmd=["/health","-live"] Jul 10 01:19:52.338492 kubelet[2299]: E0710 01:19:52.338468 2299 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = DeadlineExceeded desc = failed to exec in container: timeout 5s exceeded: context deadline exceeded" containerID="0a7b9b0ea47aa889b6d5597d41d9f5ecf3ccc392f2d5f74cd7be134b392cec28" cmd=["/health","-ready"] Jul 10 01:19:52.371249 env[1363]: time="2025-07-10T01:19:52.370359402Z" level=info msg="RemovePodSandbox for \"37d614c8da7410e503f5faf653a41dc5309991646c01cee8381d58fe5a81a5a4\"" Jul 10 01:19:52.371249 env[1363]: time="2025-07-10T01:19:52.370385305Z" level=info msg="Forcibly stopping sandbox \"37d614c8da7410e503f5faf653a41dc5309991646c01cee8381d58fe5a81a5a4\"" Jul 10 01:19:52.421850 systemd[1]: run-containerd-runc-k8s.io-0a7b9b0ea47aa889b6d5597d41d9f5ecf3ccc392f2d5f74cd7be134b392cec28-runc.Sr1eJ7.mount: Deactivated successfully. Jul 10 01:19:52.426819 systemd[1]: run-containerd-runc-k8s.io-f9259ae361e3731af85557e5b9606bd5ebae0bba6b9af22c45cecaaa08d4539c-runc.ur08iM.mount: Deactivated successfully. Jul 10 01:19:52.564649 systemd[1]: run-containerd-runc-k8s.io-0a7b9b0ea47aa889b6d5597d41d9f5ecf3ccc392f2d5f74cd7be134b392cec28-runc.GRyC6c.mount: Deactivated successfully. Jul 10 01:19:55.673963 systemd[1]: Started sshd@10-139.178.70.102:22-139.178.68.195:57356.service. 
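The NodeNotReady condition above is mechanical: the PLEG was last seen active 4m28.56049603s ago, which exceeds the 3m0s health threshold, so the kubelet marks the node not ready. A small check of that comparison, using a simplified parser for the Go-style durations in the message (not a full Go duration parser):

```python
import re

def parse_go_duration(s):
    """Handle the simple 'XhYmZ.Zs' forms used in the kubelet message."""
    total = 0.0
    for value, unit in re.findall(r"(\d+(?:\.\d+)?)(h|m|s)", s):
        total += float(value) * {"h": 3600, "m": 60, "s": 1}[unit]
    return total

last_seen = parse_go_duration("4m28.56049603s")   # ~268.56 s
threshold = parse_go_duration("3m0s")             # 180 s
print(last_seen > threshold)                      # True -> "PLEG is not healthy"
```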
Jul 10 01:19:55.787840 kernel: audit: type=1130 audit(1752110395.677:469): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-139.178.70.102:22-139.178.68.195:57356 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 01:19:55.677000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-139.178.70.102:22-139.178.68.195:57356 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 01:19:56.339000 audit[5670]: USER_ACCT pid=5670 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:19:56.473977 kernel: audit: type=1101 audit(1752110396.339:470): pid=5670 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:19:56.496535 kernel: audit: type=1103 audit(1752110396.377:471): pid=5670 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:19:56.496594 kernel: audit: type=1006 audit(1752110396.377:472): pid=5670 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=13 res=1 Jul 10 01:19:56.499301 kernel: audit: type=1300 audit(1752110396.377:472): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff66c8a0a0 a2=3 a3=0 items=0 ppid=1 pid=5670 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:19:56.503715 kernel: audit: type=1327 audit(1752110396.377:472): proctitle=737368643A20636F7265205B707269765D Jul 10 01:19:56.377000 audit[5670]: CRED_ACQ pid=5670 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:19:56.377000 audit[5670]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff66c8a0a0 a2=3 a3=0 items=0 ppid=1 pid=5670 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:19:56.377000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jul 10 01:19:56.556956 sshd[5670]: Accepted publickey for core from 139.178.68.195 port 57356 ssh2: RSA SHA256:NVpdRDPpwzjVTzi6orhe1cA9BvcYymCSReGH8myOy/Q Jul 10 01:19:56.392440 sshd[5670]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 10 01:19:56.705837 systemd-logind[1351]: New session 13 of user core. Jul 10 01:19:56.709931 systemd[1]: Started session-13.scope. 
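The sshd@8-..., sshd@9-..., sshd@10-... units above appear to be per-connection services spawned by socket activation (Accept=yes), where the instance name encodes a connection counter plus the local and remote address:port pairs; that reading of the naming is an assumption about the setup, not something the log states. A small sketch splitting such a unit name into its parts (IPv4 only):

```python
import re

unit = "sshd@10-139.178.70.102:22-139.178.68.195:57356.service"
m = re.fullmatch(r"sshd@(\d+)-(.+):(\d+)-(.+):(\d+)\.service", unit)
counter, local_addr, local_port, remote_addr, remote_port = m.groups()
print(counter, local_addr, local_port, remote_addr, remote_port)
# -> 10 139.178.70.102 22 139.178.68.195 57356
```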
Jul 10 01:19:56.776112 kernel: audit: type=1105 audit(1752110396.755:473): pid=5670 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:19:56.755000 audit[5670]: USER_START pid=5670 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:19:56.815797 kernel: audit: type=1103 audit(1752110396.811:474): pid=5679 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:19:56.811000 audit[5679]: CRED_ACQ pid=5679 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:20:00.286725 env[1363]: 2025-07-10 01:19:55.639 [WARNING][5633] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="37d614c8da7410e503f5faf653a41dc5309991646c01cee8381d58fe5a81a5a4" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6d44674bc4--b2wqb-eth0", GenerateName:"calico-apiserver-6d44674bc4-", Namespace:"calico-apiserver", SelfLink:"", UID:"74cf1bc5-5d5a-4dc7-850a-71013984af05", ResourceVersion:"1095", Generation:0, CreationTimestamp:time.Date(2025, time.July, 10, 1, 13, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6d44674bc4", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"faf470fc452c2f07757eeeb2a3a0f4d17d9a92da7cefb8e597308394b6823856", Pod:"calico-apiserver-6d44674bc4-b2wqb", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali8a6829b181a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 10 01:20:00.286725 env[1363]: 2025-07-10 01:19:55.755 [INFO][5633] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="37d614c8da7410e503f5faf653a41dc5309991646c01cee8381d58fe5a81a5a4" Jul 10 01:20:00.286725 env[1363]: 2025-07-10 01:19:55.758 [INFO][5633] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="37d614c8da7410e503f5faf653a41dc5309991646c01cee8381d58fe5a81a5a4" iface="eth0" netns="" Jul 10 01:20:00.286725 env[1363]: 2025-07-10 01:19:55.760 [INFO][5633] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="37d614c8da7410e503f5faf653a41dc5309991646c01cee8381d58fe5a81a5a4" Jul 10 01:20:00.286725 env[1363]: 2025-07-10 01:19:55.760 [INFO][5633] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="37d614c8da7410e503f5faf653a41dc5309991646c01cee8381d58fe5a81a5a4" Jul 10 01:20:00.286725 env[1363]: 2025-07-10 01:19:59.062 [INFO][5672] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="37d614c8da7410e503f5faf653a41dc5309991646c01cee8381d58fe5a81a5a4" HandleID="k8s-pod-network.37d614c8da7410e503f5faf653a41dc5309991646c01cee8381d58fe5a81a5a4" Workload="localhost-k8s-calico--apiserver--6d44674bc4--b2wqb-eth0" Jul 10 01:20:00.286725 env[1363]: 2025-07-10 01:19:59.123 [INFO][5672] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 10 01:20:00.286725 env[1363]: 2025-07-10 01:19:59.123 [INFO][5672] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 10 01:20:00.286725 env[1363]: 2025-07-10 01:20:00.157 [WARNING][5672] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="37d614c8da7410e503f5faf653a41dc5309991646c01cee8381d58fe5a81a5a4" HandleID="k8s-pod-network.37d614c8da7410e503f5faf653a41dc5309991646c01cee8381d58fe5a81a5a4" Workload="localhost-k8s-calico--apiserver--6d44674bc4--b2wqb-eth0" Jul 10 01:20:00.286725 env[1363]: 2025-07-10 01:20:00.161 [INFO][5672] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="37d614c8da7410e503f5faf653a41dc5309991646c01cee8381d58fe5a81a5a4" HandleID="k8s-pod-network.37d614c8da7410e503f5faf653a41dc5309991646c01cee8381d58fe5a81a5a4" Workload="localhost-k8s-calico--apiserver--6d44674bc4--b2wqb-eth0" Jul 10 01:20:00.286725 env[1363]: 2025-07-10 01:20:00.169 [INFO][5672] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 10 01:20:00.286725 env[1363]: 2025-07-10 01:20:00.213 [INFO][5633] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="37d614c8da7410e503f5faf653a41dc5309991646c01cee8381d58fe5a81a5a4" Jul 10 01:20:00.286725 env[1363]: time="2025-07-10T01:20:00.285213016Z" level=info msg="TearDown network for sandbox \"37d614c8da7410e503f5faf653a41dc5309991646c01cee8381d58fe5a81a5a4\" successfully" Jul 10 01:20:00.542533 env[1363]: time="2025-07-10T01:20:00.340820043Z" level=info msg="RemovePodSandbox \"37d614c8da7410e503f5faf653a41dc5309991646c01cee8381d58fe5a81a5a4\" returns successfully" Jul 10 01:20:21.528127 sshd[5670]: pam_unix(sshd:session): session closed for user core Jul 10 01:20:21.700300 kernel: audit: type=1106 audit(1752110421.614:475): pid=5670 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:20:21.707328 kernel: audit: type=1104 audit(1752110421.614:476): pid=5670 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:20:21.708468 kernel: audit: type=1131 audit(1752110421.651:477): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-139.178.70.102:22-139.178.68.195:57356 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 01:20:21.614000 audit[5670]: USER_END pid=5670 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:20:21.614000 audit[5670]: CRED_DISP pid=5670 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:20:21.651000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-139.178.70.102:22-139.178.68.195:57356 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 01:20:21.641958 systemd[1]: sshd@10-139.178.70.102:22-139.178.68.195:57356.service: Deactivated successfully. Jul 10 01:20:21.657058 systemd[1]: session-13.scope: Deactivated successfully. Jul 10 01:20:21.657610 systemd-logind[1351]: Session 13 logged out. Waiting for processes to exit. Jul 10 01:20:21.689606 systemd-logind[1351]: Removed session 13. Jul 10 01:20:22.746137 kubelet[2299]: E0710 01:20:20.259664 2299 controller.go:195] "Failed to update lease" err="Put \"https://139.178.70.102:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jul 10 01:20:23.060348 env[1363]: time="2025-07-10T01:20:23.060315290Z" level=info msg="StopPodSandbox for \"604f17610fdd074a3911c340cf576eef8419f4b26a87fad8a6a4345c0cd39943\"" Jul 10 01:20:26.751920 kernel: audit: type=1130 audit(1752110426.744:478): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-139.178.70.102:22-139.178.68.195:44708 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 10 01:20:26.744000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-139.178.70.102:22-139.178.68.195:44708 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 01:20:26.740798 systemd[1]: Started sshd@11-139.178.70.102:22-139.178.68.195:44708.service. Jul 10 01:20:27.496000 audit[5815]: USER_ACCT pid=5815 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:20:27.644973 kernel: audit: type=1101 audit(1752110427.496:479): pid=5815 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:20:27.662495 kernel: audit: type=1103 audit(1752110427.556:480): pid=5815 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:20:27.670927 kernel: audit: type=1006 audit(1752110427.556:481): pid=5815 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=14 res=1 Jul 10 01:20:27.670971 kernel: audit: type=1300 audit(1752110427.556:481): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffceec0d4d0 a2=3 a3=0 items=0 ppid=1 pid=5815 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:20:27.673822 kernel: audit: type=1327 audit(1752110427.556:481): proctitle=737368643A20636F7265205B707269765D Jul 10 01:20:27.556000 audit[5815]: CRED_ACQ pid=5815 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:20:27.556000 audit[5815]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffceec0d4d0 a2=3 a3=0 items=0 ppid=1 pid=5815 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:20:27.556000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jul 10 01:20:27.750801 sshd[5815]: Accepted publickey for core from 139.178.68.195 port 44708 ssh2: RSA SHA256:NVpdRDPpwzjVTzi6orhe1cA9BvcYymCSReGH8myOy/Q Jul 10 01:20:27.575702 sshd[5815]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 10 01:20:27.890133 systemd-logind[1351]: New session 14 of user core. Jul 10 01:20:27.892421 systemd[1]: Started session-14.scope. 
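The PROCTITLE fields in the audit records above are the audited process's command line, hex-encoded with NUL bytes separating the arguments (the sshd records carry 737368643A20636F7265205B707269765D). A minimal decoding sketch, assuming Python 3 and a hex string copied verbatim from one of those records:

def decode_proctitle(hex_field: str) -> str:
    # The kernel logs argv as raw bytes in hex; NUL separates the individual arguments.
    raw = bytes.fromhex(hex_field)
    return raw.replace(b"\x00", b" ").decode("utf-8", errors="replace")

print(decode_proctitle("737368643A20636F7265205B707269765D"))  # -> "sshd: core [priv]"

Applied to the longer proctitle values attached to the NETFILTER_CFG events further down, the same helper yields "iptables-restore -w 5 -W 100000 --noflush --counters".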
Jul 10 01:20:27.976313 kernel: audit: type=1105 audit(1752110427.941:482): pid=5815 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:20:27.941000 audit[5815]: USER_START pid=5815 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:20:28.043000 audit[5818]: CRED_ACQ pid=5818 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:20:28.054734 kernel: audit: type=1103 audit(1752110428.043:483): pid=5818 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:20:28.319721 env[1363]: time="2025-07-10T01:20:28.319547353Z" level=error msg="ExecSync for \"0a7b9b0ea47aa889b6d5597d41d9f5ecf3ccc392f2d5f74cd7be134b392cec28\" failed" error="rpc error: code = DeadlineExceeded desc = failed to exec in container: timeout 5s exceeded: context deadline exceeded" Jul 10 01:20:34.797203 env[1363]: 2025-07-10 01:20:32.088 [WARNING][5777] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="604f17610fdd074a3911c340cf576eef8419f4b26a87fad8a6a4345c0cd39943" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6d44674bc4--w2f48-eth0", GenerateName:"calico-apiserver-6d44674bc4-", Namespace:"calico-apiserver", SelfLink:"", UID:"8e8146e9-6407-49b7-8cef-e26dac385734", ResourceVersion:"1313", Generation:0, CreationTimestamp:time.Date(2025, time.July, 10, 1, 13, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6d44674bc4", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"131d31244e534a733a530103ddea3666cd2eb72fb0933d89a095d6d044cd52d3", Pod:"calico-apiserver-6d44674bc4-w2f48", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali96674cf1f80", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 10 01:20:34.797203 env[1363]: 2025-07-10 01:20:32.232 [INFO][5777] cni-plugin/k8s.go 640: Cleaning up netns 
ContainerID="604f17610fdd074a3911c340cf576eef8419f4b26a87fad8a6a4345c0cd39943" Jul 10 01:20:34.797203 env[1363]: 2025-07-10 01:20:32.235 [INFO][5777] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="604f17610fdd074a3911c340cf576eef8419f4b26a87fad8a6a4345c0cd39943" iface="eth0" netns="" Jul 10 01:20:34.797203 env[1363]: 2025-07-10 01:20:32.237 [INFO][5777] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="604f17610fdd074a3911c340cf576eef8419f4b26a87fad8a6a4345c0cd39943" Jul 10 01:20:34.797203 env[1363]: 2025-07-10 01:20:32.237 [INFO][5777] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="604f17610fdd074a3911c340cf576eef8419f4b26a87fad8a6a4345c0cd39943" Jul 10 01:20:34.797203 env[1363]: 2025-07-10 01:20:34.315 [INFO][5830] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="604f17610fdd074a3911c340cf576eef8419f4b26a87fad8a6a4345c0cd39943" HandleID="k8s-pod-network.604f17610fdd074a3911c340cf576eef8419f4b26a87fad8a6a4345c0cd39943" Workload="localhost-k8s-calico--apiserver--6d44674bc4--w2f48-eth0" Jul 10 01:20:34.797203 env[1363]: 2025-07-10 01:20:34.373 [INFO][5830] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 10 01:20:34.797203 env[1363]: 2025-07-10 01:20:34.377 [INFO][5830] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 10 01:20:34.797203 env[1363]: 2025-07-10 01:20:34.701 [WARNING][5830] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="604f17610fdd074a3911c340cf576eef8419f4b26a87fad8a6a4345c0cd39943" HandleID="k8s-pod-network.604f17610fdd074a3911c340cf576eef8419f4b26a87fad8a6a4345c0cd39943" Workload="localhost-k8s-calico--apiserver--6d44674bc4--w2f48-eth0" Jul 10 01:20:34.797203 env[1363]: 2025-07-10 01:20:34.701 [INFO][5830] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="604f17610fdd074a3911c340cf576eef8419f4b26a87fad8a6a4345c0cd39943" HandleID="k8s-pod-network.604f17610fdd074a3911c340cf576eef8419f4b26a87fad8a6a4345c0cd39943" Workload="localhost-k8s-calico--apiserver--6d44674bc4--w2f48-eth0" Jul 10 01:20:34.797203 env[1363]: 2025-07-10 01:20:34.705 [INFO][5830] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 10 01:20:34.797203 env[1363]: 2025-07-10 01:20:34.738 [INFO][5777] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="604f17610fdd074a3911c340cf576eef8419f4b26a87fad8a6a4345c0cd39943" Jul 10 01:20:34.797203 env[1363]: time="2025-07-10T01:20:34.792711050Z" level=info msg="TearDown network for sandbox \"604f17610fdd074a3911c340cf576eef8419f4b26a87fad8a6a4345c0cd39943\" successfully" Jul 10 01:20:34.797203 env[1363]: time="2025-07-10T01:20:34.792743182Z" level=info msg="StopPodSandbox for \"604f17610fdd074a3911c340cf576eef8419f4b26a87fad8a6a4345c0cd39943\" returns successfully" Jul 10 01:20:38.472000 audit[5839]: NETFILTER_CFG table=filter:141 family=2 entries=22 op=nft_register_rule pid=5839 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 10 01:20:38.628275 kernel: audit: type=1325 audit(1752110438.472:484): table=filter:141 family=2 entries=22 op=nft_register_rule pid=5839 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 10 01:20:38.640335 kernel: audit: type=1300 audit(1752110438.472:484): arch=c000003e syscall=46 success=yes exit=8224 a0=3 a1=7ffcd5b90570 a2=0 a3=7ffcd5b9055c items=0 ppid=2398 pid=5839 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:20:38.640406 kernel: audit: type=1327 audit(1752110438.472:484): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 10 01:20:38.642883 kernel: audit: type=1325 audit(1752110438.482:485): table=nat:142 family=2 entries=12 op=nft_register_rule pid=5839 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 10 01:20:38.642918 kernel: audit: type=1300 audit(1752110438.482:485): arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffcd5b90570 a2=0 a3=0 items=0 ppid=2398 pid=5839 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:20:38.642945 kernel: audit: type=1327 audit(1752110438.482:485): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 10 01:20:38.642964 kernel: audit: type=1325 audit(1752110438.503:486): table=filter:143 family=2 entries=22 op=nft_register_rule pid=5841 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 10 01:20:38.642988 kernel: audit: type=1300 audit(1752110438.503:486): arch=c000003e syscall=46 success=yes exit=8224 a0=3 a1=7ffdde9def80 a2=0 a3=7ffdde9def6c items=0 ppid=2398 pid=5841 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:20:38.646241 kernel: audit: type=1327 audit(1752110438.503:486): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 10 01:20:38.646274 kernel: audit: type=1325 audit(1752110438.512:487): table=nat:144 family=2 entries=12 op=nft_register_rule pid=5841 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 10 01:20:38.472000 audit[5839]: SYSCALL arch=c000003e syscall=46 success=yes exit=8224 a0=3 a1=7ffcd5b90570 a2=0 a3=7ffcd5b9055c items=0 ppid=2398 pid=5839 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 
01:20:38.472000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 10 01:20:38.482000 audit[5839]: NETFILTER_CFG table=nat:142 family=2 entries=12 op=nft_register_rule pid=5839 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 10 01:20:38.482000 audit[5839]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffcd5b90570 a2=0 a3=0 items=0 ppid=2398 pid=5839 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:20:38.482000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 10 01:20:38.503000 audit[5841]: NETFILTER_CFG table=filter:143 family=2 entries=22 op=nft_register_rule pid=5841 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 10 01:20:38.503000 audit[5841]: SYSCALL arch=c000003e syscall=46 success=yes exit=8224 a0=3 a1=7ffdde9def80 a2=0 a3=7ffdde9def6c items=0 ppid=2398 pid=5841 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:20:38.503000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 10 01:20:38.512000 audit[5841]: NETFILTER_CFG table=nat:144 family=2 entries=12 op=nft_register_rule pid=5841 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 10 01:20:38.512000 audit[5841]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffdde9def80 a2=0 a3=0 items=0 ppid=2398 pid=5841 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:20:38.512000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 10 01:20:43.533000 audit[5844]: NETFILTER_CFG table=filter:145 family=2 entries=22 op=nft_register_rule pid=5844 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 10 01:20:43.676331 kernel: kauditd_printk_skb: 2 callbacks suppressed Jul 10 01:20:43.690379 kernel: audit: type=1325 audit(1752110443.533:488): table=filter:145 family=2 entries=22 op=nft_register_rule pid=5844 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 10 01:20:43.697234 kernel: audit: type=1300 audit(1752110443.533:488): arch=c000003e syscall=46 success=yes exit=8224 a0=3 a1=7ffda77fa990 a2=0 a3=7ffda77fa97c items=0 ppid=2398 pid=5844 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:20:43.699635 kernel: audit: type=1327 audit(1752110443.533:488): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 10 01:20:43.699674 kernel: audit: type=1325 audit(1752110443.553:489): table=nat:146 family=2 entries=12 op=nft_register_rule pid=5844 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 10 01:20:43.701367 kernel: audit: type=1300 audit(1752110443.553:489): arch=c000003e syscall=46 success=yes exit=2700 a0=3 
a1=7ffda77fa990 a2=0 a3=0 items=0 ppid=2398 pid=5844 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:20:43.702822 kernel: audit: type=1327 audit(1752110443.553:489): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 10 01:20:43.533000 audit[5844]: SYSCALL arch=c000003e syscall=46 success=yes exit=8224 a0=3 a1=7ffda77fa990 a2=0 a3=7ffda77fa97c items=0 ppid=2398 pid=5844 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:20:43.533000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 10 01:20:43.553000 audit[5844]: NETFILTER_CFG table=nat:146 family=2 entries=12 op=nft_register_rule pid=5844 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 10 01:20:43.553000 audit[5844]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffda77fa990 a2=0 a3=0 items=0 ppid=2398 pid=5844 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:20:43.553000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 10 01:21:09.940971 kernel: audit: type=1130 audit(1752110469.935:490): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-139.178.70.102:22-18.116.239.38:49484 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 01:21:09.935000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-139.178.70.102:22-18.116.239.38:49484 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 01:21:09.932203 systemd[1]: Started sshd@12-139.178.70.102:22-18.116.239.38:49484.service. Jul 10 01:21:10.344232 sshd[5852]: kex_exchange_identification: Connection closed by remote host Jul 10 01:21:10.344232 sshd[5852]: Connection closed by 18.116.239.38 port 49484 Jul 10 01:21:10.353947 kernel: audit: type=1131 audit(1752110470.345:491): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-139.178.70.102:22-18.116.239.38:49484 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 01:21:10.345000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-139.178.70.102:22-18.116.239.38:49484 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 01:21:10.345064 systemd[1]: sshd@12-139.178.70.102:22-18.116.239.38:49484.service: Deactivated successfully. 
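The audit(1752110470.345:491) stamps above encode the event time as a Unix epoch (seconds plus milliseconds) followed by a per-boot serial number; converting the epoch reproduces the UTC prefix the journal prints for the same event. A small sketch, assuming Python 3 and a stamp copied from one of the records above:

from datetime import datetime, timezone

def audit_event_time(stamp: str) -> str:
    # "1752110470.345:491" -> epoch 1752110470.345 (UTC), serial 491
    epoch, _, _serial = stamp.partition(":")
    return datetime.fromtimestamp(float(epoch), tz=timezone.utc).strftime("%b %d %H:%M:%S.%f")

print(audit_event_time("1752110470.345:491"))  # -> "Jul 10 01:21:10.345000", matching the journal prefix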
Jul 10 01:21:18.308000 audit[5856]: NETFILTER_CFG table=filter:147 family=2 entries=22 op=nft_register_rule pid=5856 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 10 01:21:18.460757 kernel: audit: type=1325 audit(1752110478.308:492): table=filter:147 family=2 entries=22 op=nft_register_rule pid=5856 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 10 01:21:18.473519 kernel: audit: type=1300 audit(1752110478.308:492): arch=c000003e syscall=46 success=yes exit=8224 a0=3 a1=7ffd183bfbb0 a2=0 a3=7ffd183bfb9c items=0 ppid=2398 pid=5856 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:21:18.473581 kernel: audit: type=1327 audit(1752110478.308:492): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 10 01:21:18.473606 kernel: audit: type=1325 audit(1752110478.316:493): table=nat:148 family=2 entries=12 op=nft_register_rule pid=5856 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 10 01:21:18.473622 kernel: audit: type=1300 audit(1752110478.316:493): arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffd183bfbb0 a2=0 a3=0 items=0 ppid=2398 pid=5856 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:21:18.474778 kernel: audit: type=1327 audit(1752110478.316:493): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 10 01:21:18.308000 audit[5856]: SYSCALL arch=c000003e syscall=46 success=yes exit=8224 a0=3 a1=7ffd183bfbb0 a2=0 a3=7ffd183bfb9c items=0 ppid=2398 pid=5856 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:21:18.308000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 10 01:21:18.316000 audit[5856]: NETFILTER_CFG table=nat:148 family=2 entries=12 op=nft_register_rule pid=5856 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 10 01:21:18.316000 audit[5856]: SYSCALL arch=c000003e syscall=46 success=yes exit=2700 a0=3 a1=7ffd183bfbb0 a2=0 a3=0 items=0 ppid=2398 pid=5856 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:21:18.316000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 10 01:21:21.554000 audit[5858]: NETFILTER_CFG table=filter:149 family=2 entries=19 op=nft_register_rule pid=5858 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 10 01:21:21.658100 kernel: audit: type=1325 audit(1752110481.554:494): table=filter:149 family=2 entries=19 op=nft_register_rule pid=5858 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 10 01:21:21.663773 kernel: audit: type=1300 audit(1752110481.554:494): arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7ffc30277620 a2=0 a3=7ffc3027760c items=0 ppid=2398 pid=5858 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 
fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:21:21.663809 kernel: audit: type=1327 audit(1752110481.554:494): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 10 01:21:21.663829 kernel: audit: type=1325 audit(1752110481.566:495): table=nat:150 family=2 entries=33 op=nft_register_chain pid=5858 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 10 01:21:21.554000 audit[5858]: SYSCALL arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7ffc30277620 a2=0 a3=7ffc3027760c items=0 ppid=2398 pid=5858 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:21:21.554000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 10 01:21:21.566000 audit[5858]: NETFILTER_CFG table=nat:150 family=2 entries=33 op=nft_register_chain pid=5858 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 10 01:21:21.566000 audit[5858]: SYSCALL arch=c000003e syscall=46 success=yes exit=13428 a0=3 a1=7ffc30277620 a2=0 a3=7ffc3027760c items=0 ppid=2398 pid=5858 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:21:21.566000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 10 01:21:25.182838 kernel: kauditd_printk_skb: 2 callbacks suppressed Jul 10 01:21:25.252821 kernel: audit: type=1325 audit(1752110485.168:496): table=filter:151 family=2 entries=16 op=nft_register_rule pid=5861 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 10 01:21:25.259397 kernel: audit: type=1300 audit(1752110485.168:496): arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7ffe9dc0a160 a2=0 a3=7ffe9dc0a14c items=0 ppid=2398 pid=5861 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:21:25.260811 kernel: audit: type=1327 audit(1752110485.168:496): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 10 01:21:25.262592 kernel: audit: type=1325 audit(1752110485.196:497): table=nat:152 family=2 entries=18 op=nft_register_rule pid=5861 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 10 01:21:25.264656 kernel: audit: type=1300 audit(1752110485.196:497): arch=c000003e syscall=46 success=yes exit=5004 a0=3 a1=7ffe9dc0a160 a2=0 a3=0 items=0 ppid=2398 pid=5861 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:21:25.265539 kernel: audit: type=1327 audit(1752110485.196:497): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 10 01:21:25.265566 kernel: audit: type=1325 audit(1752110485.223:498): table=filter:153 family=2 entries=16 op=nft_register_rule pid=5863 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" 
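Each NETFILTER_CFG record above notes which nft table an iptables-restore run touched and how many entries it registered (for example entries=16 for the filter table and entries=54 for the nat chain registration by pid 5863). A rough tally sketch, assuming Python 3 and journal lines in the format shown; the field names are taken from the records themselves:

import re
from collections import Counter

pat = re.compile(r"NETFILTER_CFG table=(\w+):\d+ family=\d+ entries=(\d+) op=(\w+)")

def tally_netfilter_cfg(lines):
    # Sum registered entries per (table, operation) across a batch of audit records.
    totals = Counter()
    for line in lines:
        for table, entries, op in pat.findall(line):
            totals[(table, op)] += int(entries)
    return totals

sample = [
    'audit[5863]: NETFILTER_CFG table=filter:153 family=2 entries=16 op=nft_register_rule pid=5863',
    'audit[5863]: NETFILTER_CFG table=nat:154 family=2 entries=54 op=nft_register_chain pid=5863',
]
print(tally_netfilter_cfg(sample))
# Counter({('nat', 'nft_register_chain'): 54, ('filter', 'nft_register_rule'): 16})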
Jul 10 01:21:25.266179 kernel: audit: type=1300 audit(1752110485.223:498): arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7ffca67906f0 a2=0 a3=7ffca67906dc items=0 ppid=2398 pid=5863 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:21:25.266204 kernel: audit: type=1327 audit(1752110485.223:498): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 10 01:21:25.266221 kernel: audit: type=1325 audit(1752110485.252:499): table=nat:154 family=2 entries=54 op=nft_register_chain pid=5863 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 10 01:21:25.168000 audit[5861]: NETFILTER_CFG table=filter:151 family=2 entries=16 op=nft_register_rule pid=5861 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 10 01:21:25.168000 audit[5861]: SYSCALL arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7ffe9dc0a160 a2=0 a3=7ffe9dc0a14c items=0 ppid=2398 pid=5861 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:21:25.168000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 10 01:21:25.196000 audit[5861]: NETFILTER_CFG table=nat:152 family=2 entries=18 op=nft_register_rule pid=5861 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 10 01:21:25.196000 audit[5861]: SYSCALL arch=c000003e syscall=46 success=yes exit=5004 a0=3 a1=7ffe9dc0a160 a2=0 a3=0 items=0 ppid=2398 pid=5861 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:21:25.196000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 10 01:21:25.223000 audit[5863]: NETFILTER_CFG table=filter:153 family=2 entries=16 op=nft_register_rule pid=5863 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 10 01:21:25.223000 audit[5863]: SYSCALL arch=c000003e syscall=46 success=yes exit=5992 a0=3 a1=7ffca67906f0 a2=0 a3=7ffca67906dc items=0 ppid=2398 pid=5863 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:21:25.223000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 10 01:21:25.252000 audit[5863]: NETFILTER_CFG table=nat:154 family=2 entries=54 op=nft_register_chain pid=5863 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 10 01:21:25.252000 audit[5863]: SYSCALL arch=c000003e syscall=46 success=yes exit=19092 a0=3 a1=7ffca67906f0 a2=0 a3=7ffca67906dc items=0 ppid=2398 pid=5863 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:21:25.252000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 10 01:21:26.476000 
audit[5866]: NETFILTER_CFG table=filter:155 family=2 entries=15 op=nft_register_rule pid=5866 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 10 01:21:26.476000 audit[5866]: SYSCALL arch=c000003e syscall=46 success=yes exit=5248 a0=3 a1=7fff23768ca0 a2=0 a3=7fff23768c8c items=0 ppid=2398 pid=5866 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:21:26.476000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 10 01:21:26.480000 audit[5866]: NETFILTER_CFG table=nat:156 family=2 entries=29 op=nft_register_chain pid=5866 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 10 01:21:26.480000 audit[5866]: SYSCALL arch=c000003e syscall=46 success=yes exit=10468 a0=3 a1=7fff23768ca0 a2=0 a3=7fff23768c8c items=0 ppid=2398 pid=5866 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:21:26.480000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 10 01:21:27.616000 audit[5868]: NETFILTER_CFG table=filter:157 family=2 entries=13 op=nft_register_rule pid=5868 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 10 01:21:27.616000 audit[5868]: SYSCALL arch=c000003e syscall=46 success=yes exit=4504 a0=3 a1=7fff2f3c9be0 a2=0 a3=7fff2f3c9bcc items=0 ppid=2398 pid=5868 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:21:27.616000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 10 01:21:27.626000 audit[5868]: NETFILTER_CFG table=nat:158 family=2 entries=27 op=nft_register_chain pid=5868 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 10 01:21:27.626000 audit[5868]: SYSCALL arch=c000003e syscall=46 success=yes exit=9348 a0=3 a1=7fff2f3c9be0 a2=0 a3=7fff2f3c9bcc items=0 ppid=2398 pid=5868 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:21:27.626000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 10 01:21:28.898000 audit[5870]: NETFILTER_CFG table=filter:159 family=2 entries=12 op=nft_register_rule pid=5870 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 10 01:21:28.898000 audit[5870]: SYSCALL arch=c000003e syscall=46 success=yes exit=4504 a0=3 a1=7ffced79be10 a2=0 a3=7ffced79bdfc items=0 ppid=2398 pid=5870 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:21:28.898000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 10 01:21:28.904000 audit[5870]: NETFILTER_CFG table=nat:160 family=2 entries=22 op=nft_register_rule pid=5870 
subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 10 01:21:28.904000 audit[5870]: SYSCALL arch=c000003e syscall=46 success=yes exit=6540 a0=3 a1=7ffced79be10 a2=0 a3=7ffced79bdfc items=0 ppid=2398 pid=5870 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:21:28.904000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 10 01:21:45.864709 kernel: kauditd_printk_skb: 20 callbacks suppressed Jul 10 01:21:46.009683 kernel: audit: type=1325 audit(1752110505.840:506): table=filter:161 family=2 entries=11 op=nft_register_rule pid=5896 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 10 01:21:46.014365 kernel: audit: type=1300 audit(1752110505.840:506): arch=c000003e syscall=46 success=yes exit=3760 a0=3 a1=7fff8498d000 a2=0 a3=7fff8498cfec items=0 ppid=2398 pid=5896 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:21:46.017219 kernel: audit: type=1327 audit(1752110505.840:506): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 10 01:21:46.020057 kernel: audit: type=1325 audit(1752110505.851:507): table=nat:162 family=2 entries=29 op=nft_register_chain pid=5896 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 10 01:21:46.020092 kernel: audit: type=1300 audit(1752110505.851:507): arch=c000003e syscall=46 success=yes exit=10116 a0=3 a1=7fff8498d000 a2=0 a3=7fff8498cfec items=0 ppid=2398 pid=5896 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:21:46.022195 kernel: audit: type=1327 audit(1752110505.851:507): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 10 01:21:45.840000 audit[5896]: NETFILTER_CFG table=filter:161 family=2 entries=11 op=nft_register_rule pid=5896 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 10 01:21:45.840000 audit[5896]: SYSCALL arch=c000003e syscall=46 success=yes exit=3760 a0=3 a1=7fff8498d000 a2=0 a3=7fff8498cfec items=0 ppid=2398 pid=5896 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:21:45.840000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 10 01:21:45.851000 audit[5896]: NETFILTER_CFG table=nat:162 family=2 entries=29 op=nft_register_chain pid=5896 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 10 01:21:45.851000 audit[5896]: SYSCALL arch=c000003e syscall=46 success=yes exit=10116 a0=3 a1=7fff8498d000 a2=0 a3=7fff8498cfec items=0 ppid=2398 pid=5896 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:21:45.851000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 10 01:21:57.318753 sshd[5815]: pam_unix(sshd:session): session closed for user core Jul 10 01:21:57.553786 kernel: audit: type=1106 audit(1752110517.436:508): pid=5815 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:21:57.562165 kernel: audit: type=1104 audit(1752110517.436:509): pid=5815 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:21:57.566718 kernel: audit: type=1131 audit(1752110517.457:510): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-139.178.70.102:22-139.178.68.195:44708 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 01:21:57.436000 audit[5815]: USER_END pid=5815 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:21:57.436000 audit[5815]: CRED_DISP pid=5815 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:21:57.457000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-139.178.70.102:22-139.178.68.195:44708 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 01:21:57.451966 systemd[1]: sshd@11-139.178.70.102:22-139.178.68.195:44708.service: Deactivated successfully. Jul 10 01:21:57.464957 systemd[1]: session-14.scope: Deactivated successfully. Jul 10 01:21:57.464975 systemd-logind[1351]: Session 14 logged out. Waiting for processes to exit. Jul 10 01:21:57.502269 systemd-logind[1351]: Removed session 14. Jul 10 01:22:02.228795 kubelet[2299]: I0710 01:22:02.219103 2299 ???:1] "http: TLS handshake error from 18.116.239.38:47882: EOF" Jul 10 01:22:02.423721 kernel: audit: type=1130 audit(1752110522.401:511): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-139.178.70.102:22-139.178.68.195:44126 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 01:22:02.401000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-139.178.70.102:22-139.178.68.195:44126 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 01:22:02.397294 systemd[1]: Started sshd@13-139.178.70.102:22-139.178.68.195:44126.service. 
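Pairing the USER_START and USER_END records for a given ses value gives the lifetime of each SSH session; for session 14 above the pair is audit(1752110427.941:482) and audit(1752110517.436:508). A minimal sketch, assuming Python 3 and the two epochs copied from those records:

start = 1752110427.941  # USER_START pid=5815 ses=14
end = 1752110517.436    # USER_END   pid=5815 ses=14
print(f"session 14 lasted {end - start:.1f} s")  # -> "session 14 lasted 89.5 s"

That agrees with the journal view of the session opening around 01:20:27 and closing around 01:21:57.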
Jul 10 01:22:02.660057 kubelet[2299]: W0710 01:22:02.069082 2299 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.Service ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jul 10 01:22:02.791588 kubelet[2299]: E0710 01:22:02.779303 2299 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = DeadlineExceeded desc = failed to exec in container: timeout 5s exceeded: context deadline exceeded" containerID="0a7b9b0ea47aa889b6d5597d41d9f5ecf3ccc392f2d5f74cd7be134b392cec28" cmd=["/health","-ready"] Jul 10 01:22:02.834000 audit[5902]: USER_ACCT pid=5902 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:22:02.840976 kernel: audit: type=1101 audit(1752110522.834:512): pid=5902 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:22:02.841045 sshd[5902]: Accepted publickey for core from 139.178.68.195 port 44126 ssh2: RSA SHA256:NVpdRDPpwzjVTzi6orhe1cA9BvcYymCSReGH8myOy/Q Jul 10 01:22:02.840000 audit[5902]: CRED_ACQ pid=5902 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:22:02.846811 kernel: audit: type=1103 audit(1752110522.840:513): pid=5902 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:22:02.846865 kernel: audit: type=1006 audit(1752110522.840:514): pid=5902 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=15 res=1 Jul 10 01:22:02.840000 audit[5902]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc9c591880 a2=3 a3=0 items=0 ppid=1 pid=5902 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:22:02.854620 kernel: audit: type=1300 audit(1752110522.840:514): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc9c591880 a2=3 a3=0 items=0 ppid=1 pid=5902 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:22:02.854688 kernel: audit: type=1327 audit(1752110522.840:514): proctitle=737368643A20636F7265205B707269765D Jul 10 01:22:02.840000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jul 10 01:22:02.854010 sshd[5902]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 10 01:22:02.862961 kubelet[2299]: E0710 01:22:02.862612 2299 kubelet_node_status.go:535] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"NetworkUnavailable\\\"},{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-07-10T01:18:36Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-07-10T01:18:36Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-07-10T01:18:36Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-07-10T01:18:36Z\\\",\\\"lastTransitionTime\\\":\\\"2025-07-10T01:18:36Z\\\",\\\"message\\\":\\\"[container runtime is down, PLEG is not healthy: pleg was last seen active 4m28.56049603s ago; threshold is 3m0s]\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\\\",\\\"ghcr.io/flatcar/calico/node:v3.30.2\\\"],\\\"sizeBytes\\\":158500025},{\\\"names\\\":[\\\"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\\\",\\\"ghcr.io/flatcar/calico/cni:v3.30.2\\\"],\\\"sizeBytes\\\":71928924},{\\\"names\\\":[\\\"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\\\",\\\"ghcr.io/flatcar/calico/goldmane:v3.30.2\\\"],\\\"sizeBytes\\\":66352154},{\\\"names\\\":[\\\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\\\",\\\"registry.k8s.io/etcd:3.5.15-0\\\"],\\\"sizeBytes\\\":56909194},{\\\"names\\\":[\\\"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\\\",\\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\\\"],\\\"sizeBytes\\\":52769359},{\\\"names\\\":[\\\"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\\\",\\\"ghcr.io/flatcar/calico/apiserver:v3.30.2\\\"],\\\"sizeBytes\\\":48810696},{\\\"names\\\":[\\\"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\\\",\\\"ghcr.io/flatcar/calico/typha:v3.30.2\\\"],\\\"sizeBytes\\\":35233218},{\\\"names\\\":[\\\"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\\\",\\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\\\"],\\\"sizeBytes\\\":33083307},{\\\"names\\\":[\\\"registry.k8s.io/kube-proxy@sha256:bcbb293812bdf587b28ea98369a8c347ca84884160046296761acdf12b27029d\\\",\\\"registry.k8s.io/kube-proxy:v1.31.10\\\"],\\\"sizeBytes\\\":30382962},{\\\"names\\\":[\\\"registry.k8s.io/kube-apiserver@sha256:083d7d64af31cd090f870eb49fb815e6bb42c175fc602ee9dae2f28f082bd4dc\\\",\\\"registry.k8s.io/kube-apiserver:v1.31.10\\\"],\\\"sizeBytes\\\":28074544},{\\\"names\\\":[\\\"registry.k8s.io/kube-controller-manager@sha256:3c67387d023c6114879f1e817669fd641797d30f117230682faf3930ecaaf0fe\\\",\\\"registry.k8s.io/kube-controller-manager:v1.31.10\\\"],\\\"sizeBytes\\\":26315128},{\\\"names\\\":[\\\"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\\\",\\\"quay.io/tigera/operator:v1.38.3\\\"],\\\"sizeBytes\\\":25052538},{\\\"names\\\":[\\\"registry.k8s.io/kube-scheduler@sha256:284dc2a5cf6afc9b76e39ad4b79c680c23d289488517643b28784a06d0141272\\\",\\\"registry.k8s.io/kube-scheduler:v1.31.10\\\"],\\\"sizeBytes\\\":20385523},{\\\"names\\\":[\\
\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\\\",\\\"registry.k8s.io/coredns/coredns:v1.11.3\\\"],\\\"sizeBytes\\\":18562039},{\\\"names\\\":[\\\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\\\",\\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\\\"],\\\"sizeBytes\\\":16196439},{\\\"names\\\":[\\\"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\\\",\\\"ghcr.io/flatcar/calico/csi:v3.30.2\\\"],\\\"sizeBytes\\\":10251893},{\\\"names\\\":[\\\"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\\\",\\\"ghcr.io/flatcar/calico/whisker:v3.30.2\\\"],\\\"sizeBytes\\\":6153902},{\\\"names\\\":[\\\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\\\",\\\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\\\"],\\\"sizeBytes\\\":5939619},{\\\"names\\\":[\\\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\\\",\\\"registry.k8s.io/pause:3.10\\\"],\\\"sizeBytes\\\":320368},{\\\"names\\\":[\\\"registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db\\\",\\\"registry.k8s.io/pause:3.6\\\"],\\\"sizeBytes\\\":301773}]}}\" for node \"localhost\": Patch \"https://139.178.70.102:6443/api/v1/nodes/localhost/status?timeout=10s\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jul 10 01:22:02.876454 kubelet[2299]: W0710 01:22:02.264846 2299 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.RuntimeClass ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jul 10 01:22:02.905456 kubelet[2299]: E0710 01:22:02.905427 2299 goroutinemap.go:150] Operation for "/var/lib/kubelet/plugins_registry/csi.tigera.io-reg.sock" failed. No retries permitted until 2025-07-10 01:22:02.695440041 +0000 UTC m=+542.563866929 (durationBeforeRetry 500ms). 
Error: RegisterPlugin error -- plugin registration failed with err: rpc error: code = DeadlineExceeded desc = received context error while waiting for new LB policy update: context deadline exceeded: rpc error: code = DeadlineExceeded desc = context deadline exceeded Jul 10 01:22:02.923459 kubelet[2299]: W0710 01:22:02.333806 2299 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.Node ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jul 10 01:22:02.923459 kubelet[2299]: W0710 01:22:02.445368 2299 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.CSIDriver ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jul 10 01:22:02.923459 kubelet[2299]: W0710 01:22:02.624580 2299 reflector.go:561] object-"calico-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://139.178.70.102:6443/api/v1/namespaces/calico-system/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=687": http2: client connection lost Jul 10 01:22:02.945907 kubelet[2299]: E0710 01:22:02.945875 2299 controller.go:195] "Failed to update lease" err="Put \"https://139.178.70.102:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jul 10 01:22:02.958024 systemd-logind[1351]: New session 15 of user core. Jul 10 01:22:02.961152 systemd[1]: Started session-15.scope. Jul 10 01:22:02.969000 audit[5902]: USER_START pid=5902 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:22:02.974651 kernel: audit: type=1105 audit(1752110522.969:515): pid=5902 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:22:02.970000 audit[5906]: CRED_ACQ pid=5906 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:22:02.978660 kernel: audit: type=1103 audit(1752110522.970:516): pid=5906 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:22:03.012244 env[1363]: time="2025-07-10T01:22:03.012105820Z" level=info msg="RemovePodSandbox for \"604f17610fdd074a3911c340cf576eef8419f4b26a87fad8a6a4345c0cd39943\"" Jul 10 01:22:03.012244 env[1363]: time="2025-07-10T01:22:03.012132098Z" level=info msg="Forcibly stopping sandbox \"604f17610fdd074a3911c340cf576eef8419f4b26a87fad8a6a4345c0cd39943\"" Jul 10 01:22:03.064127 kubelet[2299]: E0710 01:22:03.064089 2299 reflector.go:158] "Unhandled Error" err="object-\"calico-system\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get 
\"https://139.178.70.102:6443/api/v1/namespaces/calico-system/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=687\": http2: client connection lost" logger="UnhandledError" Jul 10 01:22:08.269940 env[1363]: time="2025-07-10T01:22:08.253416007Z" level=error msg="ExecSync for \"0a7b9b0ea47aa889b6d5597d41d9f5ecf3ccc392f2d5f74cd7be134b392cec28\" failed" error="rpc error: code = DeadlineExceeded desc = failed to exec in container: timeout 5s exceeded: context deadline exceeded" Jul 10 01:22:14.169318 env[1363]: 2025-07-10 01:22:11.656 [WARNING][5983] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="604f17610fdd074a3911c340cf576eef8419f4b26a87fad8a6a4345c0cd39943" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6d44674bc4--w2f48-eth0", GenerateName:"calico-apiserver-6d44674bc4-", Namespace:"calico-apiserver", SelfLink:"", UID:"8e8146e9-6407-49b7-8cef-e26dac385734", ResourceVersion:"1832", Generation:0, CreationTimestamp:time.Date(2025, time.July, 10, 1, 13, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6d44674bc4", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"131d31244e534a733a530103ddea3666cd2eb72fb0933d89a095d6d044cd52d3", Pod:"calico-apiserver-6d44674bc4-w2f48", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali96674cf1f80", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 10 01:22:14.169318 env[1363]: 2025-07-10 01:22:11.797 [INFO][5983] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="604f17610fdd074a3911c340cf576eef8419f4b26a87fad8a6a4345c0cd39943" Jul 10 01:22:14.169318 env[1363]: 2025-07-10 01:22:11.804 [INFO][5983] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="604f17610fdd074a3911c340cf576eef8419f4b26a87fad8a6a4345c0cd39943" iface="eth0" netns="" Jul 10 01:22:14.169318 env[1363]: 2025-07-10 01:22:11.811 [INFO][5983] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="604f17610fdd074a3911c340cf576eef8419f4b26a87fad8a6a4345c0cd39943" Jul 10 01:22:14.169318 env[1363]: 2025-07-10 01:22:11.811 [INFO][5983] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="604f17610fdd074a3911c340cf576eef8419f4b26a87fad8a6a4345c0cd39943" Jul 10 01:22:14.169318 env[1363]: 2025-07-10 01:22:13.771 [INFO][6020] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="604f17610fdd074a3911c340cf576eef8419f4b26a87fad8a6a4345c0cd39943" HandleID="k8s-pod-network.604f17610fdd074a3911c340cf576eef8419f4b26a87fad8a6a4345c0cd39943" Workload="localhost-k8s-calico--apiserver--6d44674bc4--w2f48-eth0" Jul 10 01:22:14.169318 env[1363]: 2025-07-10 01:22:13.787 [INFO][6020] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 10 01:22:14.169318 env[1363]: 2025-07-10 01:22:13.792 [INFO][6020] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 10 01:22:14.169318 env[1363]: 2025-07-10 01:22:14.037 [WARNING][6020] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="604f17610fdd074a3911c340cf576eef8419f4b26a87fad8a6a4345c0cd39943" HandleID="k8s-pod-network.604f17610fdd074a3911c340cf576eef8419f4b26a87fad8a6a4345c0cd39943" Workload="localhost-k8s-calico--apiserver--6d44674bc4--w2f48-eth0" Jul 10 01:22:14.169318 env[1363]: 2025-07-10 01:22:14.037 [INFO][6020] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="604f17610fdd074a3911c340cf576eef8419f4b26a87fad8a6a4345c0cd39943" HandleID="k8s-pod-network.604f17610fdd074a3911c340cf576eef8419f4b26a87fad8a6a4345c0cd39943" Workload="localhost-k8s-calico--apiserver--6d44674bc4--w2f48-eth0" Jul 10 01:22:14.169318 env[1363]: 2025-07-10 01:22:14.040 [INFO][6020] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 10 01:22:14.169318 env[1363]: 2025-07-10 01:22:14.134 [INFO][5983] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="604f17610fdd074a3911c340cf576eef8419f4b26a87fad8a6a4345c0cd39943" Jul 10 01:22:14.169318 env[1363]: time="2025-07-10T01:22:14.163324152Z" level=info msg="TearDown network for sandbox \"604f17610fdd074a3911c340cf576eef8419f4b26a87fad8a6a4345c0cd39943\" successfully" Jul 10 01:22:14.374405 env[1363]: time="2025-07-10T01:22:14.185702911Z" level=info msg="RemovePodSandbox \"604f17610fdd074a3911c340cf576eef8419f4b26a87fad8a6a4345c0cd39943\" returns successfully" Jul 10 01:22:33.550938 kubelet[2299]: W0710 01:22:02.715801 2299 watcher.go:93] Error while processing event ("/sys/fs/cgroup/pids/system.slice/system-sshd.slice/sshd@11-139.178.70.102:22-139.178.68.195:44708.service": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/pids/system.slice/system-sshd.slice/sshd@11-139.178.70.102:22-139.178.68.195:44708.service: no such file or directory Jul 10 01:22:40.216289 sshd[5902]: pam_unix(sshd:session): session closed for user core Jul 10 01:22:40.304262 kernel: audit: type=1106 audit(1752110560.253:517): pid=5902 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:22:40.306846 kernel: audit: type=1104 audit(1752110560.253:518): pid=5902 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:22:40.306872 kernel: audit: type=1131 audit(1752110560.291:519): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-139.178.70.102:22-139.178.68.195:44126 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 01:22:40.253000 audit[5902]: USER_END pid=5902 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:22:40.253000 audit[5902]: CRED_DISP pid=5902 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:22:40.291000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-139.178.70.102:22-139.178.68.195:44126 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 01:22:40.287287 systemd[1]: sshd@13-139.178.70.102:22-139.178.68.195:44126.service: Deactivated successfully. Jul 10 01:22:40.295083 systemd[1]: session-15.scope: Deactivated successfully. Jul 10 01:22:40.295317 systemd-logind[1351]: Session 15 logged out. Waiting for processes to exit. Jul 10 01:22:40.312633 systemd-logind[1351]: Removed session 15. 
Jul 10 01:22:44.401876 kubelet[2299]: W0710 01:22:42.567120 2299 watcher.go:93] Error while processing event ("/sys/fs/cgroup/memory/user.slice/user-500.slice/session-14.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/memory/user.slice/user-500.slice/session-14.scope: no such file or directory Jul 10 01:22:44.795238 kubelet[2299]: E0710 01:22:02.757416 2299 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://139.178.70.102:6443/api/v1/namespaces/kube-system/events\": http2: client connection lost" event="&Event{ObjectMeta:{kube-apiserver-localhost.1850bef8cf18244d kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-apiserver-localhost,UID:8acd60714a0f0f6f5e038fa659db2909,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Liveness probe failed: Get \"https://139.178.70.102:6443/livez\": context deadline exceeded (Client.Timeout exceeded while awaiting headers),Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-07-10 01:17:10.882755661 +0000 UTC m=+250.751182547,LastTimestamp:2025-07-10 01:17:10.882755661 +0000 UTC m=+250.751182547,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jul 10 01:22:44.901224 kubelet[2299]: E0710 01:22:44.899014 2299 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = DeadlineExceeded desc = failed to exec in container: timeout 5s exceeded: context deadline exceeded" containerID="0a7b9b0ea47aa889b6d5597d41d9f5ecf3ccc392f2d5f74cd7be134b392cec28" cmd=["/health","-live"] Jul 10 01:22:45.048427 kubelet[2299]: W0710 01:22:44.811786 2299 watcher.go:93] Error while processing event ("/sys/fs/cgroup/pids/user.slice/user-500.slice/session-14.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/pids/user.slice/user-500.slice/session-14.scope: no such file or directory Jul 10 01:22:45.048538 kubelet[2299]: W0710 01:22:45.048408 2299 watcher.go:93] Error while processing event ("/sys/fs/cgroup/memory/system.slice/system-sshd.slice/sshd@12-139.178.70.102:22-18.116.239.38:49484.service": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/memory/system.slice/system-sshd.slice/sshd@12-139.178.70.102:22-18.116.239.38:49484.service: no such file or directory Jul 10 01:22:45.048538 kubelet[2299]: W0710 01:22:45.048464 2299 watcher.go:93] Error while processing event ("/sys/fs/cgroup/pids/system.slice/system-sshd.slice/sshd@12-139.178.70.102:22-18.116.239.38:49484.service": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/pids/system.slice/system-sshd.slice/sshd@12-139.178.70.102:22-18.116.239.38:49484.service: no such file or directory Jul 10 01:22:45.065703 env[1363]: time="2025-07-10T01:22:45.065387627Z" level=info msg="StopPodSandbox for \"d08209f28426fb10a90356c7c2f30ce87cefef9ed58d075482b630394972a62b\"" Jul 10 01:22:45.212809 kubelet[2299]: W0710 01:22:45.212760 2299 watcher.go:93] Error while processing event ("/sys/fs/cgroup/memory/system.slice/system-sshd.slice/sshd@13-139.178.70.102:22-139.178.68.195:44126.service": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/memory/system.slice/system-sshd.slice/sshd@13-139.178.70.102:22-139.178.68.195:44126.service: no such file or directory Jul 10 01:22:45.213123 kubelet[2299]: W0710 01:22:45.213106 2299 
watcher.go:93] Error while processing event ("/sys/fs/cgroup/pids/system.slice/system-sshd.slice/sshd@13-139.178.70.102:22-139.178.68.195:44126.service": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/pids/system.slice/system-sshd.slice/sshd@13-139.178.70.102:22-139.178.68.195:44126.service: no such file or directory Jul 10 01:22:45.213229 kubelet[2299]: W0710 01:22:45.213218 2299 watcher.go:93] Error while processing event ("/sys/fs/cgroup/memory/user.slice/user-500.slice/session-15.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/memory/user.slice/user-500.slice/session-15.scope: no such file or directory Jul 10 01:22:45.219716 kubelet[2299]: W0710 01:22:45.213298 2299 watcher.go:93] Error while processing event ("/sys/fs/cgroup/pids/user.slice/user-500.slice/session-15.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/pids/user.slice/user-500.slice/session-15.scope: no such file or directory Jul 10 01:22:45.267279 systemd[1]: Started sshd@14-139.178.70.102:22-139.178.68.195:47364.service. Jul 10 01:22:45.333242 kernel: audit: type=1130 audit(1752110565.271:520): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-139.178.70.102:22-139.178.68.195:47364 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 01:22:45.271000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-139.178.70.102:22-139.178.68.195:47364 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 01:22:45.957000 audit[6066]: USER_ACCT pid=6066 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:22:46.291212 kernel: audit: type=1101 audit(1752110565.957:521): pid=6066 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:22:46.346958 kernel: audit: type=1103 audit(1752110566.043:522): pid=6066 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:22:46.347038 kernel: audit: type=1006 audit(1752110566.043:523): pid=6066 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=16 res=1 Jul 10 01:22:46.360713 kernel: audit: type=1300 audit(1752110566.043:523): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffdbb354e60 a2=3 a3=0 items=0 ppid=1 pid=6066 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:22:46.360755 kernel: audit: type=1327 audit(1752110566.043:523): proctitle=737368643A20636F7265205B707269765D Jul 10 01:22:46.043000 audit[6066]: CRED_ACQ pid=6066 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 
01:22:46.043000 audit[6066]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffdbb354e60 a2=3 a3=0 items=0 ppid=1 pid=6066 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:22:46.043000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jul 10 01:22:46.481547 sshd[6066]: Accepted publickey for core from 139.178.68.195 port 47364 ssh2: RSA SHA256:NVpdRDPpwzjVTzi6orhe1cA9BvcYymCSReGH8myOy/Q Jul 10 01:22:46.106622 sshd[6066]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 10 01:22:46.729966 systemd-logind[1351]: New session 16 of user core. Jul 10 01:22:46.739144 systemd[1]: Started session-16.scope. Jul 10 01:22:46.802000 audit[6066]: USER_START pid=6066 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:22:46.806668 kernel: audit: type=1105 audit(1752110566.802:524): pid=6066 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:22:46.879000 audit[6076]: CRED_ACQ pid=6076 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:22:46.891958 kernel: audit: type=1103 audit(1752110566.879:525): pid=6076 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:22:50.367111 env[1363]: time="2025-07-10T01:22:50.343480655Z" level=error msg="ExecSync for \"0a7b9b0ea47aa889b6d5597d41d9f5ecf3ccc392f2d5f74cd7be134b392cec28\" failed" error="rpc error: code = DeadlineExceeded desc = failed to exec in container: timeout 5s exceeded: context deadline exceeded" Jul 10 01:22:55.236567 env[1363]: 2025-07-10 01:22:52.510 [WARNING][6065] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="d08209f28426fb10a90356c7c2f30ce87cefef9ed58d075482b630394972a62b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--4k5ld-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"a29ef6dc-4246-436d-87dd-9c8e96247aeb", ResourceVersion:"1833", Generation:0, CreationTimestamp:time.Date(2025, time.July, 10, 1, 13, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"5e9aedbb1d15e1d7bd8b79126017424346117b11833100260ee33d8092673319", Pod:"coredns-7c65d6cfc9-4k5ld", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali7006602a141", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 10 01:22:55.236567 env[1363]: 2025-07-10 01:22:52.663 [INFO][6065] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d08209f28426fb10a90356c7c2f30ce87cefef9ed58d075482b630394972a62b" Jul 10 01:22:55.236567 env[1363]: 2025-07-10 01:22:52.665 [INFO][6065] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d08209f28426fb10a90356c7c2f30ce87cefef9ed58d075482b630394972a62b" iface="eth0" netns="" Jul 10 01:22:55.236567 env[1363]: 2025-07-10 01:22:52.666 [INFO][6065] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d08209f28426fb10a90356c7c2f30ce87cefef9ed58d075482b630394972a62b" Jul 10 01:22:55.236567 env[1363]: 2025-07-10 01:22:52.666 [INFO][6065] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d08209f28426fb10a90356c7c2f30ce87cefef9ed58d075482b630394972a62b" Jul 10 01:22:55.236567 env[1363]: 2025-07-10 01:22:54.756 [INFO][6087] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d08209f28426fb10a90356c7c2f30ce87cefef9ed58d075482b630394972a62b" HandleID="k8s-pod-network.d08209f28426fb10a90356c7c2f30ce87cefef9ed58d075482b630394972a62b" Workload="localhost-k8s-coredns--7c65d6cfc9--4k5ld-eth0" Jul 10 01:22:55.236567 env[1363]: 2025-07-10 01:22:54.798 [INFO][6087] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 10 01:22:55.236567 env[1363]: 2025-07-10 01:22:54.805 [INFO][6087] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 10 01:22:55.236567 env[1363]: 2025-07-10 01:22:55.119 [WARNING][6087] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d08209f28426fb10a90356c7c2f30ce87cefef9ed58d075482b630394972a62b" HandleID="k8s-pod-network.d08209f28426fb10a90356c7c2f30ce87cefef9ed58d075482b630394972a62b" Workload="localhost-k8s-coredns--7c65d6cfc9--4k5ld-eth0" Jul 10 01:22:55.236567 env[1363]: 2025-07-10 01:22:55.137 [INFO][6087] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d08209f28426fb10a90356c7c2f30ce87cefef9ed58d075482b630394972a62b" HandleID="k8s-pod-network.d08209f28426fb10a90356c7c2f30ce87cefef9ed58d075482b630394972a62b" Workload="localhost-k8s-coredns--7c65d6cfc9--4k5ld-eth0" Jul 10 01:22:55.236567 env[1363]: 2025-07-10 01:22:55.166 [INFO][6087] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 10 01:22:55.236567 env[1363]: 2025-07-10 01:22:55.200 [INFO][6065] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="d08209f28426fb10a90356c7c2f30ce87cefef9ed58d075482b630394972a62b" Jul 10 01:22:55.236567 env[1363]: time="2025-07-10T01:22:55.235009416Z" level=info msg="TearDown network for sandbox \"d08209f28426fb10a90356c7c2f30ce87cefef9ed58d075482b630394972a62b\" successfully" Jul 10 01:22:55.236567 env[1363]: time="2025-07-10T01:22:55.235033856Z" level=info msg="StopPodSandbox for \"d08209f28426fb10a90356c7c2f30ce87cefef9ed58d075482b630394972a62b\" returns successfully" Jul 10 01:23:20.060418 sshd[6066]: pam_unix(sshd:session): session closed for user core Jul 10 01:23:20.345573 kernel: audit: type=1106 audit(1752110600.202:526): pid=6066 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:23:20.351669 kernel: audit: type=1104 audit(1752110600.202:527): pid=6066 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:23:20.351711 kernel: audit: type=1131 audit(1752110600.304:528): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-139.178.70.102:22-139.178.68.195:47364 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 01:23:20.202000 audit[6066]: USER_END pid=6066 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:23:20.202000 audit[6066]: CRED_DISP pid=6066 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:23:20.304000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-139.178.70.102:22-139.178.68.195:47364 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 01:23:20.285026 systemd[1]: sshd@14-139.178.70.102:22-139.178.68.195:47364.service: Deactivated successfully. Jul 10 01:23:20.310354 systemd[1]: session-16.scope: Deactivated successfully. Jul 10 01:23:20.310690 systemd-logind[1351]: Session 16 logged out. 
Waiting for processes to exit. Jul 10 01:23:20.335864 systemd-logind[1351]: Removed session 16. Jul 10 01:23:24.443984 kubelet[2299]: W0710 01:23:23.692904 2299 watcher.go:93] Error while processing event ("/sys/fs/cgroup/pids/user.slice/user-500.slice/session-16.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/pids/user.slice/user-500.slice/session-16.scope: no such file or directory Jul 10 01:23:24.555356 kubelet[2299]: W0710 01:23:24.551130 2299 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://139.178.70.102:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&resourceVersion=1661": net/http: TLS handshake timeout Jul 10 01:23:24.670191 kubelet[2299]: W0710 01:23:24.665329 2299 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://139.178.70.102:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&resourceVersion=1734": net/http: TLS handshake timeout Jul 10 01:23:24.672610 kubelet[2299]: E0710 01:23:24.670206 2299 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://139.178.70.102:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&resourceVersion=1734\": net/http: TLS handshake timeout" logger="UnhandledError" Jul 10 01:23:24.719747 kubelet[2299]: W0710 01:23:24.718653 2299 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://139.178.70.102:6443/apis/storage.k8s.io/v1/csidrivers?resourceVersion=705": net/http: TLS handshake timeout Jul 10 01:23:24.719747 kubelet[2299]: E0710 01:23:24.718742 2299 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://139.178.70.102:6443/apis/storage.k8s.io/v1/csidrivers?resourceVersion=705\": net/http: TLS handshake timeout" logger="UnhandledError" Jul 10 01:23:24.742653 env[1363]: time="2025-07-10T01:23:24.742617441Z" level=info msg="RemovePodSandbox for \"d08209f28426fb10a90356c7c2f30ce87cefef9ed58d075482b630394972a62b\"" Jul 10 01:23:24.746483 env[1363]: time="2025-07-10T01:23:24.742656694Z" level=info msg="Forcibly stopping sandbox \"d08209f28426fb10a90356c7c2f30ce87cefef9ed58d075482b630394972a62b\"" Jul 10 01:23:24.758221 kubelet[2299]: W0710 01:23:24.758196 2299 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://139.178.70.102:6443/apis/node.k8s.io/v1/runtimeclasses?resourceVersion=1": net/http: TLS handshake timeout Jul 10 01:23:24.758403 kubelet[2299]: E0710 01:23:24.758391 2299 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://139.178.70.102:6443/apis/node.k8s.io/v1/runtimeclasses?resourceVersion=1\": net/http: TLS handshake timeout" logger="UnhandledError" Jul 10 01:23:24.769409 kubelet[2299]: E0710 01:23:24.769378 2299 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://139.178.70.102:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&resourceVersion=1661\": net/http: TLS handshake timeout" logger="UnhandledError" Jul 10 01:23:24.782199 kubelet[2299]: E0710 01:23:24.782158 2299 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = DeadlineExceeded desc = 
failed to exec in container: timeout 5s exceeded: context deadline exceeded" containerID="0a7b9b0ea47aa889b6d5597d41d9f5ecf3ccc392f2d5f74cd7be134b392cec28" cmd=["/health","-ready"] Jul 10 01:23:24.791346 kubelet[2299]: E0710 01:23:24.791292 2299 controller.go:195] "Failed to update lease" err="Put \"https://139.178.70.102:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jul 10 01:23:24.821590 kubelet[2299]: E0710 01:23:24.821546 2299 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"localhost\": Get \"https://139.178.70.102:6443/api/v1/nodes/localhost?timeout=10s\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jul 10 01:23:24.863452 kubelet[2299]: W0710 01:23:24.862989 2299 reflector.go:561] object-"calico-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://139.178.70.102:6443/api/v1/namespaces/calico-system/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=687": net/http: TLS handshake timeout Jul 10 01:23:24.863452 kubelet[2299]: E0710 01:23:24.863027 2299 reflector.go:158] "Unhandled Error" err="object-\"calico-system\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://139.178.70.102:6443/api/v1/namespaces/calico-system/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=687\": net/http: TLS handshake timeout" logger="UnhandledError" Jul 10 01:23:24.891636 kubelet[2299]: E0710 01:23:24.891610 2299 kubelet.go:2512] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="8m57.684s" Jul 10 01:23:24.996897 kubelet[2299]: E0710 01:23:24.995203 2299 kubelet.go:2345] "Skipping pod synchronization" err="PLEG is not healthy: pleg was last seen active 5m8.255111825s ago; threshold is 3m0s" Jul 10 01:23:25.102567 kubelet[2299]: E0710 01:23:25.102535 2299 kubelet.go:2345] "Skipping pod synchronization" err="PLEG is not healthy: pleg was last seen active 5m8.366832232s ago; threshold is 3m0s" Jul 10 01:23:25.150355 systemd[1]: Started sshd@15-139.178.70.102:22-139.178.68.195:47486.service. Jul 10 01:23:25.195849 kernel: audit: type=1130 audit(1752110605.153:529): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-139.178.70.102:22-139.178.68.195:47486 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 01:23:25.153000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-139.178.70.102:22-139.178.68.195:47486 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 10 01:23:25.869000 audit[6136]: USER_ACCT pid=6136 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:23:26.030207 kernel: audit: type=1101 audit(1752110605.869:530): pid=6136 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:23:26.051748 kernel: audit: type=1103 audit(1752110605.971:531): pid=6136 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:23:26.051819 kernel: audit: type=1006 audit(1752110605.982:532): pid=6136 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=17 res=1 Jul 10 01:23:26.051838 kernel: audit: type=1300 audit(1752110605.982:532): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fffe29bce20 a2=3 a3=0 items=0 ppid=1 pid=6136 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:23:26.053362 kernel: audit: type=1327 audit(1752110605.982:532): proctitle=737368643A20636F7265205B707269765D Jul 10 01:23:25.971000 audit[6136]: CRED_ACQ pid=6136 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:23:25.982000 audit[6136]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fffe29bce20 a2=3 a3=0 items=0 ppid=1 pid=6136 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:23:25.982000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jul 10 01:23:26.108707 sshd[6136]: Accepted publickey for core from 139.178.68.195 port 47486 ssh2: RSA SHA256:NVpdRDPpwzjVTzi6orhe1cA9BvcYymCSReGH8myOy/Q Jul 10 01:23:26.050173 sshd[6136]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 10 01:23:26.277807 systemd[1]: run-containerd-runc-k8s.io-0a7b9b0ea47aa889b6d5597d41d9f5ecf3ccc392f2d5f74cd7be134b392cec28-runc.vbvsNj.mount: Deactivated successfully. Jul 10 01:23:26.382719 systemd-logind[1351]: New session 17 of user core. Jul 10 01:23:26.388443 systemd[1]: Started session-17.scope. 
Jul 10 01:23:26.452000 audit[6136]: USER_START pid=6136 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:23:26.509265 kernel: audit: type=1105 audit(1752110606.452:533): pid=6136 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:23:26.554000 audit[6234]: CRED_ACQ pid=6234 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:23:26.572675 kernel: audit: type=1103 audit(1752110606.554:534): pid=6234 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:23:31.666850 env[1363]: 2025-07-10 01:23:29.228 [WARNING][6131] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="d08209f28426fb10a90356c7c2f30ce87cefef9ed58d075482b630394972a62b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--4k5ld-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"a29ef6dc-4246-436d-87dd-9c8e96247aeb", ResourceVersion:"1833", Generation:0, CreationTimestamp:time.Date(2025, time.July, 10, 1, 13, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"5e9aedbb1d15e1d7bd8b79126017424346117b11833100260ee33d8092673319", Pod:"coredns-7c65d6cfc9-4k5ld", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali7006602a141", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 10 01:23:31.666850 env[1363]: 2025-07-10 01:23:29.311 [INFO][6131] cni-plugin/k8s.go 640: Cleaning up netns 
ContainerID="d08209f28426fb10a90356c7c2f30ce87cefef9ed58d075482b630394972a62b" Jul 10 01:23:31.666850 env[1363]: 2025-07-10 01:23:29.313 [INFO][6131] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d08209f28426fb10a90356c7c2f30ce87cefef9ed58d075482b630394972a62b" iface="eth0" netns="" Jul 10 01:23:31.666850 env[1363]: 2025-07-10 01:23:29.315 [INFO][6131] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d08209f28426fb10a90356c7c2f30ce87cefef9ed58d075482b630394972a62b" Jul 10 01:23:31.666850 env[1363]: 2025-07-10 01:23:29.315 [INFO][6131] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d08209f28426fb10a90356c7c2f30ce87cefef9ed58d075482b630394972a62b" Jul 10 01:23:31.666850 env[1363]: 2025-07-10 01:23:31.405 [INFO][6244] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d08209f28426fb10a90356c7c2f30ce87cefef9ed58d075482b630394972a62b" HandleID="k8s-pod-network.d08209f28426fb10a90356c7c2f30ce87cefef9ed58d075482b630394972a62b" Workload="localhost-k8s-coredns--7c65d6cfc9--4k5ld-eth0" Jul 10 01:23:31.666850 env[1363]: 2025-07-10 01:23:31.438 [INFO][6244] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 10 01:23:31.666850 env[1363]: 2025-07-10 01:23:31.440 [INFO][6244] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 10 01:23:31.666850 env[1363]: 2025-07-10 01:23:31.605 [WARNING][6244] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="d08209f28426fb10a90356c7c2f30ce87cefef9ed58d075482b630394972a62b" HandleID="k8s-pod-network.d08209f28426fb10a90356c7c2f30ce87cefef9ed58d075482b630394972a62b" Workload="localhost-k8s-coredns--7c65d6cfc9--4k5ld-eth0" Jul 10 01:23:31.666850 env[1363]: 2025-07-10 01:23:31.606 [INFO][6244] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d08209f28426fb10a90356c7c2f30ce87cefef9ed58d075482b630394972a62b" HandleID="k8s-pod-network.d08209f28426fb10a90356c7c2f30ce87cefef9ed58d075482b630394972a62b" Workload="localhost-k8s-coredns--7c65d6cfc9--4k5ld-eth0" Jul 10 01:23:31.666850 env[1363]: 2025-07-10 01:23:31.607 [INFO][6244] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 10 01:23:31.666850 env[1363]: 2025-07-10 01:23:31.630 [INFO][6131] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="d08209f28426fb10a90356c7c2f30ce87cefef9ed58d075482b630394972a62b" Jul 10 01:23:31.666850 env[1363]: time="2025-07-10T01:23:31.664239496Z" level=info msg="TearDown network for sandbox \"d08209f28426fb10a90356c7c2f30ce87cefef9ed58d075482b630394972a62b\" successfully" Jul 10 01:23:31.771009 env[1363]: time="2025-07-10T01:23:31.692175377Z" level=info msg="RemovePodSandbox \"d08209f28426fb10a90356c7c2f30ce87cefef9ed58d075482b630394972a62b\" returns successfully" Jul 10 01:23:53.725148 sshd[6136]: pam_unix(sshd:session): session closed for user core Jul 10 01:23:53.861322 kernel: audit: type=1106 audit(1752110633.792:535): pid=6136 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:23:53.864762 kernel: audit: type=1104 audit(1752110633.792:536): pid=6136 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:23:53.864792 kernel: audit: type=1131 audit(1752110633.804:537): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-139.178.70.102:22-139.178.68.195:47486 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 01:23:53.792000 audit[6136]: USER_END pid=6136 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:23:53.792000 audit[6136]: CRED_DISP pid=6136 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:23:53.804000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-139.178.70.102:22-139.178.68.195:47486 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 01:23:53.802351 systemd[1]: sshd@15-139.178.70.102:22-139.178.68.195:47486.service: Deactivated successfully. Jul 10 01:23:53.812635 systemd[1]: session-17.scope: Deactivated successfully. Jul 10 01:23:53.813170 systemd-logind[1351]: Session 17 logged out. Waiting for processes to exit. Jul 10 01:23:53.833918 systemd-logind[1351]: Removed session 17. 
Jul 10 01:23:55.899285 kubelet[2299]: I0710 01:23:53.705618 2299 request.go:700] Waited for 6.739933701s, retries: 1, retry-after: 1s - retry-reason: due to retryable error, error: Get "https://139.178.70.102:6443/api/v1/pods?allowWatchBookmarks=true&fieldSelector=spec.nodeName%3Dlocalhost&resourceVersion=1370&timeoutSeconds=444&watch=true": net/http: TLS handshake timeout - request: GET:https://139.178.70.102:6443/api/v1/pods?allowWatchBookmarks=true&fieldSelector=spec.nodeName%3Dlocalhost&resourceVersion=1370&timeoutSeconds=444&watch=true Jul 10 01:23:56.797572 kubelet[2299]: E0710 01:23:55.758305 2299 kubelet.go:2345] "Skipping pod synchronization" err="PLEG is not healthy: pleg was last seen active 5m8.596703973s ago; threshold is 3m0s" Jul 10 01:23:57.075816 kubelet[2299]: E0710 01:23:57.073185 2299 controller.go:195] "Failed to update lease" err="Put \"https://139.178.70.102:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jul 10 01:23:57.454343 kubelet[2299]: E0710 01:23:57.445758 2299 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"localhost\": Get \"https://139.178.70.102:6443/api/v1/nodes/localhost?timeout=10s\": context deadline exceeded" Jul 10 01:23:58.104045 kubelet[2299]: E0710 01:23:57.826861 2299 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime is down, PLEG is not healthy: pleg was last seen active 5m40.741653012s ago; threshold is 3m0s]" Jul 10 01:23:58.725875 env[1363]: time="2025-07-10T01:23:58.725672836Z" level=info msg="StopPodSandbox for \"eb3cfca5f1219fd7b024127b39867c352bbcd18cadb3f774b2a9b88ac71868e6\"" Jul 10 01:23:58.818569 kernel: audit: type=1130 audit(1752110638.787:538): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-139.178.70.102:22-139.178.68.195:49440 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 01:23:58.787000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-139.178.70.102:22-139.178.68.195:49440 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 01:23:58.786103 systemd[1]: Started sshd@16-139.178.70.102:22-139.178.68.195:49440.service. 
Jul 10 01:23:59.190000 audit[6264]: USER_ACCT pid=6264 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:23:59.243313 kernel: audit: type=1101 audit(1752110639.190:539): pid=6264 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:23:59.248472 kernel: audit: type=1103 audit(1752110639.201:540): pid=6264 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:23:59.248505 kernel: audit: type=1006 audit(1752110639.201:541): pid=6264 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=18 res=1 Jul 10 01:23:59.248523 kernel: audit: type=1300 audit(1752110639.201:541): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffcd1898ca0 a2=3 a3=0 items=0 ppid=1 pid=6264 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:23:59.250116 kernel: audit: type=1327 audit(1752110639.201:541): proctitle=737368643A20636F7265205B707269765D Jul 10 01:23:59.201000 audit[6264]: CRED_ACQ pid=6264 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:23:59.201000 audit[6264]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffcd1898ca0 a2=3 a3=0 items=0 ppid=1 pid=6264 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:23:59.201000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jul 10 01:23:59.215160 sshd[6264]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 10 01:23:59.276901 sshd[6264]: Accepted publickey for core from 139.178.68.195 port 49440 ssh2: RSA SHA256:NVpdRDPpwzjVTzi6orhe1cA9BvcYymCSReGH8myOy/Q Jul 10 01:23:59.328237 systemd-logind[1351]: New session 18 of user core. Jul 10 01:23:59.332611 systemd[1]: Started session-18.scope. 
Jul 10 01:23:59.346414 kernel: audit: type=1105 audit(1752110639.339:542): pid=6264 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:23:59.339000 audit[6264]: USER_START pid=6264 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:23:59.355702 kernel: audit: type=1103 audit(1752110639.351:543): pid=6277 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:23:59.351000 audit[6277]: CRED_ACQ pid=6277 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:24:01.857781 env[1363]: 2025-07-10 01:24:00.657 [WARNING][6268] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="eb3cfca5f1219fd7b024127b39867c352bbcd18cadb3f774b2a9b88ac71868e6" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--5477ff879d--j2p5q-eth0", GenerateName:"calico-kube-controllers-5477ff879d-", Namespace:"calico-system", SelfLink:"", UID:"5f01bcaa-ff1c-4bd5-988b-d3c60c6abdcc", ResourceVersion:"1834", Generation:0, CreationTimestamp:time.Date(2025, time.July, 10, 1, 13, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5477ff879d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"6503e247e079e9b1040ac4f9c23ba0f9f2bd42e5328355dba03928c27dd6e73b", Pod:"calico-kube-controllers-5477ff879d-j2p5q", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calif65d54f8885", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 10 01:24:01.857781 env[1363]: 2025-07-10 01:24:00.736 [INFO][6268] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="eb3cfca5f1219fd7b024127b39867c352bbcd18cadb3f774b2a9b88ac71868e6" Jul 10 01:24:01.857781 env[1363]: 2025-07-10 01:24:00.737 [INFO][6268] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="eb3cfca5f1219fd7b024127b39867c352bbcd18cadb3f774b2a9b88ac71868e6" iface="eth0" netns="" Jul 10 01:24:01.857781 env[1363]: 2025-07-10 01:24:00.740 [INFO][6268] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="eb3cfca5f1219fd7b024127b39867c352bbcd18cadb3f774b2a9b88ac71868e6" Jul 10 01:24:01.857781 env[1363]: 2025-07-10 01:24:00.740 [INFO][6268] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="eb3cfca5f1219fd7b024127b39867c352bbcd18cadb3f774b2a9b88ac71868e6" Jul 10 01:24:01.857781 env[1363]: 2025-07-10 01:24:01.683 [INFO][6288] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="eb3cfca5f1219fd7b024127b39867c352bbcd18cadb3f774b2a9b88ac71868e6" HandleID="k8s-pod-network.eb3cfca5f1219fd7b024127b39867c352bbcd18cadb3f774b2a9b88ac71868e6" Workload="localhost-k8s-calico--kube--controllers--5477ff879d--j2p5q-eth0" Jul 10 01:24:01.857781 env[1363]: 2025-07-10 01:24:01.697 [INFO][6288] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 10 01:24:01.857781 env[1363]: 2025-07-10 01:24:01.699 [INFO][6288] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 10 01:24:01.857781 env[1363]: 2025-07-10 01:24:01.801 [WARNING][6288] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="eb3cfca5f1219fd7b024127b39867c352bbcd18cadb3f774b2a9b88ac71868e6" HandleID="k8s-pod-network.eb3cfca5f1219fd7b024127b39867c352bbcd18cadb3f774b2a9b88ac71868e6" Workload="localhost-k8s-calico--kube--controllers--5477ff879d--j2p5q-eth0" Jul 10 01:24:01.857781 env[1363]: 2025-07-10 01:24:01.801 [INFO][6288] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="eb3cfca5f1219fd7b024127b39867c352bbcd18cadb3f774b2a9b88ac71868e6" HandleID="k8s-pod-network.eb3cfca5f1219fd7b024127b39867c352bbcd18cadb3f774b2a9b88ac71868e6" Workload="localhost-k8s-calico--kube--controllers--5477ff879d--j2p5q-eth0" Jul 10 01:24:01.857781 env[1363]: 2025-07-10 01:24:01.802 [INFO][6288] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 10 01:24:01.857781 env[1363]: 2025-07-10 01:24:01.827 [INFO][6268] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="eb3cfca5f1219fd7b024127b39867c352bbcd18cadb3f774b2a9b88ac71868e6" Jul 10 01:24:01.857781 env[1363]: time="2025-07-10T01:24:01.856538475Z" level=info msg="TearDown network for sandbox \"eb3cfca5f1219fd7b024127b39867c352bbcd18cadb3f774b2a9b88ac71868e6\" successfully" Jul 10 01:24:01.857781 env[1363]: time="2025-07-10T01:24:01.856580813Z" level=info msg="StopPodSandbox for \"eb3cfca5f1219fd7b024127b39867c352bbcd18cadb3f774b2a9b88ac71868e6\" returns successfully" Jul 10 01:24:52.708928 kubelet[2299]: E0710 01:23:56.734941 2299 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://139.178.70.102:6443/api/v1/namespaces/kube-system/events\": net/http: TLS handshake timeout" event="&Event{ObjectMeta:{kube-apiserver-localhost.1850bef8cf18244d kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-apiserver-localhost,UID:8acd60714a0f0f6f5e038fa659db2909,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Liveness probe failed: Get \"https://139.178.70.102:6443/livez\": context deadline exceeded (Client.Timeout exceeded while awaiting headers),Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-07-10 01:17:10.882755661 +0000 UTC m=+250.751182547,LastTimestamp:2025-07-10 01:17:10.882755661 +0000 UTC m=+250.751182547,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jul 10 01:24:58.183361 kubelet[2299]: I0710 01:23:59.343873 2299 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Jul 10 01:24:58.394126 sshd[6264]: pam_unix(sshd:session): session closed for user core Jul 10 01:24:58.457000 audit[6264]: USER_END pid=6264 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:24:58.486628 kernel: audit: type=1106 audit(1752110698.457:544): pid=6264 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:24:58.489352 kernel: audit: type=1104 audit(1752110698.457:545): pid=6264 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:24:58.489395 kernel: audit: type=1131 audit(1752110698.470:546): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-139.178.70.102:22-139.178.68.195:49440 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 10 01:24:58.457000 audit[6264]: CRED_DISP pid=6264 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:24:58.470000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-139.178.70.102:22-139.178.68.195:49440 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 01:24:58.465967 systemd[1]: sshd@16-139.178.70.102:22-139.178.68.195:49440.service: Deactivated successfully. Jul 10 01:24:58.479259 systemd[1]: session-18.scope: Deactivated successfully. Jul 10 01:24:58.479728 systemd-logind[1351]: Session 18 logged out. Waiting for processes to exit. Jul 10 01:24:58.493161 systemd-logind[1351]: Removed session 18. Jul 10 01:25:00.137510 kubelet[2299]: E0710 01:24:59.643083 2299 kubelet.go:2345] "Skipping pod synchronization" err="PLEG is not healthy: pleg was last seen active 5m42.208300731s ago; threshold is 3m0s" Jul 10 01:25:00.277282 env[1363]: time="2025-07-10T01:25:00.277120663Z" level=info msg="RemovePodSandbox for \"eb3cfca5f1219fd7b024127b39867c352bbcd18cadb3f774b2a9b88ac71868e6\"" Jul 10 01:25:00.277282 env[1363]: time="2025-07-10T01:25:00.277159308Z" level=info msg="Forcibly stopping sandbox \"eb3cfca5f1219fd7b024127b39867c352bbcd18cadb3f774b2a9b88ac71868e6\"" Jul 10 01:25:02.499816 env[1363]: 2025-07-10 01:25:01.085 [WARNING][6347] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="eb3cfca5f1219fd7b024127b39867c352bbcd18cadb3f774b2a9b88ac71868e6" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--5477ff879d--j2p5q-eth0", GenerateName:"calico-kube-controllers-5477ff879d-", Namespace:"calico-system", SelfLink:"", UID:"5f01bcaa-ff1c-4bd5-988b-d3c60c6abdcc", ResourceVersion:"1834", Generation:0, CreationTimestamp:time.Date(2025, time.July, 10, 1, 13, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5477ff879d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"6503e247e079e9b1040ac4f9c23ba0f9f2bd42e5328355dba03928c27dd6e73b", Pod:"calico-kube-controllers-5477ff879d-j2p5q", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calif65d54f8885", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 10 01:25:02.499816 env[1363]: 2025-07-10 01:25:01.197 [INFO][6347] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="eb3cfca5f1219fd7b024127b39867c352bbcd18cadb3f774b2a9b88ac71868e6" Jul 10 01:25:02.499816 env[1363]: 
2025-07-10 01:25:01.200 [INFO][6347] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="eb3cfca5f1219fd7b024127b39867c352bbcd18cadb3f774b2a9b88ac71868e6" iface="eth0" netns="" Jul 10 01:25:02.499816 env[1363]: 2025-07-10 01:25:01.202 [INFO][6347] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="eb3cfca5f1219fd7b024127b39867c352bbcd18cadb3f774b2a9b88ac71868e6" Jul 10 01:25:02.499816 env[1363]: 2025-07-10 01:25:01.202 [INFO][6347] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="eb3cfca5f1219fd7b024127b39867c352bbcd18cadb3f774b2a9b88ac71868e6" Jul 10 01:25:02.499816 env[1363]: 2025-07-10 01:25:02.072 [INFO][6354] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="eb3cfca5f1219fd7b024127b39867c352bbcd18cadb3f774b2a9b88ac71868e6" HandleID="k8s-pod-network.eb3cfca5f1219fd7b024127b39867c352bbcd18cadb3f774b2a9b88ac71868e6" Workload="localhost-k8s-calico--kube--controllers--5477ff879d--j2p5q-eth0" Jul 10 01:25:02.499816 env[1363]: 2025-07-10 01:25:02.118 [INFO][6354] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 10 01:25:02.499816 env[1363]: 2025-07-10 01:25:02.121 [INFO][6354] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 10 01:25:02.499816 env[1363]: 2025-07-10 01:25:02.395 [WARNING][6354] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="eb3cfca5f1219fd7b024127b39867c352bbcd18cadb3f774b2a9b88ac71868e6" HandleID="k8s-pod-network.eb3cfca5f1219fd7b024127b39867c352bbcd18cadb3f774b2a9b88ac71868e6" Workload="localhost-k8s-calico--kube--controllers--5477ff879d--j2p5q-eth0" Jul 10 01:25:02.499816 env[1363]: 2025-07-10 01:25:02.413 [INFO][6354] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="eb3cfca5f1219fd7b024127b39867c352bbcd18cadb3f774b2a9b88ac71868e6" HandleID="k8s-pod-network.eb3cfca5f1219fd7b024127b39867c352bbcd18cadb3f774b2a9b88ac71868e6" Workload="localhost-k8s-calico--kube--controllers--5477ff879d--j2p5q-eth0" Jul 10 01:25:02.499816 env[1363]: 2025-07-10 01:25:02.428 [INFO][6354] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 10 01:25:02.499816 env[1363]: 2025-07-10 01:25:02.463 [INFO][6347] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="eb3cfca5f1219fd7b024127b39867c352bbcd18cadb3f774b2a9b88ac71868e6" Jul 10 01:25:02.499816 env[1363]: time="2025-07-10T01:25:02.491323042Z" level=info msg="TearDown network for sandbox \"eb3cfca5f1219fd7b024127b39867c352bbcd18cadb3f774b2a9b88ac71868e6\" successfully" Jul 10 01:25:02.614292 env[1363]: time="2025-07-10T01:25:02.523380626Z" level=info msg="RemovePodSandbox \"eb3cfca5f1219fd7b024127b39867c352bbcd18cadb3f774b2a9b88ac71868e6\" returns successfully" Jul 10 01:25:03.431000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-139.178.70.102:22-139.178.68.195:48964 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 01:25:03.496290 kernel: audit: type=1130 audit(1752110703.431:547): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-139.178.70.102:22-139.178.68.195:48964 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 01:25:03.427636 systemd[1]: Started sshd@17-139.178.70.102:22-139.178.68.195:48964.service. 
Jul 10 01:25:03.768000 audit[6362]: USER_ACCT pid=6362 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:25:03.816006 kernel: audit: type=1101 audit(1752110703.768:548): pid=6362 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:25:03.816835 kernel: audit: type=1103 audit(1752110703.776:549): pid=6362 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:25:03.818776 kernel: audit: type=1006 audit(1752110703.779:550): pid=6362 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=19 res=1 Jul 10 01:25:03.818803 kernel: audit: type=1300 audit(1752110703.779:550): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffff00f6df0 a2=3 a3=0 items=0 ppid=1 pid=6362 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:25:03.818821 kernel: audit: type=1327 audit(1752110703.779:550): proctitle=737368643A20636F7265205B707269765D Jul 10 01:25:03.776000 audit[6362]: CRED_ACQ pid=6362 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:25:03.779000 audit[6362]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffff00f6df0 a2=3 a3=0 items=0 ppid=1 pid=6362 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:25:03.779000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jul 10 01:25:03.835462 sshd[6362]: Accepted publickey for core from 139.178.68.195 port 48964 ssh2: RSA SHA256:NVpdRDPpwzjVTzi6orhe1cA9BvcYymCSReGH8myOy/Q Jul 10 01:25:03.790217 sshd[6362]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 10 01:25:03.890108 systemd[1]: Started session-19.scope. 
Jul 10 01:25:03.895000 audit[6362]: USER_START pid=6362 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:25:03.905029 kernel: audit: type=1105 audit(1752110703.895:551): pid=6362 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:25:03.905064 kernel: audit: type=1103 audit(1752110703.901:552): pid=6367 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:25:03.901000 audit[6367]: CRED_ACQ pid=6367 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:25:03.891530 systemd-logind[1351]: New session 19 of user core. Jul 10 01:25:26.338045 kubelet[2299]: E0710 01:25:03.320651 2299 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime is down, PLEG is not healthy: pleg was last seen active 6m46.210628558s ago; threshold is 3m0s]" Jul 10 01:25:29.482303 sshd[6362]: pam_unix(sshd:session): session closed for user core Jul 10 01:25:29.614696 kernel: audit: type=1106 audit(1752110729.550:553): pid=6362 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:25:29.617098 kernel: audit: type=1104 audit(1752110729.550:554): pid=6362 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:25:29.617133 kernel: audit: type=1131 audit(1752110729.592:555): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-139.178.70.102:22-139.178.68.195:48964 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 01:25:29.550000 audit[6362]: USER_END pid=6362 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:25:29.550000 audit[6362]: CRED_DISP pid=6362 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:25:29.592000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-139.178.70.102:22-139.178.68.195:48964 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 10 01:25:29.586607 systemd[1]: sshd@17-139.178.70.102:22-139.178.68.195:48964.service: Deactivated successfully. Jul 10 01:25:29.597050 systemd[1]: session-19.scope: Deactivated successfully. Jul 10 01:25:29.597583 systemd-logind[1351]: Session 19 logged out. Waiting for processes to exit. Jul 10 01:25:29.614885 systemd-logind[1351]: Removed session 19. Jul 10 01:25:31.246617 kubelet[2299]: E0710 01:25:30.742386 2299 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"localhost\": Get \"https://139.178.70.102:6443/api/v1/nodes/localhost?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jul 10 01:25:31.589141 kubelet[2299]: E0710 01:25:31.584086 2299 kubelet.go:2345] "Skipping pod synchronization" err="container runtime is down" Jul 10 01:25:33.753186 kubelet[2299]: E0710 01:25:33.749250 2299 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.102:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" interval="200ms" Jul 10 01:25:33.839915 env[1363]: time="2025-07-10T01:25:33.839820210Z" level=info msg="StopPodSandbox for \"14835a1d23b75e28ba1fef0944ee52d74bf2ce2c1e062de723f2121f6c8271e5\"" Jul 10 01:25:34.553000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-139.178.70.102:22-139.178.68.195:34432 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 01:25:34.635570 kernel: audit: type=1130 audit(1752110734.553:556): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-139.178.70.102:22-139.178.68.195:34432 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 01:25:34.550217 systemd[1]: Started sshd@18-139.178.70.102:22-139.178.68.195:34432.service. 
Jul 10 01:25:34.944000 audit[6395]: USER_ACCT pid=6395 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:25:35.030583 kernel: audit: type=1101 audit(1752110734.944:557): pid=6395 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:25:35.048995 kernel: audit: type=1103 audit(1752110734.964:558): pid=6395 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:25:35.055249 kernel: audit: type=1006 audit(1752110734.964:559): pid=6395 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=20 res=1 Jul 10 01:25:35.059543 kernel: audit: type=1300 audit(1752110734.964:559): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd5ff21140 a2=3 a3=0 items=0 ppid=1 pid=6395 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:25:35.059574 kernel: audit: type=1327 audit(1752110734.964:559): proctitle=737368643A20636F7265205B707269765D Jul 10 01:25:34.964000 audit[6395]: CRED_ACQ pid=6395 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:25:34.964000 audit[6395]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd5ff21140 a2=3 a3=0 items=0 ppid=1 pid=6395 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:25:34.964000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jul 10 01:25:35.103710 sshd[6395]: Accepted publickey for core from 139.178.68.195 port 34432 ssh2: RSA SHA256:NVpdRDPpwzjVTzi6orhe1cA9BvcYymCSReGH8myOy/Q Jul 10 01:25:34.981510 sshd[6395]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 10 01:25:35.241533 systemd-logind[1351]: New session 20 of user core. 
Jul 10 01:25:35.305842 kernel: audit: type=1105 audit(1752110735.286:560): pid=6395 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:25:35.286000 audit[6395]: USER_START pid=6395 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:25:35.374000 audit[6399]: CRED_ACQ pid=6399 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:25:35.246390 systemd[1]: Started session-20.scope. Jul 10 01:25:35.403046 kernel: audit: type=1103 audit(1752110735.374:561): pid=6399 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:25:37.607381 env[1363]: 2025-07-10 01:25:36.587 [WARNING][6391] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="14835a1d23b75e28ba1fef0944ee52d74bf2ce2c1e062de723f2121f6c8271e5" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--snhl5-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"3459c244-a1ae-43bc-ad86-239a6e665757", ResourceVersion:"1829", Generation:0, CreationTimestamp:time.Date(2025, time.July, 10, 1, 13, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"47b065192ffd0b7504649af3406bb653c598c34d33430dd9e03fcdcb34aca714", Pod:"coredns-7c65d6cfc9-snhl5", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie0bf60675d7", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 10 01:25:37.607381 env[1363]: 2025-07-10 01:25:36.654 [INFO][6391] cni-plugin/k8s.go 640: Cleaning up netns 
ContainerID="14835a1d23b75e28ba1fef0944ee52d74bf2ce2c1e062de723f2121f6c8271e5" Jul 10 01:25:37.607381 env[1363]: 2025-07-10 01:25:36.655 [INFO][6391] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="14835a1d23b75e28ba1fef0944ee52d74bf2ce2c1e062de723f2121f6c8271e5" iface="eth0" netns="" Jul 10 01:25:37.607381 env[1363]: 2025-07-10 01:25:36.656 [INFO][6391] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="14835a1d23b75e28ba1fef0944ee52d74bf2ce2c1e062de723f2121f6c8271e5" Jul 10 01:25:37.607381 env[1363]: 2025-07-10 01:25:36.656 [INFO][6391] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="14835a1d23b75e28ba1fef0944ee52d74bf2ce2c1e062de723f2121f6c8271e5" Jul 10 01:25:37.607381 env[1363]: 2025-07-10 01:25:37.223 [INFO][6406] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="14835a1d23b75e28ba1fef0944ee52d74bf2ce2c1e062de723f2121f6c8271e5" HandleID="k8s-pod-network.14835a1d23b75e28ba1fef0944ee52d74bf2ce2c1e062de723f2121f6c8271e5" Workload="localhost-k8s-coredns--7c65d6cfc9--snhl5-eth0" Jul 10 01:25:37.607381 env[1363]: 2025-07-10 01:25:37.241 [INFO][6406] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 10 01:25:37.607381 env[1363]: 2025-07-10 01:25:37.245 [INFO][6406] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 10 01:25:37.607381 env[1363]: 2025-07-10 01:25:37.501 [WARNING][6406] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="14835a1d23b75e28ba1fef0944ee52d74bf2ce2c1e062de723f2121f6c8271e5" HandleID="k8s-pod-network.14835a1d23b75e28ba1fef0944ee52d74bf2ce2c1e062de723f2121f6c8271e5" Workload="localhost-k8s-coredns--7c65d6cfc9--snhl5-eth0" Jul 10 01:25:37.607381 env[1363]: 2025-07-10 01:25:37.522 [INFO][6406] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="14835a1d23b75e28ba1fef0944ee52d74bf2ce2c1e062de723f2121f6c8271e5" HandleID="k8s-pod-network.14835a1d23b75e28ba1fef0944ee52d74bf2ce2c1e062de723f2121f6c8271e5" Workload="localhost-k8s-coredns--7c65d6cfc9--snhl5-eth0" Jul 10 01:25:37.607381 env[1363]: 2025-07-10 01:25:37.542 [INFO][6406] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 10 01:25:37.607381 env[1363]: 2025-07-10 01:25:37.571 [INFO][6391] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="14835a1d23b75e28ba1fef0944ee52d74bf2ce2c1e062de723f2121f6c8271e5" Jul 10 01:25:37.607381 env[1363]: time="2025-07-10T01:25:37.600208945Z" level=info msg="TearDown network for sandbox \"14835a1d23b75e28ba1fef0944ee52d74bf2ce2c1e062de723f2121f6c8271e5\" successfully" Jul 10 01:25:37.607381 env[1363]: time="2025-07-10T01:25:37.600231490Z" level=info msg="StopPodSandbox for \"14835a1d23b75e28ba1fef0944ee52d74bf2ce2c1e062de723f2121f6c8271e5\" returns successfully" Jul 10 01:26:28.860651 sshd[6395]: pam_unix(sshd:session): session closed for user core Jul 10 01:26:28.996448 kernel: audit: type=1106 audit(1752110788.923:562): pid=6395 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:26:29.003699 kernel: audit: type=1104 audit(1752110788.930:563): pid=6395 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:26:29.003737 kernel: audit: type=1131 audit(1752110788.943:564): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-139.178.70.102:22-139.178.68.195:34432 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 01:26:28.923000 audit[6395]: USER_END pid=6395 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:26:28.930000 audit[6395]: CRED_DISP pid=6395 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:26:28.943000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-139.178.70.102:22-139.178.68.195:34432 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 01:26:28.939700 systemd[1]: sshd@18-139.178.70.102:22-139.178.68.195:34432.service: Deactivated successfully. Jul 10 01:26:28.945799 systemd[1]: session-20.scope: Deactivated successfully. Jul 10 01:26:28.946153 systemd-logind[1351]: Session 20 logged out. Waiting for processes to exit. Jul 10 01:26:28.977829 systemd-logind[1351]: Removed session 20. 
Jul 10 01:26:30.690483 kubelet[2299]: W0710 01:26:30.132621 2299 watcher.go:93] Error while processing event ("/sys/fs/cgroup/pids/user.slice/user-500.slice/session-20.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/pids/user.slice/user-500.slice/session-20.scope: no such file or directory Jul 10 01:26:31.992827 kubelet[2299]: E0710 01:26:31.985481 2299 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"localhost\": Get \"https://139.178.70.102:6443/api/v1/nodes/localhost?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jul 10 01:26:32.420613 kubelet[2299]: E0710 01:26:32.420584 2299 kubelet_node_status.go:522] "Unable to update node status" err="update node status exceeds retry count" Jul 10 01:26:32.588370 kubelet[2299]: E0710 01:26:32.586634 2299 kubelet.go:2345] "Skipping pod synchronization" err="container runtime is down" Jul 10 01:26:32.596081 kubelet[2299]: E0710 01:26:32.310768 2299 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.102:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="400ms" Jul 10 01:26:32.773829 systemd[1]: run-containerd-runc-k8s.io-f9259ae361e3731af85557e5b9606bd5ebae0bba6b9af22c45cecaaa08d4539c-runc.7zfdlm.mount: Deactivated successfully. Jul 10 01:26:33.764129 systemd[1]: run-containerd-runc-k8s.io-0a7b9b0ea47aa889b6d5597d41d9f5ecf3ccc392f2d5f74cd7be134b392cec28-runc.5B9JqD.mount: Deactivated successfully. Jul 10 01:26:33.903442 kernel: audit: type=1130 audit(1752110793.890:565): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-139.178.70.102:22-139.178.68.195:58318 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 01:26:33.890000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-139.178.70.102:22-139.178.68.195:58318 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 01:26:33.885734 systemd[1]: Started sshd@19-139.178.70.102:22-139.178.68.195:58318.service. 
Jul 10 01:26:34.664000 audit[6546]: USER_ACCT pid=6546 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:26:34.785725 kernel: audit: type=1101 audit(1752110794.664:566): pid=6546 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:26:34.795084 kernel: audit: type=1103 audit(1752110794.755:567): pid=6546 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:26:34.795126 kernel: audit: type=1006 audit(1752110794.760:568): pid=6546 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=21 res=1 Jul 10 01:26:34.797170 kernel: audit: type=1300 audit(1752110794.760:568): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fffddc7d3b0 a2=3 a3=0 items=0 ppid=1 pid=6546 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=21 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:26:34.798687 kernel: audit: type=1327 audit(1752110794.760:568): proctitle=737368643A20636F7265205B707269765D Jul 10 01:26:34.755000 audit[6546]: CRED_ACQ pid=6546 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:26:34.760000 audit[6546]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fffddc7d3b0 a2=3 a3=0 items=0 ppid=1 pid=6546 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=21 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:26:34.760000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jul 10 01:26:34.842102 sshd[6546]: Accepted publickey for core from 139.178.68.195 port 58318 ssh2: RSA SHA256:NVpdRDPpwzjVTzi6orhe1cA9BvcYymCSReGH8myOy/Q Jul 10 01:26:34.789960 sshd[6546]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 10 01:26:35.115191 systemd-logind[1351]: New session 21 of user core. Jul 10 01:26:35.129085 systemd[1]: Started session-21.scope. 
Jul 10 01:26:35.320960 kernel: audit: type=1105 audit(1752110795.269:569): pid=6546 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:26:35.269000 audit[6546]: USER_START pid=6546 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:26:35.415523 kernel: audit: type=1103 audit(1752110795.383:570): pid=6549 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:26:35.383000 audit[6549]: CRED_ACQ pid=6549 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:26:57.187691 sshd[6546]: pam_unix(sshd:session): session closed for user core Jul 10 01:26:57.304606 kernel: audit: type=1106 audit(1752110817.243:571): pid=6546 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:26:57.308273 kernel: audit: type=1104 audit(1752110817.243:572): pid=6546 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:26:57.308305 kernel: audit: type=1131 audit(1752110817.264:573): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-139.178.70.102:22-139.178.68.195:58318 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 01:26:57.243000 audit[6546]: USER_END pid=6546 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:26:57.243000 audit[6546]: CRED_DISP pid=6546 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:26:57.264000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-139.178.70.102:22-139.178.68.195:58318 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 01:26:57.264736 systemd[1]: Starting systemd-tmpfiles-clean.service... Jul 10 01:26:57.265057 systemd[1]: sshd@19-139.178.70.102:22-139.178.68.195:58318.service: Deactivated successfully. Jul 10 01:26:57.270197 systemd[1]: session-21.scope: Deactivated successfully. 
Jul 10 01:26:57.270211 systemd-logind[1351]: Session 21 logged out. Waiting for processes to exit. Jul 10 01:26:57.271359 systemd-logind[1351]: Removed session 21. Jul 10 01:26:57.436791 systemd-tmpfiles[6559]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Jul 10 01:26:57.452964 systemd-tmpfiles[6559]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jul 10 01:26:57.467670 systemd-tmpfiles[6559]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jul 10 01:26:57.592557 systemd[1]: systemd-tmpfiles-clean.service: Deactivated successfully. Jul 10 01:26:57.592730 systemd[1]: Finished systemd-tmpfiles-clean.service. Jul 10 01:26:57.592000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-clean comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 01:26:57.592000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-clean comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 01:26:57.598201 kernel: audit: type=1130 audit(1752110817.592:574): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-clean comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 01:26:57.598252 kernel: audit: type=1131 audit(1752110817.592:575): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-clean comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 01:26:57.600498 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dclean.service.mount: Deactivated successfully. Jul 10 01:26:58.680812 kubelet[2299]: E0710 01:26:58.378787 2299 kubelet.go:2345] "Skipping pod synchronization" err="container runtime is down" Jul 10 01:27:01.138879 kubelet[2299]: E0710 01:27:01.133796 2299 kubelet.go:2345] "Skipping pod synchronization" err="PLEG is not healthy: pleg was last seen active 3m7.414813741s ago; threshold is 3m0s" Jul 10 01:27:01.412524 kubelet[2299]: W0710 01:26:59.547308 2299 watcher.go:93] Error while processing event ("/sys/fs/cgroup/cpu,cpuacct/system.slice/systemd-tmpfiles-clean.service": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/cpu,cpuacct/system.slice/systemd-tmpfiles-clean.service: no such file or directory Jul 10 01:27:02.239410 systemd[1]: Started sshd@20-139.178.70.102:22-139.178.68.195:46048.service. Jul 10 01:27:02.319165 kernel: audit: type=1130 audit(1752110822.241:576): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-139.178.70.102:22-139.178.68.195:46048 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 01:27:02.241000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-139.178.70.102:22-139.178.68.195:46048 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 10 01:27:02.583000 audit[6567]: USER_ACCT pid=6567 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:27:02.607740 kernel: audit: type=1101 audit(1752110822.583:577): pid=6567 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:27:02.610811 kernel: audit: type=1103 audit(1752110822.589:578): pid=6567 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:27:02.610839 kernel: audit: type=1006 audit(1752110822.589:579): pid=6567 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=22 res=1 Jul 10 01:27:02.610863 kernel: audit: type=1300 audit(1752110822.589:579): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fffd0d18270 a2=3 a3=0 items=0 ppid=1 pid=6567 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:27:02.610892 kernel: audit: type=1327 audit(1752110822.589:579): proctitle=737368643A20636F7265205B707269765D Jul 10 01:27:02.589000 audit[6567]: CRED_ACQ pid=6567 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:27:02.589000 audit[6567]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fffd0d18270 a2=3 a3=0 items=0 ppid=1 pid=6567 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:27:02.589000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jul 10 01:27:02.637401 sshd[6567]: Accepted publickey for core from 139.178.68.195 port 46048 ssh2: RSA SHA256:NVpdRDPpwzjVTzi6orhe1cA9BvcYymCSReGH8myOy/Q Jul 10 01:27:02.596730 sshd[6567]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 10 01:27:02.715273 systemd-logind[1351]: New session 22 of user core. 
Jul 10 01:27:02.734474 kernel: audit: type=1105 audit(1752110822.725:580): pid=6567 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:27:02.734523 kernel: audit: type=1103 audit(1752110822.730:581): pid=6570 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:27:02.725000 audit[6567]: USER_START pid=6567 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:27:02.730000 audit[6570]: CRED_ACQ pid=6570 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:27:02.717878 systemd[1]: Started session-22.scope. Jul 10 01:27:35.775082 sshd[6567]: pam_unix(sshd:session): session closed for user core Jul 10 01:27:35.892368 kernel: audit: type=1106 audit(1752110855.828:582): pid=6567 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:27:35.894585 kernel: audit: type=1104 audit(1752110855.829:583): pid=6567 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:27:35.895478 kernel: audit: type=1131 audit(1752110855.840:584): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-139.178.70.102:22-139.178.68.195:46048 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 01:27:35.828000 audit[6567]: USER_END pid=6567 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:27:35.829000 audit[6567]: CRED_DISP pid=6567 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:27:35.840000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-139.178.70.102:22-139.178.68.195:46048 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 01:27:35.838368 systemd[1]: sshd@20-139.178.70.102:22-139.178.68.195:46048.service: Deactivated successfully. Jul 10 01:27:35.847564 systemd[1]: session-22.scope: Deactivated successfully. 
Jul 10 01:27:35.848167 systemd-logind[1351]: Session 22 logged out. Waiting for processes to exit. Jul 10 01:27:35.867946 systemd-logind[1351]: Removed session 22. Jul 10 01:27:38.853907 kubelet[2299]: W0710 01:27:38.538668 2299 watcher.go:93] Error while processing event ("/sys/fs/cgroup/blkio/system.slice/systemd-tmpfiles-clean.service": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/blkio/system.slice/systemd-tmpfiles-clean.service: no such file or directory Jul 10 01:27:38.853907 kubelet[2299]: E0710 01:27:38.858042 2299 kubelet.go:2345] "Skipping pod synchronization" err="PLEG is not healthy: pleg was last seen active 3m10.074946841s ago; threshold is 3m0s" Jul 10 01:27:38.998058 kubelet[2299]: W0710 01:27:38.998009 2299 watcher.go:93] Error while processing event ("/sys/fs/cgroup/memory/system.slice/systemd-tmpfiles-clean.service": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/memory/system.slice/systemd-tmpfiles-clean.service: no such file or directory Jul 10 01:27:38.999256 kubelet[2299]: W0710 01:27:38.998063 2299 watcher.go:93] Error while processing event ("/sys/fs/cgroup/devices/system.slice/systemd-tmpfiles-clean.service": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/devices/system.slice/systemd-tmpfiles-clean.service: no such file or directory Jul 10 01:27:38.999256 kubelet[2299]: W0710 01:27:38.998085 2299 watcher.go:93] Error while processing event ("/sys/fs/cgroup/pids/system.slice/systemd-tmpfiles-clean.service": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/pids/system.slice/systemd-tmpfiles-clean.service: no such file or directory Jul 10 01:27:39.342608 env[1363]: time="2025-07-10T01:27:39.342578963Z" level=info msg="RemovePodSandbox for \"14835a1d23b75e28ba1fef0944ee52d74bf2ce2c1e062de723f2121f6c8271e5\"" Jul 10 01:27:39.373704 env[1363]: time="2025-07-10T01:27:39.342610958Z" level=info msg="Forcibly stopping sandbox \"14835a1d23b75e28ba1fef0944ee52d74bf2ce2c1e062de723f2121f6c8271e5\"" Jul 10 01:27:39.544239 systemd[1]: run-containerd-runc-k8s.io-f9259ae361e3731af85557e5b9606bd5ebae0bba6b9af22c45cecaaa08d4539c-runc.raxwX2.mount: Deactivated successfully. Jul 10 01:27:40.905023 systemd[1]: Started sshd@21-139.178.70.102:22-139.178.68.195:54674.service. Jul 10 01:27:41.048345 kernel: audit: type=1130 audit(1752110860.909:585): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-139.178.70.102:22-139.178.68.195:54674 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 01:27:40.909000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-139.178.70.102:22-139.178.68.195:54674 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 10 01:27:42.175000 audit[6709]: USER_ACCT pid=6709 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:27:42.338299 kernel: audit: type=1101 audit(1752110862.175:586): pid=6709 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:27:42.362328 kernel: audit: type=1103 audit(1752110862.268:587): pid=6709 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:27:42.363706 kernel: audit: type=1006 audit(1752110862.273:588): pid=6709 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=23 res=1 Jul 10 01:27:42.365447 kernel: audit: type=1300 audit(1752110862.273:588): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe3f244f20 a2=3 a3=0 items=0 ppid=1 pid=6709 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:27:42.365476 kernel: audit: type=1327 audit(1752110862.273:588): proctitle=737368643A20636F7265205B707269765D Jul 10 01:27:42.268000 audit[6709]: CRED_ACQ pid=6709 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:27:42.273000 audit[6709]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe3f244f20 a2=3 a3=0 items=0 ppid=1 pid=6709 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:27:42.273000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jul 10 01:27:42.431967 sshd[6709]: Accepted publickey for core from 139.178.68.195 port 54674 ssh2: RSA SHA256:NVpdRDPpwzjVTzi6orhe1cA9BvcYymCSReGH8myOy/Q Jul 10 01:27:42.302052 sshd[6709]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 10 01:27:42.516066 systemd-logind[1351]: New session 23 of user core. Jul 10 01:27:42.520854 systemd[1]: Started session-23.scope. 
Jul 10 01:27:42.582000 audit[6709]: USER_START pid=6709 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:27:42.599314 kernel: audit: type=1105 audit(1752110862.582:589): pid=6709 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:27:42.627000 audit[6716]: CRED_ACQ pid=6716 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:27:42.635020 kernel: audit: type=1103 audit(1752110862.627:590): pid=6716 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:27:47.658364 env[1363]: 2025-07-10 01:27:45.595 [WARNING][6652] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="14835a1d23b75e28ba1fef0944ee52d74bf2ce2c1e062de723f2121f6c8271e5" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--snhl5-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"3459c244-a1ae-43bc-ad86-239a6e665757", ResourceVersion:"1829", Generation:0, CreationTimestamp:time.Date(2025, time.July, 10, 1, 13, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"47b065192ffd0b7504649af3406bb653c598c34d33430dd9e03fcdcb34aca714", Pod:"coredns-7c65d6cfc9-snhl5", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie0bf60675d7", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 10 01:27:47.658364 env[1363]: 2025-07-10 01:27:45.648 [INFO][6652] cni-plugin/k8s.go 640: Cleaning up netns 
ContainerID="14835a1d23b75e28ba1fef0944ee52d74bf2ce2c1e062de723f2121f6c8271e5" Jul 10 01:27:47.658364 env[1363]: 2025-07-10 01:27:45.650 [INFO][6652] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="14835a1d23b75e28ba1fef0944ee52d74bf2ce2c1e062de723f2121f6c8271e5" iface="eth0" netns="" Jul 10 01:27:47.658364 env[1363]: 2025-07-10 01:27:45.653 [INFO][6652] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="14835a1d23b75e28ba1fef0944ee52d74bf2ce2c1e062de723f2121f6c8271e5" Jul 10 01:27:47.658364 env[1363]: 2025-07-10 01:27:45.653 [INFO][6652] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="14835a1d23b75e28ba1fef0944ee52d74bf2ce2c1e062de723f2121f6c8271e5" Jul 10 01:27:47.658364 env[1363]: 2025-07-10 01:27:47.197 [INFO][6738] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="14835a1d23b75e28ba1fef0944ee52d74bf2ce2c1e062de723f2121f6c8271e5" HandleID="k8s-pod-network.14835a1d23b75e28ba1fef0944ee52d74bf2ce2c1e062de723f2121f6c8271e5" Workload="localhost-k8s-coredns--7c65d6cfc9--snhl5-eth0" Jul 10 01:27:47.658364 env[1363]: 2025-07-10 01:27:47.241 [INFO][6738] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 10 01:27:47.658364 env[1363]: 2025-07-10 01:27:47.244 [INFO][6738] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 10 01:27:47.658364 env[1363]: 2025-07-10 01:27:47.558 [WARNING][6738] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="14835a1d23b75e28ba1fef0944ee52d74bf2ce2c1e062de723f2121f6c8271e5" HandleID="k8s-pod-network.14835a1d23b75e28ba1fef0944ee52d74bf2ce2c1e062de723f2121f6c8271e5" Workload="localhost-k8s-coredns--7c65d6cfc9--snhl5-eth0" Jul 10 01:27:47.658364 env[1363]: 2025-07-10 01:27:47.578 [INFO][6738] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="14835a1d23b75e28ba1fef0944ee52d74bf2ce2c1e062de723f2121f6c8271e5" HandleID="k8s-pod-network.14835a1d23b75e28ba1fef0944ee52d74bf2ce2c1e062de723f2121f6c8271e5" Workload="localhost-k8s-coredns--7c65d6cfc9--snhl5-eth0" Jul 10 01:27:47.658364 env[1363]: 2025-07-10 01:27:47.587 [INFO][6738] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 10 01:27:47.658364 env[1363]: 2025-07-10 01:27:47.623 [INFO][6652] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="14835a1d23b75e28ba1fef0944ee52d74bf2ce2c1e062de723f2121f6c8271e5" Jul 10 01:27:47.658364 env[1363]: time="2025-07-10T01:27:47.650271199Z" level=info msg="TearDown network for sandbox \"14835a1d23b75e28ba1fef0944ee52d74bf2ce2c1e062de723f2121f6c8271e5\" successfully" Jul 10 01:27:47.763775 env[1363]: time="2025-07-10T01:27:47.669177078Z" level=info msg="RemovePodSandbox \"14835a1d23b75e28ba1fef0944ee52d74bf2ce2c1e062de723f2121f6c8271e5\" returns successfully" Jul 10 01:28:23.886884 kubelet[2299]: W0710 01:27:48.810325 2299 watcher.go:93] Error while processing event ("/sys/fs/cgroup/memory/system.slice/system-sshd.slice/sshd@20-139.178.70.102:22-139.178.68.195:46048.service": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/memory/system.slice/system-sshd.slice/sshd@20-139.178.70.102:22-139.178.68.195:46048.service: no such file or directory Jul 10 01:28:26.396137 sshd[6709]: pam_unix(sshd:session): session closed for user core Jul 10 01:28:26.495739 kernel: audit: type=1106 audit(1752110906.443:591): pid=6709 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:28:26.500282 kernel: audit: type=1104 audit(1752110906.443:592): pid=6709 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:28:26.500320 kernel: audit: type=1131 audit(1752110906.454:593): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-139.178.70.102:22-139.178.68.195:54674 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 01:28:26.443000 audit[6709]: USER_END pid=6709 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:28:26.443000 audit[6709]: CRED_DISP pid=6709 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:28:26.454000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-139.178.70.102:22-139.178.68.195:54674 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 01:28:26.451650 systemd[1]: sshd@21-139.178.70.102:22-139.178.68.195:54674.service: Deactivated successfully. Jul 10 01:28:26.461857 systemd[1]: session-23.scope: Deactivated successfully. Jul 10 01:28:26.462102 systemd-logind[1351]: Session 23 logged out. Waiting for processes to exit. Jul 10 01:28:26.482098 systemd-logind[1351]: Removed session 23. 
Jul 10 01:28:27.408285 kubelet[2299]: W0710 01:28:27.405738 2299 watcher.go:93] Error while processing event ("/sys/fs/cgroup/pids/system.slice/system-sshd.slice/sshd@20-139.178.70.102:22-139.178.68.195:46048.service": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/pids/system.slice/system-sshd.slice/sshd@20-139.178.70.102:22-139.178.68.195:46048.service: no such file or directory Jul 10 01:28:27.408285 kubelet[2299]: W0710 01:28:27.408417 2299 watcher.go:93] Error while processing event ("/sys/fs/cgroup/memory/user.slice/user-500.slice/session-22.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/memory/user.slice/user-500.slice/session-22.scope: no such file or directory Jul 10 01:28:27.408285 kubelet[2299]: W0710 01:28:27.408465 2299 watcher.go:93] Error while processing event ("/sys/fs/cgroup/pids/user.slice/user-500.slice/session-22.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/pids/user.slice/user-500.slice/session-22.scope: no such file or directory Jul 10 01:28:27.408285 kubelet[2299]: W0710 01:28:27.408491 2299 watcher.go:93] Error while processing event ("/sys/fs/cgroup/memory/system.slice/system-sshd.slice/sshd@21-139.178.70.102:22-139.178.68.195:54674.service": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/memory/system.slice/system-sshd.slice/sshd@21-139.178.70.102:22-139.178.68.195:54674.service: no such file or directory Jul 10 01:28:27.408285 kubelet[2299]: W0710 01:28:27.408502 2299 watcher.go:93] Error while processing event ("/sys/fs/cgroup/pids/system.slice/system-sshd.slice/sshd@21-139.178.70.102:22-139.178.68.195:54674.service": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/pids/system.slice/system-sshd.slice/sshd@21-139.178.70.102:22-139.178.68.195:54674.service: no such file or directory Jul 10 01:28:27.408285 kubelet[2299]: W0710 01:28:27.408512 2299 watcher.go:93] Error while processing event ("/sys/fs/cgroup/memory/user.slice/user-500.slice/session-23.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/memory/user.slice/user-500.slice/session-23.scope: no such file or directory Jul 10 01:28:27.408285 kubelet[2299]: W0710 01:28:27.408519 2299 watcher.go:93] Error while processing event ("/sys/fs/cgroup/pids/user.slice/user-500.slice/session-23.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/pids/user.slice/user-500.slice/session-23.scope: no such file or directory Jul 10 01:28:29.524547 kubelet[2299]: E0710 01:28:27.637791 2299 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime is down, PLEG is not healthy: pleg was last seen active 3m56.918318633s ago; threshold is 3m0s]" Jul 10 01:28:31.460282 systemd[1]: Started sshd@22-139.178.70.102:22-139.178.68.195:49118.service. Jul 10 01:28:31.546407 kernel: audit: type=1130 audit(1752110911.461:594): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-139.178.70.102:22-139.178.68.195:49118 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 01:28:31.461000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-139.178.70.102:22-139.178.68.195:49118 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 10 01:28:31.866154 sshd[6762]: Accepted publickey for core from 139.178.68.195 port 49118 ssh2: RSA SHA256:NVpdRDPpwzjVTzi6orhe1cA9BvcYymCSReGH8myOy/Q Jul 10 01:28:31.922728 kernel: audit: type=1101 audit(1752110911.865:595): pid=6762 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:28:31.925557 kernel: audit: type=1103 audit(1752110911.871:596): pid=6762 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:28:31.925593 kernel: audit: type=1006 audit(1752110911.871:597): pid=6762 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=24 res=1 Jul 10 01:28:31.925649 kernel: audit: type=1300 audit(1752110911.871:597): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff8ca9e890 a2=3 a3=0 items=0 ppid=1 pid=6762 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:28:31.927203 kernel: audit: type=1327 audit(1752110911.871:597): proctitle=737368643A20636F7265205B707269765D Jul 10 01:28:31.865000 audit[6762]: USER_ACCT pid=6762 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:28:31.871000 audit[6762]: CRED_ACQ pid=6762 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:28:31.871000 audit[6762]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff8ca9e890 a2=3 a3=0 items=0 ppid=1 pid=6762 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:28:31.871000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jul 10 01:28:31.887623 sshd[6762]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 10 01:28:32.015864 systemd-logind[1351]: New session 24 of user core. Jul 10 01:28:32.019296 systemd[1]: Started session-24.scope. 
Jul 10 01:28:32.028000 audit[6762]: USER_START pid=6762 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:28:32.033975 kernel: audit: type=1105 audit(1752110912.028:598): pid=6762 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:28:32.035000 audit[6765]: CRED_ACQ pid=6765 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:28:32.039742 kernel: audit: type=1103 audit(1752110912.035:599): pid=6765 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:29:33.280026 sshd[6762]: pam_unix(sshd:session): session closed for user core Jul 10 01:29:33.412984 kernel: audit: type=1106 audit(1752110973.330:600): pid=6762 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:29:33.419100 kernel: audit: type=1104 audit(1752110973.330:601): pid=6762 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:29:33.419132 kernel: audit: type=1131 audit(1752110973.344:602): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-139.178.70.102:22-139.178.68.195:49118 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 01:29:33.330000 audit[6762]: USER_END pid=6762 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:29:33.330000 audit[6762]: CRED_DISP pid=6762 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:29:33.344000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-139.178.70.102:22-139.178.68.195:49118 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 01:29:33.342735 systemd[1]: sshd@22-139.178.70.102:22-139.178.68.195:49118.service: Deactivated successfully. Jul 10 01:29:33.349879 systemd[1]: session-24.scope: Deactivated successfully. Jul 10 01:29:33.350276 systemd-logind[1351]: Session 24 logged out. Waiting for processes to exit. 
Jul 10 01:29:33.369233 systemd-logind[1351]: Removed session 24. Jul 10 01:29:38.443996 kernel: audit: type=1130 audit(1752110978.435:603): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-139.178.70.102:22-139.178.68.195:59358 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 01:29:38.435000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-139.178.70.102:22-139.178.68.195:59358 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 01:29:38.429432 systemd[1]: Started sshd@23-139.178.70.102:22-139.178.68.195:59358.service. Jul 10 01:29:38.976901 kernel: audit: type=1101 audit(1752110978.972:604): pid=6826 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:29:39.031713 kernel: audit: type=1103 audit(1752110979.008:605): pid=6826 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:29:39.033938 kernel: audit: type=1006 audit(1752110979.008:606): pid=6826 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=25 res=1 Jul 10 01:29:39.035212 kernel: audit: type=1300 audit(1752110979.008:606): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc04153c60 a2=3 a3=0 items=0 ppid=1 pid=6826 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:29:39.038615 kernel: audit: type=1327 audit(1752110979.008:606): proctitle=737368643A20636F7265205B707269765D Jul 10 01:29:38.972000 audit[6826]: USER_ACCT pid=6826 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:29:39.008000 audit[6826]: CRED_ACQ pid=6826 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:29:39.008000 audit[6826]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc04153c60 a2=3 a3=0 items=0 ppid=1 pid=6826 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:29:39.008000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jul 10 01:29:39.104714 sshd[6826]: Accepted publickey for core from 139.178.68.195 port 59358 ssh2: RSA SHA256:NVpdRDPpwzjVTzi6orhe1cA9BvcYymCSReGH8myOy/Q Jul 10 01:29:39.031500 sshd[6826]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 10 01:29:39.253792 systemd-logind[1351]: New session 25 of user core. 
Jul 10 01:29:39.298890 kernel: audit: type=1105 audit(1752110979.288:607): pid=6826 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:29:39.288000 audit[6826]: USER_START pid=6826 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:29:39.368826 kernel: audit: type=1103 audit(1752110979.335:608): pid=6829 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:29:39.335000 audit[6829]: CRED_ACQ pid=6829 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:29:39.256804 systemd[1]: Started session-25.scope. Jul 10 01:30:25.130337 sshd[6826]: pam_unix(sshd:session): session closed for user core Jul 10 01:30:25.256896 kernel: audit: type=1106 audit(1752111025.202:609): pid=6826 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:30:25.259563 kernel: audit: type=1104 audit(1752111025.203:610): pid=6826 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:30:25.259607 kernel: audit: type=1131 audit(1752111025.216:611): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-139.178.70.102:22-139.178.68.195:59358 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 01:30:25.202000 audit[6826]: USER_END pid=6826 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:30:25.203000 audit[6826]: CRED_DISP pid=6826 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:30:25.216000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-139.178.70.102:22-139.178.68.195:59358 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 01:30:25.213100 systemd[1]: sshd@23-139.178.70.102:22-139.178.68.195:59358.service: Deactivated successfully. Jul 10 01:30:25.222189 systemd[1]: session-25.scope: Deactivated successfully. 
Jul 10 01:30:25.222538 systemd-logind[1351]: Session 25 logged out. Waiting for processes to exit. Jul 10 01:30:25.247954 systemd-logind[1351]: Removed session 25. Jul 10 01:30:26.879479 kubelet[2299]: W0710 01:29:53.304049 2299 watcher.go:93] Error while processing event ("/sys/fs/cgroup/memory/system.slice/system-sshd.slice/sshd@22-139.178.70.102:22-139.178.68.195:49118.service": 0x40000100 == IN_CREATE|IN_ISDIR): readdirent /sys/fs/cgroup/memory/system.slice/system-sshd.slice/sshd@22-139.178.70.102:22-139.178.68.195:49118.service: no such file or directory Jul 10 01:30:27.876649 kubelet[2299]: E0710 01:30:26.686200 2299 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime is down, PLEG is not healthy: pleg was last seen active 4m53.201778175s ago; threshold is 3m0s]" Jul 10 01:30:30.219012 systemd[1]: Started sshd@24-139.178.70.102:22-139.178.68.195:46150.service. Jul 10 01:30:30.329066 kernel: audit: type=1130 audit(1752111030.221:612): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-139.178.70.102:22-139.178.68.195:46150 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 01:30:30.221000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-139.178.70.102:22-139.178.68.195:46150 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 01:30:31.031000 audit[6858]: USER_ACCT pid=6858 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:30:31.210464 kernel: audit: type=1101 audit(1752111031.031:613): pid=6858 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:30:31.234740 kernel: audit: type=1103 audit(1752111031.189:614): pid=6858 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:30:31.234794 kernel: audit: type=1006 audit(1752111031.197:615): pid=6858 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=26 res=1 Jul 10 01:30:31.237354 kernel: audit: type=1300 audit(1752111031.197:615): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc1abb2130 a2=3 a3=0 items=0 ppid=1 pid=6858 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=26 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:30:31.239892 kernel: audit: type=1327 audit(1752111031.197:615): proctitle=737368643A20636F7265205B707269765D Jul 10 01:30:31.189000 audit[6858]: CRED_ACQ pid=6858 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:30:31.197000 audit[6858]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc1abb2130 a2=3 a3=0 items=0 ppid=1 pid=6858 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 
egid=0 sgid=0 fsgid=0 tty=(none) ses=26 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:30:31.197000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jul 10 01:30:31.290720 sshd[6858]: Accepted publickey for core from 139.178.68.195 port 46150 ssh2: RSA SHA256:NVpdRDPpwzjVTzi6orhe1cA9BvcYymCSReGH8myOy/Q Jul 10 01:30:31.221336 sshd[6858]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 10 01:30:31.429857 systemd-logind[1351]: New session 26 of user core. Jul 10 01:30:31.436413 systemd[1]: Started session-26.scope. Jul 10 01:30:31.482000 audit[6858]: USER_START pid=6858 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:30:31.501860 kernel: audit: type=1105 audit(1752111031.482:616): pid=6858 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:30:31.529000 audit[6861]: CRED_ACQ pid=6861 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:30:31.546470 kernel: audit: type=1103 audit(1752111031.529:617): pid=6861 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:31:16.634377 sshd[6858]: pam_unix(sshd:session): session closed for user core Jul 10 01:31:16.764593 kernel: audit: type=1106 audit(1752111076.696:618): pid=6858 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:31:16.768651 kernel: audit: type=1104 audit(1752111076.696:619): pid=6858 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:31:16.768686 kernel: audit: type=1131 audit(1752111076.707:620): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-139.178.70.102:22-139.178.68.195:46150 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 10 01:31:16.696000 audit[6858]: USER_END pid=6858 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:31:16.696000 audit[6858]: CRED_DISP pid=6858 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:31:16.707000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-139.178.70.102:22-139.178.68.195:46150 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 01:31:16.705801 systemd[1]: sshd@24-139.178.70.102:22-139.178.68.195:46150.service: Deactivated successfully. Jul 10 01:31:16.713006 systemd[1]: session-26.scope: Deactivated successfully. Jul 10 01:31:16.713041 systemd-logind[1351]: Session 26 logged out. Waiting for processes to exit. Jul 10 01:31:16.736550 systemd-logind[1351]: Removed session 26. Jul 10 01:31:17.772457 kubelet[2299]: W0710 01:31:17.758571 2299 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.RuntimeClass ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jul 10 01:31:18.054877 kubelet[2299]: E0710 01:30:27.853409 2299 log.go:32] "ListImages with filter from image service failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" filter="nil" Jul 10 01:31:18.162667 kubelet[2299]: W0710 01:31:18.162623 2299 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.Node ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jul 10 01:31:18.446517 kubelet[2299]: W0710 01:31:17.487588 2299 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.Service ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jul 10 01:31:20.828569 kubelet[2299]: E0710 01:31:20.002844 2299 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime is down, PLEG is not healthy: pleg was last seen active 6m47.048890476s ago; threshold is 3m0s]" Jul 10 01:31:20.980998 env[1363]: time="2025-07-10T01:31:20.980871989Z" level=info msg="StopPodSandbox for \"3e37249528bb3e0be92befd65b6647a34c4c854d8942b3cdda871096eeadbddb\"" Jul 10 01:31:21.005142 env[1363]: time="2025-07-10T01:31:21.004928142Z" level=error msg="get state for 0a7b9b0ea47aa889b6d5597d41d9f5ecf3ccc392f2d5f74cd7be134b392cec28" error="context deadline exceeded: unknown" Jul 10 01:31:21.005142 env[1363]: time="2025-07-10T01:31:21.004963405Z" level=warning msg="unknown status" status=0 Jul 10 01:31:21.023353 env[1363]: time="2025-07-10T01:31:21.023316561Z" level=error msg="ExecSync for \"0a7b9b0ea47aa889b6d5597d41d9f5ecf3ccc392f2d5f74cd7be134b392cec28\" failed" error="failed to exec in container: failed to create exec \"71ca2aa3c2db08aa05be27e4fc4794ad1f42842cc11aa107e6449be29fae08a3\": context deadline exceeded: unknown" Jul 10 01:31:21.097189 kubelet[2299]: E0710 01:31:19.576868 2299 log.go:32] 
"ContainerStatus from runtime service failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" containerID="a200d687dc84da70963635919a56406c3ab3b1d9e93e3d78979e61b2e309695b" Jul 10 01:31:21.771800 systemd[1]: Started sshd@25-139.178.70.102:22-139.178.68.195:33276.service. Jul 10 01:31:21.904802 kernel: audit: type=1130 audit(1752111081.777:621): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-139.178.70.102:22-139.178.68.195:33276 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 01:31:21.777000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-139.178.70.102:22-139.178.68.195:33276 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 01:31:22.567000 audit[6976]: USER_ACCT pid=6976 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:31:22.730337 kernel: audit: type=1101 audit(1752111082.567:622): pid=6976 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:31:22.770973 kernel: audit: type=1103 audit(1752111082.684:623): pid=6976 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:31:22.774291 kernel: audit: type=1006 audit(1752111082.706:624): pid=6976 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=27 res=1 Jul 10 01:31:22.774344 kernel: audit: type=1300 audit(1752111082.706:624): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe182e1900 a2=3 a3=0 items=0 ppid=1 pid=6976 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=27 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:31:22.774369 kernel: audit: type=1327 audit(1752111082.706:624): proctitle=737368643A20636F7265205B707269765D Jul 10 01:31:22.684000 audit[6976]: CRED_ACQ pid=6976 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:31:22.706000 audit[6976]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe182e1900 a2=3 a3=0 items=0 ppid=1 pid=6976 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=27 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:31:22.706000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jul 10 01:31:22.851726 sshd[6976]: Accepted publickey for core from 139.178.68.195 port 33276 ssh2: RSA SHA256:NVpdRDPpwzjVTzi6orhe1cA9BvcYymCSReGH8myOy/Q Jul 10 01:31:22.745971 sshd[6976]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 10 01:31:23.054736 systemd-logind[1351]: New session 27 of user core. 
Jul 10 01:31:23.150734 kernel: audit: type=1105 audit(1752111083.116:625): pid=6976 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:31:23.116000 audit[6976]: USER_START pid=6976 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:31:23.212396 kernel: audit: type=1103 audit(1752111083.206:626): pid=6979 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:31:23.206000 audit[6979]: CRED_ACQ pid=6979 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:31:23.058859 systemd[1]: Started session-27.scope. Jul 10 01:31:26.221919 env[1363]: time="2025-07-10T01:31:26.197232661Z" level=error msg="ExecSync for \"0a7b9b0ea47aa889b6d5597d41d9f5ecf3ccc392f2d5f74cd7be134b392cec28\" failed" error="rpc error: code = DeadlineExceeded desc = failed to exec in container: timeout 5s exceeded: context deadline exceeded" Jul 10 01:31:28.158354 env[1363]: 2025-07-10 01:31:27.286 [WARNING][6945] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="3e37249528bb3e0be92befd65b6647a34c4c854d8942b3cdda871096eeadbddb" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--58fd7646b9--zxwst-eth0", GenerateName:"goldmane-58fd7646b9-", Namespace:"calico-system", SelfLink:"", UID:"ced04dc5-79ee-4a07-a568-b0fd4007f64c", ResourceVersion:"1828", Generation:0, CreationTimestamp:time.Date(2025, time.July, 10, 1, 13, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"58fd7646b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d50fd4405e1f03ed2cdfbef802c2261b6b6ef77dbd652ba6fa35f73abffba742", Pod:"goldmane-58fd7646b9-zxwst", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calida2f92a11f8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 10 01:31:28.158354 env[1363]: 2025-07-10 01:31:27.338 [INFO][6945] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="3e37249528bb3e0be92befd65b6647a34c4c854d8942b3cdda871096eeadbddb" Jul 10 01:31:28.158354 env[1363]: 2025-07-10 01:31:27.339 [INFO][6945] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="3e37249528bb3e0be92befd65b6647a34c4c854d8942b3cdda871096eeadbddb" iface="eth0" netns="" Jul 10 01:31:28.158354 env[1363]: 2025-07-10 01:31:27.340 [INFO][6945] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="3e37249528bb3e0be92befd65b6647a34c4c854d8942b3cdda871096eeadbddb" Jul 10 01:31:28.158354 env[1363]: 2025-07-10 01:31:27.340 [INFO][6945] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3e37249528bb3e0be92befd65b6647a34c4c854d8942b3cdda871096eeadbddb" Jul 10 01:31:28.158354 env[1363]: 2025-07-10 01:31:27.823 [INFO][6990] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3e37249528bb3e0be92befd65b6647a34c4c854d8942b3cdda871096eeadbddb" HandleID="k8s-pod-network.3e37249528bb3e0be92befd65b6647a34c4c854d8942b3cdda871096eeadbddb" Workload="localhost-k8s-goldmane--58fd7646b9--zxwst-eth0" Jul 10 01:31:28.158354 env[1363]: 2025-07-10 01:31:27.838 [INFO][6990] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 10 01:31:28.158354 env[1363]: 2025-07-10 01:31:27.841 [INFO][6990] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 10 01:31:28.158354 env[1363]: 2025-07-10 01:31:28.092 [WARNING][6990] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3e37249528bb3e0be92befd65b6647a34c4c854d8942b3cdda871096eeadbddb" HandleID="k8s-pod-network.3e37249528bb3e0be92befd65b6647a34c4c854d8942b3cdda871096eeadbddb" Workload="localhost-k8s-goldmane--58fd7646b9--zxwst-eth0" Jul 10 01:31:28.158354 env[1363]: 2025-07-10 01:31:28.098 [INFO][6990] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3e37249528bb3e0be92befd65b6647a34c4c854d8942b3cdda871096eeadbddb" HandleID="k8s-pod-network.3e37249528bb3e0be92befd65b6647a34c4c854d8942b3cdda871096eeadbddb" Workload="localhost-k8s-goldmane--58fd7646b9--zxwst-eth0" Jul 10 01:31:28.158354 env[1363]: 2025-07-10 01:31:28.113 [INFO][6990] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 10 01:31:28.158354 env[1363]: 2025-07-10 01:31:28.132 [INFO][6945] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="3e37249528bb3e0be92befd65b6647a34c4c854d8942b3cdda871096eeadbddb" Jul 10 01:31:28.158354 env[1363]: time="2025-07-10T01:31:28.156163318Z" level=info msg="TearDown network for sandbox \"3e37249528bb3e0be92befd65b6647a34c4c854d8942b3cdda871096eeadbddb\" successfully" Jul 10 01:31:28.158354 env[1363]: time="2025-07-10T01:31:28.156187525Z" level=info msg="StopPodSandbox for \"3e37249528bb3e0be92befd65b6647a34c4c854d8942b3cdda871096eeadbddb\" returns successfully" Jul 10 01:31:37.841832 kubelet[2299]: W0710 01:31:17.179189 2299 watcher.go:93] Error while processing event ("/sys/fs/cgroup/pids/system.slice/system-sshd.slice/sshd@22-139.178.70.102:22-139.178.68.195:49118.service": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/pids/system.slice/system-sshd.slice/sshd@22-139.178.70.102:22-139.178.68.195:49118.service: no such file or directory Jul 10 01:31:54.327243 kubelet[2299]: W0710 01:31:21.477021 2299 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.CSIDriver ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jul 10 01:32:04.155184 sshd[6976]: pam_unix(sshd:session): session closed for user core Jul 10 01:32:04.267988 kernel: audit: type=1106 audit(1752111124.221:627): pid=6976 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:32:04.277076 kernel: audit: type=1104 audit(1752111124.225:628): pid=6976 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:32:04.277113 kernel: audit: type=1131 audit(1752111124.233:629): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-139.178.70.102:22-139.178.68.195:33276 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 10 01:32:04.221000 audit[6976]: USER_END pid=6976 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:32:04.225000 audit[6976]: CRED_DISP pid=6976 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:32:04.233000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-139.178.70.102:22-139.178.68.195:33276 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 01:32:04.230567 systemd[1]: sshd@25-139.178.70.102:22-139.178.68.195:33276.service: Deactivated successfully. Jul 10 01:32:04.238981 systemd[1]: session-27.scope: Deactivated successfully. Jul 10 01:32:04.239008 systemd-logind[1351]: Session 27 logged out. Waiting for processes to exit. Jul 10 01:32:04.277213 systemd-logind[1351]: Removed session 27. Jul 10 01:32:09.224466 systemd[1]: Started sshd@26-139.178.70.102:22-139.178.68.195:45664.service. Jul 10 01:32:09.303760 kernel: audit: type=1130 audit(1752111129.227:630): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@26-139.178.70.102:22-139.178.68.195:45664 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 01:32:09.227000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@26-139.178.70.102:22-139.178.68.195:45664 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 10 01:32:09.475967 sshd[7006]: Accepted publickey for core from 139.178.68.195 port 45664 ssh2: RSA SHA256:NVpdRDPpwzjVTzi6orhe1cA9BvcYymCSReGH8myOy/Q Jul 10 01:32:09.490249 kernel: audit: type=1101 audit(1752111129.475:631): pid=7006 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:32:09.490675 kernel: audit: type=1103 audit(1752111129.479:632): pid=7006 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:32:09.490700 kernel: audit: type=1006 audit(1752111129.479:633): pid=7006 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=28 res=1 Jul 10 01:32:09.490717 kernel: audit: type=1300 audit(1752111129.479:633): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe7447f0a0 a2=3 a3=0 items=0 ppid=1 pid=7006 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=28 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:32:09.491294 kernel: audit: type=1327 audit(1752111129.479:633): proctitle=737368643A20636F7265205B707269765D Jul 10 01:32:09.475000 audit[7006]: USER_ACCT pid=7006 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:32:09.479000 audit[7006]: CRED_ACQ pid=7006 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:32:09.479000 audit[7006]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe7447f0a0 a2=3 a3=0 items=0 ppid=1 pid=7006 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=28 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:32:09.479000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jul 10 01:32:09.491477 sshd[7006]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 10 01:32:09.529107 systemd[1]: Started session-28.scope. Jul 10 01:32:09.530175 systemd-logind[1351]: New session 28 of user core. 
Jul 10 01:32:09.539713 kernel: audit: type=1105 audit(1752111129.534:634): pid=7006 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:32:09.534000 audit[7006]: USER_START pid=7006 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:32:09.539000 audit[7009]: CRED_ACQ pid=7009 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:32:09.547941 kernel: audit: type=1103 audit(1752111129.539:635): pid=7009 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:32:59.538392 sshd[7006]: pam_unix(sshd:session): session closed for user core Jul 10 01:32:59.652713 kernel: audit: type=1106 audit(1752111179.590:636): pid=7006 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:32:59.657681 kernel: audit: type=1104 audit(1752111179.595:637): pid=7006 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:32:59.657717 kernel: audit: type=1131 audit(1752111179.605:638): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@26-139.178.70.102:22-139.178.68.195:45664 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 01:32:59.590000 audit[7006]: USER_END pid=7006 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:32:59.595000 audit[7006]: CRED_DISP pid=7006 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:32:59.605000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@26-139.178.70.102:22-139.178.68.195:45664 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 01:32:59.602867 systemd[1]: sshd@26-139.178.70.102:22-139.178.68.195:45664.service: Deactivated successfully. Jul 10 01:32:59.607293 systemd[1]: session-28.scope: Deactivated successfully. Jul 10 01:32:59.607576 systemd-logind[1351]: Session 28 logged out. Waiting for processes to exit. 
Jul 10 01:32:59.630577 systemd-logind[1351]: Removed session 28. Jul 10 01:33:03.113407 kubelet[2299]: E0710 01:31:21.341813 2299 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" containerID="dc40952d28006045e942aa22b5bc381b2f7d35d15ba79973f504ec8ad17ec2d9" Jul 10 01:33:03.113407 kubelet[2299]: E0710 01:33:01.408867 2299 kubelet.go:2345] "Skipping pod synchronization" err="PLEG is not healthy: pleg was last seen active 7m44.137531256s ago; threshold is 3m0s" Jul 10 01:33:03.113407 kubelet[2299]: E0710 01:32:06.618212 2299 kuberuntime_image.go:117] "Failed to list images" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" Jul 10 01:33:04.603538 systemd[1]: Started sshd@27-139.178.70.102:22-139.178.68.195:40552.service. Jul 10 01:33:04.682761 kernel: audit: type=1130 audit(1752111184.605:639): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@27-139.178.70.102:22-139.178.68.195:40552 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 01:33:04.605000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@27-139.178.70.102:22-139.178.68.195:40552 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 01:33:04.859310 kubelet[2299]: W0710 01:32:08.249266 2299 watcher.go:93] Error while processing event ("/sys/fs/cgroup/memory/user.slice/user-500.slice/session-24.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/memory/user.slice/user-500.slice/session-24.scope: no such file or directory Jul 10 01:33:04.973870 sshd[7053]: Accepted publickey for core from 139.178.68.195 port 40552 ssh2: RSA SHA256:NVpdRDPpwzjVTzi6orhe1cA9BvcYymCSReGH8myOy/Q Jul 10 01:33:04.990361 kernel: audit: type=1101 audit(1752111184.972:640): pid=7053 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:33:04.997450 kernel: audit: type=1103 audit(1752111184.981:641): pid=7053 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:33:04.997478 kernel: audit: type=1006 audit(1752111184.981:642): pid=7053 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=29 res=1 Jul 10 01:33:04.999034 kernel: audit: type=1300 audit(1752111184.981:642): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffcc7eaa0d0 a2=3 a3=0 items=0 ppid=1 pid=7053 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=29 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:33:04.999068 kernel: audit: type=1327 audit(1752111184.981:642): proctitle=737368643A20636F7265205B707269765D Jul 10 01:33:04.972000 audit[7053]: USER_ACCT pid=7053 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:33:04.981000 audit[7053]: CRED_ACQ pid=7053 uid=0 
auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:33:04.981000 audit[7053]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffcc7eaa0d0 a2=3 a3=0 items=0 ppid=1 pid=7053 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=29 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:33:04.981000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jul 10 01:33:05.001878 sshd[7053]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 10 01:33:05.228693 systemd-logind[1351]: New session 29 of user core. Jul 10 01:33:05.290327 kernel: audit: type=1105 audit(1752111185.269:643): pid=7053 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:33:05.269000 audit[7053]: USER_START pid=7053 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:33:05.234142 systemd[1]: Started session-29.scope. Jul 10 01:33:05.326073 kernel: audit: type=1103 audit(1752111185.302:644): pid=7056 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:33:05.302000 audit[7056]: CRED_ACQ pid=7056 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:34:37.693575 sshd[7053]: pam_unix(sshd:session): session closed for user core Jul 10 01:34:37.795197 kernel: audit: type=1106 audit(1752111277.750:645): pid=7053 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:34:37.799800 kernel: audit: type=1104 audit(1752111277.757:646): pid=7053 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:34:37.799836 kernel: audit: type=1131 audit(1752111277.764:647): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@27-139.178.70.102:22-139.178.68.195:40552 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 10 01:34:37.750000 audit[7053]: USER_END pid=7053 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:34:37.757000 audit[7053]: CRED_DISP pid=7053 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:34:37.764000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@27-139.178.70.102:22-139.178.68.195:40552 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 01:34:37.761783 systemd[1]: sshd@27-139.178.70.102:22-139.178.68.195:40552.service: Deactivated successfully. Jul 10 01:34:37.770086 systemd[1]: session-29.scope: Deactivated successfully. Jul 10 01:34:37.770696 systemd-logind[1351]: Session 29 logged out. Waiting for processes to exit. Jul 10 01:34:37.791845 systemd-logind[1351]: Removed session 29. Jul 10 01:34:38.230081 kubelet[2299]: E0710 01:34:38.221096 2299 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" podSandboxID="3e37249528bb3e0be92befd65b6647a34c4c854d8942b3cdda871096eeadbddb" Jul 10 01:34:38.248115 kubelet[2299]: E0710 01:33:57.549772 2299 container_log_manager.go:274] "Failed to get container status" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" worker=1 containerID="dc40952d28006045e942aa22b5bc381b2f7d35d15ba79973f504ec8ad17ec2d9" Jul 10 01:34:38.322159 kubelet[2299]: E0710 01:34:38.322129 2299 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" containerID="0a7b9b0ea47aa889b6d5597d41d9f5ecf3ccc392f2d5f74cd7be134b392cec28" cmd=["/health","-ready"] Jul 10 01:34:38.360038 kubelet[2299]: E0710 01:34:38.359995 2299 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" containerID="f9259ae361e3731af85557e5b9606bd5ebae0bba6b9af22c45cecaaa08d4539c" cmd=["/usr/bin/check-status","-l"] Jul 10 01:34:38.458831 kubelet[2299]: I0710 01:32:08.358510 2299 setters.go:600] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-07-10T01:29:34Z","lastTransitionTime":"2025-07-10T01:29:34Z","reason":"KubeletNotReady","message":"[container runtime is down, PLEG is not healthy: pleg was last seen active 5m44.463305922s ago; threshold is 3m0s]"} Jul 10 01:34:38.491179 kubelet[2299]: E0710 01:34:16.628821 2299 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" containerID="0a7b9b0ea47aa889b6d5597d41d9f5ecf3ccc392f2d5f74cd7be134b392cec28" cmd=["/health","-live"] Jul 10 01:34:38.598581 kubelet[2299]: I0710 01:34:17.777859 2299 image_gc_manager.go:222] "Failed to update image list" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" Jul 10 01:34:40.730863 kubelet[2299]: W0710 01:34:30.593099 2299 watcher.go:93] Error while processing event ("/sys/fs/cgroup/pids/user.slice/user-500.slice/session-24.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch 
/sys/fs/cgroup/pids/user.slice/user-500.slice/session-24.scope: no such file or directory Jul 10 01:34:42.786828 systemd[1]: Started sshd@28-139.178.70.102:22-139.178.68.195:42776.service. Jul 10 01:34:42.889946 kernel: audit: type=1130 audit(1752111282.788:648): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@28-139.178.70.102:22-139.178.68.195:42776 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 01:34:42.788000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@28-139.178.70.102:22-139.178.68.195:42776 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 01:34:43.476000 audit[7107]: USER_ACCT pid=7107 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:34:43.575832 kernel: audit: type=1101 audit(1752111283.476:649): pid=7107 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:34:43.590898 kernel: audit: type=1103 audit(1752111283.561:650): pid=7107 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:34:43.592545 kernel: audit: type=1006 audit(1752111283.567:651): pid=7107 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=30 res=1 Jul 10 01:34:43.595330 kernel: audit: type=1300 audit(1752111283.567:651): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffcf0f90860 a2=3 a3=0 items=0 ppid=1 pid=7107 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=30 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:34:43.598664 kernel: audit: type=1327 audit(1752111283.567:651): proctitle=737368643A20636F7265205B707269765D Jul 10 01:34:43.561000 audit[7107]: CRED_ACQ pid=7107 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:34:43.567000 audit[7107]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffcf0f90860 a2=3 a3=0 items=0 ppid=1 pid=7107 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=30 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:34:43.567000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jul 10 01:34:43.639164 sshd[7107]: Accepted publickey for core from 139.178.68.195 port 42776 ssh2: RSA SHA256:NVpdRDPpwzjVTzi6orhe1cA9BvcYymCSReGH8myOy/Q Jul 10 01:34:43.591526 sshd[7107]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 10 01:34:43.822444 systemd-logind[1351]: New session 30 of user core. Jul 10 01:34:43.826501 systemd[1]: Started session-30.scope. 
Jul 10 01:34:43.882269 kernel: audit: type=1105 audit(1752111283.867:652): pid=7107 uid=0 auid=500 ses=30 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:34:43.867000 audit[7107]: USER_START pid=7107 uid=0 auid=500 ses=30 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:34:43.924223 kernel: audit: type=1103 audit(1752111283.907:653): pid=7110 uid=0 auid=500 ses=30 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:34:43.907000 audit[7110]: CRED_ACQ pid=7110 uid=0 auid=500 ses=30 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:34:48.200628 kubelet[2299]: E0710 01:34:38.281052 2299 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" containerID="f9259ae361e3731af85557e5b9606bd5ebae0bba6b9af22c45cecaaa08d4539c" cmd=["/usr/bin/check-status","-r"] Jul 10 01:35:50.985912 kubelet[2299]: E0710 01:34:41.972954 2299 controller.go:195] "Failed to update lease" err="Put \"https://139.178.70.102:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jul 10 01:38:06.926038 env[1363]: time="2025-07-10T01:38:06.921224856Z" level=info msg="shim disconnected" id=42c4f72e06364455d75dc5e1a2d8db5f45b4c495410e92ef5effcfafc52d9353 Jul 10 01:38:06.926038 env[1363]: time="2025-07-10T01:38:06.921307551Z" level=warning msg="cleaning up after shim disconnected" id=42c4f72e06364455d75dc5e1a2d8db5f45b4c495410e92ef5effcfafc52d9353 namespace=k8s.io Jul 10 01:38:06.926038 env[1363]: time="2025-07-10T01:38:06.921321670Z" level=info msg="cleaning up dead shim" Jul 10 01:38:06.926038 env[1363]: time="2025-07-10T01:38:06.934658086Z" level=warning msg="cleanup warnings time=\"2025-07-10T01:38:06Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=7226 runtime=io.containerd.runc.v2\n" Jul 10 01:38:07.078186 env[1363]: time="2025-07-10T01:38:06.946473879Z" level=info msg="shim disconnected" id=1d85ec74d241860eeadf05dad7e3fcac3b836bb5b8e411f5de5ce4e21f282532 Jul 10 01:38:07.078186 env[1363]: time="2025-07-10T01:38:06.946508014Z" level=warning msg="cleaning up after shim disconnected" id=1d85ec74d241860eeadf05dad7e3fcac3b836bb5b8e411f5de5ce4e21f282532 namespace=k8s.io Jul 10 01:38:07.078186 env[1363]: time="2025-07-10T01:38:06.946516354Z" level=info msg="cleaning up dead shim" Jul 10 01:38:07.078186 env[1363]: time="2025-07-10T01:38:06.956873249Z" level=warning msg="cleanup warnings time=\"2025-07-10T01:38:06Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=7251 runtime=io.containerd.runc.v2\n" Jul 10 01:38:07.009729 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1d85ec74d241860eeadf05dad7e3fcac3b836bb5b8e411f5de5ce4e21f282532-rootfs.mount: 
Deactivated successfully. Jul 10 01:38:07.009852 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-42c4f72e06364455d75dc5e1a2d8db5f45b4c495410e92ef5effcfafc52d9353-rootfs.mount: Deactivated successfully. Jul 10 01:38:07.206532 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c7f80148b1dd15cbd59d6e22ff09bf3b5bae95d8070822acb223d22a170cfe84-rootfs.mount: Deactivated successfully. Jul 10 01:38:07.207480 env[1363]: time="2025-07-10T01:38:07.206950020Z" level=info msg="shim disconnected" id=c7f80148b1dd15cbd59d6e22ff09bf3b5bae95d8070822acb223d22a170cfe84 Jul 10 01:38:07.207480 env[1363]: time="2025-07-10T01:38:07.207009096Z" level=warning msg="cleaning up after shim disconnected" id=c7f80148b1dd15cbd59d6e22ff09bf3b5bae95d8070822acb223d22a170cfe84 namespace=k8s.io Jul 10 01:38:07.207480 env[1363]: time="2025-07-10T01:38:07.207015794Z" level=info msg="cleaning up dead shim" Jul 10 01:38:07.215983 env[1363]: time="2025-07-10T01:38:07.215944887Z" level=warning msg="cleanup warnings time=\"2025-07-10T01:38:07Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=7276 runtime=io.containerd.runc.v2\n" Jul 10 01:38:07.941955 sshd[7107]: pam_unix(sshd:session): session closed for user core Jul 10 01:38:07.969000 audit[7107]: USER_END pid=7107 uid=0 auid=500 ses=30 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:38:07.977760 kernel: audit: type=1106 audit(1752111487.969:654): pid=7107 uid=0 auid=500 ses=30 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:38:07.976000 audit[7107]: CRED_DISP pid=7107 uid=0 auid=500 ses=30 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:38:07.979291 systemd[1]: sshd@28-139.178.70.102:22-139.178.68.195:42776.service: Deactivated successfully. Jul 10 01:38:07.979840 systemd[1]: session-30.scope: Deactivated successfully. Jul 10 01:38:07.981653 kernel: audit: type=1104 audit(1752111487.976:655): pid=7107 uid=0 auid=500 ses=30 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:38:07.982906 kernel: audit: type=1131 audit(1752111487.977:656): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@28-139.178.70.102:22-139.178.68.195:42776 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 01:38:07.977000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@28-139.178.70.102:22-139.178.68.195:42776 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 01:38:07.985532 systemd-logind[1351]: Session 30 logged out. Waiting for processes to exit. Jul 10 01:38:07.990579 systemd-logind[1351]: Removed session 30. 
Jul 10 01:38:08.215766 kubelet[2299]: E0710 01:34:42.258724 2299 log.go:32] "ListContainers with filter from runtime service failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" filter="&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},}" Jul 10 01:38:08.273437 kubelet[2299]: E0710 01:38:08.273407 2299 log.go:32] "ListImages with filter from image service failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" filter="nil" Jul 10 01:38:08.274150 kubelet[2299]: E0710 01:38:08.274126 2299 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" containerID="915c58c03353ee54736489abf3797867734b173634b282af0191665aad606e66" Jul 10 01:38:08.282119 kubelet[2299]: E0710 01:38:08.282071 2299 container_log_manager.go:274] "Failed to get container status" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" worker=1 containerID="915c58c03353ee54736489abf3797867734b173634b282af0191665aad606e66" Jul 10 01:38:08.292204 kubelet[2299]: E0710 01:34:40.887835 2299 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://139.178.70.102:6443/api/v1/namespaces/kube-system/events\": dial tcp 139.178.70.102:6443: i/o timeout" event="&Event{ObjectMeta:{kube-controller-manager-localhost.1850bef8d49c3c5e kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-controller-manager-localhost,UID:3f04709fe51ae4ab5abd58e8da771b74,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Unhealthy,Message:Liveness probe failed: Get \"https://127.0.0.1:10257/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers),Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-07-10 01:17:10.975298654 +0000 UTC m=+250.843725545,LastTimestamp:2025-07-10 01:17:10.975298654 +0000 UTC m=+250.843725545,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jul 10 01:38:08.296441 kubelet[2299]: E0710 01:35:11.399447 2299 kuberuntime_manager.go:1599] "getPodContainerStatuses for pod failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" pod="calico-apiserver/calico-apiserver-6d44674bc4-b2wqb" Jul 10 01:38:08.300310 kubelet[2299]: E0710 01:38:08.300293 2299 generic.go:453] "PLEG: Write status" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" pod="calico-apiserver/calico-apiserver-6d44674bc4-b2wqb" Jul 10 01:38:08.319511 kubelet[2299]: E0710 01:38:08.319490 2299 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime is down, PLEG is not healthy: pleg was last seen active 10m25.689114331s ago; threshold is 3m0s]" Jul 10 01:38:08.321282 kubelet[2299]: I0710 01:38:08.321266 2299 request.go:700] Waited for 1.145038691s due to client-side throttling, not priority and fairness, request: GET:https://139.178.70.102:6443/apis/node.k8s.io/v1/runtimeclasses?resourceVersion=2326 Jul 10 01:38:08.322609 kubelet[2299]: E0710 01:38:08.322586 2299 log.go:32] "ListImages with filter from image service failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" filter="nil" Jul 10 01:38:08.323197 kubelet[2299]: E0710 01:38:08.322618 2299 kuberuntime_image.go:117] "Failed to list images" err="rpc error: code = 
DeadlineExceeded desc = context deadline exceeded" Jul 10 01:38:08.323197 kubelet[2299]: I0710 01:38:08.322628 2299 image_gc_manager.go:214] "Failed to monitor images" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" Jul 10 01:38:08.329422 kubelet[2299]: E0710 01:38:08.329405 2299 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" containerID="0a7b9b0ea47aa889b6d5597d41d9f5ecf3ccc392f2d5f74cd7be134b392cec28" cmd=["/health","-live"] Jul 10 01:38:08.342707 kubelet[2299]: W0710 01:38:08.342690 2299 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://139.178.70.102:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&resourceVersion=2681": net/http: TLS handshake timeout Jul 10 01:38:08.342905 kubelet[2299]: E0710 01:38:08.342872 2299 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://139.178.70.102:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&resourceVersion=2681\": net/http: TLS handshake timeout" logger="UnhandledError" Jul 10 01:38:08.345631 kubelet[2299]: E0710 01:38:08.345298 2299 log.go:32] "Status from runtime service failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" Jul 10 01:38:08.348672 kubelet[2299]: E0710 01:38:08.348655 2299 kubelet.go:2887] "Container runtime sanity check failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" Jul 10 01:38:08.348715 kubelet[2299]: W0710 01:37:19.949417 2299 watcher.go:93] Error while processing event ("/sys/fs/cgroup/memory/system.slice/system-sshd.slice/sshd@23-139.178.70.102:22-139.178.68.195:59358.service": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/memory/system.slice/system-sshd.slice/sshd@23-139.178.70.102:22-139.178.68.195:59358.service: no such file or directory Jul 10 01:38:08.348715 kubelet[2299]: W0710 01:38:08.348697 2299 watcher.go:93] Error while processing event ("/sys/fs/cgroup/pids/system.slice/system-sshd.slice/sshd@23-139.178.70.102:22-139.178.68.195:59358.service": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/pids/system.slice/system-sshd.slice/sshd@23-139.178.70.102:22-139.178.68.195:59358.service: no such file or directory Jul 10 01:38:08.348765 kubelet[2299]: W0710 01:38:08.348718 2299 watcher.go:93] Error while processing event ("/sys/fs/cgroup/memory/user.slice/user-500.slice/session-25.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/memory/user.slice/user-500.slice/session-25.scope: no such file or directory Jul 10 01:38:08.348765 kubelet[2299]: W0710 01:38:08.348732 2299 watcher.go:93] Error while processing event ("/sys/fs/cgroup/pids/user.slice/user-500.slice/session-25.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/pids/user.slice/user-500.slice/session-25.scope: no such file or directory Jul 10 01:38:08.348765 kubelet[2299]: W0710 01:38:08.348759 2299 watcher.go:93] Error while processing event ("/sys/fs/cgroup/memory/system.slice/system-sshd.slice/sshd@24-139.178.70.102:22-139.178.68.195:46150.service": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/memory/system.slice/system-sshd.slice/sshd@24-139.178.70.102:22-139.178.68.195:46150.service: no such file or directory Jul 10 01:38:08.348832 kubelet[2299]: W0710 01:38:08.348767 2299 watcher.go:93] Error while processing event 
("/sys/fs/cgroup/pids/system.slice/system-sshd.slice/sshd@24-139.178.70.102:22-139.178.68.195:46150.service": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/pids/system.slice/system-sshd.slice/sshd@24-139.178.70.102:22-139.178.68.195:46150.service: no such file or directory Jul 10 01:38:08.348832 kubelet[2299]: W0710 01:38:08.348775 2299 watcher.go:93] Error while processing event ("/sys/fs/cgroup/memory/user.slice/user-500.slice/session-26.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/memory/user.slice/user-500.slice/session-26.scope: no such file or directory Jul 10 01:38:08.348832 kubelet[2299]: W0710 01:38:08.348782 2299 watcher.go:93] Error while processing event ("/sys/fs/cgroup/pids/user.slice/user-500.slice/session-26.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/pids/user.slice/user-500.slice/session-26.scope: no such file or directory Jul 10 01:38:08.351024 kubelet[2299]: W0710 01:38:08.348792 2299 watcher.go:93] Error while processing event ("/sys/fs/cgroup/memory/system.slice/system-sshd.slice/sshd@25-139.178.70.102:22-139.178.68.195:33276.service": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/memory/system.slice/system-sshd.slice/sshd@25-139.178.70.102:22-139.178.68.195:33276.service: no such file or directory Jul 10 01:38:08.351068 kubelet[2299]: W0710 01:38:08.351028 2299 watcher.go:93] Error while processing event ("/sys/fs/cgroup/pids/system.slice/system-sshd.slice/sshd@25-139.178.70.102:22-139.178.68.195:33276.service": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/pids/system.slice/system-sshd.slice/sshd@25-139.178.70.102:22-139.178.68.195:33276.service: no such file or directory Jul 10 01:38:08.351068 kubelet[2299]: W0710 01:38:08.351040 2299 watcher.go:93] Error while processing event ("/sys/fs/cgroup/memory/user.slice/user-500.slice/session-27.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/memory/user.slice/user-500.slice/session-27.scope: no such file or directory Jul 10 01:38:08.351068 kubelet[2299]: W0710 01:38:08.351048 2299 watcher.go:93] Error while processing event ("/sys/fs/cgroup/pids/user.slice/user-500.slice/session-27.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/pids/user.slice/user-500.slice/session-27.scope: no such file or directory Jul 10 01:38:08.351068 kubelet[2299]: W0710 01:38:08.351062 2299 watcher.go:93] Error while processing event ("/sys/fs/cgroup/memory/system.slice/system-sshd.slice/sshd@26-139.178.70.102:22-139.178.68.195:45664.service": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/memory/system.slice/system-sshd.slice/sshd@26-139.178.70.102:22-139.178.68.195:45664.service: no such file or directory Jul 10 01:38:08.351154 kubelet[2299]: W0710 01:38:08.351069 2299 watcher.go:93] Error while processing event ("/sys/fs/cgroup/pids/system.slice/system-sshd.slice/sshd@26-139.178.70.102:22-139.178.68.195:45664.service": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/pids/system.slice/system-sshd.slice/sshd@26-139.178.70.102:22-139.178.68.195:45664.service: no such file or directory Jul 10 01:38:08.351154 kubelet[2299]: W0710 01:38:08.351077 2299 watcher.go:93] Error while processing event ("/sys/fs/cgroup/memory/user.slice/user-500.slice/session-28.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/memory/user.slice/user-500.slice/session-28.scope: no such file or directory Jul 10 01:38:08.351154 
kubelet[2299]: W0710 01:38:08.351084 2299 watcher.go:93] Error while processing event ("/sys/fs/cgroup/pids/user.slice/user-500.slice/session-28.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/pids/user.slice/user-500.slice/session-28.scope: no such file or directory Jul 10 01:38:08.351154 kubelet[2299]: W0710 01:38:08.351094 2299 watcher.go:93] Error while processing event ("/sys/fs/cgroup/memory/system.slice/system-sshd.slice/sshd@27-139.178.70.102:22-139.178.68.195:40552.service": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/memory/system.slice/system-sshd.slice/sshd@27-139.178.70.102:22-139.178.68.195:40552.service: no such file or directory Jul 10 01:38:08.351154 kubelet[2299]: W0710 01:38:08.351101 2299 watcher.go:93] Error while processing event ("/sys/fs/cgroup/pids/system.slice/system-sshd.slice/sshd@27-139.178.70.102:22-139.178.68.195:40552.service": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/pids/system.slice/system-sshd.slice/sshd@27-139.178.70.102:22-139.178.68.195:40552.service: no such file or directory Jul 10 01:38:08.351154 kubelet[2299]: W0710 01:38:08.351108 2299 watcher.go:93] Error while processing event ("/sys/fs/cgroup/memory/user.slice/user-500.slice/session-29.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/memory/user.slice/user-500.slice/session-29.scope: no such file or directory Jul 10 01:38:08.351154 kubelet[2299]: W0710 01:38:08.351114 2299 watcher.go:93] Error while processing event ("/sys/fs/cgroup/pids/user.slice/user-500.slice/session-29.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/pids/user.slice/user-500.slice/session-29.scope: no such file or directory Jul 10 01:38:08.351154 kubelet[2299]: W0710 01:38:08.351126 2299 watcher.go:93] Error while processing event ("/sys/fs/cgroup/memory/system.slice/system-sshd.slice/sshd@28-139.178.70.102:22-139.178.68.195:42776.service": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/memory/system.slice/system-sshd.slice/sshd@28-139.178.70.102:22-139.178.68.195:42776.service: no such file or directory Jul 10 01:38:08.351154 kubelet[2299]: W0710 01:38:08.351133 2299 watcher.go:93] Error while processing event ("/sys/fs/cgroup/pids/system.slice/system-sshd.slice/sshd@28-139.178.70.102:22-139.178.68.195:42776.service": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/pids/system.slice/system-sshd.slice/sshd@28-139.178.70.102:22-139.178.68.195:42776.service: no such file or directory Jul 10 01:38:08.351154 kubelet[2299]: W0710 01:38:08.351141 2299 watcher.go:93] Error while processing event ("/sys/fs/cgroup/memory/user.slice/user-500.slice/session-30.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/memory/user.slice/user-500.slice/session-30.scope: no such file or directory Jul 10 01:38:08.351154 kubelet[2299]: W0710 01:38:08.351151 2299 watcher.go:93] Error while processing event ("/sys/fs/cgroup/pids/user.slice/user-500.slice/session-30.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/pids/user.slice/user-500.slice/session-30.scope: no such file or directory Jul 10 01:38:08.353542 kubelet[2299]: E0710 01:38:08.353524 2299 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" filter="nil" Jul 10 01:38:08.353624 kubelet[2299]: E0710 01:38:08.353612 2299 kuberuntime_sandbox.go:305] "Failed to list pod sandboxes" 
err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" Jul 10 01:38:08.353713 kubelet[2299]: E0710 01:38:08.353702 2299 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" Jul 10 01:38:08.355063 kubelet[2299]: E0710 01:38:08.355049 2299 container_log_manager.go:197] "Failed to rotate container logs" err="failed to list containers: rpc error: code = DeadlineExceeded desc = context deadline exceeded" Jul 10 01:38:08.355522 kubelet[2299]: E0710 01:38:08.355500 2299 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" containerID="dc40952d28006045e942aa22b5bc381b2f7d35d15ba79973f504ec8ad17ec2d9" cmd=["/bin/calico-node","-bird-ready","-felix-ready"] Jul 10 01:38:08.355853 kubelet[2299]: E0710 01:38:08.355838 2299 kuberuntime_image.go:117] "Failed to list images" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" Jul 10 01:38:08.355853 kubelet[2299]: I0710 01:38:08.355853 2299 image_gc_manager.go:222] "Failed to update image list" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" Jul 10 01:38:08.357523 env[1363]: time="2025-07-10T01:38:08.356946253Z" level=error msg="get state for f9259ae361e3731af85557e5b9606bd5ebae0bba6b9af22c45cecaaa08d4539c" error="context deadline exceeded: unknown" Jul 10 01:38:08.357523 env[1363]: time="2025-07-10T01:38:08.356971562Z" level=warning msg="unknown status" status=0 Jul 10 01:38:08.358782 env[1363]: time="2025-07-10T01:38:08.358759784Z" level=error msg="ExecSync for \"f9259ae361e3731af85557e5b9606bd5ebae0bba6b9af22c45cecaaa08d4539c\" failed" error="failed to exec in container: failed to create exec \"d1c425deb37262ff242eea9830fc0b1bd4d8e1469a496b16297be27e737a20ca\": context deadline exceeded: unknown" Jul 10 01:38:08.359115 kubelet[2299]: W0710 01:38:08.359095 2299 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://139.178.70.102:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&resourceVersion=2445": net/http: TLS handshake timeout Jul 10 01:38:08.359171 kubelet[2299]: E0710 01:38:08.359130 2299 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://139.178.70.102:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&resourceVersion=2445\": net/http: TLS handshake timeout" logger="UnhandledError" Jul 10 01:38:08.359171 kubelet[2299]: E0710 01:38:08.359140 2299 kuberuntime_gc.go:180] "Failed to stop sandbox before removing" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" sandboxID="3e37249528bb3e0be92befd65b6647a34c4c854d8942b3cdda871096eeadbddb" Jul 10 01:38:08.375834 systemd[1]: run-containerd-runc-k8s.io-0a7b9b0ea47aa889b6d5597d41d9f5ecf3ccc392f2d5f74cd7be134b392cec28-runc.AJlnfH.mount: Deactivated successfully. Jul 10 01:38:08.396568 systemd[1]: run-containerd-runc-k8s.io-f9259ae361e3731af85557e5b9606bd5ebae0bba6b9af22c45cecaaa08d4539c-runc.sWLEcu.mount: Deactivated successfully. 
Jul 10 01:38:08.500577 kubelet[2299]: E0710 01:38:08.500346 2299 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = failed to exec in container: failed to create exec \"d1c425deb37262ff242eea9830fc0b1bd4d8e1469a496b16297be27e737a20ca\": context deadline exceeded: unknown" containerID="f9259ae361e3731af85557e5b9606bd5ebae0bba6b9af22c45cecaaa08d4539c" cmd=["/usr/bin/check-status","-l"] Jul 10 01:38:08.539224 kubelet[2299]: W0710 01:38:08.537061 2299 reflector.go:484] pkg/kubelet/config/apiserver.go:66: watch of *v1.Pod ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding Jul 10 01:38:09.532325 kubelet[2299]: I0710 01:38:09.518936 2299 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-b48c6" podStartSLOduration=1473.218175384 podStartE2EDuration="24m55.442383998s" podCreationTimestamp="2025-07-10 01:13:14 +0000 UTC" firstStartedPulling="2025-07-10 01:13:44.956413472 +0000 UTC m=+44.824840352" lastFinishedPulling="2025-07-10 01:14:07.180622082 +0000 UTC m=+67.049048966" observedRunningTime="2025-07-10 01:38:08.606051462 +0000 UTC m=+1508.474478353" watchObservedRunningTime="2025-07-10 01:38:09.442383998 +0000 UTC m=+1509.310810889" Jul 10 01:38:12.954294 systemd[1]: Started sshd@29-139.178.70.102:22-139.178.68.195:45532.service. Jul 10 01:38:12.967885 kernel: audit: type=1130 audit(1752111492.953:657): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@29-139.178.70.102:22-139.178.68.195:45532 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 01:38:12.953000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@29-139.178.70.102:22-139.178.68.195:45532 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 10 01:38:13.108000 audit[7415]: USER_ACCT pid=7415 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:38:13.122271 kernel: audit: type=1101 audit(1752111493.108:658): pid=7415 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:38:13.122335 kernel: audit: type=1103 audit(1752111493.113:659): pid=7415 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:38:13.126599 kernel: audit: type=1006 audit(1752111493.115:660): pid=7415 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=31 res=1 Jul 10 01:38:13.126677 kernel: audit: type=1300 audit(1752111493.115:660): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffeec115970 a2=3 a3=0 items=0 ppid=1 pid=7415 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=31 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:38:13.126708 kernel: audit: type=1327 audit(1752111493.115:660): proctitle=737368643A20636F7265205B707269765D Jul 10 01:38:13.113000 audit[7415]: CRED_ACQ pid=7415 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:38:13.115000 audit[7415]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffeec115970 a2=3 a3=0 items=0 ppid=1 pid=7415 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=31 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:38:13.115000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jul 10 01:38:13.132288 sshd[7415]: Accepted publickey for core from 139.178.68.195 port 45532 ssh2: RSA SHA256:NVpdRDPpwzjVTzi6orhe1cA9BvcYymCSReGH8myOy/Q Jul 10 01:38:13.129404 sshd[7415]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 10 01:38:13.154018 systemd[1]: Started session-31.scope. Jul 10 01:38:13.154323 systemd-logind[1351]: New session 31 of user core. 
Jul 10 01:38:13.156000 audit[7415]: USER_START pid=7415 uid=0 auid=500 ses=31 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:38:13.161000 audit[7418]: CRED_ACQ pid=7418 uid=0 auid=500 ses=31 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:38:13.165342 kernel: audit: type=1105 audit(1752111493.156:661): pid=7415 uid=0 auid=500 ses=31 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:38:13.165379 kernel: audit: type=1103 audit(1752111493.161:662): pid=7418 uid=0 auid=500 ses=31 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:38:13.468959 kubelet[2299]: E0710 01:38:13.468925 2299 kubelet.go:2345] "Skipping pod synchronization" err="container runtime is down" Jul 10 01:38:13.942456 sshd[7415]: pam_unix(sshd:session): session closed for user core Jul 10 01:38:13.949829 kernel: audit: type=1106 audit(1752111493.941:663): pid=7415 uid=0 auid=500 ses=31 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:38:13.950352 kernel: audit: type=1104 audit(1752111493.941:664): pid=7415 uid=0 auid=500 ses=31 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:38:13.941000 audit[7415]: USER_END pid=7415 uid=0 auid=500 ses=31 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:38:13.941000 audit[7415]: CRED_DISP pid=7415 uid=0 auid=500 ses=31 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:38:13.953766 kernel: audit: type=1131 audit(1752111493.948:665): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@29-139.178.70.102:22-139.178.68.195:45532 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 01:38:13.948000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@29-139.178.70.102:22-139.178.68.195:45532 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 01:38:13.950446 systemd[1]: sshd@29-139.178.70.102:22-139.178.68.195:45532.service: Deactivated successfully. 
Jul 10 01:38:13.951285 systemd[1]: session-31.scope: Deactivated successfully. Jul 10 01:38:13.951654 systemd-logind[1351]: Session 31 logged out. Waiting for processes to exit. Jul 10 01:38:13.952205 systemd-logind[1351]: Removed session 31. Jul 10 01:38:16.599736 systemd[1]: run-containerd-runc-k8s.io-0a7b9b0ea47aa889b6d5597d41d9f5ecf3ccc392f2d5f74cd7be134b392cec28-runc.h9Iu9w.mount: Deactivated successfully. Jul 10 01:38:18.706008 kubelet[2299]: I0710 01:38:18.705984 2299 scope.go:117] "RemoveContainer" containerID="1d85ec74d241860eeadf05dad7e3fcac3b836bb5b8e411f5de5ce4e21f282532" Jul 10 01:38:18.717132 kubelet[2299]: I0710 01:38:18.717111 2299 scope.go:117] "RemoveContainer" containerID="42c4f72e06364455d75dc5e1a2d8db5f45b4c495410e92ef5effcfafc52d9353" Jul 10 01:38:18.718063 kubelet[2299]: I0710 01:38:18.717342 2299 scope.go:117] "RemoveContainer" containerID="c7f80148b1dd15cbd59d6e22ff09bf3b5bae95d8070822acb223d22a170cfe84" Jul 10 01:38:18.757423 env[1363]: time="2025-07-10T01:38:18.757400558Z" level=info msg="StopContainer for \"2c9a852303586e6248b136709c2283dd38b0cb347056e0f9d8aa77a5eb662d30\" with timeout 30 (s)" Jul 10 01:38:18.764672 env[1363]: time="2025-07-10T01:38:18.757733230Z" level=info msg="StopContainer for \"9f554c7a1a1192bf8f33530ae0b697d908ab3fedeb5044bf3f3dc34eb3189402\" with timeout 30 (s)" Jul 10 01:38:18.764672 env[1363]: time="2025-07-10T01:38:18.758095030Z" level=info msg="Stop container \"9f554c7a1a1192bf8f33530ae0b697d908ab3fedeb5044bf3f3dc34eb3189402\" with signal terminated" Jul 10 01:38:18.764672 env[1363]: time="2025-07-10T01:38:18.758146334Z" level=info msg="Stop container \"2c9a852303586e6248b136709c2283dd38b0cb347056e0f9d8aa77a5eb662d30\" with signal terminated" Jul 10 01:38:18.761523 systemd[1]: run-containerd-runc-k8s.io-f9259ae361e3731af85557e5b9606bd5ebae0bba6b9af22c45cecaaa08d4539c-runc.Z3ICTS.mount: Deactivated successfully. Jul 10 01:38:18.938496 env[1363]: time="2025-07-10T01:38:18.938060533Z" level=info msg="StopContainer for \"2426c34da3c56c7c197e36edfc96763e7adc7f0e476d41bf1372bb6d05be576f\" with timeout 30 (s)" Jul 10 01:38:18.938496 env[1363]: time="2025-07-10T01:38:18.938177575Z" level=info msg="StopContainer for \"f9259ae361e3731af85557e5b9606bd5ebae0bba6b9af22c45cecaaa08d4539c\" with timeout 30 (s)" Jul 10 01:38:18.938496 env[1363]: time="2025-07-10T01:38:18.938255387Z" level=info msg="StopContainer for \"9a5d4a598e938ac14cd5303eac5f5d043b801c05fe04056375ed6661f862bc21\" with timeout 300 (s)" Jul 10 01:38:18.938496 env[1363]: time="2025-07-10T01:38:18.938340834Z" level=info msg="Stop container \"f9259ae361e3731af85557e5b9606bd5ebae0bba6b9af22c45cecaaa08d4539c\" with signal terminated" Jul 10 01:38:18.938496 env[1363]: time="2025-07-10T01:38:18.938389355Z" level=info msg="Stop container \"2426c34da3c56c7c197e36edfc96763e7adc7f0e476d41bf1372bb6d05be576f\" with signal terminated" Jul 10 01:38:18.938496 env[1363]: time="2025-07-10T01:38:18.938428355Z" level=info msg="Stop container \"9a5d4a598e938ac14cd5303eac5f5d043b801c05fe04056375ed6661f862bc21\" with signal terminated" Jul 10 01:38:18.971000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@30-139.178.70.102:22-139.178.68.195:44494 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 01:38:18.973360 systemd[1]: Started sshd@30-139.178.70.102:22-139.178.68.195:44494.service. 
Jul 10 01:38:18.992586 kernel: audit: type=1130 audit(1752111498.971:666): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@30-139.178.70.102:22-139.178.68.195:44494 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 01:38:19.055240 env[1363]: time="2025-07-10T01:38:19.055213586Z" level=info msg="shim disconnected" id=f9259ae361e3731af85557e5b9606bd5ebae0bba6b9af22c45cecaaa08d4539c Jul 10 01:38:19.055432 env[1363]: time="2025-07-10T01:38:19.055422375Z" level=warning msg="cleaning up after shim disconnected" id=f9259ae361e3731af85557e5b9606bd5ebae0bba6b9af22c45cecaaa08d4539c namespace=k8s.io Jul 10 01:38:19.055511 env[1363]: time="2025-07-10T01:38:19.055480776Z" level=info msg="cleaning up dead shim" Jul 10 01:38:19.069800 env[1363]: time="2025-07-10T01:38:19.069771169Z" level=warning msg="cleanup warnings time=\"2025-07-10T01:38:19Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=7532 runtime=io.containerd.runc.v2\n" Jul 10 01:38:19.077816 env[1363]: time="2025-07-10T01:38:19.077768176Z" level=info msg="StopContainer for \"f9259ae361e3731af85557e5b9606bd5ebae0bba6b9af22c45cecaaa08d4539c\" returns successfully" Jul 10 01:38:19.156444 env[1363]: time="2025-07-10T01:38:19.155022256Z" level=info msg="StopContainer for \"9613f2d808f30fc610330008107d20687f76b9c5168e9ed86b6bbe227c241755\" with timeout 30 (s)" Jul 10 01:38:19.156444 env[1363]: time="2025-07-10T01:38:19.155146415Z" level=info msg="StopContainer for \"846639043e3e3375edb5ca693ab2e7bd950e8cc7bfe6f9bd5620ee47769cd79c\" with timeout 30 (s)" Jul 10 01:38:19.156444 env[1363]: time="2025-07-10T01:38:19.155236218Z" level=info msg="StopContainer for \"0a7b9b0ea47aa889b6d5597d41d9f5ecf3ccc392f2d5f74cd7be134b392cec28\" with timeout 30 (s)" Jul 10 01:38:19.156444 env[1363]: time="2025-07-10T01:38:19.155305105Z" level=info msg="StopContainer for \"915c58c03353ee54736489abf3797867734b173634b282af0191665aad606e66\" with timeout 30 (s)" Jul 10 01:38:19.156444 env[1363]: time="2025-07-10T01:38:19.155575067Z" level=info msg="Stop container \"915c58c03353ee54736489abf3797867734b173634b282af0191665aad606e66\" with signal terminated" Jul 10 01:38:19.156444 env[1363]: time="2025-07-10T01:38:19.155652175Z" level=info msg="Stop container \"0a7b9b0ea47aa889b6d5597d41d9f5ecf3ccc392f2d5f74cd7be134b392cec28\" with signal terminated" Jul 10 01:38:19.156444 env[1363]: time="2025-07-10T01:38:19.155702643Z" level=info msg="Stop container \"9613f2d808f30fc610330008107d20687f76b9c5168e9ed86b6bbe227c241755\" with signal terminated" Jul 10 01:38:19.156444 env[1363]: time="2025-07-10T01:38:19.155738236Z" level=info msg="Stop container \"846639043e3e3375edb5ca693ab2e7bd950e8cc7bfe6f9bd5620ee47769cd79c\" with signal terminated" Jul 10 01:38:19.190430 env[1363]: time="2025-07-10T01:38:19.190404480Z" level=info msg="CreateContainer within sandbox \"38c6fe2ffb7701339c0787fc0145f3c27d488400622b32132d0a646d4a55bb9b\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}" Jul 10 01:38:19.192470 env[1363]: time="2025-07-10T01:38:19.192094368Z" level=info msg="CreateContainer within sandbox \"f8dcf5beaced1e2365092d211e82d524559009db97d39d280dc1e2449686a212\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}" Jul 10 01:38:19.209000 audit[7584]: NETFILTER_CFG table=filter:163 family=2 entries=11 op=nft_register_rule pid=7584 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 10 01:38:19.209000 audit[7584]: SYSCALL arch=c000003e 
syscall=46 success=yes exit=4504 a0=3 a1=7fff89140d30 a2=0 a3=7fff89140d1c items=0 ppid=2398 pid=7584 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:38:19.226987 kernel: audit: type=1325 audit(1752111499.209:667): table=filter:163 family=2 entries=11 op=nft_register_rule pid=7584 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 10 01:38:19.227038 kernel: audit: type=1300 audit(1752111499.209:667): arch=c000003e syscall=46 success=yes exit=4504 a0=3 a1=7fff89140d30 a2=0 a3=7fff89140d1c items=0 ppid=2398 pid=7584 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:38:19.228260 kernel: audit: type=1327 audit(1752111499.209:667): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 10 01:38:19.228286 kernel: audit: type=1325 audit(1752111499.220:668): table=nat:164 family=2 entries=29 op=nft_unregister_chain pid=7584 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 10 01:38:19.228738 kernel: audit: type=1300 audit(1752111499.220:668): arch=c000003e syscall=46 success=yes exit=6796 a0=3 a1=7fff89140d30 a2=0 a3=7fff89140d1c items=0 ppid=2398 pid=7584 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:38:19.209000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 10 01:38:19.220000 audit[7584]: NETFILTER_CFG table=nat:164 family=2 entries=29 op=nft_unregister_chain pid=7584 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 10 01:38:19.220000 audit[7584]: SYSCALL arch=c000003e syscall=46 success=yes exit=6796 a0=3 a1=7fff89140d30 a2=0 a3=7fff89140d1c items=0 ppid=2398 pid=7584 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:38:19.232704 kernel: audit: type=1327 audit(1752111499.220:668): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 10 01:38:19.220000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 10 01:38:19.233000 audit[7518]: USER_ACCT pid=7518 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:38:19.238419 sshd[7518]: Accepted publickey for core from 139.178.68.195 port 44494 ssh2: RSA SHA256:NVpdRDPpwzjVTzi6orhe1cA9BvcYymCSReGH8myOy/Q Jul 10 01:38:19.240766 kernel: audit: type=1101 audit(1752111499.233:669): pid=7518 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:38:19.240800 kernel: audit: 
type=1103 audit(1752111499.237:670): pid=7518 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:38:19.237000 audit[7518]: CRED_ACQ pid=7518 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:38:19.241530 sshd[7518]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 10 01:38:19.244190 kubelet[2299]: W0710 01:38:19.244089 2299 reflector.go:561] object-"calico-system"/"tigera-ca-bundle": failed to list *v1.ConfigMap: Get "https://139.178.70.102:6443/api/v1/namespaces/calico-system/configmaps?fieldSelector=metadata.name%3Dtigera-ca-bundle&resourceVersion=687": dial tcp 139.178.70.102:6443: connect: connection refused Jul 10 01:38:19.244190 kubelet[2299]: W0710 01:38:19.244113 2299 reflector.go:561] object-"calico-apiserver"/"calico-apiserver-certs": failed to list *v1.Secret: Get "https://139.178.70.102:6443/api/v1/namespaces/calico-apiserver/secrets?fieldSelector=metadata.name%3Dcalico-apiserver-certs&resourceVersion=612": dial tcp 139.178.70.102:6443: connect: connection refused Jul 10 01:38:19.247657 kernel: audit: type=1006 audit(1752111499.237:671): pid=7518 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=32 res=1 Jul 10 01:38:19.237000 audit[7518]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffda67b7230 a2=3 a3=0 items=0 ppid=1 pid=7518 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=32 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:38:19.237000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jul 10 01:38:19.256142 systemd-logind[1351]: New session 32 of user core. Jul 10 01:38:19.260000 audit[7518]: USER_START pid=7518 uid=0 auid=500 ses=32 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:38:19.257220 systemd[1]: Started session-32.scope. 
Jul 10 01:38:19.263601 env[1363]: time="2025-07-10T01:38:19.257595779Z" level=info msg="shim disconnected" id=9613f2d808f30fc610330008107d20687f76b9c5168e9ed86b6bbe227c241755 Jul 10 01:38:19.263601 env[1363]: time="2025-07-10T01:38:19.257621336Z" level=warning msg="cleaning up after shim disconnected" id=9613f2d808f30fc610330008107d20687f76b9c5168e9ed86b6bbe227c241755 namespace=k8s.io Jul 10 01:38:19.263601 env[1363]: time="2025-07-10T01:38:19.257626834Z" level=info msg="cleaning up dead shim" Jul 10 01:38:19.263601 env[1363]: time="2025-07-10T01:38:19.259424774Z" level=info msg="shim disconnected" id=0a7b9b0ea47aa889b6d5597d41d9f5ecf3ccc392f2d5f74cd7be134b392cec28 Jul 10 01:38:19.263601 env[1363]: time="2025-07-10T01:38:19.259442524Z" level=warning msg="cleaning up after shim disconnected" id=0a7b9b0ea47aa889b6d5597d41d9f5ecf3ccc392f2d5f74cd7be134b392cec28 namespace=k8s.io Jul 10 01:38:19.263601 env[1363]: time="2025-07-10T01:38:19.259447727Z" level=info msg="cleaning up dead shim" Jul 10 01:38:19.262000 audit[7614]: CRED_ACQ pid=7614 uid=0 auid=500 ses=32 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:38:19.266774 kubelet[2299]: I0710 01:38:19.263628 2299 status_manager.go:875] "Failed to update status for pod" pod="kube-system/coredns-7c65d6cfc9-snhl5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3459c244-a1ae-43bc-ad86-239a6e665757\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"DisruptionTarget\\\"},{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-07-10T01:38:18Z\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-07-10T01:38:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}]}}\" for pod \"kube-system\"/\"coredns-7c65d6cfc9-snhl5\": Patch \"https://139.178.70.102:6443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-snhl5/status\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:19.266774 kubelet[2299]: W0710 01:38:19.264326 2299 reflector.go:561] object-"calico-apiserver"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://139.178.70.102:6443/api/v1/namespaces/calico-apiserver/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=687": dial tcp 139.178.70.102:6443: connect: connection refused Jul 10 01:38:19.266774 kubelet[2299]: E0710 01:38:19.264757 2299 reflector.go:158] "Unhandled Error" err="object-\"calico-apiserver\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://139.178.70.102:6443/api/v1/namespaces/calico-apiserver/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=687\": dial tcp 139.178.70.102:6443: connect: connection refused" logger="UnhandledError" Jul 10 01:38:19.266774 kubelet[2299]: W0710 01:38:19.264824 2299 reflector.go:561] object-"calico-system"/"whisker-backend-key-pair": failed to list *v1.Secret: Get "https://139.178.70.102:6443/api/v1/namespaces/calico-system/secrets?fieldSelector=metadata.name%3Dwhisker-backend-key-pair&resourceVersion=612": dial tcp 139.178.70.102:6443: connect: connection refused Jul 10 01:38:19.266774 kubelet[2299]: E0710 01:38:19.264847 2299 reflector.go:158] "Unhandled Error" 
err="object-\"calico-system\"/\"whisker-backend-key-pair\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://139.178.70.102:6443/api/v1/namespaces/calico-system/secrets?fieldSelector=metadata.name%3Dwhisker-backend-key-pair&resourceVersion=612\": dial tcp 139.178.70.102:6443: connect: connection refused" logger="UnhandledError" Jul 10 01:38:19.266774 kubelet[2299]: W0710 01:38:19.264882 2299 reflector.go:561] object-"calico-system"/"goldmane-key-pair": failed to list *v1.Secret: Get "https://139.178.70.102:6443/api/v1/namespaces/calico-system/secrets?fieldSelector=metadata.name%3Dgoldmane-key-pair&resourceVersion=612": dial tcp 139.178.70.102:6443: connect: connection refused Jul 10 01:38:19.266774 kubelet[2299]: E0710 01:38:19.264903 2299 reflector.go:158] "Unhandled Error" err="object-\"calico-system\"/\"goldmane-key-pair\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://139.178.70.102:6443/api/v1/namespaces/calico-system/secrets?fieldSelector=metadata.name%3Dgoldmane-key-pair&resourceVersion=612\": dial tcp 139.178.70.102:6443: connect: connection refused" logger="UnhandledError" Jul 10 01:38:19.266774 kubelet[2299]: W0710 01:38:19.264937 2299 reflector.go:561] object-"calico-system"/"typha-certs": failed to list *v1.Secret: Get "https://139.178.70.102:6443/api/v1/namespaces/calico-system/secrets?fieldSelector=metadata.name%3Dtypha-certs&resourceVersion=612": dial tcp 139.178.70.102:6443: connect: connection refused Jul 10 01:38:19.266774 kubelet[2299]: E0710 01:38:19.264955 2299 reflector.go:158] "Unhandled Error" err="object-\"calico-system\"/\"typha-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://139.178.70.102:6443/api/v1/namespaces/calico-system/secrets?fieldSelector=metadata.name%3Dtypha-certs&resourceVersion=612\": dial tcp 139.178.70.102:6443: connect: connection refused" logger="UnhandledError" Jul 10 01:38:19.266774 kubelet[2299]: W0710 01:38:19.264988 2299 reflector.go:561] object-"calico-system"/"goldmane": failed to list *v1.ConfigMap: Get "https://139.178.70.102:6443/api/v1/namespaces/calico-system/configmaps?fieldSelector=metadata.name%3Dgoldmane&resourceVersion=687": dial tcp 139.178.70.102:6443: connect: connection refused Jul 10 01:38:19.266774 kubelet[2299]: E0710 01:38:19.265005 2299 reflector.go:158] "Unhandled Error" err="object-\"calico-system\"/\"goldmane\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://139.178.70.102:6443/api/v1/namespaces/calico-system/configmaps?fieldSelector=metadata.name%3Dgoldmane&resourceVersion=687\": dial tcp 139.178.70.102:6443: connect: connection refused" logger="UnhandledError" Jul 10 01:38:19.277576 env[1363]: time="2025-07-10T01:38:19.277549568Z" level=info msg="shim disconnected" id=915c58c03353ee54736489abf3797867734b173634b282af0191665aad606e66 Jul 10 01:38:19.277678 env[1363]: time="2025-07-10T01:38:19.277581161Z" level=warning msg="cleaning up after shim disconnected" id=915c58c03353ee54736489abf3797867734b173634b282af0191665aad606e66 namespace=k8s.io Jul 10 01:38:19.277678 env[1363]: time="2025-07-10T01:38:19.277589624Z" level=info msg="cleaning up dead shim" Jul 10 01:38:19.278660 kubelet[2299]: W0710 01:38:19.278527 2299 reflector.go:561] object-"calico-system"/"whisker-ca-bundle": failed to list *v1.ConfigMap: Get "https://139.178.70.102:6443/api/v1/namespaces/calico-system/configmaps?fieldSelector=metadata.name%3Dwhisker-ca-bundle&resourceVersion=687": dial tcp 139.178.70.102:6443: connect: connection refused Jul 10 
01:38:19.278660 kubelet[2299]: E0710 01:38:19.278570 2299 reflector.go:158] "Unhandled Error" err="object-\"calico-system\"/\"whisker-ca-bundle\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://139.178.70.102:6443/api/v1/namespaces/calico-system/configmaps?fieldSelector=metadata.name%3Dwhisker-ca-bundle&resourceVersion=687\": dial tcp 139.178.70.102:6443: connect: connection refused" logger="UnhandledError" Jul 10 01:38:19.278660 kubelet[2299]: W0710 01:38:19.278620 2299 reflector.go:561] object-"tigera-operator"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://139.178.70.102:6443/api/v1/namespaces/tigera-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=687": dial tcp 139.178.70.102:6443: connect: connection refused Jul 10 01:38:19.278875 kubelet[2299]: E0710 01:38:19.278766 2299 reflector.go:158] "Unhandled Error" err="object-\"tigera-operator\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://139.178.70.102:6443/api/v1/namespaces/tigera-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=687\": dial tcp 139.178.70.102:6443: connect: connection refused" logger="UnhandledError" Jul 10 01:38:19.279539 kubelet[2299]: W0710 01:38:19.279139 2299 reflector.go:561] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://139.178.70.102:6443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=687": dial tcp 139.178.70.102:6443: connect: connection refused Jul 10 01:38:19.279539 kubelet[2299]: E0710 01:38:19.279186 2299 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://139.178.70.102:6443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=687\": dial tcp 139.178.70.102:6443: connect: connection refused" logger="UnhandledError" Jul 10 01:38:19.279539 kubelet[2299]: W0710 01:38:19.244090 2299 reflector.go:561] object-"tigera-operator"/"kubernetes-services-endpoint": failed to list *v1.ConfigMap: Get "https://139.178.70.102:6443/api/v1/namespaces/tigera-operator/configmaps?fieldSelector=metadata.name%3Dkubernetes-services-endpoint&resourceVersion=687": dial tcp 139.178.70.102:6443: connect: connection refused Jul 10 01:38:19.279539 kubelet[2299]: E0710 01:38:19.279485 2299 reflector.go:158] "Unhandled Error" err="object-\"tigera-operator\"/\"kubernetes-services-endpoint\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://139.178.70.102:6443/api/v1/namespaces/tigera-operator/configmaps?fieldSelector=metadata.name%3Dkubernetes-services-endpoint&resourceVersion=687\": dial tcp 139.178.70.102:6443: connect: connection refused" logger="UnhandledError" Jul 10 01:38:19.282308 kubelet[2299]: E0710 01:38:19.281979 2299 kubelet_node_status.go:535] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"NetworkUnavailable\\\"},{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-07-10T01:38:19Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-07-10T01:38:19Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-07-10T01:38:19Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-07-10T01:38:19Z\\\",\\\"lastTransitionTime\\\":\\\"2025-07-10T01:38:19Z\\\",\\\"message\\\":\\\"kubelet is posting ready status\\\",\\\"reason\\\":\\\"KubeletReady\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"}]}}\" for node \"localhost\": Patch \"https://139.178.70.102:6443/api/v1/nodes/localhost/status?timeout=10s\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:19.282308 kubelet[2299]: W0710 01:38:19.282115 2299 reflector.go:561] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: Get "https://139.178.70.102:6443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dcoredns&resourceVersion=687": dial tcp 139.178.70.102:6443: connect: connection refused Jul 10 01:38:19.282308 kubelet[2299]: E0710 01:38:19.282142 2299 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"coredns\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://139.178.70.102:6443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dcoredns&resourceVersion=687\": dial tcp 139.178.70.102:6443: connect: connection refused" logger="UnhandledError" Jul 10 01:38:19.283274 kubelet[2299]: W0710 01:38:19.282386 2299 reflector.go:561] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: Get "https://139.178.70.102:6443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dkube-proxy&resourceVersion=687": dial tcp 139.178.70.102:6443: connect: connection refused Jul 10 01:38:19.283274 kubelet[2299]: E0710 01:38:19.282413 2299 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-proxy\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://139.178.70.102:6443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dkube-proxy&resourceVersion=687\": dial tcp 139.178.70.102:6443: connect: connection refused" logger="UnhandledError" Jul 10 01:38:19.285097 kubelet[2299]: I0710 01:38:19.282532 2299 status_manager.go:851] "Failed to get status for pod" podUID="ced04dc5-79ee-4a07-a568-b0fd4007f64c" pod="calico-system/goldmane-58fd7646b9-zxwst" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/calico-system/pods/goldmane-58fd7646b9-zxwst\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:19.286538 kubelet[2299]: E0710 01:38:19.285133 2299 reflector.go:158] "Unhandled Error" err="object-\"calico-system\"/\"tigera-ca-bundle\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://139.178.70.102:6443/api/v1/namespaces/calico-system/configmaps?fieldSelector=metadata.name%3Dtigera-ca-bundle&resourceVersion=687\": dial tcp 139.178.70.102:6443: connect: connection refused" logger="UnhandledError" Jul 10 01:38:19.286538 kubelet[2299]: I0710 01:38:19.286203 2299 status_manager.go:851] "Failed to get status for pod" podUID="b35b56493416c25588cb530e37ffc065" pod="kube-system/kube-scheduler-localhost" err="Get 
\"https://139.178.70.102:6443/api/v1/namespaces/kube-system/pods/kube-scheduler-localhost\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:19.286538 kubelet[2299]: W0710 01:38:19.286342 2299 reflector.go:561] object-"calico-system"/"goldmane-ca-bundle": failed to list *v1.ConfigMap: Get "https://139.178.70.102:6443/api/v1/namespaces/calico-system/configmaps?fieldSelector=metadata.name%3Dgoldmane-ca-bundle&resourceVersion=687": dial tcp 139.178.70.102:6443: connect: connection refused Jul 10 01:38:19.286538 kubelet[2299]: E0710 01:38:19.286358 2299 reflector.go:158] "Unhandled Error" err="object-\"calico-system\"/\"goldmane-ca-bundle\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://139.178.70.102:6443/api/v1/namespaces/calico-system/configmaps?fieldSelector=metadata.name%3Dgoldmane-ca-bundle&resourceVersion=687\": dial tcp 139.178.70.102:6443: connect: connection refused" logger="UnhandledError" Jul 10 01:38:19.286538 kubelet[2299]: W0710 01:38:19.286391 2299 reflector.go:561] object-"calico-system"/"node-certs": failed to list *v1.Secret: Get "https://139.178.70.102:6443/api/v1/namespaces/calico-system/secrets?fieldSelector=metadata.name%3Dnode-certs&resourceVersion=612": dial tcp 139.178.70.102:6443: connect: connection refused Jul 10 01:38:19.286538 kubelet[2299]: E0710 01:38:19.286403 2299 reflector.go:158] "Unhandled Error" err="object-\"calico-system\"/\"node-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://139.178.70.102:6443/api/v1/namespaces/calico-system/secrets?fieldSelector=metadata.name%3Dnode-certs&resourceVersion=612\": dial tcp 139.178.70.102:6443: connect: connection refused" logger="UnhandledError" Jul 10 01:38:19.286538 kubelet[2299]: E0710 01:38:19.286437 2299 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"localhost\": Get \"https://139.178.70.102:6443/api/v1/nodes/localhost?timeout=10s\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:19.286725 kubelet[2299]: E0710 01:38:19.286655 2299 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"localhost\": Get \"https://139.178.70.102:6443/api/v1/nodes/localhost?timeout=10s\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:19.287205 kubelet[2299]: I0710 01:38:19.286800 2299 status_manager.go:851] "Failed to get status for pod" podUID="3f04709fe51ae4ab5abd58e8da771b74" pod="kube-system/kube-controller-manager-localhost" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/kube-system/pods/kube-controller-manager-localhost\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:19.287205 kubelet[2299]: I0710 01:38:19.286885 2299 status_manager.go:851] "Failed to get status for pod" podUID="9c135a1b-00bf-4e6f-87fa-9ac292c6a135" pod="tigera-operator/tigera-operator-5bf8dfcb4-twgs2" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/tigera-operator/pods/tigera-operator-5bf8dfcb4-twgs2\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:19.287205 kubelet[2299]: E0710 01:38:19.286964 2299 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"localhost\": Get \"https://139.178.70.102:6443/api/v1/nodes/localhost?timeout=10s\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:19.287205 kubelet[2299]: E0710 01:38:19.287038 2299 kubelet_node_status.go:535] "Error updating node status, will retry" 
err="error getting node \"localhost\": Get \"https://139.178.70.102:6443/api/v1/nodes/localhost?timeout=10s\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:19.287375 env[1363]: time="2025-07-10T01:38:19.287356595Z" level=info msg="CreateContainer within sandbox \"38c6fe2ffb7701339c0787fc0145f3c27d488400622b32132d0a646d4a55bb9b\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"398e05089aac7f1a4c583ad60e245a4c0899df35cfb9775954b69e0bcd3068f4\"" Jul 10 01:38:19.288348 kubelet[2299]: E0710 01:38:19.288326 2299 kubelet_node_status.go:522] "Unable to update node status" err="update node status exceeds retry count" Jul 10 01:38:19.289853 kubelet[2299]: I0710 01:38:19.288372 2299 status_manager.go:851] "Failed to get status for pod" podUID="5f01bcaa-ff1c-4bd5-988b-d3c60c6abdcc" pod="calico-system/calico-kube-controllers-5477ff879d-j2p5q" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/calico-system/pods/calico-kube-controllers-5477ff879d-j2p5q\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:19.289853 kubelet[2299]: I0710 01:38:19.288573 2299 status_manager.go:851] "Failed to get status for pod" podUID="8acd60714a0f0f6f5e038fa659db2909" pod="kube-system/kube-apiserver-localhost" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/kube-system/pods/kube-apiserver-localhost\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:19.290881 env[1363]: time="2025-07-10T01:38:19.290857842Z" level=warning msg="cleanup warnings time=\"2025-07-10T01:38:19Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=7612 runtime=io.containerd.runc.v2\n" Jul 10 01:38:19.292616 env[1363]: time="2025-07-10T01:38:19.292592809Z" level=info msg="CreateContainer within sandbox \"f8dcf5beaced1e2365092d211e82d524559009db97d39d280dc1e2449686a212\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"607cefa353481b5bb0ed6ab19a1fbd68872a9a5dc516917c35557fe47df71906\"" Jul 10 01:38:19.296945 env[1363]: time="2025-07-10T01:38:19.296924048Z" level=info msg="StopContainer for \"9613f2d808f30fc610330008107d20687f76b9c5168e9ed86b6bbe227c241755\" returns successfully" Jul 10 01:38:19.297889 env[1363]: time="2025-07-10T01:38:19.297865375Z" level=info msg="shim disconnected" id=846639043e3e3375edb5ca693ab2e7bd950e8cc7bfe6f9bd5620ee47769cd79c Jul 10 01:38:19.297959 env[1363]: time="2025-07-10T01:38:19.297947727Z" level=warning msg="cleaning up after shim disconnected" id=846639043e3e3375edb5ca693ab2e7bd950e8cc7bfe6f9bd5620ee47769cd79c namespace=k8s.io Jul 10 01:38:19.298010 env[1363]: time="2025-07-10T01:38:19.297999761Z" level=info msg="cleaning up dead shim" Jul 10 01:38:19.301874 env[1363]: time="2025-07-10T01:38:19.301692164Z" level=warning msg="cleanup warnings time=\"2025-07-10T01:38:19Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=7615 runtime=io.containerd.runc.v2\n" Jul 10 01:38:19.305800 env[1363]: time="2025-07-10T01:38:19.305769792Z" level=info msg="StopContainer for \"0a7b9b0ea47aa889b6d5597d41d9f5ecf3ccc392f2d5f74cd7be134b392cec28\" returns successfully" Jul 10 01:38:19.305932 env[1363]: time="2025-07-10T01:38:19.305916127Z" level=warning msg="cleanup warnings time=\"2025-07-10T01:38:19Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=7624 runtime=io.containerd.runc.v2\n" Jul 10 01:38:19.307746 kubelet[2299]: E0710 01:38:19.307710 2299 reflector.go:158] "Unhandled Error" err="object-\"calico-apiserver\"/\"calico-apiserver-certs\": 
Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://139.178.70.102:6443/api/v1/namespaces/calico-apiserver/secrets?fieldSelector=metadata.name%3Dcalico-apiserver-certs&resourceVersion=612\": dial tcp 139.178.70.102:6443: connect: connection refused" logger="UnhandledError" Jul 10 01:38:19.308006 env[1363]: time="2025-07-10T01:38:19.307980482Z" level=info msg="StartContainer for \"398e05089aac7f1a4c583ad60e245a4c0899df35cfb9775954b69e0bcd3068f4\"" Jul 10 01:38:19.308990 kubelet[2299]: I0710 01:38:19.308831 2299 status_manager.go:851] "Failed to get status for pod" podUID="8e8146e9-6407-49b7-8cef-e26dac385734" pod="calico-apiserver/calico-apiserver-6d44674bc4-w2f48" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/calico-apiserver/pods/calico-apiserver-6d44674bc4-w2f48\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:19.309045 env[1363]: time="2025-07-10T01:38:19.308942839Z" level=info msg="CreateContainer within sandbox \"6503e247e079e9b1040ac4f9c23ba0f9f2bd42e5328355dba03928c27dd6e73b\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:1,}" Jul 10 01:38:19.310282 kubelet[2299]: I0710 01:38:19.310262 2299 status_manager.go:851] "Failed to get status for pod" podUID="a29ef6dc-4246-436d-87dd-9c8e96247aeb" pod="kube-system/coredns-7c65d6cfc9-4k5ld" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-4k5ld\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:19.310802 env[1363]: time="2025-07-10T01:38:19.310778188Z" level=info msg="StopContainer for \"915c58c03353ee54736489abf3797867734b173634b282af0191665aad606e66\" returns successfully" Jul 10 01:38:19.310969 env[1363]: time="2025-07-10T01:38:19.310954968Z" level=warning msg="cleanup warnings time=\"2025-07-10T01:38:19Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=7660 runtime=io.containerd.runc.v2\n" Jul 10 01:38:19.313479 kubelet[2299]: I0710 01:38:19.313443 2299 status_manager.go:851] "Failed to get status for pod" podUID="3459c244-a1ae-43bc-ad86-239a6e665757" pod="kube-system/coredns-7c65d6cfc9-snhl5" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-snhl5\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:19.313601 kubelet[2299]: I0710 01:38:19.313586 2299 status_manager.go:851] "Failed to get status for pod" podUID="ced04dc5-79ee-4a07-a568-b0fd4007f64c" pod="calico-system/goldmane-58fd7646b9-zxwst" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/calico-system/pods/goldmane-58fd7646b9-zxwst\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:19.313744 kubelet[2299]: I0710 01:38:19.313727 2299 status_manager.go:851] "Failed to get status for pod" podUID="b35b56493416c25588cb530e37ffc065" pod="kube-system/kube-scheduler-localhost" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/kube-system/pods/kube-scheduler-localhost\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:19.313829 kubelet[2299]: I0710 01:38:19.313812 2299 status_manager.go:851] "Failed to get status for pod" podUID="3f04709fe51ae4ab5abd58e8da771b74" pod="kube-system/kube-controller-manager-localhost" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/kube-system/pods/kube-controller-manager-localhost\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:19.313983 kubelet[2299]: I0710 01:38:19.313907 2299 status_manager.go:851] "Failed to get status for pod" 
podUID="c3f9faf5-cc25-4483-beb9-5dea29a71367" pod="calico-system/whisker-5bc4d9bd7d-nwwj6" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/calico-system/pods/whisker-5bc4d9bd7d-nwwj6\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:19.314077 kubelet[2299]: I0710 01:38:19.314058 2299 status_manager.go:851] "Failed to get status for pod" podUID="9c135a1b-00bf-4e6f-87fa-9ac292c6a135" pod="tigera-operator/tigera-operator-5bf8dfcb4-twgs2" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/tigera-operator/pods/tigera-operator-5bf8dfcb4-twgs2\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:19.314162 kubelet[2299]: I0710 01:38:19.314146 2299 status_manager.go:851] "Failed to get status for pod" podUID="bb9848ea-740a-453f-b511-e75cc1983690" pod="calico-system/calico-typha-66ddcf689b-z7vqm" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/calico-system/pods/calico-typha-66ddcf689b-z7vqm\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:19.314240 kubelet[2299]: I0710 01:38:19.314224 2299 status_manager.go:851] "Failed to get status for pod" podUID="6367e512-6f46-407d-94e1-a5c573185269" pod="calico-system/calico-node-2k6z4" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/calico-system/pods/calico-node-2k6z4\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:19.314321 kubelet[2299]: I0710 01:38:19.314305 2299 status_manager.go:851] "Failed to get status for pod" podUID="5f01bcaa-ff1c-4bd5-988b-d3c60c6abdcc" pod="calico-system/calico-kube-controllers-5477ff879d-j2p5q" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/calico-system/pods/calico-kube-controllers-5477ff879d-j2p5q\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:19.315380 env[1363]: time="2025-07-10T01:38:19.315358225Z" level=info msg="StopPodSandbox for \"131d31244e534a733a530103ddea3666cd2eb72fb0933d89a095d6d044cd52d3\"" Jul 10 01:38:19.315722 env[1363]: time="2025-07-10T01:38:19.315399964Z" level=info msg="Container to stop \"915c58c03353ee54736489abf3797867734b173634b282af0191665aad606e66\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 10 01:38:19.315918 env[1363]: time="2025-07-10T01:38:19.315894805Z" level=info msg="StopPodSandbox for \"d50fd4405e1f03ed2cdfbef802c2261b6b6ef77dbd652ba6fa35f73abffba742\"" Jul 10 01:38:19.315968 env[1363]: time="2025-07-10T01:38:19.315930143Z" level=info msg="Container to stop \"0a7b9b0ea47aa889b6d5597d41d9f5ecf3ccc392f2d5f74cd7be134b392cec28\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 10 01:38:19.316471 env[1363]: time="2025-07-10T01:38:19.316454471Z" level=info msg="StartContainer for \"607cefa353481b5bb0ed6ab19a1fbd68872a9a5dc516917c35557fe47df71906\"" Jul 10 01:38:19.320360 kubelet[2299]: E0710 01:38:19.282419 2299 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://139.178.70.102:6443/api/v1/namespaces/tigera-operator/events\": dial tcp 139.178.70.102:6443: connect: connection refused" event="&Event{ObjectMeta:{tigera-operator-5bf8dfcb4-twgs2.1850c020102e5a9f tigera-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:tigera-operator,Name:tigera-operator-5bf8dfcb4-twgs2,UID:9c135a1b-00bf-4e6f-87fa-9ac292c6a135,APIVersion:v1,ResourceVersion:382,FieldPath:spec.containers{tigera-operator},},Reason:Pulled,Message:Container image \"quay.io/tigera/operator:v1.38.3\" already present on 
machine,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-07-10 01:38:18.990082719 +0000 UTC m=+1518.858509606,LastTimestamp:2025-07-10 01:38:18.990082719 +0000 UTC m=+1518.858509606,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jul 10 01:38:19.323354 env[1363]: time="2025-07-10T01:38:19.323337615Z" level=info msg="StopContainer for \"846639043e3e3375edb5ca693ab2e7bd950e8cc7bfe6f9bd5620ee47769cd79c\" returns successfully" Jul 10 01:38:19.324116 env[1363]: time="2025-07-10T01:38:19.323989011Z" level=info msg="StopPodSandbox for \"47772743ab806984f8c08f88def502ffe4f7fc6e574fb3f0d5b58c702f3e79ff\"" Jul 10 01:38:19.324227 env[1363]: time="2025-07-10T01:38:19.324213945Z" level=info msg="Container to stop \"9613f2d808f30fc610330008107d20687f76b9c5168e9ed86b6bbe227c241755\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 10 01:38:19.324291 env[1363]: time="2025-07-10T01:38:19.324279863Z" level=info msg="Container to stop \"846639043e3e3375edb5ca693ab2e7bd950e8cc7bfe6f9bd5620ee47769cd79c\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 10 01:38:19.342732 env[1363]: time="2025-07-10T01:38:19.342708775Z" level=info msg="CreateContainer within sandbox \"6503e247e079e9b1040ac4f9c23ba0f9f2bd42e5328355dba03928c27dd6e73b\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:1,} returns container id \"8b834cb8605645f5c7c427dfcf3dcbb8b3ef5c7c0f8f023ef0d1fbc5a5c10bd4\"" Jul 10 01:38:19.343889 env[1363]: time="2025-07-10T01:38:19.343870940Z" level=info msg="StartContainer for \"8b834cb8605645f5c7c427dfcf3dcbb8b3ef5c7c0f8f023ef0d1fbc5a5c10bd4\"" Jul 10 01:38:19.417777 env[1363]: time="2025-07-10T01:38:19.417418727Z" level=info msg="shim disconnected" id=131d31244e534a733a530103ddea3666cd2eb72fb0933d89a095d6d044cd52d3 Jul 10 01:38:19.417777 env[1363]: time="2025-07-10T01:38:19.417452745Z" level=warning msg="cleaning up after shim disconnected" id=131d31244e534a733a530103ddea3666cd2eb72fb0933d89a095d6d044cd52d3 namespace=k8s.io Jul 10 01:38:19.417777 env[1363]: time="2025-07-10T01:38:19.417462467Z" level=info msg="cleaning up dead shim" Jul 10 01:38:19.441765 env[1363]: time="2025-07-10T01:38:19.441738546Z" level=info msg="StartContainer for \"607cefa353481b5bb0ed6ab19a1fbd68872a9a5dc516917c35557fe47df71906\" returns successfully" Jul 10 01:38:19.454738 env[1363]: time="2025-07-10T01:38:19.454709610Z" level=info msg="shim disconnected" id=d50fd4405e1f03ed2cdfbef802c2261b6b6ef77dbd652ba6fa35f73abffba742 Jul 10 01:38:19.455328 env[1363]: time="2025-07-10T01:38:19.455315202Z" level=warning msg="cleaning up after shim disconnected" id=d50fd4405e1f03ed2cdfbef802c2261b6b6ef77dbd652ba6fa35f73abffba742 namespace=k8s.io Jul 10 01:38:19.455403 env[1363]: time="2025-07-10T01:38:19.455393058Z" level=info msg="cleaning up dead shim" Jul 10 01:38:19.463707 env[1363]: time="2025-07-10T01:38:19.463680706Z" level=info msg="shim disconnected" id=47772743ab806984f8c08f88def502ffe4f7fc6e574fb3f0d5b58c702f3e79ff Jul 10 01:38:19.464216 env[1363]: time="2025-07-10T01:38:19.464203370Z" level=warning msg="cleaning up after shim disconnected" id=47772743ab806984f8c08f88def502ffe4f7fc6e574fb3f0d5b58c702f3e79ff namespace=k8s.io Jul 10 01:38:19.464284 env[1363]: time="2025-07-10T01:38:19.464273815Z" level=info msg="cleaning up dead shim" Jul 10 01:38:19.473033 env[1363]: time="2025-07-10T01:38:19.473010609Z" level=warning msg="cleanup warnings 
time=\"2025-07-10T01:38:19Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=7837 runtime=io.containerd.runc.v2\n" Jul 10 01:38:19.473333 env[1363]: time="2025-07-10T01:38:19.473314629Z" level=info msg="StartContainer for \"398e05089aac7f1a4c583ad60e245a4c0899df35cfb9775954b69e0bcd3068f4\" returns successfully" Jul 10 01:38:19.478050 env[1363]: time="2025-07-10T01:38:19.478030854Z" level=warning msg="cleanup warnings time=\"2025-07-10T01:38:19Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=7794 runtime=io.containerd.runc.v2\n" Jul 10 01:38:19.485962 env[1363]: time="2025-07-10T01:38:19.485915243Z" level=warning msg="cleanup warnings time=\"2025-07-10T01:38:19Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=7832 runtime=io.containerd.runc.v2\n" Jul 10 01:38:19.528169 env[1363]: time="2025-07-10T01:38:19.527744722Z" level=info msg="StartContainer for \"8b834cb8605645f5c7c427dfcf3dcbb8b3ef5c7c0f8f023ef0d1fbc5a5c10bd4\" returns successfully" Jul 10 01:38:19.771775 systemd[1]: run-containerd-runc-k8s.io-dc40952d28006045e942aa22b5bc381b2f7d35d15ba79973f504ec8ad17ec2d9-runc.ZhCsTK.mount: Deactivated successfully. Jul 10 01:38:19.771860 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0a7b9b0ea47aa889b6d5597d41d9f5ecf3ccc392f2d5f74cd7be134b392cec28-rootfs.mount: Deactivated successfully. Jul 10 01:38:19.771912 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f9259ae361e3731af85557e5b9606bd5ebae0bba6b9af22c45cecaaa08d4539c-rootfs.mount: Deactivated successfully. Jul 10 01:38:19.771962 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-915c58c03353ee54736489abf3797867734b173634b282af0191665aad606e66-rootfs.mount: Deactivated successfully. Jul 10 01:38:19.772012 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9613f2d808f30fc610330008107d20687f76b9c5168e9ed86b6bbe227c241755-rootfs.mount: Deactivated successfully. Jul 10 01:38:19.772059 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d50fd4405e1f03ed2cdfbef802c2261b6b6ef77dbd652ba6fa35f73abffba742-rootfs.mount: Deactivated successfully. Jul 10 01:38:19.772106 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-d50fd4405e1f03ed2cdfbef802c2261b6b6ef77dbd652ba6fa35f73abffba742-shm.mount: Deactivated successfully. Jul 10 01:38:19.772161 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-131d31244e534a733a530103ddea3666cd2eb72fb0933d89a095d6d044cd52d3-rootfs.mount: Deactivated successfully. Jul 10 01:38:19.772208 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-131d31244e534a733a530103ddea3666cd2eb72fb0933d89a095d6d044cd52d3-shm.mount: Deactivated successfully. Jul 10 01:38:19.772265 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-846639043e3e3375edb5ca693ab2e7bd950e8cc7bfe6f9bd5620ee47769cd79c-rootfs.mount: Deactivated successfully. Jul 10 01:38:19.772316 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-47772743ab806984f8c08f88def502ffe4f7fc6e574fb3f0d5b58c702f3e79ff-rootfs.mount: Deactivated successfully. Jul 10 01:38:19.772380 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-47772743ab806984f8c08f88def502ffe4f7fc6e574fb3f0d5b58c702f3e79ff-shm.mount: Deactivated successfully. 
Jul 10 01:38:20.078577 env[1363]: time="2025-07-10T01:38:20.078501157Z" level=error msg="StopPodSandbox for \"131d31244e534a733a530103ddea3666cd2eb72fb0933d89a095d6d044cd52d3\" failed" error="failed to destroy network for sandbox \"131d31244e534a733a530103ddea3666cd2eb72fb0933d89a095d6d044cd52d3\": plugin type=\"calico\" failed (delete): error getting ClusterInformation: Get \"https://10.96.0.1:443/apis/crd.projectcalico.org/v1/clusterinformations/default\": dial tcp 10.96.0.1:443: connect: connection refused" Jul 10 01:38:20.105538 env[1363]: time="2025-07-10T01:38:20.079479219Z" level=error msg="StopPodSandbox for \"d50fd4405e1f03ed2cdfbef802c2261b6b6ef77dbd652ba6fa35f73abffba742\" failed" error="failed to destroy network for sandbox \"d50fd4405e1f03ed2cdfbef802c2261b6b6ef77dbd652ba6fa35f73abffba742\": plugin type=\"calico\" failed (delete): error getting ClusterInformation: Get \"https://10.96.0.1:443/apis/crd.projectcalico.org/v1/clusterinformations/default\": dial tcp 10.96.0.1:443: connect: connection refused" Jul 10 01:38:20.105538 env[1363]: time="2025-07-10T01:38:20.079801343Z" level=error msg="StopPodSandbox for \"47772743ab806984f8c08f88def502ffe4f7fc6e574fb3f0d5b58c702f3e79ff\" failed" error="failed to destroy network for sandbox \"47772743ab806984f8c08f88def502ffe4f7fc6e574fb3f0d5b58c702f3e79ff\": plugin type=\"calico\" failed (delete): error getting ClusterInformation: Get \"https://10.96.0.1:443/apis/crd.projectcalico.org/v1/clusterinformations/default\": dial tcp 10.96.0.1:443: connect: connection refused" Jul 10 01:38:20.305652 kubelet[2299]: E0710 01:38:20.302685 2299 configmap.go:193] Couldn't get configMap calico-system/goldmane: failed to sync configmap cache: timed out waiting for the condition Jul 10 01:38:20.319346 kubelet[2299]: E0710 01:38:20.319326 2299 configmap.go:193] Couldn't get configMap kube-system/coredns: failed to sync configmap cache: timed out waiting for the condition Jul 10 01:38:20.320494 kubelet[2299]: E0710 01:38:20.319358 2299 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"47772743ab806984f8c08f88def502ffe4f7fc6e574fb3f0d5b58c702f3e79ff\": plugin type=\"calico\" failed (delete): error getting ClusterInformation: Get \"https://10.96.0.1:443/apis/crd.projectcalico.org/v1/clusterinformations/default\": dial tcp 10.96.0.1:443: connect: connection refused" podSandboxID="47772743ab806984f8c08f88def502ffe4f7fc6e574fb3f0d5b58c702f3e79ff" Jul 10 01:38:20.331799 kubelet[2299]: E0710 01:38:20.331762 2299 secret.go:189] Couldn't get secret calico-system/typha-certs: failed to sync secret cache: timed out waiting for the condition Jul 10 01:38:20.331843 kubelet[2299]: E0710 01:38:20.331800 2299 secret.go:189] Couldn't get secret calico-system/node-certs: failed to sync secret cache: timed out waiting for the condition Jul 10 01:38:20.331843 kubelet[2299]: E0710 01:38:20.331817 2299 secret.go:189] Couldn't get secret calico-system/goldmane-key-pair: failed to sync secret cache: timed out waiting for the condition Jul 10 01:38:20.331843 kubelet[2299]: E0710 01:38:20.331833 2299 configmap.go:193] Couldn't get configMap calico-system/tigera-ca-bundle: failed to sync configmap cache: timed out waiting for the condition Jul 10 01:38:20.332521 kubelet[2299]: E0710 01:38:20.331848 2299 secret.go:189] Couldn't get secret calico-system/whisker-backend-key-pair: failed to sync secret cache: timed out waiting for the condition Jul 10 01:38:20.332521 kubelet[2299]: E0710 
01:38:20.331862 2299 configmap.go:193] Couldn't get configMap calico-system/goldmane-ca-bundle: failed to sync configmap cache: timed out waiting for the condition Jul 10 01:38:20.369357 kubelet[2299]: E0710 01:38:20.369336 2299 secret.go:189] Couldn't get secret calico-apiserver/calico-apiserver-certs: failed to sync secret cache: timed out waiting for the condition Jul 10 01:38:20.369449 kubelet[2299]: E0710 01:38:20.369385 2299 projected.go:288] Couldn't get configMap calico-apiserver/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Jul 10 01:38:20.389051 kubelet[2299]: E0710 01:38:20.389033 2299 configmap.go:193] Couldn't get configMap calico-system/whisker-ca-bundle: failed to sync configmap cache: timed out waiting for the condition Jul 10 01:38:20.389108 kubelet[2299]: E0710 01:38:20.389070 2299 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Jul 10 01:38:20.400628 kubelet[2299]: E0710 01:38:20.400611 2299 configmap.go:193] Couldn't get configMap calico-system/tigera-ca-bundle: failed to sync configmap cache: timed out waiting for the condition Jul 10 01:38:20.426228 kubelet[2299]: E0710 01:38:20.426205 2299 projected.go:288] Couldn't get configMap calico-apiserver/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Jul 10 01:38:20.427404 kubelet[2299]: E0710 01:38:20.426233 2299 projected.go:194] Error preparing data for projected volume kube-api-access-r5vvj for pod calico-apiserver/calico-apiserver-6d44674bc4-w2f48: failed to sync configmap cache: timed out waiting for the condition Jul 10 01:38:20.432410 kubelet[2299]: E0710 01:38:20.432394 2299 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Jul 10 01:38:20.432410 kubelet[2299]: E0710 01:38:20.432408 2299 projected.go:194] Error preparing data for projected volume kube-api-access-wpcvh for pod kube-system/kube-proxy-rxvps: failed to sync configmap cache: timed out waiting for the condition Jul 10 01:38:20.433544 kubelet[2299]: E0710 01:38:20.432434 2299 configmap.go:193] Couldn't get configMap calico-system/tigera-ca-bundle: failed to sync configmap cache: timed out waiting for the condition Jul 10 01:38:20.433544 kubelet[2299]: E0710 01:38:20.432450 2299 configmap.go:193] Couldn't get configMap kube-system/kube-proxy: failed to sync configmap cache: timed out waiting for the condition Jul 10 01:38:20.433544 kubelet[2299]: E0710 01:38:20.432463 2299 secret.go:189] Couldn't get secret calico-apiserver/calico-apiserver-certs: failed to sync secret cache: timed out waiting for the condition Jul 10 01:38:20.433544 kubelet[2299]: E0710 01:38:20.432476 2299 configmap.go:193] Couldn't get configMap kube-system/coredns: failed to sync configmap cache: timed out waiting for the condition Jul 10 01:38:20.436020 kubelet[2299]: E0710 01:38:20.436007 2299 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Jul 10 01:38:20.436063 kubelet[2299]: E0710 01:38:20.436023 2299 projected.go:194] Error preparing data for projected volume kube-api-access-4bl2z for pod kube-system/coredns-7c65d6cfc9-4k5ld: failed to sync configmap cache: timed out waiting for the condition Jul 10 01:38:20.472853 kubelet[2299]: E0710 01:38:20.472821 2299 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = 
Unknown desc = failed to destroy network for sandbox \"131d31244e534a733a530103ddea3666cd2eb72fb0933d89a095d6d044cd52d3\": plugin type=\"calico\" failed (delete): error getting ClusterInformation: Get \"https://10.96.0.1:443/apis/crd.projectcalico.org/v1/clusterinformations/default\": dial tcp 10.96.0.1:443: connect: connection refused" podSandboxID="131d31244e534a733a530103ddea3666cd2eb72fb0933d89a095d6d044cd52d3" Jul 10 01:38:20.472970 kubelet[2299]: E0710 01:38:20.472857 2299 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"131d31244e534a733a530103ddea3666cd2eb72fb0933d89a095d6d044cd52d3"} Jul 10 01:38:20.487272 kubelet[2299]: E0710 01:38:20.324250 2299 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"47772743ab806984f8c08f88def502ffe4f7fc6e574fb3f0d5b58c702f3e79ff"} Jul 10 01:38:20.488818 kubelet[2299]: E0710 01:38:20.488793 2299 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"d50fd4405e1f03ed2cdfbef802c2261b6b6ef77dbd652ba6fa35f73abffba742\": plugin type=\"calico\" failed (delete): error getting ClusterInformation: Get \"https://10.96.0.1:443/apis/crd.projectcalico.org/v1/clusterinformations/default\": dial tcp 10.96.0.1:443: connect: connection refused" podSandboxID="d50fd4405e1f03ed2cdfbef802c2261b6b6ef77dbd652ba6fa35f73abffba742" Jul 10 01:38:20.488911 kubelet[2299]: E0710 01:38:20.488821 2299 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"d50fd4405e1f03ed2cdfbef802c2261b6b6ef77dbd652ba6fa35f73abffba742"} Jul 10 01:38:20.494298 kubelet[2299]: E0710 01:38:20.494278 2299 kubelet.go:2027] "Unhandled Error" err="failed to \"KillPodSandbox\" for \"c3f9faf5-cc25-4483-beb9-5dea29a71367\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"47772743ab806984f8c08f88def502ffe4f7fc6e574fb3f0d5b58c702f3e79ff\\\": plugin type=\\\"calico\\\" failed (delete): error getting ClusterInformation: Get \\\"https://10.96.0.1:443/apis/crd.projectcalico.org/v1/clusterinformations/default\\\": dial tcp 10.96.0.1:443: connect: connection refused\"" logger="UnhandledError" Jul 10 01:38:20.495652 kubelet[2299]: E0710 01:38:20.495630 2299 projected.go:194] Error preparing data for projected volume kube-api-access-47zqf for pod calico-apiserver/calico-apiserver-6d44674bc4-b2wqb: failed to sync configmap cache: timed out waiting for the condition Jul 10 01:38:20.533948 kubelet[2299]: E0710 01:38:20.533921 2299 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"c3f9faf5-cc25-4483-beb9-5dea29a71367\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"47772743ab806984f8c08f88def502ffe4f7fc6e574fb3f0d5b58c702f3e79ff\\\": plugin type=\\\"calico\\\" failed (delete): error getting ClusterInformation: Get \\\"https://10.96.0.1:443/apis/crd.projectcalico.org/v1/clusterinformations/default\\\": dial tcp 10.96.0.1:443: connect: connection refused\"" pod="calico-system/whisker-5bc4d9bd7d-nwwj6" podUID="c3f9faf5-cc25-4483-beb9-5dea29a71367" Jul 10 01:38:20.536259 kubelet[2299]: E0710 01:38:20.393760 2299 projected.go:194] Error preparing data for projected volume kube-api-access-pwvqb for pod kube-system/coredns-7c65d6cfc9-snhl5: failed to sync configmap cache: timed out waiting for the condition Jul 10 01:38:20.717825 kubelet[2299]: E0710 01:38:20.717746 2299 
kubelet.go:2027] "Unhandled Error" err="failed to \"KillPodSandbox\" for \"8e8146e9-6407-49b7-8cef-e26dac385734\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"131d31244e534a733a530103ddea3666cd2eb72fb0933d89a095d6d044cd52d3\\\": plugin type=\\\"calico\\\" failed (delete): error getting ClusterInformation: Get \\\"https://10.96.0.1:443/apis/crd.projectcalico.org/v1/clusterinformations/default\\\": dial tcp 10.96.0.1:443: connect: connection refused\"" logger="UnhandledError" Jul 10 01:38:20.718669 env[1363]: time="2025-07-10T01:38:20.718214246Z" level=info msg="StopPodSandbox for \"3e37249528bb3e0be92befd65b6647a34c4c854d8942b3cdda871096eeadbddb\"" Jul 10 01:38:20.758328 kubelet[2299]: E0710 01:38:20.758269 2299 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"8e8146e9-6407-49b7-8cef-e26dac385734\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"131d31244e534a733a530103ddea3666cd2eb72fb0933d89a095d6d044cd52d3\\\": plugin type=\\\"calico\\\" failed (delete): error getting ClusterInformation: Get \\\"https://10.96.0.1:443/apis/crd.projectcalico.org/v1/clusterinformations/default\\\": dial tcp 10.96.0.1:443: connect: connection refused\"" pod="calico-apiserver/calico-apiserver-6d44674bc4-w2f48" podUID="8e8146e9-6407-49b7-8cef-e26dac385734" Jul 10 01:38:20.831061 kubelet[2299]: W0710 01:38:20.828405 2299 reflector.go:561] object-"tigera-operator"/"kubernetes-services-endpoint": failed to list *v1.ConfigMap: Get "https://139.178.70.102:6443/api/v1/namespaces/tigera-operator/configmaps?fieldSelector=metadata.name%3Dkubernetes-services-endpoint&resourceVersion=687": dial tcp 139.178.70.102:6443: connect: connection refused Jul 10 01:38:20.831061 kubelet[2299]: E0710 01:38:20.828448 2299 reflector.go:158] "Unhandled Error" err="object-\"tigera-operator\"/\"kubernetes-services-endpoint\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://139.178.70.102:6443/api/v1/namespaces/tigera-operator/configmaps?fieldSelector=metadata.name%3Dkubernetes-services-endpoint&resourceVersion=687\": dial tcp 139.178.70.102:6443: connect: connection refused" logger="UnhandledError" Jul 10 01:38:20.831061 kubelet[2299]: W0710 01:38:20.828501 2299 reflector.go:561] object-"calico-system"/"goldmane": failed to list *v1.ConfigMap: Get "https://139.178.70.102:6443/api/v1/namespaces/calico-system/configmaps?fieldSelector=metadata.name%3Dgoldmane&resourceVersion=687": dial tcp 139.178.70.102:6443: connect: connection refused Jul 10 01:38:20.831061 kubelet[2299]: E0710 01:38:20.828515 2299 reflector.go:158] "Unhandled Error" err="object-\"calico-system\"/\"goldmane\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://139.178.70.102:6443/api/v1/namespaces/calico-system/configmaps?fieldSelector=metadata.name%3Dgoldmane&resourceVersion=687\": dial tcp 139.178.70.102:6443: connect: connection refused" logger="UnhandledError" Jul 10 01:38:20.831061 kubelet[2299]: E0710 01:38:20.828521 2299 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ced04dc5-79ee-4a07-a568-b0fd4007f64c-config podName:ced04dc5-79ee-4a07-a568-b0fd4007f64c nodeName:}" failed. No retries permitted until 2025-07-10 01:38:20.954085472 +0000 UTC m=+1520.822512359 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/ced04dc5-79ee-4a07-a568-b0fd4007f64c-config") pod "goldmane-58fd7646b9-zxwst" (UID: "ced04dc5-79ee-4a07-a568-b0fd4007f64c") : failed to sync configmap cache: timed out waiting for the condition Jul 10 01:38:20.833359 kubelet[2299]: W0710 01:38:20.831364 2299 reflector.go:561] object-"calico-system"/"goldmane-key-pair": failed to list *v1.Secret: Get "https://139.178.70.102:6443/api/v1/namespaces/calico-system/secrets?fieldSelector=metadata.name%3Dgoldmane-key-pair&resourceVersion=612": dial tcp 139.178.70.102:6443: connect: connection refused Jul 10 01:38:20.833359 kubelet[2299]: E0710 01:38:20.831385 2299 reflector.go:158] "Unhandled Error" err="object-\"calico-system\"/\"goldmane-key-pair\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://139.178.70.102:6443/api/v1/namespaces/calico-system/secrets?fieldSelector=metadata.name%3Dgoldmane-key-pair&resourceVersion=612\": dial tcp 139.178.70.102:6443: connect: connection refused" logger="UnhandledError" Jul 10 01:38:20.850551 kubelet[2299]: W0710 01:38:20.850529 2299 reflector.go:561] object-"calico-apiserver"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://139.178.70.102:6443/api/v1/namespaces/calico-apiserver/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=687": dial tcp 139.178.70.102:6443: connect: connection refused Jul 10 01:38:20.851819 kubelet[2299]: E0710 01:38:20.850555 2299 reflector.go:158] "Unhandled Error" err="object-\"calico-apiserver\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://139.178.70.102:6443/api/v1/namespaces/calico-apiserver/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=687\": dial tcp 139.178.70.102:6443: connect: connection refused" logger="UnhandledError" Jul 10 01:38:20.913493 kubelet[2299]: W0710 01:38:20.913465 2299 reflector.go:561] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: Get "https://139.178.70.102:6443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dcoredns&resourceVersion=687": dial tcp 139.178.70.102:6443: connect: connection refused Jul 10 01:38:20.914332 kubelet[2299]: E0710 01:38:20.913505 2299 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"coredns\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://139.178.70.102:6443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dcoredns&resourceVersion=687\": dial tcp 139.178.70.102:6443: connect: connection refused" logger="UnhandledError" Jul 10 01:38:20.914332 kubelet[2299]: E0710 01:38:20.913529 2299 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/a29ef6dc-4246-436d-87dd-9c8e96247aeb-kube-api-access-4bl2z podName:a29ef6dc-4246-436d-87dd-9c8e96247aeb nodeName:}" failed. No retries permitted until 2025-07-10 01:38:21.413514868 +0000 UTC m=+1521.281941748 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-4bl2z" (UniqueName: "kubernetes.io/projected/a29ef6dc-4246-436d-87dd-9c8e96247aeb-kube-api-access-4bl2z") pod "coredns-7c65d6cfc9-4k5ld" (UID: "a29ef6dc-4246-436d-87dd-9c8e96247aeb") : failed to sync configmap cache: timed out waiting for the condition Jul 10 01:38:20.914332 kubelet[2299]: W0710 01:38:20.913593 2299 reflector.go:561] object-"calico-system"/"goldmane-ca-bundle": failed to list *v1.ConfigMap: Get "https://139.178.70.102:6443/api/v1/namespaces/calico-system/configmaps?fieldSelector=metadata.name%3Dgoldmane-ca-bundle&resourceVersion=687": dial tcp 139.178.70.102:6443: connect: connection refused Jul 10 01:38:20.914332 kubelet[2299]: E0710 01:38:20.913609 2299 reflector.go:158] "Unhandled Error" err="object-\"calico-system\"/\"goldmane-ca-bundle\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://139.178.70.102:6443/api/v1/namespaces/calico-system/configmaps?fieldSelector=metadata.name%3Dgoldmane-ca-bundle&resourceVersion=687\": dial tcp 139.178.70.102:6443: connect: connection refused" logger="UnhandledError" Jul 10 01:38:20.914332 kubelet[2299]: E0710 01:38:20.913637 2299 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/74cf1bc5-5d5a-4dc7-850a-71013984af05-kube-api-access-47zqf podName:74cf1bc5-5d5a-4dc7-850a-71013984af05 nodeName:}" failed. No retries permitted until 2025-07-10 01:38:21.413631776 +0000 UTC m=+1521.282058655 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-47zqf" (UniqueName: "kubernetes.io/projected/74cf1bc5-5d5a-4dc7-850a-71013984af05-kube-api-access-47zqf") pod "calico-apiserver-6d44674bc4-b2wqb" (UID: "74cf1bc5-5d5a-4dc7-850a-71013984af05") : failed to sync configmap cache: timed out waiting for the condition Jul 10 01:38:20.914332 kubelet[2299]: E0710 01:38:20.913653 2299 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3459c244-a1ae-43bc-ad86-239a6e665757-kube-api-access-pwvqb podName:3459c244-a1ae-43bc-ad86-239a6e665757 nodeName:}" failed. No retries permitted until 2025-07-10 01:38:21.413648503 +0000 UTC m=+1521.282075382 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-pwvqb" (UniqueName: "kubernetes.io/projected/3459c244-a1ae-43bc-ad86-239a6e665757-kube-api-access-pwvqb") pod "coredns-7c65d6cfc9-snhl5" (UID: "3459c244-a1ae-43bc-ad86-239a6e665757") : failed to sync configmap cache: timed out waiting for the condition Jul 10 01:38:20.914332 kubelet[2299]: E0710 01:38:20.913660 2299 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/a29ef6dc-4246-436d-87dd-9c8e96247aeb-config-volume podName:a29ef6dc-4246-436d-87dd-9c8e96247aeb nodeName:}" failed. No retries permitted until 2025-07-10 01:38:21.413656981 +0000 UTC m=+1521.282083860 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/a29ef6dc-4246-436d-87dd-9c8e96247aeb-config-volume") pod "coredns-7c65d6cfc9-4k5ld" (UID: "a29ef6dc-4246-436d-87dd-9c8e96247aeb") : failed to sync configmap cache: timed out waiting for the condition Jul 10 01:38:20.914332 kubelet[2299]: E0710 01:38:20.913666 2299 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/bb9848ea-740a-453f-b511-e75cc1983690-tigera-ca-bundle podName:bb9848ea-740a-453f-b511-e75cc1983690 nodeName:}" failed. 
No retries permitted until 2025-07-10 01:38:21.413662887 +0000 UTC m=+1521.282089766 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "tigera-ca-bundle" (UniqueName: "kubernetes.io/configmap/bb9848ea-740a-453f-b511-e75cc1983690-tigera-ca-bundle") pod "calico-typha-66ddcf689b-z7vqm" (UID: "bb9848ea-740a-453f-b511-e75cc1983690") : failed to sync configmap cache: timed out waiting for the condition Jul 10 01:38:20.914332 kubelet[2299]: E0710 01:38:20.913672 2299 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ced04dc5-79ee-4a07-a568-b0fd4007f64c-goldmane-ca-bundle podName:ced04dc5-79ee-4a07-a568-b0fd4007f64c nodeName:}" failed. No retries permitted until 2025-07-10 01:38:21.413668932 +0000 UTC m=+1521.282095812 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "goldmane-ca-bundle" (UniqueName: "kubernetes.io/configmap/ced04dc5-79ee-4a07-a568-b0fd4007f64c-goldmane-ca-bundle") pod "goldmane-58fd7646b9-zxwst" (UID: "ced04dc5-79ee-4a07-a568-b0fd4007f64c") : failed to sync configmap cache: timed out waiting for the condition Jul 10 01:38:20.914332 kubelet[2299]: E0710 01:38:20.913678 2299 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c3f9faf5-cc25-4483-beb9-5dea29a71367-whisker-ca-bundle podName:c3f9faf5-cc25-4483-beb9-5dea29a71367 nodeName:}" failed. No retries permitted until 2025-07-10 01:38:21.413674897 +0000 UTC m=+1521.282101776 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "whisker-ca-bundle" (UniqueName: "kubernetes.io/configmap/c3f9faf5-cc25-4483-beb9-5dea29a71367-whisker-ca-bundle") pod "whisker-5bc4d9bd7d-nwwj6" (UID: "c3f9faf5-cc25-4483-beb9-5dea29a71367") : failed to sync configmap cache: timed out waiting for the condition Jul 10 01:38:20.918279 kubelet[2299]: E0710 01:38:20.913696 2299 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5f01bcaa-ff1c-4bd5-988b-d3c60c6abdcc-tigera-ca-bundle podName:5f01bcaa-ff1c-4bd5-988b-d3c60c6abdcc nodeName:}" failed. No retries permitted until 2025-07-10 01:38:21.413686754 +0000 UTC m=+1521.282113633 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "tigera-ca-bundle" (UniqueName: "kubernetes.io/configmap/5f01bcaa-ff1c-4bd5-988b-d3c60c6abdcc-tigera-ca-bundle") pod "calico-kube-controllers-5477ff879d-j2p5q" (UID: "5f01bcaa-ff1c-4bd5-988b-d3c60c6abdcc") : failed to sync configmap cache: timed out waiting for the condition Jul 10 01:38:20.918279 kubelet[2299]: E0710 01:38:20.913710 2299 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/8e8146e9-6407-49b7-8cef-e26dac385734-kube-api-access-r5vvj podName:8e8146e9-6407-49b7-8cef-e26dac385734 nodeName:}" failed. No retries permitted until 2025-07-10 01:38:21.413702174 +0000 UTC m=+1521.282129053 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-r5vvj" (UniqueName: "kubernetes.io/projected/8e8146e9-6407-49b7-8cef-e26dac385734-kube-api-access-r5vvj") pod "calico-apiserver-6d44674bc4-w2f48" (UID: "8e8146e9-6407-49b7-8cef-e26dac385734") : failed to sync configmap cache: timed out waiting for the condition Jul 10 01:38:20.918279 kubelet[2299]: E0710 01:38:20.913717 2299 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/22eb6a01-1430-4380-b1df-6cb2ed0c8d8b-kube-api-access-wpcvh podName:22eb6a01-1430-4380-b1df-6cb2ed0c8d8b nodeName:}" failed. 
No retries permitted until 2025-07-10 01:38:21.413714153 +0000 UTC m=+1521.282141032 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-wpcvh" (UniqueName: "kubernetes.io/projected/22eb6a01-1430-4380-b1df-6cb2ed0c8d8b-kube-api-access-wpcvh") pod "kube-proxy-rxvps" (UID: "22eb6a01-1430-4380-b1df-6cb2ed0c8d8b") : failed to sync configmap cache: timed out waiting for the condition Jul 10 01:38:20.918279 kubelet[2299]: E0710 01:38:20.913724 2299 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6367e512-6f46-407d-94e1-a5c573185269-tigera-ca-bundle podName:6367e512-6f46-407d-94e1-a5c573185269 nodeName:}" failed. No retries permitted until 2025-07-10 01:38:21.413721436 +0000 UTC m=+1521.282148315 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "tigera-ca-bundle" (UniqueName: "kubernetes.io/configmap/6367e512-6f46-407d-94e1-a5c573185269-tigera-ca-bundle") pod "calico-node-2k6z4" (UID: "6367e512-6f46-407d-94e1-a5c573185269") : failed to sync configmap cache: timed out waiting for the condition Jul 10 01:38:20.918279 kubelet[2299]: E0710 01:38:20.913731 2299 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/22eb6a01-1430-4380-b1df-6cb2ed0c8d8b-kube-proxy podName:22eb6a01-1430-4380-b1df-6cb2ed0c8d8b nodeName:}" failed. No retries permitted until 2025-07-10 01:38:21.413727538 +0000 UTC m=+1521.282154418 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-proxy" (UniqueName: "kubernetes.io/configmap/22eb6a01-1430-4380-b1df-6cb2ed0c8d8b-kube-proxy") pod "kube-proxy-rxvps" (UID: "22eb6a01-1430-4380-b1df-6cb2ed0c8d8b") : failed to sync configmap cache: timed out waiting for the condition Jul 10 01:38:20.918279 kubelet[2299]: E0710 01:38:20.913737 2299 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/3459c244-a1ae-43bc-ad86-239a6e665757-config-volume podName:3459c244-a1ae-43bc-ad86-239a6e665757 nodeName:}" failed. No retries permitted until 2025-07-10 01:38:21.413733556 +0000 UTC m=+1521.282160435 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/3459c244-a1ae-43bc-ad86-239a6e665757-config-volume") pod "coredns-7c65d6cfc9-snhl5" (UID: "3459c244-a1ae-43bc-ad86-239a6e665757") : failed to sync configmap cache: timed out waiting for the condition Jul 10 01:38:20.918279 kubelet[2299]: E0710 01:38:20.913743 2299 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bb9848ea-740a-453f-b511-e75cc1983690-typha-certs podName:bb9848ea-740a-453f-b511-e75cc1983690 nodeName:}" failed. No retries permitted until 2025-07-10 01:38:21.413739691 +0000 UTC m=+1521.282166571 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "typha-certs" (UniqueName: "kubernetes.io/secret/bb9848ea-740a-453f-b511-e75cc1983690-typha-certs") pod "calico-typha-66ddcf689b-z7vqm" (UID: "bb9848ea-740a-453f-b511-e75cc1983690") : failed to sync secret cache: timed out waiting for the condition Jul 10 01:38:20.918279 kubelet[2299]: E0710 01:38:20.913749 2299 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6367e512-6f46-407d-94e1-a5c573185269-node-certs podName:6367e512-6f46-407d-94e1-a5c573185269 nodeName:}" failed. No retries permitted until 2025-07-10 01:38:21.413745795 +0000 UTC m=+1521.282172675 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "node-certs" (UniqueName: "kubernetes.io/secret/6367e512-6f46-407d-94e1-a5c573185269-node-certs") pod "calico-node-2k6z4" (UID: "6367e512-6f46-407d-94e1-a5c573185269") : failed to sync secret cache: timed out waiting for the condition Jul 10 01:38:20.918279 kubelet[2299]: E0710 01:38:20.913756 2299 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ced04dc5-79ee-4a07-a568-b0fd4007f64c-goldmane-key-pair podName:ced04dc5-79ee-4a07-a568-b0fd4007f64c nodeName:}" failed. No retries permitted until 2025-07-10 01:38:21.413752652 +0000 UTC m=+1521.282179531 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "goldmane-key-pair" (UniqueName: "kubernetes.io/secret/ced04dc5-79ee-4a07-a568-b0fd4007f64c-goldmane-key-pair") pod "goldmane-58fd7646b9-zxwst" (UID: "ced04dc5-79ee-4a07-a568-b0fd4007f64c") : failed to sync secret cache: timed out waiting for the condition Jul 10 01:38:20.920095 kubelet[2299]: E0710 01:38:20.913762 2299 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c3f9faf5-cc25-4483-beb9-5dea29a71367-whisker-backend-key-pair podName:c3f9faf5-cc25-4483-beb9-5dea29a71367 nodeName:}" failed. No retries permitted until 2025-07-10 01:38:21.413759421 +0000 UTC m=+1521.282186301 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "whisker-backend-key-pair" (UniqueName: "kubernetes.io/secret/c3f9faf5-cc25-4483-beb9-5dea29a71367-whisker-backend-key-pair") pod "whisker-5bc4d9bd7d-nwwj6" (UID: "c3f9faf5-cc25-4483-beb9-5dea29a71367") : failed to sync secret cache: timed out waiting for the condition Jul 10 01:38:20.920095 kubelet[2299]: E0710 01:38:20.913770 2299 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/74cf1bc5-5d5a-4dc7-850a-71013984af05-calico-apiserver-certs podName:74cf1bc5-5d5a-4dc7-850a-71013984af05 nodeName:}" failed. No retries permitted until 2025-07-10 01:38:21.413766091 +0000 UTC m=+1521.282192971 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "calico-apiserver-certs" (UniqueName: "kubernetes.io/secret/74cf1bc5-5d5a-4dc7-850a-71013984af05-calico-apiserver-certs") pod "calico-apiserver-6d44674bc4-b2wqb" (UID: "74cf1bc5-5d5a-4dc7-850a-71013984af05") : failed to sync secret cache: timed out waiting for the condition Jul 10 01:38:20.920095 kubelet[2299]: E0710 01:38:20.913776 2299 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8e8146e9-6407-49b7-8cef-e26dac385734-calico-apiserver-certs podName:8e8146e9-6407-49b7-8cef-e26dac385734 nodeName:}" failed. No retries permitted until 2025-07-10 01:38:21.413772993 +0000 UTC m=+1521.282199872 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "calico-apiserver-certs" (UniqueName: "kubernetes.io/secret/8e8146e9-6407-49b7-8cef-e26dac385734-calico-apiserver-certs") pod "calico-apiserver-6d44674bc4-w2f48" (UID: "8e8146e9-6407-49b7-8cef-e26dac385734") : failed to sync secret cache: timed out waiting for the condition Jul 10 01:38:20.920095 kubelet[2299]: W0710 01:38:20.913925 2299 reflector.go:561] object-"tigera-operator"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://139.178.70.102:6443/api/v1/namespaces/tigera-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=687": dial tcp 139.178.70.102:6443: connect: connection refused Jul 10 01:38:20.920095 kubelet[2299]: E0710 01:38:20.913944 2299 reflector.go:158] "Unhandled Error" err="object-\"tigera-operator\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://139.178.70.102:6443/api/v1/namespaces/tigera-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=687\": dial tcp 139.178.70.102:6443: connect: connection refused" logger="UnhandledError" Jul 10 01:38:20.920095 kubelet[2299]: W0710 01:38:20.913983 2299 reflector.go:561] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://139.178.70.102:6443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=687": dial tcp 139.178.70.102:6443: connect: connection refused Jul 10 01:38:20.920095 kubelet[2299]: E0710 01:38:20.913995 2299 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://139.178.70.102:6443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=687\": dial tcp 139.178.70.102:6443: connect: connection refused" logger="UnhandledError" Jul 10 01:38:20.920095 kubelet[2299]: E0710 01:38:20.914003 2299 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:tigera-operator,Image:quay.io/tigera/operator:v1.38.3,Command:[operator],Args:[-manage-crds=true],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:WATCH_NAMESPACE,Value:,ValueFrom:nil,},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:OPERATOR_NAME,Value:tigera-operator,ValueFrom:nil,},EnvVar{Name:TIGERA_OPERATOR_INIT_IMAGE_VERSION,Value:v1.38.3,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:var-lib-calico,ReadOnly:true,MountPath:/var/lib/calico,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mj7k8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:kubernetes-services-endpoint,},Optional:*true,},SecretRef:nil,},},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} 
start failed in pod tigera-operator-5bf8dfcb4-twgs2_tigera-operator(9c135a1b-00bf-4e6f-87fa-9ac292c6a135): CreateContainerConfigError: failed to sync configmap cache: timed out waiting for the condition" logger="UnhandledError" Jul 10 01:38:20.936613 kubelet[2299]: W0710 01:38:20.936593 2299 reflector.go:561] object-"calico-system"/"node-certs": failed to list *v1.Secret: Get "https://139.178.70.102:6443/api/v1/namespaces/calico-system/secrets?fieldSelector=metadata.name%3Dnode-certs&resourceVersion=612": dial tcp 139.178.70.102:6443: connect: connection refused Jul 10 01:38:20.936674 kubelet[2299]: E0710 01:38:20.936617 2299 reflector.go:158] "Unhandled Error" err="object-\"calico-system\"/\"node-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://139.178.70.102:6443/api/v1/namespaces/calico-system/secrets?fieldSelector=metadata.name%3Dnode-certs&resourceVersion=612\": dial tcp 139.178.70.102:6443: connect: connection refused" logger="UnhandledError" Jul 10 01:38:20.940495 kubelet[2299]: W0710 01:38:20.940409 2299 reflector.go:561] object-"calico-apiserver"/"calico-apiserver-certs": failed to list *v1.Secret: Get "https://139.178.70.102:6443/api/v1/namespaces/calico-apiserver/secrets?fieldSelector=metadata.name%3Dcalico-apiserver-certs&resourceVersion=612": dial tcp 139.178.70.102:6443: connect: connection refused Jul 10 01:38:20.940570 kubelet[2299]: E0710 01:38:20.940558 2299 reflector.go:158] "Unhandled Error" err="object-\"calico-apiserver\"/\"calico-apiserver-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://139.178.70.102:6443/api/v1/namespaces/calico-apiserver/secrets?fieldSelector=metadata.name%3Dcalico-apiserver-certs&resourceVersion=612\": dial tcp 139.178.70.102:6443: connect: connection refused" logger="UnhandledError" Jul 10 01:38:20.940675 kubelet[2299]: W0710 01:38:20.940663 2299 reflector.go:561] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: Get "https://139.178.70.102:6443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dkube-proxy&resourceVersion=687": dial tcp 139.178.70.102:6443: connect: connection refused Jul 10 01:38:20.940741 kubelet[2299]: E0710 01:38:20.940730 2299 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-proxy\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://139.178.70.102:6443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dkube-proxy&resourceVersion=687\": dial tcp 139.178.70.102:6443: connect: connection refused" logger="UnhandledError" Jul 10 01:38:20.940835 kubelet[2299]: W0710 01:38:20.940816 2299 reflector.go:561] object-"calico-system"/"typha-certs": failed to list *v1.Secret: Get "https://139.178.70.102:6443/api/v1/namespaces/calico-system/secrets?fieldSelector=metadata.name%3Dtypha-certs&resourceVersion=612": dial tcp 139.178.70.102:6443: connect: connection refused Jul 10 01:38:20.940897 kubelet[2299]: E0710 01:38:20.940886 2299 reflector.go:158] "Unhandled Error" err="object-\"calico-system\"/\"typha-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://139.178.70.102:6443/api/v1/namespaces/calico-system/secrets?fieldSelector=metadata.name%3Dtypha-certs&resourceVersion=612\": dial tcp 139.178.70.102:6443: connect: connection refused" logger="UnhandledError" Jul 10 01:38:20.948200 kubelet[2299]: E0710 01:38:20.948179 2299 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tigera-operator\" with 
CreateContainerConfigError: \"failed to sync configmap cache: timed out waiting for the condition\"" pod="tigera-operator/tigera-operator-5bf8dfcb4-twgs2" podUID="9c135a1b-00bf-4e6f-87fa-9ac292c6a135" Jul 10 01:38:21.081292 env[1363]: time="2025-07-10T01:38:21.081235370Z" level=error msg="StopPodSandbox for \"3e37249528bb3e0be92befd65b6647a34c4c854d8942b3cdda871096eeadbddb\" failed" error="failed to destroy network for sandbox \"3e37249528bb3e0be92befd65b6647a34c4c854d8942b3cdda871096eeadbddb\": plugin type=\"calico\" failed (delete): error getting ClusterInformation: Get \"https://10.96.0.1:443/apis/crd.projectcalico.org/v1/clusterinformations/default\": dial tcp 10.96.0.1:443: connect: connection refused" Jul 10 01:38:21.105591 kubelet[2299]: W0710 01:38:21.105568 2299 reflector.go:561] object-"calico-system"/"whisker-ca-bundle": failed to list *v1.ConfigMap: Get "https://139.178.70.102:6443/api/v1/namespaces/calico-system/configmaps?fieldSelector=metadata.name%3Dwhisker-ca-bundle&resourceVersion=687": dial tcp 139.178.70.102:6443: connect: connection refused Jul 10 01:38:21.105675 kubelet[2299]: E0710 01:38:21.105603 2299 reflector.go:158] "Unhandled Error" err="object-\"calico-system\"/\"whisker-ca-bundle\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://139.178.70.102:6443/api/v1/namespaces/calico-system/configmaps?fieldSelector=metadata.name%3Dwhisker-ca-bundle&resourceVersion=687\": dial tcp 139.178.70.102:6443: connect: connection refused" logger="UnhandledError" Jul 10 01:38:21.153953 kubelet[2299]: W0710 01:38:21.153919 2299 reflector.go:561] object-"calico-system"/"whisker-backend-key-pair": failed to list *v1.Secret: Get "https://139.178.70.102:6443/api/v1/namespaces/calico-system/secrets?fieldSelector=metadata.name%3Dwhisker-backend-key-pair&resourceVersion=612": dial tcp 139.178.70.102:6443: connect: connection refused Jul 10 01:38:21.154106 kubelet[2299]: E0710 01:38:21.153958 2299 reflector.go:158] "Unhandled Error" err="object-\"calico-system\"/\"whisker-backend-key-pair\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://139.178.70.102:6443/api/v1/namespaces/calico-system/secrets?fieldSelector=metadata.name%3Dwhisker-backend-key-pair&resourceVersion=612\": dial tcp 139.178.70.102:6443: connect: connection refused" logger="UnhandledError" Jul 10 01:38:21.196613 kubelet[2299]: W0710 01:38:21.195401 2299 reflector.go:561] object-"calico-system"/"tigera-ca-bundle": failed to list *v1.ConfigMap: Get "https://139.178.70.102:6443/api/v1/namespaces/calico-system/configmaps?fieldSelector=metadata.name%3Dtigera-ca-bundle&resourceVersion=687": dial tcp 139.178.70.102:6443: connect: connection refused Jul 10 01:38:21.196613 kubelet[2299]: E0710 01:38:21.195456 2299 reflector.go:158] "Unhandled Error" err="object-\"calico-system\"/\"tigera-ca-bundle\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://139.178.70.102:6443/api/v1/namespaces/calico-system/configmaps?fieldSelector=metadata.name%3Dtigera-ca-bundle&resourceVersion=687\": dial tcp 139.178.70.102:6443: connect: connection refused" logger="UnhandledError" Jul 10 01:38:21.427190 kubelet[2299]: E0710 01:38:21.422371 2299 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"3e37249528bb3e0be92befd65b6647a34c4c854d8942b3cdda871096eeadbddb\": plugin type=\"calico\" failed (delete): error getting ClusterInformation: Get 
\"https://10.96.0.1:443/apis/crd.projectcalico.org/v1/clusterinformations/default\": dial tcp 10.96.0.1:443: connect: connection refused" podSandboxID="3e37249528bb3e0be92befd65b6647a34c4c854d8942b3cdda871096eeadbddb" Jul 10 01:38:21.428898 kubelet[2299]: E0710 01:38:21.427432 2299 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"3e37249528bb3e0be92befd65b6647a34c4c854d8942b3cdda871096eeadbddb"} Jul 10 01:38:21.428898 kubelet[2299]: E0710 01:38:21.427480 2299 kubelet.go:2027] "Unhandled Error" err="failed to \"KillPodSandbox\" for \"ced04dc5-79ee-4a07-a568-b0fd4007f64c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3e37249528bb3e0be92befd65b6647a34c4c854d8942b3cdda871096eeadbddb\\\": plugin type=\\\"calico\\\" failed (delete): error getting ClusterInformation: Get \\\"https://10.96.0.1:443/apis/crd.projectcalico.org/v1/clusterinformations/default\\\": dial tcp 10.96.0.1:443: connect: connection refused\"" logger="UnhandledError" Jul 10 01:38:21.452317 kubelet[2299]: E0710 01:38:21.452291 2299 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"ced04dc5-79ee-4a07-a568-b0fd4007f64c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3e37249528bb3e0be92befd65b6647a34c4c854d8942b3cdda871096eeadbddb\\\": plugin type=\\\"calico\\\" failed (delete): error getting ClusterInformation: Get \\\"https://10.96.0.1:443/apis/crd.projectcalico.org/v1/clusterinformations/default\\\": dial tcp 10.96.0.1:443: connect: connection refused\"" pod="calico-system/goldmane-58fd7646b9-zxwst" podUID="ced04dc5-79ee-4a07-a568-b0fd4007f64c" Jul 10 01:38:21.549856 kubelet[2299]: I0710 01:38:21.549827 2299 status_manager.go:851] "Failed to get status for pod" podUID="9c135a1b-00bf-4e6f-87fa-9ac292c6a135" pod="tigera-operator/tigera-operator-5bf8dfcb4-twgs2" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/tigera-operator/pods/tigera-operator-5bf8dfcb4-twgs2\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:21.549969 kubelet[2299]: I0710 01:38:21.549951 2299 status_manager.go:851] "Failed to get status for pod" podUID="bb9848ea-740a-453f-b511-e75cc1983690" pod="calico-system/calico-typha-66ddcf689b-z7vqm" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/calico-system/pods/calico-typha-66ddcf689b-z7vqm\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:21.550519 kubelet[2299]: I0710 01:38:21.550037 2299 status_manager.go:851] "Failed to get status for pod" podUID="6367e512-6f46-407d-94e1-a5c573185269" pod="calico-system/calico-node-2k6z4" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/calico-system/pods/calico-node-2k6z4\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:21.550519 kubelet[2299]: I0710 01:38:21.550113 2299 status_manager.go:851] "Failed to get status for pod" podUID="5f01bcaa-ff1c-4bd5-988b-d3c60c6abdcc" pod="calico-system/calico-kube-controllers-5477ff879d-j2p5q" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/calico-system/pods/calico-kube-controllers-5477ff879d-j2p5q\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:21.550519 kubelet[2299]: I0710 01:38:21.550185 2299 status_manager.go:851] "Failed to get status for pod" podUID="8acd60714a0f0f6f5e038fa659db2909" pod="kube-system/kube-apiserver-localhost" err="Get 
\"https://139.178.70.102:6443/api/v1/namespaces/kube-system/pods/kube-apiserver-localhost\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:21.550519 kubelet[2299]: I0710 01:38:21.550272 2299 status_manager.go:851] "Failed to get status for pod" podUID="8e8146e9-6407-49b7-8cef-e26dac385734" pod="calico-apiserver/calico-apiserver-6d44674bc4-w2f48" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/calico-apiserver/pods/calico-apiserver-6d44674bc4-w2f48\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:21.550519 kubelet[2299]: I0710 01:38:21.550349 2299 status_manager.go:851] "Failed to get status for pod" podUID="a29ef6dc-4246-436d-87dd-9c8e96247aeb" pod="kube-system/coredns-7c65d6cfc9-4k5ld" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-4k5ld\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:21.550519 kubelet[2299]: I0710 01:38:21.550429 2299 status_manager.go:851] "Failed to get status for pod" podUID="3459c244-a1ae-43bc-ad86-239a6e665757" pod="kube-system/coredns-7c65d6cfc9-snhl5" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-snhl5\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:21.550519 kubelet[2299]: I0710 01:38:21.550505 2299 status_manager.go:851] "Failed to get status for pod" podUID="ced04dc5-79ee-4a07-a568-b0fd4007f64c" pod="calico-system/goldmane-58fd7646b9-zxwst" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/calico-system/pods/goldmane-58fd7646b9-zxwst\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:21.552467 kubelet[2299]: I0710 01:38:21.550582 2299 status_manager.go:851] "Failed to get status for pod" podUID="b35b56493416c25588cb530e37ffc065" pod="kube-system/kube-scheduler-localhost" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/kube-system/pods/kube-scheduler-localhost\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:21.616481 kubelet[2299]: I0710 01:38:21.616446 2299 status_manager.go:851] "Failed to get status for pod" podUID="3f04709fe51ae4ab5abd58e8da771b74" pod="kube-system/kube-controller-manager-localhost" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/kube-system/pods/kube-controller-manager-localhost\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:21.653531 kubelet[2299]: I0710 01:38:21.653495 2299 status_manager.go:851] "Failed to get status for pod" podUID="c3f9faf5-cc25-4483-beb9-5dea29a71367" pod="calico-system/whisker-5bc4d9bd7d-nwwj6" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/calico-system/pods/whisker-5bc4d9bd7d-nwwj6\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:21.690958 kubelet[2299]: E0710 01:38:21.690905 2299 kubelet.go:2512] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.188s" Jul 10 01:38:21.716056 kubelet[2299]: I0710 01:38:21.716028 2299 status_manager.go:851] "Failed to get status for pod" podUID="9c135a1b-00bf-4e6f-87fa-9ac292c6a135" pod="tigera-operator/tigera-operator-5bf8dfcb4-twgs2" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/tigera-operator/pods/tigera-operator-5bf8dfcb4-twgs2\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:21.718091 kubelet[2299]: I0710 01:38:21.718071 2299 status_manager.go:851] "Failed to get status for pod" podUID="bb9848ea-740a-453f-b511-e75cc1983690" 
pod="calico-system/calico-typha-66ddcf689b-z7vqm" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/calico-system/pods/calico-typha-66ddcf689b-z7vqm\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:21.737375 kubelet[2299]: I0710 01:38:21.737350 2299 status_manager.go:851] "Failed to get status for pod" podUID="6367e512-6f46-407d-94e1-a5c573185269" pod="calico-system/calico-node-2k6z4" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/calico-system/pods/calico-node-2k6z4\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:21.740481 kubelet[2299]: I0710 01:38:21.740464 2299 status_manager.go:851] "Failed to get status for pod" podUID="5f01bcaa-ff1c-4bd5-988b-d3c60c6abdcc" pod="calico-system/calico-kube-controllers-5477ff879d-j2p5q" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/calico-system/pods/calico-kube-controllers-5477ff879d-j2p5q\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:21.740676 kubelet[2299]: I0710 01:38:21.740662 2299 status_manager.go:851] "Failed to get status for pod" podUID="8acd60714a0f0f6f5e038fa659db2909" pod="kube-system/kube-apiserver-localhost" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/kube-system/pods/kube-apiserver-localhost\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:21.740862 kubelet[2299]: I0710 01:38:21.740848 2299 status_manager.go:851] "Failed to get status for pod" podUID="8e8146e9-6407-49b7-8cef-e26dac385734" pod="calico-apiserver/calico-apiserver-6d44674bc4-w2f48" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/calico-apiserver/pods/calico-apiserver-6d44674bc4-w2f48\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:21.749739 kubelet[2299]: I0710 01:38:21.749727 2299 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="131d31244e534a733a530103ddea3666cd2eb72fb0933d89a095d6d044cd52d3" Jul 10 01:38:21.752247 kubelet[2299]: I0710 01:38:21.752233 2299 status_manager.go:851] "Failed to get status for pod" podUID="a29ef6dc-4246-436d-87dd-9c8e96247aeb" pod="kube-system/coredns-7c65d6cfc9-4k5ld" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-4k5ld\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:21.752409 kubelet[2299]: I0710 01:38:21.752396 2299 status_manager.go:851] "Failed to get status for pod" podUID="3459c244-a1ae-43bc-ad86-239a6e665757" pod="kube-system/coredns-7c65d6cfc9-snhl5" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-snhl5\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:21.752463 env[1363]: time="2025-07-10T01:38:21.752441606Z" level=info msg="StopPodSandbox for \"131d31244e534a733a530103ddea3666cd2eb72fb0933d89a095d6d044cd52d3\"" Jul 10 01:38:21.752519 env[1363]: time="2025-07-10T01:38:21.752484978Z" level=info msg="Container to stop \"915c58c03353ee54736489abf3797867734b173634b282af0191665aad606e66\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 10 01:38:21.752680 kubelet[2299]: I0710 01:38:21.752666 2299 status_manager.go:851] "Failed to get status for pod" podUID="ced04dc5-79ee-4a07-a568-b0fd4007f64c" pod="calico-system/goldmane-58fd7646b9-zxwst" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/calico-system/pods/goldmane-58fd7646b9-zxwst\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:21.753353 kubelet[2299]: I0710 
01:38:21.752807 2299 status_manager.go:851] "Failed to get status for pod" podUID="b35b56493416c25588cb530e37ffc065" pod="kube-system/kube-scheduler-localhost" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/kube-system/pods/kube-scheduler-localhost\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:21.753353 kubelet[2299]: I0710 01:38:21.752944 2299 status_manager.go:851] "Failed to get status for pod" podUID="3f04709fe51ae4ab5abd58e8da771b74" pod="kube-system/kube-controller-manager-localhost" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/kube-system/pods/kube-controller-manager-localhost\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:21.753353 kubelet[2299]: I0710 01:38:21.753032 2299 status_manager.go:851] "Failed to get status for pod" podUID="c3f9faf5-cc25-4483-beb9-5dea29a71367" pod="calico-system/whisker-5bc4d9bd7d-nwwj6" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/calico-system/pods/whisker-5bc4d9bd7d-nwwj6\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:21.753353 kubelet[2299]: I0710 01:38:21.753125 2299 status_manager.go:851] "Failed to get status for pod" podUID="ced04dc5-79ee-4a07-a568-b0fd4007f64c" pod="calico-system/goldmane-58fd7646b9-zxwst" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/calico-system/pods/goldmane-58fd7646b9-zxwst\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:21.753353 kubelet[2299]: I0710 01:38:21.753205 2299 status_manager.go:851] "Failed to get status for pod" podUID="8acd60714a0f0f6f5e038fa659db2909" pod="kube-system/kube-apiserver-localhost" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/kube-system/pods/kube-apiserver-localhost\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:21.753353 kubelet[2299]: I0710 01:38:21.753286 2299 status_manager.go:851] "Failed to get status for pod" podUID="8e8146e9-6407-49b7-8cef-e26dac385734" pod="calico-apiserver/calico-apiserver-6d44674bc4-w2f48" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/calico-apiserver/pods/calico-apiserver-6d44674bc4-w2f48\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:21.754463 kubelet[2299]: I0710 01:38:21.753536 2299 status_manager.go:851] "Failed to get status for pod" podUID="a29ef6dc-4246-436d-87dd-9c8e96247aeb" pod="kube-system/coredns-7c65d6cfc9-4k5ld" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-4k5ld\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:21.754463 kubelet[2299]: I0710 01:38:21.753681 2299 status_manager.go:851] "Failed to get status for pod" podUID="3459c244-a1ae-43bc-ad86-239a6e665757" pod="kube-system/coredns-7c65d6cfc9-snhl5" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-snhl5\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:21.754463 kubelet[2299]: I0710 01:38:21.753760 2299 status_manager.go:851] "Failed to get status for pod" podUID="b35b56493416c25588cb530e37ffc065" pod="kube-system/kube-scheduler-localhost" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/kube-system/pods/kube-scheduler-localhost\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:21.754463 kubelet[2299]: I0710 01:38:21.753842 2299 status_manager.go:851] "Failed to get status for pod" podUID="3f04709fe51ae4ab5abd58e8da771b74" pod="kube-system/kube-controller-manager-localhost" err="Get 
\"https://139.178.70.102:6443/api/v1/namespaces/kube-system/pods/kube-controller-manager-localhost\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:21.754463 kubelet[2299]: I0710 01:38:21.754012 2299 status_manager.go:851] "Failed to get status for pod" podUID="c3f9faf5-cc25-4483-beb9-5dea29a71367" pod="calico-system/whisker-5bc4d9bd7d-nwwj6" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/calico-system/pods/whisker-5bc4d9bd7d-nwwj6\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:21.754463 kubelet[2299]: I0710 01:38:21.754088 2299 status_manager.go:851] "Failed to get status for pod" podUID="9c135a1b-00bf-4e6f-87fa-9ac292c6a135" pod="tigera-operator/tigera-operator-5bf8dfcb4-twgs2" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/tigera-operator/pods/tigera-operator-5bf8dfcb4-twgs2\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:21.754463 kubelet[2299]: I0710 01:38:21.754159 2299 status_manager.go:851] "Failed to get status for pod" podUID="bb9848ea-740a-453f-b511-e75cc1983690" pod="calico-system/calico-typha-66ddcf689b-z7vqm" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/calico-system/pods/calico-typha-66ddcf689b-z7vqm\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:21.754463 kubelet[2299]: I0710 01:38:21.754286 2299 status_manager.go:851] "Failed to get status for pod" podUID="6367e512-6f46-407d-94e1-a5c573185269" pod="calico-system/calico-node-2k6z4" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/calico-system/pods/calico-node-2k6z4\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:21.871023 kubelet[2299]: I0710 01:38:21.870994 2299 status_manager.go:851] "Failed to get status for pod" podUID="5f01bcaa-ff1c-4bd5-988b-d3c60c6abdcc" pod="calico-system/calico-kube-controllers-5477ff879d-j2p5q" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/calico-system/pods/calico-kube-controllers-5477ff879d-j2p5q\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:21.873792 env[1363]: time="2025-07-10T01:38:21.873759314Z" level=error msg="StopPodSandbox for \"131d31244e534a733a530103ddea3666cd2eb72fb0933d89a095d6d044cd52d3\" failed" error="failed to destroy network for sandbox \"131d31244e534a733a530103ddea3666cd2eb72fb0933d89a095d6d044cd52d3\": plugin type=\"calico\" failed (delete): error getting ClusterInformation: Get \"https://10.96.0.1:443/apis/crd.projectcalico.org/v1/clusterinformations/default\": dial tcp 10.96.0.1:443: connect: connection refused" Jul 10 01:38:21.874359 kubelet[2299]: I0710 01:38:21.874344 2299 status_manager.go:851] "Failed to get status for pod" podUID="3459c244-a1ae-43bc-ad86-239a6e665757" pod="kube-system/coredns-7c65d6cfc9-snhl5" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-snhl5\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:21.874768 kubelet[2299]: E0710 01:38:21.874521 2299 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"131d31244e534a733a530103ddea3666cd2eb72fb0933d89a095d6d044cd52d3\": plugin type=\"calico\" failed (delete): error getting ClusterInformation: Get \"https://10.96.0.1:443/apis/crd.projectcalico.org/v1/clusterinformations/default\": dial tcp 10.96.0.1:443: connect: connection refused" podSandboxID="131d31244e534a733a530103ddea3666cd2eb72fb0933d89a095d6d044cd52d3" Jul 10 01:38:21.874768 
kubelet[2299]: E0710 01:38:21.874541 2299 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"131d31244e534a733a530103ddea3666cd2eb72fb0933d89a095d6d044cd52d3"} Jul 10 01:38:21.874768 kubelet[2299]: E0710 01:38:21.874568 2299 kubelet.go:2027] "Unhandled Error" err="failed to \"KillPodSandbox\" for \"8e8146e9-6407-49b7-8cef-e26dac385734\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"131d31244e534a733a530103ddea3666cd2eb72fb0933d89a095d6d044cd52d3\\\": plugin type=\\\"calico\\\" failed (delete): error getting ClusterInformation: Get \\\"https://10.96.0.1:443/apis/crd.projectcalico.org/v1/clusterinformations/default\\\": dial tcp 10.96.0.1:443: connect: connection refused\"" logger="UnhandledError" Jul 10 01:38:21.874768 kubelet[2299]: I0710 01:38:21.874689 2299 status_manager.go:851] "Failed to get status for pod" podUID="ced04dc5-79ee-4a07-a568-b0fd4007f64c" pod="calico-system/goldmane-58fd7646b9-zxwst" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/calico-system/pods/goldmane-58fd7646b9-zxwst\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:21.875011 kubelet[2299]: I0710 01:38:21.874998 2299 status_manager.go:851] "Failed to get status for pod" podUID="8acd60714a0f0f6f5e038fa659db2909" pod="kube-system/kube-apiserver-localhost" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/kube-system/pods/kube-apiserver-localhost\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:21.875164 kubelet[2299]: I0710 01:38:21.875150 2299 status_manager.go:851] "Failed to get status for pod" podUID="8e8146e9-6407-49b7-8cef-e26dac385734" pod="calico-apiserver/calico-apiserver-6d44674bc4-w2f48" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/calico-apiserver/pods/calico-apiserver-6d44674bc4-w2f48\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:21.875601 kubelet[2299]: E0710 01:38:21.875586 2299 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"8e8146e9-6407-49b7-8cef-e26dac385734\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"131d31244e534a733a530103ddea3666cd2eb72fb0933d89a095d6d044cd52d3\\\": plugin type=\\\"calico\\\" failed (delete): error getting ClusterInformation: Get \\\"https://10.96.0.1:443/apis/crd.projectcalico.org/v1/clusterinformations/default\\\": dial tcp 10.96.0.1:443: connect: connection refused\"" pod="calico-apiserver/calico-apiserver-6d44674bc4-w2f48" podUID="8e8146e9-6407-49b7-8cef-e26dac385734" Jul 10 01:38:21.875759 kubelet[2299]: I0710 01:38:21.875745 2299 status_manager.go:851] "Failed to get status for pod" podUID="a29ef6dc-4246-436d-87dd-9c8e96247aeb" pod="kube-system/coredns-7c65d6cfc9-4k5ld" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-4k5ld\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:21.875892 kubelet[2299]: I0710 01:38:21.875880 2299 status_manager.go:851] "Failed to get status for pod" podUID="c3f9faf5-cc25-4483-beb9-5dea29a71367" pod="calico-system/whisker-5bc4d9bd7d-nwwj6" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/calico-system/pods/whisker-5bc4d9bd7d-nwwj6\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:21.877751 kubelet[2299]: I0710 01:38:21.876000 2299 status_manager.go:851] "Failed to get status for pod" podUID="b35b56493416c25588cb530e37ffc065" 
pod="kube-system/kube-scheduler-localhost" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/kube-system/pods/kube-scheduler-localhost\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:21.877751 kubelet[2299]: I0710 01:38:21.876120 2299 status_manager.go:851] "Failed to get status for pod" podUID="3f04709fe51ae4ab5abd58e8da771b74" pod="kube-system/kube-controller-manager-localhost" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/kube-system/pods/kube-controller-manager-localhost\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:21.877751 kubelet[2299]: I0710 01:38:21.876197 2299 status_manager.go:851] "Failed to get status for pod" podUID="9c135a1b-00bf-4e6f-87fa-9ac292c6a135" pod="tigera-operator/tigera-operator-5bf8dfcb4-twgs2" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/tigera-operator/pods/tigera-operator-5bf8dfcb4-twgs2\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:21.877751 kubelet[2299]: I0710 01:38:21.876269 2299 status_manager.go:851] "Failed to get status for pod" podUID="bb9848ea-740a-453f-b511-e75cc1983690" pod="calico-system/calico-typha-66ddcf689b-z7vqm" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/calico-system/pods/calico-typha-66ddcf689b-z7vqm\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:21.877751 kubelet[2299]: I0710 01:38:21.876339 2299 status_manager.go:851] "Failed to get status for pod" podUID="6367e512-6f46-407d-94e1-a5c573185269" pod="calico-system/calico-node-2k6z4" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/calico-system/pods/calico-node-2k6z4\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:21.877751 kubelet[2299]: I0710 01:38:21.876407 2299 status_manager.go:851] "Failed to get status for pod" podUID="5f01bcaa-ff1c-4bd5-988b-d3c60c6abdcc" pod="calico-system/calico-kube-controllers-5477ff879d-j2p5q" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/calico-system/pods/calico-kube-controllers-5477ff879d-j2p5q\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:21.883598 env[1363]: time="2025-07-10T01:38:21.883479302Z" level=info msg="StopContainer for \"8b834cb8605645f5c7c427dfcf3dcbb8b3ef5c7c0f8f023ef0d1fbc5a5c10bd4\" with timeout 30 (s)" Jul 10 01:38:21.888113 env[1363]: time="2025-07-10T01:38:21.887951126Z" level=info msg="Stop container \"8b834cb8605645f5c7c427dfcf3dcbb8b3ef5c7c0f8f023ef0d1fbc5a5c10bd4\" with signal terminated" Jul 10 01:38:21.918998 kubelet[2299]: I0710 01:38:21.918981 2299 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d50fd4405e1f03ed2cdfbef802c2261b6b6ef77dbd652ba6fa35f73abffba742" Jul 10 01:38:21.920193 kubelet[2299]: I0710 01:38:21.920176 2299 status_manager.go:851] "Failed to get status for pod" podUID="6367e512-6f46-407d-94e1-a5c573185269" pod="calico-system/calico-node-2k6z4" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/calico-system/pods/calico-node-2k6z4\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:21.920336 env[1363]: time="2025-07-10T01:38:21.920316395Z" level=info msg="StopPodSandbox for \"d50fd4405e1f03ed2cdfbef802c2261b6b6ef77dbd652ba6fa35f73abffba742\"" Jul 10 01:38:21.920501 env[1363]: time="2025-07-10T01:38:21.920487092Z" level=info msg="Container to stop \"0a7b9b0ea47aa889b6d5597d41d9f5ecf3ccc392f2d5f74cd7be134b392cec28\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 10 01:38:21.920979 kubelet[2299]: 
I0710 01:38:21.920965 2299 status_manager.go:851] "Failed to get status for pod" podUID="5f01bcaa-ff1c-4bd5-988b-d3c60c6abdcc" pod="calico-system/calico-kube-controllers-5477ff879d-j2p5q" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/calico-system/pods/calico-kube-controllers-5477ff879d-j2p5q\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:21.999944 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8b834cb8605645f5c7c427dfcf3dcbb8b3ef5c7c0f8f023ef0d1fbc5a5c10bd4-rootfs.mount: Deactivated successfully. Jul 10 01:38:22.022902 env[1363]: time="2025-07-10T01:38:22.006869937Z" level=info msg="shim disconnected" id=8b834cb8605645f5c7c427dfcf3dcbb8b3ef5c7c0f8f023ef0d1fbc5a5c10bd4 Jul 10 01:38:22.022902 env[1363]: time="2025-07-10T01:38:22.006898402Z" level=warning msg="cleaning up after shim disconnected" id=8b834cb8605645f5c7c427dfcf3dcbb8b3ef5c7c0f8f023ef0d1fbc5a5c10bd4 namespace=k8s.io Jul 10 01:38:22.022902 env[1363]: time="2025-07-10T01:38:22.006904292Z" level=info msg="cleaning up dead shim" Jul 10 01:38:22.022902 env[1363]: time="2025-07-10T01:38:22.010758758Z" level=error msg="StopPodSandbox for \"d50fd4405e1f03ed2cdfbef802c2261b6b6ef77dbd652ba6fa35f73abffba742\" failed" error="failed to destroy network for sandbox \"d50fd4405e1f03ed2cdfbef802c2261b6b6ef77dbd652ba6fa35f73abffba742\": plugin type=\"calico\" failed (delete): error getting ClusterInformation: Get \"https://10.96.0.1:443/apis/crd.projectcalico.org/v1/clusterinformations/default\": dial tcp 10.96.0.1:443: connect: connection refused" Jul 10 01:38:22.022902 env[1363]: time="2025-07-10T01:38:22.015160215Z" level=warning msg="cleanup warnings time=\"2025-07-10T01:38:22Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=8039 runtime=io.containerd.runc.v2\n" Jul 10 01:38:22.022902 env[1363]: time="2025-07-10T01:38:22.019242272Z" level=info msg="StopContainer for \"8b834cb8605645f5c7c427dfcf3dcbb8b3ef5c7c0f8f023ef0d1fbc5a5c10bd4\" returns successfully" Jul 10 01:38:22.136688 env[1363]: time="2025-07-10T01:38:22.134387143Z" level=info msg="StopPodSandbox for \"6503e247e079e9b1040ac4f9c23ba0f9f2bd42e5328355dba03928c27dd6e73b\"" Jul 10 01:38:22.136688 env[1363]: time="2025-07-10T01:38:22.134442310Z" level=info msg="Container to stop \"f9259ae361e3731af85557e5b9606bd5ebae0bba6b9af22c45cecaaa08d4539c\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 10 01:38:22.136688 env[1363]: time="2025-07-10T01:38:22.134453977Z" level=info msg="Container to stop \"8b834cb8605645f5c7c427dfcf3dcbb8b3ef5c7c0f8f023ef0d1fbc5a5c10bd4\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 10 01:38:22.139209 kubelet[2299]: I0710 01:38:22.137237 2299 status_manager.go:851] "Failed to get status for pod" podUID="8acd60714a0f0f6f5e038fa659db2909" pod="kube-system/kube-apiserver-localhost" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/kube-system/pods/kube-apiserver-localhost\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:22.138263 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-6503e247e079e9b1040ac4f9c23ba0f9f2bd42e5328355dba03928c27dd6e73b-shm.mount: Deactivated successfully. 
Jul 10 01:38:22.143079 kubelet[2299]: E0710 01:38:22.143056 2299 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"d50fd4405e1f03ed2cdfbef802c2261b6b6ef77dbd652ba6fa35f73abffba742\": plugin type=\"calico\" failed (delete): error getting ClusterInformation: Get \"https://10.96.0.1:443/apis/crd.projectcalico.org/v1/clusterinformations/default\": dial tcp 10.96.0.1:443: connect: connection refused" podSandboxID="d50fd4405e1f03ed2cdfbef802c2261b6b6ef77dbd652ba6fa35f73abffba742" Jul 10 01:38:22.143218 kubelet[2299]: E0710 01:38:22.143203 2299 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"d50fd4405e1f03ed2cdfbef802c2261b6b6ef77dbd652ba6fa35f73abffba742"} Jul 10 01:38:22.151286 kubelet[2299]: I0710 01:38:22.151263 2299 status_manager.go:851] "Failed to get status for pod" podUID="8e8146e9-6407-49b7-8cef-e26dac385734" pod="calico-apiserver/calico-apiserver-6d44674bc4-w2f48" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/calico-apiserver/pods/calico-apiserver-6d44674bc4-w2f48\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:22.151938 env[1363]: time="2025-07-10T01:38:22.151921062Z" level=info msg="StopPodSandbox for \"3e37249528bb3e0be92befd65b6647a34c4c854d8942b3cdda871096eeadbddb\"" Jul 10 01:38:22.154725 kubelet[2299]: I0710 01:38:22.154700 2299 status_manager.go:851] "Failed to get status for pod" podUID="a29ef6dc-4246-436d-87dd-9c8e96247aeb" pod="kube-system/coredns-7c65d6cfc9-4k5ld" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-4k5ld\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:22.155073 kubelet[2299]: I0710 01:38:22.155060 2299 status_manager.go:851] "Failed to get status for pod" podUID="3459c244-a1ae-43bc-ad86-239a6e665757" pod="kube-system/coredns-7c65d6cfc9-snhl5" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-snhl5\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:22.155255 kubelet[2299]: I0710 01:38:22.155240 2299 status_manager.go:851] "Failed to get status for pod" podUID="ced04dc5-79ee-4a07-a568-b0fd4007f64c" pod="calico-system/goldmane-58fd7646b9-zxwst" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/calico-system/pods/goldmane-58fd7646b9-zxwst\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:22.155516 kubelet[2299]: I0710 01:38:22.155503 2299 status_manager.go:851] "Failed to get status for pod" podUID="b35b56493416c25588cb530e37ffc065" pod="kube-system/kube-scheduler-localhost" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/kube-system/pods/kube-scheduler-localhost\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:22.155687 kubelet[2299]: I0710 01:38:22.155673 2299 status_manager.go:851] "Failed to get status for pod" podUID="3f04709fe51ae4ab5abd58e8da771b74" pod="kube-system/kube-controller-manager-localhost" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/kube-system/pods/kube-controller-manager-localhost\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:22.155869 kubelet[2299]: I0710 01:38:22.155839 2299 status_manager.go:851] "Failed to get status for pod" podUID="c3f9faf5-cc25-4483-beb9-5dea29a71367" pod="calico-system/whisker-5bc4d9bd7d-nwwj6" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/calico-system/pods/whisker-5bc4d9bd7d-nwwj6\": dial tcp 
139.178.70.102:6443: connect: connection refused" Jul 10 01:38:22.156110 kubelet[2299]: I0710 01:38:22.156097 2299 status_manager.go:851] "Failed to get status for pod" podUID="9c135a1b-00bf-4e6f-87fa-9ac292c6a135" pod="tigera-operator/tigera-operator-5bf8dfcb4-twgs2" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/tigera-operator/pods/tigera-operator-5bf8dfcb4-twgs2\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:22.156278 kubelet[2299]: I0710 01:38:22.156263 2299 status_manager.go:851] "Failed to get status for pod" podUID="bb9848ea-740a-453f-b511-e75cc1983690" pod="calico-system/calico-typha-66ddcf689b-z7vqm" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/calico-system/pods/calico-typha-66ddcf689b-z7vqm\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:22.160900 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6503e247e079e9b1040ac4f9c23ba0f9f2bd42e5328355dba03928c27dd6e73b-rootfs.mount: Deactivated successfully. Jul 10 01:38:22.162419 env[1363]: time="2025-07-10T01:38:22.162390091Z" level=info msg="shim disconnected" id=6503e247e079e9b1040ac4f9c23ba0f9f2bd42e5328355dba03928c27dd6e73b Jul 10 01:38:22.162419 env[1363]: time="2025-07-10T01:38:22.162417401Z" level=warning msg="cleaning up after shim disconnected" id=6503e247e079e9b1040ac4f9c23ba0f9f2bd42e5328355dba03928c27dd6e73b namespace=k8s.io Jul 10 01:38:22.162494 env[1363]: time="2025-07-10T01:38:22.162423013Z" level=info msg="cleaning up dead shim" Jul 10 01:38:22.186973 env[1363]: time="2025-07-10T01:38:22.186939631Z" level=error msg="StopPodSandbox for \"3e37249528bb3e0be92befd65b6647a34c4c854d8942b3cdda871096eeadbddb\" failed" error="failed to destroy network for sandbox \"3e37249528bb3e0be92befd65b6647a34c4c854d8942b3cdda871096eeadbddb\": plugin type=\"calico\" failed (delete): error getting ClusterInformation: Get \"https://10.96.0.1:443/apis/crd.projectcalico.org/v1/clusterinformations/default\": dial tcp 10.96.0.1:443: connect: connection refused" Jul 10 01:38:22.191635 kubelet[2299]: E0710 01:38:22.191151 2299 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"3e37249528bb3e0be92befd65b6647a34c4c854d8942b3cdda871096eeadbddb\": plugin type=\"calico\" failed (delete): error getting ClusterInformation: Get \"https://10.96.0.1:443/apis/crd.projectcalico.org/v1/clusterinformations/default\": dial tcp 10.96.0.1:443: connect: connection refused" podSandboxID="3e37249528bb3e0be92befd65b6647a34c4c854d8942b3cdda871096eeadbddb" Jul 10 01:38:22.191635 kubelet[2299]: E0710 01:38:22.191182 2299 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"3e37249528bb3e0be92befd65b6647a34c4c854d8942b3cdda871096eeadbddb"} Jul 10 01:38:22.191635 kubelet[2299]: E0710 01:38:22.191222 2299 kubelet.go:2027] "Unhandled Error" err="failed to \"KillPodSandbox\" for \"ced04dc5-79ee-4a07-a568-b0fd4007f64c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3e37249528bb3e0be92befd65b6647a34c4c854d8942b3cdda871096eeadbddb\\\": plugin type=\\\"calico\\\" failed (delete): error getting ClusterInformation: Get \\\"https://10.96.0.1:443/apis/crd.projectcalico.org/v1/clusterinformations/default\\\": dial tcp 10.96.0.1:443: connect: connection refused\"" logger="UnhandledError" Jul 10 01:38:22.191635 kubelet[2299]: I0710 01:38:22.191374 2299 pod_container_deletor.go:80] "Container not found in pod's 
containers" containerID="47772743ab806984f8c08f88def502ffe4f7fc6e574fb3f0d5b58c702f3e79ff" Jul 10 01:38:22.192124 kubelet[2299]: I0710 01:38:22.192109 2299 status_manager.go:851] "Failed to get status for pod" podUID="6367e512-6f46-407d-94e1-a5c573185269" pod="calico-system/calico-node-2k6z4" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/calico-system/pods/calico-node-2k6z4\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:22.192252 kubelet[2299]: E0710 01:38:22.192239 2299 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"ced04dc5-79ee-4a07-a568-b0fd4007f64c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3e37249528bb3e0be92befd65b6647a34c4c854d8942b3cdda871096eeadbddb\\\": plugin type=\\\"calico\\\" failed (delete): error getting ClusterInformation: Get \\\"https://10.96.0.1:443/apis/crd.projectcalico.org/v1/clusterinformations/default\\\": dial tcp 10.96.0.1:443: connect: connection refused\"" pod="calico-system/goldmane-58fd7646b9-zxwst" podUID="ced04dc5-79ee-4a07-a568-b0fd4007f64c" Jul 10 01:38:22.192539 kubelet[2299]: I0710 01:38:22.192333 2299 status_manager.go:851] "Failed to get status for pod" podUID="5f01bcaa-ff1c-4bd5-988b-d3c60c6abdcc" pod="calico-system/calico-kube-controllers-5477ff879d-j2p5q" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/calico-system/pods/calico-kube-controllers-5477ff879d-j2p5q\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:22.192539 kubelet[2299]: I0710 01:38:22.192467 2299 status_manager.go:851] "Failed to get status for pod" podUID="8acd60714a0f0f6f5e038fa659db2909" pod="kube-system/kube-apiserver-localhost" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/kube-system/pods/kube-apiserver-localhost\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:22.194457 kubelet[2299]: I0710 01:38:22.192715 2299 status_manager.go:851] "Failed to get status for pod" podUID="8e8146e9-6407-49b7-8cef-e26dac385734" pod="calico-apiserver/calico-apiserver-6d44674bc4-w2f48" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/calico-apiserver/pods/calico-apiserver-6d44674bc4-w2f48\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:22.194457 kubelet[2299]: I0710 01:38:22.192911 2299 status_manager.go:851] "Failed to get status for pod" podUID="a29ef6dc-4246-436d-87dd-9c8e96247aeb" pod="kube-system/coredns-7c65d6cfc9-4k5ld" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-4k5ld\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:22.194457 kubelet[2299]: I0710 01:38:22.193004 2299 status_manager.go:851] "Failed to get status for pod" podUID="3459c244-a1ae-43bc-ad86-239a6e665757" pod="kube-system/coredns-7c65d6cfc9-snhl5" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-snhl5\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:22.194457 kubelet[2299]: I0710 01:38:22.193126 2299 status_manager.go:851] "Failed to get status for pod" podUID="ced04dc5-79ee-4a07-a568-b0fd4007f64c" pod="calico-system/goldmane-58fd7646b9-zxwst" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/calico-system/pods/goldmane-58fd7646b9-zxwst\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:22.194457 kubelet[2299]: I0710 01:38:22.193220 2299 status_manager.go:851] "Failed to get status for pod" 
podUID="b35b56493416c25588cb530e37ffc065" pod="kube-system/kube-scheduler-localhost" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/kube-system/pods/kube-scheduler-localhost\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:22.194457 kubelet[2299]: I0710 01:38:22.193312 2299 status_manager.go:851] "Failed to get status for pod" podUID="3f04709fe51ae4ab5abd58e8da771b74" pod="kube-system/kube-controller-manager-localhost" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/kube-system/pods/kube-controller-manager-localhost\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:22.194599 env[1363]: time="2025-07-10T01:38:22.193050769Z" level=info msg="StopPodSandbox for \"47772743ab806984f8c08f88def502ffe4f7fc6e574fb3f0d5b58c702f3e79ff\"" Jul 10 01:38:22.194599 env[1363]: time="2025-07-10T01:38:22.193089181Z" level=info msg="Container to stop \"846639043e3e3375edb5ca693ab2e7bd950e8cc7bfe6f9bd5620ee47769cd79c\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 10 01:38:22.194599 env[1363]: time="2025-07-10T01:38:22.193099245Z" level=info msg="Container to stop \"9613f2d808f30fc610330008107d20687f76b9c5168e9ed86b6bbe227c241755\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 10 01:38:22.195406 env[1363]: time="2025-07-10T01:38:22.195387049Z" level=warning msg="cleanup warnings time=\"2025-07-10T01:38:22Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=8083 runtime=io.containerd.runc.v2\n" Jul 10 01:38:22.196205 kubelet[2299]: I0710 01:38:22.196183 2299 status_manager.go:851] "Failed to get status for pod" podUID="c3f9faf5-cc25-4483-beb9-5dea29a71367" pod="calico-system/whisker-5bc4d9bd7d-nwwj6" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/calico-system/pods/whisker-5bc4d9bd7d-nwwj6\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:22.224908 kubelet[2299]: I0710 01:38:22.223859 2299 status_manager.go:851] "Failed to get status for pod" podUID="9c135a1b-00bf-4e6f-87fa-9ac292c6a135" pod="tigera-operator/tigera-operator-5bf8dfcb4-twgs2" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/tigera-operator/pods/tigera-operator-5bf8dfcb4-twgs2\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:22.224908 kubelet[2299]: I0710 01:38:22.224003 2299 status_manager.go:851] "Failed to get status for pod" podUID="bb9848ea-740a-453f-b511-e75cc1983690" pod="calico-system/calico-typha-66ddcf689b-z7vqm" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/calico-system/pods/calico-typha-66ddcf689b-z7vqm\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:22.229272 env[1363]: time="2025-07-10T01:38:22.229240073Z" level=error msg="StopPodSandbox for \"6503e247e079e9b1040ac4f9c23ba0f9f2bd42e5328355dba03928c27dd6e73b\" failed" error="failed to destroy network for sandbox \"6503e247e079e9b1040ac4f9c23ba0f9f2bd42e5328355dba03928c27dd6e73b\": plugin type=\"calico\" failed (delete): error getting ClusterInformation: Get \"https://10.96.0.1:443/apis/crd.projectcalico.org/v1/clusterinformations/default\": dial tcp 10.96.0.1:443: connect: connection refused" Jul 10 01:38:22.229808 env[1363]: time="2025-07-10T01:38:22.229783870Z" level=error msg="StopPodSandbox for \"47772743ab806984f8c08f88def502ffe4f7fc6e574fb3f0d5b58c702f3e79ff\" failed" error="failed to destroy network for sandbox \"47772743ab806984f8c08f88def502ffe4f7fc6e574fb3f0d5b58c702f3e79ff\": plugin type=\"calico\" failed (delete): error 
getting ClusterInformation: Get \"https://10.96.0.1:443/apis/crd.projectcalico.org/v1/clusterinformations/default\": dial tcp 10.96.0.1:443: connect: connection refused" Jul 10 01:38:22.231633 kubelet[2299]: E0710 01:38:22.231469 2299 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"47772743ab806984f8c08f88def502ffe4f7fc6e574fb3f0d5b58c702f3e79ff\": plugin type=\"calico\" failed (delete): error getting ClusterInformation: Get \"https://10.96.0.1:443/apis/crd.projectcalico.org/v1/clusterinformations/default\": dial tcp 10.96.0.1:443: connect: connection refused" podSandboxID="47772743ab806984f8c08f88def502ffe4f7fc6e574fb3f0d5b58c702f3e79ff" Jul 10 01:38:22.231633 kubelet[2299]: E0710 01:38:22.231499 2299 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"47772743ab806984f8c08f88def502ffe4f7fc6e574fb3f0d5b58c702f3e79ff"} Jul 10 01:38:22.231633 kubelet[2299]: E0710 01:38:22.231533 2299 kubelet.go:2027] "Unhandled Error" err="failed to \"KillPodSandbox\" for \"c3f9faf5-cc25-4483-beb9-5dea29a71367\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"47772743ab806984f8c08f88def502ffe4f7fc6e574fb3f0d5b58c702f3e79ff\\\": plugin type=\\\"calico\\\" failed (delete): error getting ClusterInformation: Get \\\"https://10.96.0.1:443/apis/crd.projectcalico.org/v1/clusterinformations/default\\\": dial tcp 10.96.0.1:443: connect: connection refused\"" logger="UnhandledError" Jul 10 01:38:22.245787 kubelet[2299]: E0710 01:38:22.245770 2299 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"6503e247e079e9b1040ac4f9c23ba0f9f2bd42e5328355dba03928c27dd6e73b\": plugin type=\"calico\" failed (delete): error getting ClusterInformation: Get \"https://10.96.0.1:443/apis/crd.projectcalico.org/v1/clusterinformations/default\": dial tcp 10.96.0.1:443: connect: connection refused" podSandboxID="6503e247e079e9b1040ac4f9c23ba0f9f2bd42e5328355dba03928c27dd6e73b" Jul 10 01:38:22.245908 kubelet[2299]: E0710 01:38:22.245888 2299 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"6503e247e079e9b1040ac4f9c23ba0f9f2bd42e5328355dba03928c27dd6e73b"} Jul 10 01:38:22.246720 kubelet[2299]: E0710 01:38:22.245961 2299 kubelet.go:2027] "Unhandled Error" err="failed to \"KillPodSandbox\" for \"5f01bcaa-ff1c-4bd5-988b-d3c60c6abdcc\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6503e247e079e9b1040ac4f9c23ba0f9f2bd42e5328355dba03928c27dd6e73b\\\": plugin type=\\\"calico\\\" failed (delete): error getting ClusterInformation: Get \\\"https://10.96.0.1:443/apis/crd.projectcalico.org/v1/clusterinformations/default\\\": dial tcp 10.96.0.1:443: connect: connection refused\"" logger="UnhandledError" Jul 10 01:38:22.246720 kubelet[2299]: E0710 01:38:22.246131 2299 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"c3f9faf5-cc25-4483-beb9-5dea29a71367\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"47772743ab806984f8c08f88def502ffe4f7fc6e574fb3f0d5b58c702f3e79ff\\\": plugin type=\\\"calico\\\" failed (delete): error getting ClusterInformation: Get \\\"https://10.96.0.1:443/apis/crd.projectcalico.org/v1/clusterinformations/default\\\": dial tcp 10.96.0.1:443: connect: connection refused\"" 
pod="calico-system/whisker-5bc4d9bd7d-nwwj6" podUID="c3f9faf5-cc25-4483-beb9-5dea29a71367" Jul 10 01:38:22.248341 kubelet[2299]: E0710 01:38:22.248325 2299 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"5f01bcaa-ff1c-4bd5-988b-d3c60c6abdcc\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6503e247e079e9b1040ac4f9c23ba0f9f2bd42e5328355dba03928c27dd6e73b\\\": plugin type=\\\"calico\\\" failed (delete): error getting ClusterInformation: Get \\\"https://10.96.0.1:443/apis/crd.projectcalico.org/v1/clusterinformations/default\\\": dial tcp 10.96.0.1:443: connect: connection refused\"" pod="calico-system/calico-kube-controllers-5477ff879d-j2p5q" podUID="5f01bcaa-ff1c-4bd5-988b-d3c60c6abdcc" Jul 10 01:38:22.260173 kubelet[2299]: I0710 01:38:22.259052 2299 status_manager.go:851] "Failed to get status for pod" podUID="8acd60714a0f0f6f5e038fa659db2909" pod="kube-system/kube-apiserver-localhost" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/kube-system/pods/kube-apiserver-localhost\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:22.276546 kubelet[2299]: I0710 01:38:22.276530 2299 status_manager.go:851] "Failed to get status for pod" podUID="8e8146e9-6407-49b7-8cef-e26dac385734" pod="calico-apiserver/calico-apiserver-6d44674bc4-w2f48" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/calico-apiserver/pods/calico-apiserver-6d44674bc4-w2f48\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:22.276843 kubelet[2299]: I0710 01:38:22.276828 2299 status_manager.go:851] "Failed to get status for pod" podUID="a29ef6dc-4246-436d-87dd-9c8e96247aeb" pod="kube-system/coredns-7c65d6cfc9-4k5ld" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-4k5ld\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:22.277190 kubelet[2299]: I0710 01:38:22.277168 2299 status_manager.go:851] "Failed to get status for pod" podUID="3459c244-a1ae-43bc-ad86-239a6e665757" pod="kube-system/coredns-7c65d6cfc9-snhl5" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-snhl5\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:22.277356 kubelet[2299]: I0710 01:38:22.277329 2299 status_manager.go:851] "Failed to get status for pod" podUID="ced04dc5-79ee-4a07-a568-b0fd4007f64c" pod="calico-system/goldmane-58fd7646b9-zxwst" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/calico-system/pods/goldmane-58fd7646b9-zxwst\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:22.278673 kubelet[2299]: I0710 01:38:22.277514 2299 status_manager.go:851] "Failed to get status for pod" podUID="b35b56493416c25588cb530e37ffc065" pod="kube-system/kube-scheduler-localhost" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/kube-system/pods/kube-scheduler-localhost\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:22.278730 env[1363]: time="2025-07-10T01:38:22.277625052Z" level=info msg="StopContainer for \"a200d687dc84da70963635919a56406c3ab3b1d9e93e3d78979e61b2e309695b\" with timeout 30 (s)" Jul 10 01:38:22.281075 env[1363]: time="2025-07-10T01:38:22.281061656Z" level=info msg="Stop container \"a200d687dc84da70963635919a56406c3ab3b1d9e93e3d78979e61b2e309695b\" with signal terminated" Jul 10 01:38:22.305850 systemd[1]: 
run-containerd-io.containerd.runtime.v2.task-k8s.io-a200d687dc84da70963635919a56406c3ab3b1d9e93e3d78979e61b2e309695b-rootfs.mount: Deactivated successfully. Jul 10 01:38:22.310802 env[1363]: time="2025-07-10T01:38:22.310765383Z" level=info msg="shim disconnected" id=a200d687dc84da70963635919a56406c3ab3b1d9e93e3d78979e61b2e309695b Jul 10 01:38:22.310977 env[1363]: time="2025-07-10T01:38:22.310966489Z" level=warning msg="cleaning up after shim disconnected" id=a200d687dc84da70963635919a56406c3ab3b1d9e93e3d78979e61b2e309695b namespace=k8s.io Jul 10 01:38:22.311938 env[1363]: time="2025-07-10T01:38:22.311927890Z" level=info msg="cleaning up dead shim" Jul 10 01:38:22.319577 env[1363]: time="2025-07-10T01:38:22.319560989Z" level=warning msg="cleanup warnings time=\"2025-07-10T01:38:22Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=8148 runtime=io.containerd.runc.v2\n" Jul 10 01:38:22.322897 env[1363]: time="2025-07-10T01:38:22.322881053Z" level=info msg="StopContainer for \"a200d687dc84da70963635919a56406c3ab3b1d9e93e3d78979e61b2e309695b\" returns successfully" Jul 10 01:38:22.324671 kubelet[2299]: I0710 01:38:22.324413 2299 status_manager.go:851] "Failed to get status for pod" podUID="3f04709fe51ae4ab5abd58e8da771b74" pod="kube-system/kube-controller-manager-localhost" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/kube-system/pods/kube-controller-manager-localhost\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:22.325657 kubelet[2299]: I0710 01:38:22.325478 2299 status_manager.go:851] "Failed to get status for pod" podUID="c3f9faf5-cc25-4483-beb9-5dea29a71367" pod="calico-system/whisker-5bc4d9bd7d-nwwj6" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/calico-system/pods/whisker-5bc4d9bd7d-nwwj6\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:22.325657 kubelet[2299]: I0710 01:38:22.325599 2299 status_manager.go:851] "Failed to get status for pod" podUID="9c135a1b-00bf-4e6f-87fa-9ac292c6a135" pod="tigera-operator/tigera-operator-5bf8dfcb4-twgs2" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/tigera-operator/pods/tigera-operator-5bf8dfcb4-twgs2\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:22.326662 kubelet[2299]: I0710 01:38:22.325800 2299 status_manager.go:851] "Failed to get status for pod" podUID="bb9848ea-740a-453f-b511-e75cc1983690" pod="calico-system/calico-typha-66ddcf689b-z7vqm" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/calico-system/pods/calico-typha-66ddcf689b-z7vqm\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:22.326662 kubelet[2299]: I0710 01:38:22.325905 2299 status_manager.go:851] "Failed to get status for pod" podUID="6367e512-6f46-407d-94e1-a5c573185269" pod="calico-system/calico-node-2k6z4" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/calico-system/pods/calico-node-2k6z4\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:22.326662 kubelet[2299]: I0710 01:38:22.325994 2299 status_manager.go:851] "Failed to get status for pod" podUID="5f01bcaa-ff1c-4bd5-988b-d3c60c6abdcc" pod="calico-system/calico-kube-controllers-5477ff879d-j2p5q" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/calico-system/pods/calico-kube-controllers-5477ff879d-j2p5q\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:22.326662 kubelet[2299]: I0710 01:38:22.326117 2299 status_manager.go:851] "Failed to get status for pod" podUID="8acd60714a0f0f6f5e038fa659db2909" 
pod="kube-system/kube-apiserver-localhost" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/kube-system/pods/kube-apiserver-localhost\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:22.326662 kubelet[2299]: I0710 01:38:22.326216 2299 status_manager.go:851] "Failed to get status for pod" podUID="8e8146e9-6407-49b7-8cef-e26dac385734" pod="calico-apiserver/calico-apiserver-6d44674bc4-w2f48" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/calico-apiserver/pods/calico-apiserver-6d44674bc4-w2f48\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:22.326662 kubelet[2299]: I0710 01:38:22.326300 2299 status_manager.go:851] "Failed to get status for pod" podUID="a29ef6dc-4246-436d-87dd-9c8e96247aeb" pod="kube-system/coredns-7c65d6cfc9-4k5ld" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-4k5ld\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:22.326662 kubelet[2299]: I0710 01:38:22.326383 2299 status_manager.go:851] "Failed to get status for pod" podUID="3459c244-a1ae-43bc-ad86-239a6e665757" pod="kube-system/coredns-7c65d6cfc9-snhl5" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-snhl5\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:22.326662 kubelet[2299]: I0710 01:38:22.326476 2299 status_manager.go:851] "Failed to get status for pod" podUID="ced04dc5-79ee-4a07-a568-b0fd4007f64c" pod="calico-system/goldmane-58fd7646b9-zxwst" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/calico-system/pods/goldmane-58fd7646b9-zxwst\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:22.326662 kubelet[2299]: I0710 01:38:22.326558 2299 status_manager.go:851] "Failed to get status for pod" podUID="b35b56493416c25588cb530e37ffc065" pod="kube-system/kube-scheduler-localhost" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/kube-system/pods/kube-scheduler-localhost\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:22.328033 kubelet[2299]: I0710 01:38:22.327666 2299 status_manager.go:851] "Failed to get status for pod" podUID="3f04709fe51ae4ab5abd58e8da771b74" pod="kube-system/kube-controller-manager-localhost" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/kube-system/pods/kube-controller-manager-localhost\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:22.328033 kubelet[2299]: I0710 01:38:22.327783 2299 status_manager.go:851] "Failed to get status for pod" podUID="c3f9faf5-cc25-4483-beb9-5dea29a71367" pod="calico-system/whisker-5bc4d9bd7d-nwwj6" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/calico-system/pods/whisker-5bc4d9bd7d-nwwj6\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:22.328718 kubelet[2299]: I0710 01:38:22.328122 2299 status_manager.go:851] "Failed to get status for pod" podUID="9c135a1b-00bf-4e6f-87fa-9ac292c6a135" pod="tigera-operator/tigera-operator-5bf8dfcb4-twgs2" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/tigera-operator/pods/tigera-operator-5bf8dfcb4-twgs2\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:22.328718 kubelet[2299]: I0710 01:38:22.328226 2299 status_manager.go:851] "Failed to get status for pod" podUID="bb9848ea-740a-453f-b511-e75cc1983690" pod="calico-system/calico-typha-66ddcf689b-z7vqm" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/calico-system/pods/calico-typha-66ddcf689b-z7vqm\": dial 
tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:22.328718 kubelet[2299]: I0710 01:38:22.328325 2299 status_manager.go:851] "Failed to get status for pod" podUID="74cf1bc5-5d5a-4dc7-850a-71013984af05" pod="calico-apiserver/calico-apiserver-6d44674bc4-b2wqb" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/calico-apiserver/pods/calico-apiserver-6d44674bc4-b2wqb\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:22.328718 kubelet[2299]: I0710 01:38:22.328410 2299 status_manager.go:851] "Failed to get status for pod" podUID="6367e512-6f46-407d-94e1-a5c573185269" pod="calico-system/calico-node-2k6z4" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/calico-system/pods/calico-node-2k6z4\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:22.328823 env[1363]: time="2025-07-10T01:38:22.328529681Z" level=info msg="StopPodSandbox for \"faf470fc452c2f07757eeeb2a3a0f4d17d9a92da7cefb8e597308394b6823856\"" Jul 10 01:38:22.328823 env[1363]: time="2025-07-10T01:38:22.328568141Z" level=info msg="Container to stop \"a200d687dc84da70963635919a56406c3ab3b1d9e93e3d78979e61b2e309695b\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 10 01:38:22.335011 kubelet[2299]: I0710 01:38:22.334993 2299 status_manager.go:851] "Failed to get status for pod" podUID="5f01bcaa-ff1c-4bd5-988b-d3c60c6abdcc" pod="calico-system/calico-kube-controllers-5477ff879d-j2p5q" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/calico-system/pods/calico-kube-controllers-5477ff879d-j2p5q\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:22.348957 env[1363]: time="2025-07-10T01:38:22.348742223Z" level=info msg="shim disconnected" id=faf470fc452c2f07757eeeb2a3a0f4d17d9a92da7cefb8e597308394b6823856 Jul 10 01:38:22.348957 env[1363]: time="2025-07-10T01:38:22.348769125Z" level=warning msg="cleaning up after shim disconnected" id=faf470fc452c2f07757eeeb2a3a0f4d17d9a92da7cefb8e597308394b6823856 namespace=k8s.io Jul 10 01:38:22.348957 env[1363]: time="2025-07-10T01:38:22.348777141Z" level=info msg="cleaning up dead shim" Jul 10 01:38:22.357896 env[1363]: time="2025-07-10T01:38:22.357874039Z" level=warning msg="cleanup warnings time=\"2025-07-10T01:38:22Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=8181 runtime=io.containerd.runc.v2\n" Jul 10 01:38:22.395789 env[1363]: time="2025-07-10T01:38:22.395753685Z" level=error msg="StopPodSandbox for \"faf470fc452c2f07757eeeb2a3a0f4d17d9a92da7cefb8e597308394b6823856\" failed" error="failed to destroy network for sandbox \"faf470fc452c2f07757eeeb2a3a0f4d17d9a92da7cefb8e597308394b6823856\": plugin type=\"calico\" failed (delete): error getting ClusterInformation: Get \"https://10.96.0.1:443/apis/crd.projectcalico.org/v1/clusterinformations/default\": dial tcp 10.96.0.1:443: connect: connection refused" Jul 10 01:38:22.405185 kubelet[2299]: E0710 01:38:22.405163 2299 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"faf470fc452c2f07757eeeb2a3a0f4d17d9a92da7cefb8e597308394b6823856\": plugin type=\"calico\" failed (delete): error getting ClusterInformation: Get \"https://10.96.0.1:443/apis/crd.projectcalico.org/v1/clusterinformations/default\": dial tcp 10.96.0.1:443: connect: connection refused" podSandboxID="faf470fc452c2f07757eeeb2a3a0f4d17d9a92da7cefb8e597308394b6823856" Jul 10 01:38:22.406767 kubelet[2299]: E0710 01:38:22.405252 2299 kuberuntime_manager.go:1479] 
"Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"faf470fc452c2f07757eeeb2a3a0f4d17d9a92da7cefb8e597308394b6823856"} Jul 10 01:38:22.406767 kubelet[2299]: E0710 01:38:22.405292 2299 kubelet.go:2027] "Unhandled Error" err="failed to \"KillPodSandbox\" for \"74cf1bc5-5d5a-4dc7-850a-71013984af05\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"faf470fc452c2f07757eeeb2a3a0f4d17d9a92da7cefb8e597308394b6823856\\\": plugin type=\\\"calico\\\" failed (delete): error getting ClusterInformation: Get \\\"https://10.96.0.1:443/apis/crd.projectcalico.org/v1/clusterinformations/default\\\": dial tcp 10.96.0.1:443: connect: connection refused\"" logger="UnhandledError" Jul 10 01:38:22.410205 kubelet[2299]: E0710 01:38:22.410173 2299 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"74cf1bc5-5d5a-4dc7-850a-71013984af05\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"faf470fc452c2f07757eeeb2a3a0f4d17d9a92da7cefb8e597308394b6823856\\\": plugin type=\\\"calico\\\" failed (delete): error getting ClusterInformation: Get \\\"https://10.96.0.1:443/apis/crd.projectcalico.org/v1/clusterinformations/default\\\": dial tcp 10.96.0.1:443: connect: connection refused\"" pod="calico-apiserver/calico-apiserver-6d44674bc4-b2wqb" podUID="74cf1bc5-5d5a-4dc7-850a-71013984af05" Jul 10 01:38:22.417712 kubelet[2299]: E0710 01:38:22.417693 2299 configmap.go:193] Couldn't get configMap calico-system/goldmane: failed to sync configmap cache: timed out waiting for the condition Jul 10 01:38:22.426767 kubelet[2299]: E0710 01:38:22.426749 2299 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ced04dc5-79ee-4a07-a568-b0fd4007f64c-config podName:ced04dc5-79ee-4a07-a568-b0fd4007f64c nodeName:}" failed. No retries permitted until 2025-07-10 01:38:23.420465159 +0000 UTC m=+1523.288892042 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/ced04dc5-79ee-4a07-a568-b0fd4007f64c-config") pod "goldmane-58fd7646b9-zxwst" (UID: "ced04dc5-79ee-4a07-a568-b0fd4007f64c") : failed to sync configmap cache: timed out waiting for the condition Jul 10 01:38:22.551165 kubelet[2299]: E0710 01:38:22.551140 2299 configmap.go:193] Couldn't get configMap calico-system/goldmane-ca-bundle: failed to sync configmap cache: timed out waiting for the condition Jul 10 01:38:22.551488 kubelet[2299]: E0710 01:38:22.551477 2299 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ced04dc5-79ee-4a07-a568-b0fd4007f64c-goldmane-ca-bundle podName:ced04dc5-79ee-4a07-a568-b0fd4007f64c nodeName:}" failed. No retries permitted until 2025-07-10 01:38:23.551463203 +0000 UTC m=+1523.419890086 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "goldmane-ca-bundle" (UniqueName: "kubernetes.io/configmap/ced04dc5-79ee-4a07-a568-b0fd4007f64c-goldmane-ca-bundle") pod "goldmane-58fd7646b9-zxwst" (UID: "ced04dc5-79ee-4a07-a568-b0fd4007f64c") : failed to sync configmap cache: timed out waiting for the condition Jul 10 01:38:22.561149 kubelet[2299]: W0710 01:38:22.559838 2299 reflector.go:561] object-"calico-system"/"goldmane": failed to list *v1.ConfigMap: Get "https://139.178.70.102:6443/api/v1/namespaces/calico-system/configmaps?fieldSelector=metadata.name%3Dgoldmane&resourceVersion=687": dial tcp 139.178.70.102:6443: connect: connection refused Jul 10 01:38:22.561418 kubelet[2299]: E0710 01:38:22.561221 2299 reflector.go:158] "Unhandled Error" err="object-\"calico-system\"/\"goldmane\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://139.178.70.102:6443/api/v1/namespaces/calico-system/configmaps?fieldSelector=metadata.name%3Dgoldmane&resourceVersion=687\": dial tcp 139.178.70.102:6443: connect: connection refused" logger="UnhandledError" Jul 10 01:38:22.584564 sshd[7518]: pam_unix(sshd:session): session closed for user core Jul 10 01:38:22.600000 audit[7518]: USER_END pid=7518 uid=0 auid=500 ses=32 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:38:22.600000 audit[7518]: CRED_DISP pid=7518 uid=0 auid=500 ses=32 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:38:22.604000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@30-139.178.70.102:22-139.178.68.195:44494 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 01:38:22.605545 systemd[1]: sshd@30-139.178.70.102:22-139.178.68.195:44494.service: Deactivated successfully. Jul 10 01:38:22.606211 systemd[1]: session-32.scope: Deactivated successfully. Jul 10 01:38:22.606239 systemd-logind[1351]: Session 32 logged out. Waiting for processes to exit. Jul 10 01:38:22.611767 systemd-logind[1351]: Removed session 32. Jul 10 01:38:22.617480 kubelet[2299]: E0710 01:38:22.617456 2299 secret.go:189] Couldn't get secret calico-system/node-certs: failed to sync secret cache: timed out waiting for the condition Jul 10 01:38:22.617532 kubelet[2299]: E0710 01:38:22.617512 2299 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6367e512-6f46-407d-94e1-a5c573185269-node-certs podName:6367e512-6f46-407d-94e1-a5c573185269 nodeName:}" failed. No retries permitted until 2025-07-10 01:38:23.617499913 +0000 UTC m=+1523.485926796 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "node-certs" (UniqueName: "kubernetes.io/secret/6367e512-6f46-407d-94e1-a5c573185269-node-certs") pod "calico-node-2k6z4" (UID: "6367e512-6f46-407d-94e1-a5c573185269") : failed to sync secret cache: timed out waiting for the condition Jul 10 01:38:22.624931 kubelet[2299]: E0710 01:38:22.624919 2299 configmap.go:193] Couldn't get configMap kube-system/coredns: failed to sync configmap cache: timed out waiting for the condition Jul 10 01:38:22.625010 kubelet[2299]: E0710 01:38:22.624996 2299 configmap.go:193] Couldn't get configMap calico-system/tigera-ca-bundle: failed to sync configmap cache: timed out waiting for the condition Jul 10 01:38:22.625072 kubelet[2299]: E0710 01:38:22.625064 2299 configmap.go:193] Couldn't get configMap kube-system/kube-proxy: failed to sync configmap cache: timed out waiting for the condition Jul 10 01:38:22.625136 kubelet[2299]: E0710 01:38:22.625128 2299 configmap.go:193] Couldn't get configMap calico-system/tigera-ca-bundle: failed to sync configmap cache: timed out waiting for the condition Jul 10 01:38:22.625198 kubelet[2299]: E0710 01:38:22.625191 2299 secret.go:189] Couldn't get secret calico-apiserver/calico-apiserver-certs: failed to sync secret cache: timed out waiting for the condition Jul 10 01:38:22.625261 kubelet[2299]: E0710 01:38:22.625253 2299 secret.go:189] Couldn't get secret calico-apiserver/calico-apiserver-certs: failed to sync secret cache: timed out waiting for the condition Jul 10 01:38:22.625324 kubelet[2299]: E0710 01:38:22.625316 2299 configmap.go:193] Couldn't get configMap kube-system/coredns: failed to sync configmap cache: timed out waiting for the condition Jul 10 01:38:22.625958 kubelet[2299]: E0710 01:38:22.625949 2299 projected.go:288] Couldn't get configMap calico-apiserver/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Jul 10 01:38:22.626030 kubelet[2299]: E0710 01:38:22.626018 2299 projected.go:194] Error preparing data for projected volume kube-api-access-47zqf for pod calico-apiserver/calico-apiserver-6d44674bc4-b2wqb: failed to sync configmap cache: timed out waiting for the condition Jul 10 01:38:22.626100 kubelet[2299]: E0710 01:38:22.625950 2299 projected.go:288] Couldn't get configMap calico-apiserver/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Jul 10 01:38:22.626162 kubelet[2299]: E0710 01:38:22.626155 2299 projected.go:194] Error preparing data for projected volume kube-api-access-r5vvj for pod calico-apiserver/calico-apiserver-6d44674bc4-w2f48: failed to sync configmap cache: timed out waiting for the condition Jul 10 01:38:22.626214 kubelet[2299]: E0710 01:38:22.625961 2299 configmap.go:193] Couldn't get configMap calico-system/tigera-ca-bundle: failed to sync configmap cache: timed out waiting for the condition Jul 10 01:38:22.626276 kubelet[2299]: E0710 01:38:22.625000 2299 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/a29ef6dc-4246-436d-87dd-9c8e96247aeb-config-volume podName:a29ef6dc-4246-436d-87dd-9c8e96247aeb nodeName:}" failed. No retries permitted until 2025-07-10 01:38:23.624991928 +0000 UTC m=+1523.493418811 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/a29ef6dc-4246-436d-87dd-9c8e96247aeb-config-volume") pod "coredns-7c65d6cfc9-4k5ld" (UID: "a29ef6dc-4246-436d-87dd-9c8e96247aeb") : failed to sync configmap cache: timed out waiting for the condition Jul 10 01:38:22.626348 kubelet[2299]: E0710 01:38:22.626340 2299 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5f01bcaa-ff1c-4bd5-988b-d3c60c6abdcc-tigera-ca-bundle podName:5f01bcaa-ff1c-4bd5-988b-d3c60c6abdcc nodeName:}" failed. No retries permitted until 2025-07-10 01:38:23.626331847 +0000 UTC m=+1523.494758730 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "tigera-ca-bundle" (UniqueName: "kubernetes.io/configmap/5f01bcaa-ff1c-4bd5-988b-d3c60c6abdcc-tigera-ca-bundle") pod "calico-kube-controllers-5477ff879d-j2p5q" (UID: "5f01bcaa-ff1c-4bd5-988b-d3c60c6abdcc") : failed to sync configmap cache: timed out waiting for the condition Jul 10 01:38:22.626423 kubelet[2299]: E0710 01:38:22.626415 2299 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/22eb6a01-1430-4380-b1df-6cb2ed0c8d8b-kube-proxy podName:22eb6a01-1430-4380-b1df-6cb2ed0c8d8b nodeName:}" failed. No retries permitted until 2025-07-10 01:38:23.626405972 +0000 UTC m=+1523.494832855 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-proxy" (UniqueName: "kubernetes.io/configmap/22eb6a01-1430-4380-b1df-6cb2ed0c8d8b-kube-proxy") pod "kube-proxy-rxvps" (UID: "22eb6a01-1430-4380-b1df-6cb2ed0c8d8b") : failed to sync configmap cache: timed out waiting for the condition Jul 10 01:38:22.626501 kubelet[2299]: E0710 01:38:22.626493 2299 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6367e512-6f46-407d-94e1-a5c573185269-tigera-ca-bundle podName:6367e512-6f46-407d-94e1-a5c573185269 nodeName:}" failed. No retries permitted until 2025-07-10 01:38:23.626486526 +0000 UTC m=+1523.494913408 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "tigera-ca-bundle" (UniqueName: "kubernetes.io/configmap/6367e512-6f46-407d-94e1-a5c573185269-tigera-ca-bundle") pod "calico-node-2k6z4" (UID: "6367e512-6f46-407d-94e1-a5c573185269") : failed to sync configmap cache: timed out waiting for the condition Jul 10 01:38:22.626573 kubelet[2299]: E0710 01:38:22.626565 2299 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/74cf1bc5-5d5a-4dc7-850a-71013984af05-calico-apiserver-certs podName:74cf1bc5-5d5a-4dc7-850a-71013984af05 nodeName:}" failed. No retries permitted until 2025-07-10 01:38:23.62655637 +0000 UTC m=+1523.494983252 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "calico-apiserver-certs" (UniqueName: "kubernetes.io/secret/74cf1bc5-5d5a-4dc7-850a-71013984af05-calico-apiserver-certs") pod "calico-apiserver-6d44674bc4-b2wqb" (UID: "74cf1bc5-5d5a-4dc7-850a-71013984af05") : failed to sync secret cache: timed out waiting for the condition Jul 10 01:38:22.626883 kubelet[2299]: E0710 01:38:22.626611 2299 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8e8146e9-6407-49b7-8cef-e26dac385734-calico-apiserver-certs podName:8e8146e9-6407-49b7-8cef-e26dac385734 nodeName:}" failed. No retries permitted until 2025-07-10 01:38:23.626605253 +0000 UTC m=+1523.495032135 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "calico-apiserver-certs" (UniqueName: "kubernetes.io/secret/8e8146e9-6407-49b7-8cef-e26dac385734-calico-apiserver-certs") pod "calico-apiserver-6d44674bc4-w2f48" (UID: "8e8146e9-6407-49b7-8cef-e26dac385734") : failed to sync secret cache: timed out waiting for the condition Jul 10 01:38:22.626883 kubelet[2299]: E0710 01:38:22.626621 2299 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/3459c244-a1ae-43bc-ad86-239a6e665757-config-volume podName:3459c244-a1ae-43bc-ad86-239a6e665757 nodeName:}" failed. No retries permitted until 2025-07-10 01:38:23.626617661 +0000 UTC m=+1523.495044541 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/3459c244-a1ae-43bc-ad86-239a6e665757-config-volume") pod "coredns-7c65d6cfc9-snhl5" (UID: "3459c244-a1ae-43bc-ad86-239a6e665757") : failed to sync configmap cache: timed out waiting for the condition Jul 10 01:38:22.626883 kubelet[2299]: E0710 01:38:22.626627 2299 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/74cf1bc5-5d5a-4dc7-850a-71013984af05-kube-api-access-47zqf podName:74cf1bc5-5d5a-4dc7-850a-71013984af05 nodeName:}" failed. No retries permitted until 2025-07-10 01:38:23.626624097 +0000 UTC m=+1523.495050977 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-47zqf" (UniqueName: "kubernetes.io/projected/74cf1bc5-5d5a-4dc7-850a-71013984af05-kube-api-access-47zqf") pod "calico-apiserver-6d44674bc4-b2wqb" (UID: "74cf1bc5-5d5a-4dc7-850a-71013984af05") : failed to sync configmap cache: timed out waiting for the condition Jul 10 01:38:22.626883 kubelet[2299]: E0710 01:38:22.626633 2299 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/bb9848ea-740a-453f-b511-e75cc1983690-tigera-ca-bundle podName:bb9848ea-740a-453f-b511-e75cc1983690 nodeName:}" failed. No retries permitted until 2025-07-10 01:38:23.626630105 +0000 UTC m=+1523.495056985 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "tigera-ca-bundle" (UniqueName: "kubernetes.io/configmap/bb9848ea-740a-453f-b511-e75cc1983690-tigera-ca-bundle") pod "calico-typha-66ddcf689b-z7vqm" (UID: "bb9848ea-740a-453f-b511-e75cc1983690") : failed to sync configmap cache: timed out waiting for the condition Jul 10 01:38:22.626883 kubelet[2299]: E0710 01:38:22.625969 2299 secret.go:189] Couldn't get secret calico-system/typha-certs: failed to sync secret cache: timed out waiting for the condition Jul 10 01:38:22.626883 kubelet[2299]: E0710 01:38:22.626664 2299 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bb9848ea-740a-453f-b511-e75cc1983690-typha-certs podName:bb9848ea-740a-453f-b511-e75cc1983690 nodeName:}" failed. No retries permitted until 2025-07-10 01:38:23.626658764 +0000 UTC m=+1523.495085643 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "typha-certs" (UniqueName: "kubernetes.io/secret/bb9848ea-740a-453f-b511-e75cc1983690-typha-certs") pod "calico-typha-66ddcf689b-z7vqm" (UID: "bb9848ea-740a-453f-b511-e75cc1983690") : failed to sync secret cache: timed out waiting for the condition Jul 10 01:38:22.626883 kubelet[2299]: E0710 01:38:22.625974 2299 secret.go:189] Couldn't get secret calico-system/goldmane-key-pair: failed to sync secret cache: timed out waiting for the condition Jul 10 01:38:22.626883 kubelet[2299]: E0710 01:38:22.626682 2299 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ced04dc5-79ee-4a07-a568-b0fd4007f64c-goldmane-key-pair podName:ced04dc5-79ee-4a07-a568-b0fd4007f64c nodeName:}" failed. No retries permitted until 2025-07-10 01:38:23.626678781 +0000 UTC m=+1523.495105661 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "goldmane-key-pair" (UniqueName: "kubernetes.io/secret/ced04dc5-79ee-4a07-a568-b0fd4007f64c-goldmane-key-pair") pod "goldmane-58fd7646b9-zxwst" (UID: "ced04dc5-79ee-4a07-a568-b0fd4007f64c") : failed to sync secret cache: timed out waiting for the condition Jul 10 01:38:22.626883 kubelet[2299]: E0710 01:38:22.626706 2299 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/8e8146e9-6407-49b7-8cef-e26dac385734-kube-api-access-r5vvj podName:8e8146e9-6407-49b7-8cef-e26dac385734 nodeName:}" failed. No retries permitted until 2025-07-10 01:38:23.626701686 +0000 UTC m=+1523.495128566 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-r5vvj" (UniqueName: "kubernetes.io/projected/8e8146e9-6407-49b7-8cef-e26dac385734-kube-api-access-r5vvj") pod "calico-apiserver-6d44674bc4-w2f48" (UID: "8e8146e9-6407-49b7-8cef-e26dac385734") : failed to sync configmap cache: timed out waiting for the condition Jul 10 01:38:22.635340 kubelet[2299]: E0710 01:38:22.635329 2299 configmap.go:193] Couldn't get configMap calico-system/whisker-ca-bundle: failed to sync configmap cache: timed out waiting for the condition Jul 10 01:38:22.635383 kubelet[2299]: E0710 01:38:22.635353 2299 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c3f9faf5-cc25-4483-beb9-5dea29a71367-whisker-ca-bundle podName:c3f9faf5-cc25-4483-beb9-5dea29a71367 nodeName:}" failed. No retries permitted until 2025-07-10 01:38:23.635345569 +0000 UTC m=+1523.503772451 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "whisker-ca-bundle" (UniqueName: "kubernetes.io/configmap/c3f9faf5-cc25-4483-beb9-5dea29a71367-whisker-ca-bundle") pod "whisker-5bc4d9bd7d-nwwj6" (UID: "c3f9faf5-cc25-4483-beb9-5dea29a71367") : failed to sync configmap cache: timed out waiting for the condition Jul 10 01:38:22.635445 kubelet[2299]: E0710 01:38:22.635435 2299 secret.go:189] Couldn't get secret calico-system/whisker-backend-key-pair: failed to sync secret cache: timed out waiting for the condition Jul 10 01:38:22.635513 kubelet[2299]: E0710 01:38:22.635505 2299 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c3f9faf5-cc25-4483-beb9-5dea29a71367-whisker-backend-key-pair podName:c3f9faf5-cc25-4483-beb9-5dea29a71367 nodeName:}" failed. No retries permitted until 2025-07-10 01:38:23.635498674 +0000 UTC m=+1523.503925557 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "whisker-backend-key-pair" (UniqueName: "kubernetes.io/secret/c3f9faf5-cc25-4483-beb9-5dea29a71367-whisker-backend-key-pair") pod "whisker-5bc4d9bd7d-nwwj6" (UID: "c3f9faf5-cc25-4483-beb9-5dea29a71367") : failed to sync secret cache: timed out waiting for the condition Jul 10 01:38:22.639877 kubelet[2299]: W0710 01:38:22.639850 2299 reflector.go:561] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://139.178.70.102:6443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=687": dial tcp 139.178.70.102:6443: connect: connection refused Jul 10 01:38:22.639918 kubelet[2299]: E0710 01:38:22.639889 2299 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://139.178.70.102:6443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=687\": dial tcp 139.178.70.102:6443: connect: connection refused" logger="UnhandledError" Jul 10 01:38:22.653501 kubelet[2299]: E0710 01:38:22.653491 2299 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Jul 10 01:38:22.653563 kubelet[2299]: E0710 01:38:22.653555 2299 projected.go:194] Error preparing data for projected volume kube-api-access-wpcvh for pod kube-system/kube-proxy-rxvps: failed to sync configmap cache: timed out waiting for the condition Jul 10 01:38:22.653630 kubelet[2299]: E0710 01:38:22.653622 2299 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/22eb6a01-1430-4380-b1df-6cb2ed0c8d8b-kube-api-access-wpcvh podName:22eb6a01-1430-4380-b1df-6cb2ed0c8d8b nodeName:}" failed. No retries permitted until 2025-07-10 01:38:23.653614167 +0000 UTC m=+1523.522041051 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-wpcvh" (UniqueName: "kubernetes.io/projected/22eb6a01-1430-4380-b1df-6cb2ed0c8d8b-kube-api-access-wpcvh") pod "kube-proxy-rxvps" (UID: "22eb6a01-1430-4380-b1df-6cb2ed0c8d8b") : failed to sync configmap cache: timed out waiting for the condition Jul 10 01:38:22.653720 kubelet[2299]: E0710 01:38:22.653503 2299 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Jul 10 01:38:22.653771 kubelet[2299]: E0710 01:38:22.653763 2299 projected.go:194] Error preparing data for projected volume kube-api-access-4bl2z for pod kube-system/coredns-7c65d6cfc9-4k5ld: failed to sync configmap cache: timed out waiting for the condition Jul 10 01:38:22.653838 kubelet[2299]: E0710 01:38:22.653831 2299 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/a29ef6dc-4246-436d-87dd-9c8e96247aeb-kube-api-access-4bl2z podName:a29ef6dc-4246-436d-87dd-9c8e96247aeb nodeName:}" failed. No retries permitted until 2025-07-10 01:38:23.65382403 +0000 UTC m=+1523.522250913 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-4bl2z" (UniqueName: "kubernetes.io/projected/a29ef6dc-4246-436d-87dd-9c8e96247aeb-kube-api-access-4bl2z") pod "coredns-7c65d6cfc9-4k5ld" (UID: "a29ef6dc-4246-436d-87dd-9c8e96247aeb") : failed to sync configmap cache: timed out waiting for the condition Jul 10 01:38:22.669051 kubelet[2299]: E0710 01:38:22.669034 2299 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Jul 10 01:38:22.669051 kubelet[2299]: E0710 01:38:22.669051 2299 projected.go:194] Error preparing data for projected volume kube-api-access-pwvqb for pod kube-system/coredns-7c65d6cfc9-snhl5: failed to sync configmap cache: timed out waiting for the condition Jul 10 01:38:22.669139 kubelet[2299]: E0710 01:38:22.669079 2299 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3459c244-a1ae-43bc-ad86-239a6e665757-kube-api-access-pwvqb podName:3459c244-a1ae-43bc-ad86-239a6e665757 nodeName:}" failed. No retries permitted until 2025-07-10 01:38:23.669069617 +0000 UTC m=+1523.537496505 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-pwvqb" (UniqueName: "kubernetes.io/projected/3459c244-a1ae-43bc-ad86-239a6e665757-kube-api-access-pwvqb") pod "coredns-7c65d6cfc9-snhl5" (UID: "3459c244-a1ae-43bc-ad86-239a6e665757") : failed to sync configmap cache: timed out waiting for the condition Jul 10 01:38:22.789188 kubelet[2299]: E0710 01:38:22.786844 2299 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://139.178.70.102:6443/api/v1/namespaces/tigera-operator/events\": dial tcp 139.178.70.102:6443: connect: connection refused" event="&Event{ObjectMeta:{tigera-operator-5bf8dfcb4-twgs2.1850c020102e5a9f tigera-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:tigera-operator,Name:tigera-operator-5bf8dfcb4-twgs2,UID:9c135a1b-00bf-4e6f-87fa-9ac292c6a135,APIVersion:v1,ResourceVersion:382,FieldPath:spec.containers{tigera-operator},},Reason:Pulled,Message:Container image \"quay.io/tigera/operator:v1.38.3\" already present on machine,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-07-10 01:38:18.990082719 +0000 UTC m=+1518.858509606,LastTimestamp:2025-07-10 01:38:18.990082719 +0000 UTC m=+1518.858509606,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jul 10 01:38:22.919366 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-faf470fc452c2f07757eeeb2a3a0f4d17d9a92da7cefb8e597308394b6823856-rootfs.mount: Deactivated successfully. Jul 10 01:38:22.919470 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-faf470fc452c2f07757eeeb2a3a0f4d17d9a92da7cefb8e597308394b6823856-shm.mount: Deactivated successfully. 
Jul 10 01:38:22.968162 kubelet[2299]: W0710 01:38:22.968119 2299 reflector.go:561] object-"calico-system"/"typha-certs": failed to list *v1.Secret: Get "https://139.178.70.102:6443/api/v1/namespaces/calico-system/secrets?fieldSelector=metadata.name%3Dtypha-certs&resourceVersion=612": dial tcp 139.178.70.102:6443: connect: connection refused Jul 10 01:38:22.968238 kubelet[2299]: E0710 01:38:22.968170 2299 reflector.go:158] "Unhandled Error" err="object-\"calico-system\"/\"typha-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://139.178.70.102:6443/api/v1/namespaces/calico-system/secrets?fieldSelector=metadata.name%3Dtypha-certs&resourceVersion=612\": dial tcp 139.178.70.102:6443: connect: connection refused" logger="UnhandledError" Jul 10 01:38:22.998632 kubelet[2299]: W0710 01:38:22.998592 2299 reflector.go:561] object-"calico-system"/"whisker-ca-bundle": failed to list *v1.ConfigMap: Get "https://139.178.70.102:6443/api/v1/namespaces/calico-system/configmaps?fieldSelector=metadata.name%3Dwhisker-ca-bundle&resourceVersion=687": dial tcp 139.178.70.102:6443: connect: connection refused Jul 10 01:38:22.998792 kubelet[2299]: E0710 01:38:22.998777 2299 reflector.go:158] "Unhandled Error" err="object-\"calico-system\"/\"whisker-ca-bundle\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://139.178.70.102:6443/api/v1/namespaces/calico-system/configmaps?fieldSelector=metadata.name%3Dwhisker-ca-bundle&resourceVersion=687\": dial tcp 139.178.70.102:6443: connect: connection refused" logger="UnhandledError" Jul 10 01:38:23.081603 kubelet[2299]: W0710 01:38:23.081559 2299 reflector.go:561] object-"calico-apiserver"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://139.178.70.102:6443/api/v1/namespaces/calico-apiserver/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=687": dial tcp 139.178.70.102:6443: connect: connection refused Jul 10 01:38:23.081784 kubelet[2299]: E0710 01:38:23.081766 2299 reflector.go:158] "Unhandled Error" err="object-\"calico-apiserver\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://139.178.70.102:6443/api/v1/namespaces/calico-apiserver/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=687\": dial tcp 139.178.70.102:6443: connect: connection refused" logger="UnhandledError" Jul 10 01:38:23.082082 kubelet[2299]: W0710 01:38:23.082055 2299 reflector.go:561] object-"calico-system"/"node-certs": failed to list *v1.Secret: Get "https://139.178.70.102:6443/api/v1/namespaces/calico-system/secrets?fieldSelector=metadata.name%3Dnode-certs&resourceVersion=612": dial tcp 139.178.70.102:6443: connect: connection refused Jul 10 01:38:23.082180 kubelet[2299]: E0710 01:38:23.082151 2299 reflector.go:158] "Unhandled Error" err="object-\"calico-system\"/\"node-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://139.178.70.102:6443/api/v1/namespaces/calico-system/secrets?fieldSelector=metadata.name%3Dnode-certs&resourceVersion=612\": dial tcp 139.178.70.102:6443: connect: connection refused" logger="UnhandledError" Jul 10 01:38:23.126068 kubelet[2299]: W0710 01:38:23.126040 2299 reflector.go:561] object-"calico-system"/"goldmane-key-pair": failed to list *v1.Secret: Get "https://139.178.70.102:6443/api/v1/namespaces/calico-system/secrets?fieldSelector=metadata.name%3Dgoldmane-key-pair&resourceVersion=612": dial tcp 139.178.70.102:6443: connect: connection refused Jul 10 01:38:23.126173 kubelet[2299]: E0710 
01:38:23.126157 2299 reflector.go:158] "Unhandled Error" err="object-\"calico-system\"/\"goldmane-key-pair\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://139.178.70.102:6443/api/v1/namespaces/calico-system/secrets?fieldSelector=metadata.name%3Dgoldmane-key-pair&resourceVersion=612\": dial tcp 139.178.70.102:6443: connect: connection refused" logger="UnhandledError" Jul 10 01:38:23.263992 kubelet[2299]: I0710 01:38:23.263666 2299 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6503e247e079e9b1040ac4f9c23ba0f9f2bd42e5328355dba03928c27dd6e73b" Jul 10 01:38:23.263992 kubelet[2299]: I0710 01:38:23.263708 2299 scope.go:117] "RemoveContainer" containerID="f9259ae361e3731af85557e5b9606bd5ebae0bba6b9af22c45cecaaa08d4539c" Jul 10 01:38:23.263992 kubelet[2299]: I0710 01:38:23.263812 2299 status_manager.go:851] "Failed to get status for pod" podUID="9c135a1b-00bf-4e6f-87fa-9ac292c6a135" pod="tigera-operator/tigera-operator-5bf8dfcb4-twgs2" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/tigera-operator/pods/tigera-operator-5bf8dfcb4-twgs2\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:23.263992 kubelet[2299]: I0710 01:38:23.263949 2299 status_manager.go:851] "Failed to get status for pod" podUID="bb9848ea-740a-453f-b511-e75cc1983690" pod="calico-system/calico-typha-66ddcf689b-z7vqm" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/calico-system/pods/calico-typha-66ddcf689b-z7vqm\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:23.264176 kubelet[2299]: I0710 01:38:23.264062 2299 status_manager.go:851] "Failed to get status for pod" podUID="74cf1bc5-5d5a-4dc7-850a-71013984af05" pod="calico-apiserver/calico-apiserver-6d44674bc4-b2wqb" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/calico-apiserver/pods/calico-apiserver-6d44674bc4-b2wqb\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:23.264212 kubelet[2299]: I0710 01:38:23.264172 2299 status_manager.go:851] "Failed to get status for pod" podUID="6367e512-6f46-407d-94e1-a5c573185269" pod="calico-system/calico-node-2k6z4" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/calico-system/pods/calico-node-2k6z4\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:23.264297 kubelet[2299]: I0710 01:38:23.264277 2299 status_manager.go:851] "Failed to get status for pod" podUID="5f01bcaa-ff1c-4bd5-988b-d3c60c6abdcc" pod="calico-system/calico-kube-controllers-5477ff879d-j2p5q" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/calico-system/pods/calico-kube-controllers-5477ff879d-j2p5q\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:23.264413 kubelet[2299]: I0710 01:38:23.264388 2299 status_manager.go:851] "Failed to get status for pod" podUID="8acd60714a0f0f6f5e038fa659db2909" pod="kube-system/kube-apiserver-localhost" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/kube-system/pods/kube-apiserver-localhost\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:23.264531 kubelet[2299]: I0710 01:38:23.264513 2299 status_manager.go:851] "Failed to get status for pod" podUID="8e8146e9-6407-49b7-8cef-e26dac385734" pod="calico-apiserver/calico-apiserver-6d44674bc4-w2f48" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/calico-apiserver/pods/calico-apiserver-6d44674bc4-w2f48\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:23.264636 kubelet[2299]: I0710 01:38:23.264618 2299 
status_manager.go:851] "Failed to get status for pod" podUID="a29ef6dc-4246-436d-87dd-9c8e96247aeb" pod="kube-system/coredns-7c65d6cfc9-4k5ld" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-4k5ld\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:23.264733 env[1363]: time="2025-07-10T01:38:23.264625480Z" level=info msg="StopPodSandbox for \"6503e247e079e9b1040ac4f9c23ba0f9f2bd42e5328355dba03928c27dd6e73b\"" Jul 10 01:38:23.264986 kubelet[2299]: I0710 01:38:23.264922 2299 status_manager.go:851] "Failed to get status for pod" podUID="3459c244-a1ae-43bc-ad86-239a6e665757" pod="kube-system/coredns-7c65d6cfc9-snhl5" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-snhl5\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:23.265050 kubelet[2299]: I0710 01:38:23.265031 2299 status_manager.go:851] "Failed to get status for pod" podUID="ced04dc5-79ee-4a07-a568-b0fd4007f64c" pod="calico-system/goldmane-58fd7646b9-zxwst" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/calico-system/pods/goldmane-58fd7646b9-zxwst\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:23.265155 kubelet[2299]: I0710 01:38:23.265136 2299 status_manager.go:851] "Failed to get status for pod" podUID="b35b56493416c25588cb530e37ffc065" pod="kube-system/kube-scheduler-localhost" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/kube-system/pods/kube-scheduler-localhost\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:23.265299 kubelet[2299]: I0710 01:38:23.265280 2299 status_manager.go:851] "Failed to get status for pod" podUID="3f04709fe51ae4ab5abd58e8da771b74" pod="kube-system/kube-controller-manager-localhost" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/kube-system/pods/kube-controller-manager-localhost\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:23.265410 env[1363]: time="2025-07-10T01:38:23.265391297Z" level=info msg="Container to stop \"f9259ae361e3731af85557e5b9606bd5ebae0bba6b9af22c45cecaaa08d4539c\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 10 01:38:23.265500 env[1363]: time="2025-07-10T01:38:23.265483879Z" level=info msg="Container to stop \"8b834cb8605645f5c7c427dfcf3dcbb8b3ef5c7c0f8f023ef0d1fbc5a5c10bd4\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 10 01:38:23.266478 kubelet[2299]: I0710 01:38:23.266453 2299 status_manager.go:851] "Failed to get status for pod" podUID="c3f9faf5-cc25-4483-beb9-5dea29a71367" pod="calico-system/whisker-5bc4d9bd7d-nwwj6" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/calico-system/pods/whisker-5bc4d9bd7d-nwwj6\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:23.268196 kubelet[2299]: I0710 01:38:23.268183 2299 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="faf470fc452c2f07757eeeb2a3a0f4d17d9a92da7cefb8e597308394b6823856" Jul 10 01:38:23.268469 env[1363]: time="2025-07-10T01:38:23.268445167Z" level=info msg="StopPodSandbox for \"faf470fc452c2f07757eeeb2a3a0f4d17d9a92da7cefb8e597308394b6823856\"" Jul 10 01:38:23.268582 env[1363]: time="2025-07-10T01:38:23.268494211Z" level=info msg="Container to stop \"a200d687dc84da70963635919a56406c3ab3b1d9e93e3d78979e61b2e309695b\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 10 01:38:23.268769 kubelet[2299]: I0710 01:38:23.268757 2299 
scope.go:117] "RemoveContainer" containerID="1d85ec74d241860eeadf05dad7e3fcac3b836bb5b8e411f5de5ce4e21f282532" Jul 10 01:38:23.271511 kubelet[2299]: I0710 01:38:23.271497 2299 status_manager.go:851] "Failed to get status for pod" podUID="c3f9faf5-cc25-4483-beb9-5dea29a71367" pod="calico-system/whisker-5bc4d9bd7d-nwwj6" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/calico-system/pods/whisker-5bc4d9bd7d-nwwj6\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:23.271839 kubelet[2299]: I0710 01:38:23.271812 2299 status_manager.go:851] "Failed to get status for pod" podUID="b35b56493416c25588cb530e37ffc065" pod="kube-system/kube-scheduler-localhost" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/kube-system/pods/kube-scheduler-localhost\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:23.280399 kubelet[2299]: I0710 01:38:23.280380 2299 status_manager.go:851] "Failed to get status for pod" podUID="3f04709fe51ae4ab5abd58e8da771b74" pod="kube-system/kube-controller-manager-localhost" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/kube-system/pods/kube-controller-manager-localhost\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:23.280606 env[1363]: time="2025-07-10T01:38:23.280588450Z" level=info msg="RemoveContainer for \"f9259ae361e3731af85557e5b9606bd5ebae0bba6b9af22c45cecaaa08d4539c\"" Jul 10 01:38:23.281056 kubelet[2299]: I0710 01:38:23.281042 2299 status_manager.go:851] "Failed to get status for pod" podUID="9c135a1b-00bf-4e6f-87fa-9ac292c6a135" pod="tigera-operator/tigera-operator-5bf8dfcb4-twgs2" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/tigera-operator/pods/tigera-operator-5bf8dfcb4-twgs2\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:23.281980 kubelet[2299]: I0710 01:38:23.281966 2299 status_manager.go:851] "Failed to get status for pod" podUID="bb9848ea-740a-453f-b511-e75cc1983690" pod="calico-system/calico-typha-66ddcf689b-z7vqm" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/calico-system/pods/calico-typha-66ddcf689b-z7vqm\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:23.282130 kubelet[2299]: I0710 01:38:23.282114 2299 status_manager.go:851] "Failed to get status for pod" podUID="74cf1bc5-5d5a-4dc7-850a-71013984af05" pod="calico-apiserver/calico-apiserver-6d44674bc4-b2wqb" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/calico-apiserver/pods/calico-apiserver-6d44674bc4-b2wqb\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:23.282266 kubelet[2299]: I0710 01:38:23.282254 2299 status_manager.go:851] "Failed to get status for pod" podUID="6367e512-6f46-407d-94e1-a5c573185269" pod="calico-system/calico-node-2k6z4" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/calico-system/pods/calico-node-2k6z4\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:23.282398 kubelet[2299]: I0710 01:38:23.282386 2299 status_manager.go:851] "Failed to get status for pod" podUID="5f01bcaa-ff1c-4bd5-988b-d3c60c6abdcc" pod="calico-system/calico-kube-controllers-5477ff879d-j2p5q" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/calico-system/pods/calico-kube-controllers-5477ff879d-j2p5q\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:23.282530 kubelet[2299]: I0710 01:38:23.282518 2299 status_manager.go:851] "Failed to get status for pod" podUID="3459c244-a1ae-43bc-ad86-239a6e665757" pod="kube-system/coredns-7c65d6cfc9-snhl5" 
err="Get \"https://139.178.70.102:6443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-snhl5\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:23.282675 kubelet[2299]: I0710 01:38:23.282662 2299 status_manager.go:851] "Failed to get status for pod" podUID="ced04dc5-79ee-4a07-a568-b0fd4007f64c" pod="calico-system/goldmane-58fd7646b9-zxwst" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/calico-system/pods/goldmane-58fd7646b9-zxwst\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:23.282799 kubelet[2299]: I0710 01:38:23.282787 2299 status_manager.go:851] "Failed to get status for pod" podUID="8acd60714a0f0f6f5e038fa659db2909" pod="kube-system/kube-apiserver-localhost" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/kube-system/pods/kube-apiserver-localhost\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:23.282927 kubelet[2299]: I0710 01:38:23.282915 2299 status_manager.go:851] "Failed to get status for pod" podUID="8e8146e9-6407-49b7-8cef-e26dac385734" pod="calico-apiserver/calico-apiserver-6d44674bc4-w2f48" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/calico-apiserver/pods/calico-apiserver-6d44674bc4-w2f48\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:23.283065 kubelet[2299]: I0710 01:38:23.283054 2299 status_manager.go:851] "Failed to get status for pod" podUID="a29ef6dc-4246-436d-87dd-9c8e96247aeb" pod="kube-system/coredns-7c65d6cfc9-4k5ld" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-4k5ld\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:23.283758 kubelet[2299]: I0710 01:38:23.283745 2299 status_manager.go:851] "Failed to get status for pod" podUID="6367e512-6f46-407d-94e1-a5c573185269" pod="calico-system/calico-node-2k6z4" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/calico-system/pods/calico-node-2k6z4\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:23.284049 kubelet[2299]: I0710 01:38:23.284034 2299 status_manager.go:851] "Failed to get status for pod" podUID="5f01bcaa-ff1c-4bd5-988b-d3c60c6abdcc" pod="calico-system/calico-kube-controllers-5477ff879d-j2p5q" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/calico-system/pods/calico-kube-controllers-5477ff879d-j2p5q\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:23.284197 kubelet[2299]: I0710 01:38:23.284185 2299 status_manager.go:851] "Failed to get status for pod" podUID="3459c244-a1ae-43bc-ad86-239a6e665757" pod="kube-system/coredns-7c65d6cfc9-snhl5" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-snhl5\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:23.284331 kubelet[2299]: I0710 01:38:23.284318 2299 status_manager.go:851] "Failed to get status for pod" podUID="ced04dc5-79ee-4a07-a568-b0fd4007f64c" pod="calico-system/goldmane-58fd7646b9-zxwst" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/calico-system/pods/goldmane-58fd7646b9-zxwst\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:23.284464 kubelet[2299]: I0710 01:38:23.284453 2299 status_manager.go:851] "Failed to get status for pod" podUID="8acd60714a0f0f6f5e038fa659db2909" pod="kube-system/kube-apiserver-localhost" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/kube-system/pods/kube-apiserver-localhost\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 
01:38:23.284620 kubelet[2299]: I0710 01:38:23.284607 2299 status_manager.go:851] "Failed to get status for pod" podUID="8e8146e9-6407-49b7-8cef-e26dac385734" pod="calico-apiserver/calico-apiserver-6d44674bc4-w2f48" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/calico-apiserver/pods/calico-apiserver-6d44674bc4-w2f48\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:23.284793 kubelet[2299]: I0710 01:38:23.284781 2299 status_manager.go:851] "Failed to get status for pod" podUID="a29ef6dc-4246-436d-87dd-9c8e96247aeb" pod="kube-system/coredns-7c65d6cfc9-4k5ld" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-4k5ld\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:23.284940 kubelet[2299]: I0710 01:38:23.284928 2299 status_manager.go:851] "Failed to get status for pod" podUID="c3f9faf5-cc25-4483-beb9-5dea29a71367" pod="calico-system/whisker-5bc4d9bd7d-nwwj6" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/calico-system/pods/whisker-5bc4d9bd7d-nwwj6\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:23.285512 kubelet[2299]: I0710 01:38:23.285500 2299 status_manager.go:851] "Failed to get status for pod" podUID="b35b56493416c25588cb530e37ffc065" pod="kube-system/kube-scheduler-localhost" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/kube-system/pods/kube-scheduler-localhost\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:23.285680 kubelet[2299]: I0710 01:38:23.285667 2299 status_manager.go:851] "Failed to get status for pod" podUID="3f04709fe51ae4ab5abd58e8da771b74" pod="kube-system/kube-controller-manager-localhost" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/kube-system/pods/kube-controller-manager-localhost\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:23.285822 kubelet[2299]: I0710 01:38:23.285809 2299 status_manager.go:851] "Failed to get status for pod" podUID="9c135a1b-00bf-4e6f-87fa-9ac292c6a135" pod="tigera-operator/tigera-operator-5bf8dfcb4-twgs2" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/tigera-operator/pods/tigera-operator-5bf8dfcb4-twgs2\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:23.285956 kubelet[2299]: I0710 01:38:23.285945 2299 status_manager.go:851] "Failed to get status for pod" podUID="bb9848ea-740a-453f-b511-e75cc1983690" pod="calico-system/calico-typha-66ddcf689b-z7vqm" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/calico-system/pods/calico-typha-66ddcf689b-z7vqm\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:23.286081 kubelet[2299]: I0710 01:38:23.286069 2299 status_manager.go:851] "Failed to get status for pod" podUID="74cf1bc5-5d5a-4dc7-850a-71013984af05" pod="calico-apiserver/calico-apiserver-6d44674bc4-b2wqb" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/calico-apiserver/pods/calico-apiserver-6d44674bc4-b2wqb\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:23.290663 env[1363]: time="2025-07-10T01:38:23.290623983Z" level=error msg="StopPodSandbox for \"6503e247e079e9b1040ac4f9c23ba0f9f2bd42e5328355dba03928c27dd6e73b\" failed" error="failed to destroy network for sandbox \"6503e247e079e9b1040ac4f9c23ba0f9f2bd42e5328355dba03928c27dd6e73b\": plugin type=\"calico\" failed (delete): error getting ClusterInformation: Get \"https://10.96.0.1:443/apis/crd.projectcalico.org/v1/clusterinformations/default\": dial tcp 10.96.0.1:443: connect: 
connection refused" Jul 10 01:38:23.290892 kubelet[2299]: E0710 01:38:23.290830 2299 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"6503e247e079e9b1040ac4f9c23ba0f9f2bd42e5328355dba03928c27dd6e73b\": plugin type=\"calico\" failed (delete): error getting ClusterInformation: Get \"https://10.96.0.1:443/apis/crd.projectcalico.org/v1/clusterinformations/default\": dial tcp 10.96.0.1:443: connect: connection refused" podSandboxID="6503e247e079e9b1040ac4f9c23ba0f9f2bd42e5328355dba03928c27dd6e73b" Jul 10 01:38:23.290892 kubelet[2299]: E0710 01:38:23.290852 2299 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"6503e247e079e9b1040ac4f9c23ba0f9f2bd42e5328355dba03928c27dd6e73b"} Jul 10 01:38:23.290892 kubelet[2299]: E0710 01:38:23.290876 2299 kubelet.go:2027] "Unhandled Error" err="failed to \"KillPodSandbox\" for \"5f01bcaa-ff1c-4bd5-988b-d3c60c6abdcc\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6503e247e079e9b1040ac4f9c23ba0f9f2bd42e5328355dba03928c27dd6e73b\\\": plugin type=\\\"calico\\\" failed (delete): error getting ClusterInformation: Get \\\"https://10.96.0.1:443/apis/crd.projectcalico.org/v1/clusterinformations/default\\\": dial tcp 10.96.0.1:443: connect: connection refused\"" logger="UnhandledError" Jul 10 01:38:23.291995 kubelet[2299]: E0710 01:38:23.291966 2299 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"5f01bcaa-ff1c-4bd5-988b-d3c60c6abdcc\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6503e247e079e9b1040ac4f9c23ba0f9f2bd42e5328355dba03928c27dd6e73b\\\": plugin type=\\\"calico\\\" failed (delete): error getting ClusterInformation: Get \\\"https://10.96.0.1:443/apis/crd.projectcalico.org/v1/clusterinformations/default\\\": dial tcp 10.96.0.1:443: connect: connection refused\"" pod="calico-system/calico-kube-controllers-5477ff879d-j2p5q" podUID="5f01bcaa-ff1c-4bd5-988b-d3c60c6abdcc" Jul 10 01:38:23.292279 env[1363]: time="2025-07-10T01:38:23.292265409Z" level=info msg="RemoveContainer for \"f9259ae361e3731af85557e5b9606bd5ebae0bba6b9af22c45cecaaa08d4539c\" returns successfully" Jul 10 01:38:23.299316 env[1363]: time="2025-07-10T01:38:23.299290918Z" level=error msg="StopPodSandbox for \"faf470fc452c2f07757eeeb2a3a0f4d17d9a92da7cefb8e597308394b6823856\" failed" error="failed to destroy network for sandbox \"faf470fc452c2f07757eeeb2a3a0f4d17d9a92da7cefb8e597308394b6823856\": plugin type=\"calico\" failed (delete): error getting ClusterInformation: Get \"https://10.96.0.1:443/apis/crd.projectcalico.org/v1/clusterinformations/default\": dial tcp 10.96.0.1:443: connect: connection refused" Jul 10 01:38:23.299478 kubelet[2299]: E0710 01:38:23.299412 2299 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"faf470fc452c2f07757eeeb2a3a0f4d17d9a92da7cefb8e597308394b6823856\": plugin type=\"calico\" failed (delete): error getting ClusterInformation: Get \"https://10.96.0.1:443/apis/crd.projectcalico.org/v1/clusterinformations/default\": dial tcp 10.96.0.1:443: connect: connection refused" podSandboxID="faf470fc452c2f07757eeeb2a3a0f4d17d9a92da7cefb8e597308394b6823856" Jul 10 01:38:23.299478 kubelet[2299]: E0710 01:38:23.299435 2299 kuberuntime_manager.go:1479] "Failed to stop sandbox" 
podSandboxID={"Type":"containerd","ID":"faf470fc452c2f07757eeeb2a3a0f4d17d9a92da7cefb8e597308394b6823856"} Jul 10 01:38:23.299478 kubelet[2299]: E0710 01:38:23.299461 2299 kubelet.go:2027] "Unhandled Error" err="failed to \"KillPodSandbox\" for \"74cf1bc5-5d5a-4dc7-850a-71013984af05\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"faf470fc452c2f07757eeeb2a3a0f4d17d9a92da7cefb8e597308394b6823856\\\": plugin type=\\\"calico\\\" failed (delete): error getting ClusterInformation: Get \\\"https://10.96.0.1:443/apis/crd.projectcalico.org/v1/clusterinformations/default\\\": dial tcp 10.96.0.1:443: connect: connection refused\"" logger="UnhandledError" Jul 10 01:38:23.300571 kubelet[2299]: E0710 01:38:23.300540 2299 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"74cf1bc5-5d5a-4dc7-850a-71013984af05\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"faf470fc452c2f07757eeeb2a3a0f4d17d9a92da7cefb8e597308394b6823856\\\": plugin type=\\\"calico\\\" failed (delete): error getting ClusterInformation: Get \\\"https://10.96.0.1:443/apis/crd.projectcalico.org/v1/clusterinformations/default\\\": dial tcp 10.96.0.1:443: connect: connection refused\"" pod="calico-apiserver/calico-apiserver-6d44674bc4-b2wqb" podUID="74cf1bc5-5d5a-4dc7-850a-71013984af05" Jul 10 01:38:23.335337 kubelet[2299]: W0710 01:38:23.335274 2299 reflector.go:561] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: Get "https://139.178.70.102:6443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dcoredns&resourceVersion=687": dial tcp 139.178.70.102:6443: connect: connection refused Jul 10 01:38:23.335337 kubelet[2299]: E0710 01:38:23.335318 2299 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"coredns\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://139.178.70.102:6443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dcoredns&resourceVersion=687\": dial tcp 139.178.70.102:6443: connect: connection refused" logger="UnhandledError" Jul 10 01:38:23.416557 kubelet[2299]: W0710 01:38:23.416518 2299 reflector.go:561] object-"calico-system"/"whisker-backend-key-pair": failed to list *v1.Secret: Get "https://139.178.70.102:6443/api/v1/namespaces/calico-system/secrets?fieldSelector=metadata.name%3Dwhisker-backend-key-pair&resourceVersion=612": dial tcp 139.178.70.102:6443: connect: connection refused Jul 10 01:38:23.416737 kubelet[2299]: E0710 01:38:23.416720 2299 reflector.go:158] "Unhandled Error" err="object-\"calico-system\"/\"whisker-backend-key-pair\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://139.178.70.102:6443/api/v1/namespaces/calico-system/secrets?fieldSelector=metadata.name%3Dwhisker-backend-key-pair&resourceVersion=612\": dial tcp 139.178.70.102:6443: connect: connection refused" logger="UnhandledError" Jul 10 01:38:23.438797 kubelet[2299]: W0710 01:38:23.438761 2299 reflector.go:561] object-"calico-system"/"tigera-ca-bundle": failed to list *v1.ConfigMap: Get "https://139.178.70.102:6443/api/v1/namespaces/calico-system/configmaps?fieldSelector=metadata.name%3Dtigera-ca-bundle&resourceVersion=687": dial tcp 139.178.70.102:6443: connect: connection refused Jul 10 01:38:23.438863 kubelet[2299]: E0710 01:38:23.438803 2299 reflector.go:158] "Unhandled Error" err="object-\"calico-system\"/\"tigera-ca-bundle\": Failed to watch *v1.ConfigMap: failed to list 
*v1.ConfigMap: Get \"https://139.178.70.102:6443/api/v1/namespaces/calico-system/configmaps?fieldSelector=metadata.name%3Dtigera-ca-bundle&resourceVersion=687\": dial tcp 139.178.70.102:6443: connect: connection refused" logger="UnhandledError" Jul 10 01:38:23.610962 kubelet[2299]: W0710 01:38:23.610909 2299 reflector.go:561] object-"calico-system"/"goldmane-ca-bundle": failed to list *v1.ConfigMap: Get "https://139.178.70.102:6443/api/v1/namespaces/calico-system/configmaps?fieldSelector=metadata.name%3Dgoldmane-ca-bundle&resourceVersion=687": dial tcp 139.178.70.102:6443: connect: connection refused Jul 10 01:38:23.611257 kubelet[2299]: E0710 01:38:23.610965 2299 reflector.go:158] "Unhandled Error" err="object-\"calico-system\"/\"goldmane-ca-bundle\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://139.178.70.102:6443/api/v1/namespaces/calico-system/configmaps?fieldSelector=metadata.name%3Dgoldmane-ca-bundle&resourceVersion=687\": dial tcp 139.178.70.102:6443: connect: connection refused" logger="UnhandledError" Jul 10 01:38:23.632126 kubelet[2299]: W0710 01:38:23.632052 2299 reflector.go:561] object-"calico-apiserver"/"calico-apiserver-certs": failed to list *v1.Secret: Get "https://139.178.70.102:6443/api/v1/namespaces/calico-apiserver/secrets?fieldSelector=metadata.name%3Dcalico-apiserver-certs&resourceVersion=612": dial tcp 139.178.70.102:6443: connect: connection refused Jul 10 01:38:23.632126 kubelet[2299]: E0710 01:38:23.632089 2299 reflector.go:158] "Unhandled Error" err="object-\"calico-apiserver\"/\"calico-apiserver-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://139.178.70.102:6443/api/v1/namespaces/calico-apiserver/secrets?fieldSelector=metadata.name%3Dcalico-apiserver-certs&resourceVersion=612\": dial tcp 139.178.70.102:6443: connect: connection refused" logger="UnhandledError" Jul 10 01:38:23.666252 kubelet[2299]: W0710 01:38:23.666191 2299 reflector.go:561] object-"tigera-operator"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://139.178.70.102:6443/api/v1/namespaces/tigera-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=687": dial tcp 139.178.70.102:6443: connect: connection refused Jul 10 01:38:23.666252 kubelet[2299]: E0710 01:38:23.666229 2299 reflector.go:158] "Unhandled Error" err="object-\"tigera-operator\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://139.178.70.102:6443/api/v1/namespaces/tigera-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=687\": dial tcp 139.178.70.102:6443: connect: connection refused" logger="UnhandledError" Jul 10 01:38:23.672852 env[1363]: time="2025-07-10T01:38:23.672819651Z" level=info msg="StopContainer for \"dc40952d28006045e942aa22b5bc381b2f7d35d15ba79973f504ec8ad17ec2d9\" with timeout 2 (s)" Jul 10 01:38:23.674929 env[1363]: time="2025-07-10T01:38:23.674908959Z" level=info msg="Stop container \"dc40952d28006045e942aa22b5bc381b2f7d35d15ba79973f504ec8ad17ec2d9\" with signal terminated" Jul 10 01:38:23.685755 kubelet[2299]: E0710 01:38:23.685707 2299 kuberuntime_container.go:691] "PreStop hook failed" err="command '/bin/calico-node -shutdown' exited with 137: " pod="calico-system/calico-node-2k6z4" podUID="6367e512-6f46-407d-94e1-a5c573185269" containerName="calico-node" containerID="containerd://dc40952d28006045e942aa22b5bc381b2f7d35d15ba79973f504ec8ad17ec2d9" Jul 10 01:38:23.698702 systemd[1]: 
run-containerd-io.containerd.runtime.v2.task-k8s.io-dc40952d28006045e942aa22b5bc381b2f7d35d15ba79973f504ec8ad17ec2d9-rootfs.mount: Deactivated successfully. Jul 10 01:38:23.700097 env[1363]: time="2025-07-10T01:38:23.700071881Z" level=info msg="shim disconnected" id=dc40952d28006045e942aa22b5bc381b2f7d35d15ba79973f504ec8ad17ec2d9 Jul 10 01:38:23.700168 env[1363]: time="2025-07-10T01:38:23.700156805Z" level=warning msg="cleaning up after shim disconnected" id=dc40952d28006045e942aa22b5bc381b2f7d35d15ba79973f504ec8ad17ec2d9 namespace=k8s.io Jul 10 01:38:23.700230 env[1363]: time="2025-07-10T01:38:23.700220779Z" level=info msg="cleaning up dead shim" Jul 10 01:38:23.704846 env[1363]: time="2025-07-10T01:38:23.704834168Z" level=warning msg="cleanup warnings time=\"2025-07-10T01:38:23Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=8280 runtime=io.containerd.runc.v2\n" Jul 10 01:38:23.711315 env[1363]: time="2025-07-10T01:38:23.711300171Z" level=info msg="StopContainer for \"dc40952d28006045e942aa22b5bc381b2f7d35d15ba79973f504ec8ad17ec2d9\" returns successfully" Jul 10 01:38:23.722517 env[1363]: time="2025-07-10T01:38:23.722501743Z" level=info msg="CreateContainer within sandbox \"9dc4577f9ef3039e231d6f8c765d532b1f3c07ac6e787523cfb69a78230909e1\" for container &ContainerMetadata{Name:calico-node,Attempt:1,}" Jul 10 01:38:23.730219 kubelet[2299]: W0710 01:38:23.730148 2299 reflector.go:561] object-"tigera-operator"/"kubernetes-services-endpoint": failed to list *v1.ConfigMap: Get "https://139.178.70.102:6443/api/v1/namespaces/tigera-operator/configmaps?fieldSelector=metadata.name%3Dkubernetes-services-endpoint&resourceVersion=687": dial tcp 139.178.70.102:6443: connect: connection refused Jul 10 01:38:23.730219 kubelet[2299]: E0710 01:38:23.730196 2299 reflector.go:158] "Unhandled Error" err="object-\"tigera-operator\"/\"kubernetes-services-endpoint\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://139.178.70.102:6443/api/v1/namespaces/tigera-operator/configmaps?fieldSelector=metadata.name%3Dkubernetes-services-endpoint&resourceVersion=687\": dial tcp 139.178.70.102:6443: connect: connection refused" logger="UnhandledError" Jul 10 01:38:23.735229 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3365518783.mount: Deactivated successfully. 
Jul 10 01:38:23.737436 env[1363]: time="2025-07-10T01:38:23.737414255Z" level=info msg="CreateContainer within sandbox \"9dc4577f9ef3039e231d6f8c765d532b1f3c07ac6e787523cfb69a78230909e1\" for &ContainerMetadata{Name:calico-node,Attempt:1,} returns container id \"1f6123a2530db4e70f4a3e6b47c035375d25ef2c7098e1ca906af5c39164caa1\"" Jul 10 01:38:23.738325 env[1363]: time="2025-07-10T01:38:23.738303445Z" level=info msg="StartContainer for \"1f6123a2530db4e70f4a3e6b47c035375d25ef2c7098e1ca906af5c39164caa1\"" Jul 10 01:38:23.777558 env[1363]: time="2025-07-10T01:38:23.777535937Z" level=info msg="StartContainer for \"1f6123a2530db4e70f4a3e6b47c035375d25ef2c7098e1ca906af5c39164caa1\" returns successfully" Jul 10 01:38:23.842633 env[1363]: time="2025-07-10T01:38:23.842604948Z" level=info msg="shim disconnected" id=2c9a852303586e6248b136709c2283dd38b0cb347056e0f9d8aa77a5eb662d30 Jul 10 01:38:23.842859 env[1363]: time="2025-07-10T01:38:23.842845506Z" level=warning msg="cleaning up after shim disconnected" id=2c9a852303586e6248b136709c2283dd38b0cb347056e0f9d8aa77a5eb662d30 namespace=k8s.io Jul 10 01:38:23.843101 env[1363]: time="2025-07-10T01:38:23.842936691Z" level=info msg="cleaning up dead shim" Jul 10 01:38:23.843168 env[1363]: time="2025-07-10T01:38:23.842930185Z" level=info msg="shim disconnected" id=9f554c7a1a1192bf8f33530ae0b697d908ab3fedeb5044bf3f3dc34eb3189402 Jul 10 01:38:23.844329 env[1363]: time="2025-07-10T01:38:23.843205027Z" level=warning msg="cleaning up after shim disconnected" id=9f554c7a1a1192bf8f33530ae0b697d908ab3fedeb5044bf3f3dc34eb3189402 namespace=k8s.io Jul 10 01:38:23.844329 env[1363]: time="2025-07-10T01:38:23.843213545Z" level=info msg="cleaning up dead shim" Jul 10 01:38:23.849602 env[1363]: time="2025-07-10T01:38:23.849585072Z" level=warning msg="cleanup warnings time=\"2025-07-10T01:38:23Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=8355 runtime=io.containerd.runc.v2\n" Jul 10 01:38:23.849952 env[1363]: time="2025-07-10T01:38:23.849938974Z" level=warning msg="cleanup warnings time=\"2025-07-10T01:38:23Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=8354 runtime=io.containerd.runc.v2\n" Jul 10 01:38:23.856326 env[1363]: time="2025-07-10T01:38:23.856308405Z" level=info msg="StopContainer for \"9f554c7a1a1192bf8f33530ae0b697d908ab3fedeb5044bf3f3dc34eb3189402\" returns successfully" Jul 10 01:38:23.856898 env[1363]: time="2025-07-10T01:38:23.856885690Z" level=info msg="StopContainer for \"2c9a852303586e6248b136709c2283dd38b0cb347056e0f9d8aa77a5eb662d30\" returns successfully" Jul 10 01:38:23.858070 env[1363]: time="2025-07-10T01:38:23.858057137Z" level=info msg="CreateContainer within sandbox \"47b065192ffd0b7504649af3406bb653c598c34d33430dd9e03fcdcb34aca714\" for container &ContainerMetadata{Name:coredns,Attempt:1,}" Jul 10 01:38:23.859381 env[1363]: time="2025-07-10T01:38:23.859363260Z" level=info msg="CreateContainer within sandbox \"5e9aedbb1d15e1d7bd8b79126017424346117b11833100260ee33d8092673319\" for container &ContainerMetadata{Name:coredns,Attempt:1,}" Jul 10 01:38:23.875157 env[1363]: time="2025-07-10T01:38:23.875106448Z" level=info msg="CreateContainer within sandbox \"5e9aedbb1d15e1d7bd8b79126017424346117b11833100260ee33d8092673319\" for &ContainerMetadata{Name:coredns,Attempt:1,} returns container id \"1e5279231269dfc537e88aa5fdcd50f6d78342ad66bf00ac46bda4314ea2b591\"" Jul 10 01:38:23.875811 env[1363]: time="2025-07-10T01:38:23.875792077Z" level=info msg="StartContainer for 
\"1e5279231269dfc537e88aa5fdcd50f6d78342ad66bf00ac46bda4314ea2b591\"" Jul 10 01:38:23.876718 env[1363]: time="2025-07-10T01:38:23.876699656Z" level=info msg="CreateContainer within sandbox \"47b065192ffd0b7504649af3406bb653c598c34d33430dd9e03fcdcb34aca714\" for &ContainerMetadata{Name:coredns,Attempt:1,} returns container id \"225609f895efa2c766fff4c357d326a94aeff3f2dc9267546e9096e0fdcbf87a\"" Jul 10 01:38:23.876928 env[1363]: time="2025-07-10T01:38:23.876913790Z" level=info msg="StartContainer for \"225609f895efa2c766fff4c357d326a94aeff3f2dc9267546e9096e0fdcbf87a\"" Jul 10 01:38:23.920315 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2c9a852303586e6248b136709c2283dd38b0cb347056e0f9d8aa77a5eb662d30-rootfs.mount: Deactivated successfully. Jul 10 01:38:23.920389 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9f554c7a1a1192bf8f33530ae0b697d908ab3fedeb5044bf3f3dc34eb3189402-rootfs.mount: Deactivated successfully. Jul 10 01:38:23.921579 kubelet[2299]: W0710 01:38:23.921423 2299 reflector.go:561] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: Get "https://139.178.70.102:6443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dkube-proxy&resourceVersion=687": dial tcp 139.178.70.102:6443: connect: connection refused Jul 10 01:38:23.921579 kubelet[2299]: E0710 01:38:23.921468 2299 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-proxy\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://139.178.70.102:6443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dkube-proxy&resourceVersion=687\": dial tcp 139.178.70.102:6443: connect: connection refused" logger="UnhandledError" Jul 10 01:38:23.923129 env[1363]: time="2025-07-10T01:38:23.923110147Z" level=info msg="StartContainer for \"225609f895efa2c766fff4c357d326a94aeff3f2dc9267546e9096e0fdcbf87a\" returns successfully" Jul 10 01:38:23.923362 env[1363]: time="2025-07-10T01:38:23.923350754Z" level=info msg="StartContainer for \"1e5279231269dfc537e88aa5fdcd50f6d78342ad66bf00ac46bda4314ea2b591\" returns successfully" Jul 10 01:38:24.269699 kubelet[2299]: I0710 01:38:24.269675 2299 status_manager.go:851] "Failed to get status for pod" podUID="bb9848ea-740a-453f-b511-e75cc1983690" pod="calico-system/calico-typha-66ddcf689b-z7vqm" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/calico-system/pods/calico-typha-66ddcf689b-z7vqm\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:24.270051 kubelet[2299]: I0710 01:38:24.270033 2299 status_manager.go:851] "Failed to get status for pod" podUID="74cf1bc5-5d5a-4dc7-850a-71013984af05" pod="calico-apiserver/calico-apiserver-6d44674bc4-b2wqb" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/calico-apiserver/pods/calico-apiserver-6d44674bc4-b2wqb\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:24.270282 kubelet[2299]: I0710 01:38:24.270258 2299 status_manager.go:851] "Failed to get status for pod" podUID="9c135a1b-00bf-4e6f-87fa-9ac292c6a135" pod="tigera-operator/tigera-operator-5bf8dfcb4-twgs2" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/tigera-operator/pods/tigera-operator-5bf8dfcb4-twgs2\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:24.270409 env[1363]: time="2025-07-10T01:38:24.270387039Z" level=info msg="StopContainer for \"1e5279231269dfc537e88aa5fdcd50f6d78342ad66bf00ac46bda4314ea2b591\" with timeout 30 (s)" Jul 10 01:38:24.270802 kubelet[2299]: I0710 01:38:24.270784 
2299 status_manager.go:851] "Failed to get status for pod" podUID="6367e512-6f46-407d-94e1-a5c573185269" pod="calico-system/calico-node-2k6z4" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/calico-system/pods/calico-node-2k6z4\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:24.271116 kubelet[2299]: I0710 01:38:24.271099 2299 status_manager.go:851] "Failed to get status for pod" podUID="5f01bcaa-ff1c-4bd5-988b-d3c60c6abdcc" pod="calico-system/calico-kube-controllers-5477ff879d-j2p5q" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/calico-system/pods/calico-kube-controllers-5477ff879d-j2p5q\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:24.271388 kubelet[2299]: I0710 01:38:24.271369 2299 status_manager.go:851] "Failed to get status for pod" podUID="8e8146e9-6407-49b7-8cef-e26dac385734" pod="calico-apiserver/calico-apiserver-6d44674bc4-w2f48" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/calico-apiserver/pods/calico-apiserver-6d44674bc4-w2f48\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:24.271600 kubelet[2299]: I0710 01:38:24.271582 2299 status_manager.go:851] "Failed to get status for pod" podUID="a29ef6dc-4246-436d-87dd-9c8e96247aeb" pod="kube-system/coredns-7c65d6cfc9-4k5ld" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-4k5ld\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:24.271877 kubelet[2299]: I0710 01:38:24.271859 2299 status_manager.go:851] "Failed to get status for pod" podUID="3459c244-a1ae-43bc-ad86-239a6e665757" pod="kube-system/coredns-7c65d6cfc9-snhl5" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-snhl5\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:24.272042 kubelet[2299]: I0710 01:38:24.272030 2299 status_manager.go:851] "Failed to get status for pod" podUID="ced04dc5-79ee-4a07-a568-b0fd4007f64c" pod="calico-system/goldmane-58fd7646b9-zxwst" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/calico-system/pods/goldmane-58fd7646b9-zxwst\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:24.272253 kubelet[2299]: I0710 01:38:24.272241 2299 status_manager.go:851] "Failed to get status for pod" podUID="8acd60714a0f0f6f5e038fa659db2909" pod="kube-system/kube-apiserver-localhost" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/kube-system/pods/kube-apiserver-localhost\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:24.272389 kubelet[2299]: I0710 01:38:24.272377 2299 status_manager.go:851] "Failed to get status for pod" podUID="3f04709fe51ae4ab5abd58e8da771b74" pod="kube-system/kube-controller-manager-localhost" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/kube-system/pods/kube-controller-manager-localhost\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:24.272752 kubelet[2299]: I0710 01:38:24.272739 2299 status_manager.go:851] "Failed to get status for pod" podUID="c3f9faf5-cc25-4483-beb9-5dea29a71367" pod="calico-system/whisker-5bc4d9bd7d-nwwj6" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/calico-system/pods/whisker-5bc4d9bd7d-nwwj6\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:24.272796 env[1363]: time="2025-07-10T01:38:24.272766485Z" level=info msg="StopContainer for \"225609f895efa2c766fff4c357d326a94aeff3f2dc9267546e9096e0fdcbf87a\" with timeout 30 (s)" Jul 10 01:38:24.272975 
kubelet[2299]: I0710 01:38:24.272953 2299 status_manager.go:851] "Failed to get status for pod" podUID="b35b56493416c25588cb530e37ffc065" pod="kube-system/kube-scheduler-localhost" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/kube-system/pods/kube-scheduler-localhost\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:24.273214 kubelet[2299]: I0710 01:38:24.273202 2299 status_manager.go:851] "Failed to get status for pod" podUID="9c135a1b-00bf-4e6f-87fa-9ac292c6a135" pod="tigera-operator/tigera-operator-5bf8dfcb4-twgs2" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/tigera-operator/pods/tigera-operator-5bf8dfcb4-twgs2\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:24.273391 kubelet[2299]: I0710 01:38:24.273379 2299 status_manager.go:851] "Failed to get status for pod" podUID="bb9848ea-740a-453f-b511-e75cc1983690" pod="calico-system/calico-typha-66ddcf689b-z7vqm" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/calico-system/pods/calico-typha-66ddcf689b-z7vqm\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:24.273565 env[1363]: time="2025-07-10T01:38:24.273544934Z" level=info msg="Stop container \"225609f895efa2c766fff4c357d326a94aeff3f2dc9267546e9096e0fdcbf87a\" with signal terminated" Jul 10 01:38:24.273775 kubelet[2299]: I0710 01:38:24.273763 2299 status_manager.go:851] "Failed to get status for pod" podUID="74cf1bc5-5d5a-4dc7-850a-71013984af05" pod="calico-apiserver/calico-apiserver-6d44674bc4-b2wqb" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/calico-apiserver/pods/calico-apiserver-6d44674bc4-b2wqb\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:24.274029 env[1363]: time="2025-07-10T01:38:24.274010133Z" level=info msg="Stop container \"1e5279231269dfc537e88aa5fdcd50f6d78342ad66bf00ac46bda4314ea2b591\" with signal terminated" Jul 10 01:38:24.274241 kubelet[2299]: I0710 01:38:24.274228 2299 status_manager.go:851] "Failed to get status for pod" podUID="6367e512-6f46-407d-94e1-a5c573185269" pod="calico-system/calico-node-2k6z4" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/calico-system/pods/calico-node-2k6z4\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:24.274380 kubelet[2299]: I0710 01:38:24.274368 2299 status_manager.go:851] "Failed to get status for pod" podUID="5f01bcaa-ff1c-4bd5-988b-d3c60c6abdcc" pod="calico-system/calico-kube-controllers-5477ff879d-j2p5q" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/calico-system/pods/calico-kube-controllers-5477ff879d-j2p5q\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:24.275766 kubelet[2299]: I0710 01:38:24.275752 2299 status_manager.go:851] "Failed to get status for pod" podUID="8acd60714a0f0f6f5e038fa659db2909" pod="kube-system/kube-apiserver-localhost" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/kube-system/pods/kube-apiserver-localhost\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:24.275918 kubelet[2299]: I0710 01:38:24.275906 2299 status_manager.go:851] "Failed to get status for pod" podUID="8e8146e9-6407-49b7-8cef-e26dac385734" pod="calico-apiserver/calico-apiserver-6d44674bc4-w2f48" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/calico-apiserver/pods/calico-apiserver-6d44674bc4-w2f48\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:24.276055 kubelet[2299]: I0710 01:38:24.276044 2299 status_manager.go:851] "Failed to get status for pod" 
podUID="a29ef6dc-4246-436d-87dd-9c8e96247aeb" pod="kube-system/coredns-7c65d6cfc9-4k5ld" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-4k5ld\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:24.276187 kubelet[2299]: I0710 01:38:24.276173 2299 status_manager.go:851] "Failed to get status for pod" podUID="3459c244-a1ae-43bc-ad86-239a6e665757" pod="kube-system/coredns-7c65d6cfc9-snhl5" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-snhl5\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:24.276317 kubelet[2299]: I0710 01:38:24.276304 2299 status_manager.go:851] "Failed to get status for pod" podUID="ced04dc5-79ee-4a07-a568-b0fd4007f64c" pod="calico-system/goldmane-58fd7646b9-zxwst" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/calico-system/pods/goldmane-58fd7646b9-zxwst\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:24.276448 kubelet[2299]: I0710 01:38:24.276436 2299 status_manager.go:851] "Failed to get status for pod" podUID="b35b56493416c25588cb530e37ffc065" pod="kube-system/kube-scheduler-localhost" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/kube-system/pods/kube-scheduler-localhost\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:24.276578 kubelet[2299]: I0710 01:38:24.276566 2299 status_manager.go:851] "Failed to get status for pod" podUID="3f04709fe51ae4ab5abd58e8da771b74" pod="kube-system/kube-controller-manager-localhost" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/kube-system/pods/kube-controller-manager-localhost\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:24.276721 kubelet[2299]: I0710 01:38:24.276709 2299 status_manager.go:851] "Failed to get status for pod" podUID="c3f9faf5-cc25-4483-beb9-5dea29a71367" pod="calico-system/whisker-5bc4d9bd7d-nwwj6" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/calico-system/pods/whisker-5bc4d9bd7d-nwwj6\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:24.278837 kubelet[2299]: I0710 01:38:24.278818 2299 status_manager.go:851] "Failed to get status for pod" podUID="b35b56493416c25588cb530e37ffc065" pod="kube-system/kube-scheduler-localhost" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/kube-system/pods/kube-scheduler-localhost\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:24.278946 kubelet[2299]: I0710 01:38:24.278930 2299 status_manager.go:851] "Failed to get status for pod" podUID="3f04709fe51ae4ab5abd58e8da771b74" pod="kube-system/kube-controller-manager-localhost" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/kube-system/pods/kube-controller-manager-localhost\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:24.279032 kubelet[2299]: I0710 01:38:24.279017 2299 status_manager.go:851] "Failed to get status for pod" podUID="c3f9faf5-cc25-4483-beb9-5dea29a71367" pod="calico-system/whisker-5bc4d9bd7d-nwwj6" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/calico-system/pods/whisker-5bc4d9bd7d-nwwj6\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:24.279113 kubelet[2299]: I0710 01:38:24.279098 2299 status_manager.go:851] "Failed to get status for pod" podUID="9c135a1b-00bf-4e6f-87fa-9ac292c6a135" pod="tigera-operator/tigera-operator-5bf8dfcb4-twgs2" err="Get 
\"https://139.178.70.102:6443/api/v1/namespaces/tigera-operator/pods/tigera-operator-5bf8dfcb4-twgs2\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:24.279192 kubelet[2299]: I0710 01:38:24.279177 2299 status_manager.go:851] "Failed to get status for pod" podUID="bb9848ea-740a-453f-b511-e75cc1983690" pod="calico-system/calico-typha-66ddcf689b-z7vqm" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/calico-system/pods/calico-typha-66ddcf689b-z7vqm\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:24.279319 kubelet[2299]: I0710 01:38:24.279254 2299 status_manager.go:851] "Failed to get status for pod" podUID="74cf1bc5-5d5a-4dc7-850a-71013984af05" pod="calico-apiserver/calico-apiserver-6d44674bc4-b2wqb" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/calico-apiserver/pods/calico-apiserver-6d44674bc4-b2wqb\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:24.279380 kubelet[2299]: I0710 01:38:24.279355 2299 status_manager.go:851] "Failed to get status for pod" podUID="6367e512-6f46-407d-94e1-a5c573185269" pod="calico-system/calico-node-2k6z4" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/calico-system/pods/calico-node-2k6z4\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:24.279564 kubelet[2299]: I0710 01:38:24.279473 2299 status_manager.go:851] "Failed to get status for pod" podUID="5f01bcaa-ff1c-4bd5-988b-d3c60c6abdcc" pod="calico-system/calico-kube-controllers-5477ff879d-j2p5q" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/calico-system/pods/calico-kube-controllers-5477ff879d-j2p5q\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:24.279604 kubelet[2299]: I0710 01:38:24.279586 2299 status_manager.go:851] "Failed to get status for pod" podUID="8acd60714a0f0f6f5e038fa659db2909" pod="kube-system/kube-apiserver-localhost" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/kube-system/pods/kube-apiserver-localhost\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:24.279698 kubelet[2299]: I0710 01:38:24.279684 2299 status_manager.go:851] "Failed to get status for pod" podUID="8e8146e9-6407-49b7-8cef-e26dac385734" pod="calico-apiserver/calico-apiserver-6d44674bc4-w2f48" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/calico-apiserver/pods/calico-apiserver-6d44674bc4-w2f48\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:24.279779 kubelet[2299]: I0710 01:38:24.279765 2299 status_manager.go:851] "Failed to get status for pod" podUID="a29ef6dc-4246-436d-87dd-9c8e96247aeb" pod="kube-system/coredns-7c65d6cfc9-4k5ld" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-4k5ld\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:24.279876 kubelet[2299]: I0710 01:38:24.279845 2299 status_manager.go:851] "Failed to get status for pod" podUID="3459c244-a1ae-43bc-ad86-239a6e665757" pod="kube-system/coredns-7c65d6cfc9-snhl5" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-snhl5\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:24.279935 kubelet[2299]: I0710 01:38:24.279921 2299 status_manager.go:851] "Failed to get status for pod" podUID="ced04dc5-79ee-4a07-a568-b0fd4007f64c" pod="calico-system/goldmane-58fd7646b9-zxwst" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/calico-system/pods/goldmane-58fd7646b9-zxwst\": dial tcp 139.178.70.102:6443: 
connect: connection refused" Jul 10 01:38:24.281883 kubelet[2299]: I0710 01:38:24.281866 2299 status_manager.go:851] "Failed to get status for pod" podUID="8acd60714a0f0f6f5e038fa659db2909" pod="kube-system/kube-apiserver-localhost" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/kube-system/pods/kube-apiserver-localhost\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:24.281987 kubelet[2299]: I0710 01:38:24.281970 2299 status_manager.go:851] "Failed to get status for pod" podUID="8e8146e9-6407-49b7-8cef-e26dac385734" pod="calico-apiserver/calico-apiserver-6d44674bc4-w2f48" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/calico-apiserver/pods/calico-apiserver-6d44674bc4-w2f48\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:24.282071 kubelet[2299]: I0710 01:38:24.282057 2299 status_manager.go:851] "Failed to get status for pod" podUID="a29ef6dc-4246-436d-87dd-9c8e96247aeb" pod="kube-system/coredns-7c65d6cfc9-4k5ld" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-4k5ld\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:24.282153 kubelet[2299]: I0710 01:38:24.282139 2299 status_manager.go:851] "Failed to get status for pod" podUID="3459c244-a1ae-43bc-ad86-239a6e665757" pod="kube-system/coredns-7c65d6cfc9-snhl5" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-snhl5\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:24.282232 kubelet[2299]: I0710 01:38:24.282216 2299 status_manager.go:851] "Failed to get status for pod" podUID="ced04dc5-79ee-4a07-a568-b0fd4007f64c" pod="calico-system/goldmane-58fd7646b9-zxwst" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/calico-system/pods/goldmane-58fd7646b9-zxwst\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:24.282309 kubelet[2299]: I0710 01:38:24.282295 2299 status_manager.go:851] "Failed to get status for pod" podUID="b35b56493416c25588cb530e37ffc065" pod="kube-system/kube-scheduler-localhost" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/kube-system/pods/kube-scheduler-localhost\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:24.282384 kubelet[2299]: I0710 01:38:24.282370 2299 status_manager.go:851] "Failed to get status for pod" podUID="3f04709fe51ae4ab5abd58e8da771b74" pod="kube-system/kube-controller-manager-localhost" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/kube-system/pods/kube-controller-manager-localhost\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:24.282464 kubelet[2299]: I0710 01:38:24.282450 2299 status_manager.go:851] "Failed to get status for pod" podUID="c3f9faf5-cc25-4483-beb9-5dea29a71367" pod="calico-system/whisker-5bc4d9bd7d-nwwj6" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/calico-system/pods/whisker-5bc4d9bd7d-nwwj6\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:24.282539 kubelet[2299]: I0710 01:38:24.282525 2299 status_manager.go:851] "Failed to get status for pod" podUID="9c135a1b-00bf-4e6f-87fa-9ac292c6a135" pod="tigera-operator/tigera-operator-5bf8dfcb4-twgs2" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/tigera-operator/pods/tigera-operator-5bf8dfcb4-twgs2\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:24.282613 kubelet[2299]: I0710 01:38:24.282600 2299 status_manager.go:851] "Failed to get status for pod" 
podUID="bb9848ea-740a-453f-b511-e75cc1983690" pod="calico-system/calico-typha-66ddcf689b-z7vqm" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/calico-system/pods/calico-typha-66ddcf689b-z7vqm\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:24.282705 kubelet[2299]: I0710 01:38:24.282691 2299 status_manager.go:851] "Failed to get status for pod" podUID="74cf1bc5-5d5a-4dc7-850a-71013984af05" pod="calico-apiserver/calico-apiserver-6d44674bc4-b2wqb" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/calico-apiserver/pods/calico-apiserver-6d44674bc4-b2wqb\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:24.282780 kubelet[2299]: I0710 01:38:24.282766 2299 status_manager.go:851] "Failed to get status for pod" podUID="6367e512-6f46-407d-94e1-a5c573185269" pod="calico-system/calico-node-2k6z4" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/calico-system/pods/calico-node-2k6z4\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:24.282856 kubelet[2299]: I0710 01:38:24.282842 2299 status_manager.go:851] "Failed to get status for pod" podUID="5f01bcaa-ff1c-4bd5-988b-d3c60c6abdcc" pod="calico-system/calico-kube-controllers-5477ff879d-j2p5q" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/calico-system/pods/calico-kube-controllers-5477ff879d-j2p5q\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:24.282917 env[1363]: time="2025-07-10T01:38:24.282898176Z" level=info msg="StopPodSandbox for \"6503e247e079e9b1040ac4f9c23ba0f9f2bd42e5328355dba03928c27dd6e73b\"" Jul 10 01:38:24.282957 env[1363]: time="2025-07-10T01:38:24.282939190Z" level=info msg="Container to stop \"8b834cb8605645f5c7c427dfcf3dcbb8b3ef5c7c0f8f023ef0d1fbc5a5c10bd4\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 10 01:38:24.287313 kubelet[2299]: E0710 01:38:24.287271 2299 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:tigera-operator,Image:quay.io/tigera/operator:v1.38.3,Command:[operator],Args:[-manage-crds=true],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:WATCH_NAMESPACE,Value:,ValueFrom:nil,},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:OPERATOR_NAME,Value:tigera-operator,ValueFrom:nil,},EnvVar{Name:TIGERA_OPERATOR_INIT_IMAGE_VERSION,Value:v1.38.3,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:var-lib-calico,ReadOnly:true,MountPath:/var/lib/calico,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mj7k8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:kubernetes-services-endpoint,},Optional:*true,},SecretRef:nil,},},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
tigera-operator-5bf8dfcb4-twgs2_tigera-operator(9c135a1b-00bf-4e6f-87fa-9ac292c6a135): CreateContainerConfigError: failed to sync configmap cache: timed out waiting for the condition" logger="UnhandledError" Jul 10 01:38:24.293142 systemd[1]: run-containerd-runc-k8s.io-1f6123a2530db4e70f4a3e6b47c035375d25ef2c7098e1ca906af5c39164caa1-runc.KO8uwX.mount: Deactivated successfully. Jul 10 01:38:24.294544 kubelet[2299]: E0710 01:38:24.294527 2299 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tigera-operator\" with CreateContainerConfigError: \"failed to sync configmap cache: timed out waiting for the condition\"" pod="tigera-operator/tigera-operator-5bf8dfcb4-twgs2" podUID="9c135a1b-00bf-4e6f-87fa-9ac292c6a135" Jul 10 01:38:24.321490 env[1363]: time="2025-07-10T01:38:24.321435753Z" level=error msg="StopPodSandbox for \"6503e247e079e9b1040ac4f9c23ba0f9f2bd42e5328355dba03928c27dd6e73b\" failed" error="failed to destroy network for sandbox \"6503e247e079e9b1040ac4f9c23ba0f9f2bd42e5328355dba03928c27dd6e73b\": plugin type=\"calico\" failed (delete): error getting ClusterInformation: Get \"https://10.96.0.1:443/apis/crd.projectcalico.org/v1/clusterinformations/default\": dial tcp 10.96.0.1:443: connect: connection refused" Jul 10 01:38:24.321797 kubelet[2299]: E0710 01:38:24.321708 2299 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"6503e247e079e9b1040ac4f9c23ba0f9f2bd42e5328355dba03928c27dd6e73b\": plugin type=\"calico\" failed (delete): error getting ClusterInformation: Get \"https://10.96.0.1:443/apis/crd.projectcalico.org/v1/clusterinformations/default\": dial tcp 10.96.0.1:443: connect: connection refused" podSandboxID="6503e247e079e9b1040ac4f9c23ba0f9f2bd42e5328355dba03928c27dd6e73b" Jul 10 01:38:24.321797 kubelet[2299]: E0710 01:38:24.321743 2299 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"6503e247e079e9b1040ac4f9c23ba0f9f2bd42e5328355dba03928c27dd6e73b"} Jul 10 01:38:24.321797 kubelet[2299]: E0710 01:38:24.321775 2299 kubelet.go:2027] "Unhandled Error" err="failed to \"KillPodSandbox\" for \"5f01bcaa-ff1c-4bd5-988b-d3c60c6abdcc\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6503e247e079e9b1040ac4f9c23ba0f9f2bd42e5328355dba03928c27dd6e73b\\\": plugin type=\\\"calico\\\" failed (delete): error getting ClusterInformation: Get \\\"https://10.96.0.1:443/apis/crd.projectcalico.org/v1/clusterinformations/default\\\": dial tcp 10.96.0.1:443: connect: connection refused\"" logger="UnhandledError" Jul 10 01:38:24.322888 kubelet[2299]: E0710 01:38:24.322864 2299 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"5f01bcaa-ff1c-4bd5-988b-d3c60c6abdcc\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6503e247e079e9b1040ac4f9c23ba0f9f2bd42e5328355dba03928c27dd6e73b\\\": plugin type=\\\"calico\\\" failed (delete): error getting ClusterInformation: Get \\\"https://10.96.0.1:443/apis/crd.projectcalico.org/v1/clusterinformations/default\\\": dial tcp 10.96.0.1:443: connect: connection refused\"" pod="calico-system/calico-kube-controllers-5477ff879d-j2p5q" podUID="5f01bcaa-ff1c-4bd5-988b-d3c60c6abdcc" Jul 10 01:38:24.490315 kubelet[2299]: E0710 01:38:24.490282 2299 configmap.go:193] Couldn't get configMap calico-system/goldmane: failed to sync configmap cache: timed out waiting for 
the condition Jul 10 01:38:24.490433 kubelet[2299]: E0710 01:38:24.490335 2299 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ced04dc5-79ee-4a07-a568-b0fd4007f64c-config podName:ced04dc5-79ee-4a07-a568-b0fd4007f64c nodeName:}" failed. No retries permitted until 2025-07-10 01:38:26.49032071 +0000 UTC m=+1526.358747599 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/ced04dc5-79ee-4a07-a568-b0fd4007f64c-config") pod "goldmane-58fd7646b9-zxwst" (UID: "ced04dc5-79ee-4a07-a568-b0fd4007f64c") : failed to sync configmap cache: timed out waiting for the condition Jul 10 01:38:24.592544 kubelet[2299]: E0710 01:38:24.591815 2299 configmap.go:193] Couldn't get configMap calico-system/goldmane-ca-bundle: failed to sync configmap cache: timed out waiting for the condition Jul 10 01:38:24.592544 kubelet[2299]: E0710 01:38:24.592462 2299 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ced04dc5-79ee-4a07-a568-b0fd4007f64c-goldmane-ca-bundle podName:ced04dc5-79ee-4a07-a568-b0fd4007f64c nodeName:}" failed. No retries permitted until 2025-07-10 01:38:26.592447391 +0000 UTC m=+1526.460874283 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "goldmane-ca-bundle" (UniqueName: "kubernetes.io/configmap/ced04dc5-79ee-4a07-a568-b0fd4007f64c-goldmane-ca-bundle") pod "goldmane-58fd7646b9-zxwst" (UID: "ced04dc5-79ee-4a07-a568-b0fd4007f64c") : failed to sync configmap cache: timed out waiting for the condition Jul 10 01:38:24.692228 kubelet[2299]: E0710 01:38:24.692212 2299 secret.go:189] Couldn't get secret calico-system/goldmane-key-pair: failed to sync secret cache: timed out waiting for the condition Jul 10 01:38:24.692519 kubelet[2299]: E0710 01:38:24.692507 2299 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ced04dc5-79ee-4a07-a568-b0fd4007f64c-goldmane-key-pair podName:ced04dc5-79ee-4a07-a568-b0fd4007f64c nodeName:}" failed. No retries permitted until 2025-07-10 01:38:26.692495612 +0000 UTC m=+1526.560922501 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "goldmane-key-pair" (UniqueName: "kubernetes.io/secret/ced04dc5-79ee-4a07-a568-b0fd4007f64c-goldmane-key-pair") pod "goldmane-58fd7646b9-zxwst" (UID: "ced04dc5-79ee-4a07-a568-b0fd4007f64c") : failed to sync secret cache: timed out waiting for the condition Jul 10 01:38:24.692617 kubelet[2299]: E0710 01:38:24.692368 2299 configmap.go:193] Couldn't get configMap calico-system/whisker-ca-bundle: failed to sync configmap cache: timed out waiting for the condition Jul 10 01:38:24.692723 kubelet[2299]: E0710 01:38:24.692713 2299 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c3f9faf5-cc25-4483-beb9-5dea29a71367-whisker-ca-bundle podName:c3f9faf5-cc25-4483-beb9-5dea29a71367 nodeName:}" failed. No retries permitted until 2025-07-10 01:38:26.692704444 +0000 UTC m=+1526.561131331 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "whisker-ca-bundle" (UniqueName: "kubernetes.io/configmap/c3f9faf5-cc25-4483-beb9-5dea29a71367-whisker-ca-bundle") pod "whisker-5bc4d9bd7d-nwwj6" (UID: "c3f9faf5-cc25-4483-beb9-5dea29a71367") : failed to sync configmap cache: timed out waiting for the condition Jul 10 01:38:24.692813 kubelet[2299]: E0710 01:38:24.692380 2299 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Jul 10 01:38:24.692887 kubelet[2299]: E0710 01:38:24.692876 2299 projected.go:194] Error preparing data for projected volume kube-api-access-wpcvh for pod kube-system/kube-proxy-rxvps: failed to sync configmap cache: timed out waiting for the condition Jul 10 01:38:24.693149 kubelet[2299]: E0710 01:38:24.692959 2299 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/22eb6a01-1430-4380-b1df-6cb2ed0c8d8b-kube-api-access-wpcvh podName:22eb6a01-1430-4380-b1df-6cb2ed0c8d8b nodeName:}" failed. No retries permitted until 2025-07-10 01:38:26.692949383 +0000 UTC m=+1526.561376271 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-wpcvh" (UniqueName: "kubernetes.io/projected/22eb6a01-1430-4380-b1df-6cb2ed0c8d8b-kube-api-access-wpcvh") pod "kube-proxy-rxvps" (UID: "22eb6a01-1430-4380-b1df-6cb2ed0c8d8b") : failed to sync configmap cache: timed out waiting for the condition Jul 10 01:38:24.693149 kubelet[2299]: E0710 01:38:24.692385 2299 configmap.go:193] Couldn't get configMap calico-system/tigera-ca-bundle: failed to sync configmap cache: timed out waiting for the condition Jul 10 01:38:24.693149 kubelet[2299]: E0710 01:38:24.692391 2299 secret.go:189] Couldn't get secret calico-system/whisker-backend-key-pair: failed to sync secret cache: timed out waiting for the condition Jul 10 01:38:24.693149 kubelet[2299]: E0710 01:38:24.692398 2299 configmap.go:193] Couldn't get configMap kube-system/kube-proxy: failed to sync configmap cache: timed out waiting for the condition Jul 10 01:38:24.693149 kubelet[2299]: E0710 01:38:24.692405 2299 projected.go:288] Couldn't get configMap calico-apiserver/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Jul 10 01:38:24.693149 kubelet[2299]: E0710 01:38:24.693028 2299 projected.go:194] Error preparing data for projected volume kube-api-access-47zqf for pod calico-apiserver/calico-apiserver-6d44674bc4-b2wqb: failed to sync configmap cache: timed out waiting for the condition Jul 10 01:38:24.693149 kubelet[2299]: E0710 01:38:24.692418 2299 secret.go:189] Couldn't get secret calico-system/node-certs: failed to sync secret cache: timed out waiting for the condition Jul 10 01:38:24.693149 kubelet[2299]: E0710 01:38:24.692424 2299 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Jul 10 01:38:24.693149 kubelet[2299]: E0710 01:38:24.693057 2299 projected.go:194] Error preparing data for projected volume kube-api-access-pwvqb for pod kube-system/coredns-7c65d6cfc9-snhl5: failed to sync configmap cache: timed out waiting for the condition Jul 10 01:38:24.693149 kubelet[2299]: E0710 01:38:24.692429 2299 configmap.go:193] Couldn't get configMap calico-system/tigera-ca-bundle: failed to sync configmap cache: timed out waiting for the condition Jul 10 01:38:24.693149 kubelet[2299]: E0710 01:38:24.692434 2299 secret.go:189] Couldn't get secret calico-apiserver/calico-apiserver-certs: failed 
to sync secret cache: timed out waiting for the condition Jul 10 01:38:24.693149 kubelet[2299]: E0710 01:38:24.692440 2299 configmap.go:193] Couldn't get configMap kube-system/coredns: failed to sync configmap cache: timed out waiting for the condition Jul 10 01:38:24.693149 kubelet[2299]: E0710 01:38:24.692445 2299 secret.go:189] Couldn't get secret calico-apiserver/calico-apiserver-certs: failed to sync secret cache: timed out waiting for the condition Jul 10 01:38:24.693149 kubelet[2299]: E0710 01:38:24.692450 2299 configmap.go:193] Couldn't get configMap kube-system/coredns: failed to sync configmap cache: timed out waiting for the condition Jul 10 01:38:24.693149 kubelet[2299]: E0710 01:38:24.692458 2299 projected.go:288] Couldn't get configMap calico-apiserver/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Jul 10 01:38:24.693149 kubelet[2299]: E0710 01:38:24.693142 2299 projected.go:194] Error preparing data for projected volume kube-api-access-r5vvj for pod calico-apiserver/calico-apiserver-6d44674bc4-w2f48: failed to sync configmap cache: timed out waiting for the condition Jul 10 01:38:24.693149 kubelet[2299]: E0710 01:38:24.692464 2299 configmap.go:193] Couldn't get configMap calico-system/tigera-ca-bundle: failed to sync configmap cache: timed out waiting for the condition Jul 10 01:38:24.693690 kubelet[2299]: E0710 01:38:24.692470 2299 secret.go:189] Couldn't get secret calico-system/typha-certs: failed to sync secret cache: timed out waiting for the condition Jul 10 01:38:24.693690 kubelet[2299]: E0710 01:38:24.692986 2299 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/bb9848ea-740a-453f-b511-e75cc1983690-tigera-ca-bundle podName:bb9848ea-740a-453f-b511-e75cc1983690 nodeName:}" failed. No retries permitted until 2025-07-10 01:38:26.692979357 +0000 UTC m=+1526.561406240 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "tigera-ca-bundle" (UniqueName: "kubernetes.io/configmap/bb9848ea-740a-453f-b511-e75cc1983690-tigera-ca-bundle") pod "calico-typha-66ddcf689b-z7vqm" (UID: "bb9848ea-740a-453f-b511-e75cc1983690") : failed to sync configmap cache: timed out waiting for the condition Jul 10 01:38:24.693690 kubelet[2299]: E0710 01:38:24.693186 2299 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c3f9faf5-cc25-4483-beb9-5dea29a71367-whisker-backend-key-pair podName:c3f9faf5-cc25-4483-beb9-5dea29a71367 nodeName:}" failed. No retries permitted until 2025-07-10 01:38:26.693176289 +0000 UTC m=+1526.561603172 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "whisker-backend-key-pair" (UniqueName: "kubernetes.io/secret/c3f9faf5-cc25-4483-beb9-5dea29a71367-whisker-backend-key-pair") pod "whisker-5bc4d9bd7d-nwwj6" (UID: "c3f9faf5-cc25-4483-beb9-5dea29a71367") : failed to sync secret cache: timed out waiting for the condition Jul 10 01:38:24.693690 kubelet[2299]: E0710 01:38:24.693196 2299 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/22eb6a01-1430-4380-b1df-6cb2ed0c8d8b-kube-proxy podName:22eb6a01-1430-4380-b1df-6cb2ed0c8d8b nodeName:}" failed. No retries permitted until 2025-07-10 01:38:26.693190228 +0000 UTC m=+1526.561617110 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-proxy" (UniqueName: "kubernetes.io/configmap/22eb6a01-1430-4380-b1df-6cb2ed0c8d8b-kube-proxy") pod "kube-proxy-rxvps" (UID: "22eb6a01-1430-4380-b1df-6cb2ed0c8d8b") : failed to sync configmap cache: timed out waiting for the condition Jul 10 01:38:24.693690 kubelet[2299]: E0710 01:38:24.693206 2299 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/74cf1bc5-5d5a-4dc7-850a-71013984af05-kube-api-access-47zqf podName:74cf1bc5-5d5a-4dc7-850a-71013984af05 nodeName:}" failed. No retries permitted until 2025-07-10 01:38:26.693201667 +0000 UTC m=+1526.561628551 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-47zqf" (UniqueName: "kubernetes.io/projected/74cf1bc5-5d5a-4dc7-850a-71013984af05-kube-api-access-47zqf") pod "calico-apiserver-6d44674bc4-b2wqb" (UID: "74cf1bc5-5d5a-4dc7-850a-71013984af05") : failed to sync configmap cache: timed out waiting for the condition Jul 10 01:38:24.693690 kubelet[2299]: E0710 01:38:24.693216 2299 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6367e512-6f46-407d-94e1-a5c573185269-node-certs podName:6367e512-6f46-407d-94e1-a5c573185269 nodeName:}" failed. No retries permitted until 2025-07-10 01:38:26.693211221 +0000 UTC m=+1526.561638105 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "node-certs" (UniqueName: "kubernetes.io/secret/6367e512-6f46-407d-94e1-a5c573185269-node-certs") pod "calico-node-2k6z4" (UID: "6367e512-6f46-407d-94e1-a5c573185269") : failed to sync secret cache: timed out waiting for the condition Jul 10 01:38:24.693690 kubelet[2299]: E0710 01:38:24.693224 2299 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3459c244-a1ae-43bc-ad86-239a6e665757-kube-api-access-pwvqb podName:3459c244-a1ae-43bc-ad86-239a6e665757 nodeName:}" failed. No retries permitted until 2025-07-10 01:38:26.693219811 +0000 UTC m=+1526.561646694 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-pwvqb" (UniqueName: "kubernetes.io/projected/3459c244-a1ae-43bc-ad86-239a6e665757-kube-api-access-pwvqb") pod "coredns-7c65d6cfc9-snhl5" (UID: "3459c244-a1ae-43bc-ad86-239a6e665757") : failed to sync configmap cache: timed out waiting for the condition Jul 10 01:38:24.693690 kubelet[2299]: E0710 01:38:24.693236 2299 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Jul 10 01:38:24.693690 kubelet[2299]: E0710 01:38:24.693244 2299 projected.go:194] Error preparing data for projected volume kube-api-access-4bl2z for pod kube-system/coredns-7c65d6cfc9-4k5ld: failed to sync configmap cache: timed out waiting for the condition Jul 10 01:38:24.693690 kubelet[2299]: E0710 01:38:24.693258 2299 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/a29ef6dc-4246-436d-87dd-9c8e96247aeb-kube-api-access-4bl2z podName:a29ef6dc-4246-436d-87dd-9c8e96247aeb nodeName:}" failed. No retries permitted until 2025-07-10 01:38:26.693253026 +0000 UTC m=+1526.561679923 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-4bl2z" (UniqueName: "kubernetes.io/projected/a29ef6dc-4246-436d-87dd-9c8e96247aeb-kube-api-access-4bl2z") pod "coredns-7c65d6cfc9-4k5ld" (UID: "a29ef6dc-4246-436d-87dd-9c8e96247aeb") : failed to sync configmap cache: timed out waiting for the condition Jul 10 01:38:24.693690 kubelet[2299]: E0710 01:38:24.693269 2299 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6367e512-6f46-407d-94e1-a5c573185269-tigera-ca-bundle podName:6367e512-6f46-407d-94e1-a5c573185269 nodeName:}" failed. No retries permitted until 2025-07-10 01:38:26.693264294 +0000 UTC m=+1526.561691177 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "tigera-ca-bundle" (UniqueName: "kubernetes.io/configmap/6367e512-6f46-407d-94e1-a5c573185269-tigera-ca-bundle") pod "calico-node-2k6z4" (UID: "6367e512-6f46-407d-94e1-a5c573185269") : failed to sync configmap cache: timed out waiting for the condition Jul 10 01:38:24.694197 kubelet[2299]: E0710 01:38:24.693277 2299 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/74cf1bc5-5d5a-4dc7-850a-71013984af05-calico-apiserver-certs podName:74cf1bc5-5d5a-4dc7-850a-71013984af05 nodeName:}" failed. No retries permitted until 2025-07-10 01:38:26.693272713 +0000 UTC m=+1526.561699597 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "calico-apiserver-certs" (UniqueName: "kubernetes.io/secret/74cf1bc5-5d5a-4dc7-850a-71013984af05-calico-apiserver-certs") pod "calico-apiserver-6d44674bc4-b2wqb" (UID: "74cf1bc5-5d5a-4dc7-850a-71013984af05") : failed to sync secret cache: timed out waiting for the condition Jul 10 01:38:24.694197 kubelet[2299]: E0710 01:38:24.693285 2299 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/a29ef6dc-4246-436d-87dd-9c8e96247aeb-config-volume podName:a29ef6dc-4246-436d-87dd-9c8e96247aeb nodeName:}" failed. No retries permitted until 2025-07-10 01:38:26.693280038 +0000 UTC m=+1526.561706921 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/a29ef6dc-4246-436d-87dd-9c8e96247aeb-config-volume") pod "coredns-7c65d6cfc9-4k5ld" (UID: "a29ef6dc-4246-436d-87dd-9c8e96247aeb") : failed to sync configmap cache: timed out waiting for the condition Jul 10 01:38:24.694197 kubelet[2299]: E0710 01:38:24.693294 2299 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8e8146e9-6407-49b7-8cef-e26dac385734-calico-apiserver-certs podName:8e8146e9-6407-49b7-8cef-e26dac385734 nodeName:}" failed. No retries permitted until 2025-07-10 01:38:26.693289142 +0000 UTC m=+1526.561716024 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "calico-apiserver-certs" (UniqueName: "kubernetes.io/secret/8e8146e9-6407-49b7-8cef-e26dac385734-calico-apiserver-certs") pod "calico-apiserver-6d44674bc4-w2f48" (UID: "8e8146e9-6407-49b7-8cef-e26dac385734") : failed to sync secret cache: timed out waiting for the condition Jul 10 01:38:24.694197 kubelet[2299]: E0710 01:38:24.693302 2299 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/3459c244-a1ae-43bc-ad86-239a6e665757-config-volume podName:3459c244-a1ae-43bc-ad86-239a6e665757 nodeName:}" failed. No retries permitted until 2025-07-10 01:38:26.693298085 +0000 UTC m=+1526.561724969 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/3459c244-a1ae-43bc-ad86-239a6e665757-config-volume") pod "coredns-7c65d6cfc9-snhl5" (UID: "3459c244-a1ae-43bc-ad86-239a6e665757") : failed to sync configmap cache: timed out waiting for the condition Jul 10 01:38:24.694197 kubelet[2299]: E0710 01:38:24.693310 2299 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/8e8146e9-6407-49b7-8cef-e26dac385734-kube-api-access-r5vvj podName:8e8146e9-6407-49b7-8cef-e26dac385734 nodeName:}" failed. No retries permitted until 2025-07-10 01:38:26.693305816 +0000 UTC m=+1526.561732699 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-r5vvj" (UniqueName: "kubernetes.io/projected/8e8146e9-6407-49b7-8cef-e26dac385734-kube-api-access-r5vvj") pod "calico-apiserver-6d44674bc4-w2f48" (UID: "8e8146e9-6407-49b7-8cef-e26dac385734") : failed to sync configmap cache: timed out waiting for the condition Jul 10 01:38:24.694197 kubelet[2299]: E0710 01:38:24.693317 2299 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5f01bcaa-ff1c-4bd5-988b-d3c60c6abdcc-tigera-ca-bundle podName:5f01bcaa-ff1c-4bd5-988b-d3c60c6abdcc nodeName:}" failed. No retries permitted until 2025-07-10 01:38:26.693313572 +0000 UTC m=+1526.561740455 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "tigera-ca-bundle" (UniqueName: "kubernetes.io/configmap/5f01bcaa-ff1c-4bd5-988b-d3c60c6abdcc-tigera-ca-bundle") pod "calico-kube-controllers-5477ff879d-j2p5q" (UID: "5f01bcaa-ff1c-4bd5-988b-d3c60c6abdcc") : failed to sync configmap cache: timed out waiting for the condition Jul 10 01:38:24.694197 kubelet[2299]: E0710 01:38:24.693327 2299 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bb9848ea-740a-453f-b511-e75cc1983690-typha-certs podName:bb9848ea-740a-453f-b511-e75cc1983690 nodeName:}" failed. No retries permitted until 2025-07-10 01:38:26.693321642 +0000 UTC m=+1526.561748526 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "typha-certs" (UniqueName: "kubernetes.io/secret/bb9848ea-740a-453f-b511-e75cc1983690-typha-certs") pod "calico-typha-66ddcf689b-z7vqm" (UID: "bb9848ea-740a-453f-b511-e75cc1983690") : failed to sync secret cache: timed out waiting for the condition Jul 10 01:38:25.303187 systemd[1]: run-containerd-runc-k8s.io-1f6123a2530db4e70f4a3e6b47c035375d25ef2c7098e1ca906af5c39164caa1-runc.PBFC7G.mount: Deactivated successfully. Jul 10 01:38:25.760590 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9a5d4a598e938ac14cd5303eac5f5d043b801c05fe04056375ed6661f862bc21-rootfs.mount: Deactivated successfully. 
Jul 10 01:38:25.762329 env[1363]: time="2025-07-10T01:38:25.762298094Z" level=info msg="shim disconnected" id=9a5d4a598e938ac14cd5303eac5f5d043b801c05fe04056375ed6661f862bc21 Jul 10 01:38:25.762559 env[1363]: time="2025-07-10T01:38:25.762548408Z" level=warning msg="cleaning up after shim disconnected" id=9a5d4a598e938ac14cd5303eac5f5d043b801c05fe04056375ed6661f862bc21 namespace=k8s.io Jul 10 01:38:25.762619 env[1363]: time="2025-07-10T01:38:25.762608797Z" level=info msg="cleaning up dead shim" Jul 10 01:38:25.768335 env[1363]: time="2025-07-10T01:38:25.768319316Z" level=warning msg="cleanup warnings time=\"2025-07-10T01:38:25Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=8534 runtime=io.containerd.runc.v2\n" Jul 10 01:38:25.773614 env[1363]: time="2025-07-10T01:38:25.773597973Z" level=info msg="StopContainer for \"9a5d4a598e938ac14cd5303eac5f5d043b801c05fe04056375ed6661f862bc21\" returns successfully" Jul 10 01:38:25.775206 env[1363]: time="2025-07-10T01:38:25.775192135Z" level=info msg="CreateContainer within sandbox \"fbd7787e3ba7347a2dbc28cc06ce79390bd9946b44636e633481dc0bb5ca8f11\" for container &ContainerMetadata{Name:calico-typha,Attempt:1,}" Jul 10 01:38:25.788924 env[1363]: time="2025-07-10T01:38:25.788889819Z" level=info msg="CreateContainer within sandbox \"fbd7787e3ba7347a2dbc28cc06ce79390bd9946b44636e633481dc0bb5ca8f11\" for &ContainerMetadata{Name:calico-typha,Attempt:1,} returns container id \"8cfda351e4295c59a352125f6909b60d75a899ad7bb870a4447707b9d82c95e4\"" Jul 10 01:38:25.789218 env[1363]: time="2025-07-10T01:38:25.789205658Z" level=info msg="StartContainer for \"8cfda351e4295c59a352125f6909b60d75a899ad7bb870a4447707b9d82c95e4\"" Jul 10 01:38:25.835180 env[1363]: time="2025-07-10T01:38:25.835152877Z" level=info msg="StartContainer for \"8cfda351e4295c59a352125f6909b60d75a899ad7bb870a4447707b9d82c95e4\" returns successfully" Jul 10 01:38:26.286803 kubelet[2299]: I0710 01:38:26.286768 2299 status_manager.go:851] "Failed to get status for pod" podUID="b35b56493416c25588cb530e37ffc065" pod="kube-system/kube-scheduler-localhost" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/kube-system/pods/kube-scheduler-localhost\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:26.287132 kubelet[2299]: I0710 01:38:26.286911 2299 status_manager.go:851] "Failed to get status for pod" podUID="3f04709fe51ae4ab5abd58e8da771b74" pod="kube-system/kube-controller-manager-localhost" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/kube-system/pods/kube-controller-manager-localhost\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:26.287132 kubelet[2299]: I0710 01:38:26.287023 2299 status_manager.go:851] "Failed to get status for pod" podUID="c3f9faf5-cc25-4483-beb9-5dea29a71367" pod="calico-system/whisker-5bc4d9bd7d-nwwj6" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/calico-system/pods/whisker-5bc4d9bd7d-nwwj6\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:26.287204 kubelet[2299]: I0710 01:38:26.287128 2299 status_manager.go:851] "Failed to get status for pod" podUID="9c135a1b-00bf-4e6f-87fa-9ac292c6a135" pod="tigera-operator/tigera-operator-5bf8dfcb4-twgs2" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/tigera-operator/pods/tigera-operator-5bf8dfcb4-twgs2\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:26.287255 kubelet[2299]: I0710 01:38:26.287236 2299 status_manager.go:851] "Failed to get status for pod" 
podUID="bb9848ea-740a-453f-b511-e75cc1983690" pod="calico-system/calico-typha-66ddcf689b-z7vqm" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/calico-system/pods/calico-typha-66ddcf689b-z7vqm\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:26.287360 kubelet[2299]: I0710 01:38:26.287342 2299 status_manager.go:851] "Failed to get status for pod" podUID="74cf1bc5-5d5a-4dc7-850a-71013984af05" pod="calico-apiserver/calico-apiserver-6d44674bc4-b2wqb" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/calico-apiserver/pods/calico-apiserver-6d44674bc4-b2wqb\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:26.287467 kubelet[2299]: I0710 01:38:26.287448 2299 status_manager.go:851] "Failed to get status for pod" podUID="6367e512-6f46-407d-94e1-a5c573185269" pod="calico-system/calico-node-2k6z4" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/calico-system/pods/calico-node-2k6z4\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:26.287578 kubelet[2299]: I0710 01:38:26.287559 2299 status_manager.go:851] "Failed to get status for pod" podUID="5f01bcaa-ff1c-4bd5-988b-d3c60c6abdcc" pod="calico-system/calico-kube-controllers-5477ff879d-j2p5q" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/calico-system/pods/calico-kube-controllers-5477ff879d-j2p5q\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:26.287705 kubelet[2299]: I0710 01:38:26.287686 2299 status_manager.go:851] "Failed to get status for pod" podUID="8acd60714a0f0f6f5e038fa659db2909" pod="kube-system/kube-apiserver-localhost" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/kube-system/pods/kube-apiserver-localhost\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:26.287824 kubelet[2299]: I0710 01:38:26.287802 2299 status_manager.go:851] "Failed to get status for pod" podUID="8e8146e9-6407-49b7-8cef-e26dac385734" pod="calico-apiserver/calico-apiserver-6d44674bc4-w2f48" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/calico-apiserver/pods/calico-apiserver-6d44674bc4-w2f48\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:26.287930 kubelet[2299]: I0710 01:38:26.287912 2299 status_manager.go:851] "Failed to get status for pod" podUID="a29ef6dc-4246-436d-87dd-9c8e96247aeb" pod="kube-system/coredns-7c65d6cfc9-4k5ld" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-4k5ld\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:26.288031 kubelet[2299]: I0710 01:38:26.288013 2299 status_manager.go:851] "Failed to get status for pod" podUID="3459c244-a1ae-43bc-ad86-239a6e665757" pod="kube-system/coredns-7c65d6cfc9-snhl5" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-snhl5\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:26.288135 kubelet[2299]: I0710 01:38:26.288116 2299 status_manager.go:851] "Failed to get status for pod" podUID="ced04dc5-79ee-4a07-a568-b0fd4007f64c" pod="calico-system/goldmane-58fd7646b9-zxwst" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/calico-system/pods/goldmane-58fd7646b9-zxwst\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:26.300058 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2961136402.mount: Deactivated successfully. 
Jul 10 01:38:26.433057 systemd[1]: run-containerd-runc-k8s.io-1f6123a2530db4e70f4a3e6b47c035375d25ef2c7098e1ca906af5c39164caa1-runc.qIPrtP.mount: Deactivated successfully. Jul 10 01:38:26.882418 kubelet[2299]: W0710 01:38:26.882338 2299 reflector.go:561] object-"calico-system"/"whisker-ca-bundle": failed to list *v1.ConfigMap: Get "https://139.178.70.102:6443/api/v1/namespaces/calico-system/configmaps?fieldSelector=metadata.name%3Dwhisker-ca-bundle&resourceVersion=687": dial tcp 139.178.70.102:6443: connect: connection refused Jul 10 01:38:26.882418 kubelet[2299]: E0710 01:38:26.882394 2299 reflector.go:158] "Unhandled Error" err="object-\"calico-system\"/\"whisker-ca-bundle\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://139.178.70.102:6443/api/v1/namespaces/calico-system/configmaps?fieldSelector=metadata.name%3Dwhisker-ca-bundle&resourceVersion=687\": dial tcp 139.178.70.102:6443: connect: connection refused" logger="UnhandledError" Jul 10 01:38:27.391317 kubelet[2299]: W0710 01:38:27.391276 2299 reflector.go:561] object-"calico-system"/"node-certs": failed to list *v1.Secret: Get "https://139.178.70.102:6443/api/v1/namespaces/calico-system/secrets?fieldSelector=metadata.name%3Dnode-certs&resourceVersion=612": dial tcp 139.178.70.102:6443: connect: connection refused Jul 10 01:38:27.391699 kubelet[2299]: E0710 01:38:27.391680 2299 reflector.go:158] "Unhandled Error" err="object-\"calico-system\"/\"node-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://139.178.70.102:6443/api/v1/namespaces/calico-system/secrets?fieldSelector=metadata.name%3Dnode-certs&resourceVersion=612\": dial tcp 139.178.70.102:6443: connect: connection refused" logger="UnhandledError" Jul 10 01:38:27.513636 kubelet[2299]: E0710 01:38:27.513582 2299 configmap.go:193] Couldn't get configMap calico-system/goldmane: failed to sync configmap cache: timed out waiting for the condition Jul 10 01:38:27.513855 kubelet[2299]: E0710 01:38:27.513844 2299 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ced04dc5-79ee-4a07-a568-b0fd4007f64c-config podName:ced04dc5-79ee-4a07-a568-b0fd4007f64c nodeName:}" failed. No retries permitted until 2025-07-10 01:38:31.513827321 +0000 UTC m=+1531.382254207 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/ced04dc5-79ee-4a07-a568-b0fd4007f64c-config") pod "goldmane-58fd7646b9-zxwst" (UID: "ced04dc5-79ee-4a07-a568-b0fd4007f64c") : failed to sync configmap cache: timed out waiting for the condition Jul 10 01:38:27.576000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@31-139.178.70.102:22-139.178.68.195:44506 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 01:38:27.579287 kernel: kauditd_printk_skb: 7 callbacks suppressed Jul 10 01:38:27.581735 kernel: audit: type=1130 audit(1752111507.576:677): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@31-139.178.70.102:22-139.178.68.195:44506 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 01:38:27.578392 systemd[1]: Started sshd@31-139.178.70.102:22-139.178.68.195:44506.service. 
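The reflector warnings above come from the kubelet listing single ConfigMaps and Secrets with a metadata.name field selector (e.g. configmaps?fieldSelector=metadata.name%3Dwhisker-ca-bundle) and getting connection refused. A minimal client-go sketch of the same request shape follows; it is illustrative only, and the kubeconfig path is an assumption for the example, not taken from the log:

// Illustrative sketch of the list request the reflector retries above.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Hypothetical kubeconfig path, used here only to build a client.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/kubelet.conf")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Same shape as the failing request in the log:
	// GET .../namespaces/calico-system/configmaps?fieldSelector=metadata.name%3Dwhisker-ca-bundle
	cms, err := cs.CoreV1().ConfigMaps("calico-system").List(context.TODO(), metav1.ListOptions{
		FieldSelector: "metadata.name=whisker-ca-bundle",
	})
	if err != nil {
		// Prints "connect: connection refused" while the apiserver is down.
		fmt.Println("list configmaps:", err)
		return
	}
	fmt.Println("items:", len(cms.Items))
}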
Jul 10 01:38:27.614582 kubelet[2299]: E0710 01:38:27.614416 2299 configmap.go:193] Couldn't get configMap calico-system/goldmane-ca-bundle: failed to sync configmap cache: timed out waiting for the condition Jul 10 01:38:27.614582 kubelet[2299]: E0710 01:38:27.614474 2299 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ced04dc5-79ee-4a07-a568-b0fd4007f64c-goldmane-ca-bundle podName:ced04dc5-79ee-4a07-a568-b0fd4007f64c nodeName:}" failed. No retries permitted until 2025-07-10 01:38:31.614460767 +0000 UTC m=+1531.482887649 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "goldmane-ca-bundle" (UniqueName: "kubernetes.io/configmap/ced04dc5-79ee-4a07-a568-b0fd4007f64c-goldmane-ca-bundle") pod "goldmane-58fd7646b9-zxwst" (UID: "ced04dc5-79ee-4a07-a568-b0fd4007f64c") : failed to sync configmap cache: timed out waiting for the condition Jul 10 01:38:27.627000 audit[8622]: USER_ACCT pid=8622 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:38:27.629683 sshd[8622]: Accepted publickey for core from 139.178.68.195 port 44506 ssh2: RSA SHA256:NVpdRDPpwzjVTzi6orhe1cA9BvcYymCSReGH8myOy/Q Jul 10 01:38:27.631000 audit[8622]: CRED_ACQ pid=8622 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:38:27.633218 sshd[8622]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 10 01:38:27.635502 kernel: audit: type=1101 audit(1752111507.627:678): pid=8622 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:38:27.635545 kernel: audit: type=1103 audit(1752111507.631:679): pid=8622 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:38:27.637447 kernel: audit: type=1006 audit(1752111507.631:680): pid=8622 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=33 res=1 Jul 10 01:38:27.631000 audit[8622]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd6be82070 a2=3 a3=0 items=0 ppid=1 pid=8622 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=33 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:38:27.640727 kernel: audit: type=1300 audit(1752111507.631:680): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd6be82070 a2=3 a3=0 items=0 ppid=1 pid=8622 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=33 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:38:27.631000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jul 10 01:38:27.641982 kernel: audit: type=1327 audit(1752111507.631:680): proctitle=737368643A20636F7265205B707269765D Jul 10 01:38:27.644112 systemd[1]: Started session-33.scope. 
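The MountVolume.SetUp failures above show two behaviours: each mount attempt gives the kubelet's configmap/secret cache a bounded time to sync (failing with "timed out waiting for the condition" because the apiserver is unreachable), and the logged durationBeforeRetry doubles between rounds (4s in these entries, 8s later in the log). The following plain-Go sketch imitates both behaviours; it is not the kubelet's implementation, and the poll period and per-attempt timeout are made up for the example:

// Illustrative sketch, not kubelet code: bounded wait for a cache sync,
// then a doubling retry delay, mirroring the entries above and below.
package main

import (
	"errors"
	"fmt"
	"time"
)

var errTimedOut = errors.New("timed out waiting for the condition")

// waitForCacheSync polls synced() until it reports true or the timeout expires.
func waitForCacheSync(synced func() bool, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if synced() {
			return nil
		}
		time.Sleep(100 * time.Millisecond)
	}
	return errTimedOut
}

func main() {
	// With the apiserver refusing connections the cache never syncs.
	synced := func() bool { return false }

	backoff := 4 * time.Second // first durationBeforeRetry seen in the log
	for attempt := 1; attempt <= 2; attempt++ {
		if err := waitForCacheSync(synced, 1*time.Second); err != nil {
			fmt.Printf("attempt %d: %v; retrying in %v\n", attempt, err, backoff)
			backoff *= 2 // 4s -> 8s, matching the later entries
		}
	}
}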
Jul 10 01:38:27.644674 systemd-logind[1351]: New session 33 of user core. Jul 10 01:38:27.646000 audit[8622]: USER_START pid=8622 uid=0 auid=500 ses=33 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:38:27.654269 kernel: audit: type=1105 audit(1752111507.646:681): pid=8622 uid=0 auid=500 ses=33 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:38:27.654300 kernel: audit: type=1103 audit(1752111507.646:682): pid=8625 uid=0 auid=500 ses=33 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:38:27.646000 audit[8625]: CRED_ACQ pid=8625 uid=0 auid=500 ses=33 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:38:27.700343 kubelet[2299]: W0710 01:38:27.700303 2299 reflector.go:561] object-"calico-system"/"goldmane": failed to list *v1.ConfigMap: Get "https://139.178.70.102:6443/api/v1/namespaces/calico-system/configmaps?fieldSelector=metadata.name%3Dgoldmane&resourceVersion=687": dial tcp 139.178.70.102:6443: connect: connection refused Jul 10 01:38:27.700496 kubelet[2299]: E0710 01:38:27.700477 2299 reflector.go:158] "Unhandled Error" err="object-\"calico-system\"/\"goldmane\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://139.178.70.102:6443/api/v1/namespaces/calico-system/configmaps?fieldSelector=metadata.name%3Dgoldmane&resourceVersion=687\": dial tcp 139.178.70.102:6443: connect: connection refused" logger="UnhandledError" Jul 10 01:38:27.715232 kubelet[2299]: E0710 01:38:27.715218 2299 secret.go:189] Couldn't get secret calico-system/goldmane-key-pair: failed to sync secret cache: timed out waiting for the condition Jul 10 01:38:27.715343 kubelet[2299]: E0710 01:38:27.715331 2299 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ced04dc5-79ee-4a07-a568-b0fd4007f64c-goldmane-key-pair podName:ced04dc5-79ee-4a07-a568-b0fd4007f64c nodeName:}" failed. No retries permitted until 2025-07-10 01:38:31.715317924 +0000 UTC m=+1531.583744812 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "goldmane-key-pair" (UniqueName: "kubernetes.io/secret/ced04dc5-79ee-4a07-a568-b0fd4007f64c-goldmane-key-pair") pod "goldmane-58fd7646b9-zxwst" (UID: "ced04dc5-79ee-4a07-a568-b0fd4007f64c") : failed to sync secret cache: timed out waiting for the condition Jul 10 01:38:27.715458 kubelet[2299]: E0710 01:38:27.715446 2299 projected.go:288] Couldn't get configMap calico-apiserver/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Jul 10 01:38:27.715543 kubelet[2299]: E0710 01:38:27.715532 2299 projected.go:194] Error preparing data for projected volume kube-api-access-47zqf for pod calico-apiserver/calico-apiserver-6d44674bc4-b2wqb: failed to sync configmap cache: timed out waiting for the condition Jul 10 01:38:27.715628 kubelet[2299]: E0710 01:38:27.715617 2299 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/74cf1bc5-5d5a-4dc7-850a-71013984af05-kube-api-access-47zqf podName:74cf1bc5-5d5a-4dc7-850a-71013984af05 nodeName:}" failed. No retries permitted until 2025-07-10 01:38:31.715606971 +0000 UTC m=+1531.584033860 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-47zqf" (UniqueName: "kubernetes.io/projected/74cf1bc5-5d5a-4dc7-850a-71013984af05-kube-api-access-47zqf") pod "calico-apiserver-6d44674bc4-b2wqb" (UID: "74cf1bc5-5d5a-4dc7-850a-71013984af05") : failed to sync configmap cache: timed out waiting for the condition Jul 10 01:38:27.715740 kubelet[2299]: E0710 01:38:27.715683 2299 secret.go:189] Couldn't get secret calico-system/typha-certs: failed to sync secret cache: timed out waiting for the condition Jul 10 01:38:27.715827 kubelet[2299]: E0710 01:38:27.715817 2299 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bb9848ea-740a-453f-b511-e75cc1983690-typha-certs podName:bb9848ea-740a-453f-b511-e75cc1983690 nodeName:}" failed. No retries permitted until 2025-07-10 01:38:31.715809382 +0000 UTC m=+1531.584236270 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "typha-certs" (UniqueName: "kubernetes.io/secret/bb9848ea-740a-453f-b511-e75cc1983690-typha-certs") pod "calico-typha-66ddcf689b-z7vqm" (UID: "bb9848ea-740a-453f-b511-e75cc1983690") : failed to sync secret cache: timed out waiting for the condition Jul 10 01:38:27.715922 kubelet[2299]: E0710 01:38:27.715694 2299 configmap.go:193] Couldn't get configMap calico-system/tigera-ca-bundle: failed to sync configmap cache: timed out waiting for the condition Jul 10 01:38:27.716006 kubelet[2299]: E0710 01:38:27.715997 2299 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/bb9848ea-740a-453f-b511-e75cc1983690-tigera-ca-bundle podName:bb9848ea-740a-453f-b511-e75cc1983690 nodeName:}" failed. No retries permitted until 2025-07-10 01:38:31.715989556 +0000 UTC m=+1531.584416444 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "tigera-ca-bundle" (UniqueName: "kubernetes.io/configmap/bb9848ea-740a-453f-b511-e75cc1983690-tigera-ca-bundle") pod "calico-typha-66ddcf689b-z7vqm" (UID: "bb9848ea-740a-453f-b511-e75cc1983690") : failed to sync configmap cache: timed out waiting for the condition Jul 10 01:38:27.716094 kubelet[2299]: E0710 01:38:27.715700 2299 secret.go:189] Couldn't get secret calico-system/whisker-backend-key-pair: failed to sync secret cache: timed out waiting for the condition Jul 10 01:38:27.716181 kubelet[2299]: E0710 01:38:27.716171 2299 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c3f9faf5-cc25-4483-beb9-5dea29a71367-whisker-backend-key-pair podName:c3f9faf5-cc25-4483-beb9-5dea29a71367 nodeName:}" failed. No retries permitted until 2025-07-10 01:38:31.71616394 +0000 UTC m=+1531.584590828 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "whisker-backend-key-pair" (UniqueName: "kubernetes.io/secret/c3f9faf5-cc25-4483-beb9-5dea29a71367-whisker-backend-key-pair") pod "whisker-5bc4d9bd7d-nwwj6" (UID: "c3f9faf5-cc25-4483-beb9-5dea29a71367") : failed to sync secret cache: timed out waiting for the condition Jul 10 01:38:27.716264 kubelet[2299]: E0710 01:38:27.715705 2299 configmap.go:193] Couldn't get configMap kube-system/kube-proxy: failed to sync configmap cache: timed out waiting for the condition Jul 10 01:38:27.716348 kubelet[2299]: E0710 01:38:27.716338 2299 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/22eb6a01-1430-4380-b1df-6cb2ed0c8d8b-kube-proxy podName:22eb6a01-1430-4380-b1df-6cb2ed0c8d8b nodeName:}" failed. No retries permitted until 2025-07-10 01:38:31.716331587 +0000 UTC m=+1531.584758476 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-proxy" (UniqueName: "kubernetes.io/configmap/22eb6a01-1430-4380-b1df-6cb2ed0c8d8b-kube-proxy") pod "kube-proxy-rxvps" (UID: "22eb6a01-1430-4380-b1df-6cb2ed0c8d8b") : failed to sync configmap cache: timed out waiting for the condition Jul 10 01:38:27.716429 kubelet[2299]: E0710 01:38:27.715710 2299 configmap.go:193] Couldn't get configMap calico-system/whisker-ca-bundle: failed to sync configmap cache: timed out waiting for the condition Jul 10 01:38:27.716515 kubelet[2299]: E0710 01:38:27.716506 2299 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c3f9faf5-cc25-4483-beb9-5dea29a71367-whisker-ca-bundle podName:c3f9faf5-cc25-4483-beb9-5dea29a71367 nodeName:}" failed. No retries permitted until 2025-07-10 01:38:31.716499019 +0000 UTC m=+1531.584925908 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "whisker-ca-bundle" (UniqueName: "kubernetes.io/configmap/c3f9faf5-cc25-4483-beb9-5dea29a71367-whisker-ca-bundle") pod "whisker-5bc4d9bd7d-nwwj6" (UID: "c3f9faf5-cc25-4483-beb9-5dea29a71367") : failed to sync configmap cache: timed out waiting for the condition Jul 10 01:38:27.716604 kubelet[2299]: E0710 01:38:27.715717 2299 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Jul 10 01:38:27.716682 kubelet[2299]: E0710 01:38:27.716671 2299 projected.go:194] Error preparing data for projected volume kube-api-access-4bl2z for pod kube-system/coredns-7c65d6cfc9-4k5ld: failed to sync configmap cache: timed out waiting for the condition Jul 10 01:38:27.716748 kubelet[2299]: E0710 01:38:27.715721 2299 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Jul 10 01:38:27.716794 kubelet[2299]: E0710 01:38:27.716751 2299 projected.go:194] Error preparing data for projected volume kube-api-access-wpcvh for pod kube-system/kube-proxy-rxvps: failed to sync configmap cache: timed out waiting for the condition Jul 10 01:38:27.716794 kubelet[2299]: E0710 01:38:27.716780 2299 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/22eb6a01-1430-4380-b1df-6cb2ed0c8d8b-kube-api-access-wpcvh podName:22eb6a01-1430-4380-b1df-6cb2ed0c8d8b nodeName:}" failed. No retries permitted until 2025-07-10 01:38:31.716771009 +0000 UTC m=+1531.585197893 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-wpcvh" (UniqueName: "kubernetes.io/projected/22eb6a01-1430-4380-b1df-6cb2ed0c8d8b-kube-api-access-wpcvh") pod "kube-proxy-rxvps" (UID: "22eb6a01-1430-4380-b1df-6cb2ed0c8d8b") : failed to sync configmap cache: timed out waiting for the condition Jul 10 01:38:27.716794 kubelet[2299]: E0710 01:38:27.715726 2299 secret.go:189] Couldn't get secret calico-system/node-certs: failed to sync secret cache: timed out waiting for the condition Jul 10 01:38:27.716905 kubelet[2299]: E0710 01:38:27.716805 2299 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6367e512-6f46-407d-94e1-a5c573185269-node-certs podName:6367e512-6f46-407d-94e1-a5c573185269 nodeName:}" failed. No retries permitted until 2025-07-10 01:38:31.716797788 +0000 UTC m=+1531.585224671 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "node-certs" (UniqueName: "kubernetes.io/secret/6367e512-6f46-407d-94e1-a5c573185269-node-certs") pod "calico-node-2k6z4" (UID: "6367e512-6f46-407d-94e1-a5c573185269") : failed to sync secret cache: timed out waiting for the condition Jul 10 01:38:27.716905 kubelet[2299]: E0710 01:38:27.715731 2299 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Jul 10 01:38:27.716905 kubelet[2299]: E0710 01:38:27.716817 2299 projected.go:194] Error preparing data for projected volume kube-api-access-pwvqb for pod kube-system/coredns-7c65d6cfc9-snhl5: failed to sync configmap cache: timed out waiting for the condition Jul 10 01:38:27.716905 kubelet[2299]: E0710 01:38:27.716832 2299 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3459c244-a1ae-43bc-ad86-239a6e665757-kube-api-access-pwvqb podName:3459c244-a1ae-43bc-ad86-239a6e665757 nodeName:}" failed. 
No retries permitted until 2025-07-10 01:38:31.716826897 +0000 UTC m=+1531.585253780 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-pwvqb" (UniqueName: "kubernetes.io/projected/3459c244-a1ae-43bc-ad86-239a6e665757-kube-api-access-pwvqb") pod "coredns-7c65d6cfc9-snhl5" (UID: "3459c244-a1ae-43bc-ad86-239a6e665757") : failed to sync configmap cache: timed out waiting for the condition Jul 10 01:38:27.716905 kubelet[2299]: E0710 01:38:27.715736 2299 configmap.go:193] Couldn't get configMap calico-system/tigera-ca-bundle: failed to sync configmap cache: timed out waiting for the condition Jul 10 01:38:27.716905 kubelet[2299]: E0710 01:38:27.716852 2299 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5f01bcaa-ff1c-4bd5-988b-d3c60c6abdcc-tigera-ca-bundle podName:5f01bcaa-ff1c-4bd5-988b-d3c60c6abdcc nodeName:}" failed. No retries permitted until 2025-07-10 01:38:31.716846738 +0000 UTC m=+1531.585273621 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "tigera-ca-bundle" (UniqueName: "kubernetes.io/configmap/5f01bcaa-ff1c-4bd5-988b-d3c60c6abdcc-tigera-ca-bundle") pod "calico-kube-controllers-5477ff879d-j2p5q" (UID: "5f01bcaa-ff1c-4bd5-988b-d3c60c6abdcc") : failed to sync configmap cache: timed out waiting for the condition Jul 10 01:38:27.716905 kubelet[2299]: E0710 01:38:27.715741 2299 configmap.go:193] Couldn't get configMap kube-system/coredns: failed to sync configmap cache: timed out waiting for the condition Jul 10 01:38:27.716905 kubelet[2299]: E0710 01:38:27.716872 2299 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/3459c244-a1ae-43bc-ad86-239a6e665757-config-volume podName:3459c244-a1ae-43bc-ad86-239a6e665757 nodeName:}" failed. No retries permitted until 2025-07-10 01:38:31.716866969 +0000 UTC m=+1531.585293853 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/3459c244-a1ae-43bc-ad86-239a6e665757-config-volume") pod "coredns-7c65d6cfc9-snhl5" (UID: "3459c244-a1ae-43bc-ad86-239a6e665757") : failed to sync configmap cache: timed out waiting for the condition Jul 10 01:38:27.716905 kubelet[2299]: E0710 01:38:27.715746 2299 configmap.go:193] Couldn't get configMap kube-system/coredns: failed to sync configmap cache: timed out waiting for the condition Jul 10 01:38:27.716905 kubelet[2299]: E0710 01:38:27.716889 2299 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/a29ef6dc-4246-436d-87dd-9c8e96247aeb-config-volume podName:a29ef6dc-4246-436d-87dd-9c8e96247aeb nodeName:}" failed. No retries permitted until 2025-07-10 01:38:31.716884811 +0000 UTC m=+1531.585311694 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/a29ef6dc-4246-436d-87dd-9c8e96247aeb-config-volume") pod "coredns-7c65d6cfc9-4k5ld" (UID: "a29ef6dc-4246-436d-87dd-9c8e96247aeb") : failed to sync configmap cache: timed out waiting for the condition Jul 10 01:38:27.716905 kubelet[2299]: E0710 01:38:27.715754 2299 secret.go:189] Couldn't get secret calico-apiserver/calico-apiserver-certs: failed to sync secret cache: timed out waiting for the condition Jul 10 01:38:27.717309 kubelet[2299]: E0710 01:38:27.716910 2299 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8e8146e9-6407-49b7-8cef-e26dac385734-calico-apiserver-certs podName:8e8146e9-6407-49b7-8cef-e26dac385734 nodeName:}" failed. 
No retries permitted until 2025-07-10 01:38:31.716905417 +0000 UTC m=+1531.585332301 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "calico-apiserver-certs" (UniqueName: "kubernetes.io/secret/8e8146e9-6407-49b7-8cef-e26dac385734-calico-apiserver-certs") pod "calico-apiserver-6d44674bc4-w2f48" (UID: "8e8146e9-6407-49b7-8cef-e26dac385734") : failed to sync secret cache: timed out waiting for the condition Jul 10 01:38:27.717309 kubelet[2299]: E0710 01:38:27.715759 2299 configmap.go:193] Couldn't get configMap calico-system/tigera-ca-bundle: failed to sync configmap cache: timed out waiting for the condition Jul 10 01:38:27.717309 kubelet[2299]: E0710 01:38:27.716927 2299 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6367e512-6f46-407d-94e1-a5c573185269-tigera-ca-bundle podName:6367e512-6f46-407d-94e1-a5c573185269 nodeName:}" failed. No retries permitted until 2025-07-10 01:38:31.716922983 +0000 UTC m=+1531.585349866 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "tigera-ca-bundle" (UniqueName: "kubernetes.io/configmap/6367e512-6f46-407d-94e1-a5c573185269-tigera-ca-bundle") pod "calico-node-2k6z4" (UID: "6367e512-6f46-407d-94e1-a5c573185269") : failed to sync configmap cache: timed out waiting for the condition Jul 10 01:38:27.717309 kubelet[2299]: E0710 01:38:27.715763 2299 secret.go:189] Couldn't get secret calico-apiserver/calico-apiserver-certs: failed to sync secret cache: timed out waiting for the condition Jul 10 01:38:27.717309 kubelet[2299]: E0710 01:38:27.716944 2299 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/74cf1bc5-5d5a-4dc7-850a-71013984af05-calico-apiserver-certs podName:74cf1bc5-5d5a-4dc7-850a-71013984af05 nodeName:}" failed. No retries permitted until 2025-07-10 01:38:31.716940257 +0000 UTC m=+1531.585367141 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "calico-apiserver-certs" (UniqueName: "kubernetes.io/secret/74cf1bc5-5d5a-4dc7-850a-71013984af05-calico-apiserver-certs") pod "calico-apiserver-6d44674bc4-b2wqb" (UID: "74cf1bc5-5d5a-4dc7-850a-71013984af05") : failed to sync secret cache: timed out waiting for the condition Jul 10 01:38:27.717309 kubelet[2299]: E0710 01:38:27.715770 2299 projected.go:288] Couldn't get configMap calico-apiserver/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Jul 10 01:38:27.717309 kubelet[2299]: E0710 01:38:27.716954 2299 projected.go:194] Error preparing data for projected volume kube-api-access-r5vvj for pod calico-apiserver/calico-apiserver-6d44674bc4-w2f48: failed to sync configmap cache: timed out waiting for the condition Jul 10 01:38:27.717309 kubelet[2299]: E0710 01:38:27.716970 2299 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/8e8146e9-6407-49b7-8cef-e26dac385734-kube-api-access-r5vvj podName:8e8146e9-6407-49b7-8cef-e26dac385734 nodeName:}" failed. No retries permitted until 2025-07-10 01:38:31.716965002 +0000 UTC m=+1531.585391885 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-r5vvj" (UniqueName: "kubernetes.io/projected/8e8146e9-6407-49b7-8cef-e26dac385734-kube-api-access-r5vvj") pod "calico-apiserver-6d44674bc4-w2f48" (UID: "8e8146e9-6407-49b7-8cef-e26dac385734") : failed to sync configmap cache: timed out waiting for the condition Jul 10 01:38:27.717668 kubelet[2299]: E0710 01:38:27.717657 2299 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/a29ef6dc-4246-436d-87dd-9c8e96247aeb-kube-api-access-4bl2z podName:a29ef6dc-4246-436d-87dd-9c8e96247aeb nodeName:}" failed. No retries permitted until 2025-07-10 01:38:31.717646644 +0000 UTC m=+1531.586073532 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-4bl2z" (UniqueName: "kubernetes.io/projected/a29ef6dc-4246-436d-87dd-9c8e96247aeb-kube-api-access-4bl2z") pod "coredns-7c65d6cfc9-4k5ld" (UID: "a29ef6dc-4246-436d-87dd-9c8e96247aeb") : failed to sync configmap cache: timed out waiting for the condition Jul 10 01:38:27.740952 kubelet[2299]: W0710 01:38:27.740923 2299 reflector.go:561] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://139.178.70.102:6443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=687": dial tcp 139.178.70.102:6443: connect: connection refused Jul 10 01:38:27.741008 kubelet[2299]: E0710 01:38:27.740957 2299 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://139.178.70.102:6443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=687\": dial tcp 139.178.70.102:6443: connect: connection refused" logger="UnhandledError" Jul 10 01:38:27.770746 sshd[8622]: pam_unix(sshd:session): session closed for user core Jul 10 01:38:27.769000 audit[8622]: USER_END pid=8622 uid=0 auid=500 ses=33 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:38:27.772267 systemd[1]: sshd@31-139.178.70.102:22-139.178.68.195:44506.service: Deactivated successfully. Jul 10 01:38:27.772746 systemd[1]: session-33.scope: Deactivated successfully. Jul 10 01:38:27.775487 systemd-logind[1351]: Session 33 logged out. Waiting for processes to exit. 
Jul 10 01:38:27.775652 kernel: audit: type=1106 audit(1752111507.769:683): pid=8622 uid=0 auid=500 ses=33 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:38:27.778657 kernel: audit: type=1104 audit(1752111507.769:684): pid=8622 uid=0 auid=500 ses=33 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:38:27.769000 audit[8622]: CRED_DISP pid=8622 uid=0 auid=500 ses=33 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:38:27.769000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@31-139.178.70.102:22-139.178.68.195:44506 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 01:38:27.778940 systemd-logind[1351]: Removed session 33. Jul 10 01:38:27.864060 kubelet[2299]: W0710 01:38:27.864024 2299 reflector.go:561] object-"calico-system"/"tigera-ca-bundle": failed to list *v1.ConfigMap: Get "https://139.178.70.102:6443/api/v1/namespaces/calico-system/configmaps?fieldSelector=metadata.name%3Dtigera-ca-bundle&resourceVersion=687": dial tcp 139.178.70.102:6443: connect: connection refused Jul 10 01:38:27.864183 kubelet[2299]: E0710 01:38:27.864167 2299 reflector.go:158] "Unhandled Error" err="object-\"calico-system\"/\"tigera-ca-bundle\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://139.178.70.102:6443/api/v1/namespaces/calico-system/configmaps?fieldSelector=metadata.name%3Dtigera-ca-bundle&resourceVersion=687\": dial tcp 139.178.70.102:6443: connect: connection refused" logger="UnhandledError" Jul 10 01:38:28.053176 kubelet[2299]: W0710 01:38:28.053136 2299 reflector.go:561] object-"calico-system"/"goldmane-key-pair": failed to list *v1.Secret: Get "https://139.178.70.102:6443/api/v1/namespaces/calico-system/secrets?fieldSelector=metadata.name%3Dgoldmane-key-pair&resourceVersion=612": dial tcp 139.178.70.102:6443: connect: connection refused Jul 10 01:38:28.053324 kubelet[2299]: E0710 01:38:28.053306 2299 reflector.go:158] "Unhandled Error" err="object-\"calico-system\"/\"goldmane-key-pair\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://139.178.70.102:6443/api/v1/namespaces/calico-system/secrets?fieldSelector=metadata.name%3Dgoldmane-key-pair&resourceVersion=612\": dial tcp 139.178.70.102:6443: connect: connection refused" logger="UnhandledError" Jul 10 01:38:28.192988 kubelet[2299]: W0710 01:38:28.192952 2299 reflector.go:561] object-"calico-system"/"typha-certs": failed to list *v1.Secret: Get "https://139.178.70.102:6443/api/v1/namespaces/calico-system/secrets?fieldSelector=metadata.name%3Dtypha-certs&resourceVersion=612": dial tcp 139.178.70.102:6443: connect: connection refused Jul 10 01:38:28.193122 kubelet[2299]: E0710 01:38:28.193109 2299 reflector.go:158] "Unhandled Error" err="object-\"calico-system\"/\"typha-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get 
\"https://139.178.70.102:6443/api/v1/namespaces/calico-system/secrets?fieldSelector=metadata.name%3Dtypha-certs&resourceVersion=612\": dial tcp 139.178.70.102:6443: connect: connection refused" logger="UnhandledError" Jul 10 01:38:28.253205 kubelet[2299]: W0710 01:38:28.253160 2299 reflector.go:561] object-"calico-system"/"goldmane-ca-bundle": failed to list *v1.ConfigMap: Get "https://139.178.70.102:6443/api/v1/namespaces/calico-system/configmaps?fieldSelector=metadata.name%3Dgoldmane-ca-bundle&resourceVersion=687": dial tcp 139.178.70.102:6443: connect: connection refused Jul 10 01:38:28.253449 kubelet[2299]: E0710 01:38:28.253358 2299 reflector.go:158] "Unhandled Error" err="object-\"calico-system\"/\"goldmane-ca-bundle\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://139.178.70.102:6443/api/v1/namespaces/calico-system/configmaps?fieldSelector=metadata.name%3Dgoldmane-ca-bundle&resourceVersion=687\": dial tcp 139.178.70.102:6443: connect: connection refused" logger="UnhandledError" Jul 10 01:38:28.606783 kubelet[2299]: W0710 01:38:28.606699 2299 reflector.go:561] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: Get "https://139.178.70.102:6443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dkube-proxy&resourceVersion=687": dial tcp 139.178.70.102:6443: connect: connection refused Jul 10 01:38:28.606783 kubelet[2299]: E0710 01:38:28.606756 2299 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-proxy\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://139.178.70.102:6443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dkube-proxy&resourceVersion=687\": dial tcp 139.178.70.102:6443: connect: connection refused" logger="UnhandledError" Jul 10 01:38:28.646216 kubelet[2299]: W0710 01:38:28.646128 2299 reflector.go:561] object-"calico-system"/"whisker-backend-key-pair": failed to list *v1.Secret: Get "https://139.178.70.102:6443/api/v1/namespaces/calico-system/secrets?fieldSelector=metadata.name%3Dwhisker-backend-key-pair&resourceVersion=612": dial tcp 139.178.70.102:6443: connect: connection refused Jul 10 01:38:28.646216 kubelet[2299]: E0710 01:38:28.646186 2299 reflector.go:158] "Unhandled Error" err="object-\"calico-system\"/\"whisker-backend-key-pair\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://139.178.70.102:6443/api/v1/namespaces/calico-system/secrets?fieldSelector=metadata.name%3Dwhisker-backend-key-pair&resourceVersion=612\": dial tcp 139.178.70.102:6443: connect: connection refused" logger="UnhandledError" Jul 10 01:38:28.890514 kubelet[2299]: W0710 01:38:28.890429 2299 reflector.go:561] object-"calico-apiserver"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://139.178.70.102:6443/api/v1/namespaces/calico-apiserver/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=687": dial tcp 139.178.70.102:6443: connect: connection refused Jul 10 01:38:28.890784 kubelet[2299]: E0710 01:38:28.890745 2299 reflector.go:158] "Unhandled Error" err="object-\"calico-apiserver\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://139.178.70.102:6443/api/v1/namespaces/calico-apiserver/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=687\": dial tcp 139.178.70.102:6443: connect: connection refused" logger="UnhandledError" Jul 10 01:38:29.194035 kubelet[2299]: E0710 01:38:29.193964 2299 controller.go:195] "Failed to update lease" err="Put 
\"https://139.178.70.102:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:29.194388 kubelet[2299]: E0710 01:38:29.194367 2299 controller.go:195] "Failed to update lease" err="Put \"https://139.178.70.102:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:29.194502 kubelet[2299]: E0710 01:38:29.194485 2299 controller.go:195] "Failed to update lease" err="Put \"https://139.178.70.102:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:29.194596 kubelet[2299]: E0710 01:38:29.194582 2299 controller.go:195] "Failed to update lease" err="Put \"https://139.178.70.102:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:29.194701 kubelet[2299]: E0710 01:38:29.194687 2299 controller.go:195] "Failed to update lease" err="Put \"https://139.178.70.102:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:29.194754 kubelet[2299]: I0710 01:38:29.194708 2299 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Jul 10 01:38:29.194810 kubelet[2299]: E0710 01:38:29.194794 2299 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.102:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 139.178.70.102:6443: connect: connection refused" interval="200ms" Jul 10 01:38:29.308376 kubelet[2299]: W0710 01:38:29.308297 2299 reflector.go:561] object-"tigera-operator"/"kubernetes-services-endpoint": failed to list *v1.ConfigMap: Get "https://139.178.70.102:6443/api/v1/namespaces/tigera-operator/configmaps?fieldSelector=metadata.name%3Dkubernetes-services-endpoint&resourceVersion=687": dial tcp 139.178.70.102:6443: connect: connection refused Jul 10 01:38:29.308376 kubelet[2299]: E0710 01:38:29.308348 2299 reflector.go:158] "Unhandled Error" err="object-\"tigera-operator\"/\"kubernetes-services-endpoint\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://139.178.70.102:6443/api/v1/namespaces/tigera-operator/configmaps?fieldSelector=metadata.name%3Dkubernetes-services-endpoint&resourceVersion=687\": dial tcp 139.178.70.102:6443: connect: connection refused" logger="UnhandledError" Jul 10 01:38:29.395974 kubelet[2299]: E0710 01:38:29.395916 2299 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.102:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 139.178.70.102:6443: connect: connection refused" interval="400ms" Jul 10 01:38:29.440160 kubelet[2299]: W0710 01:38:29.440062 2299 reflector.go:561] object-"calico-apiserver"/"calico-apiserver-certs": failed to list *v1.Secret: Get "https://139.178.70.102:6443/api/v1/namespaces/calico-apiserver/secrets?fieldSelector=metadata.name%3Dcalico-apiserver-certs&resourceVersion=612": dial tcp 139.178.70.102:6443: connect: connection refused Jul 10 01:38:29.440160 kubelet[2299]: E0710 01:38:29.440128 2299 reflector.go:158] "Unhandled Error" 
err="object-\"calico-apiserver\"/\"calico-apiserver-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://139.178.70.102:6443/api/v1/namespaces/calico-apiserver/secrets?fieldSelector=metadata.name%3Dcalico-apiserver-certs&resourceVersion=612\": dial tcp 139.178.70.102:6443: connect: connection refused" logger="UnhandledError" Jul 10 01:38:29.455132 kubelet[2299]: W0710 01:38:29.454987 2299 reflector.go:561] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: Get "https://139.178.70.102:6443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dcoredns&resourceVersion=687": dial tcp 139.178.70.102:6443: connect: connection refused Jul 10 01:38:29.455132 kubelet[2299]: E0710 01:38:29.455059 2299 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"coredns\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://139.178.70.102:6443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dcoredns&resourceVersion=687\": dial tcp 139.178.70.102:6443: connect: connection refused" logger="UnhandledError" Jul 10 01:38:29.546024 kubelet[2299]: E0710 01:38:29.546001 2299 kubelet_node_status.go:535] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"NetworkUnavailable\\\"},{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-07-10T01:38:29Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-07-10T01:38:29Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-07-10T01:38:29Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-07-10T01:38:29Z\\\",\\\"lastTransitionTime\\\":\\\"2025-07-10T01:38:29Z\\\",\\\"message\\\":\\\"kubelet is posting ready status\\\",\\\"reason\\\":\\\"KubeletReady\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"}]}}\" for node \"localhost\": Patch \"https://139.178.70.102:6443/api/v1/nodes/localhost/status?timeout=10s\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:29.546327 kubelet[2299]: E0710 01:38:29.546300 2299 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"localhost\": Get \"https://139.178.70.102:6443/api/v1/nodes/localhost?timeout=10s\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:29.546555 kubelet[2299]: E0710 01:38:29.546542 2299 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"localhost\": Get \"https://139.178.70.102:6443/api/v1/nodes/localhost?timeout=10s\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:29.546779 kubelet[2299]: E0710 01:38:29.546766 2299 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"localhost\": Get \"https://139.178.70.102:6443/api/v1/nodes/localhost?timeout=10s\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:29.547015 kubelet[2299]: E0710 01:38:29.547003 2299 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"localhost\": Get \"https://139.178.70.102:6443/api/v1/nodes/localhost?timeout=10s\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:29.547103 kubelet[2299]: E0710 01:38:29.547092 2299 kubelet_node_status.go:522] "Unable to 
update node status" err="update node status exceeds retry count" Jul 10 01:38:29.796493 kubelet[2299]: E0710 01:38:29.796442 2299 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.102:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 139.178.70.102:6443: connect: connection refused" interval="800ms" Jul 10 01:38:29.942704 kubelet[2299]: W0710 01:38:29.942662 2299 reflector.go:561] object-"tigera-operator"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://139.178.70.102:6443/api/v1/namespaces/tigera-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=687": dial tcp 139.178.70.102:6443: connect: connection refused Jul 10 01:38:29.942851 kubelet[2299]: E0710 01:38:29.942833 2299 reflector.go:158] "Unhandled Error" err="object-\"tigera-operator\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://139.178.70.102:6443/api/v1/namespaces/tigera-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=687\": dial tcp 139.178.70.102:6443: connect: connection refused" logger="UnhandledError" Jul 10 01:38:30.495658 kubelet[2299]: I0710 01:38:30.495613 2299 status_manager.go:851] "Failed to get status for pod" podUID="6367e512-6f46-407d-94e1-a5c573185269" pod="calico-system/calico-node-2k6z4" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/calico-system/pods/calico-node-2k6z4\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:30.495810 kubelet[2299]: I0710 01:38:30.495788 2299 status_manager.go:851] "Failed to get status for pod" podUID="5f01bcaa-ff1c-4bd5-988b-d3c60c6abdcc" pod="calico-system/calico-kube-controllers-5477ff879d-j2p5q" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/calico-system/pods/calico-kube-controllers-5477ff879d-j2p5q\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:30.495927 kubelet[2299]: I0710 01:38:30.495905 2299 status_manager.go:851] "Failed to get status for pod" podUID="8acd60714a0f0f6f5e038fa659db2909" pod="kube-system/kube-apiserver-localhost" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/kube-system/pods/kube-apiserver-localhost\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:30.496039 kubelet[2299]: I0710 01:38:30.496018 2299 status_manager.go:851] "Failed to get status for pod" podUID="8e8146e9-6407-49b7-8cef-e26dac385734" pod="calico-apiserver/calico-apiserver-6d44674bc4-w2f48" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/calico-apiserver/pods/calico-apiserver-6d44674bc4-w2f48\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:30.496151 kubelet[2299]: I0710 01:38:30.496131 2299 status_manager.go:851] "Failed to get status for pod" podUID="a29ef6dc-4246-436d-87dd-9c8e96247aeb" pod="kube-system/coredns-7c65d6cfc9-4k5ld" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-4k5ld\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:30.496260 kubelet[2299]: I0710 01:38:30.496241 2299 status_manager.go:851] "Failed to get status for pod" podUID="3459c244-a1ae-43bc-ad86-239a6e665757" pod="kube-system/coredns-7c65d6cfc9-snhl5" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-snhl5\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:30.496370 kubelet[2299]: I0710 01:38:30.496350 2299 
status_manager.go:851] "Failed to get status for pod" podUID="ced04dc5-79ee-4a07-a568-b0fd4007f64c" pod="calico-system/goldmane-58fd7646b9-zxwst" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/calico-system/pods/goldmane-58fd7646b9-zxwst\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:30.496478 kubelet[2299]: I0710 01:38:30.496459 2299 status_manager.go:851] "Failed to get status for pod" podUID="b35b56493416c25588cb530e37ffc065" pod="kube-system/kube-scheduler-localhost" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/kube-system/pods/kube-scheduler-localhost\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:30.496584 kubelet[2299]: I0710 01:38:30.496565 2299 status_manager.go:851] "Failed to get status for pod" podUID="3f04709fe51ae4ab5abd58e8da771b74" pod="kube-system/kube-controller-manager-localhost" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/kube-system/pods/kube-controller-manager-localhost\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:30.496708 kubelet[2299]: I0710 01:38:30.496687 2299 status_manager.go:851] "Failed to get status for pod" podUID="c3f9faf5-cc25-4483-beb9-5dea29a71367" pod="calico-system/whisker-5bc4d9bd7d-nwwj6" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/calico-system/pods/whisker-5bc4d9bd7d-nwwj6\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:30.496813 kubelet[2299]: I0710 01:38:30.496794 2299 status_manager.go:851] "Failed to get status for pod" podUID="9c135a1b-00bf-4e6f-87fa-9ac292c6a135" pod="tigera-operator/tigera-operator-5bf8dfcb4-twgs2" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/tigera-operator/pods/tigera-operator-5bf8dfcb4-twgs2\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:30.496921 kubelet[2299]: I0710 01:38:30.496900 2299 status_manager.go:851] "Failed to get status for pod" podUID="bb9848ea-740a-453f-b511-e75cc1983690" pod="calico-system/calico-typha-66ddcf689b-z7vqm" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/calico-system/pods/calico-typha-66ddcf689b-z7vqm\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:30.497034 kubelet[2299]: I0710 01:38:30.497011 2299 status_manager.go:851] "Failed to get status for pod" podUID="74cf1bc5-5d5a-4dc7-850a-71013984af05" pod="calico-apiserver/calico-apiserver-6d44674bc4-b2wqb" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/calico-apiserver/pods/calico-apiserver-6d44674bc4-b2wqb\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:30.597356 kubelet[2299]: E0710 01:38:30.597325 2299 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.102:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 139.178.70.102:6443: connect: connection refused" interval="1.6s" Jul 10 01:38:32.198273 kubelet[2299]: E0710 01:38:32.198242 2299 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.102:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 139.178.70.102:6443: connect: connection refused" interval="3.2s" Jul 10 01:38:32.549952 kubelet[2299]: E0710 01:38:32.549926 2299 configmap.go:193] Couldn't get configMap calico-system/goldmane: failed to sync configmap cache: timed out waiting for the condition Jul 10 01:38:32.550143 kubelet[2299]: E0710 01:38:32.550129 2299 nestedpendingoperations.go:348] 
Operation for "{volumeName:kubernetes.io/configmap/ced04dc5-79ee-4a07-a568-b0fd4007f64c-config podName:ced04dc5-79ee-4a07-a568-b0fd4007f64c nodeName:}" failed. No retries permitted until 2025-07-10 01:38:40.550114613 +0000 UTC m=+1540.418541504 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/ced04dc5-79ee-4a07-a568-b0fd4007f64c-config") pod "goldmane-58fd7646b9-zxwst" (UID: "ced04dc5-79ee-4a07-a568-b0fd4007f64c") : failed to sync configmap cache: timed out waiting for the condition Jul 10 01:38:32.649917 kubelet[2299]: E0710 01:38:32.649892 2299 configmap.go:193] Couldn't get configMap calico-system/goldmane-ca-bundle: failed to sync configmap cache: timed out waiting for the condition Jul 10 01:38:32.649992 kubelet[2299]: E0710 01:38:32.649939 2299 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ced04dc5-79ee-4a07-a568-b0fd4007f64c-goldmane-ca-bundle podName:ced04dc5-79ee-4a07-a568-b0fd4007f64c nodeName:}" failed. No retries permitted until 2025-07-10 01:38:40.649929167 +0000 UTC m=+1540.518356054 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "goldmane-ca-bundle" (UniqueName: "kubernetes.io/configmap/ced04dc5-79ee-4a07-a568-b0fd4007f64c-goldmane-ca-bundle") pod "goldmane-58fd7646b9-zxwst" (UID: "ced04dc5-79ee-4a07-a568-b0fd4007f64c") : failed to sync configmap cache: timed out waiting for the condition Jul 10 01:38:32.751566 kubelet[2299]: E0710 01:38:32.751528 2299 secret.go:189] Couldn't get secret calico-system/typha-certs: failed to sync secret cache: timed out waiting for the condition Jul 10 01:38:32.751732 kubelet[2299]: E0710 01:38:32.751595 2299 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bb9848ea-740a-453f-b511-e75cc1983690-typha-certs podName:bb9848ea-740a-453f-b511-e75cc1983690 nodeName:}" failed. No retries permitted until 2025-07-10 01:38:40.751580934 +0000 UTC m=+1540.620007823 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "typha-certs" (UniqueName: "kubernetes.io/secret/bb9848ea-740a-453f-b511-e75cc1983690-typha-certs") pod "calico-typha-66ddcf689b-z7vqm" (UID: "bb9848ea-740a-453f-b511-e75cc1983690") : failed to sync secret cache: timed out waiting for the condition Jul 10 01:38:32.751732 kubelet[2299]: E0710 01:38:32.751620 2299 projected.go:288] Couldn't get configMap calico-apiserver/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Jul 10 01:38:32.751732 kubelet[2299]: E0710 01:38:32.751633 2299 projected.go:194] Error preparing data for projected volume kube-api-access-47zqf for pod calico-apiserver/calico-apiserver-6d44674bc4-b2wqb: failed to sync configmap cache: timed out waiting for the condition Jul 10 01:38:32.751732 kubelet[2299]: E0710 01:38:32.751675 2299 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/74cf1bc5-5d5a-4dc7-850a-71013984af05-kube-api-access-47zqf podName:74cf1bc5-5d5a-4dc7-850a-71013984af05 nodeName:}" failed. No retries permitted until 2025-07-10 01:38:40.751666838 +0000 UTC m=+1540.620093721 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-47zqf" (UniqueName: "kubernetes.io/projected/74cf1bc5-5d5a-4dc7-850a-71013984af05-kube-api-access-47zqf") pod "calico-apiserver-6d44674bc4-b2wqb" (UID: "74cf1bc5-5d5a-4dc7-850a-71013984af05") : failed to sync configmap cache: timed out waiting for the condition Jul 10 01:38:32.751732 kubelet[2299]: E0710 01:38:32.751694 2299 configmap.go:193] Couldn't get configMap kube-system/coredns: failed to sync configmap cache: timed out waiting for the condition Jul 10 01:38:32.751732 kubelet[2299]: E0710 01:38:32.751713 2299 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/a29ef6dc-4246-436d-87dd-9c8e96247aeb-config-volume podName:a29ef6dc-4246-436d-87dd-9c8e96247aeb nodeName:}" failed. No retries permitted until 2025-07-10 01:38:40.75170767 +0000 UTC m=+1540.620134553 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/a29ef6dc-4246-436d-87dd-9c8e96247aeb-config-volume") pod "coredns-7c65d6cfc9-4k5ld" (UID: "a29ef6dc-4246-436d-87dd-9c8e96247aeb") : failed to sync configmap cache: timed out waiting for the condition Jul 10 01:38:32.751732 kubelet[2299]: E0710 01:38:32.751728 2299 secret.go:189] Couldn't get secret calico-apiserver/calico-apiserver-certs: failed to sync secret cache: timed out waiting for the condition Jul 10 01:38:32.752025 kubelet[2299]: E0710 01:38:32.751748 2299 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8e8146e9-6407-49b7-8cef-e26dac385734-calico-apiserver-certs podName:8e8146e9-6407-49b7-8cef-e26dac385734 nodeName:}" failed. No retries permitted until 2025-07-10 01:38:40.751742228 +0000 UTC m=+1540.620169112 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "calico-apiserver-certs" (UniqueName: "kubernetes.io/secret/8e8146e9-6407-49b7-8cef-e26dac385734-calico-apiserver-certs") pod "calico-apiserver-6d44674bc4-w2f48" (UID: "8e8146e9-6407-49b7-8cef-e26dac385734") : failed to sync secret cache: timed out waiting for the condition Jul 10 01:38:32.752025 kubelet[2299]: E0710 01:38:32.751759 2299 configmap.go:193] Couldn't get configMap calico-system/tigera-ca-bundle: failed to sync configmap cache: timed out waiting for the condition Jul 10 01:38:32.752025 kubelet[2299]: E0710 01:38:32.751774 2299 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5f01bcaa-ff1c-4bd5-988b-d3c60c6abdcc-tigera-ca-bundle podName:5f01bcaa-ff1c-4bd5-988b-d3c60c6abdcc nodeName:}" failed. No retries permitted until 2025-07-10 01:38:40.751769436 +0000 UTC m=+1540.620196320 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "tigera-ca-bundle" (UniqueName: "kubernetes.io/configmap/5f01bcaa-ff1c-4bd5-988b-d3c60c6abdcc-tigera-ca-bundle") pod "calico-kube-controllers-5477ff879d-j2p5q" (UID: "5f01bcaa-ff1c-4bd5-988b-d3c60c6abdcc") : failed to sync configmap cache: timed out waiting for the condition Jul 10 01:38:32.752025 kubelet[2299]: E0710 01:38:32.751788 2299 projected.go:288] Couldn't get configMap calico-apiserver/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Jul 10 01:38:32.752025 kubelet[2299]: E0710 01:38:32.751797 2299 projected.go:194] Error preparing data for projected volume kube-api-access-r5vvj for pod calico-apiserver/calico-apiserver-6d44674bc4-w2f48: failed to sync configmap cache: timed out waiting for the condition Jul 10 01:38:32.752025 kubelet[2299]: E0710 01:38:32.751814 2299 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/8e8146e9-6407-49b7-8cef-e26dac385734-kube-api-access-r5vvj podName:8e8146e9-6407-49b7-8cef-e26dac385734 nodeName:}" failed. No retries permitted until 2025-07-10 01:38:40.751808261 +0000 UTC m=+1540.620235145 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-r5vvj" (UniqueName: "kubernetes.io/projected/8e8146e9-6407-49b7-8cef-e26dac385734-kube-api-access-r5vvj") pod "calico-apiserver-6d44674bc4-w2f48" (UID: "8e8146e9-6407-49b7-8cef-e26dac385734") : failed to sync configmap cache: timed out waiting for the condition Jul 10 01:38:32.752025 kubelet[2299]: E0710 01:38:32.751826 2299 secret.go:189] Couldn't get secret calico-system/goldmane-key-pair: failed to sync secret cache: timed out waiting for the condition Jul 10 01:38:32.752025 kubelet[2299]: E0710 01:38:32.751841 2299 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ced04dc5-79ee-4a07-a568-b0fd4007f64c-goldmane-key-pair podName:ced04dc5-79ee-4a07-a568-b0fd4007f64c nodeName:}" failed. No retries permitted until 2025-07-10 01:38:40.751836662 +0000 UTC m=+1540.620263546 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "goldmane-key-pair" (UniqueName: "kubernetes.io/secret/ced04dc5-79ee-4a07-a568-b0fd4007f64c-goldmane-key-pair") pod "goldmane-58fd7646b9-zxwst" (UID: "ced04dc5-79ee-4a07-a568-b0fd4007f64c") : failed to sync secret cache: timed out waiting for the condition Jul 10 01:38:32.752374 kubelet[2299]: E0710 01:38:32.752358 2299 secret.go:189] Couldn't get secret calico-apiserver/calico-apiserver-certs: failed to sync secret cache: timed out waiting for the condition Jul 10 01:38:32.752556 kubelet[2299]: E0710 01:38:32.752546 2299 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/74cf1bc5-5d5a-4dc7-850a-71013984af05-calico-apiserver-certs podName:74cf1bc5-5d5a-4dc7-850a-71013984af05 nodeName:}" failed. No retries permitted until 2025-07-10 01:38:40.752534855 +0000 UTC m=+1540.620961738 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "calico-apiserver-certs" (UniqueName: "kubernetes.io/secret/74cf1bc5-5d5a-4dc7-850a-71013984af05-calico-apiserver-certs") pod "calico-apiserver-6d44674bc4-b2wqb" (UID: "74cf1bc5-5d5a-4dc7-850a-71013984af05") : failed to sync secret cache: timed out waiting for the condition Jul 10 01:38:32.752678 kubelet[2299]: E0710 01:38:32.752438 2299 configmap.go:193] Couldn't get configMap kube-system/coredns: failed to sync configmap cache: timed out waiting for the condition Jul 10 01:38:32.752769 kubelet[2299]: E0710 01:38:32.752759 2299 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/3459c244-a1ae-43bc-ad86-239a6e665757-config-volume podName:3459c244-a1ae-43bc-ad86-239a6e665757 nodeName:}" failed. No retries permitted until 2025-07-10 01:38:40.752750584 +0000 UTC m=+1540.621177474 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/3459c244-a1ae-43bc-ad86-239a6e665757-config-volume") pod "coredns-7c65d6cfc9-snhl5" (UID: "3459c244-a1ae-43bc-ad86-239a6e665757") : failed to sync configmap cache: timed out waiting for the condition Jul 10 01:38:32.752855 kubelet[2299]: E0710 01:38:32.752449 2299 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Jul 10 01:38:32.752932 kubelet[2299]: E0710 01:38:32.752921 2299 projected.go:194] Error preparing data for projected volume kube-api-access-4bl2z for pod kube-system/coredns-7c65d6cfc9-4k5ld: failed to sync configmap cache: timed out waiting for the condition Jul 10 01:38:32.753003 kubelet[2299]: E0710 01:38:32.752456 2299 configmap.go:193] Couldn't get configMap calico-system/tigera-ca-bundle: failed to sync configmap cache: timed out waiting for the condition Jul 10 01:38:32.753050 kubelet[2299]: E0710 01:38:32.752462 2299 secret.go:189] Couldn't get secret calico-system/whisker-backend-key-pair: failed to sync secret cache: timed out waiting for the condition Jul 10 01:38:32.753050 kubelet[2299]: E0710 01:38:32.752467 2299 configmap.go:193] Couldn't get configMap calico-system/whisker-ca-bundle: failed to sync configmap cache: timed out waiting for the condition Jul 10 01:38:32.753050 kubelet[2299]: E0710 01:38:32.752475 2299 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Jul 10 01:38:32.753050 kubelet[2299]: E0710 01:38:32.753044 2299 projected.go:194] Error preparing data for projected volume kube-api-access-pwvqb for pod kube-system/coredns-7c65d6cfc9-snhl5: failed to sync configmap cache: timed out waiting for the condition Jul 10 01:38:32.753158 kubelet[2299]: E0710 01:38:32.752480 2299 secret.go:189] Couldn't get secret calico-system/node-certs: failed to sync secret cache: timed out waiting for the condition Jul 10 01:38:32.753158 kubelet[2299]: E0710 01:38:32.752486 2299 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Jul 10 01:38:32.753158 kubelet[2299]: E0710 01:38:32.753068 2299 projected.go:194] Error preparing data for projected volume kube-api-access-wpcvh for pod kube-system/kube-proxy-rxvps: failed to sync configmap cache: timed out waiting for the condition Jul 10 01:38:32.753158 kubelet[2299]: E0710 01:38:32.752492 2299 configmap.go:193] Couldn't get configMap kube-system/kube-proxy: failed to sync configmap cache: timed out 
waiting for the condition Jul 10 01:38:32.753158 kubelet[2299]: E0710 01:38:32.752498 2299 configmap.go:193] Couldn't get configMap calico-system/tigera-ca-bundle: failed to sync configmap cache: timed out waiting for the condition Jul 10 01:38:32.753313 kubelet[2299]: E0710 01:38:32.753303 2299 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/a29ef6dc-4246-436d-87dd-9c8e96247aeb-kube-api-access-4bl2z podName:a29ef6dc-4246-436d-87dd-9c8e96247aeb nodeName:}" failed. No retries permitted until 2025-07-10 01:38:40.75299075 +0000 UTC m=+1540.621417638 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-4bl2z" (UniqueName: "kubernetes.io/projected/a29ef6dc-4246-436d-87dd-9c8e96247aeb-kube-api-access-4bl2z") pod "coredns-7c65d6cfc9-4k5ld" (UID: "a29ef6dc-4246-436d-87dd-9c8e96247aeb") : failed to sync configmap cache: timed out waiting for the condition Jul 10 01:38:32.753413 kubelet[2299]: E0710 01:38:32.753403 2299 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/bb9848ea-740a-453f-b511-e75cc1983690-tigera-ca-bundle podName:bb9848ea-740a-453f-b511-e75cc1983690 nodeName:}" failed. No retries permitted until 2025-07-10 01:38:40.753394462 +0000 UTC m=+1540.621821349 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "tigera-ca-bundle" (UniqueName: "kubernetes.io/configmap/bb9848ea-740a-453f-b511-e75cc1983690-tigera-ca-bundle") pod "calico-typha-66ddcf689b-z7vqm" (UID: "bb9848ea-740a-453f-b511-e75cc1983690") : failed to sync configmap cache: timed out waiting for the condition Jul 10 01:38:32.753508 kubelet[2299]: E0710 01:38:32.753498 2299 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c3f9faf5-cc25-4483-beb9-5dea29a71367-whisker-backend-key-pair podName:c3f9faf5-cc25-4483-beb9-5dea29a71367 nodeName:}" failed. No retries permitted until 2025-07-10 01:38:40.753489593 +0000 UTC m=+1540.621916482 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "whisker-backend-key-pair" (UniqueName: "kubernetes.io/secret/c3f9faf5-cc25-4483-beb9-5dea29a71367-whisker-backend-key-pair") pod "whisker-5bc4d9bd7d-nwwj6" (UID: "c3f9faf5-cc25-4483-beb9-5dea29a71367") : failed to sync secret cache: timed out waiting for the condition Jul 10 01:38:32.753608 kubelet[2299]: E0710 01:38:32.753598 2299 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c3f9faf5-cc25-4483-beb9-5dea29a71367-whisker-ca-bundle podName:c3f9faf5-cc25-4483-beb9-5dea29a71367 nodeName:}" failed. No retries permitted until 2025-07-10 01:38:40.753590137 +0000 UTC m=+1540.622017024 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "whisker-ca-bundle" (UniqueName: "kubernetes.io/configmap/c3f9faf5-cc25-4483-beb9-5dea29a71367-whisker-ca-bundle") pod "whisker-5bc4d9bd7d-nwwj6" (UID: "c3f9faf5-cc25-4483-beb9-5dea29a71367") : failed to sync configmap cache: timed out waiting for the condition Jul 10 01:38:32.753724 kubelet[2299]: E0710 01:38:32.753714 2299 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3459c244-a1ae-43bc-ad86-239a6e665757-kube-api-access-pwvqb podName:3459c244-a1ae-43bc-ad86-239a6e665757 nodeName:}" failed. No retries permitted until 2025-07-10 01:38:40.753706275 +0000 UTC m=+1540.622133164 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-pwvqb" (UniqueName: "kubernetes.io/projected/3459c244-a1ae-43bc-ad86-239a6e665757-kube-api-access-pwvqb") pod "coredns-7c65d6cfc9-snhl5" (UID: "3459c244-a1ae-43bc-ad86-239a6e665757") : failed to sync configmap cache: timed out waiting for the condition Jul 10 01:38:32.753823 kubelet[2299]: E0710 01:38:32.753813 2299 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6367e512-6f46-407d-94e1-a5c573185269-node-certs podName:6367e512-6f46-407d-94e1-a5c573185269 nodeName:}" failed. No retries permitted until 2025-07-10 01:38:40.753804998 +0000 UTC m=+1540.622231886 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "node-certs" (UniqueName: "kubernetes.io/secret/6367e512-6f46-407d-94e1-a5c573185269-node-certs") pod "calico-node-2k6z4" (UID: "6367e512-6f46-407d-94e1-a5c573185269") : failed to sync secret cache: timed out waiting for the condition Jul 10 01:38:32.753922 kubelet[2299]: E0710 01:38:32.753911 2299 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/22eb6a01-1430-4380-b1df-6cb2ed0c8d8b-kube-api-access-wpcvh podName:22eb6a01-1430-4380-b1df-6cb2ed0c8d8b nodeName:}" failed. No retries permitted until 2025-07-10 01:38:40.753902981 +0000 UTC m=+1540.622329869 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-wpcvh" (UniqueName: "kubernetes.io/projected/22eb6a01-1430-4380-b1df-6cb2ed0c8d8b-kube-api-access-wpcvh") pod "kube-proxy-rxvps" (UID: "22eb6a01-1430-4380-b1df-6cb2ed0c8d8b") : failed to sync configmap cache: timed out waiting for the condition Jul 10 01:38:32.754017 kubelet[2299]: E0710 01:38:32.754006 2299 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/22eb6a01-1430-4380-b1df-6cb2ed0c8d8b-kube-proxy podName:22eb6a01-1430-4380-b1df-6cb2ed0c8d8b nodeName:}" failed. No retries permitted until 2025-07-10 01:38:40.753998343 +0000 UTC m=+1540.622425231 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-proxy" (UniqueName: "kubernetes.io/configmap/22eb6a01-1430-4380-b1df-6cb2ed0c8d8b-kube-proxy") pod "kube-proxy-rxvps" (UID: "22eb6a01-1430-4380-b1df-6cb2ed0c8d8b") : failed to sync configmap cache: timed out waiting for the condition Jul 10 01:38:32.754118 kubelet[2299]: E0710 01:38:32.754108 2299 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6367e512-6f46-407d-94e1-a5c573185269-tigera-ca-bundle podName:6367e512-6f46-407d-94e1-a5c573185269 nodeName:}" failed. No retries permitted until 2025-07-10 01:38:40.754099335 +0000 UTC m=+1540.622526222 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "tigera-ca-bundle" (UniqueName: "kubernetes.io/configmap/6367e512-6f46-407d-94e1-a5c573185269-tigera-ca-bundle") pod "calico-node-2k6z4" (UID: "6367e512-6f46-407d-94e1-a5c573185269") : failed to sync configmap cache: timed out waiting for the condition Jul 10 01:38:32.772000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@32-139.178.70.102:22-139.178.68.195:52330 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 01:38:32.773570 systemd[1]: Started sshd@32-139.178.70.102:22-139.178.68.195:52330.service. 
Jul 10 01:38:32.778650 kernel: kauditd_printk_skb: 1 callbacks suppressed Jul 10 01:38:32.778686 kernel: audit: type=1130 audit(1752111512.772:686): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@32-139.178.70.102:22-139.178.68.195:52330 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 01:38:32.790101 kubelet[2299]: E0710 01:38:32.790038 2299 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://139.178.70.102:6443/api/v1/namespaces/tigera-operator/events\": dial tcp 139.178.70.102:6443: connect: connection refused" event="&Event{ObjectMeta:{tigera-operator-5bf8dfcb4-twgs2.1850c020102e5a9f tigera-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:tigera-operator,Name:tigera-operator-5bf8dfcb4-twgs2,UID:9c135a1b-00bf-4e6f-87fa-9ac292c6a135,APIVersion:v1,ResourceVersion:382,FieldPath:spec.containers{tigera-operator},},Reason:Pulled,Message:Container image \"quay.io/tigera/operator:v1.38.3\" already present on machine,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-07-10 01:38:18.990082719 +0000 UTC m=+1518.858509606,LastTimestamp:2025-07-10 01:38:18.990082719 +0000 UTC m=+1518.858509606,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jul 10 01:38:32.813000 audit[8638]: USER_ACCT pid=8638 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:38:32.815131 sshd[8638]: Accepted publickey for core from 139.178.68.195 port 52330 ssh2: RSA SHA256:NVpdRDPpwzjVTzi6orhe1cA9BvcYymCSReGH8myOy/Q Jul 10 01:38:32.818954 kernel: audit: type=1101 audit(1752111512.813:687): pid=8638 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:38:32.818000 audit[8638]: CRED_ACQ pid=8638 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:38:32.819956 sshd[8638]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 10 01:38:32.824567 kernel: audit: type=1103 audit(1752111512.818:688): pid=8638 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:38:32.824601 kernel: audit: type=1006 audit(1752111512.818:689): pid=8638 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=34 res=1 Jul 10 01:38:32.824618 kernel: audit: type=1300 audit(1752111512.818:689): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffcff421890 a2=3 a3=0 items=0 ppid=1 pid=8638 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=34 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:38:32.818000 audit[8638]: SYSCALL 
arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffcff421890 a2=3 a3=0 items=0 ppid=1 pid=8638 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=34 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:38:32.826926 systemd[1]: Started session-34.scope. Jul 10 01:38:32.827616 systemd-logind[1351]: New session 34 of user core. Jul 10 01:38:32.818000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jul 10 01:38:32.828968 kernel: audit: type=1327 audit(1752111512.818:689): proctitle=737368643A20636F7265205B707269765D Jul 10 01:38:32.829000 audit[8638]: USER_START pid=8638 uid=0 auid=500 ses=34 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:38:32.830000 audit[8641]: CRED_ACQ pid=8641 uid=0 auid=500 ses=34 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:38:32.837370 kernel: audit: type=1105 audit(1752111512.829:690): pid=8638 uid=0 auid=500 ses=34 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:38:32.837405 kernel: audit: type=1103 audit(1752111512.830:691): pid=8641 uid=0 auid=500 ses=34 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:38:32.913756 sshd[8638]: pam_unix(sshd:session): session closed for user core Jul 10 01:38:32.912000 audit[8638]: USER_END pid=8638 uid=0 auid=500 ses=34 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:38:32.917000 audit[8638]: CRED_DISP pid=8638 uid=0 auid=500 ses=34 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:38:32.920996 kernel: audit: type=1106 audit(1752111512.912:692): pid=8638 uid=0 auid=500 ses=34 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:38:32.921038 kernel: audit: type=1104 audit(1752111512.917:693): pid=8638 uid=0 auid=500 ses=34 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:38:32.921546 systemd-logind[1351]: Session 34 logged out. Waiting for processes to exit. 
Jul 10 01:38:32.920000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@32-139.178.70.102:22-139.178.68.195:52330 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 01:38:32.922435 systemd[1]: sshd@32-139.178.70.102:22-139.178.68.195:52330.service: Deactivated successfully. Jul 10 01:38:32.922924 systemd[1]: session-34.scope: Deactivated successfully. Jul 10 01:38:32.923960 systemd-logind[1351]: Removed session 34. Jul 10 01:38:33.365226 kubelet[2299]: I0710 01:38:33.365195 2299 status_manager.go:851] "Failed to get status for pod" podUID="5f01bcaa-ff1c-4bd5-988b-d3c60c6abdcc" pod="calico-system/calico-kube-controllers-5477ff879d-j2p5q" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/calico-system/pods/calico-kube-controllers-5477ff879d-j2p5q\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:33.365487 kubelet[2299]: I0710 01:38:33.365302 2299 status_manager.go:851] "Failed to get status for pod" podUID="6367e512-6f46-407d-94e1-a5c573185269" pod="calico-system/calico-node-2k6z4" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/calico-system/pods/calico-node-2k6z4\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:33.365487 kubelet[2299]: I0710 01:38:33.365382 2299 status_manager.go:851] "Failed to get status for pod" podUID="a29ef6dc-4246-436d-87dd-9c8e96247aeb" pod="kube-system/coredns-7c65d6cfc9-4k5ld" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-4k5ld\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:33.365487 kubelet[2299]: I0710 01:38:33.365461 2299 status_manager.go:851] "Failed to get status for pod" podUID="3459c244-a1ae-43bc-ad86-239a6e665757" pod="kube-system/coredns-7c65d6cfc9-snhl5" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-snhl5\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:33.365560 kubelet[2299]: I0710 01:38:33.365541 2299 status_manager.go:851] "Failed to get status for pod" podUID="ced04dc5-79ee-4a07-a568-b0fd4007f64c" pod="calico-system/goldmane-58fd7646b9-zxwst" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/calico-system/pods/goldmane-58fd7646b9-zxwst\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:33.365632 kubelet[2299]: I0710 01:38:33.365616 2299 status_manager.go:851] "Failed to get status for pod" podUID="8acd60714a0f0f6f5e038fa659db2909" pod="kube-system/kube-apiserver-localhost" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/kube-system/pods/kube-apiserver-localhost\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:33.365725 kubelet[2299]: I0710 01:38:33.365709 2299 status_manager.go:851] "Failed to get status for pod" podUID="8e8146e9-6407-49b7-8cef-e26dac385734" pod="calico-apiserver/calico-apiserver-6d44674bc4-w2f48" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/calico-apiserver/pods/calico-apiserver-6d44674bc4-w2f48\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:33.365826 kubelet[2299]: I0710 01:38:33.365810 2299 status_manager.go:851] "Failed to get status for pod" podUID="c3f9faf5-cc25-4483-beb9-5dea29a71367" pod="calico-system/whisker-5bc4d9bd7d-nwwj6" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/calico-system/pods/whisker-5bc4d9bd7d-nwwj6\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 
01:38:33.365925 kubelet[2299]: I0710 01:38:33.365908 2299 status_manager.go:851] "Failed to get status for pod" podUID="b35b56493416c25588cb530e37ffc065" pod="kube-system/kube-scheduler-localhost" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/kube-system/pods/kube-scheduler-localhost\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:33.366012 kubelet[2299]: I0710 01:38:33.365997 2299 status_manager.go:851] "Failed to get status for pod" podUID="3f04709fe51ae4ab5abd58e8da771b74" pod="kube-system/kube-controller-manager-localhost" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/kube-system/pods/kube-controller-manager-localhost\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:33.366094 kubelet[2299]: I0710 01:38:33.366079 2299 status_manager.go:851] "Failed to get status for pod" podUID="74cf1bc5-5d5a-4dc7-850a-71013984af05" pod="calico-apiserver/calico-apiserver-6d44674bc4-b2wqb" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/calico-apiserver/pods/calico-apiserver-6d44674bc4-b2wqb\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:33.366176 kubelet[2299]: I0710 01:38:33.366161 2299 status_manager.go:851] "Failed to get status for pod" podUID="9c135a1b-00bf-4e6f-87fa-9ac292c6a135" pod="tigera-operator/tigera-operator-5bf8dfcb4-twgs2" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/tigera-operator/pods/tigera-operator-5bf8dfcb4-twgs2\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:33.366255 kubelet[2299]: I0710 01:38:33.366241 2299 status_manager.go:851] "Failed to get status for pod" podUID="bb9848ea-740a-453f-b511-e75cc1983690" pod="calico-system/calico-typha-66ddcf689b-z7vqm" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/calico-system/pods/calico-typha-66ddcf689b-z7vqm\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:33.366344 kubelet[2299]: I0710 01:38:33.366329 2299 status_manager.go:851] "Failed to get status for pod" podUID="8acd60714a0f0f6f5e038fa659db2909" pod="kube-system/kube-apiserver-localhost" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/kube-system/pods/kube-apiserver-localhost\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:33.366424 kubelet[2299]: I0710 01:38:33.366409 2299 status_manager.go:851] "Failed to get status for pod" podUID="8e8146e9-6407-49b7-8cef-e26dac385734" pod="calico-apiserver/calico-apiserver-6d44674bc4-w2f48" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/calico-apiserver/pods/calico-apiserver-6d44674bc4-w2f48\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:33.366506 kubelet[2299]: I0710 01:38:33.366491 2299 status_manager.go:851] "Failed to get status for pod" podUID="a29ef6dc-4246-436d-87dd-9c8e96247aeb" pod="kube-system/coredns-7c65d6cfc9-4k5ld" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-4k5ld\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:33.366603 kubelet[2299]: I0710 01:38:33.366569 2299 status_manager.go:851] "Failed to get status for pod" podUID="3459c244-a1ae-43bc-ad86-239a6e665757" pod="kube-system/coredns-7c65d6cfc9-snhl5" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-snhl5\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:33.366821 kubelet[2299]: I0710 01:38:33.366715 2299 status_manager.go:851] "Failed to get status for pod" 
podUID="ced04dc5-79ee-4a07-a568-b0fd4007f64c" pod="calico-system/goldmane-58fd7646b9-zxwst" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/calico-system/pods/goldmane-58fd7646b9-zxwst\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:33.366865 kubelet[2299]: I0710 01:38:33.366852 2299 status_manager.go:851] "Failed to get status for pod" podUID="b35b56493416c25588cb530e37ffc065" pod="kube-system/kube-scheduler-localhost" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/kube-system/pods/kube-scheduler-localhost\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:33.366974 kubelet[2299]: I0710 01:38:33.366958 2299 status_manager.go:851] "Failed to get status for pod" podUID="3f04709fe51ae4ab5abd58e8da771b74" pod="kube-system/kube-controller-manager-localhost" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/kube-system/pods/kube-controller-manager-localhost\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:33.367068 kubelet[2299]: I0710 01:38:33.367054 2299 status_manager.go:851] "Failed to get status for pod" podUID="c3f9faf5-cc25-4483-beb9-5dea29a71367" pod="calico-system/whisker-5bc4d9bd7d-nwwj6" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/calico-system/pods/whisker-5bc4d9bd7d-nwwj6\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:33.367161 kubelet[2299]: I0710 01:38:33.367147 2299 status_manager.go:851] "Failed to get status for pod" podUID="9c135a1b-00bf-4e6f-87fa-9ac292c6a135" pod="tigera-operator/tigera-operator-5bf8dfcb4-twgs2" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/tigera-operator/pods/tigera-operator-5bf8dfcb4-twgs2\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:33.367254 kubelet[2299]: I0710 01:38:33.367241 2299 status_manager.go:851] "Failed to get status for pod" podUID="bb9848ea-740a-453f-b511-e75cc1983690" pod="calico-system/calico-typha-66ddcf689b-z7vqm" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/calico-system/pods/calico-typha-66ddcf689b-z7vqm\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:33.367367 kubelet[2299]: I0710 01:38:33.367353 2299 status_manager.go:851] "Failed to get status for pod" podUID="74cf1bc5-5d5a-4dc7-850a-71013984af05" pod="calico-apiserver/calico-apiserver-6d44674bc4-b2wqb" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/calico-apiserver/pods/calico-apiserver-6d44674bc4-b2wqb\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:33.367465 kubelet[2299]: I0710 01:38:33.367452 2299 status_manager.go:851] "Failed to get status for pod" podUID="6367e512-6f46-407d-94e1-a5c573185269" pod="calico-system/calico-node-2k6z4" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/calico-system/pods/calico-node-2k6z4\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:33.367567 kubelet[2299]: I0710 01:38:33.367540 2299 status_manager.go:851] "Failed to get status for pod" podUID="5f01bcaa-ff1c-4bd5-988b-d3c60c6abdcc" pod="calico-system/calico-kube-controllers-5477ff879d-j2p5q" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/calico-system/pods/calico-kube-controllers-5477ff879d-j2p5q\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:33.703636 kubelet[2299]: I0710 01:38:33.703541 2299 status_manager.go:851] "Failed to get status for pod" podUID="9c135a1b-00bf-4e6f-87fa-9ac292c6a135" pod="tigera-operator/tigera-operator-5bf8dfcb4-twgs2" err="Get 
\"https://139.178.70.102:6443/api/v1/namespaces/tigera-operator/pods/tigera-operator-5bf8dfcb4-twgs2\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:33.704126 kubelet[2299]: I0710 01:38:33.704107 2299 status_manager.go:851] "Failed to get status for pod" podUID="bb9848ea-740a-453f-b511-e75cc1983690" pod="calico-system/calico-typha-66ddcf689b-z7vqm" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/calico-system/pods/calico-typha-66ddcf689b-z7vqm\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:33.704375 kubelet[2299]: I0710 01:38:33.704358 2299 status_manager.go:851] "Failed to get status for pod" podUID="74cf1bc5-5d5a-4dc7-850a-71013984af05" pod="calico-apiserver/calico-apiserver-6d44674bc4-b2wqb" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/calico-apiserver/pods/calico-apiserver-6d44674bc4-b2wqb\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:33.704597 kubelet[2299]: I0710 01:38:33.704579 2299 status_manager.go:851] "Failed to get status for pod" podUID="6367e512-6f46-407d-94e1-a5c573185269" pod="calico-system/calico-node-2k6z4" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/calico-system/pods/calico-node-2k6z4\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:33.705016 kubelet[2299]: I0710 01:38:33.704998 2299 status_manager.go:851] "Failed to get status for pod" podUID="5f01bcaa-ff1c-4bd5-988b-d3c60c6abdcc" pod="calico-system/calico-kube-controllers-5477ff879d-j2p5q" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/calico-system/pods/calico-kube-controllers-5477ff879d-j2p5q\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:33.705282 kubelet[2299]: I0710 01:38:33.705266 2299 status_manager.go:851] "Failed to get status for pod" podUID="8acd60714a0f0f6f5e038fa659db2909" pod="kube-system/kube-apiserver-localhost" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/kube-system/pods/kube-apiserver-localhost\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:33.705516 kubelet[2299]: I0710 01:38:33.705499 2299 status_manager.go:851] "Failed to get status for pod" podUID="8e8146e9-6407-49b7-8cef-e26dac385734" pod="calico-apiserver/calico-apiserver-6d44674bc4-w2f48" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/calico-apiserver/pods/calico-apiserver-6d44674bc4-w2f48\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:33.705780 kubelet[2299]: I0710 01:38:33.705764 2299 status_manager.go:851] "Failed to get status for pod" podUID="a29ef6dc-4246-436d-87dd-9c8e96247aeb" pod="kube-system/coredns-7c65d6cfc9-4k5ld" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-4k5ld\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:33.706024 kubelet[2299]: I0710 01:38:33.706000 2299 status_manager.go:851] "Failed to get status for pod" podUID="3459c244-a1ae-43bc-ad86-239a6e665757" pod="kube-system/coredns-7c65d6cfc9-snhl5" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-snhl5\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:33.706238 kubelet[2299]: I0710 01:38:33.706222 2299 status_manager.go:851] "Failed to get status for pod" podUID="ced04dc5-79ee-4a07-a568-b0fd4007f64c" pod="calico-system/goldmane-58fd7646b9-zxwst" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/calico-system/pods/goldmane-58fd7646b9-zxwst\": dial tcp 139.178.70.102:6443: 
connect: connection refused" Jul 10 01:38:33.706455 kubelet[2299]: I0710 01:38:33.706439 2299 status_manager.go:851] "Failed to get status for pod" podUID="b35b56493416c25588cb530e37ffc065" pod="kube-system/kube-scheduler-localhost" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/kube-system/pods/kube-scheduler-localhost\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:33.706777 kubelet[2299]: I0710 01:38:33.706751 2299 status_manager.go:851] "Failed to get status for pod" podUID="3f04709fe51ae4ab5abd58e8da771b74" pod="kube-system/kube-controller-manager-localhost" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/kube-system/pods/kube-controller-manager-localhost\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:33.707049 kubelet[2299]: I0710 01:38:33.707033 2299 status_manager.go:851] "Failed to get status for pod" podUID="c3f9faf5-cc25-4483-beb9-5dea29a71367" pod="calico-system/whisker-5bc4d9bd7d-nwwj6" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/calico-system/pods/whisker-5bc4d9bd7d-nwwj6\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:33.707317 kubelet[2299]: I0710 01:38:33.707301 2299 status_manager.go:851] "Failed to get status for pod" podUID="b35b56493416c25588cb530e37ffc065" pod="kube-system/kube-scheduler-localhost" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/kube-system/pods/kube-scheduler-localhost\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:33.707552 kubelet[2299]: I0710 01:38:33.707535 2299 status_manager.go:851] "Failed to get status for pod" podUID="3f04709fe51ae4ab5abd58e8da771b74" pod="kube-system/kube-controller-manager-localhost" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/kube-system/pods/kube-controller-manager-localhost\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:33.707818 kubelet[2299]: I0710 01:38:33.707801 2299 status_manager.go:851] "Failed to get status for pod" podUID="c3f9faf5-cc25-4483-beb9-5dea29a71367" pod="calico-system/whisker-5bc4d9bd7d-nwwj6" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/calico-system/pods/whisker-5bc4d9bd7d-nwwj6\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:33.708066 kubelet[2299]: I0710 01:38:33.708054 2299 status_manager.go:851] "Failed to get status for pod" podUID="9c135a1b-00bf-4e6f-87fa-9ac292c6a135" pod="tigera-operator/tigera-operator-5bf8dfcb4-twgs2" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/tigera-operator/pods/tigera-operator-5bf8dfcb4-twgs2\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:33.708223 kubelet[2299]: I0710 01:38:33.708211 2299 status_manager.go:851] "Failed to get status for pod" podUID="bb9848ea-740a-453f-b511-e75cc1983690" pod="calico-system/calico-typha-66ddcf689b-z7vqm" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/calico-system/pods/calico-typha-66ddcf689b-z7vqm\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:33.708382 kubelet[2299]: I0710 01:38:33.708371 2299 status_manager.go:851] "Failed to get status for pod" podUID="74cf1bc5-5d5a-4dc7-850a-71013984af05" pod="calico-apiserver/calico-apiserver-6d44674bc4-b2wqb" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/calico-apiserver/pods/calico-apiserver-6d44674bc4-b2wqb\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:33.708562 kubelet[2299]: I0710 01:38:33.708550 2299 status_manager.go:851] "Failed to get status 
for pod" podUID="6367e512-6f46-407d-94e1-a5c573185269" pod="calico-system/calico-node-2k6z4" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/calico-system/pods/calico-node-2k6z4\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:33.708717 kubelet[2299]: I0710 01:38:33.708706 2299 status_manager.go:851] "Failed to get status for pod" podUID="5f01bcaa-ff1c-4bd5-988b-d3c60c6abdcc" pod="calico-system/calico-kube-controllers-5477ff879d-j2p5q" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/calico-system/pods/calico-kube-controllers-5477ff879d-j2p5q\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:33.709592 kubelet[2299]: I0710 01:38:33.709576 2299 status_manager.go:851] "Failed to get status for pod" podUID="8acd60714a0f0f6f5e038fa659db2909" pod="kube-system/kube-apiserver-localhost" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/kube-system/pods/kube-apiserver-localhost\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:33.709847 kubelet[2299]: I0710 01:38:33.709833 2299 status_manager.go:851] "Failed to get status for pod" podUID="8e8146e9-6407-49b7-8cef-e26dac385734" pod="calico-apiserver/calico-apiserver-6d44674bc4-w2f48" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/calico-apiserver/pods/calico-apiserver-6d44674bc4-w2f48\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:33.710029 kubelet[2299]: I0710 01:38:33.710017 2299 status_manager.go:851] "Failed to get status for pod" podUID="a29ef6dc-4246-436d-87dd-9c8e96247aeb" pod="kube-system/coredns-7c65d6cfc9-4k5ld" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-4k5ld\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:33.710389 kubelet[2299]: I0710 01:38:33.710376 2299 status_manager.go:851] "Failed to get status for pod" podUID="3459c244-a1ae-43bc-ad86-239a6e665757" pod="kube-system/coredns-7c65d6cfc9-snhl5" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-snhl5\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:33.710541 kubelet[2299]: I0710 01:38:33.710525 2299 status_manager.go:851] "Failed to get status for pod" podUID="ced04dc5-79ee-4a07-a568-b0fd4007f64c" pod="calico-system/goldmane-58fd7646b9-zxwst" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/calico-system/pods/goldmane-58fd7646b9-zxwst\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:34.165103 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-225609f895efa2c766fff4c357d326a94aeff3f2dc9267546e9096e0fdcbf87a-rootfs.mount: Deactivated successfully. Jul 10 01:38:34.165184 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1e5279231269dfc537e88aa5fdcd50f6d78342ad66bf00ac46bda4314ea2b591-rootfs.mount: Deactivated successfully. 
Jul 10 01:38:34.173608 env[1363]: time="2025-07-10T01:38:34.173570895Z" level=info msg="shim disconnected" id=1e5279231269dfc537e88aa5fdcd50f6d78342ad66bf00ac46bda4314ea2b591 Jul 10 01:38:34.173913 env[1363]: time="2025-07-10T01:38:34.173902164Z" level=warning msg="cleaning up after shim disconnected" id=1e5279231269dfc537e88aa5fdcd50f6d78342ad66bf00ac46bda4314ea2b591 namespace=k8s.io Jul 10 01:38:34.173968 env[1363]: time="2025-07-10T01:38:34.173957421Z" level=info msg="cleaning up dead shim" Jul 10 01:38:34.174155 env[1363]: time="2025-07-10T01:38:34.173973989Z" level=info msg="shim disconnected" id=225609f895efa2c766fff4c357d326a94aeff3f2dc9267546e9096e0fdcbf87a Jul 10 01:38:34.174211 env[1363]: time="2025-07-10T01:38:34.174199141Z" level=warning msg="cleaning up after shim disconnected" id=225609f895efa2c766fff4c357d326a94aeff3f2dc9267546e9096e0fdcbf87a namespace=k8s.io Jul 10 01:38:34.174258 env[1363]: time="2025-07-10T01:38:34.174249006Z" level=info msg="cleaning up dead shim" Jul 10 01:38:34.178806 env[1363]: time="2025-07-10T01:38:34.178791916Z" level=warning msg="cleanup warnings time=\"2025-07-10T01:38:34Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=8681 runtime=io.containerd.runc.v2\n" Jul 10 01:38:34.179487 env[1363]: time="2025-07-10T01:38:34.179471692Z" level=info msg="StopContainer for \"1e5279231269dfc537e88aa5fdcd50f6d78342ad66bf00ac46bda4314ea2b591\" returns successfully" Jul 10 01:38:34.179826 env[1363]: time="2025-07-10T01:38:34.179813403Z" level=info msg="StopPodSandbox for \"5e9aedbb1d15e1d7bd8b79126017424346117b11833100260ee33d8092673319\"" Jul 10 01:38:34.179949 env[1363]: time="2025-07-10T01:38:34.179936678Z" level=info msg="Container to stop \"1e5279231269dfc537e88aa5fdcd50f6d78342ad66bf00ac46bda4314ea2b591\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 10 01:38:34.180003 env[1363]: time="2025-07-10T01:38:34.179991505Z" level=info msg="Container to stop \"9f554c7a1a1192bf8f33530ae0b697d908ab3fedeb5044bf3f3dc34eb3189402\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 10 01:38:34.181683 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-5e9aedbb1d15e1d7bd8b79126017424346117b11833100260ee33d8092673319-shm.mount: Deactivated successfully. 
Jul 10 01:38:34.184553 env[1363]: time="2025-07-10T01:38:34.184539278Z" level=warning msg="cleanup warnings time=\"2025-07-10T01:38:34Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=8682 runtime=io.containerd.runc.v2\n" Jul 10 01:38:34.185321 env[1363]: time="2025-07-10T01:38:34.185307550Z" level=info msg="StopContainer for \"225609f895efa2c766fff4c357d326a94aeff3f2dc9267546e9096e0fdcbf87a\" returns successfully" Jul 10 01:38:34.185668 env[1363]: time="2025-07-10T01:38:34.185598021Z" level=info msg="StopPodSandbox for \"47b065192ffd0b7504649af3406bb653c598c34d33430dd9e03fcdcb34aca714\"" Jul 10 01:38:34.185711 env[1363]: time="2025-07-10T01:38:34.185686005Z" level=info msg="Container to stop \"225609f895efa2c766fff4c357d326a94aeff3f2dc9267546e9096e0fdcbf87a\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 10 01:38:34.185738 env[1363]: time="2025-07-10T01:38:34.185709041Z" level=info msg="Container to stop \"2c9a852303586e6248b136709c2283dd38b0cb347056e0f9d8aa77a5eb662d30\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 10 01:38:34.187260 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-47b065192ffd0b7504649af3406bb653c598c34d33430dd9e03fcdcb34aca714-shm.mount: Deactivated successfully. Jul 10 01:38:34.203254 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5e9aedbb1d15e1d7bd8b79126017424346117b11833100260ee33d8092673319-rootfs.mount: Deactivated successfully. Jul 10 01:38:34.204484 env[1363]: time="2025-07-10T01:38:34.204380439Z" level=info msg="shim disconnected" id=5e9aedbb1d15e1d7bd8b79126017424346117b11833100260ee33d8092673319 Jul 10 01:38:34.204484 env[1363]: time="2025-07-10T01:38:34.204433159Z" level=warning msg="cleaning up after shim disconnected" id=5e9aedbb1d15e1d7bd8b79126017424346117b11833100260ee33d8092673319 namespace=k8s.io Jul 10 01:38:34.204484 env[1363]: time="2025-07-10T01:38:34.204439802Z" level=info msg="cleaning up dead shim" Jul 10 01:38:34.211924 env[1363]: time="2025-07-10T01:38:34.211899254Z" level=info msg="shim disconnected" id=47b065192ffd0b7504649af3406bb653c598c34d33430dd9e03fcdcb34aca714 Jul 10 01:38:34.212187 env[1363]: time="2025-07-10T01:38:34.212174677Z" level=warning msg="cleaning up after shim disconnected" id=47b065192ffd0b7504649af3406bb653c598c34d33430dd9e03fcdcb34aca714 namespace=k8s.io Jul 10 01:38:34.212402 env[1363]: time="2025-07-10T01:38:34.212392280Z" level=info msg="cleaning up dead shim" Jul 10 01:38:34.212700 env[1363]: time="2025-07-10T01:38:34.212683152Z" level=warning msg="cleanup warnings time=\"2025-07-10T01:38:34Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=8743 runtime=io.containerd.runc.v2\n" Jul 10 01:38:34.220725 env[1363]: time="2025-07-10T01:38:34.220697005Z" level=warning msg="cleanup warnings time=\"2025-07-10T01:38:34Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=8763 runtime=io.containerd.runc.v2\n" Jul 10 01:38:34.238446 env[1363]: time="2025-07-10T01:38:34.238398674Z" level=error msg="StopPodSandbox for \"5e9aedbb1d15e1d7bd8b79126017424346117b11833100260ee33d8092673319\" failed" error="failed to destroy network for sandbox \"5e9aedbb1d15e1d7bd8b79126017424346117b11833100260ee33d8092673319\": plugin type=\"calico\" failed (delete): error getting ClusterInformation: Get \"https://10.96.0.1:443/apis/crd.projectcalico.org/v1/clusterinformations/default\": dial tcp 10.96.0.1:443: connect: connection refused" Jul 10 01:38:34.238584 kubelet[2299]: E0710 01:38:34.238559 2299 log.go:32] 
"StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"5e9aedbb1d15e1d7bd8b79126017424346117b11833100260ee33d8092673319\": plugin type=\"calico\" failed (delete): error getting ClusterInformation: Get \"https://10.96.0.1:443/apis/crd.projectcalico.org/v1/clusterinformations/default\": dial tcp 10.96.0.1:443: connect: connection refused" podSandboxID="5e9aedbb1d15e1d7bd8b79126017424346117b11833100260ee33d8092673319" Jul 10 01:38:34.238672 kubelet[2299]: E0710 01:38:34.238592 2299 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"5e9aedbb1d15e1d7bd8b79126017424346117b11833100260ee33d8092673319"} Jul 10 01:38:34.238672 kubelet[2299]: E0710 01:38:34.238623 2299 kubelet.go:2027] "Unhandled Error" err="failed to \"KillPodSandbox\" for \"a29ef6dc-4246-436d-87dd-9c8e96247aeb\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5e9aedbb1d15e1d7bd8b79126017424346117b11833100260ee33d8092673319\\\": plugin type=\\\"calico\\\" failed (delete): error getting ClusterInformation: Get \\\"https://10.96.0.1:443/apis/crd.projectcalico.org/v1/clusterinformations/default\\\": dial tcp 10.96.0.1:443: connect: connection refused\"" logger="UnhandledError" Jul 10 01:38:34.239692 kubelet[2299]: E0710 01:38:34.239669 2299 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"a29ef6dc-4246-436d-87dd-9c8e96247aeb\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5e9aedbb1d15e1d7bd8b79126017424346117b11833100260ee33d8092673319\\\": plugin type=\\\"calico\\\" failed (delete): error getting ClusterInformation: Get \\\"https://10.96.0.1:443/apis/crd.projectcalico.org/v1/clusterinformations/default\\\": dial tcp 10.96.0.1:443: connect: connection refused\"" pod="kube-system/coredns-7c65d6cfc9-4k5ld" podUID="a29ef6dc-4246-436d-87dd-9c8e96247aeb" Jul 10 01:38:34.248204 env[1363]: time="2025-07-10T01:38:34.248169901Z" level=error msg="StopPodSandbox for \"47b065192ffd0b7504649af3406bb653c598c34d33430dd9e03fcdcb34aca714\" failed" error="failed to destroy network for sandbox \"47b065192ffd0b7504649af3406bb653c598c34d33430dd9e03fcdcb34aca714\": plugin type=\"calico\" failed (delete): error getting ClusterInformation: Get \"https://10.96.0.1:443/apis/crd.projectcalico.org/v1/clusterinformations/default\": dial tcp 10.96.0.1:443: connect: connection refused" Jul 10 01:38:34.248326 kubelet[2299]: E0710 01:38:34.248297 2299 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"47b065192ffd0b7504649af3406bb653c598c34d33430dd9e03fcdcb34aca714\": plugin type=\"calico\" failed (delete): error getting ClusterInformation: Get \"https://10.96.0.1:443/apis/crd.projectcalico.org/v1/clusterinformations/default\": dial tcp 10.96.0.1:443: connect: connection refused" podSandboxID="47b065192ffd0b7504649af3406bb653c598c34d33430dd9e03fcdcb34aca714" Jul 10 01:38:34.248390 kubelet[2299]: E0710 01:38:34.248332 2299 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"47b065192ffd0b7504649af3406bb653c598c34d33430dd9e03fcdcb34aca714"} Jul 10 01:38:34.248390 kubelet[2299]: E0710 01:38:34.248358 2299 kubelet.go:2027] "Unhandled Error" err="failed to \"KillPodSandbox\" for \"3459c244-a1ae-43bc-ad86-239a6e665757\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to 
destroy network for sandbox \\\"47b065192ffd0b7504649af3406bb653c598c34d33430dd9e03fcdcb34aca714\\\": plugin type=\\\"calico\\\" failed (delete): error getting ClusterInformation: Get \\\"https://10.96.0.1:443/apis/crd.projectcalico.org/v1/clusterinformations/default\\\": dial tcp 10.96.0.1:443: connect: connection refused\"" logger="UnhandledError" Jul 10 01:38:34.249575 kubelet[2299]: E0710 01:38:34.249528 2299 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"3459c244-a1ae-43bc-ad86-239a6e665757\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"47b065192ffd0b7504649af3406bb653c598c34d33430dd9e03fcdcb34aca714\\\": plugin type=\\\"calico\\\" failed (delete): error getting ClusterInformation: Get \\\"https://10.96.0.1:443/apis/crd.projectcalico.org/v1/clusterinformations/default\\\": dial tcp 10.96.0.1:443: connect: connection refused\"" pod="kube-system/coredns-7c65d6cfc9-snhl5" podUID="3459c244-a1ae-43bc-ad86-239a6e665757" Jul 10 01:38:34.299608 kubelet[2299]: I0710 01:38:34.299580 2299 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5e9aedbb1d15e1d7bd8b79126017424346117b11833100260ee33d8092673319" Jul 10 01:38:34.299608 kubelet[2299]: I0710 01:38:34.299611 2299 scope.go:117] "RemoveContainer" containerID="9f554c7a1a1192bf8f33530ae0b697d908ab3fedeb5044bf3f3dc34eb3189402" Jul 10 01:38:34.300091 kubelet[2299]: I0710 01:38:34.300065 2299 status_manager.go:851] "Failed to get status for pod" podUID="a29ef6dc-4246-436d-87dd-9c8e96247aeb" pod="kube-system/coredns-7c65d6cfc9-4k5ld" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-4k5ld\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:34.300179 env[1363]: time="2025-07-10T01:38:34.300080555Z" level=info msg="StopPodSandbox for \"5e9aedbb1d15e1d7bd8b79126017424346117b11833100260ee33d8092673319\"" Jul 10 01:38:34.300251 kubelet[2299]: I0710 01:38:34.300231 2299 status_manager.go:851] "Failed to get status for pod" podUID="3459c244-a1ae-43bc-ad86-239a6e665757" pod="kube-system/coredns-7c65d6cfc9-snhl5" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-snhl5\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:34.300354 env[1363]: time="2025-07-10T01:38:34.300336878Z" level=info msg="Container to stop \"9f554c7a1a1192bf8f33530ae0b697d908ab3fedeb5044bf3f3dc34eb3189402\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 10 01:38:34.300409 kubelet[2299]: I0710 01:38:34.300380 2299 status_manager.go:851] "Failed to get status for pod" podUID="ced04dc5-79ee-4a07-a568-b0fd4007f64c" pod="calico-system/goldmane-58fd7646b9-zxwst" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/calico-system/pods/goldmane-58fd7646b9-zxwst\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:34.300480 env[1363]: time="2025-07-10T01:38:34.300463690Z" level=info msg="Container to stop \"1e5279231269dfc537e88aa5fdcd50f6d78342ad66bf00ac46bda4314ea2b591\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 10 01:38:34.300534 kubelet[2299]: I0710 01:38:34.300507 2299 status_manager.go:851] "Failed to get status for pod" podUID="8acd60714a0f0f6f5e038fa659db2909" pod="kube-system/kube-apiserver-localhost" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/kube-system/pods/kube-apiserver-localhost\": dial tcp 139.178.70.102:6443: 
connect: connection refused" Jul 10 01:38:34.300676 kubelet[2299]: I0710 01:38:34.300649 2299 status_manager.go:851] "Failed to get status for pod" podUID="8e8146e9-6407-49b7-8cef-e26dac385734" pod="calico-apiserver/calico-apiserver-6d44674bc4-w2f48" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/calico-apiserver/pods/calico-apiserver-6d44674bc4-w2f48\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:34.300826 kubelet[2299]: I0710 01:38:34.300800 2299 status_manager.go:851] "Failed to get status for pod" podUID="c3f9faf5-cc25-4483-beb9-5dea29a71367" pod="calico-system/whisker-5bc4d9bd7d-nwwj6" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/calico-system/pods/whisker-5bc4d9bd7d-nwwj6\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:34.300946 kubelet[2299]: I0710 01:38:34.300925 2299 status_manager.go:851] "Failed to get status for pod" podUID="b35b56493416c25588cb530e37ffc065" pod="kube-system/kube-scheduler-localhost" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/kube-system/pods/kube-scheduler-localhost\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:34.301091 kubelet[2299]: I0710 01:38:34.301064 2299 status_manager.go:851] "Failed to get status for pod" podUID="3f04709fe51ae4ab5abd58e8da771b74" pod="kube-system/kube-controller-manager-localhost" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/kube-system/pods/kube-controller-manager-localhost\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:34.301209 kubelet[2299]: I0710 01:38:34.301188 2299 status_manager.go:851] "Failed to get status for pod" podUID="74cf1bc5-5d5a-4dc7-850a-71013984af05" pod="calico-apiserver/calico-apiserver-6d44674bc4-b2wqb" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/calico-apiserver/pods/calico-apiserver-6d44674bc4-b2wqb\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:34.301348 kubelet[2299]: I0710 01:38:34.301321 2299 status_manager.go:851] "Failed to get status for pod" podUID="9c135a1b-00bf-4e6f-87fa-9ac292c6a135" pod="tigera-operator/tigera-operator-5bf8dfcb4-twgs2" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/tigera-operator/pods/tigera-operator-5bf8dfcb4-twgs2\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:34.301489 kubelet[2299]: I0710 01:38:34.301467 2299 status_manager.go:851] "Failed to get status for pod" podUID="bb9848ea-740a-453f-b511-e75cc1983690" pod="calico-system/calico-typha-66ddcf689b-z7vqm" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/calico-system/pods/calico-typha-66ddcf689b-z7vqm\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:34.301634 kubelet[2299]: I0710 01:38:34.301615 2299 status_manager.go:851] "Failed to get status for pod" podUID="5f01bcaa-ff1c-4bd5-988b-d3c60c6abdcc" pod="calico-system/calico-kube-controllers-5477ff879d-j2p5q" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/calico-system/pods/calico-kube-controllers-5477ff879d-j2p5q\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:34.301775 kubelet[2299]: I0710 01:38:34.301755 2299 status_manager.go:851] "Failed to get status for pod" podUID="6367e512-6f46-407d-94e1-a5c573185269" pod="calico-system/calico-node-2k6z4" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/calico-system/pods/calico-node-2k6z4\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:34.303003 env[1363]: time="2025-07-10T01:38:34.302986653Z" 
level=info msg="RemoveContainer for \"9f554c7a1a1192bf8f33530ae0b697d908ab3fedeb5044bf3f3dc34eb3189402\"" Jul 10 01:38:34.304170 kubelet[2299]: I0710 01:38:34.304152 2299 status_manager.go:851] "Failed to get status for pod" podUID="6367e512-6f46-407d-94e1-a5c573185269" pod="calico-system/calico-node-2k6z4" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/calico-system/pods/calico-node-2k6z4\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:34.304267 kubelet[2299]: I0710 01:38:34.304252 2299 status_manager.go:851] "Failed to get status for pod" podUID="5f01bcaa-ff1c-4bd5-988b-d3c60c6abdcc" pod="calico-system/calico-kube-controllers-5477ff879d-j2p5q" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/calico-system/pods/calico-kube-controllers-5477ff879d-j2p5q\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:34.304335 env[1363]: time="2025-07-10T01:38:34.304321701Z" level=info msg="StopPodSandbox for \"47b065192ffd0b7504649af3406bb653c598c34d33430dd9e03fcdcb34aca714\"" Jul 10 01:38:34.304374 kubelet[2299]: I0710 01:38:34.304361 2299 status_manager.go:851] "Failed to get status for pod" podUID="ced04dc5-79ee-4a07-a568-b0fd4007f64c" pod="calico-system/goldmane-58fd7646b9-zxwst" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/calico-system/pods/goldmane-58fd7646b9-zxwst\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:34.304443 kubelet[2299]: I0710 01:38:34.304431 2299 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="47b065192ffd0b7504649af3406bb653c598c34d33430dd9e03fcdcb34aca714" Jul 10 01:38:34.304486 kubelet[2299]: I0710 01:38:34.304468 2299 status_manager.go:851] "Failed to get status for pod" podUID="8acd60714a0f0f6f5e038fa659db2909" pod="kube-system/kube-apiserver-localhost" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/kube-system/pods/kube-apiserver-localhost\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:34.304529 env[1363]: time="2025-07-10T01:38:34.304431296Z" level=info msg="Container to stop \"225609f895efa2c766fff4c357d326a94aeff3f2dc9267546e9096e0fdcbf87a\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 10 01:38:34.304578 kubelet[2299]: I0710 01:38:34.304564 2299 status_manager.go:851] "Failed to get status for pod" podUID="8e8146e9-6407-49b7-8cef-e26dac385734" pod="calico-apiserver/calico-apiserver-6d44674bc4-w2f48" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/calico-apiserver/pods/calico-apiserver-6d44674bc4-w2f48\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:34.304649 env[1363]: time="2025-07-10T01:38:34.304624972Z" level=info msg="Container to stop \"2c9a852303586e6248b136709c2283dd38b0cb347056e0f9d8aa77a5eb662d30\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 10 01:38:34.304845 kubelet[2299]: I0710 01:38:34.304746 2299 status_manager.go:851] "Failed to get status for pod" podUID="a29ef6dc-4246-436d-87dd-9c8e96247aeb" pod="kube-system/coredns-7c65d6cfc9-4k5ld" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-4k5ld\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:34.304927 kubelet[2299]: I0710 01:38:34.304912 2299 status_manager.go:851] "Failed to get status for pod" podUID="3459c244-a1ae-43bc-ad86-239a6e665757" pod="kube-system/coredns-7c65d6cfc9-snhl5" err="Get 
\"https://139.178.70.102:6443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-snhl5\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:34.305042 kubelet[2299]: I0710 01:38:34.305028 2299 status_manager.go:851] "Failed to get status for pod" podUID="b35b56493416c25588cb530e37ffc065" pod="kube-system/kube-scheduler-localhost" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/kube-system/pods/kube-scheduler-localhost\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:34.305235 kubelet[2299]: I0710 01:38:34.305144 2299 status_manager.go:851] "Failed to get status for pod" podUID="3f04709fe51ae4ab5abd58e8da771b74" pod="kube-system/kube-controller-manager-localhost" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/kube-system/pods/kube-controller-manager-localhost\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:34.305274 kubelet[2299]: I0710 01:38:34.305259 2299 status_manager.go:851] "Failed to get status for pod" podUID="c3f9faf5-cc25-4483-beb9-5dea29a71367" pod="calico-system/whisker-5bc4d9bd7d-nwwj6" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/calico-system/pods/whisker-5bc4d9bd7d-nwwj6\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:34.305390 kubelet[2299]: I0710 01:38:34.305366 2299 status_manager.go:851] "Failed to get status for pod" podUID="9c135a1b-00bf-4e6f-87fa-9ac292c6a135" pod="tigera-operator/tigera-operator-5bf8dfcb4-twgs2" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/tigera-operator/pods/tigera-operator-5bf8dfcb4-twgs2\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:34.305570 kubelet[2299]: I0710 01:38:34.305480 2299 status_manager.go:851] "Failed to get status for pod" podUID="bb9848ea-740a-453f-b511-e75cc1983690" pod="calico-system/calico-typha-66ddcf689b-z7vqm" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/calico-system/pods/calico-typha-66ddcf689b-z7vqm\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:34.305608 kubelet[2299]: I0710 01:38:34.305595 2299 status_manager.go:851] "Failed to get status for pod" podUID="74cf1bc5-5d5a-4dc7-850a-71013984af05" pod="calico-apiserver/calico-apiserver-6d44674bc4-b2wqb" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/calico-apiserver/pods/calico-apiserver-6d44674bc4-b2wqb\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:34.306087 env[1363]: time="2025-07-10T01:38:34.306070036Z" level=info msg="RemoveContainer for \"9f554c7a1a1192bf8f33530ae0b697d908ab3fedeb5044bf3f3dc34eb3189402\" returns successfully" Jul 10 01:38:34.306151 kubelet[2299]: I0710 01:38:34.306139 2299 scope.go:117] "RemoveContainer" containerID="2c9a852303586e6248b136709c2283dd38b0cb347056e0f9d8aa77a5eb662d30" Jul 10 01:38:34.306887 env[1363]: time="2025-07-10T01:38:34.306868878Z" level=info msg="RemoveContainer for \"2c9a852303586e6248b136709c2283dd38b0cb347056e0f9d8aa77a5eb662d30\"" Jul 10 01:38:34.308184 env[1363]: time="2025-07-10T01:38:34.308164258Z" level=info msg="RemoveContainer for \"2c9a852303586e6248b136709c2283dd38b0cb347056e0f9d8aa77a5eb662d30\" returns successfully" Jul 10 01:38:34.326690 env[1363]: time="2025-07-10T01:38:34.326650079Z" level=error msg="StopPodSandbox for \"5e9aedbb1d15e1d7bd8b79126017424346117b11833100260ee33d8092673319\" failed" error="failed to destroy network for sandbox \"5e9aedbb1d15e1d7bd8b79126017424346117b11833100260ee33d8092673319\": plugin type=\"calico\" failed (delete): error getting 
ClusterInformation: Get \"https://10.96.0.1:443/apis/crd.projectcalico.org/v1/clusterinformations/default\": dial tcp 10.96.0.1:443: connect: connection refused" Jul 10 01:38:34.326968 kubelet[2299]: E0710 01:38:34.326923 2299 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"5e9aedbb1d15e1d7bd8b79126017424346117b11833100260ee33d8092673319\": plugin type=\"calico\" failed (delete): error getting ClusterInformation: Get \"https://10.96.0.1:443/apis/crd.projectcalico.org/v1/clusterinformations/default\": dial tcp 10.96.0.1:443: connect: connection refused" podSandboxID="5e9aedbb1d15e1d7bd8b79126017424346117b11833100260ee33d8092673319" Jul 10 01:38:34.327048 kubelet[2299]: E0710 01:38:34.326974 2299 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"5e9aedbb1d15e1d7bd8b79126017424346117b11833100260ee33d8092673319"} Jul 10 01:38:34.327048 kubelet[2299]: E0710 01:38:34.327008 2299 kubelet.go:2027] "Unhandled Error" err="failed to \"KillPodSandbox\" for \"a29ef6dc-4246-436d-87dd-9c8e96247aeb\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5e9aedbb1d15e1d7bd8b79126017424346117b11833100260ee33d8092673319\\\": plugin type=\\\"calico\\\" failed (delete): error getting ClusterInformation: Get \\\"https://10.96.0.1:443/apis/crd.projectcalico.org/v1/clusterinformations/default\\\": dial tcp 10.96.0.1:443: connect: connection refused\"" logger="UnhandledError" Jul 10 01:38:34.328114 kubelet[2299]: E0710 01:38:34.328097 2299 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"a29ef6dc-4246-436d-87dd-9c8e96247aeb\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5e9aedbb1d15e1d7bd8b79126017424346117b11833100260ee33d8092673319\\\": plugin type=\\\"calico\\\" failed (delete): error getting ClusterInformation: Get \\\"https://10.96.0.1:443/apis/crd.projectcalico.org/v1/clusterinformations/default\\\": dial tcp 10.96.0.1:443: connect: connection refused\"" pod="kube-system/coredns-7c65d6cfc9-4k5ld" podUID="a29ef6dc-4246-436d-87dd-9c8e96247aeb" Jul 10 01:38:34.329342 env[1363]: time="2025-07-10T01:38:34.329314388Z" level=error msg="StopPodSandbox for \"47b065192ffd0b7504649af3406bb653c598c34d33430dd9e03fcdcb34aca714\" failed" error="failed to destroy network for sandbox \"47b065192ffd0b7504649af3406bb653c598c34d33430dd9e03fcdcb34aca714\": plugin type=\"calico\" failed (delete): error getting ClusterInformation: Get \"https://10.96.0.1:443/apis/crd.projectcalico.org/v1/clusterinformations/default\": dial tcp 10.96.0.1:443: connect: connection refused" Jul 10 01:38:34.329497 kubelet[2299]: E0710 01:38:34.329432 2299 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"47b065192ffd0b7504649af3406bb653c598c34d33430dd9e03fcdcb34aca714\": plugin type=\"calico\" failed (delete): error getting ClusterInformation: Get \"https://10.96.0.1:443/apis/crd.projectcalico.org/v1/clusterinformations/default\": dial tcp 10.96.0.1:443: connect: connection refused" podSandboxID="47b065192ffd0b7504649af3406bb653c598c34d33430dd9e03fcdcb34aca714" Jul 10 01:38:34.329497 kubelet[2299]: E0710 01:38:34.329457 2299 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"47b065192ffd0b7504649af3406bb653c598c34d33430dd9e03fcdcb34aca714"} Jul 10 
01:38:34.329497 kubelet[2299]: E0710 01:38:34.329479 2299 kubelet.go:2027] "Unhandled Error" err="failed to \"KillPodSandbox\" for \"3459c244-a1ae-43bc-ad86-239a6e665757\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"47b065192ffd0b7504649af3406bb653c598c34d33430dd9e03fcdcb34aca714\\\": plugin type=\\\"calico\\\" failed (delete): error getting ClusterInformation: Get \\\"https://10.96.0.1:443/apis/crd.projectcalico.org/v1/clusterinformations/default\\\": dial tcp 10.96.0.1:443: connect: connection refused\"" logger="UnhandledError" Jul 10 01:38:34.330658 kubelet[2299]: E0710 01:38:34.330624 2299 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"3459c244-a1ae-43bc-ad86-239a6e665757\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"47b065192ffd0b7504649af3406bb653c598c34d33430dd9e03fcdcb34aca714\\\": plugin type=\\\"calico\\\" failed (delete): error getting ClusterInformation: Get \\\"https://10.96.0.1:443/apis/crd.projectcalico.org/v1/clusterinformations/default\\\": dial tcp 10.96.0.1:443: connect: connection refused\"" pod="kube-system/coredns-7c65d6cfc9-snhl5" podUID="3459c244-a1ae-43bc-ad86-239a6e665757" Jul 10 01:38:34.332015 kubelet[2299]: W0710 01:38:34.331943 2299 reflector.go:561] object-"calico-system"/"tigera-ca-bundle": failed to list *v1.ConfigMap: Get "https://139.178.70.102:6443/api/v1/namespaces/calico-system/configmaps?fieldSelector=metadata.name%3Dtigera-ca-bundle&resourceVersion=687": dial tcp 139.178.70.102:6443: connect: connection refused Jul 10 01:38:34.332053 kubelet[2299]: E0710 01:38:34.332024 2299 reflector.go:158] "Unhandled Error" err="object-\"calico-system\"/\"tigera-ca-bundle\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://139.178.70.102:6443/api/v1/namespaces/calico-system/configmaps?fieldSelector=metadata.name%3Dtigera-ca-bundle&resourceVersion=687\": dial tcp 139.178.70.102:6443: connect: connection refused" logger="UnhandledError" Jul 10 01:38:34.495891 kubelet[2299]: I0710 01:38:34.495776 2299 status_manager.go:851] "Failed to get status for pod" podUID="5f01bcaa-ff1c-4bd5-988b-d3c60c6abdcc" pod="calico-system/calico-kube-controllers-5477ff879d-j2p5q" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/calico-system/pods/calico-kube-controllers-5477ff879d-j2p5q\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:34.497726 kubelet[2299]: I0710 01:38:34.497561 2299 status_manager.go:851] "Failed to get status for pod" podUID="6367e512-6f46-407d-94e1-a5c573185269" pod="calico-system/calico-node-2k6z4" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/calico-system/pods/calico-node-2k6z4\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:34.497767 kubelet[2299]: I0710 01:38:34.497720 2299 status_manager.go:851] "Failed to get status for pod" podUID="a29ef6dc-4246-436d-87dd-9c8e96247aeb" pod="kube-system/coredns-7c65d6cfc9-4k5ld" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-4k5ld\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:34.498110 kubelet[2299]: I0710 01:38:34.497834 2299 status_manager.go:851] "Failed to get status for pod" podUID="3459c244-a1ae-43bc-ad86-239a6e665757" pod="kube-system/coredns-7c65d6cfc9-snhl5" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-snhl5\": dial tcp 139.178.70.102:6443: 
connect: connection refused" Jul 10 01:38:34.498110 kubelet[2299]: I0710 01:38:34.497965 2299 status_manager.go:851] "Failed to get status for pod" podUID="ced04dc5-79ee-4a07-a568-b0fd4007f64c" pod="calico-system/goldmane-58fd7646b9-zxwst" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/calico-system/pods/goldmane-58fd7646b9-zxwst\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:34.498110 kubelet[2299]: I0710 01:38:34.498090 2299 status_manager.go:851] "Failed to get status for pod" podUID="8acd60714a0f0f6f5e038fa659db2909" pod="kube-system/kube-apiserver-localhost" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/kube-system/pods/kube-apiserver-localhost\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:34.498923 kubelet[2299]: I0710 01:38:34.498321 2299 status_manager.go:851] "Failed to get status for pod" podUID="8e8146e9-6407-49b7-8cef-e26dac385734" pod="calico-apiserver/calico-apiserver-6d44674bc4-w2f48" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/calico-apiserver/pods/calico-apiserver-6d44674bc4-w2f48\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:34.498923 kubelet[2299]: I0710 01:38:34.498474 2299 status_manager.go:851] "Failed to get status for pod" podUID="c3f9faf5-cc25-4483-beb9-5dea29a71367" pod="calico-system/whisker-5bc4d9bd7d-nwwj6" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/calico-system/pods/whisker-5bc4d9bd7d-nwwj6\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:34.498923 kubelet[2299]: I0710 01:38:34.498591 2299 status_manager.go:851] "Failed to get status for pod" podUID="b35b56493416c25588cb530e37ffc065" pod="kube-system/kube-scheduler-localhost" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/kube-system/pods/kube-scheduler-localhost\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:34.498923 kubelet[2299]: I0710 01:38:34.498767 2299 status_manager.go:851] "Failed to get status for pod" podUID="3f04709fe51ae4ab5abd58e8da771b74" pod="kube-system/kube-controller-manager-localhost" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/kube-system/pods/kube-controller-manager-localhost\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:34.498923 kubelet[2299]: I0710 01:38:34.498911 2299 status_manager.go:851] "Failed to get status for pod" podUID="74cf1bc5-5d5a-4dc7-850a-71013984af05" pod="calico-apiserver/calico-apiserver-6d44674bc4-b2wqb" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/calico-apiserver/pods/calico-apiserver-6d44674bc4-b2wqb\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:34.499086 env[1363]: time="2025-07-10T01:38:34.498340867Z" level=info msg="StopPodSandbox for \"faf470fc452c2f07757eeeb2a3a0f4d17d9a92da7cefb8e597308394b6823856\"" Jul 10 01:38:34.499086 env[1363]: time="2025-07-10T01:38:34.498395625Z" level=info msg="Container to stop \"a200d687dc84da70963635919a56406c3ab3b1d9e93e3d78979e61b2e309695b\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 10 01:38:34.499160 kubelet[2299]: I0710 01:38:34.499044 2299 status_manager.go:851] "Failed to get status for pod" podUID="9c135a1b-00bf-4e6f-87fa-9ac292c6a135" pod="tigera-operator/tigera-operator-5bf8dfcb4-twgs2" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/tigera-operator/pods/tigera-operator-5bf8dfcb4-twgs2\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:34.499194 kubelet[2299]: I0710 
01:38:34.499160 2299 status_manager.go:851] "Failed to get status for pod" podUID="bb9848ea-740a-453f-b511-e75cc1983690" pod="calico-system/calico-typha-66ddcf689b-z7vqm" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/calico-system/pods/calico-typha-66ddcf689b-z7vqm\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:34.499317 kubelet[2299]: I0710 01:38:34.499283 2299 status_manager.go:851] "Failed to get status for pod" podUID="b35b56493416c25588cb530e37ffc065" pod="kube-system/kube-scheduler-localhost" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/kube-system/pods/kube-scheduler-localhost\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:34.500520 kubelet[2299]: I0710 01:38:34.499428 2299 status_manager.go:851] "Failed to get status for pod" podUID="3f04709fe51ae4ab5abd58e8da771b74" pod="kube-system/kube-controller-manager-localhost" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/kube-system/pods/kube-controller-manager-localhost\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:34.500520 kubelet[2299]: I0710 01:38:34.499563 2299 status_manager.go:851] "Failed to get status for pod" podUID="c3f9faf5-cc25-4483-beb9-5dea29a71367" pod="calico-system/whisker-5bc4d9bd7d-nwwj6" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/calico-system/pods/whisker-5bc4d9bd7d-nwwj6\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:34.500520 kubelet[2299]: I0710 01:38:34.499707 2299 status_manager.go:851] "Failed to get status for pod" podUID="9c135a1b-00bf-4e6f-87fa-9ac292c6a135" pod="tigera-operator/tigera-operator-5bf8dfcb4-twgs2" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/tigera-operator/pods/tigera-operator-5bf8dfcb4-twgs2\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:34.500520 kubelet[2299]: I0710 01:38:34.499820 2299 status_manager.go:851] "Failed to get status for pod" podUID="bb9848ea-740a-453f-b511-e75cc1983690" pod="calico-system/calico-typha-66ddcf689b-z7vqm" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/calico-system/pods/calico-typha-66ddcf689b-z7vqm\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:34.500520 kubelet[2299]: I0710 01:38:34.499929 2299 status_manager.go:851] "Failed to get status for pod" podUID="74cf1bc5-5d5a-4dc7-850a-71013984af05" pod="calico-apiserver/calico-apiserver-6d44674bc4-b2wqb" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/calico-apiserver/pods/calico-apiserver-6d44674bc4-b2wqb\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:34.500520 kubelet[2299]: I0710 01:38:34.500041 2299 status_manager.go:851] "Failed to get status for pod" podUID="6367e512-6f46-407d-94e1-a5c573185269" pod="calico-system/calico-node-2k6z4" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/calico-system/pods/calico-node-2k6z4\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:34.500520 kubelet[2299]: I0710 01:38:34.500116 2299 status_manager.go:851] "Failed to get status for pod" podUID="5f01bcaa-ff1c-4bd5-988b-d3c60c6abdcc" pod="calico-system/calico-kube-controllers-5477ff879d-j2p5q" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/calico-system/pods/calico-kube-controllers-5477ff879d-j2p5q\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:34.500520 kubelet[2299]: I0710 01:38:34.500192 2299 status_manager.go:851] "Failed to get status for pod" 
podUID="ced04dc5-79ee-4a07-a568-b0fd4007f64c" pod="calico-system/goldmane-58fd7646b9-zxwst" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/calico-system/pods/goldmane-58fd7646b9-zxwst\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:34.500520 kubelet[2299]: I0710 01:38:34.500266 2299 status_manager.go:851] "Failed to get status for pod" podUID="8acd60714a0f0f6f5e038fa659db2909" pod="kube-system/kube-apiserver-localhost" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/kube-system/pods/kube-apiserver-localhost\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:34.500520 kubelet[2299]: I0710 01:38:34.500339 2299 status_manager.go:851] "Failed to get status for pod" podUID="8e8146e9-6407-49b7-8cef-e26dac385734" pod="calico-apiserver/calico-apiserver-6d44674bc4-w2f48" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/calico-apiserver/pods/calico-apiserver-6d44674bc4-w2f48\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:34.500520 kubelet[2299]: I0710 01:38:34.500411 2299 status_manager.go:851] "Failed to get status for pod" podUID="a29ef6dc-4246-436d-87dd-9c8e96247aeb" pod="kube-system/coredns-7c65d6cfc9-4k5ld" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-4k5ld\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:34.500520 kubelet[2299]: I0710 01:38:34.500485 2299 status_manager.go:851] "Failed to get status for pod" podUID="3459c244-a1ae-43bc-ad86-239a6e665757" pod="kube-system/coredns-7c65d6cfc9-snhl5" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-snhl5\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:34.500830 env[1363]: time="2025-07-10T01:38:34.499925850Z" level=info msg="StopPodSandbox for \"d50fd4405e1f03ed2cdfbef802c2261b6b6ef77dbd652ba6fa35f73abffba742\"" Jul 10 01:38:34.500830 env[1363]: time="2025-07-10T01:38:34.499961035Z" level=info msg="Container to stop \"0a7b9b0ea47aa889b6d5597d41d9f5ecf3ccc392f2d5f74cd7be134b392cec28\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 10 01:38:34.500830 env[1363]: time="2025-07-10T01:38:34.500285137Z" level=info msg="StopPodSandbox for \"131d31244e534a733a530103ddea3666cd2eb72fb0933d89a095d6d044cd52d3\"" Jul 10 01:38:34.500830 env[1363]: time="2025-07-10T01:38:34.500310261Z" level=info msg="Container to stop \"915c58c03353ee54736489abf3797867734b173634b282af0191665aad606e66\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 10 01:38:34.520046 env[1363]: time="2025-07-10T01:38:34.520011799Z" level=error msg="StopPodSandbox for \"faf470fc452c2f07757eeeb2a3a0f4d17d9a92da7cefb8e597308394b6823856\" failed" error="failed to destroy network for sandbox \"faf470fc452c2f07757eeeb2a3a0f4d17d9a92da7cefb8e597308394b6823856\": plugin type=\"calico\" failed (delete): error getting ClusterInformation: Get \"https://10.96.0.1:443/apis/crd.projectcalico.org/v1/clusterinformations/default\": dial tcp 10.96.0.1:443: connect: connection refused" Jul 10 01:38:34.520391 kubelet[2299]: E0710 01:38:34.520284 2299 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"faf470fc452c2f07757eeeb2a3a0f4d17d9a92da7cefb8e597308394b6823856\": plugin type=\"calico\" failed (delete): error getting ClusterInformation: Get \"https://10.96.0.1:443/apis/crd.projectcalico.org/v1/clusterinformations/default\": 
dial tcp 10.96.0.1:443: connect: connection refused" podSandboxID="faf470fc452c2f07757eeeb2a3a0f4d17d9a92da7cefb8e597308394b6823856" Jul 10 01:38:34.520391 kubelet[2299]: E0710 01:38:34.520325 2299 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"faf470fc452c2f07757eeeb2a3a0f4d17d9a92da7cefb8e597308394b6823856"} Jul 10 01:38:34.520391 kubelet[2299]: E0710 01:38:34.520360 2299 kubelet.go:2027] "Unhandled Error" err="failed to \"KillPodSandbox\" for \"74cf1bc5-5d5a-4dc7-850a-71013984af05\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"faf470fc452c2f07757eeeb2a3a0f4d17d9a92da7cefb8e597308394b6823856\\\": plugin type=\\\"calico\\\" failed (delete): error getting ClusterInformation: Get \\\"https://10.96.0.1:443/apis/crd.projectcalico.org/v1/clusterinformations/default\\\": dial tcp 10.96.0.1:443: connect: connection refused\"" logger="UnhandledError" Jul 10 01:38:34.520943 env[1363]: time="2025-07-10T01:38:34.520917125Z" level=error msg="StopPodSandbox for \"131d31244e534a733a530103ddea3666cd2eb72fb0933d89a095d6d044cd52d3\" failed" error="failed to destroy network for sandbox \"131d31244e534a733a530103ddea3666cd2eb72fb0933d89a095d6d044cd52d3\": plugin type=\"calico\" failed (delete): error getting ClusterInformation: Get \"https://10.96.0.1:443/apis/crd.projectcalico.org/v1/clusterinformations/default\": dial tcp 10.96.0.1:443: connect: connection refused" Jul 10 01:38:34.521124 kubelet[2299]: E0710 01:38:34.521046 2299 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"131d31244e534a733a530103ddea3666cd2eb72fb0933d89a095d6d044cd52d3\": plugin type=\"calico\" failed (delete): error getting ClusterInformation: Get \"https://10.96.0.1:443/apis/crd.projectcalico.org/v1/clusterinformations/default\": dial tcp 10.96.0.1:443: connect: connection refused" podSandboxID="131d31244e534a733a530103ddea3666cd2eb72fb0933d89a095d6d044cd52d3" Jul 10 01:38:34.521124 kubelet[2299]: E0710 01:38:34.521068 2299 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"131d31244e534a733a530103ddea3666cd2eb72fb0933d89a095d6d044cd52d3"} Jul 10 01:38:34.521124 kubelet[2299]: E0710 01:38:34.521104 2299 kubelet.go:2027] "Unhandled Error" err="failed to \"KillPodSandbox\" for \"8e8146e9-6407-49b7-8cef-e26dac385734\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"131d31244e534a733a530103ddea3666cd2eb72fb0933d89a095d6d044cd52d3\\\": plugin type=\\\"calico\\\" failed (delete): error getting ClusterInformation: Get \\\"https://10.96.0.1:443/apis/crd.projectcalico.org/v1/clusterinformations/default\\\": dial tcp 10.96.0.1:443: connect: connection refused\"" logger="UnhandledError" Jul 10 01:38:34.522226 kubelet[2299]: E0710 01:38:34.522188 2299 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"74cf1bc5-5d5a-4dc7-850a-71013984af05\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"faf470fc452c2f07757eeeb2a3a0f4d17d9a92da7cefb8e597308394b6823856\\\": plugin type=\\\"calico\\\" failed (delete): error getting ClusterInformation: Get \\\"https://10.96.0.1:443/apis/crd.projectcalico.org/v1/clusterinformations/default\\\": dial tcp 10.96.0.1:443: connect: connection refused\"" pod="calico-apiserver/calico-apiserver-6d44674bc4-b2wqb" 
podUID="74cf1bc5-5d5a-4dc7-850a-71013984af05" Jul 10 01:38:34.522226 kubelet[2299]: E0710 01:38:34.522213 2299 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"8e8146e9-6407-49b7-8cef-e26dac385734\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"131d31244e534a733a530103ddea3666cd2eb72fb0933d89a095d6d044cd52d3\\\": plugin type=\\\"calico\\\" failed (delete): error getting ClusterInformation: Get \\\"https://10.96.0.1:443/apis/crd.projectcalico.org/v1/clusterinformations/default\\\": dial tcp 10.96.0.1:443: connect: connection refused\"" pod="calico-apiserver/calico-apiserver-6d44674bc4-w2f48" podUID="8e8146e9-6407-49b7-8cef-e26dac385734" Jul 10 01:38:34.533328 env[1363]: time="2025-07-10T01:38:34.533282602Z" level=error msg="StopPodSandbox for \"d50fd4405e1f03ed2cdfbef802c2261b6b6ef77dbd652ba6fa35f73abffba742\" failed" error="failed to destroy network for sandbox \"d50fd4405e1f03ed2cdfbef802c2261b6b6ef77dbd652ba6fa35f73abffba742\": plugin type=\"calico\" failed (delete): error getting ClusterInformation: Get \"https://10.96.0.1:443/apis/crd.projectcalico.org/v1/clusterinformations/default\": dial tcp 10.96.0.1:443: connect: connection refused" Jul 10 01:38:34.533589 kubelet[2299]: E0710 01:38:34.533477 2299 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"d50fd4405e1f03ed2cdfbef802c2261b6b6ef77dbd652ba6fa35f73abffba742\": plugin type=\"calico\" failed (delete): error getting ClusterInformation: Get \"https://10.96.0.1:443/apis/crd.projectcalico.org/v1/clusterinformations/default\": dial tcp 10.96.0.1:443: connect: connection refused" podSandboxID="d50fd4405e1f03ed2cdfbef802c2261b6b6ef77dbd652ba6fa35f73abffba742" Jul 10 01:38:34.533589 kubelet[2299]: E0710 01:38:34.533502 2299 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"d50fd4405e1f03ed2cdfbef802c2261b6b6ef77dbd652ba6fa35f73abffba742"} Jul 10 01:38:34.533712 env[1363]: time="2025-07-10T01:38:34.533673537Z" level=info msg="StopPodSandbox for \"3e37249528bb3e0be92befd65b6647a34c4c854d8942b3cdda871096eeadbddb\"" Jul 10 01:38:34.550378 env[1363]: time="2025-07-10T01:38:34.550343843Z" level=error msg="StopPodSandbox for \"3e37249528bb3e0be92befd65b6647a34c4c854d8942b3cdda871096eeadbddb\" failed" error="failed to destroy network for sandbox \"3e37249528bb3e0be92befd65b6647a34c4c854d8942b3cdda871096eeadbddb\": plugin type=\"calico\" failed (delete): error getting ClusterInformation: Get \"https://10.96.0.1:443/apis/crd.projectcalico.org/v1/clusterinformations/default\": dial tcp 10.96.0.1:443: connect: connection refused" Jul 10 01:38:34.550658 kubelet[2299]: E0710 01:38:34.550548 2299 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"3e37249528bb3e0be92befd65b6647a34c4c854d8942b3cdda871096eeadbddb\": plugin type=\"calico\" failed (delete): error getting ClusterInformation: Get \"https://10.96.0.1:443/apis/crd.projectcalico.org/v1/clusterinformations/default\": dial tcp 10.96.0.1:443: connect: connection refused" podSandboxID="3e37249528bb3e0be92befd65b6647a34c4c854d8942b3cdda871096eeadbddb" Jul 10 01:38:34.550658 kubelet[2299]: E0710 01:38:34.550588 2299 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"3e37249528bb3e0be92befd65b6647a34c4c854d8942b3cdda871096eeadbddb"} Jul 10 01:38:34.550658 
kubelet[2299]: E0710 01:38:34.550618 2299 kubelet.go:2027] "Unhandled Error" err="failed to \"KillPodSandbox\" for \"ced04dc5-79ee-4a07-a568-b0fd4007f64c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3e37249528bb3e0be92befd65b6647a34c4c854d8942b3cdda871096eeadbddb\\\": plugin type=\\\"calico\\\" failed (delete): error getting ClusterInformation: Get \\\"https://10.96.0.1:443/apis/crd.projectcalico.org/v1/clusterinformations/default\\\": dial tcp 10.96.0.1:443: connect: connection refused\"" logger="UnhandledError" Jul 10 01:38:34.551830 kubelet[2299]: E0710 01:38:34.551778 2299 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"ced04dc5-79ee-4a07-a568-b0fd4007f64c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3e37249528bb3e0be92befd65b6647a34c4c854d8942b3cdda871096eeadbddb\\\": plugin type=\\\"calico\\\" failed (delete): error getting ClusterInformation: Get \\\"https://10.96.0.1:443/apis/crd.projectcalico.org/v1/clusterinformations/default\\\": dial tcp 10.96.0.1:443: connect: connection refused\"" pod="calico-system/goldmane-58fd7646b9-zxwst" podUID="ced04dc5-79ee-4a07-a568-b0fd4007f64c" Jul 10 01:38:35.164265 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-47b065192ffd0b7504649af3406bb653c598c34d33430dd9e03fcdcb34aca714-rootfs.mount: Deactivated successfully. Jul 10 01:38:35.308661 kubelet[2299]: I0710 01:38:35.306373 2299 status_manager.go:851] "Failed to get status for pod" podUID="9c135a1b-00bf-4e6f-87fa-9ac292c6a135" pod="tigera-operator/tigera-operator-5bf8dfcb4-twgs2" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/tigera-operator/pods/tigera-operator-5bf8dfcb4-twgs2\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:35.308661 kubelet[2299]: I0710 01:38:35.306509 2299 status_manager.go:851] "Failed to get status for pod" podUID="bb9848ea-740a-453f-b511-e75cc1983690" pod="calico-system/calico-typha-66ddcf689b-z7vqm" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/calico-system/pods/calico-typha-66ddcf689b-z7vqm\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:35.308661 kubelet[2299]: I0710 01:38:35.306623 2299 status_manager.go:851] "Failed to get status for pod" podUID="74cf1bc5-5d5a-4dc7-850a-71013984af05" pod="calico-apiserver/calico-apiserver-6d44674bc4-b2wqb" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/calico-apiserver/pods/calico-apiserver-6d44674bc4-b2wqb\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:35.308661 kubelet[2299]: I0710 01:38:35.306761 2299 status_manager.go:851] "Failed to get status for pod" podUID="6367e512-6f46-407d-94e1-a5c573185269" pod="calico-system/calico-node-2k6z4" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/calico-system/pods/calico-node-2k6z4\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:35.308661 kubelet[2299]: I0710 01:38:35.306873 2299 status_manager.go:851] "Failed to get status for pod" podUID="5f01bcaa-ff1c-4bd5-988b-d3c60c6abdcc" pod="calico-system/calico-kube-controllers-5477ff879d-j2p5q" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/calico-system/pods/calico-kube-controllers-5477ff879d-j2p5q\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:35.308661 kubelet[2299]: I0710 01:38:35.306980 2299 status_manager.go:851] "Failed to get status for pod" 
podUID="3459c244-a1ae-43bc-ad86-239a6e665757" pod="kube-system/coredns-7c65d6cfc9-snhl5" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-snhl5\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:35.308661 kubelet[2299]: I0710 01:38:35.307100 2299 status_manager.go:851] "Failed to get status for pod" podUID="ced04dc5-79ee-4a07-a568-b0fd4007f64c" pod="calico-system/goldmane-58fd7646b9-zxwst" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/calico-system/pods/goldmane-58fd7646b9-zxwst\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:35.308661 kubelet[2299]: I0710 01:38:35.307276 2299 status_manager.go:851] "Failed to get status for pod" podUID="8acd60714a0f0f6f5e038fa659db2909" pod="kube-system/kube-apiserver-localhost" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/kube-system/pods/kube-apiserver-localhost\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:35.308661 kubelet[2299]: I0710 01:38:35.307389 2299 status_manager.go:851] "Failed to get status for pod" podUID="8e8146e9-6407-49b7-8cef-e26dac385734" pod="calico-apiserver/calico-apiserver-6d44674bc4-w2f48" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/calico-apiserver/pods/calico-apiserver-6d44674bc4-w2f48\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:35.308661 kubelet[2299]: I0710 01:38:35.307564 2299 status_manager.go:851] "Failed to get status for pod" podUID="a29ef6dc-4246-436d-87dd-9c8e96247aeb" pod="kube-system/coredns-7c65d6cfc9-4k5ld" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-4k5ld\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:35.308661 kubelet[2299]: I0710 01:38:35.307685 2299 status_manager.go:851] "Failed to get status for pod" podUID="c3f9faf5-cc25-4483-beb9-5dea29a71367" pod="calico-system/whisker-5bc4d9bd7d-nwwj6" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/calico-system/pods/whisker-5bc4d9bd7d-nwwj6\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:35.308661 kubelet[2299]: I0710 01:38:35.307864 2299 status_manager.go:851] "Failed to get status for pod" podUID="b35b56493416c25588cb530e37ffc065" pod="kube-system/kube-scheduler-localhost" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/kube-system/pods/kube-scheduler-localhost\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:35.308661 kubelet[2299]: I0710 01:38:35.307977 2299 status_manager.go:851] "Failed to get status for pod" podUID="3f04709fe51ae4ab5abd58e8da771b74" pod="kube-system/kube-controller-manager-localhost" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/kube-system/pods/kube-controller-manager-localhost\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:35.308661 kubelet[2299]: I0710 01:38:35.308282 2299 status_manager.go:851] "Failed to get status for pod" podUID="6367e512-6f46-407d-94e1-a5c573185269" pod="calico-system/calico-node-2k6z4" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/calico-system/pods/calico-node-2k6z4\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:35.308661 kubelet[2299]: I0710 01:38:35.308397 2299 status_manager.go:851] "Failed to get status for pod" podUID="5f01bcaa-ff1c-4bd5-988b-d3c60c6abdcc" pod="calico-system/calico-kube-controllers-5477ff879d-j2p5q" err="Get 
\"https://139.178.70.102:6443/api/v1/namespaces/calico-system/pods/calico-kube-controllers-5477ff879d-j2p5q\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:35.308661 kubelet[2299]: I0710 01:38:35.308502 2299 status_manager.go:851] "Failed to get status for pod" podUID="8acd60714a0f0f6f5e038fa659db2909" pod="kube-system/kube-apiserver-localhost" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/kube-system/pods/kube-apiserver-localhost\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:35.308661 kubelet[2299]: I0710 01:38:35.308608 2299 status_manager.go:851] "Failed to get status for pod" podUID="8e8146e9-6407-49b7-8cef-e26dac385734" pod="calico-apiserver/calico-apiserver-6d44674bc4-w2f48" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/calico-apiserver/pods/calico-apiserver-6d44674bc4-w2f48\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:35.309217 env[1363]: time="2025-07-10T01:38:35.306744176Z" level=info msg="StopPodSandbox for \"5e9aedbb1d15e1d7bd8b79126017424346117b11833100260ee33d8092673319\"" Jul 10 01:38:35.309217 env[1363]: time="2025-07-10T01:38:35.306787010Z" level=info msg="Container to stop \"1e5279231269dfc537e88aa5fdcd50f6d78342ad66bf00ac46bda4314ea2b591\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 10 01:38:35.309217 env[1363]: time="2025-07-10T01:38:35.308313602Z" level=info msg="StopPodSandbox for \"47b065192ffd0b7504649af3406bb653c598c34d33430dd9e03fcdcb34aca714\"" Jul 10 01:38:35.309217 env[1363]: time="2025-07-10T01:38:35.308346425Z" level=info msg="Container to stop \"225609f895efa2c766fff4c357d326a94aeff3f2dc9267546e9096e0fdcbf87a\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 10 01:38:35.310593 kubelet[2299]: I0710 01:38:35.309698 2299 status_manager.go:851] "Failed to get status for pod" podUID="a29ef6dc-4246-436d-87dd-9c8e96247aeb" pod="kube-system/coredns-7c65d6cfc9-4k5ld" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-4k5ld\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:35.310593 kubelet[2299]: I0710 01:38:35.309845 2299 status_manager.go:851] "Failed to get status for pod" podUID="3459c244-a1ae-43bc-ad86-239a6e665757" pod="kube-system/coredns-7c65d6cfc9-snhl5" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-snhl5\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:35.310593 kubelet[2299]: I0710 01:38:35.310009 2299 status_manager.go:851] "Failed to get status for pod" podUID="ced04dc5-79ee-4a07-a568-b0fd4007f64c" pod="calico-system/goldmane-58fd7646b9-zxwst" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/calico-system/pods/goldmane-58fd7646b9-zxwst\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:35.310593 kubelet[2299]: I0710 01:38:35.310107 2299 status_manager.go:851] "Failed to get status for pod" podUID="b35b56493416c25588cb530e37ffc065" pod="kube-system/kube-scheduler-localhost" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/kube-system/pods/kube-scheduler-localhost\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:35.310593 kubelet[2299]: I0710 01:38:35.310201 2299 status_manager.go:851] "Failed to get status for pod" podUID="3f04709fe51ae4ab5abd58e8da771b74" pod="kube-system/kube-controller-manager-localhost" err="Get 
\"https://139.178.70.102:6443/api/v1/namespaces/kube-system/pods/kube-controller-manager-localhost\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:35.310593 kubelet[2299]: I0710 01:38:35.310310 2299 status_manager.go:851] "Failed to get status for pod" podUID="c3f9faf5-cc25-4483-beb9-5dea29a71367" pod="calico-system/whisker-5bc4d9bd7d-nwwj6" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/calico-system/pods/whisker-5bc4d9bd7d-nwwj6\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:35.310593 kubelet[2299]: I0710 01:38:35.310401 2299 status_manager.go:851] "Failed to get status for pod" podUID="9c135a1b-00bf-4e6f-87fa-9ac292c6a135" pod="tigera-operator/tigera-operator-5bf8dfcb4-twgs2" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/tigera-operator/pods/tigera-operator-5bf8dfcb4-twgs2\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:35.310593 kubelet[2299]: I0710 01:38:35.310490 2299 status_manager.go:851] "Failed to get status for pod" podUID="bb9848ea-740a-453f-b511-e75cc1983690" pod="calico-system/calico-typha-66ddcf689b-z7vqm" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/calico-system/pods/calico-typha-66ddcf689b-z7vqm\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:35.310593 kubelet[2299]: I0710 01:38:35.310581 2299 status_manager.go:851] "Failed to get status for pod" podUID="74cf1bc5-5d5a-4dc7-850a-71013984af05" pod="calico-apiserver/calico-apiserver-6d44674bc4-b2wqb" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/calico-apiserver/pods/calico-apiserver-6d44674bc4-b2wqb\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:35.336804 env[1363]: time="2025-07-10T01:38:35.336759123Z" level=error msg="StopPodSandbox for \"5e9aedbb1d15e1d7bd8b79126017424346117b11833100260ee33d8092673319\" failed" error="failed to destroy network for sandbox \"5e9aedbb1d15e1d7bd8b79126017424346117b11833100260ee33d8092673319\": plugin type=\"calico\" failed (delete): error getting ClusterInformation: Get \"https://10.96.0.1:443/apis/crd.projectcalico.org/v1/clusterinformations/default\": dial tcp 10.96.0.1:443: connect: connection refused" Jul 10 01:38:35.337067 kubelet[2299]: E0710 01:38:35.336980 2299 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"5e9aedbb1d15e1d7bd8b79126017424346117b11833100260ee33d8092673319\": plugin type=\"calico\" failed (delete): error getting ClusterInformation: Get \"https://10.96.0.1:443/apis/crd.projectcalico.org/v1/clusterinformations/default\": dial tcp 10.96.0.1:443: connect: connection refused" podSandboxID="5e9aedbb1d15e1d7bd8b79126017424346117b11833100260ee33d8092673319" Jul 10 01:38:35.337067 kubelet[2299]: E0710 01:38:35.337013 2299 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"5e9aedbb1d15e1d7bd8b79126017424346117b11833100260ee33d8092673319"} Jul 10 01:38:35.337067 kubelet[2299]: E0710 01:38:35.337045 2299 kubelet.go:2027] "Unhandled Error" err="failed to \"KillPodSandbox\" for \"a29ef6dc-4246-436d-87dd-9c8e96247aeb\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5e9aedbb1d15e1d7bd8b79126017424346117b11833100260ee33d8092673319\\\": plugin type=\\\"calico\\\" failed (delete): error getting ClusterInformation: Get \\\"https://10.96.0.1:443/apis/crd.projectcalico.org/v1/clusterinformations/default\\\": dial tcp 
10.96.0.1:443: connect: connection refused\"" logger="UnhandledError" Jul 10 01:38:35.338164 kubelet[2299]: E0710 01:38:35.338131 2299 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"a29ef6dc-4246-436d-87dd-9c8e96247aeb\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5e9aedbb1d15e1d7bd8b79126017424346117b11833100260ee33d8092673319\\\": plugin type=\\\"calico\\\" failed (delete): error getting ClusterInformation: Get \\\"https://10.96.0.1:443/apis/crd.projectcalico.org/v1/clusterinformations/default\\\": dial tcp 10.96.0.1:443: connect: connection refused\"" pod="kube-system/coredns-7c65d6cfc9-4k5ld" podUID="a29ef6dc-4246-436d-87dd-9c8e96247aeb" Jul 10 01:38:35.338852 env[1363]: time="2025-07-10T01:38:35.338827108Z" level=error msg="StopPodSandbox for \"47b065192ffd0b7504649af3406bb653c598c34d33430dd9e03fcdcb34aca714\" failed" error="failed to destroy network for sandbox \"47b065192ffd0b7504649af3406bb653c598c34d33430dd9e03fcdcb34aca714\": plugin type=\"calico\" failed (delete): error getting ClusterInformation: Get \"https://10.96.0.1:443/apis/crd.projectcalico.org/v1/clusterinformations/default\": dial tcp 10.96.0.1:443: connect: connection refused" Jul 10 01:38:35.338998 kubelet[2299]: E0710 01:38:35.338940 2299 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"47b065192ffd0b7504649af3406bb653c598c34d33430dd9e03fcdcb34aca714\": plugin type=\"calico\" failed (delete): error getting ClusterInformation: Get \"https://10.96.0.1:443/apis/crd.projectcalico.org/v1/clusterinformations/default\": dial tcp 10.96.0.1:443: connect: connection refused" podSandboxID="47b065192ffd0b7504649af3406bb653c598c34d33430dd9e03fcdcb34aca714" Jul 10 01:38:35.338998 kubelet[2299]: E0710 01:38:35.338959 2299 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"47b065192ffd0b7504649af3406bb653c598c34d33430dd9e03fcdcb34aca714"} Jul 10 01:38:35.338998 kubelet[2299]: E0710 01:38:35.338980 2299 kubelet.go:2027] "Unhandled Error" err="failed to \"KillPodSandbox\" for \"3459c244-a1ae-43bc-ad86-239a6e665757\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"47b065192ffd0b7504649af3406bb653c598c34d33430dd9e03fcdcb34aca714\\\": plugin type=\\\"calico\\\" failed (delete): error getting ClusterInformation: Get \\\"https://10.96.0.1:443/apis/crd.projectcalico.org/v1/clusterinformations/default\\\": dial tcp 10.96.0.1:443: connect: connection refused\"" logger="UnhandledError" Jul 10 01:38:35.340085 kubelet[2299]: E0710 01:38:35.340059 2299 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"3459c244-a1ae-43bc-ad86-239a6e665757\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"47b065192ffd0b7504649af3406bb653c598c34d33430dd9e03fcdcb34aca714\\\": plugin type=\\\"calico\\\" failed (delete): error getting ClusterInformation: Get \\\"https://10.96.0.1:443/apis/crd.projectcalico.org/v1/clusterinformations/default\\\": dial tcp 10.96.0.1:443: connect: connection refused\"" pod="kube-system/coredns-7c65d6cfc9-snhl5" podUID="3459c244-a1ae-43bc-ad86-239a6e665757" Jul 10 01:38:35.399054 kubelet[2299]: E0710 01:38:35.399022 2299 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://139.178.70.102:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 139.178.70.102:6443: connect: connection refused" interval="6.4s" Jul 10 01:38:35.494220 kubelet[2299]: W0710 01:38:35.494093 2299 reflector.go:561] object-"calico-system"/"goldmane-key-pair": failed to list *v1.Secret: Get "https://139.178.70.102:6443/api/v1/namespaces/calico-system/secrets?fieldSelector=metadata.name%3Dgoldmane-key-pair&resourceVersion=612": dial tcp 139.178.70.102:6443: connect: connection refused Jul 10 01:38:35.494220 kubelet[2299]: E0710 01:38:35.494153 2299 reflector.go:158] "Unhandled Error" err="object-\"calico-system\"/\"goldmane-key-pair\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://139.178.70.102:6443/api/v1/namespaces/calico-system/secrets?fieldSelector=metadata.name%3Dgoldmane-key-pair&resourceVersion=612\": dial tcp 139.178.70.102:6443: connect: connection refused" logger="UnhandledError" Jul 10 01:38:36.108346 kubelet[2299]: W0710 01:38:36.108306 2299 reflector.go:561] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://139.178.70.102:6443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=687": dial tcp 139.178.70.102:6443: connect: connection refused Jul 10 01:38:36.108726 kubelet[2299]: E0710 01:38:36.108706 2299 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://139.178.70.102:6443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=687\": dial tcp 139.178.70.102:6443: connect: connection refused" logger="UnhandledError" Jul 10 01:38:36.640363 kubelet[2299]: W0710 01:38:36.640279 2299 reflector.go:561] object-"calico-system"/"whisker-ca-bundle": failed to list *v1.ConfigMap: Get "https://139.178.70.102:6443/api/v1/namespaces/calico-system/configmaps?fieldSelector=metadata.name%3Dwhisker-ca-bundle&resourceVersion=687": dial tcp 139.178.70.102:6443: connect: connection refused Jul 10 01:38:36.640363 kubelet[2299]: E0710 01:38:36.640335 2299 reflector.go:158] "Unhandled Error" err="object-\"calico-system\"/\"whisker-ca-bundle\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://139.178.70.102:6443/api/v1/namespaces/calico-system/configmaps?fieldSelector=metadata.name%3Dwhisker-ca-bundle&resourceVersion=687\": dial tcp 139.178.70.102:6443: connect: connection refused" logger="UnhandledError" Jul 10 01:38:36.750302 kubelet[2299]: W0710 01:38:36.750265 2299 reflector.go:561] object-"tigera-operator"/"kubernetes-services-endpoint": failed to list *v1.ConfigMap: Get "https://139.178.70.102:6443/api/v1/namespaces/tigera-operator/configmaps?fieldSelector=metadata.name%3Dkubernetes-services-endpoint&resourceVersion=687": dial tcp 139.178.70.102:6443: connect: connection refused Jul 10 01:38:36.750432 kubelet[2299]: E0710 01:38:36.750415 2299 reflector.go:158] "Unhandled Error" err="object-\"tigera-operator\"/\"kubernetes-services-endpoint\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://139.178.70.102:6443/api/v1/namespaces/tigera-operator/configmaps?fieldSelector=metadata.name%3Dkubernetes-services-endpoint&resourceVersion=687\": dial tcp 139.178.70.102:6443: connect: connection refused" logger="UnhandledError" Jul 10 01:38:37.185591 kubelet[2299]: W0710 01:38:37.185544 2299 reflector.go:561] 
object-"calico-apiserver"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://139.178.70.102:6443/api/v1/namespaces/calico-apiserver/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=687": dial tcp 139.178.70.102:6443: connect: connection refused Jul 10 01:38:37.186275 kubelet[2299]: E0710 01:38:37.185945 2299 reflector.go:158] "Unhandled Error" err="object-\"calico-apiserver\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://139.178.70.102:6443/api/v1/namespaces/calico-apiserver/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=687\": dial tcp 139.178.70.102:6443: connect: connection refused" logger="UnhandledError" Jul 10 01:38:37.495285 kubelet[2299]: I0710 01:38:37.495206 2299 status_manager.go:851] "Failed to get status for pod" podUID="9c135a1b-00bf-4e6f-87fa-9ac292c6a135" pod="tigera-operator/tigera-operator-5bf8dfcb4-twgs2" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/tigera-operator/pods/tigera-operator-5bf8dfcb4-twgs2\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:37.495692 kubelet[2299]: I0710 01:38:37.495673 2299 status_manager.go:851] "Failed to get status for pod" podUID="bb9848ea-740a-453f-b511-e75cc1983690" pod="calico-system/calico-typha-66ddcf689b-z7vqm" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/calico-system/pods/calico-typha-66ddcf689b-z7vqm\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:37.495910 kubelet[2299]: I0710 01:38:37.495895 2299 status_manager.go:851] "Failed to get status for pod" podUID="74cf1bc5-5d5a-4dc7-850a-71013984af05" pod="calico-apiserver/calico-apiserver-6d44674bc4-b2wqb" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/calico-apiserver/pods/calico-apiserver-6d44674bc4-b2wqb\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:37.496045 env[1363]: time="2025-07-10T01:38:37.496016095Z" level=info msg="StopPodSandbox for \"47772743ab806984f8c08f88def502ffe4f7fc6e574fb3f0d5b58c702f3e79ff\"" Jul 10 01:38:37.496258 env[1363]: time="2025-07-10T01:38:37.496075267Z" level=info msg="Container to stop \"9613f2d808f30fc610330008107d20687f76b9c5168e9ed86b6bbe227c241755\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 10 01:38:37.496258 env[1363]: time="2025-07-10T01:38:37.496087663Z" level=info msg="Container to stop \"846639043e3e3375edb5ca693ab2e7bd950e8cc7bfe6f9bd5620ee47769cd79c\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 10 01:38:37.496318 env[1363]: time="2025-07-10T01:38:37.496297745Z" level=info msg="StopPodSandbox for \"6503e247e079e9b1040ac4f9c23ba0f9f2bd42e5328355dba03928c27dd6e73b\"" Jul 10 01:38:37.496347 env[1363]: time="2025-07-10T01:38:37.496336711Z" level=info msg="Container to stop \"8b834cb8605645f5c7c427dfcf3dcbb8b3ef5c7c0f8f023ef0d1fbc5a5c10bd4\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 10 01:38:37.496430 kubelet[2299]: I0710 01:38:37.496414 2299 status_manager.go:851] "Failed to get status for pod" podUID="6367e512-6f46-407d-94e1-a5c573185269" pod="calico-system/calico-node-2k6z4" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/calico-system/pods/calico-node-2k6z4\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:37.496612 kubelet[2299]: I0710 01:38:37.496596 2299 status_manager.go:851] "Failed to get status for pod" podUID="5f01bcaa-ff1c-4bd5-988b-d3c60c6abdcc" 
pod="calico-system/calico-kube-controllers-5477ff879d-j2p5q" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/calico-system/pods/calico-kube-controllers-5477ff879d-j2p5q\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:37.496812 kubelet[2299]: I0710 01:38:37.496798 2299 status_manager.go:851] "Failed to get status for pod" podUID="8acd60714a0f0f6f5e038fa659db2909" pod="kube-system/kube-apiserver-localhost" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/kube-system/pods/kube-apiserver-localhost\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:37.497006 kubelet[2299]: I0710 01:38:37.496991 2299 status_manager.go:851] "Failed to get status for pod" podUID="8e8146e9-6407-49b7-8cef-e26dac385734" pod="calico-apiserver/calico-apiserver-6d44674bc4-w2f48" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/calico-apiserver/pods/calico-apiserver-6d44674bc4-w2f48\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:37.497158 kubelet[2299]: I0710 01:38:37.497145 2299 status_manager.go:851] "Failed to get status for pod" podUID="a29ef6dc-4246-436d-87dd-9c8e96247aeb" pod="kube-system/coredns-7c65d6cfc9-4k5ld" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-4k5ld\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:37.497314 kubelet[2299]: I0710 01:38:37.497301 2299 status_manager.go:851] "Failed to get status for pod" podUID="3459c244-a1ae-43bc-ad86-239a6e665757" pod="kube-system/coredns-7c65d6cfc9-snhl5" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-snhl5\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:37.497464 kubelet[2299]: I0710 01:38:37.497451 2299 status_manager.go:851] "Failed to get status for pod" podUID="ced04dc5-79ee-4a07-a568-b0fd4007f64c" pod="calico-system/goldmane-58fd7646b9-zxwst" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/calico-system/pods/goldmane-58fd7646b9-zxwst\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:37.497617 kubelet[2299]: I0710 01:38:37.497603 2299 status_manager.go:851] "Failed to get status for pod" podUID="b35b56493416c25588cb530e37ffc065" pod="kube-system/kube-scheduler-localhost" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/kube-system/pods/kube-scheduler-localhost\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:37.497779 kubelet[2299]: I0710 01:38:37.497766 2299 status_manager.go:851] "Failed to get status for pod" podUID="3f04709fe51ae4ab5abd58e8da771b74" pod="kube-system/kube-controller-manager-localhost" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/kube-system/pods/kube-controller-manager-localhost\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:37.497951 kubelet[2299]: I0710 01:38:37.497938 2299 status_manager.go:851] "Failed to get status for pod" podUID="c3f9faf5-cc25-4483-beb9-5dea29a71367" pod="calico-system/whisker-5bc4d9bd7d-nwwj6" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/calico-system/pods/whisker-5bc4d9bd7d-nwwj6\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:37.499115 kubelet[2299]: I0710 01:38:37.499092 2299 status_manager.go:851] "Failed to get status for pod" podUID="bb9848ea-740a-453f-b511-e75cc1983690" pod="calico-system/calico-typha-66ddcf689b-z7vqm" err="Get 
\"https://139.178.70.102:6443/api/v1/namespaces/calico-system/pods/calico-typha-66ddcf689b-z7vqm\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:37.499920 kubelet[2299]: I0710 01:38:37.499905 2299 status_manager.go:851] "Failed to get status for pod" podUID="74cf1bc5-5d5a-4dc7-850a-71013984af05" pod="calico-apiserver/calico-apiserver-6d44674bc4-b2wqb" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/calico-apiserver/pods/calico-apiserver-6d44674bc4-b2wqb\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:37.500118 kubelet[2299]: I0710 01:38:37.500104 2299 status_manager.go:851] "Failed to get status for pod" podUID="9c135a1b-00bf-4e6f-87fa-9ac292c6a135" pod="tigera-operator/tigera-operator-5bf8dfcb4-twgs2" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/tigera-operator/pods/tigera-operator-5bf8dfcb4-twgs2\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:37.500317 kubelet[2299]: I0710 01:38:37.500290 2299 status_manager.go:851] "Failed to get status for pod" podUID="6367e512-6f46-407d-94e1-a5c573185269" pod="calico-system/calico-node-2k6z4" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/calico-system/pods/calico-node-2k6z4\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:37.500450 kubelet[2299]: I0710 01:38:37.500435 2299 status_manager.go:851] "Failed to get status for pod" podUID="5f01bcaa-ff1c-4bd5-988b-d3c60c6abdcc" pod="calico-system/calico-kube-controllers-5477ff879d-j2p5q" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/calico-system/pods/calico-kube-controllers-5477ff879d-j2p5q\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:37.500632 kubelet[2299]: I0710 01:38:37.500618 2299 status_manager.go:851] "Failed to get status for pod" podUID="8e8146e9-6407-49b7-8cef-e26dac385734" pod="calico-apiserver/calico-apiserver-6d44674bc4-w2f48" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/calico-apiserver/pods/calico-apiserver-6d44674bc4-w2f48\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:37.500836 kubelet[2299]: I0710 01:38:37.500812 2299 status_manager.go:851] "Failed to get status for pod" podUID="a29ef6dc-4246-436d-87dd-9c8e96247aeb" pod="kube-system/coredns-7c65d6cfc9-4k5ld" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-4k5ld\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:37.500980 kubelet[2299]: I0710 01:38:37.500966 2299 status_manager.go:851] "Failed to get status for pod" podUID="3459c244-a1ae-43bc-ad86-239a6e665757" pod="kube-system/coredns-7c65d6cfc9-snhl5" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-snhl5\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:37.501138 kubelet[2299]: I0710 01:38:37.501125 2299 status_manager.go:851] "Failed to get status for pod" podUID="ced04dc5-79ee-4a07-a568-b0fd4007f64c" pod="calico-system/goldmane-58fd7646b9-zxwst" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/calico-system/pods/goldmane-58fd7646b9-zxwst\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:37.501301 kubelet[2299]: I0710 01:38:37.501284 2299 status_manager.go:851] "Failed to get status for pod" podUID="8acd60714a0f0f6f5e038fa659db2909" pod="kube-system/kube-apiserver-localhost" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/kube-system/pods/kube-apiserver-localhost\": dial tcp 
139.178.70.102:6443: connect: connection refused" Jul 10 01:38:37.501433 kubelet[2299]: I0710 01:38:37.501421 2299 status_manager.go:851] "Failed to get status for pod" podUID="3f04709fe51ae4ab5abd58e8da771b74" pod="kube-system/kube-controller-manager-localhost" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/kube-system/pods/kube-controller-manager-localhost\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:37.501589 kubelet[2299]: I0710 01:38:37.501577 2299 status_manager.go:851] "Failed to get status for pod" podUID="c3f9faf5-cc25-4483-beb9-5dea29a71367" pod="calico-system/whisker-5bc4d9bd7d-nwwj6" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/calico-system/pods/whisker-5bc4d9bd7d-nwwj6\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:37.501827 kubelet[2299]: I0710 01:38:37.501810 2299 status_manager.go:851] "Failed to get status for pod" podUID="b35b56493416c25588cb530e37ffc065" pod="kube-system/kube-scheduler-localhost" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/kube-system/pods/kube-scheduler-localhost\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:37.518349 env[1363]: time="2025-07-10T01:38:37.518305894Z" level=error msg="StopPodSandbox for \"47772743ab806984f8c08f88def502ffe4f7fc6e574fb3f0d5b58c702f3e79ff\" failed" error="failed to destroy network for sandbox \"47772743ab806984f8c08f88def502ffe4f7fc6e574fb3f0d5b58c702f3e79ff\": plugin type=\"calico\" failed (delete): error getting ClusterInformation: Get \"https://10.96.0.1:443/apis/crd.projectcalico.org/v1/clusterinformations/default\": dial tcp 10.96.0.1:443: connect: connection refused" Jul 10 01:38:37.518479 kubelet[2299]: E0710 01:38:37.518444 2299 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"47772743ab806984f8c08f88def502ffe4f7fc6e574fb3f0d5b58c702f3e79ff\": plugin type=\"calico\" failed (delete): error getting ClusterInformation: Get \"https://10.96.0.1:443/apis/crd.projectcalico.org/v1/clusterinformations/default\": dial tcp 10.96.0.1:443: connect: connection refused" podSandboxID="47772743ab806984f8c08f88def502ffe4f7fc6e574fb3f0d5b58c702f3e79ff" Jul 10 01:38:37.518558 kubelet[2299]: E0710 01:38:37.518481 2299 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"47772743ab806984f8c08f88def502ffe4f7fc6e574fb3f0d5b58c702f3e79ff"} Jul 10 01:38:37.518558 kubelet[2299]: E0710 01:38:37.518515 2299 kubelet.go:2027] "Unhandled Error" err="failed to \"KillPodSandbox\" for \"c3f9faf5-cc25-4483-beb9-5dea29a71367\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"47772743ab806984f8c08f88def502ffe4f7fc6e574fb3f0d5b58c702f3e79ff\\\": plugin type=\\\"calico\\\" failed (delete): error getting ClusterInformation: Get \\\"https://10.96.0.1:443/apis/crd.projectcalico.org/v1/clusterinformations/default\\\": dial tcp 10.96.0.1:443: connect: connection refused\"" logger="UnhandledError" Jul 10 01:38:37.518840 env[1363]: time="2025-07-10T01:38:37.518816346Z" level=error msg="StopPodSandbox for \"6503e247e079e9b1040ac4f9c23ba0f9f2bd42e5328355dba03928c27dd6e73b\" failed" error="failed to destroy network for sandbox \"6503e247e079e9b1040ac4f9c23ba0f9f2bd42e5328355dba03928c27dd6e73b\": plugin type=\"calico\" failed (delete): error getting ClusterInformation: Get \"https://10.96.0.1:443/apis/crd.projectcalico.org/v1/clusterinformations/default\": dial tcp 
10.96.0.1:443: connect: connection refused" Jul 10 01:38:37.519020 kubelet[2299]: E0710 01:38:37.518958 2299 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"6503e247e079e9b1040ac4f9c23ba0f9f2bd42e5328355dba03928c27dd6e73b\": plugin type=\"calico\" failed (delete): error getting ClusterInformation: Get \"https://10.96.0.1:443/apis/crd.projectcalico.org/v1/clusterinformations/default\": dial tcp 10.96.0.1:443: connect: connection refused" podSandboxID="6503e247e079e9b1040ac4f9c23ba0f9f2bd42e5328355dba03928c27dd6e73b" Jul 10 01:38:37.519020 kubelet[2299]: E0710 01:38:37.518980 2299 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"6503e247e079e9b1040ac4f9c23ba0f9f2bd42e5328355dba03928c27dd6e73b"} Jul 10 01:38:37.519020 kubelet[2299]: E0710 01:38:37.519000 2299 kubelet.go:2027] "Unhandled Error" err="failed to \"KillPodSandbox\" for \"5f01bcaa-ff1c-4bd5-988b-d3c60c6abdcc\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6503e247e079e9b1040ac4f9c23ba0f9f2bd42e5328355dba03928c27dd6e73b\\\": plugin type=\\\"calico\\\" failed (delete): error getting ClusterInformation: Get \\\"https://10.96.0.1:443/apis/crd.projectcalico.org/v1/clusterinformations/default\\\": dial tcp 10.96.0.1:443: connect: connection refused\"" logger="UnhandledError" Jul 10 01:38:37.520133 kubelet[2299]: E0710 01:38:37.520087 2299 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"c3f9faf5-cc25-4483-beb9-5dea29a71367\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"47772743ab806984f8c08f88def502ffe4f7fc6e574fb3f0d5b58c702f3e79ff\\\": plugin type=\\\"calico\\\" failed (delete): error getting ClusterInformation: Get \\\"https://10.96.0.1:443/apis/crd.projectcalico.org/v1/clusterinformations/default\\\": dial tcp 10.96.0.1:443: connect: connection refused\"" pod="calico-system/whisker-5bc4d9bd7d-nwwj6" podUID="c3f9faf5-cc25-4483-beb9-5dea29a71367" Jul 10 01:38:37.520133 kubelet[2299]: E0710 01:38:37.520117 2299 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"5f01bcaa-ff1c-4bd5-988b-d3c60c6abdcc\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6503e247e079e9b1040ac4f9c23ba0f9f2bd42e5328355dba03928c27dd6e73b\\\": plugin type=\\\"calico\\\" failed (delete): error getting ClusterInformation: Get \\\"https://10.96.0.1:443/apis/crd.projectcalico.org/v1/clusterinformations/default\\\": dial tcp 10.96.0.1:443: connect: connection refused\"" pod="calico-system/calico-kube-controllers-5477ff879d-j2p5q" podUID="5f01bcaa-ff1c-4bd5-988b-d3c60c6abdcc" Jul 10 01:38:37.672584 kubelet[2299]: W0710 01:38:37.672506 2299 reflector.go:561] object-"calico-system"/"whisker-backend-key-pair": failed to list *v1.Secret: Get "https://139.178.70.102:6443/api/v1/namespaces/calico-system/secrets?fieldSelector=metadata.name%3Dwhisker-backend-key-pair&resourceVersion=612": dial tcp 139.178.70.102:6443: connect: connection refused Jul 10 01:38:37.672584 kubelet[2299]: E0710 01:38:37.672560 2299 reflector.go:158] "Unhandled Error" err="object-\"calico-system\"/\"whisker-backend-key-pair\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get 
\"https://139.178.70.102:6443/api/v1/namespaces/calico-system/secrets?fieldSelector=metadata.name%3Dwhisker-backend-key-pair&resourceVersion=612\": dial tcp 139.178.70.102:6443: connect: connection refused" logger="UnhandledError" Jul 10 01:38:37.878212 kubelet[2299]: W0710 01:38:37.878109 2299 reflector.go:561] object-"calico-system"/"goldmane": failed to list *v1.ConfigMap: Get "https://139.178.70.102:6443/api/v1/namespaces/calico-system/configmaps?fieldSelector=metadata.name%3Dgoldmane&resourceVersion=687": dial tcp 139.178.70.102:6443: connect: connection refused Jul 10 01:38:37.878212 kubelet[2299]: E0710 01:38:37.878176 2299 reflector.go:158] "Unhandled Error" err="object-\"calico-system\"/\"goldmane\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://139.178.70.102:6443/api/v1/namespaces/calico-system/configmaps?fieldSelector=metadata.name%3Dgoldmane&resourceVersion=687\": dial tcp 139.178.70.102:6443: connect: connection refused" logger="UnhandledError" Jul 10 01:38:37.916347 systemd[1]: Started sshd@33-139.178.70.102:22-139.178.68.195:52344.service. Jul 10 01:38:37.917566 kernel: kauditd_printk_skb: 1 callbacks suppressed Jul 10 01:38:37.917603 kernel: audit: type=1130 audit(1752111517.914:695): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@33-139.178.70.102:22-139.178.68.195:52344 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 01:38:37.914000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@33-139.178.70.102:22-139.178.68.195:52344 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 01:38:37.955895 sshd[8966]: Accepted publickey for core from 139.178.68.195 port 52344 ssh2: RSA SHA256:NVpdRDPpwzjVTzi6orhe1cA9BvcYymCSReGH8myOy/Q Jul 10 01:38:37.954000 audit[8966]: USER_ACCT pid=8966 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:38:37.959653 kernel: audit: type=1101 audit(1752111517.954:696): pid=8966 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:38:37.958000 audit[8966]: CRED_ACQ pid=8966 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:38:37.960164 sshd[8966]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 10 01:38:37.965578 kernel: audit: type=1103 audit(1752111517.958:697): pid=8966 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:38:37.965619 kernel: audit: type=1006 audit(1752111517.958:698): pid=8966 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=35 res=1 Jul 10 01:38:37.965675 kernel: audit: type=1300 audit(1752111517.958:698): arch=c000003e syscall=1 
success=yes exit=3 a0=5 a1=7ffefa4ee1f0 a2=3 a3=0 items=0 ppid=1 pid=8966 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=35 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:38:37.958000 audit[8966]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffefa4ee1f0 a2=3 a3=0 items=0 ppid=1 pid=8966 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=35 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:38:37.958000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jul 10 01:38:37.970620 kernel: audit: type=1327 audit(1752111517.958:698): proctitle=737368643A20636F7265205B707269765D Jul 10 01:38:37.971727 systemd-logind[1351]: New session 35 of user core. Jul 10 01:38:37.972044 systemd[1]: Started session-35.scope. Jul 10 01:38:37.973000 audit[8966]: USER_START pid=8966 uid=0 auid=500 ses=35 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:38:37.977000 audit[8969]: CRED_ACQ pid=8969 uid=0 auid=500 ses=35 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:38:37.982287 kernel: audit: type=1105 audit(1752111517.973:699): pid=8966 uid=0 auid=500 ses=35 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:38:37.982325 kernel: audit: type=1103 audit(1752111517.977:700): pid=8969 uid=0 auid=500 ses=35 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:38:37.991866 kubelet[2299]: W0710 01:38:37.991808 2299 reflector.go:561] object-"calico-system"/"node-certs": failed to list *v1.Secret: Get "https://139.178.70.102:6443/api/v1/namespaces/calico-system/secrets?fieldSelector=metadata.name%3Dnode-certs&resourceVersion=612": dial tcp 139.178.70.102:6443: connect: connection refused Jul 10 01:38:37.991866 kubelet[2299]: E0710 01:38:37.991847 2299 reflector.go:158] "Unhandled Error" err="object-\"calico-system\"/\"node-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://139.178.70.102:6443/api/v1/namespaces/calico-system/secrets?fieldSelector=metadata.name%3Dnode-certs&resourceVersion=612\": dial tcp 139.178.70.102:6443: connect: connection refused" logger="UnhandledError" Jul 10 01:38:38.059068 sshd[8966]: pam_unix(sshd:session): session closed for user core Jul 10 01:38:38.058000 audit[8966]: USER_END pid=8966 uid=0 auid=500 ses=35 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:38:38.063655 kernel: audit: type=1106 audit(1752111518.058:701): pid=8966 uid=0 auid=500 ses=35 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close 
grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:38:38.063767 systemd[1]: sshd@33-139.178.70.102:22-139.178.68.195:52344.service: Deactivated successfully. Jul 10 01:38:38.064248 systemd[1]: session-35.scope: Deactivated successfully. Jul 10 01:38:38.058000 audit[8966]: CRED_DISP pid=8966 uid=0 auid=500 ses=35 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:38:38.064733 systemd-logind[1351]: Session 35 logged out. Waiting for processes to exit. Jul 10 01:38:38.067699 kernel: audit: type=1104 audit(1752111518.058:702): pid=8966 uid=0 auid=500 ses=35 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:38:38.062000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@33-139.178.70.102:22-139.178.68.195:52344 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 01:38:38.068067 systemd-logind[1351]: Removed session 35. Jul 10 01:38:38.601391 kubelet[2299]: W0710 01:38:38.601338 2299 reflector.go:561] object-"calico-apiserver"/"calico-apiserver-certs": failed to list *v1.Secret: Get "https://139.178.70.102:6443/api/v1/namespaces/calico-apiserver/secrets?fieldSelector=metadata.name%3Dcalico-apiserver-certs&resourceVersion=612": dial tcp 139.178.70.102:6443: connect: connection refused Jul 10 01:38:38.601769 kubelet[2299]: E0710 01:38:38.601748 2299 reflector.go:158] "Unhandled Error" err="object-\"calico-apiserver\"/\"calico-apiserver-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://139.178.70.102:6443/api/v1/namespaces/calico-apiserver/secrets?fieldSelector=metadata.name%3Dcalico-apiserver-certs&resourceVersion=612\": dial tcp 139.178.70.102:6443: connect: connection refused" logger="UnhandledError" Jul 10 01:38:38.994426 kubelet[2299]: W0710 01:38:38.994329 2299 reflector.go:561] object-"tigera-operator"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://139.178.70.102:6443/api/v1/namespaces/tigera-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=687": dial tcp 139.178.70.102:6443: connect: connection refused Jul 10 01:38:38.994685 kubelet[2299]: E0710 01:38:38.994635 2299 reflector.go:158] "Unhandled Error" err="object-\"tigera-operator\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://139.178.70.102:6443/api/v1/namespaces/tigera-operator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=687\": dial tcp 139.178.70.102:6443: connect: connection refused" logger="UnhandledError" Jul 10 01:38:39.006242 kubelet[2299]: W0710 01:38:39.006217 2299 reflector.go:561] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: Get "https://139.178.70.102:6443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dcoredns&resourceVersion=687": dial tcp 139.178.70.102:6443: connect: connection refused Jul 10 01:38:39.006340 kubelet[2299]: E0710 01:38:39.006324 2299 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"coredns\": Failed to watch 
*v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://139.178.70.102:6443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dcoredns&resourceVersion=687\": dial tcp 139.178.70.102:6443: connect: connection refused" logger="UnhandledError" Jul 10 01:38:39.077497 kubelet[2299]: W0710 01:38:39.077468 2299 reflector.go:561] object-"calico-system"/"typha-certs": failed to list *v1.Secret: Get "https://139.178.70.102:6443/api/v1/namespaces/calico-system/secrets?fieldSelector=metadata.name%3Dtypha-certs&resourceVersion=612": dial tcp 139.178.70.102:6443: connect: connection refused Jul 10 01:38:39.077613 kubelet[2299]: E0710 01:38:39.077597 2299 reflector.go:158] "Unhandled Error" err="object-\"calico-system\"/\"typha-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://139.178.70.102:6443/api/v1/namespaces/calico-system/secrets?fieldSelector=metadata.name%3Dtypha-certs&resourceVersion=612\": dial tcp 139.178.70.102:6443: connect: connection refused" logger="UnhandledError" Jul 10 01:38:39.364399 kubelet[2299]: W0710 01:38:39.364351 2299 reflector.go:561] object-"calico-system"/"goldmane-ca-bundle": failed to list *v1.ConfigMap: Get "https://139.178.70.102:6443/api/v1/namespaces/calico-system/configmaps?fieldSelector=metadata.name%3Dgoldmane-ca-bundle&resourceVersion=687": dial tcp 139.178.70.102:6443: connect: connection refused Jul 10 01:38:39.364586 kubelet[2299]: E0710 01:38:39.364569 2299 reflector.go:158] "Unhandled Error" err="object-\"calico-system\"/\"goldmane-ca-bundle\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://139.178.70.102:6443/api/v1/namespaces/calico-system/configmaps?fieldSelector=metadata.name%3Dgoldmane-ca-bundle&resourceVersion=687\": dial tcp 139.178.70.102:6443: connect: connection refused" logger="UnhandledError" Jul 10 01:38:39.494329 kubelet[2299]: I0710 01:38:39.494296 2299 scope.go:117] "RemoveContainer" containerID="1d85ec74d241860eeadf05dad7e3fcac3b836bb5b8e411f5de5ce4e21f282532" Jul 10 01:38:39.838146 kubelet[2299]: E0710 01:38:39.838110 2299 kubelet_node_status.go:535] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"NetworkUnavailable\\\"},{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-07-10T01:38:39Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-07-10T01:38:39Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-07-10T01:38:39Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-07-10T01:38:39Z\\\",\\\"lastTransitionTime\\\":\\\"2025-07-10T01:38:39Z\\\",\\\"message\\\":\\\"kubelet is posting ready status\\\",\\\"reason\\\":\\\"KubeletReady\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"}]}}\" for node \"localhost\": Patch \"https://139.178.70.102:6443/api/v1/nodes/localhost/status?timeout=10s\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:39.838474 kubelet[2299]: E0710 01:38:39.838258 2299 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"localhost\": Get \"https://139.178.70.102:6443/api/v1/nodes/localhost?timeout=10s\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:39.838474 kubelet[2299]: E0710 01:38:39.838372 2299 kubelet_node_status.go:535] "Error updating 
node status, will retry" err="error getting node \"localhost\": Get \"https://139.178.70.102:6443/api/v1/nodes/localhost?timeout=10s\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:39.838538 kubelet[2299]: E0710 01:38:39.838483 2299 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"localhost\": Get \"https://139.178.70.102:6443/api/v1/nodes/localhost?timeout=10s\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:39.838608 kubelet[2299]: E0710 01:38:39.838589 2299 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"localhost\": Get \"https://139.178.70.102:6443/api/v1/nodes/localhost?timeout=10s\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:39.838608 kubelet[2299]: E0710 01:38:39.838604 2299 kubelet_node_status.go:522] "Unable to update node status" err="update node status exceeds retry count" Jul 10 01:38:40.495440 kubelet[2299]: I0710 01:38:40.495401 2299 status_manager.go:851] "Failed to get status for pod" podUID="6367e512-6f46-407d-94e1-a5c573185269" pod="calico-system/calico-node-2k6z4" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/calico-system/pods/calico-node-2k6z4\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:40.495665 kubelet[2299]: I0710 01:38:40.495541 2299 status_manager.go:851] "Failed to get status for pod" podUID="5f01bcaa-ff1c-4bd5-988b-d3c60c6abdcc" pod="calico-system/calico-kube-controllers-5477ff879d-j2p5q" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/calico-system/pods/calico-kube-controllers-5477ff879d-j2p5q\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:40.496236 kubelet[2299]: E0710 01:38:40.496204 2299 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:tigera-operator,Image:quay.io/tigera/operator:v1.38.3,Command:[operator],Args:[-manage-crds=true],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:WATCH_NAMESPACE,Value:,ValueFrom:nil,},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:OPERATOR_NAME,Value:tigera-operator,ValueFrom:nil,},EnvVar{Name:TIGERA_OPERATOR_INIT_IMAGE_VERSION,Value:v1.38.3,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:var-lib-calico,ReadOnly:true,MountPath:/var/lib/calico,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mj7k8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:kubernetes-services-endpoint,},Optional:*true,},SecretRef:nil,},},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod tigera-operator-5bf8dfcb4-twgs2_tigera-operator(9c135a1b-00bf-4e6f-87fa-9ac292c6a135): CreateContainerConfigError: failed to sync configmap cache: timed out waiting for 
the condition" logger="UnhandledError" Jul 10 01:38:40.496453 kubelet[2299]: I0710 01:38:40.496434 2299 status_manager.go:851] "Failed to get status for pod" podUID="8acd60714a0f0f6f5e038fa659db2909" pod="kube-system/kube-apiserver-localhost" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/kube-system/pods/kube-apiserver-localhost\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:40.497097 kubelet[2299]: I0710 01:38:40.497066 2299 status_manager.go:851] "Failed to get status for pod" podUID="8e8146e9-6407-49b7-8cef-e26dac385734" pod="calico-apiserver/calico-apiserver-6d44674bc4-w2f48" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/calico-apiserver/pods/calico-apiserver-6d44674bc4-w2f48\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:40.497317 kubelet[2299]: E0710 01:38:40.497295 2299 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tigera-operator\" with CreateContainerConfigError: \"failed to sync configmap cache: timed out waiting for the condition\"" pod="tigera-operator/tigera-operator-5bf8dfcb4-twgs2" podUID="9c135a1b-00bf-4e6f-87fa-9ac292c6a135" Jul 10 01:38:40.497388 kubelet[2299]: I0710 01:38:40.497366 2299 status_manager.go:851] "Failed to get status for pod" podUID="a29ef6dc-4246-436d-87dd-9c8e96247aeb" pod="kube-system/coredns-7c65d6cfc9-4k5ld" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-4k5ld\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:40.497557 kubelet[2299]: I0710 01:38:40.497537 2299 status_manager.go:851] "Failed to get status for pod" podUID="3459c244-a1ae-43bc-ad86-239a6e665757" pod="kube-system/coredns-7c65d6cfc9-snhl5" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-snhl5\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:40.497719 kubelet[2299]: I0710 01:38:40.497700 2299 status_manager.go:851] "Failed to get status for pod" podUID="ced04dc5-79ee-4a07-a568-b0fd4007f64c" pod="calico-system/goldmane-58fd7646b9-zxwst" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/calico-system/pods/goldmane-58fd7646b9-zxwst\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:40.497854 kubelet[2299]: I0710 01:38:40.497836 2299 status_manager.go:851] "Failed to get status for pod" podUID="b35b56493416c25588cb530e37ffc065" pod="kube-system/kube-scheduler-localhost" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/kube-system/pods/kube-scheduler-localhost\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:40.497992 kubelet[2299]: I0710 01:38:40.497974 2299 status_manager.go:851] "Failed to get status for pod" podUID="3f04709fe51ae4ab5abd58e8da771b74" pod="kube-system/kube-controller-manager-localhost" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/kube-system/pods/kube-controller-manager-localhost\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:40.498125 kubelet[2299]: I0710 01:38:40.498106 2299 status_manager.go:851] "Failed to get status for pod" podUID="c3f9faf5-cc25-4483-beb9-5dea29a71367" pod="calico-system/whisker-5bc4d9bd7d-nwwj6" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/calico-system/pods/whisker-5bc4d9bd7d-nwwj6\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:40.498259 kubelet[2299]: I0710 01:38:40.498239 2299 status_manager.go:851] "Failed to get status for pod" 
podUID="9c135a1b-00bf-4e6f-87fa-9ac292c6a135" pod="tigera-operator/tigera-operator-5bf8dfcb4-twgs2" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/tigera-operator/pods/tigera-operator-5bf8dfcb4-twgs2\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:40.498389 kubelet[2299]: I0710 01:38:40.498371 2299 status_manager.go:851] "Failed to get status for pod" podUID="bb9848ea-740a-453f-b511-e75cc1983690" pod="calico-system/calico-typha-66ddcf689b-z7vqm" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/calico-system/pods/calico-typha-66ddcf689b-z7vqm\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:40.498521 kubelet[2299]: I0710 01:38:40.498503 2299 status_manager.go:851] "Failed to get status for pod" podUID="74cf1bc5-5d5a-4dc7-850a-71013984af05" pod="calico-apiserver/calico-apiserver-6d44674bc4-b2wqb" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/calico-apiserver/pods/calico-apiserver-6d44674bc4-b2wqb\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:40.619015 kubelet[2299]: W0710 01:38:40.618969 2299 reflector.go:561] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: Get "https://139.178.70.102:6443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dkube-proxy&resourceVersion=687": dial tcp 139.178.70.102:6443: connect: connection refused Jul 10 01:38:40.619176 kubelet[2299]: E0710 01:38:40.619159 2299 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-proxy\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://139.178.70.102:6443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dkube-proxy&resourceVersion=687\": dial tcp 139.178.70.102:6443: connect: connection refused" logger="UnhandledError" Jul 10 01:38:41.625795 kubelet[2299]: E0710 01:38:41.625769 2299 configmap.go:193] Couldn't get configMap calico-system/goldmane: failed to sync configmap cache: timed out waiting for the condition Jul 10 01:38:41.626122 kubelet[2299]: E0710 01:38:41.626112 2299 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ced04dc5-79ee-4a07-a568-b0fd4007f64c-config podName:ced04dc5-79ee-4a07-a568-b0fd4007f64c nodeName:}" failed. No retries permitted until 2025-07-10 01:38:57.626096593 +0000 UTC m=+1557.494523476 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/ced04dc5-79ee-4a07-a568-b0fd4007f64c-config") pod "goldmane-58fd7646b9-zxwst" (UID: "ced04dc5-79ee-4a07-a568-b0fd4007f64c") : failed to sync configmap cache: timed out waiting for the condition Jul 10 01:38:41.727023 kubelet[2299]: E0710 01:38:41.726993 2299 configmap.go:193] Couldn't get configMap calico-system/goldmane-ca-bundle: failed to sync configmap cache: timed out waiting for the condition Jul 10 01:38:41.727222 kubelet[2299]: E0710 01:38:41.727209 2299 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ced04dc5-79ee-4a07-a568-b0fd4007f64c-goldmane-ca-bundle podName:ced04dc5-79ee-4a07-a568-b0fd4007f64c nodeName:}" failed. No retries permitted until 2025-07-10 01:38:57.727193934 +0000 UTC m=+1557.595620822 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "goldmane-ca-bundle" (UniqueName: "kubernetes.io/configmap/ced04dc5-79ee-4a07-a568-b0fd4007f64c-goldmane-ca-bundle") pod "goldmane-58fd7646b9-zxwst" (UID: "ced04dc5-79ee-4a07-a568-b0fd4007f64c") : failed to sync configmap cache: timed out waiting for the condition Jul 10 01:38:41.800355 kubelet[2299]: E0710 01:38:41.800321 2299 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.102:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 139.178.70.102:6443: connect: connection refused" interval="7s" Jul 10 01:38:41.827831 kubelet[2299]: E0710 01:38:41.827811 2299 configmap.go:193] Couldn't get configMap calico-system/tigera-ca-bundle: failed to sync configmap cache: timed out waiting for the condition Jul 10 01:38:41.827928 kubelet[2299]: E0710 01:38:41.827910 2299 secret.go:189] Couldn't get secret calico-apiserver/calico-apiserver-certs: failed to sync secret cache: timed out waiting for the condition Jul 10 01:38:41.827983 kubelet[2299]: E0710 01:38:41.827955 2299 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8e8146e9-6407-49b7-8cef-e26dac385734-calico-apiserver-certs podName:8e8146e9-6407-49b7-8cef-e26dac385734 nodeName:}" failed. No retries permitted until 2025-07-10 01:38:57.827942035 +0000 UTC m=+1557.696368918 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "calico-apiserver-certs" (UniqueName: "kubernetes.io/secret/8e8146e9-6407-49b7-8cef-e26dac385734-calico-apiserver-certs") pod "calico-apiserver-6d44674bc4-w2f48" (UID: "8e8146e9-6407-49b7-8cef-e26dac385734") : failed to sync secret cache: timed out waiting for the condition Jul 10 01:38:41.827983 kubelet[2299]: E0710 01:38:41.827973 2299 secret.go:189] Couldn't get secret calico-apiserver/calico-apiserver-certs: failed to sync secret cache: timed out waiting for the condition Jul 10 01:38:41.828086 kubelet[2299]: E0710 01:38:41.827990 2299 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/74cf1bc5-5d5a-4dc7-850a-71013984af05-calico-apiserver-certs podName:74cf1bc5-5d5a-4dc7-850a-71013984af05 nodeName:}" failed. No retries permitted until 2025-07-10 01:38:57.827983816 +0000 UTC m=+1557.696410700 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "calico-apiserver-certs" (UniqueName: "kubernetes.io/secret/74cf1bc5-5d5a-4dc7-850a-71013984af05-calico-apiserver-certs") pod "calico-apiserver-6d44674bc4-b2wqb" (UID: "74cf1bc5-5d5a-4dc7-850a-71013984af05") : failed to sync secret cache: timed out waiting for the condition Jul 10 01:38:41.828086 kubelet[2299]: E0710 01:38:41.828002 2299 configmap.go:193] Couldn't get configMap kube-system/coredns: failed to sync configmap cache: timed out waiting for the condition Jul 10 01:38:41.828086 kubelet[2299]: E0710 01:38:41.828018 2299 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/3459c244-a1ae-43bc-ad86-239a6e665757-config-volume podName:3459c244-a1ae-43bc-ad86-239a6e665757 nodeName:}" failed. No retries permitted until 2025-07-10 01:38:57.828012965 +0000 UTC m=+1557.696439849 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/3459c244-a1ae-43bc-ad86-239a6e665757-config-volume") pod "coredns-7c65d6cfc9-snhl5" (UID: "3459c244-a1ae-43bc-ad86-239a6e665757") : failed to sync configmap cache: timed out waiting for the condition Jul 10 01:38:41.828086 kubelet[2299]: E0710 01:38:41.828028 2299 configmap.go:193] Couldn't get configMap kube-system/coredns: failed to sync configmap cache: timed out waiting for the condition Jul 10 01:38:41.828086 kubelet[2299]: E0710 01:38:41.828045 2299 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/a29ef6dc-4246-436d-87dd-9c8e96247aeb-config-volume podName:a29ef6dc-4246-436d-87dd-9c8e96247aeb nodeName:}" failed. No retries permitted until 2025-07-10 01:38:57.82803964 +0000 UTC m=+1557.696466523 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/a29ef6dc-4246-436d-87dd-9c8e96247aeb-config-volume") pod "coredns-7c65d6cfc9-4k5ld" (UID: "a29ef6dc-4246-436d-87dd-9c8e96247aeb") : failed to sync configmap cache: timed out waiting for the condition Jul 10 01:38:41.828298 kubelet[2299]: E0710 01:38:41.827821 2299 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Jul 10 01:38:41.828298 kubelet[2299]: E0710 01:38:41.828134 2299 projected.go:194] Error preparing data for projected volume kube-api-access-wpcvh for pod kube-system/kube-proxy-rxvps: failed to sync configmap cache: timed out waiting for the condition Jul 10 01:38:41.828298 kubelet[2299]: E0710 01:38:41.828154 2299 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/22eb6a01-1430-4380-b1df-6cb2ed0c8d8b-kube-api-access-wpcvh podName:22eb6a01-1430-4380-b1df-6cb2ed0c8d8b nodeName:}" failed. No retries permitted until 2025-07-10 01:38:57.828148187 +0000 UTC m=+1557.696575070 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-wpcvh" (UniqueName: "kubernetes.io/projected/22eb6a01-1430-4380-b1df-6cb2ed0c8d8b-kube-api-access-wpcvh") pod "kube-proxy-rxvps" (UID: "22eb6a01-1430-4380-b1df-6cb2ed0c8d8b") : failed to sync configmap cache: timed out waiting for the condition Jul 10 01:38:41.828298 kubelet[2299]: E0710 01:38:41.827830 2299 projected.go:288] Couldn't get configMap calico-apiserver/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Jul 10 01:38:41.828298 kubelet[2299]: E0710 01:38:41.828166 2299 projected.go:194] Error preparing data for projected volume kube-api-access-r5vvj for pod calico-apiserver/calico-apiserver-6d44674bc4-w2f48: failed to sync configmap cache: timed out waiting for the condition Jul 10 01:38:41.828298 kubelet[2299]: E0710 01:38:41.828180 2299 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/8e8146e9-6407-49b7-8cef-e26dac385734-kube-api-access-r5vvj podName:8e8146e9-6407-49b7-8cef-e26dac385734 nodeName:}" failed. No retries permitted until 2025-07-10 01:38:57.828175603 +0000 UTC m=+1557.696602486 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-r5vvj" (UniqueName: "kubernetes.io/projected/8e8146e9-6407-49b7-8cef-e26dac385734-kube-api-access-r5vvj") pod "calico-apiserver-6d44674bc4-w2f48" (UID: "8e8146e9-6407-49b7-8cef-e26dac385734") : failed to sync configmap cache: timed out waiting for the condition Jul 10 01:38:41.828298 kubelet[2299]: E0710 01:38:41.827838 2299 secret.go:189] Couldn't get secret calico-system/typha-certs: failed to sync secret cache: timed out waiting for the condition Jul 10 01:38:41.828298 kubelet[2299]: E0710 01:38:41.828200 2299 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bb9848ea-740a-453f-b511-e75cc1983690-typha-certs podName:bb9848ea-740a-453f-b511-e75cc1983690 nodeName:}" failed. No retries permitted until 2025-07-10 01:38:57.828195082 +0000 UTC m=+1557.696621965 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "typha-certs" (UniqueName: "kubernetes.io/secret/bb9848ea-740a-453f-b511-e75cc1983690-typha-certs") pod "calico-typha-66ddcf689b-z7vqm" (UID: "bb9848ea-740a-453f-b511-e75cc1983690") : failed to sync secret cache: timed out waiting for the condition Jul 10 01:38:41.828298 kubelet[2299]: E0710 01:38:41.827843 2299 secret.go:189] Couldn't get secret calico-system/goldmane-key-pair: failed to sync secret cache: timed out waiting for the condition Jul 10 01:38:41.828298 kubelet[2299]: E0710 01:38:41.828219 2299 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ced04dc5-79ee-4a07-a568-b0fd4007f64c-goldmane-key-pair podName:ced04dc5-79ee-4a07-a568-b0fd4007f64c nodeName:}" failed. No retries permitted until 2025-07-10 01:38:57.828214679 +0000 UTC m=+1557.696641562 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "goldmane-key-pair" (UniqueName: "kubernetes.io/secret/ced04dc5-79ee-4a07-a568-b0fd4007f64c-goldmane-key-pair") pod "goldmane-58fd7646b9-zxwst" (UID: "ced04dc5-79ee-4a07-a568-b0fd4007f64c") : failed to sync secret cache: timed out waiting for the condition Jul 10 01:38:41.828298 kubelet[2299]: E0710 01:38:41.827849 2299 projected.go:288] Couldn't get configMap calico-apiserver/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Jul 10 01:38:41.828298 kubelet[2299]: E0710 01:38:41.828229 2299 projected.go:194] Error preparing data for projected volume kube-api-access-47zqf for pod calico-apiserver/calico-apiserver-6d44674bc4-b2wqb: failed to sync configmap cache: timed out waiting for the condition Jul 10 01:38:41.828298 kubelet[2299]: E0710 01:38:41.828246 2299 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/74cf1bc5-5d5a-4dc7-850a-71013984af05-kube-api-access-47zqf podName:74cf1bc5-5d5a-4dc7-850a-71013984af05 nodeName:}" failed. No retries permitted until 2025-07-10 01:38:57.828239364 +0000 UTC m=+1557.696666247 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-47zqf" (UniqueName: "kubernetes.io/projected/74cf1bc5-5d5a-4dc7-850a-71013984af05-kube-api-access-47zqf") pod "calico-apiserver-6d44674bc4-b2wqb" (UID: "74cf1bc5-5d5a-4dc7-850a-71013984af05") : failed to sync configmap cache: timed out waiting for the condition Jul 10 01:38:41.828298 kubelet[2299]: E0710 01:38:41.827855 2299 configmap.go:193] Couldn't get configMap calico-system/tigera-ca-bundle: failed to sync configmap cache: timed out waiting for the condition Jul 10 01:38:41.828298 kubelet[2299]: E0710 01:38:41.828267 2299 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/bb9848ea-740a-453f-b511-e75cc1983690-tigera-ca-bundle podName:bb9848ea-740a-453f-b511-e75cc1983690 nodeName:}" failed. No retries permitted until 2025-07-10 01:38:57.828262261 +0000 UTC m=+1557.696689145 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "tigera-ca-bundle" (UniqueName: "kubernetes.io/configmap/bb9848ea-740a-453f-b511-e75cc1983690-tigera-ca-bundle") pod "calico-typha-66ddcf689b-z7vqm" (UID: "bb9848ea-740a-453f-b511-e75cc1983690") : failed to sync configmap cache: timed out waiting for the condition Jul 10 01:38:41.828298 kubelet[2299]: E0710 01:38:41.827862 2299 secret.go:189] Couldn't get secret calico-system/whisker-backend-key-pair: failed to sync secret cache: timed out waiting for the condition Jul 10 01:38:41.828887 kubelet[2299]: E0710 01:38:41.828288 2299 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c3f9faf5-cc25-4483-beb9-5dea29a71367-whisker-backend-key-pair podName:c3f9faf5-cc25-4483-beb9-5dea29a71367 nodeName:}" failed. No retries permitted until 2025-07-10 01:38:57.828283104 +0000 UTC m=+1557.696709987 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "whisker-backend-key-pair" (UniqueName: "kubernetes.io/secret/c3f9faf5-cc25-4483-beb9-5dea29a71367-whisker-backend-key-pair") pod "whisker-5bc4d9bd7d-nwwj6" (UID: "c3f9faf5-cc25-4483-beb9-5dea29a71367") : failed to sync secret cache: timed out waiting for the condition Jul 10 01:38:41.828887 kubelet[2299]: E0710 01:38:41.827868 2299 configmap.go:193] Couldn't get configMap calico-system/whisker-ca-bundle: failed to sync configmap cache: timed out waiting for the condition Jul 10 01:38:41.828887 kubelet[2299]: E0710 01:38:41.828308 2299 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c3f9faf5-cc25-4483-beb9-5dea29a71367-whisker-ca-bundle podName:c3f9faf5-cc25-4483-beb9-5dea29a71367 nodeName:}" failed. No retries permitted until 2025-07-10 01:38:57.828303914 +0000 UTC m=+1557.696730797 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "whisker-ca-bundle" (UniqueName: "kubernetes.io/configmap/c3f9faf5-cc25-4483-beb9-5dea29a71367-whisker-ca-bundle") pod "whisker-5bc4d9bd7d-nwwj6" (UID: "c3f9faf5-cc25-4483-beb9-5dea29a71367") : failed to sync configmap cache: timed out waiting for the condition Jul 10 01:38:41.828887 kubelet[2299]: E0710 01:38:41.827880 2299 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Jul 10 01:38:41.828887 kubelet[2299]: E0710 01:38:41.828319 2299 projected.go:194] Error preparing data for projected volume kube-api-access-4bl2z for pod kube-system/coredns-7c65d6cfc9-4k5ld: failed to sync configmap cache: timed out waiting for the condition Jul 10 01:38:41.828887 kubelet[2299]: E0710 01:38:41.828334 2299 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/a29ef6dc-4246-436d-87dd-9c8e96247aeb-kube-api-access-4bl2z podName:a29ef6dc-4246-436d-87dd-9c8e96247aeb nodeName:}" failed. No retries permitted until 2025-07-10 01:38:57.828329154 +0000 UTC m=+1557.696756042 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-4bl2z" (UniqueName: "kubernetes.io/projected/a29ef6dc-4246-436d-87dd-9c8e96247aeb-kube-api-access-4bl2z") pod "coredns-7c65d6cfc9-4k5ld" (UID: "a29ef6dc-4246-436d-87dd-9c8e96247aeb") : failed to sync configmap cache: timed out waiting for the condition Jul 10 01:38:41.828887 kubelet[2299]: E0710 01:38:41.827887 2299 configmap.go:193] Couldn't get configMap calico-system/tigera-ca-bundle: failed to sync configmap cache: timed out waiting for the condition Jul 10 01:38:41.828887 kubelet[2299]: E0710 01:38:41.828355 2299 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6367e512-6f46-407d-94e1-a5c573185269-tigera-ca-bundle podName:6367e512-6f46-407d-94e1-a5c573185269 nodeName:}" failed. No retries permitted until 2025-07-10 01:38:57.828349902 +0000 UTC m=+1557.696776785 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "tigera-ca-bundle" (UniqueName: "kubernetes.io/configmap/6367e512-6f46-407d-94e1-a5c573185269-tigera-ca-bundle") pod "calico-node-2k6z4" (UID: "6367e512-6f46-407d-94e1-a5c573185269") : failed to sync configmap cache: timed out waiting for the condition Jul 10 01:38:41.828887 kubelet[2299]: E0710 01:38:41.827892 2299 secret.go:189] Couldn't get secret calico-system/node-certs: failed to sync secret cache: timed out waiting for the condition Jul 10 01:38:41.828887 kubelet[2299]: E0710 01:38:41.828373 2299 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6367e512-6f46-407d-94e1-a5c573185269-node-certs podName:6367e512-6f46-407d-94e1-a5c573185269 nodeName:}" failed. No retries permitted until 2025-07-10 01:38:57.828368941 +0000 UTC m=+1557.696795825 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "node-certs" (UniqueName: "kubernetes.io/secret/6367e512-6f46-407d-94e1-a5c573185269-node-certs") pod "calico-node-2k6z4" (UID: "6367e512-6f46-407d-94e1-a5c573185269") : failed to sync secret cache: timed out waiting for the condition Jul 10 01:38:41.828887 kubelet[2299]: E0710 01:38:41.827898 2299 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Jul 10 01:38:41.828887 kubelet[2299]: E0710 01:38:41.828384 2299 projected.go:194] Error preparing data for projected volume kube-api-access-pwvqb for pod kube-system/coredns-7c65d6cfc9-snhl5: failed to sync configmap cache: timed out waiting for the condition Jul 10 01:38:41.828887 kubelet[2299]: E0710 01:38:41.828398 2299 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3459c244-a1ae-43bc-ad86-239a6e665757-kube-api-access-pwvqb podName:3459c244-a1ae-43bc-ad86-239a6e665757 nodeName:}" failed. No retries permitted until 2025-07-10 01:38:57.828393335 +0000 UTC m=+1557.696820219 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-pwvqb" (UniqueName: "kubernetes.io/projected/3459c244-a1ae-43bc-ad86-239a6e665757-kube-api-access-pwvqb") pod "coredns-7c65d6cfc9-snhl5" (UID: "3459c244-a1ae-43bc-ad86-239a6e665757") : failed to sync configmap cache: timed out waiting for the condition Jul 10 01:38:41.828887 kubelet[2299]: E0710 01:38:41.827904 2299 configmap.go:193] Couldn't get configMap kube-system/kube-proxy: failed to sync configmap cache: timed out waiting for the condition Jul 10 01:38:41.828887 kubelet[2299]: E0710 01:38:41.828417 2299 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/22eb6a01-1430-4380-b1df-6cb2ed0c8d8b-kube-proxy podName:22eb6a01-1430-4380-b1df-6cb2ed0c8d8b nodeName:}" failed. No retries permitted until 2025-07-10 01:38:57.828411867 +0000 UTC m=+1557.696838750 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-proxy" (UniqueName: "kubernetes.io/configmap/22eb6a01-1430-4380-b1df-6cb2ed0c8d8b-kube-proxy") pod "kube-proxy-rxvps" (UID: "22eb6a01-1430-4380-b1df-6cb2ed0c8d8b") : failed to sync configmap cache: timed out waiting for the condition Jul 10 01:38:41.829418 kubelet[2299]: E0710 01:38:41.828449 2299 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5f01bcaa-ff1c-4bd5-988b-d3c60c6abdcc-tigera-ca-bundle podName:5f01bcaa-ff1c-4bd5-988b-d3c60c6abdcc nodeName:}" failed. No retries permitted until 2025-07-10 01:38:57.828442804 +0000 UTC m=+1557.696869687 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "tigera-ca-bundle" (UniqueName: "kubernetes.io/configmap/5f01bcaa-ff1c-4bd5-988b-d3c60c6abdcc-tigera-ca-bundle") pod "calico-kube-controllers-5477ff879d-j2p5q" (UID: "5f01bcaa-ff1c-4bd5-988b-d3c60c6abdcc") : failed to sync configmap cache: timed out waiting for the condition Jul 10 01:38:42.791128 kubelet[2299]: E0710 01:38:42.791045 2299 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://139.178.70.102:6443/api/v1/namespaces/tigera-operator/events\": dial tcp 139.178.70.102:6443: connect: connection refused" event="&Event{ObjectMeta:{tigera-operator-5bf8dfcb4-twgs2.1850c020102e5a9f tigera-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:tigera-operator,Name:tigera-operator-5bf8dfcb4-twgs2,UID:9c135a1b-00bf-4e6f-87fa-9ac292c6a135,APIVersion:v1,ResourceVersion:382,FieldPath:spec.containers{tigera-operator},},Reason:Pulled,Message:Container image \"quay.io/tigera/operator:v1.38.3\" already present on machine,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-07-10 01:38:18.990082719 +0000 UTC m=+1518.858509606,LastTimestamp:2025-07-10 01:38:18.990082719 +0000 UTC m=+1518.858509606,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jul 10 01:38:43.060000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@34-139.178.70.102:22-139.178.68.195:38264 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 01:38:43.061784 systemd[1]: Started sshd@34-139.178.70.102:22-139.178.68.195:38264.service. Jul 10 01:38:43.065740 kernel: kauditd_printk_skb: 1 callbacks suppressed Jul 10 01:38:43.065767 kernel: audit: type=1130 audit(1752111523.060:704): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@34-139.178.70.102:22-139.178.68.195:38264 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 10 01:38:43.101074 sshd[8980]: Accepted publickey for core from 139.178.68.195 port 38264 ssh2: RSA SHA256:NVpdRDPpwzjVTzi6orhe1cA9BvcYymCSReGH8myOy/Q Jul 10 01:38:43.099000 audit[8980]: USER_ACCT pid=8980 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:38:43.104662 kernel: audit: type=1101 audit(1752111523.099:705): pid=8980 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:38:43.104894 sshd[8980]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 10 01:38:43.103000 audit[8980]: CRED_ACQ pid=8980 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:38:43.110587 kernel: audit: type=1103 audit(1752111523.103:706): pid=8980 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:38:43.110625 kernel: audit: type=1006 audit(1752111523.103:707): pid=8980 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=36 res=1 Jul 10 01:38:43.103000 audit[8980]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff15d1b730 a2=3 a3=0 items=0 ppid=1 pid=8980 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=36 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:38:43.114656 kernel: audit: type=1300 audit(1752111523.103:707): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff15d1b730 a2=3 a3=0 items=0 ppid=1 pid=8980 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=36 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:38:43.115427 systemd-logind[1351]: New session 36 of user core. Jul 10 01:38:43.103000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jul 10 01:38:43.115873 systemd[1]: Started session-36.scope. 
Jul 10 01:38:43.117658 kernel: audit: type=1327 audit(1752111523.103:707): proctitle=737368643A20636F7265205B707269765D Jul 10 01:38:43.118000 audit[8980]: USER_START pid=8980 uid=0 auid=500 ses=36 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:38:43.123906 kernel: audit: type=1105 audit(1752111523.118:708): pid=8980 uid=0 auid=500 ses=36 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:38:43.123938 kernel: audit: type=1103 audit(1752111523.122:709): pid=8983 uid=0 auid=500 ses=36 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:38:43.122000 audit[8983]: CRED_ACQ pid=8983 uid=0 auid=500 ses=36 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:38:43.205846 sshd[8980]: pam_unix(sshd:session): session closed for user core Jul 10 01:38:43.204000 audit[8980]: USER_END pid=8980 uid=0 auid=500 ses=36 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:38:43.210439 systemd[1]: sshd@34-139.178.70.102:22-139.178.68.195:38264.service: Deactivated successfully. Jul 10 01:38:43.210652 kernel: audit: type=1106 audit(1752111523.204:710): pid=8980 uid=0 auid=500 ses=36 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:38:43.210938 systemd[1]: session-36.scope: Deactivated successfully. Jul 10 01:38:43.204000 audit[8980]: CRED_DISP pid=8980 uid=0 auid=500 ses=36 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:38:43.211462 systemd-logind[1351]: Session 36 logged out. Waiting for processes to exit. Jul 10 01:38:43.209000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@34-139.178.70.102:22-139.178.68.195:38264 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 01:38:43.214685 kernel: audit: type=1104 audit(1752111523.204:711): pid=8980 uid=0 auid=500 ses=36 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:38:43.214541 systemd-logind[1351]: Removed session 36. 
Jul 10 01:38:45.495621 env[1363]: time="2025-07-10T01:38:45.494749910Z" level=info msg="StopPodSandbox for \"d50fd4405e1f03ed2cdfbef802c2261b6b6ef77dbd652ba6fa35f73abffba742\"" Jul 10 01:38:45.495621 env[1363]: time="2025-07-10T01:38:45.494847530Z" level=info msg="Container to stop \"0a7b9b0ea47aa889b6d5597d41d9f5ecf3ccc392f2d5f74cd7be134b392cec28\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 10 01:38:45.495621 env[1363]: time="2025-07-10T01:38:45.495249314Z" level=info msg="StopPodSandbox for \"131d31244e534a733a530103ddea3666cd2eb72fb0933d89a095d6d044cd52d3\"" Jul 10 01:38:45.495621 env[1363]: time="2025-07-10T01:38:45.495286770Z" level=info msg="Container to stop \"915c58c03353ee54736489abf3797867734b173634b282af0191665aad606e66\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 10 01:38:45.496146 kubelet[2299]: I0710 01:38:45.494835 2299 status_manager.go:851] "Failed to get status for pod" podUID="74cf1bc5-5d5a-4dc7-850a-71013984af05" pod="calico-apiserver/calico-apiserver-6d44674bc4-b2wqb" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/calico-apiserver/pods/calico-apiserver-6d44674bc4-b2wqb\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:45.496146 kubelet[2299]: I0710 01:38:45.494974 2299 status_manager.go:851] "Failed to get status for pod" podUID="9c135a1b-00bf-4e6f-87fa-9ac292c6a135" pod="tigera-operator/tigera-operator-5bf8dfcb4-twgs2" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/tigera-operator/pods/tigera-operator-5bf8dfcb4-twgs2\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:45.496146 kubelet[2299]: I0710 01:38:45.495119 2299 status_manager.go:851] "Failed to get status for pod" podUID="bb9848ea-740a-453f-b511-e75cc1983690" pod="calico-system/calico-typha-66ddcf689b-z7vqm" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/calico-system/pods/calico-typha-66ddcf689b-z7vqm\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:45.496146 kubelet[2299]: I0710 01:38:45.495230 2299 status_manager.go:851] "Failed to get status for pod" podUID="5f01bcaa-ff1c-4bd5-988b-d3c60c6abdcc" pod="calico-system/calico-kube-controllers-5477ff879d-j2p5q" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/calico-system/pods/calico-kube-controllers-5477ff879d-j2p5q\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:45.496146 kubelet[2299]: I0710 01:38:45.495337 2299 status_manager.go:851] "Failed to get status for pod" podUID="6367e512-6f46-407d-94e1-a5c573185269" pod="calico-system/calico-node-2k6z4" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/calico-system/pods/calico-node-2k6z4\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:45.496146 kubelet[2299]: I0710 01:38:45.495504 2299 status_manager.go:851] "Failed to get status for pod" podUID="a29ef6dc-4246-436d-87dd-9c8e96247aeb" pod="kube-system/coredns-7c65d6cfc9-4k5ld" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-4k5ld\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:45.496146 kubelet[2299]: I0710 01:38:45.495680 2299 status_manager.go:851] "Failed to get status for pod" podUID="3459c244-a1ae-43bc-ad86-239a6e665757" pod="kube-system/coredns-7c65d6cfc9-snhl5" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-snhl5\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:45.496146 
kubelet[2299]: I0710 01:38:45.495846 2299 status_manager.go:851] "Failed to get status for pod" podUID="ced04dc5-79ee-4a07-a568-b0fd4007f64c" pod="calico-system/goldmane-58fd7646b9-zxwst" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/calico-system/pods/goldmane-58fd7646b9-zxwst\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:45.496146 kubelet[2299]: I0710 01:38:45.495967 2299 status_manager.go:851] "Failed to get status for pod" podUID="8acd60714a0f0f6f5e038fa659db2909" pod="kube-system/kube-apiserver-localhost" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/kube-system/pods/kube-apiserver-localhost\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:45.496146 kubelet[2299]: I0710 01:38:45.496078 2299 status_manager.go:851] "Failed to get status for pod" podUID="8e8146e9-6407-49b7-8cef-e26dac385734" pod="calico-apiserver/calico-apiserver-6d44674bc4-w2f48" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/calico-apiserver/pods/calico-apiserver-6d44674bc4-w2f48\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:45.496688 kubelet[2299]: I0710 01:38:45.496187 2299 status_manager.go:851] "Failed to get status for pod" podUID="c3f9faf5-cc25-4483-beb9-5dea29a71367" pod="calico-system/whisker-5bc4d9bd7d-nwwj6" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/calico-system/pods/whisker-5bc4d9bd7d-nwwj6\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:45.496688 kubelet[2299]: I0710 01:38:45.496294 2299 status_manager.go:851] "Failed to get status for pod" podUID="b35b56493416c25588cb530e37ffc065" pod="kube-system/kube-scheduler-localhost" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/kube-system/pods/kube-scheduler-localhost\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:45.496688 kubelet[2299]: I0710 01:38:45.496399 2299 status_manager.go:851] "Failed to get status for pod" podUID="3f04709fe51ae4ab5abd58e8da771b74" pod="kube-system/kube-controller-manager-localhost" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/kube-system/pods/kube-controller-manager-localhost\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:45.496688 kubelet[2299]: I0710 01:38:45.496524 2299 status_manager.go:851] "Failed to get status for pod" podUID="c3f9faf5-cc25-4483-beb9-5dea29a71367" pod="calico-system/whisker-5bc4d9bd7d-nwwj6" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/calico-system/pods/whisker-5bc4d9bd7d-nwwj6\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:45.496688 kubelet[2299]: I0710 01:38:45.496632 2299 status_manager.go:851] "Failed to get status for pod" podUID="b35b56493416c25588cb530e37ffc065" pod="kube-system/kube-scheduler-localhost" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/kube-system/pods/kube-scheduler-localhost\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:45.496826 kubelet[2299]: I0710 01:38:45.496750 2299 status_manager.go:851] "Failed to get status for pod" podUID="3f04709fe51ae4ab5abd58e8da771b74" pod="kube-system/kube-controller-manager-localhost" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/kube-system/pods/kube-controller-manager-localhost\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:45.496882 kubelet[2299]: I0710 01:38:45.496861 2299 status_manager.go:851] "Failed to get status for pod" podUID="74cf1bc5-5d5a-4dc7-850a-71013984af05" 
pod="calico-apiserver/calico-apiserver-6d44674bc4-b2wqb" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/calico-apiserver/pods/calico-apiserver-6d44674bc4-b2wqb\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:45.497974 kubelet[2299]: I0710 01:38:45.497004 2299 status_manager.go:851] "Failed to get status for pod" podUID="9c135a1b-00bf-4e6f-87fa-9ac292c6a135" pod="tigera-operator/tigera-operator-5bf8dfcb4-twgs2" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/tigera-operator/pods/tigera-operator-5bf8dfcb4-twgs2\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:45.497974 kubelet[2299]: I0710 01:38:45.497139 2299 status_manager.go:851] "Failed to get status for pod" podUID="bb9848ea-740a-453f-b511-e75cc1983690" pod="calico-system/calico-typha-66ddcf689b-z7vqm" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/calico-system/pods/calico-typha-66ddcf689b-z7vqm\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:45.497974 kubelet[2299]: I0710 01:38:45.497256 2299 status_manager.go:851] "Failed to get status for pod" podUID="5f01bcaa-ff1c-4bd5-988b-d3c60c6abdcc" pod="calico-system/calico-kube-controllers-5477ff879d-j2p5q" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/calico-system/pods/calico-kube-controllers-5477ff879d-j2p5q\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:45.497974 kubelet[2299]: I0710 01:38:45.497367 2299 status_manager.go:851] "Failed to get status for pod" podUID="6367e512-6f46-407d-94e1-a5c573185269" pod="calico-system/calico-node-2k6z4" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/calico-system/pods/calico-node-2k6z4\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:45.497974 kubelet[2299]: I0710 01:38:45.497490 2299 status_manager.go:851] "Failed to get status for pod" podUID="a29ef6dc-4246-436d-87dd-9c8e96247aeb" pod="kube-system/coredns-7c65d6cfc9-4k5ld" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-4k5ld\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:45.497974 kubelet[2299]: I0710 01:38:45.497596 2299 status_manager.go:851] "Failed to get status for pod" podUID="3459c244-a1ae-43bc-ad86-239a6e665757" pod="kube-system/coredns-7c65d6cfc9-snhl5" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-snhl5\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:45.497974 kubelet[2299]: I0710 01:38:45.497720 2299 status_manager.go:851] "Failed to get status for pod" podUID="ced04dc5-79ee-4a07-a568-b0fd4007f64c" pod="calico-system/goldmane-58fd7646b9-zxwst" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/calico-system/pods/goldmane-58fd7646b9-zxwst\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:45.497974 kubelet[2299]: I0710 01:38:45.497828 2299 status_manager.go:851] "Failed to get status for pod" podUID="8acd60714a0f0f6f5e038fa659db2909" pod="kube-system/kube-apiserver-localhost" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/kube-system/pods/kube-apiserver-localhost\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:45.497974 kubelet[2299]: I0710 01:38:45.497946 2299 status_manager.go:851] "Failed to get status for pod" podUID="8e8146e9-6407-49b7-8cef-e26dac385734" pod="calico-apiserver/calico-apiserver-6d44674bc4-w2f48" err="Get 
\"https://139.178.70.102:6443/api/v1/namespaces/calico-apiserver/pods/calico-apiserver-6d44674bc4-w2f48\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:45.523531 env[1363]: time="2025-07-10T01:38:45.523493530Z" level=error msg="StopPodSandbox for \"131d31244e534a733a530103ddea3666cd2eb72fb0933d89a095d6d044cd52d3\" failed" error="failed to destroy network for sandbox \"131d31244e534a733a530103ddea3666cd2eb72fb0933d89a095d6d044cd52d3\": plugin type=\"calico\" failed (delete): error getting ClusterInformation: Get \"https://10.96.0.1:443/apis/crd.projectcalico.org/v1/clusterinformations/default\": dial tcp 10.96.0.1:443: connect: connection refused" Jul 10 01:38:45.523835 kubelet[2299]: E0710 01:38:45.523722 2299 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"131d31244e534a733a530103ddea3666cd2eb72fb0933d89a095d6d044cd52d3\": plugin type=\"calico\" failed (delete): error getting ClusterInformation: Get \"https://10.96.0.1:443/apis/crd.projectcalico.org/v1/clusterinformations/default\": dial tcp 10.96.0.1:443: connect: connection refused" podSandboxID="131d31244e534a733a530103ddea3666cd2eb72fb0933d89a095d6d044cd52d3" Jul 10 01:38:45.523835 kubelet[2299]: E0710 01:38:45.523761 2299 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"131d31244e534a733a530103ddea3666cd2eb72fb0933d89a095d6d044cd52d3"} Jul 10 01:38:45.523835 kubelet[2299]: E0710 01:38:45.523808 2299 kubelet.go:2027] "Unhandled Error" err="failed to \"KillPodSandbox\" for \"8e8146e9-6407-49b7-8cef-e26dac385734\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"131d31244e534a733a530103ddea3666cd2eb72fb0933d89a095d6d044cd52d3\\\": plugin type=\\\"calico\\\" failed (delete): error getting ClusterInformation: Get \\\"https://10.96.0.1:443/apis/crd.projectcalico.org/v1/clusterinformations/default\\\": dial tcp 10.96.0.1:443: connect: connection refused\"" logger="UnhandledError" Jul 10 01:38:45.524912 kubelet[2299]: E0710 01:38:45.524890 2299 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"8e8146e9-6407-49b7-8cef-e26dac385734\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"131d31244e534a733a530103ddea3666cd2eb72fb0933d89a095d6d044cd52d3\\\": plugin type=\\\"calico\\\" failed (delete): error getting ClusterInformation: Get \\\"https://10.96.0.1:443/apis/crd.projectcalico.org/v1/clusterinformations/default\\\": dial tcp 10.96.0.1:443: connect: connection refused\"" pod="calico-apiserver/calico-apiserver-6d44674bc4-w2f48" podUID="8e8146e9-6407-49b7-8cef-e26dac385734" Jul 10 01:38:45.526183 env[1363]: time="2025-07-10T01:38:45.526161458Z" level=error msg="StopPodSandbox for \"d50fd4405e1f03ed2cdfbef802c2261b6b6ef77dbd652ba6fa35f73abffba742\" failed" error="failed to destroy network for sandbox \"d50fd4405e1f03ed2cdfbef802c2261b6b6ef77dbd652ba6fa35f73abffba742\": plugin type=\"calico\" failed (delete): error getting ClusterInformation: Get \"https://10.96.0.1:443/apis/crd.projectcalico.org/v1/clusterinformations/default\": dial tcp 10.96.0.1:443: connect: connection refused" Jul 10 01:38:45.526265 kubelet[2299]: E0710 01:38:45.526240 2299 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"d50fd4405e1f03ed2cdfbef802c2261b6b6ef77dbd652ba6fa35f73abffba742\": 
plugin type=\"calico\" failed (delete): error getting ClusterInformation: Get \"https://10.96.0.1:443/apis/crd.projectcalico.org/v1/clusterinformations/default\": dial tcp 10.96.0.1:443: connect: connection refused" podSandboxID="d50fd4405e1f03ed2cdfbef802c2261b6b6ef77dbd652ba6fa35f73abffba742" Jul 10 01:38:45.526321 kubelet[2299]: E0710 01:38:45.526268 2299 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"d50fd4405e1f03ed2cdfbef802c2261b6b6ef77dbd652ba6fa35f73abffba742"} Jul 10 01:38:45.526440 env[1363]: time="2025-07-10T01:38:45.526424695Z" level=info msg="StopPodSandbox for \"3e37249528bb3e0be92befd65b6647a34c4c854d8942b3cdda871096eeadbddb\"" Jul 10 01:38:45.546506 env[1363]: time="2025-07-10T01:38:45.546468812Z" level=error msg="StopPodSandbox for \"3e37249528bb3e0be92befd65b6647a34c4c854d8942b3cdda871096eeadbddb\" failed" error="failed to destroy network for sandbox \"3e37249528bb3e0be92befd65b6647a34c4c854d8942b3cdda871096eeadbddb\": plugin type=\"calico\" failed (delete): error getting ClusterInformation: Get \"https://10.96.0.1:443/apis/crd.projectcalico.org/v1/clusterinformations/default\": dial tcp 10.96.0.1:443: connect: connection refused" Jul 10 01:38:45.546631 kubelet[2299]: E0710 01:38:45.546605 2299 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"3e37249528bb3e0be92befd65b6647a34c4c854d8942b3cdda871096eeadbddb\": plugin type=\"calico\" failed (delete): error getting ClusterInformation: Get \"https://10.96.0.1:443/apis/crd.projectcalico.org/v1/clusterinformations/default\": dial tcp 10.96.0.1:443: connect: connection refused" podSandboxID="3e37249528bb3e0be92befd65b6647a34c4c854d8942b3cdda871096eeadbddb" Jul 10 01:38:45.546712 kubelet[2299]: E0710 01:38:45.546652 2299 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"3e37249528bb3e0be92befd65b6647a34c4c854d8942b3cdda871096eeadbddb"} Jul 10 01:38:45.546712 kubelet[2299]: E0710 01:38:45.546683 2299 kubelet.go:2027] "Unhandled Error" err="failed to \"KillPodSandbox\" for \"ced04dc5-79ee-4a07-a568-b0fd4007f64c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3e37249528bb3e0be92befd65b6647a34c4c854d8942b3cdda871096eeadbddb\\\": plugin type=\\\"calico\\\" failed (delete): error getting ClusterInformation: Get \\\"https://10.96.0.1:443/apis/crd.projectcalico.org/v1/clusterinformations/default\\\": dial tcp 10.96.0.1:443: connect: connection refused\"" logger="UnhandledError" Jul 10 01:38:45.547832 kubelet[2299]: E0710 01:38:45.547811 2299 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"ced04dc5-79ee-4a07-a568-b0fd4007f64c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3e37249528bb3e0be92befd65b6647a34c4c854d8942b3cdda871096eeadbddb\\\": plugin type=\\\"calico\\\" failed (delete): error getting ClusterInformation: Get \\\"https://10.96.0.1:443/apis/crd.projectcalico.org/v1/clusterinformations/default\\\": dial tcp 10.96.0.1:443: connect: connection refused\"" pod="calico-system/goldmane-58fd7646b9-zxwst" podUID="ced04dc5-79ee-4a07-a568-b0fd4007f64c" Jul 10 01:38:46.569896 env[1363]: time="2025-07-10T01:38:46.569840854Z" level=info msg="StopPodSandbox for \"d50fd4405e1f03ed2cdfbef802c2261b6b6ef77dbd652ba6fa35f73abffba742\"" Jul 10 01:38:46.570218 env[1363]: time="2025-07-10T01:38:46.569903276Z" level=info 
msg="Container to stop \"0a7b9b0ea47aa889b6d5597d41d9f5ecf3ccc392f2d5f74cd7be134b392cec28\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 10 01:38:46.570417 kubelet[2299]: I0710 01:38:46.570390 2299 status_manager.go:851] "Failed to get status for pod" podUID="c3f9faf5-cc25-4483-beb9-5dea29a71367" pod="calico-system/whisker-5bc4d9bd7d-nwwj6" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/calico-system/pods/whisker-5bc4d9bd7d-nwwj6\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:46.570835 kubelet[2299]: I0710 01:38:46.570817 2299 status_manager.go:851] "Failed to get status for pod" podUID="b35b56493416c25588cb530e37ffc065" pod="kube-system/kube-scheduler-localhost" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/kube-system/pods/kube-scheduler-localhost\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:46.571038 kubelet[2299]: I0710 01:38:46.571021 2299 status_manager.go:851] "Failed to get status for pod" podUID="3f04709fe51ae4ab5abd58e8da771b74" pod="kube-system/kube-controller-manager-localhost" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/kube-system/pods/kube-controller-manager-localhost\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:46.571222 kubelet[2299]: I0710 01:38:46.571205 2299 status_manager.go:851] "Failed to get status for pod" podUID="9c135a1b-00bf-4e6f-87fa-9ac292c6a135" pod="tigera-operator/tigera-operator-5bf8dfcb4-twgs2" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/tigera-operator/pods/tigera-operator-5bf8dfcb4-twgs2\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:46.571473 kubelet[2299]: I0710 01:38:46.571438 2299 status_manager.go:851] "Failed to get status for pod" podUID="bb9848ea-740a-453f-b511-e75cc1983690" pod="calico-system/calico-typha-66ddcf689b-z7vqm" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/calico-system/pods/calico-typha-66ddcf689b-z7vqm\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:46.571818 kubelet[2299]: I0710 01:38:46.571800 2299 status_manager.go:851] "Failed to get status for pod" podUID="74cf1bc5-5d5a-4dc7-850a-71013984af05" pod="calico-apiserver/calico-apiserver-6d44674bc4-b2wqb" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/calico-apiserver/pods/calico-apiserver-6d44674bc4-b2wqb\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:46.572031 kubelet[2299]: I0710 01:38:46.572013 2299 status_manager.go:851] "Failed to get status for pod" podUID="6367e512-6f46-407d-94e1-a5c573185269" pod="calico-system/calico-node-2k6z4" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/calico-system/pods/calico-node-2k6z4\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:46.572232 kubelet[2299]: I0710 01:38:46.572210 2299 status_manager.go:851] "Failed to get status for pod" podUID="5f01bcaa-ff1c-4bd5-988b-d3c60c6abdcc" pod="calico-system/calico-kube-controllers-5477ff879d-j2p5q" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/calico-system/pods/calico-kube-controllers-5477ff879d-j2p5q\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:46.572434 kubelet[2299]: I0710 01:38:46.572417 2299 status_manager.go:851] "Failed to get status for pod" podUID="3459c244-a1ae-43bc-ad86-239a6e665757" pod="kube-system/coredns-7c65d6cfc9-snhl5" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-snhl5\": dial tcp 
139.178.70.102:6443: connect: connection refused" Jul 10 01:38:46.572623 kubelet[2299]: I0710 01:38:46.572606 2299 status_manager.go:851] "Failed to get status for pod" podUID="ced04dc5-79ee-4a07-a568-b0fd4007f64c" pod="calico-system/goldmane-58fd7646b9-zxwst" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/calico-system/pods/goldmane-58fd7646b9-zxwst\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:46.573214 kubelet[2299]: I0710 01:38:46.573196 2299 status_manager.go:851] "Failed to get status for pod" podUID="8acd60714a0f0f6f5e038fa659db2909" pod="kube-system/kube-apiserver-localhost" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/kube-system/pods/kube-apiserver-localhost\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:46.573472 kubelet[2299]: I0710 01:38:46.573446 2299 status_manager.go:851] "Failed to get status for pod" podUID="8e8146e9-6407-49b7-8cef-e26dac385734" pod="calico-apiserver/calico-apiserver-6d44674bc4-w2f48" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/calico-apiserver/pods/calico-apiserver-6d44674bc4-w2f48\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:46.573712 kubelet[2299]: I0710 01:38:46.573695 2299 status_manager.go:851] "Failed to get status for pod" podUID="a29ef6dc-4246-436d-87dd-9c8e96247aeb" pod="kube-system/coredns-7c65d6cfc9-4k5ld" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-4k5ld\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:46.597236 env[1363]: time="2025-07-10T01:38:46.597191450Z" level=error msg="StopPodSandbox for \"d50fd4405e1f03ed2cdfbef802c2261b6b6ef77dbd652ba6fa35f73abffba742\" failed" error="failed to destroy network for sandbox \"d50fd4405e1f03ed2cdfbef802c2261b6b6ef77dbd652ba6fa35f73abffba742\": plugin type=\"calico\" failed (delete): error getting ClusterInformation: Get \"https://10.96.0.1:443/apis/crd.projectcalico.org/v1/clusterinformations/default\": dial tcp 10.96.0.1:443: connect: connection refused" Jul 10 01:38:46.597489 kubelet[2299]: E0710 01:38:46.597387 2299 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"d50fd4405e1f03ed2cdfbef802c2261b6b6ef77dbd652ba6fa35f73abffba742\": plugin type=\"calico\" failed (delete): error getting ClusterInformation: Get \"https://10.96.0.1:443/apis/crd.projectcalico.org/v1/clusterinformations/default\": dial tcp 10.96.0.1:443: connect: connection refused" podSandboxID="d50fd4405e1f03ed2cdfbef802c2261b6b6ef77dbd652ba6fa35f73abffba742" Jul 10 01:38:46.597489 kubelet[2299]: E0710 01:38:46.597421 2299 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"d50fd4405e1f03ed2cdfbef802c2261b6b6ef77dbd652ba6fa35f73abffba742"} Jul 10 01:38:46.597764 env[1363]: time="2025-07-10T01:38:46.597716360Z" level=info msg="StopPodSandbox for \"3e37249528bb3e0be92befd65b6647a34c4c854d8942b3cdda871096eeadbddb\"" Jul 10 01:38:46.614469 env[1363]: time="2025-07-10T01:38:46.614437569Z" level=error msg="StopPodSandbox for \"3e37249528bb3e0be92befd65b6647a34c4c854d8942b3cdda871096eeadbddb\" failed" error="failed to destroy network for sandbox \"3e37249528bb3e0be92befd65b6647a34c4c854d8942b3cdda871096eeadbddb\": plugin type=\"calico\" failed (delete): error getting ClusterInformation: Get \"https://10.96.0.1:443/apis/crd.projectcalico.org/v1/clusterinformations/default\": dial tcp 10.96.0.1:443: connect: connection refused" 
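Every failure in the block above reduces to two refused TCP connections: the kubelet cannot reach its apiserver at 139.178.70.102:6443, and the Calico CNI delete cannot reach the in-cluster service VIP at 10.96.0.1:443, which is the cluster-internal address for that same apiserver. A minimal probe, independent of any of these components and using only the two endpoints quoted in the log, can confirm the shared root cause:

    import socket

    # Endpoints taken verbatim from the log: the kubelet's apiserver address and
    # the in-cluster "kubernetes" service VIP that the Calico plugin dials.
    ENDPOINTS = [("139.178.70.102", 6443), ("10.96.0.1", 443)]

    def probe(host, port, timeout=2.0):
        """Return 'open' if a TCP connection succeeds, else the error text."""
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return "open"
        except OSError as exc:  # ConnectionRefusedError, timeouts, route errors
            return f"unreachable ({exc})"

    for host, port in ENDPOINTS:
        print(f"{host}:{port} -> {probe(host, port)}")

While the apiserver is down, both probes would be expected to report connection refused, matching the kubelet status_manager and CNI errors above.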
Jul 10 01:38:46.614727 kubelet[2299]: E0710 01:38:46.614684 2299 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"3e37249528bb3e0be92befd65b6647a34c4c854d8942b3cdda871096eeadbddb\": plugin type=\"calico\" failed (delete): error getting ClusterInformation: Get \"https://10.96.0.1:443/apis/crd.projectcalico.org/v1/clusterinformations/default\": dial tcp 10.96.0.1:443: connect: connection refused" podSandboxID="3e37249528bb3e0be92befd65b6647a34c4c854d8942b3cdda871096eeadbddb" Jul 10 01:38:46.614809 kubelet[2299]: E0710 01:38:46.614736 2299 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"3e37249528bb3e0be92befd65b6647a34c4c854d8942b3cdda871096eeadbddb"} Jul 10 01:38:46.614809 kubelet[2299]: E0710 01:38:46.614773 2299 kubelet.go:2027] "Unhandled Error" err="failed to \"KillPodSandbox\" for \"ced04dc5-79ee-4a07-a568-b0fd4007f64c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3e37249528bb3e0be92befd65b6647a34c4c854d8942b3cdda871096eeadbddb\\\": plugin type=\\\"calico\\\" failed (delete): error getting ClusterInformation: Get \\\"https://10.96.0.1:443/apis/crd.projectcalico.org/v1/clusterinformations/default\\\": dial tcp 10.96.0.1:443: connect: connection refused\"" logger="UnhandledError" Jul 10 01:38:46.615977 kubelet[2299]: E0710 01:38:46.615953 2299 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"ced04dc5-79ee-4a07-a568-b0fd4007f64c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3e37249528bb3e0be92befd65b6647a34c4c854d8942b3cdda871096eeadbddb\\\": plugin type=\\\"calico\\\" failed (delete): error getting ClusterInformation: Get \\\"https://10.96.0.1:443/apis/crd.projectcalico.org/v1/clusterinformations/default\\\": dial tcp 10.96.0.1:443: connect: connection refused\"" pod="calico-system/goldmane-58fd7646b9-zxwst" podUID="ced04dc5-79ee-4a07-a568-b0fd4007f64c" Jul 10 01:38:47.494813 kubelet[2299]: I0710 01:38:47.494787 2299 status_manager.go:851] "Failed to get status for pod" podUID="3f04709fe51ae4ab5abd58e8da771b74" pod="kube-system/kube-controller-manager-localhost" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/kube-system/pods/kube-controller-manager-localhost\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:47.494943 env[1363]: time="2025-07-10T01:38:47.494907375Z" level=info msg="StopPodSandbox for \"47b065192ffd0b7504649af3406bb653c598c34d33430dd9e03fcdcb34aca714\"" Jul 10 01:38:47.494992 env[1363]: time="2025-07-10T01:38:47.494963859Z" level=info msg="Container to stop \"225609f895efa2c766fff4c357d326a94aeff3f2dc9267546e9096e0fdcbf87a\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 10 01:38:47.495229 kubelet[2299]: I0710 01:38:47.495209 2299 status_manager.go:851] "Failed to get status for pod" podUID="c3f9faf5-cc25-4483-beb9-5dea29a71367" pod="calico-system/whisker-5bc4d9bd7d-nwwj6" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/calico-system/pods/whisker-5bc4d9bd7d-nwwj6\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:47.495351 kubelet[2299]: I0710 01:38:47.495335 2299 status_manager.go:851] "Failed to get status for pod" podUID="b35b56493416c25588cb530e37ffc065" pod="kube-system/kube-scheduler-localhost" err="Get 
\"https://139.178.70.102:6443/api/v1/namespaces/kube-system/pods/kube-scheduler-localhost\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:47.495463 kubelet[2299]: I0710 01:38:47.495447 2299 status_manager.go:851] "Failed to get status for pod" podUID="bb9848ea-740a-453f-b511-e75cc1983690" pod="calico-system/calico-typha-66ddcf689b-z7vqm" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/calico-system/pods/calico-typha-66ddcf689b-z7vqm\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:47.495572 kubelet[2299]: I0710 01:38:47.495557 2299 status_manager.go:851] "Failed to get status for pod" podUID="74cf1bc5-5d5a-4dc7-850a-71013984af05" pod="calico-apiserver/calico-apiserver-6d44674bc4-b2wqb" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/calico-apiserver/pods/calico-apiserver-6d44674bc4-b2wqb\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:47.495689 kubelet[2299]: I0710 01:38:47.495674 2299 status_manager.go:851] "Failed to get status for pod" podUID="9c135a1b-00bf-4e6f-87fa-9ac292c6a135" pod="tigera-operator/tigera-operator-5bf8dfcb4-twgs2" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/tigera-operator/pods/tigera-operator-5bf8dfcb4-twgs2\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:47.495795 kubelet[2299]: I0710 01:38:47.495780 2299 status_manager.go:851] "Failed to get status for pod" podUID="6367e512-6f46-407d-94e1-a5c573185269" pod="calico-system/calico-node-2k6z4" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/calico-system/pods/calico-node-2k6z4\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:47.495900 kubelet[2299]: I0710 01:38:47.495886 2299 status_manager.go:851] "Failed to get status for pod" podUID="5f01bcaa-ff1c-4bd5-988b-d3c60c6abdcc" pod="calico-system/calico-kube-controllers-5477ff879d-j2p5q" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/calico-system/pods/calico-kube-controllers-5477ff879d-j2p5q\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:47.496005 kubelet[2299]: I0710 01:38:47.495990 2299 status_manager.go:851] "Failed to get status for pod" podUID="8e8146e9-6407-49b7-8cef-e26dac385734" pod="calico-apiserver/calico-apiserver-6d44674bc4-w2f48" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/calico-apiserver/pods/calico-apiserver-6d44674bc4-w2f48\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:47.496108 kubelet[2299]: I0710 01:38:47.496094 2299 status_manager.go:851] "Failed to get status for pod" podUID="a29ef6dc-4246-436d-87dd-9c8e96247aeb" pod="kube-system/coredns-7c65d6cfc9-4k5ld" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-4k5ld\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:47.496211 kubelet[2299]: I0710 01:38:47.496197 2299 status_manager.go:851] "Failed to get status for pod" podUID="3459c244-a1ae-43bc-ad86-239a6e665757" pod="kube-system/coredns-7c65d6cfc9-snhl5" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-snhl5\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:47.496314 kubelet[2299]: I0710 01:38:47.496300 2299 status_manager.go:851] "Failed to get status for pod" podUID="ced04dc5-79ee-4a07-a568-b0fd4007f64c" pod="calico-system/goldmane-58fd7646b9-zxwst" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/calico-system/pods/goldmane-58fd7646b9-zxwst\": dial tcp 
139.178.70.102:6443: connect: connection refused" Jul 10 01:38:47.496426 kubelet[2299]: I0710 01:38:47.496412 2299 status_manager.go:851] "Failed to get status for pod" podUID="8acd60714a0f0f6f5e038fa659db2909" pod="kube-system/kube-apiserver-localhost" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/kube-system/pods/kube-apiserver-localhost\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:47.518861 env[1363]: time="2025-07-10T01:38:47.518816970Z" level=error msg="StopPodSandbox for \"47b065192ffd0b7504649af3406bb653c598c34d33430dd9e03fcdcb34aca714\" failed" error="failed to destroy network for sandbox \"47b065192ffd0b7504649af3406bb653c598c34d33430dd9e03fcdcb34aca714\": plugin type=\"calico\" failed (delete): error getting ClusterInformation: Get \"https://10.96.0.1:443/apis/crd.projectcalico.org/v1/clusterinformations/default\": dial tcp 10.96.0.1:443: connect: connection refused" Jul 10 01:38:47.519010 kubelet[2299]: E0710 01:38:47.518958 2299 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"47b065192ffd0b7504649af3406bb653c598c34d33430dd9e03fcdcb34aca714\": plugin type=\"calico\" failed (delete): error getting ClusterInformation: Get \"https://10.96.0.1:443/apis/crd.projectcalico.org/v1/clusterinformations/default\": dial tcp 10.96.0.1:443: connect: connection refused" podSandboxID="47b065192ffd0b7504649af3406bb653c598c34d33430dd9e03fcdcb34aca714" Jul 10 01:38:47.519079 kubelet[2299]: E0710 01:38:47.519020 2299 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"47b065192ffd0b7504649af3406bb653c598c34d33430dd9e03fcdcb34aca714"} Jul 10 01:38:47.519079 kubelet[2299]: E0710 01:38:47.519063 2299 kubelet.go:2027] "Unhandled Error" err="failed to \"KillPodSandbox\" for \"3459c244-a1ae-43bc-ad86-239a6e665757\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"47b065192ffd0b7504649af3406bb653c598c34d33430dd9e03fcdcb34aca714\\\": plugin type=\\\"calico\\\" failed (delete): error getting ClusterInformation: Get \\\"https://10.96.0.1:443/apis/crd.projectcalico.org/v1/clusterinformations/default\\\": dial tcp 10.96.0.1:443: connect: connection refused\"" logger="UnhandledError" Jul 10 01:38:47.520167 kubelet[2299]: E0710 01:38:47.520146 2299 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"3459c244-a1ae-43bc-ad86-239a6e665757\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"47b065192ffd0b7504649af3406bb653c598c34d33430dd9e03fcdcb34aca714\\\": plugin type=\\\"calico\\\" failed (delete): error getting ClusterInformation: Get \\\"https://10.96.0.1:443/apis/crd.projectcalico.org/v1/clusterinformations/default\\\": dial tcp 10.96.0.1:443: connect: connection refused\"" pod="kube-system/coredns-7c65d6cfc9-snhl5" podUID="3459c244-a1ae-43bc-ad86-239a6e665757" Jul 10 01:38:48.213651 kernel: kauditd_printk_skb: 1 callbacks suppressed Jul 10 01:38:48.213720 kernel: audit: type=1130 audit(1752111528.207:713): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@35-139.178.70.102:22-139.178.68.195:47916 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 10 01:38:48.207000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@35-139.178.70.102:22-139.178.68.195:47916 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 01:38:48.209067 systemd[1]: Started sshd@35-139.178.70.102:22-139.178.68.195:47916.service. Jul 10 01:38:48.246000 audit[9084]: USER_ACCT pid=9084 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:38:48.251127 sshd[9084]: Accepted publickey for core from 139.178.68.195 port 47916 ssh2: RSA SHA256:NVpdRDPpwzjVTzi6orhe1cA9BvcYymCSReGH8myOy/Q Jul 10 01:38:48.249000 audit[9084]: CRED_ACQ pid=9084 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:38:48.251326 sshd[9084]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 10 01:38:48.254331 kernel: audit: type=1101 audit(1752111528.246:714): pid=9084 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:38:48.254357 kernel: audit: type=1103 audit(1752111528.249:715): pid=9084 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:38:48.254399 kernel: audit: type=1006 audit(1752111528.249:716): pid=9084 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=37 res=1 Jul 10 01:38:48.249000 audit[9084]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffef3e51800 a2=3 a3=0 items=0 ppid=1 pid=9084 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=37 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:38:48.259521 kernel: audit: type=1300 audit(1752111528.249:716): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffef3e51800 a2=3 a3=0 items=0 ppid=1 pid=9084 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=37 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:38:48.249000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jul 10 01:38:48.260664 kernel: audit: type=1327 audit(1752111528.249:716): proctitle=737368643A20636F7265205B707269765D Jul 10 01:38:48.261528 systemd-logind[1351]: New session 37 of user core. Jul 10 01:38:48.262106 systemd[1]: Started session-37.scope. 
Jul 10 01:38:48.263000 audit[9084]: USER_START pid=9084 uid=0 auid=500 ses=37 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:38:48.267000 audit[9087]: CRED_ACQ pid=9087 uid=0 auid=500 ses=37 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:38:48.271868 kernel: audit: type=1105 audit(1752111528.263:717): pid=9084 uid=0 auid=500 ses=37 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:38:48.271901 kernel: audit: type=1103 audit(1752111528.267:718): pid=9087 uid=0 auid=500 ses=37 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:38:48.354763 sshd[9084]: pam_unix(sshd:session): session closed for user core Jul 10 01:38:48.353000 audit[9084]: USER_END pid=9084 uid=0 auid=500 ses=37 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:38:48.362032 kernel: audit: type=1106 audit(1752111528.353:719): pid=9084 uid=0 auid=500 ses=37 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:38:48.362061 kernel: audit: type=1104 audit(1752111528.355:720): pid=9084 uid=0 auid=500 ses=37 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:38:48.355000 audit[9084]: CRED_DISP pid=9084 uid=0 auid=500 ses=37 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:38:48.360000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@35-139.178.70.102:22-139.178.68.195:47916 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 01:38:48.362387 systemd[1]: sshd@35-139.178.70.102:22-139.178.68.195:47916.service: Deactivated successfully. Jul 10 01:38:48.362896 systemd[1]: session-37.scope: Deactivated successfully. Jul 10 01:38:48.363337 systemd-logind[1351]: Session 37 logged out. Waiting for processes to exit. Jul 10 01:38:48.363865 systemd-logind[1351]: Removed session 37. 
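Sessions 36 and 37 above are each open for under 100 ms between USER_START and USER_END, which is typical of scripted logins or health checks rather than interactive use. A small sketch, not part of any component in this log, that pairs USER_START/USER_END records by their ses= field and reports how long each session lasted:

    import re
    from datetime import datetime

    # Match "Jul 10 01:38:48.263000 audit[9084]: USER_START ... ses=37" style records.
    PATTERN = re.compile(
        r"(\w{3} \d+ \d{2}:\d{2}:\d{2}\.\d+) audit\[\d+\]: (USER_START|USER_END)\b.*?ses=(\d+)"
    )

    def session_durations(text):
        """Return {session id: seconds between USER_START and USER_END}."""
        starts, durations = {}, {}
        for ts, kind, ses in PATTERN.findall(text):
            t = datetime.strptime(ts, "%b %d %H:%M:%S.%f")  # year is irrelevant for deltas
            if kind == "USER_START":
                starts[ses] = t
            elif ses in starts:
                durations[ses] = (t - starts.pop(ses)).total_seconds()
        return durations

    sample = (
        "Jul 10 01:38:48.263000 audit[9084]: USER_START pid=9084 uid=0 auid=500 ses=37 ...\n"
        "Jul 10 01:38:48.353000 audit[9084]: USER_END pid=9084 uid=0 auid=500 ses=37 ...\n"
    )
    print(session_durations(sample))  # {'37': 0.09}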
Jul 10 01:38:48.495875 kubelet[2299]: I0710 01:38:48.494908 2299 status_manager.go:851] "Failed to get status for pod" podUID="8acd60714a0f0f6f5e038fa659db2909" pod="kube-system/kube-apiserver-localhost" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/kube-system/pods/kube-apiserver-localhost\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:48.495875 kubelet[2299]: I0710 01:38:48.495068 2299 status_manager.go:851] "Failed to get status for pod" podUID="8e8146e9-6407-49b7-8cef-e26dac385734" pod="calico-apiserver/calico-apiserver-6d44674bc4-w2f48" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/calico-apiserver/pods/calico-apiserver-6d44674bc4-w2f48\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:48.495875 kubelet[2299]: I0710 01:38:48.495218 2299 status_manager.go:851] "Failed to get status for pod" podUID="a29ef6dc-4246-436d-87dd-9c8e96247aeb" pod="kube-system/coredns-7c65d6cfc9-4k5ld" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-4k5ld\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:48.496238 env[1363]: time="2025-07-10T01:38:48.496070307Z" level=info msg="StopPodSandbox for \"6503e247e079e9b1040ac4f9c23ba0f9f2bd42e5328355dba03928c27dd6e73b\"" Jul 10 01:38:48.496238 env[1363]: time="2025-07-10T01:38:48.496121011Z" level=info msg="Container to stop \"8b834cb8605645f5c7c427dfcf3dcbb8b3ef5c7c0f8f023ef0d1fbc5a5c10bd4\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 10 01:38:48.497003 kubelet[2299]: I0710 01:38:48.496601 2299 status_manager.go:851] "Failed to get status for pod" podUID="3459c244-a1ae-43bc-ad86-239a6e665757" pod="kube-system/coredns-7c65d6cfc9-snhl5" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-snhl5\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:48.497003 kubelet[2299]: I0710 01:38:48.496787 2299 status_manager.go:851] "Failed to get status for pod" podUID="ced04dc5-79ee-4a07-a568-b0fd4007f64c" pod="calico-system/goldmane-58fd7646b9-zxwst" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/calico-system/pods/goldmane-58fd7646b9-zxwst\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:48.497003 kubelet[2299]: I0710 01:38:48.496905 2299 status_manager.go:851] "Failed to get status for pod" podUID="b35b56493416c25588cb530e37ffc065" pod="kube-system/kube-scheduler-localhost" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/kube-system/pods/kube-scheduler-localhost\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:48.497117 kubelet[2299]: I0710 01:38:48.497063 2299 status_manager.go:851] "Failed to get status for pod" podUID="3f04709fe51ae4ab5abd58e8da771b74" pod="kube-system/kube-controller-manager-localhost" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/kube-system/pods/kube-controller-manager-localhost\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:48.497255 kubelet[2299]: I0710 01:38:48.497232 2299 status_manager.go:851] "Failed to get status for pod" podUID="c3f9faf5-cc25-4483-beb9-5dea29a71367" pod="calico-system/whisker-5bc4d9bd7d-nwwj6" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/calico-system/pods/whisker-5bc4d9bd7d-nwwj6\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:48.497412 kubelet[2299]: I0710 01:38:48.497393 2299 status_manager.go:851] "Failed to get status for pod" 
podUID="9c135a1b-00bf-4e6f-87fa-9ac292c6a135" pod="tigera-operator/tigera-operator-5bf8dfcb4-twgs2" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/tigera-operator/pods/tigera-operator-5bf8dfcb4-twgs2\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:48.497562 kubelet[2299]: I0710 01:38:48.497537 2299 status_manager.go:851] "Failed to get status for pod" podUID="bb9848ea-740a-453f-b511-e75cc1983690" pod="calico-system/calico-typha-66ddcf689b-z7vqm" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/calico-system/pods/calico-typha-66ddcf689b-z7vqm\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:48.497691 env[1363]: time="2025-07-10T01:38:48.497670589Z" level=info msg="StopPodSandbox for \"faf470fc452c2f07757eeeb2a3a0f4d17d9a92da7cefb8e597308394b6823856\"" Jul 10 01:38:48.497746 kubelet[2299]: I0710 01:38:48.497688 2299 status_manager.go:851] "Failed to get status for pod" podUID="74cf1bc5-5d5a-4dc7-850a-71013984af05" pod="calico-apiserver/calico-apiserver-6d44674bc4-b2wqb" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/calico-apiserver/pods/calico-apiserver-6d44674bc4-b2wqb\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:48.497851 env[1363]: time="2025-07-10T01:38:48.497832201Z" level=info msg="Container to stop \"a200d687dc84da70963635919a56406c3ab3b1d9e93e3d78979e61b2e309695b\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 10 01:38:48.498048 kubelet[2299]: I0710 01:38:48.498027 2299 status_manager.go:851] "Failed to get status for pod" podUID="6367e512-6f46-407d-94e1-a5c573185269" pod="calico-system/calico-node-2k6z4" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/calico-system/pods/calico-node-2k6z4\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:48.498215 kubelet[2299]: I0710 01:38:48.498194 2299 status_manager.go:851] "Failed to get status for pod" podUID="5f01bcaa-ff1c-4bd5-988b-d3c60c6abdcc" pod="calico-system/calico-kube-controllers-5477ff879d-j2p5q" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/calico-system/pods/calico-kube-controllers-5477ff879d-j2p5q\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:48.499044 kubelet[2299]: I0710 01:38:48.498409 2299 status_manager.go:851] "Failed to get status for pod" podUID="9c135a1b-00bf-4e6f-87fa-9ac292c6a135" pod="tigera-operator/tigera-operator-5bf8dfcb4-twgs2" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/tigera-operator/pods/tigera-operator-5bf8dfcb4-twgs2\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:48.499044 kubelet[2299]: I0710 01:38:48.498556 2299 status_manager.go:851] "Failed to get status for pod" podUID="bb9848ea-740a-453f-b511-e75cc1983690" pod="calico-system/calico-typha-66ddcf689b-z7vqm" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/calico-system/pods/calico-typha-66ddcf689b-z7vqm\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:48.499044 kubelet[2299]: I0710 01:38:48.498702 2299 status_manager.go:851] "Failed to get status for pod" podUID="74cf1bc5-5d5a-4dc7-850a-71013984af05" pod="calico-apiserver/calico-apiserver-6d44674bc4-b2wqb" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/calico-apiserver/pods/calico-apiserver-6d44674bc4-b2wqb\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:48.499044 kubelet[2299]: I0710 01:38:48.498820 2299 status_manager.go:851] "Failed to get status for pod" 
podUID="6367e512-6f46-407d-94e1-a5c573185269" pod="calico-system/calico-node-2k6z4" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/calico-system/pods/calico-node-2k6z4\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:48.499044 kubelet[2299]: I0710 01:38:48.498941 2299 status_manager.go:851] "Failed to get status for pod" podUID="5f01bcaa-ff1c-4bd5-988b-d3c60c6abdcc" pod="calico-system/calico-kube-controllers-5477ff879d-j2p5q" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/calico-system/pods/calico-kube-controllers-5477ff879d-j2p5q\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:48.499226 kubelet[2299]: I0710 01:38:48.499097 2299 status_manager.go:851] "Failed to get status for pod" podUID="8acd60714a0f0f6f5e038fa659db2909" pod="kube-system/kube-apiserver-localhost" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/kube-system/pods/kube-apiserver-localhost\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:48.499290 kubelet[2299]: I0710 01:38:48.499268 2299 status_manager.go:851] "Failed to get status for pod" podUID="8e8146e9-6407-49b7-8cef-e26dac385734" pod="calico-apiserver/calico-apiserver-6d44674bc4-w2f48" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/calico-apiserver/pods/calico-apiserver-6d44674bc4-w2f48\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:48.499461 kubelet[2299]: I0710 01:38:48.499441 2299 status_manager.go:851] "Failed to get status for pod" podUID="a29ef6dc-4246-436d-87dd-9c8e96247aeb" pod="kube-system/coredns-7c65d6cfc9-4k5ld" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-4k5ld\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:48.499666 kubelet[2299]: I0710 01:38:48.499622 2299 status_manager.go:851] "Failed to get status for pod" podUID="3459c244-a1ae-43bc-ad86-239a6e665757" pod="kube-system/coredns-7c65d6cfc9-snhl5" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-snhl5\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:48.500604 kubelet[2299]: I0710 01:38:48.500572 2299 status_manager.go:851] "Failed to get status for pod" podUID="ced04dc5-79ee-4a07-a568-b0fd4007f64c" pod="calico-system/goldmane-58fd7646b9-zxwst" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/calico-system/pods/goldmane-58fd7646b9-zxwst\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:48.500820 kubelet[2299]: I0710 01:38:48.500788 2299 status_manager.go:851] "Failed to get status for pod" podUID="b35b56493416c25588cb530e37ffc065" pod="kube-system/kube-scheduler-localhost" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/kube-system/pods/kube-scheduler-localhost\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:48.500967 kubelet[2299]: I0710 01:38:48.500946 2299 status_manager.go:851] "Failed to get status for pod" podUID="3f04709fe51ae4ab5abd58e8da771b74" pod="kube-system/kube-controller-manager-localhost" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/kube-system/pods/kube-controller-manager-localhost\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:48.501131 kubelet[2299]: I0710 01:38:48.501098 2299 status_manager.go:851] "Failed to get status for pod" podUID="c3f9faf5-cc25-4483-beb9-5dea29a71367" pod="calico-system/whisker-5bc4d9bd7d-nwwj6" err="Get 
\"https://139.178.70.102:6443/api/v1/namespaces/calico-system/pods/whisker-5bc4d9bd7d-nwwj6\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:48.526484 env[1363]: time="2025-07-10T01:38:48.526439918Z" level=error msg="StopPodSandbox for \"6503e247e079e9b1040ac4f9c23ba0f9f2bd42e5328355dba03928c27dd6e73b\" failed" error="failed to destroy network for sandbox \"6503e247e079e9b1040ac4f9c23ba0f9f2bd42e5328355dba03928c27dd6e73b\": plugin type=\"calico\" failed (delete): error getting ClusterInformation: Get \"https://10.96.0.1:443/apis/crd.projectcalico.org/v1/clusterinformations/default\": dial tcp 10.96.0.1:443: connect: connection refused" Jul 10 01:38:48.526657 kubelet[2299]: E0710 01:38:48.526609 2299 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"6503e247e079e9b1040ac4f9c23ba0f9f2bd42e5328355dba03928c27dd6e73b\": plugin type=\"calico\" failed (delete): error getting ClusterInformation: Get \"https://10.96.0.1:443/apis/crd.projectcalico.org/v1/clusterinformations/default\": dial tcp 10.96.0.1:443: connect: connection refused" podSandboxID="6503e247e079e9b1040ac4f9c23ba0f9f2bd42e5328355dba03928c27dd6e73b" Jul 10 01:38:48.526747 kubelet[2299]: E0710 01:38:48.526678 2299 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"6503e247e079e9b1040ac4f9c23ba0f9f2bd42e5328355dba03928c27dd6e73b"} Jul 10 01:38:48.526747 kubelet[2299]: E0710 01:38:48.526732 2299 kubelet.go:2027] "Unhandled Error" err="failed to \"KillPodSandbox\" for \"5f01bcaa-ff1c-4bd5-988b-d3c60c6abdcc\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6503e247e079e9b1040ac4f9c23ba0f9f2bd42e5328355dba03928c27dd6e73b\\\": plugin type=\\\"calico\\\" failed (delete): error getting ClusterInformation: Get \\\"https://10.96.0.1:443/apis/crd.projectcalico.org/v1/clusterinformations/default\\\": dial tcp 10.96.0.1:443: connect: connection refused\"" logger="UnhandledError" Jul 10 01:38:48.527954 kubelet[2299]: E0710 01:38:48.527922 2299 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"5f01bcaa-ff1c-4bd5-988b-d3c60c6abdcc\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6503e247e079e9b1040ac4f9c23ba0f9f2bd42e5328355dba03928c27dd6e73b\\\": plugin type=\\\"calico\\\" failed (delete): error getting ClusterInformation: Get \\\"https://10.96.0.1:443/apis/crd.projectcalico.org/v1/clusterinformations/default\\\": dial tcp 10.96.0.1:443: connect: connection refused\"" pod="calico-system/calico-kube-controllers-5477ff879d-j2p5q" podUID="5f01bcaa-ff1c-4bd5-988b-d3c60c6abdcc" Jul 10 01:38:48.533520 env[1363]: time="2025-07-10T01:38:48.533488234Z" level=error msg="StopPodSandbox for \"faf470fc452c2f07757eeeb2a3a0f4d17d9a92da7cefb8e597308394b6823856\" failed" error="failed to destroy network for sandbox \"faf470fc452c2f07757eeeb2a3a0f4d17d9a92da7cefb8e597308394b6823856\": plugin type=\"calico\" failed (delete): error getting ClusterInformation: Get \"https://10.96.0.1:443/apis/crd.projectcalico.org/v1/clusterinformations/default\": dial tcp 10.96.0.1:443: connect: connection refused" Jul 10 01:38:48.533706 kubelet[2299]: E0710 01:38:48.533684 2299 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"faf470fc452c2f07757eeeb2a3a0f4d17d9a92da7cefb8e597308394b6823856\": plugin 
type=\"calico\" failed (delete): error getting ClusterInformation: Get \"https://10.96.0.1:443/apis/crd.projectcalico.org/v1/clusterinformations/default\": dial tcp 10.96.0.1:443: connect: connection refused" podSandboxID="faf470fc452c2f07757eeeb2a3a0f4d17d9a92da7cefb8e597308394b6823856" Jul 10 01:38:48.533778 kubelet[2299]: E0710 01:38:48.533721 2299 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"faf470fc452c2f07757eeeb2a3a0f4d17d9a92da7cefb8e597308394b6823856"} Jul 10 01:38:48.533778 kubelet[2299]: E0710 01:38:48.533749 2299 kubelet.go:2027] "Unhandled Error" err="failed to \"KillPodSandbox\" for \"74cf1bc5-5d5a-4dc7-850a-71013984af05\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"faf470fc452c2f07757eeeb2a3a0f4d17d9a92da7cefb8e597308394b6823856\\\": plugin type=\\\"calico\\\" failed (delete): error getting ClusterInformation: Get \\\"https://10.96.0.1:443/apis/crd.projectcalico.org/v1/clusterinformations/default\\\": dial tcp 10.96.0.1:443: connect: connection refused\"" logger="UnhandledError" Jul 10 01:38:48.534956 kubelet[2299]: E0710 01:38:48.534933 2299 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"74cf1bc5-5d5a-4dc7-850a-71013984af05\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"faf470fc452c2f07757eeeb2a3a0f4d17d9a92da7cefb8e597308394b6823856\\\": plugin type=\\\"calico\\\" failed (delete): error getting ClusterInformation: Get \\\"https://10.96.0.1:443/apis/crd.projectcalico.org/v1/clusterinformations/default\\\": dial tcp 10.96.0.1:443: connect: connection refused\"" pod="calico-apiserver/calico-apiserver-6d44674bc4-b2wqb" podUID="74cf1bc5-5d5a-4dc7-850a-71013984af05" Jul 10 01:38:48.801864 kubelet[2299]: E0710 01:38:48.801841 2299 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.102:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 139.178.70.102:6443: connect: connection refused" interval="7s" Jul 10 01:38:48.944056 env[1363]: time="2025-07-10T01:38:48.944014346Z" level=info msg="Kill container \"2426c34da3c56c7c197e36edfc96763e7adc7f0e476d41bf1372bb6d05be576f\"" Jul 10 01:38:48.983476 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2426c34da3c56c7c197e36edfc96763e7adc7f0e476d41bf1372bb6d05be576f-rootfs.mount: Deactivated successfully. 
Jul 10 01:38:48.984370 env[1363]: time="2025-07-10T01:38:48.984343344Z" level=info msg="shim disconnected" id=2426c34da3c56c7c197e36edfc96763e7adc7f0e476d41bf1372bb6d05be576f Jul 10 01:38:48.984447 env[1363]: time="2025-07-10T01:38:48.984435051Z" level=warning msg="cleaning up after shim disconnected" id=2426c34da3c56c7c197e36edfc96763e7adc7f0e476d41bf1372bb6d05be576f namespace=k8s.io Jul 10 01:38:48.984493 env[1363]: time="2025-07-10T01:38:48.984483230Z" level=info msg="cleaning up dead shim" Jul 10 01:38:48.990957 env[1363]: time="2025-07-10T01:38:48.990929996Z" level=warning msg="cleanup warnings time=\"2025-07-10T01:38:48Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=9148 runtime=io.containerd.runc.v2\n" Jul 10 01:38:49.004813 env[1363]: time="2025-07-10T01:38:49.004784470Z" level=info msg="StopContainer for \"2426c34da3c56c7c197e36edfc96763e7adc7f0e476d41bf1372bb6d05be576f\" returns successfully" Jul 10 01:38:49.006406 env[1363]: time="2025-07-10T01:38:49.006388917Z" level=info msg="CreateContainer within sandbox \"7da47a2c0d73a548c7135430d1b2863a42eda18a2dd2186d7dea2361b48b603b\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:1,}" Jul 10 01:38:49.016932 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount995513038.mount: Deactivated successfully. Jul 10 01:38:49.024236 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2634219350.mount: Deactivated successfully. Jul 10 01:38:49.026017 env[1363]: time="2025-07-10T01:38:49.025998603Z" level=info msg="CreateContainer within sandbox \"7da47a2c0d73a548c7135430d1b2863a42eda18a2dd2186d7dea2361b48b603b\" for &ContainerMetadata{Name:kube-apiserver,Attempt:1,} returns container id \"98637a901da0a03ba0530310c02c10d6385bb5df65337d5b743996ed9b621df2\"" Jul 10 01:38:49.026283 env[1363]: time="2025-07-10T01:38:49.026270672Z" level=info msg="StartContainer for \"98637a901da0a03ba0530310c02c10d6385bb5df65337d5b743996ed9b621df2\"" Jul 10 01:38:49.067631 env[1363]: time="2025-07-10T01:38:49.067577673Z" level=info msg="StartContainer for \"98637a901da0a03ba0530310c02c10d6385bb5df65337d5b743996ed9b621df2\" returns successfully" Jul 10 01:38:49.332718 kubelet[2299]: I0710 01:38:49.332637 2299 status_manager.go:851] "Failed to get status for pod" podUID="c3f9faf5-cc25-4483-beb9-5dea29a71367" pod="calico-system/whisker-5bc4d9bd7d-nwwj6" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/calico-system/pods/whisker-5bc4d9bd7d-nwwj6\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:49.333022 kubelet[2299]: I0710 01:38:49.333005 2299 status_manager.go:851] "Failed to get status for pod" podUID="b35b56493416c25588cb530e37ffc065" pod="kube-system/kube-scheduler-localhost" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/kube-system/pods/kube-scheduler-localhost\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:49.333247 kubelet[2299]: I0710 01:38:49.333222 2299 status_manager.go:851] "Failed to get status for pod" podUID="3f04709fe51ae4ab5abd58e8da771b74" pod="kube-system/kube-controller-manager-localhost" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/kube-system/pods/kube-controller-manager-localhost\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:49.333467 kubelet[2299]: I0710 01:38:49.333451 2299 status_manager.go:851] "Failed to get status for pod" podUID="74cf1bc5-5d5a-4dc7-850a-71013984af05" pod="calico-apiserver/calico-apiserver-6d44674bc4-b2wqb" err="Get 
\"https://139.178.70.102:6443/api/v1/namespaces/calico-apiserver/pods/calico-apiserver-6d44674bc4-b2wqb\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:49.333684 kubelet[2299]: I0710 01:38:49.333668 2299 status_manager.go:851] "Failed to get status for pod" podUID="9c135a1b-00bf-4e6f-87fa-9ac292c6a135" pod="tigera-operator/tigera-operator-5bf8dfcb4-twgs2" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/tigera-operator/pods/tigera-operator-5bf8dfcb4-twgs2\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:49.333896 kubelet[2299]: I0710 01:38:49.333881 2299 status_manager.go:851] "Failed to get status for pod" podUID="bb9848ea-740a-453f-b511-e75cc1983690" pod="calico-system/calico-typha-66ddcf689b-z7vqm" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/calico-system/pods/calico-typha-66ddcf689b-z7vqm\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:49.334101 kubelet[2299]: I0710 01:38:49.334086 2299 status_manager.go:851] "Failed to get status for pod" podUID="5f01bcaa-ff1c-4bd5-988b-d3c60c6abdcc" pod="calico-system/calico-kube-controllers-5477ff879d-j2p5q" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/calico-system/pods/calico-kube-controllers-5477ff879d-j2p5q\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:49.334303 kubelet[2299]: I0710 01:38:49.334287 2299 status_manager.go:851] "Failed to get status for pod" podUID="6367e512-6f46-407d-94e1-a5c573185269" pod="calico-system/calico-node-2k6z4" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/calico-system/pods/calico-node-2k6z4\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:49.334501 kubelet[2299]: I0710 01:38:49.334486 2299 status_manager.go:851] "Failed to get status for pod" podUID="a29ef6dc-4246-436d-87dd-9c8e96247aeb" pod="kube-system/coredns-7c65d6cfc9-4k5ld" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-4k5ld\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:49.334716 kubelet[2299]: I0710 01:38:49.334699 2299 status_manager.go:851] "Failed to get status for pod" podUID="3459c244-a1ae-43bc-ad86-239a6e665757" pod="kube-system/coredns-7c65d6cfc9-snhl5" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-snhl5\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:49.334919 kubelet[2299]: I0710 01:38:49.334902 2299 status_manager.go:851] "Failed to get status for pod" podUID="ced04dc5-79ee-4a07-a568-b0fd4007f64c" pod="calico-system/goldmane-58fd7646b9-zxwst" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/calico-system/pods/goldmane-58fd7646b9-zxwst\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:49.335115 kubelet[2299]: I0710 01:38:49.335100 2299 status_manager.go:851] "Failed to get status for pod" podUID="8acd60714a0f0f6f5e038fa659db2909" pod="kube-system/kube-apiserver-localhost" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/kube-system/pods/kube-apiserver-localhost\": dial tcp 139.178.70.102:6443: connect: connection refused" Jul 10 01:38:49.335331 kubelet[2299]: I0710 01:38:49.335314 2299 status_manager.go:851] "Failed to get status for pod" podUID="8e8146e9-6407-49b7-8cef-e26dac385734" pod="calico-apiserver/calico-apiserver-6d44674bc4-w2f48" err="Get \"https://139.178.70.102:6443/api/v1/namespaces/calico-apiserver/pods/calico-apiserver-6d44674bc4-w2f48\": dial tcp 139.178.70.102:6443: 
connect: connection refused" Jul 10 01:38:49.494847 env[1363]: time="2025-07-10T01:38:49.494789039Z" level=info msg="StopPodSandbox for \"47772743ab806984f8c08f88def502ffe4f7fc6e574fb3f0d5b58c702f3e79ff\"" Jul 10 01:38:49.494944 env[1363]: time="2025-07-10T01:38:49.494849433Z" level=info msg="Container to stop \"9613f2d808f30fc610330008107d20687f76b9c5168e9ed86b6bbe227c241755\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 10 01:38:49.494944 env[1363]: time="2025-07-10T01:38:49.494862686Z" level=info msg="Container to stop \"846639043e3e3375edb5ca693ab2e7bd950e8cc7bfe6f9bd5620ee47769cd79c\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 10 01:38:49.511889 env[1363]: time="2025-07-10T01:38:49.511852435Z" level=error msg="StopPodSandbox for \"47772743ab806984f8c08f88def502ffe4f7fc6e574fb3f0d5b58c702f3e79ff\" failed" error="failed to destroy network for sandbox \"47772743ab806984f8c08f88def502ffe4f7fc6e574fb3f0d5b58c702f3e79ff\": plugin type=\"calico\" failed (delete): error getting ClusterInformation: Get \"https://10.96.0.1:443/apis/crd.projectcalico.org/v1/clusterinformations/default\": dial tcp 10.96.0.1:443: connect: connection refused" Jul 10 01:38:49.512289 kubelet[2299]: E0710 01:38:49.512208 2299 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"47772743ab806984f8c08f88def502ffe4f7fc6e574fb3f0d5b58c702f3e79ff\": plugin type=\"calico\" failed (delete): error getting ClusterInformation: Get \"https://10.96.0.1:443/apis/crd.projectcalico.org/v1/clusterinformations/default\": dial tcp 10.96.0.1:443: connect: connection refused" podSandboxID="47772743ab806984f8c08f88def502ffe4f7fc6e574fb3f0d5b58c702f3e79ff" Jul 10 01:38:49.512289 kubelet[2299]: E0710 01:38:49.512239 2299 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"47772743ab806984f8c08f88def502ffe4f7fc6e574fb3f0d5b58c702f3e79ff"} Jul 10 01:38:49.512289 kubelet[2299]: E0710 01:38:49.512267 2299 kubelet.go:2027] "Unhandled Error" err="failed to \"KillPodSandbox\" for \"c3f9faf5-cc25-4483-beb9-5dea29a71367\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"47772743ab806984f8c08f88def502ffe4f7fc6e574fb3f0d5b58c702f3e79ff\\\": plugin type=\\\"calico\\\" failed (delete): error getting ClusterInformation: Get \\\"https://10.96.0.1:443/apis/crd.projectcalico.org/v1/clusterinformations/default\\\": dial tcp 10.96.0.1:443: connect: connection refused\"" logger="UnhandledError" Jul 10 01:38:49.513377 kubelet[2299]: E0710 01:38:49.513348 2299 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"c3f9faf5-cc25-4483-beb9-5dea29a71367\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"47772743ab806984f8c08f88def502ffe4f7fc6e574fb3f0d5b58c702f3e79ff\\\": plugin type=\\\"calico\\\" failed (delete): error getting ClusterInformation: Get \\\"https://10.96.0.1:443/apis/crd.projectcalico.org/v1/clusterinformations/default\\\": dial tcp 10.96.0.1:443: connect: connection refused\"" pod="calico-system/whisker-5bc4d9bd7d-nwwj6" podUID="c3f9faf5-cc25-4483-beb9-5dea29a71367" Jul 10 01:38:50.217158 kubelet[2299]: W0710 01:38:50.217131 2299 reflector.go:561] object-"calico-system"/"whisker-ca-bundle": failed to list *v1.ConfigMap: configmaps "whisker-ca-bundle" is forbidden: User "system:node:localhost" cannot list resource 
"configmaps" in API group "" in the namespace "calico-system": no relationship found between node 'localhost' and this object Jul 10 01:38:50.217308 kubelet[2299]: E0710 01:38:50.217291 2299 reflector.go:158] "Unhandled Error" err="object-\"calico-system\"/\"whisker-ca-bundle\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"whisker-ca-bundle\" is forbidden: User \"system:node:localhost\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'localhost' and this object" logger="UnhandledError" Jul 10 01:38:50.513477 env[1363]: time="2025-07-10T01:38:50.513223045Z" level=info msg="StopPodSandbox for \"5e9aedbb1d15e1d7bd8b79126017424346117b11833100260ee33d8092673319\"" Jul 10 01:38:50.513477 env[1363]: time="2025-07-10T01:38:50.513275970Z" level=info msg="Container to stop \"1e5279231269dfc537e88aa5fdcd50f6d78342ad66bf00ac46bda4314ea2b591\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 10 01:38:50.561153 env[1363]: time="2025-07-10T01:38:50.561118943Z" level=error msg="StopPodSandbox for \"5e9aedbb1d15e1d7bd8b79126017424346117b11833100260ee33d8092673319\" failed" error="failed to destroy network for sandbox \"5e9aedbb1d15e1d7bd8b79126017424346117b11833100260ee33d8092673319\": plugin type=\"calico\" failed (delete): error getting ClusterInformation: Get \"https://10.96.0.1:443/apis/crd.projectcalico.org/v1/clusterinformations/default\": dial tcp 10.96.0.1:443: connect: connection refused" Jul 10 01:38:50.561421 kubelet[2299]: E0710 01:38:50.561385 2299 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"5e9aedbb1d15e1d7bd8b79126017424346117b11833100260ee33d8092673319\": plugin type=\"calico\" failed (delete): error getting ClusterInformation: Get \"https://10.96.0.1:443/apis/crd.projectcalico.org/v1/clusterinformations/default\": dial tcp 10.96.0.1:443: connect: connection refused" podSandboxID="5e9aedbb1d15e1d7bd8b79126017424346117b11833100260ee33d8092673319" Jul 10 01:38:50.561666 kubelet[2299]: E0710 01:38:50.561422 2299 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"5e9aedbb1d15e1d7bd8b79126017424346117b11833100260ee33d8092673319"} Jul 10 01:38:50.561666 kubelet[2299]: E0710 01:38:50.561455 2299 kubelet.go:2027] "Unhandled Error" err="failed to \"KillPodSandbox\" for \"a29ef6dc-4246-436d-87dd-9c8e96247aeb\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5e9aedbb1d15e1d7bd8b79126017424346117b11833100260ee33d8092673319\\\": plugin type=\\\"calico\\\" failed (delete): error getting ClusterInformation: Get \\\"https://10.96.0.1:443/apis/crd.projectcalico.org/v1/clusterinformations/default\\\": dial tcp 10.96.0.1:443: connect: connection refused\"" logger="UnhandledError" Jul 10 01:38:50.562568 kubelet[2299]: E0710 01:38:50.562544 2299 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"a29ef6dc-4246-436d-87dd-9c8e96247aeb\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5e9aedbb1d15e1d7bd8b79126017424346117b11833100260ee33d8092673319\\\": plugin type=\\\"calico\\\" failed (delete): error getting ClusterInformation: Get \\\"https://10.96.0.1:443/apis/crd.projectcalico.org/v1/clusterinformations/default\\\": dial tcp 10.96.0.1:443: connect: connection refused\"" 
pod="kube-system/coredns-7c65d6cfc9-4k5ld" podUID="a29ef6dc-4246-436d-87dd-9c8e96247aeb" Jul 10 01:38:52.997000 audit[9225]: NETFILTER_CFG table=filter:165 family=2 entries=11 op=nft_register_rule pid=9225 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 10 01:38:52.997000 audit[9225]: SYSCALL arch=c000003e syscall=46 success=yes exit=3760 a0=3 a1=7fffdd2c9380 a2=0 a3=7fffdd2c936c items=0 ppid=2398 pid=9225 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:38:52.997000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 10 01:38:53.002000 audit[9225]: NETFILTER_CFG table=nat:166 family=2 entries=29 op=nft_register_chain pid=9225 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 10 01:38:53.002000 audit[9225]: SYSCALL arch=c000003e syscall=46 success=yes exit=10116 a0=3 a1=7fffdd2c9380 a2=0 a3=7fffdd2c936c items=0 ppid=2398 pid=9225 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:38:53.002000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 10 01:38:53.363179 kernel: kauditd_printk_skb: 7 callbacks suppressed Jul 10 01:38:53.363269 kernel: audit: type=1130 audit(1752111533.356:724): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@36-139.178.70.102:22-139.178.68.195:47930 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 01:38:53.356000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@36-139.178.70.102:22-139.178.68.195:47930 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 01:38:53.357709 systemd[1]: Started sshd@36-139.178.70.102:22-139.178.68.195:47930.service. 
Jul 10 01:38:53.396000 audit[9231]: USER_ACCT pid=9231 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:38:53.398392 sshd[9231]: Accepted publickey for core from 139.178.68.195 port 47930 ssh2: RSA SHA256:NVpdRDPpwzjVTzi6orhe1cA9BvcYymCSReGH8myOy/Q Jul 10 01:38:53.402667 kernel: audit: type=1101 audit(1752111533.396:725): pid=9231 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:38:53.401000 audit[9231]: CRED_ACQ pid=9231 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:38:53.403881 sshd[9231]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 10 01:38:53.410300 kernel: audit: type=1103 audit(1752111533.401:726): pid=9231 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:38:53.410336 kernel: audit: type=1006 audit(1752111533.402:727): pid=9231 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=38 res=1 Jul 10 01:38:53.410375 kernel: audit: type=1300 audit(1752111533.402:727): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffdb493ee10 a2=3 a3=0 items=0 ppid=1 pid=9231 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=38 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:38:53.402000 audit[9231]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffdb493ee10 a2=3 a3=0 items=0 ppid=1 pid=9231 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=38 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:38:53.402000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jul 10 01:38:53.416657 kernel: audit: type=1327 audit(1752111533.402:727): proctitle=737368643A20636F7265205B707269765D Jul 10 01:38:53.418579 systemd[1]: Started session-38.scope. Jul 10 01:38:53.418909 systemd-logind[1351]: New session 38 of user core. 
Jul 10 01:38:53.420000 audit[9231]: USER_START pid=9231 uid=0 auid=500 ses=38 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:38:53.421000 audit[9234]: CRED_ACQ pid=9234 uid=0 auid=500 ses=38 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:38:53.430224 kernel: audit: type=1105 audit(1752111533.420:728): pid=9231 uid=0 auid=500 ses=38 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:38:53.430253 kernel: audit: type=1103 audit(1752111533.421:729): pid=9234 uid=0 auid=500 ses=38 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:38:53.495155 kubelet[2299]: I0710 01:38:53.494949 2299 scope.go:117] "RemoveContainer" containerID="1d85ec74d241860eeadf05dad7e3fcac3b836bb5b8e411f5de5ce4e21f282532" Jul 10 01:38:53.498456 env[1363]: time="2025-07-10T01:38:53.498427746Z" level=info msg="CreateContainer within sandbox \"01443d9289a0bbe23feae26cd6280fa2fd433168d943a8fed752302c7264f2ab\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}" Jul 10 01:38:53.507733 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1038577947.mount: Deactivated successfully. Jul 10 01:38:53.511335 env[1363]: time="2025-07-10T01:38:53.511217381Z" level=info msg="CreateContainer within sandbox \"01443d9289a0bbe23feae26cd6280fa2fd433168d943a8fed752302c7264f2ab\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"6b949e60615bbb747c688f6ac3784603799a0d29d1ec74a219b0379cba4917b8\"" Jul 10 01:38:53.512837 env[1363]: time="2025-07-10T01:38:53.512582055Z" level=info msg="StartContainer for \"6b949e60615bbb747c688f6ac3784603799a0d29d1ec74a219b0379cba4917b8\"" Jul 10 01:38:53.549139 env[1363]: time="2025-07-10T01:38:53.549115625Z" level=info msg="StartContainer for \"6b949e60615bbb747c688f6ac3784603799a0d29d1ec74a219b0379cba4917b8\" returns successfully" Jul 10 01:38:53.559649 sshd[9231]: pam_unix(sshd:session): session closed for user core Jul 10 01:38:53.560381 systemd[1]: Started sshd@37-139.178.70.102:22-139.178.68.195:47944.service. Jul 10 01:38:53.558000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@37-139.178.70.102:22-139.178.68.195:47944 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 01:38:53.565660 kernel: audit: type=1130 audit(1752111533.558:730): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@37-139.178.70.102:22-139.178.68.195:47944 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 10 01:38:53.565000 audit[9231]: USER_END pid=9231 uid=0 auid=500 ses=38 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:38:53.567725 systemd[1]: sshd@36-139.178.70.102:22-139.178.68.195:47930.service: Deactivated successfully. Jul 10 01:38:53.568235 systemd[1]: session-38.scope: Deactivated successfully. Jul 10 01:38:53.569085 systemd-logind[1351]: Session 38 logged out. Waiting for processes to exit. Jul 10 01:38:53.569694 systemd-logind[1351]: Removed session 38. Jul 10 01:38:53.571667 kernel: audit: type=1106 audit(1752111533.565:731): pid=9231 uid=0 auid=500 ses=38 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:38:53.565000 audit[9231]: CRED_DISP pid=9231 uid=0 auid=500 ses=38 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:38:53.566000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@36-139.178.70.102:22-139.178.68.195:47930 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 01:38:53.603000 audit[9271]: USER_ACCT pid=9271 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:38:53.604932 sshd[9271]: Accepted publickey for core from 139.178.68.195 port 47944 ssh2: RSA SHA256:NVpdRDPpwzjVTzi6orhe1cA9BvcYymCSReGH8myOy/Q Jul 10 01:38:53.604000 audit[9271]: CRED_ACQ pid=9271 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:38:53.604000 audit[9271]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fffb25b5940 a2=3 a3=0 items=0 ppid=1 pid=9271 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=39 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:38:53.604000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jul 10 01:38:53.606294 sshd[9271]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 10 01:38:53.610088 systemd[1]: Started session-39.scope. Jul 10 01:38:53.610323 systemd-logind[1351]: New session 39 of user core. 
Jul 10 01:38:53.614000 audit[9271]: USER_START pid=9271 uid=0 auid=500 ses=39 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:38:53.615000 audit[9276]: CRED_ACQ pid=9276 uid=0 auid=500 ses=39 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:38:54.521754 sshd[9271]: pam_unix(sshd:session): session closed for user core Jul 10 01:38:54.523268 systemd[1]: Started sshd@38-139.178.70.102:22-139.178.68.195:47958.service. Jul 10 01:38:54.521000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@38-139.178.70.102:22-139.178.68.195:47958 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 01:38:54.522000 audit[9271]: USER_END pid=9271 uid=0 auid=500 ses=39 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:38:54.522000 audit[9271]: CRED_DISP pid=9271 uid=0 auid=500 ses=39 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:38:54.526833 systemd[1]: sshd@37-139.178.70.102:22-139.178.68.195:47944.service: Deactivated successfully. Jul 10 01:38:54.525000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@37-139.178.70.102:22-139.178.68.195:47944 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 01:38:54.527304 systemd[1]: session-39.scope: Deactivated successfully. Jul 10 01:38:54.530885 systemd-logind[1351]: Session 39 logged out. Waiting for processes to exit. Jul 10 01:38:54.531337 systemd-logind[1351]: Removed session 39. 
Jul 10 01:38:54.561000 audit[9305]: USER_ACCT pid=9305 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:38:54.563252 sshd[9305]: Accepted publickey for core from 139.178.68.195 port 47958 ssh2: RSA SHA256:NVpdRDPpwzjVTzi6orhe1cA9BvcYymCSReGH8myOy/Q Jul 10 01:38:54.562000 audit[9305]: CRED_ACQ pid=9305 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:38:54.562000 audit[9305]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc6979b980 a2=3 a3=0 items=0 ppid=1 pid=9305 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=40 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:38:54.562000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jul 10 01:38:54.564232 sshd[9305]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 10 01:38:54.567028 systemd[1]: Started session-40.scope. Jul 10 01:38:54.567143 systemd-logind[1351]: New session 40 of user core. Jul 10 01:38:54.568000 audit[9305]: USER_START pid=9305 uid=0 auid=500 ses=40 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:38:54.569000 audit[9310]: CRED_ACQ pid=9310 uid=0 auid=500 ses=40 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:38:54.666283 sshd[9305]: pam_unix(sshd:session): session closed for user core Jul 10 01:38:54.665000 audit[9305]: USER_END pid=9305 uid=0 auid=500 ses=40 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:38:54.665000 audit[9305]: CRED_DISP pid=9305 uid=0 auid=500 ses=40 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:38:54.667868 systemd[1]: sshd@38-139.178.70.102:22-139.178.68.195:47958.service: Deactivated successfully. Jul 10 01:38:54.666000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@38-139.178.70.102:22-139.178.68.195:47958 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 01:38:54.668582 systemd[1]: session-40.scope: Deactivated successfully. Jul 10 01:38:54.668801 systemd-logind[1351]: Session 40 logged out. Waiting for processes to exit. Jul 10 01:38:54.669251 systemd-logind[1351]: Removed session 40. 
Jul 10 01:38:55.050000 audit[9365]: AVC avc: denied { write } for pid=9365 comm="tee" name="fd" dev="proc" ino=115192 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Jul 10 01:38:55.051000 audit[9363]: AVC avc: denied { write } for pid=9363 comm="tee" name="fd" dev="proc" ino=115195 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Jul 10 01:38:55.050000 audit[9365]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7fff0c5807ef a2=241 a3=1b6 items=1 ppid=9331 pid=9365 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:38:55.050000 audit: CWD cwd="/etc/service/enabled/cni/log" Jul 10 01:38:55.050000 audit: PATH item=0 name="/dev/fd/63" inode=115175 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 01:38:55.050000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Jul 10 01:38:55.051000 audit[9363]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffce170e7de a2=241 a3=1b6 items=1 ppid=9328 pid=9363 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:38:55.051000 audit: CWD cwd="/etc/service/enabled/node-status-reporter/log" Jul 10 01:38:55.051000 audit: PATH item=0 name="/dev/fd/63" inode=115174 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 01:38:55.051000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Jul 10 01:38:55.054000 audit[9369]: AVC avc: denied { write } for pid=9369 comm="tee" name="fd" dev="proc" ino=115200 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Jul 10 01:38:55.054000 audit[9369]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7fffc39ec7dd a2=241 a3=1b6 items=1 ppid=9330 pid=9369 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:38:55.054000 audit: CWD cwd="/etc/service/enabled/allocate-tunnel-addrs/log" Jul 10 01:38:55.054000 audit: PATH item=0 name="/dev/fd/63" inode=115184 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 01:38:55.054000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Jul 10 01:38:55.058000 audit[9377]: AVC avc: denied { write } for pid=9377 comm="tee" name="fd" dev="proc" ino=115206 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Jul 10 01:38:55.058000 audit[9377]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffcf0be67ed a2=241 a3=1b6 items=1 ppid=9334 pid=9377 auid=4294967295 
uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:38:55.058000 audit: CWD cwd="/etc/service/enabled/bird6/log" Jul 10 01:38:55.058000 audit: PATH item=0 name="/dev/fd/63" inode=115189 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 01:38:55.058000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Jul 10 01:38:55.072000 audit[9387]: AVC avc: denied { write } for pid=9387 comm="tee" name="fd" dev="proc" ino=115216 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Jul 10 01:38:55.072000 audit[9387]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffecc4147ed a2=241 a3=1b6 items=1 ppid=9347 pid=9387 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:38:55.072000 audit: CWD cwd="/etc/service/enabled/felix/log" Jul 10 01:38:55.072000 audit: PATH item=0 name="/dev/fd/63" inode=115202 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 01:38:55.072000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Jul 10 01:38:55.089000 audit[9394]: AVC avc: denied { write } for pid=9394 comm="tee" name="fd" dev="proc" ino=115224 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Jul 10 01:38:55.089000 audit[9394]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7ffd6c5807ee a2=241 a3=1b6 items=1 ppid=9333 pid=9394 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:38:55.089000 audit: CWD cwd="/etc/service/enabled/bird/log" Jul 10 01:38:55.089000 audit: PATH item=0 name="/dev/fd/63" inode=115796 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 01:38:55.089000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Jul 10 01:38:55.181000 audit[9402]: AVC avc: denied { write } for pid=9402 comm="tee" name="fd" dev="proc" ino=115230 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Jul 10 01:38:55.181000 audit[9402]: SYSCALL arch=c000003e syscall=257 success=yes exit=3 a0=ffffff9c a1=7fffe80627ed a2=241 a3=1b6 items=1 ppid=9342 pid=9402 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:38:55.181000 audit: CWD cwd="/etc/service/enabled/confd/log" Jul 10 01:38:55.181000 audit: PATH item=0 name="/dev/fd/63" inode=115800 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 
cap_fe=0 cap_fver=0 cap_frootid=0 Jul 10 01:38:55.181000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Jul 10 01:38:56.435104 systemd[1]: run-containerd-runc-k8s.io-1f6123a2530db4e70f4a3e6b47c035375d25ef2c7098e1ca906af5c39164caa1-runc.3RCzJT.mount: Deactivated successfully. Jul 10 01:38:57.494522 env[1363]: time="2025-07-10T01:38:57.494495553Z" level=info msg="StopPodSandbox for \"d50fd4405e1f03ed2cdfbef802c2261b6b6ef77dbd652ba6fa35f73abffba742\"" Jul 10 01:38:57.494935 env[1363]: time="2025-07-10T01:38:57.494915636Z" level=info msg="Container to stop \"0a7b9b0ea47aa889b6d5597d41d9f5ecf3ccc392f2d5f74cd7be134b392cec28\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 10 01:38:57.560808 systemd-networkd[1114]: calida2f92a11f8: Link DOWN Jul 10 01:38:57.560812 systemd-networkd[1114]: calida2f92a11f8: Lost carrier Jul 10 01:38:57.865764 env[1363]: 2025-07-10 01:38:57.545 [INFO][9476] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d50fd4405e1f03ed2cdfbef802c2261b6b6ef77dbd652ba6fa35f73abffba742" Jul 10 01:38:57.865764 env[1363]: 2025-07-10 01:38:57.545 [INFO][9476] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="d50fd4405e1f03ed2cdfbef802c2261b6b6ef77dbd652ba6fa35f73abffba742" iface="eth0" netns="/var/run/netns/cni-0f1c18ea-c4ae-14c5-28ad-00b513e25ee7" Jul 10 01:38:57.865764 env[1363]: 2025-07-10 01:38:57.546 [INFO][9476] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="d50fd4405e1f03ed2cdfbef802c2261b6b6ef77dbd652ba6fa35f73abffba742" iface="eth0" netns="/var/run/netns/cni-0f1c18ea-c4ae-14c5-28ad-00b513e25ee7" Jul 10 01:38:57.865764 env[1363]: 2025-07-10 01:38:57.557 [INFO][9476] cni-plugin/dataplane_linux.go 604: Deleted device in netns. ContainerID="d50fd4405e1f03ed2cdfbef802c2261b6b6ef77dbd652ba6fa35f73abffba742" after=12.177156ms iface="eth0" netns="/var/run/netns/cni-0f1c18ea-c4ae-14c5-28ad-00b513e25ee7" Jul 10 01:38:57.865764 env[1363]: 2025-07-10 01:38:57.557 [INFO][9476] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d50fd4405e1f03ed2cdfbef802c2261b6b6ef77dbd652ba6fa35f73abffba742" Jul 10 01:38:57.865764 env[1363]: 2025-07-10 01:38:57.557 [INFO][9476] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d50fd4405e1f03ed2cdfbef802c2261b6b6ef77dbd652ba6fa35f73abffba742" Jul 10 01:38:57.865764 env[1363]: 2025-07-10 01:38:57.823 [INFO][9483] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d50fd4405e1f03ed2cdfbef802c2261b6b6ef77dbd652ba6fa35f73abffba742" HandleID="k8s-pod-network.d50fd4405e1f03ed2cdfbef802c2261b6b6ef77dbd652ba6fa35f73abffba742" Workload="localhost-k8s-goldmane--58fd7646b9--zxwst-eth0" Jul 10 01:38:57.865764 env[1363]: 2025-07-10 01:38:57.826 [INFO][9483] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 10 01:38:57.865764 env[1363]: 2025-07-10 01:38:57.827 [INFO][9483] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 10 01:38:57.865764 env[1363]: 2025-07-10 01:38:57.859 [INFO][9483] ipam/ipam_plugin.go 431: Released address using handleID ContainerID="d50fd4405e1f03ed2cdfbef802c2261b6b6ef77dbd652ba6fa35f73abffba742" HandleID="k8s-pod-network.d50fd4405e1f03ed2cdfbef802c2261b6b6ef77dbd652ba6fa35f73abffba742" Workload="localhost-k8s-goldmane--58fd7646b9--zxwst-eth0" Jul 10 01:38:57.865764 env[1363]: 2025-07-10 01:38:57.859 [INFO][9483] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d50fd4405e1f03ed2cdfbef802c2261b6b6ef77dbd652ba6fa35f73abffba742" HandleID="k8s-pod-network.d50fd4405e1f03ed2cdfbef802c2261b6b6ef77dbd652ba6fa35f73abffba742" Workload="localhost-k8s-goldmane--58fd7646b9--zxwst-eth0" Jul 10 01:38:57.865764 env[1363]: 2025-07-10 01:38:57.860 [INFO][9483] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 10 01:38:57.865764 env[1363]: 2025-07-10 01:38:57.863 [INFO][9476] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="d50fd4405e1f03ed2cdfbef802c2261b6b6ef77dbd652ba6fa35f73abffba742" Jul 10 01:38:57.866446 env[1363]: time="2025-07-10T01:38:57.866413504Z" level=info msg="TearDown network for sandbox \"d50fd4405e1f03ed2cdfbef802c2261b6b6ef77dbd652ba6fa35f73abffba742\" successfully" Jul 10 01:38:57.866518 env[1363]: time="2025-07-10T01:38:57.866503567Z" level=info msg="StopPodSandbox for \"d50fd4405e1f03ed2cdfbef802c2261b6b6ef77dbd652ba6fa35f73abffba742\" returns successfully" Jul 10 01:38:57.866890 env[1363]: time="2025-07-10T01:38:57.866874375Z" level=info msg="StopPodSandbox for \"3e37249528bb3e0be92befd65b6647a34c4c854d8942b3cdda871096eeadbddb\"" Jul 10 01:38:57.869960 systemd[1]: run-netns-cni\x2d0f1c18ea\x2dc4ae\x2d14c5\x2d28ad\x2d00b513e25ee7.mount: Deactivated successfully. Jul 10 01:38:57.939821 env[1363]: 2025-07-10 01:38:57.893 [WARNING][9500] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="3e37249528bb3e0be92befd65b6647a34c4c854d8942b3cdda871096eeadbddb" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--58fd7646b9--zxwst-eth0", GenerateName:"goldmane-58fd7646b9-", Namespace:"calico-system", SelfLink:"", UID:"ced04dc5-79ee-4a07-a568-b0fd4007f64c", ResourceVersion:"3971", Generation:0, CreationTimestamp:time.Date(2025, time.July, 10, 1, 13, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"58fd7646b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d50fd4405e1f03ed2cdfbef802c2261b6b6ef77dbd652ba6fa35f73abffba742", Pod:"goldmane-58fd7646b9-zxwst", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calida2f92a11f8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 10 01:38:57.939821 env[1363]: 2025-07-10 01:38:57.903 [INFO][9500] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="3e37249528bb3e0be92befd65b6647a34c4c854d8942b3cdda871096eeadbddb" Jul 10 01:38:57.939821 env[1363]: 2025-07-10 01:38:57.903 [INFO][9500] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="3e37249528bb3e0be92befd65b6647a34c4c854d8942b3cdda871096eeadbddb" iface="eth0" netns="" Jul 10 01:38:57.939821 env[1363]: 2025-07-10 01:38:57.903 [INFO][9500] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="3e37249528bb3e0be92befd65b6647a34c4c854d8942b3cdda871096eeadbddb" Jul 10 01:38:57.939821 env[1363]: 2025-07-10 01:38:57.903 [INFO][9500] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3e37249528bb3e0be92befd65b6647a34c4c854d8942b3cdda871096eeadbddb" Jul 10 01:38:57.939821 env[1363]: 2025-07-10 01:38:57.931 [INFO][9507] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3e37249528bb3e0be92befd65b6647a34c4c854d8942b3cdda871096eeadbddb" HandleID="k8s-pod-network.3e37249528bb3e0be92befd65b6647a34c4c854d8942b3cdda871096eeadbddb" Workload="localhost-k8s-goldmane--58fd7646b9--zxwst-eth0" Jul 10 01:38:57.939821 env[1363]: 2025-07-10 01:38:57.931 [INFO][9507] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 10 01:38:57.939821 env[1363]: 2025-07-10 01:38:57.931 [INFO][9507] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 10 01:38:57.939821 env[1363]: 2025-07-10 01:38:57.935 [WARNING][9507] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3e37249528bb3e0be92befd65b6647a34c4c854d8942b3cdda871096eeadbddb" HandleID="k8s-pod-network.3e37249528bb3e0be92befd65b6647a34c4c854d8942b3cdda871096eeadbddb" Workload="localhost-k8s-goldmane--58fd7646b9--zxwst-eth0" Jul 10 01:38:57.939821 env[1363]: 2025-07-10 01:38:57.935 [INFO][9507] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3e37249528bb3e0be92befd65b6647a34c4c854d8942b3cdda871096eeadbddb" HandleID="k8s-pod-network.3e37249528bb3e0be92befd65b6647a34c4c854d8942b3cdda871096eeadbddb" Workload="localhost-k8s-goldmane--58fd7646b9--zxwst-eth0" Jul 10 01:38:57.939821 env[1363]: 2025-07-10 01:38:57.936 [INFO][9507] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 10 01:38:57.939821 env[1363]: 2025-07-10 01:38:57.938 [INFO][9500] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="3e37249528bb3e0be92befd65b6647a34c4c854d8942b3cdda871096eeadbddb" Jul 10 01:38:57.940305 env[1363]: time="2025-07-10T01:38:57.940281455Z" level=info msg="TearDown network for sandbox \"3e37249528bb3e0be92befd65b6647a34c4c854d8942b3cdda871096eeadbddb\" successfully" Jul 10 01:38:57.940369 env[1363]: time="2025-07-10T01:38:57.940357098Z" level=info msg="StopPodSandbox for \"3e37249528bb3e0be92befd65b6647a34c4c854d8942b3cdda871096eeadbddb\" returns successfully" Jul 10 01:38:58.061169 kubelet[2299]: I0710 01:38:58.060944 2299 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ced04dc5-79ee-4a07-a568-b0fd4007f64c-goldmane-ca-bundle\") pod \"ced04dc5-79ee-4a07-a568-b0fd4007f64c\" (UID: \"ced04dc5-79ee-4a07-a568-b0fd4007f64c\") " Jul 10 01:38:58.061169 kubelet[2299]: I0710 01:38:58.060994 2299 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/ced04dc5-79ee-4a07-a568-b0fd4007f64c-goldmane-key-pair\") pod \"ced04dc5-79ee-4a07-a568-b0fd4007f64c\" (UID: \"ced04dc5-79ee-4a07-a568-b0fd4007f64c\") " Jul 10 01:38:58.061169 kubelet[2299]: I0710 01:38:58.061010 2299 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k6qsp\" (UniqueName: \"kubernetes.io/projected/ced04dc5-79ee-4a07-a568-b0fd4007f64c-kube-api-access-k6qsp\") pod \"ced04dc5-79ee-4a07-a568-b0fd4007f64c\" (UID: \"ced04dc5-79ee-4a07-a568-b0fd4007f64c\") " Jul 10 01:38:58.075058 kubelet[2299]: I0710 01:38:58.075034 2299 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ced04dc5-79ee-4a07-a568-b0fd4007f64c-goldmane-ca-bundle" (OuterVolumeSpecName: "goldmane-ca-bundle") pod "ced04dc5-79ee-4a07-a568-b0fd4007f64c" (UID: "ced04dc5-79ee-4a07-a568-b0fd4007f64c"). InnerVolumeSpecName "goldmane-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jul 10 01:38:58.096152 systemd[1]: var-lib-kubelet-pods-ced04dc5\x2d79ee\x2d4a07\x2da568\x2db0fd4007f64c-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dk6qsp.mount: Deactivated successfully. Jul 10 01:38:58.098855 systemd[1]: var-lib-kubelet-pods-ced04dc5\x2d79ee\x2d4a07\x2da568\x2db0fd4007f64c-volumes-kubernetes.io\x7esecret-goldmane\x2dkey\x2dpair.mount: Deactivated successfully. 
Jul 10 01:38:58.101700 kubelet[2299]: I0710 01:38:58.101674 2299 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ced04dc5-79ee-4a07-a568-b0fd4007f64c-kube-api-access-k6qsp" (OuterVolumeSpecName: "kube-api-access-k6qsp") pod "ced04dc5-79ee-4a07-a568-b0fd4007f64c" (UID: "ced04dc5-79ee-4a07-a568-b0fd4007f64c"). InnerVolumeSpecName "kube-api-access-k6qsp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jul 10 01:38:58.101813 kubelet[2299]: I0710 01:38:58.101774 2299 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ced04dc5-79ee-4a07-a568-b0fd4007f64c-goldmane-key-pair" (OuterVolumeSpecName: "goldmane-key-pair") pod "ced04dc5-79ee-4a07-a568-b0fd4007f64c" (UID: "ced04dc5-79ee-4a07-a568-b0fd4007f64c"). InnerVolumeSpecName "goldmane-key-pair". PluginName "kubernetes.io/secret", VolumeGidValue "" Jul 10 01:38:58.164026 kubelet[2299]: I0710 01:38:58.163221 2299 reconciler_common.go:293] "Volume detached for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/ced04dc5-79ee-4a07-a568-b0fd4007f64c-goldmane-key-pair\") on node \"localhost\" DevicePath \"\"" Jul 10 01:38:58.164026 kubelet[2299]: I0710 01:38:58.163252 2299 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k6qsp\" (UniqueName: \"kubernetes.io/projected/ced04dc5-79ee-4a07-a568-b0fd4007f64c-kube-api-access-k6qsp\") on node \"localhost\" DevicePath \"\"" Jul 10 01:38:58.164026 kubelet[2299]: I0710 01:38:58.163263 2299 reconciler_common.go:293] "Volume detached for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ced04dc5-79ee-4a07-a568-b0fd4007f64c-goldmane-ca-bundle\") on node \"localhost\" DevicePath \"\"" Jul 10 01:38:58.495910 env[1363]: time="2025-07-10T01:38:58.495159711Z" level=info msg="StopPodSandbox for \"47b065192ffd0b7504649af3406bb653c598c34d33430dd9e03fcdcb34aca714\"" Jul 10 01:38:58.495910 env[1363]: time="2025-07-10T01:38:58.495220793Z" level=info msg="Container to stop \"225609f895efa2c766fff4c357d326a94aeff3f2dc9267546e9096e0fdcbf87a\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 10 01:38:58.533736 systemd-networkd[1114]: calie0bf60675d7: Link DOWN Jul 10 01:38:58.533741 systemd-networkd[1114]: calie0bf60675d7: Lost carrier Jul 10 01:38:58.592669 env[1363]: 2025-07-10 01:38:58.531 [INFO][9538] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="47b065192ffd0b7504649af3406bb653c598c34d33430dd9e03fcdcb34aca714" Jul 10 01:38:58.592669 env[1363]: 2025-07-10 01:38:58.531 [INFO][9538] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="47b065192ffd0b7504649af3406bb653c598c34d33430dd9e03fcdcb34aca714" iface="eth0" netns="/var/run/netns/cni-402e3fde-3862-ebcc-e1e3-80efc8477cda" Jul 10 01:38:58.592669 env[1363]: 2025-07-10 01:38:58.532 [INFO][9538] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="47b065192ffd0b7504649af3406bb653c598c34d33430dd9e03fcdcb34aca714" iface="eth0" netns="/var/run/netns/cni-402e3fde-3862-ebcc-e1e3-80efc8477cda" Jul 10 01:38:58.592669 env[1363]: 2025-07-10 01:38:58.542 [INFO][9538] cni-plugin/dataplane_linux.go 604: Deleted device in netns. 
ContainerID="47b065192ffd0b7504649af3406bb653c598c34d33430dd9e03fcdcb34aca714" after=10.778462ms iface="eth0" netns="/var/run/netns/cni-402e3fde-3862-ebcc-e1e3-80efc8477cda" Jul 10 01:38:58.592669 env[1363]: 2025-07-10 01:38:58.542 [INFO][9538] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="47b065192ffd0b7504649af3406bb653c598c34d33430dd9e03fcdcb34aca714" Jul 10 01:38:58.592669 env[1363]: 2025-07-10 01:38:58.542 [INFO][9538] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="47b065192ffd0b7504649af3406bb653c598c34d33430dd9e03fcdcb34aca714" Jul 10 01:38:58.592669 env[1363]: 2025-07-10 01:38:58.561 [INFO][9551] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="47b065192ffd0b7504649af3406bb653c598c34d33430dd9e03fcdcb34aca714" HandleID="k8s-pod-network.47b065192ffd0b7504649af3406bb653c598c34d33430dd9e03fcdcb34aca714" Workload="localhost-k8s-coredns--7c65d6cfc9--snhl5-eth0" Jul 10 01:38:58.592669 env[1363]: 2025-07-10 01:38:58.563 [INFO][9551] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 10 01:38:58.592669 env[1363]: 2025-07-10 01:38:58.563 [INFO][9551] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 10 01:38:58.592669 env[1363]: 2025-07-10 01:38:58.589 [INFO][9551] ipam/ipam_plugin.go 431: Released address using handleID ContainerID="47b065192ffd0b7504649af3406bb653c598c34d33430dd9e03fcdcb34aca714" HandleID="k8s-pod-network.47b065192ffd0b7504649af3406bb653c598c34d33430dd9e03fcdcb34aca714" Workload="localhost-k8s-coredns--7c65d6cfc9--snhl5-eth0" Jul 10 01:38:58.592669 env[1363]: 2025-07-10 01:38:58.589 [INFO][9551] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="47b065192ffd0b7504649af3406bb653c598c34d33430dd9e03fcdcb34aca714" HandleID="k8s-pod-network.47b065192ffd0b7504649af3406bb653c598c34d33430dd9e03fcdcb34aca714" Workload="localhost-k8s-coredns--7c65d6cfc9--snhl5-eth0" Jul 10 01:38:58.592669 env[1363]: 2025-07-10 01:38:58.589 [INFO][9551] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 10 01:38:58.592669 env[1363]: 2025-07-10 01:38:58.591 [INFO][9538] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="47b065192ffd0b7504649af3406bb653c598c34d33430dd9e03fcdcb34aca714" Jul 10 01:38:58.596386 env[1363]: time="2025-07-10T01:38:58.594591105Z" level=info msg="TearDown network for sandbox \"47b065192ffd0b7504649af3406bb653c598c34d33430dd9e03fcdcb34aca714\" successfully" Jul 10 01:38:58.596386 env[1363]: time="2025-07-10T01:38:58.594614164Z" level=info msg="StopPodSandbox for \"47b065192ffd0b7504649af3406bb653c598c34d33430dd9e03fcdcb34aca714\" returns successfully" Jul 10 01:38:58.594482 systemd[1]: run-netns-cni\x2d402e3fde\x2d3862\x2debcc\x2de1e3\x2d80efc8477cda.mount: Deactivated successfully. Jul 10 01:38:58.656503 kubelet[2299]: E0710 01:38:58.656440 2299 configmap.go:193] Couldn't get configMap calico-system/goldmane: failed to sync configmap cache: timed out waiting for the condition Jul 10 01:38:58.656503 kubelet[2299]: E0710 01:38:58.656505 2299 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ced04dc5-79ee-4a07-a568-b0fd4007f64c-config podName:ced04dc5-79ee-4a07-a568-b0fd4007f64c nodeName:}" failed. No retries permitted until 2025-07-10 01:39:30.656488015 +0000 UTC m=+1590.524914904 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/ced04dc5-79ee-4a07-a568-b0fd4007f64c-config") pod "goldmane-58fd7646b9-zxwst" (UID: "ced04dc5-79ee-4a07-a568-b0fd4007f64c") : failed to sync configmap cache: timed out waiting for the condition Jul 10 01:38:58.667230 kubelet[2299]: I0710 01:38:58.667212 2299 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ced04dc5-79ee-4a07-a568-b0fd4007f64c-config\") pod \"ced04dc5-79ee-4a07-a568-b0fd4007f64c\" (UID: \"ced04dc5-79ee-4a07-a568-b0fd4007f64c\") " Jul 10 01:38:58.667304 kubelet[2299]: I0710 01:38:58.667251 2299 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pwvqb\" (UniqueName: \"kubernetes.io/projected/3459c244-a1ae-43bc-ad86-239a6e665757-kube-api-access-pwvqb\") pod \"3459c244-a1ae-43bc-ad86-239a6e665757\" (UID: \"3459c244-a1ae-43bc-ad86-239a6e665757\") " Jul 10 01:38:58.668222 kubelet[2299]: I0710 01:38:58.668205 2299 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ced04dc5-79ee-4a07-a568-b0fd4007f64c-config" (OuterVolumeSpecName: "config") pod "ced04dc5-79ee-4a07-a568-b0fd4007f64c" (UID: "ced04dc5-79ee-4a07-a568-b0fd4007f64c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jul 10 01:38:58.672050 systemd[1]: var-lib-kubelet-pods-3459c244\x2da1ae\x2d43bc\x2dad86\x2d239a6e665757-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dpwvqb.mount: Deactivated successfully. Jul 10 01:38:58.673383 kubelet[2299]: I0710 01:38:58.673357 2299 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3459c244-a1ae-43bc-ad86-239a6e665757-kube-api-access-pwvqb" (OuterVolumeSpecName: "kube-api-access-pwvqb") pod "3459c244-a1ae-43bc-ad86-239a6e665757" (UID: "3459c244-a1ae-43bc-ad86-239a6e665757"). InnerVolumeSpecName "kube-api-access-pwvqb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jul 10 01:38:58.768343 kubelet[2299]: I0710 01:38:58.768320 2299 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pwvqb\" (UniqueName: \"kubernetes.io/projected/3459c244-a1ae-43bc-ad86-239a6e665757-kube-api-access-pwvqb\") on node \"localhost\" DevicePath \"\"" Jul 10 01:38:58.768480 kubelet[2299]: I0710 01:38:58.768469 2299 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ced04dc5-79ee-4a07-a568-b0fd4007f64c-config\") on node \"localhost\" DevicePath \"\"" Jul 10 01:38:58.856985 kubelet[2299]: E0710 01:38:58.856767 2299 configmap.go:193] Couldn't get configMap kube-system/coredns: failed to sync configmap cache: timed out waiting for the condition Jul 10 01:38:58.856985 kubelet[2299]: E0710 01:38:58.856781 2299 secret.go:189] Couldn't get secret calico-apiserver/calico-apiserver-certs: failed to sync secret cache: timed out waiting for the condition Jul 10 01:38:58.856985 kubelet[2299]: E0710 01:38:58.856830 2299 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/a29ef6dc-4246-436d-87dd-9c8e96247aeb-config-volume podName:a29ef6dc-4246-436d-87dd-9c8e96247aeb nodeName:}" failed. No retries permitted until 2025-07-10 01:39:30.856816713 +0000 UTC m=+1590.725243601 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/a29ef6dc-4246-436d-87dd-9c8e96247aeb-config-volume") pod "coredns-7c65d6cfc9-4k5ld" (UID: "a29ef6dc-4246-436d-87dd-9c8e96247aeb") : failed to sync configmap cache: timed out waiting for the condition Jul 10 01:38:58.856985 kubelet[2299]: E0710 01:38:58.856843 2299 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/74cf1bc5-5d5a-4dc7-850a-71013984af05-calico-apiserver-certs podName:74cf1bc5-5d5a-4dc7-850a-71013984af05 nodeName:}" failed. No retries permitted until 2025-07-10 01:39:30.856837692 +0000 UTC m=+1590.725264579 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "calico-apiserver-certs" (UniqueName: "kubernetes.io/secret/74cf1bc5-5d5a-4dc7-850a-71013984af05-calico-apiserver-certs") pod "calico-apiserver-6d44674bc4-b2wqb" (UID: "74cf1bc5-5d5a-4dc7-850a-71013984af05") : failed to sync secret cache: timed out waiting for the condition Jul 10 01:38:58.861182 kubelet[2299]: E0710 01:38:58.860992 2299 configmap.go:193] Couldn't get configMap kube-system/coredns: failed to sync configmap cache: timed out waiting for the condition Jul 10 01:38:58.861182 kubelet[2299]: E0710 01:38:58.861035 2299 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/3459c244-a1ae-43bc-ad86-239a6e665757-config-volume podName:3459c244-a1ae-43bc-ad86-239a6e665757 nodeName:}" failed. No retries permitted until 2025-07-10 01:39:30.861027131 +0000 UTC m=+1590.729454015 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/3459c244-a1ae-43bc-ad86-239a6e665757-config-volume") pod "coredns-7c65d6cfc9-snhl5" (UID: "3459c244-a1ae-43bc-ad86-239a6e665757") : failed to sync configmap cache: timed out waiting for the condition Jul 10 01:38:58.861182 kubelet[2299]: E0710 01:38:58.861088 2299 secret.go:189] Couldn't get secret calico-apiserver/calico-apiserver-certs: failed to sync secret cache: timed out waiting for the condition Jul 10 01:38:58.861182 kubelet[2299]: E0710 01:38:58.861118 2299 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8e8146e9-6407-49b7-8cef-e26dac385734-calico-apiserver-certs podName:8e8146e9-6407-49b7-8cef-e26dac385734 nodeName:}" failed. No retries permitted until 2025-07-10 01:39:30.861110231 +0000 UTC m=+1590.729537115 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "calico-apiserver-certs" (UniqueName: "kubernetes.io/secret/8e8146e9-6407-49b7-8cef-e26dac385734-calico-apiserver-certs") pod "calico-apiserver-6d44674bc4-w2f48" (UID: "8e8146e9-6407-49b7-8cef-e26dac385734") : failed to sync secret cache: timed out waiting for the condition Jul 10 01:38:58.863297 kubelet[2299]: E0710 01:38:58.863260 2299 projected.go:288] Couldn't get configMap calico-apiserver/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Jul 10 01:38:58.863297 kubelet[2299]: E0710 01:38:58.863275 2299 projected.go:194] Error preparing data for projected volume kube-api-access-47zqf for pod calico-apiserver/calico-apiserver-6d44674bc4-b2wqb: failed to sync configmap cache: timed out waiting for the condition Jul 10 01:38:58.863297 kubelet[2299]: E0710 01:38:58.863294 2299 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/74cf1bc5-5d5a-4dc7-850a-71013984af05-kube-api-access-47zqf podName:74cf1bc5-5d5a-4dc7-850a-71013984af05 nodeName:}" failed. 
No retries permitted until 2025-07-10 01:39:30.863287833 +0000 UTC m=+1590.731714718 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-47zqf" (UniqueName: "kubernetes.io/projected/74cf1bc5-5d5a-4dc7-850a-71013984af05-kube-api-access-47zqf") pod "calico-apiserver-6d44674bc4-b2wqb" (UID: "74cf1bc5-5d5a-4dc7-850a-71013984af05") : failed to sync configmap cache: timed out waiting for the condition Jul 10 01:38:58.863414 kubelet[2299]: E0710 01:38:58.863364 2299 projected.go:288] Couldn't get configMap calico-apiserver/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Jul 10 01:38:58.863414 kubelet[2299]: E0710 01:38:58.863372 2299 projected.go:194] Error preparing data for projected volume kube-api-access-r5vvj for pod calico-apiserver/calico-apiserver-6d44674bc4-w2f48: failed to sync configmap cache: timed out waiting for the condition Jul 10 01:38:58.863414 kubelet[2299]: E0710 01:38:58.863387 2299 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/8e8146e9-6407-49b7-8cef-e26dac385734-kube-api-access-r5vvj podName:8e8146e9-6407-49b7-8cef-e26dac385734 nodeName:}" failed. No retries permitted until 2025-07-10 01:39:30.863382092 +0000 UTC m=+1590.731808973 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-r5vvj" (UniqueName: "kubernetes.io/projected/8e8146e9-6407-49b7-8cef-e26dac385734-kube-api-access-r5vvj") pod "calico-apiserver-6d44674bc4-w2f48" (UID: "8e8146e9-6407-49b7-8cef-e26dac385734") : failed to sync configmap cache: timed out waiting for the condition Jul 10 01:38:58.864577 kubelet[2299]: E0710 01:38:58.864561 2299 configmap.go:193] Couldn't get configMap calico-system/whisker-ca-bundle: failed to sync configmap cache: timed out waiting for the condition Jul 10 01:38:58.864624 kubelet[2299]: E0710 01:38:58.864585 2299 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c3f9faf5-cc25-4483-beb9-5dea29a71367-whisker-ca-bundle podName:c3f9faf5-cc25-4483-beb9-5dea29a71367 nodeName:}" failed. No retries permitted until 2025-07-10 01:39:30.864579345 +0000 UTC m=+1590.733006231 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "whisker-ca-bundle" (UniqueName: "kubernetes.io/configmap/c3f9faf5-cc25-4483-beb9-5dea29a71367-whisker-ca-bundle") pod "whisker-5bc4d9bd7d-nwwj6" (UID: "c3f9faf5-cc25-4483-beb9-5dea29a71367") : failed to sync configmap cache: timed out waiting for the condition Jul 10 01:38:58.869038 kubelet[2299]: I0710 01:38:58.869021 2299 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3459c244-a1ae-43bc-ad86-239a6e665757-config-volume\") pod \"3459c244-a1ae-43bc-ad86-239a6e665757\" (UID: \"3459c244-a1ae-43bc-ad86-239a6e665757\") " Jul 10 01:38:58.870095 kubelet[2299]: I0710 01:38:58.870078 2299 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3459c244-a1ae-43bc-ad86-239a6e665757-config-volume" (OuterVolumeSpecName: "config-volume") pod "3459c244-a1ae-43bc-ad86-239a6e665757" (UID: "3459c244-a1ae-43bc-ad86-239a6e665757"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jul 10 01:38:58.969980 kubelet[2299]: I0710 01:38:58.969957 2299 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3459c244-a1ae-43bc-ad86-239a6e665757-config-volume\") on node \"localhost\" DevicePath \"\"" Jul 10 01:38:59.668226 systemd[1]: Started sshd@39-139.178.70.102:22-139.178.68.195:45248.service. Jul 10 01:38:59.666000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@39-139.178.70.102:22-139.178.68.195:45248 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 01:38:59.669412 kernel: kauditd_printk_skb: 58 callbacks suppressed Jul 10 01:38:59.669456 kernel: audit: type=1130 audit(1752111539.666:758): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@39-139.178.70.102:22-139.178.68.195:45248 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 01:38:59.734000 audit[9579]: USER_ACCT pid=9579 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:38:59.736452 sshd[9579]: Accepted publickey for core from 139.178.68.195 port 45248 ssh2: RSA SHA256:NVpdRDPpwzjVTzi6orhe1cA9BvcYymCSReGH8myOy/Q Jul 10 01:38:59.740688 kernel: audit: type=1101 audit(1752111539.734:759): pid=9579 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:38:59.739000 audit[9579]: CRED_ACQ pid=9579 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:38:59.745688 kernel: audit: type=1103 audit(1752111539.739:760): pid=9579 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:38:59.745730 kernel: audit: type=1006 audit(1752111539.739:761): pid=9579 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=41 res=1 Jul 10 01:38:59.739000 audit[9579]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffedbbe4ab0 a2=3 a3=0 items=0 ppid=1 pid=9579 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=41 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:38:59.752350 kernel: audit: type=1300 audit(1752111539.739:761): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffedbbe4ab0 a2=3 a3=0 items=0 ppid=1 pid=9579 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=41 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:38:59.752391 kernel: audit: type=1327 audit(1752111539.739:761): proctitle=737368643A20636F7265205B707269765D Jul 10 01:38:59.739000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jul 10 01:38:59.753811 sshd[9579]: 
pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 10 01:38:59.758342 systemd[1]: Started session-41.scope. Jul 10 01:38:59.759065 systemd-logind[1351]: New session 41 of user core. Jul 10 01:38:59.761000 audit[9579]: USER_START pid=9579 uid=0 auid=500 ses=41 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:38:59.767067 kernel: audit: type=1105 audit(1752111539.761:762): pid=9579 uid=0 auid=500 ses=41 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:38:59.767105 kernel: audit: type=1103 audit(1752111539.765:763): pid=9582 uid=0 auid=500 ses=41 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:38:59.765000 audit[9582]: CRED_ACQ pid=9582 uid=0 auid=500 ses=41 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:38:59.912505 sshd[9579]: pam_unix(sshd:session): session closed for user core Jul 10 01:38:59.911000 audit[9579]: USER_END pid=9579 uid=0 auid=500 ses=41 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:38:59.914421 systemd-logind[1351]: Session 41 logged out. Waiting for processes to exit. Jul 10 01:38:59.915343 systemd[1]: sshd@39-139.178.70.102:22-139.178.68.195:45248.service: Deactivated successfully. Jul 10 01:38:59.915847 systemd[1]: session-41.scope: Deactivated successfully. Jul 10 01:38:59.916896 systemd-logind[1351]: Removed session 41. Jul 10 01:38:59.911000 audit[9579]: CRED_DISP pid=9579 uid=0 auid=500 ses=41 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:38:59.920106 kernel: audit: type=1106 audit(1752111539.911:764): pid=9579 uid=0 auid=500 ses=41 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:38:59.920144 kernel: audit: type=1104 audit(1752111539.911:765): pid=9579 uid=0 auid=500 ses=41 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:38:59.911000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@39-139.178.70.102:22-139.178.68.195:45248 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 10 01:39:00.495945 env[1363]: time="2025-07-10T01:39:00.495674135Z" level=info msg="StopPodSandbox for \"6503e247e079e9b1040ac4f9c23ba0f9f2bd42e5328355dba03928c27dd6e73b\"" Jul 10 01:39:00.495945 env[1363]: time="2025-07-10T01:39:00.495739872Z" level=info msg="Container to stop \"8b834cb8605645f5c7c427dfcf3dcbb8b3ef5c7c0f8f023ef0d1fbc5a5c10bd4\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 10 01:39:00.495945 env[1363]: time="2025-07-10T01:39:00.495784883Z" level=info msg="StopPodSandbox for \"131d31244e534a733a530103ddea3666cd2eb72fb0933d89a095d6d044cd52d3\"" Jul 10 01:39:00.495945 env[1363]: time="2025-07-10T01:39:00.495827444Z" level=info msg="Container to stop \"915c58c03353ee54736489abf3797867734b173634b282af0191665aad606e66\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 10 01:39:00.499140 kubelet[2299]: I0710 01:39:00.499115 2299 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3459c244-a1ae-43bc-ad86-239a6e665757" path="/var/lib/kubelet/pods/3459c244-a1ae-43bc-ad86-239a6e665757/volumes" Jul 10 01:39:00.500621 kubelet[2299]: I0710 01:39:00.500606 2299 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ced04dc5-79ee-4a07-a568-b0fd4007f64c" path="/var/lib/kubelet/pods/ced04dc5-79ee-4a07-a568-b0fd4007f64c/volumes" Jul 10 01:39:00.563663 systemd-networkd[1114]: calif65d54f8885: Link DOWN Jul 10 01:39:00.563668 systemd-networkd[1114]: calif65d54f8885: Lost carrier Jul 10 01:39:00.565313 systemd-networkd[1114]: cali96674cf1f80: Link DOWN Jul 10 01:39:00.565318 systemd-networkd[1114]: cali96674cf1f80: Lost carrier Jul 10 01:39:00.632599 env[1363]: 2025-07-10 01:39:00.562 [INFO][9617] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="6503e247e079e9b1040ac4f9c23ba0f9f2bd42e5328355dba03928c27dd6e73b" Jul 10 01:39:00.632599 env[1363]: 2025-07-10 01:39:00.563 [INFO][9617] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="6503e247e079e9b1040ac4f9c23ba0f9f2bd42e5328355dba03928c27dd6e73b" iface="eth0" netns="/var/run/netns/cni-5f53ce43-8c83-7bdf-38c5-e3d0ab34b7cf" Jul 10 01:39:00.632599 env[1363]: 2025-07-10 01:39:00.563 [INFO][9617] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="6503e247e079e9b1040ac4f9c23ba0f9f2bd42e5328355dba03928c27dd6e73b" iface="eth0" netns="/var/run/netns/cni-5f53ce43-8c83-7bdf-38c5-e3d0ab34b7cf" Jul 10 01:39:00.632599 env[1363]: 2025-07-10 01:39:00.571 [INFO][9617] cni-plugin/dataplane_linux.go 604: Deleted device in netns. 
ContainerID="6503e247e079e9b1040ac4f9c23ba0f9f2bd42e5328355dba03928c27dd6e73b" after=8.386174ms iface="eth0" netns="/var/run/netns/cni-5f53ce43-8c83-7bdf-38c5-e3d0ab34b7cf" Jul 10 01:39:00.632599 env[1363]: 2025-07-10 01:39:00.571 [INFO][9617] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="6503e247e079e9b1040ac4f9c23ba0f9f2bd42e5328355dba03928c27dd6e73b" Jul 10 01:39:00.632599 env[1363]: 2025-07-10 01:39:00.571 [INFO][9617] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6503e247e079e9b1040ac4f9c23ba0f9f2bd42e5328355dba03928c27dd6e73b" Jul 10 01:39:00.632599 env[1363]: 2025-07-10 01:39:00.606 [INFO][9636] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="6503e247e079e9b1040ac4f9c23ba0f9f2bd42e5328355dba03928c27dd6e73b" HandleID="k8s-pod-network.6503e247e079e9b1040ac4f9c23ba0f9f2bd42e5328355dba03928c27dd6e73b" Workload="localhost-k8s-calico--kube--controllers--5477ff879d--j2p5q-eth0" Jul 10 01:39:00.632599 env[1363]: 2025-07-10 01:39:00.606 [INFO][9636] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 10 01:39:00.632599 env[1363]: 2025-07-10 01:39:00.606 [INFO][9636] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 10 01:39:00.632599 env[1363]: 2025-07-10 01:39:00.623 [INFO][9636] ipam/ipam_plugin.go 431: Released address using handleID ContainerID="6503e247e079e9b1040ac4f9c23ba0f9f2bd42e5328355dba03928c27dd6e73b" HandleID="k8s-pod-network.6503e247e079e9b1040ac4f9c23ba0f9f2bd42e5328355dba03928c27dd6e73b" Workload="localhost-k8s-calico--kube--controllers--5477ff879d--j2p5q-eth0" Jul 10 01:39:00.632599 env[1363]: 2025-07-10 01:39:00.623 [INFO][9636] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="6503e247e079e9b1040ac4f9c23ba0f9f2bd42e5328355dba03928c27dd6e73b" HandleID="k8s-pod-network.6503e247e079e9b1040ac4f9c23ba0f9f2bd42e5328355dba03928c27dd6e73b" Workload="localhost-k8s-calico--kube--controllers--5477ff879d--j2p5q-eth0" Jul 10 01:39:00.632599 env[1363]: 2025-07-10 01:39:00.624 [INFO][9636] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 10 01:39:00.632599 env[1363]: 2025-07-10 01:39:00.629 [INFO][9617] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="6503e247e079e9b1040ac4f9c23ba0f9f2bd42e5328355dba03928c27dd6e73b" Jul 10 01:39:00.637374 systemd[1]: run-netns-cni\x2d5f53ce43\x2d8c83\x2d7bdf\x2d38c5\x2de3d0ab34b7cf.mount: Deactivated successfully. 
Jul 10 01:39:00.638042 env[1363]: time="2025-07-10T01:39:00.637954335Z" level=info msg="TearDown network for sandbox \"6503e247e079e9b1040ac4f9c23ba0f9f2bd42e5328355dba03928c27dd6e73b\" successfully" Jul 10 01:39:00.638042 env[1363]: time="2025-07-10T01:39:00.637987478Z" level=info msg="StopPodSandbox for \"6503e247e079e9b1040ac4f9c23ba0f9f2bd42e5328355dba03928c27dd6e73b\" returns successfully" Jul 10 01:39:00.683066 kubelet[2299]: I0710 01:39:00.682731 2299 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5f01bcaa-ff1c-4bd5-988b-d3c60c6abdcc-tigera-ca-bundle\") pod \"5f01bcaa-ff1c-4bd5-988b-d3c60c6abdcc\" (UID: \"5f01bcaa-ff1c-4bd5-988b-d3c60c6abdcc\") " Jul 10 01:39:00.683066 kubelet[2299]: I0710 01:39:00.682821 2299 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rpnqs\" (UniqueName: \"kubernetes.io/projected/5f01bcaa-ff1c-4bd5-988b-d3c60c6abdcc-kube-api-access-rpnqs\") pod \"5f01bcaa-ff1c-4bd5-988b-d3c60c6abdcc\" (UID: \"5f01bcaa-ff1c-4bd5-988b-d3c60c6abdcc\") " Jul 10 01:39:00.686761 systemd[1]: var-lib-kubelet-pods-5f01bcaa\x2dff1c\x2d4bd5\x2d988b\x2dd3c60c6abdcc-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2drpnqs.mount: Deactivated successfully. Jul 10 01:39:00.690081 systemd[1]: var-lib-kubelet-pods-5f01bcaa\x2dff1c\x2d4bd5\x2d988b\x2dd3c60c6abdcc-volume\x2dsubpaths-tigera\x2dca\x2dbundle-calico\x2dkube\x2dcontrollers-1.mount: Deactivated successfully. Jul 10 01:39:00.692855 kubelet[2299]: I0710 01:39:00.692835 2299 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5f01bcaa-ff1c-4bd5-988b-d3c60c6abdcc-kube-api-access-rpnqs" (OuterVolumeSpecName: "kube-api-access-rpnqs") pod "5f01bcaa-ff1c-4bd5-988b-d3c60c6abdcc" (UID: "5f01bcaa-ff1c-4bd5-988b-d3c60c6abdcc"). InnerVolumeSpecName "kube-api-access-rpnqs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jul 10 01:39:00.697462 kubelet[2299]: I0710 01:39:00.697447 2299 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5f01bcaa-ff1c-4bd5-988b-d3c60c6abdcc-tigera-ca-bundle" (OuterVolumeSpecName: "tigera-ca-bundle") pod "5f01bcaa-ff1c-4bd5-988b-d3c60c6abdcc" (UID: "5f01bcaa-ff1c-4bd5-988b-d3c60c6abdcc"). InnerVolumeSpecName "tigera-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jul 10 01:39:00.728534 env[1363]: 2025-07-10 01:39:00.564 [INFO][9616] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="131d31244e534a733a530103ddea3666cd2eb72fb0933d89a095d6d044cd52d3" Jul 10 01:39:00.728534 env[1363]: 2025-07-10 01:39:00.564 [INFO][9616] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="131d31244e534a733a530103ddea3666cd2eb72fb0933d89a095d6d044cd52d3" iface="eth0" netns="/var/run/netns/cni-751cfb90-d57c-ecad-7787-c75fcf884af1" Jul 10 01:39:00.728534 env[1363]: 2025-07-10 01:39:00.564 [INFO][9616] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="131d31244e534a733a530103ddea3666cd2eb72fb0933d89a095d6d044cd52d3" iface="eth0" netns="/var/run/netns/cni-751cfb90-d57c-ecad-7787-c75fcf884af1" Jul 10 01:39:00.728534 env[1363]: 2025-07-10 01:39:00.576 [INFO][9616] cni-plugin/dataplane_linux.go 604: Deleted device in netns. 
ContainerID="131d31244e534a733a530103ddea3666cd2eb72fb0933d89a095d6d044cd52d3" after=11.652428ms iface="eth0" netns="/var/run/netns/cni-751cfb90-d57c-ecad-7787-c75fcf884af1" Jul 10 01:39:00.728534 env[1363]: 2025-07-10 01:39:00.576 [INFO][9616] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="131d31244e534a733a530103ddea3666cd2eb72fb0933d89a095d6d044cd52d3" Jul 10 01:39:00.728534 env[1363]: 2025-07-10 01:39:00.576 [INFO][9616] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="131d31244e534a733a530103ddea3666cd2eb72fb0933d89a095d6d044cd52d3" Jul 10 01:39:00.728534 env[1363]: 2025-07-10 01:39:00.608 [INFO][9638] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="131d31244e534a733a530103ddea3666cd2eb72fb0933d89a095d6d044cd52d3" HandleID="k8s-pod-network.131d31244e534a733a530103ddea3666cd2eb72fb0933d89a095d6d044cd52d3" Workload="localhost-k8s-calico--apiserver--6d44674bc4--w2f48-eth0" Jul 10 01:39:00.728534 env[1363]: 2025-07-10 01:39:00.608 [INFO][9638] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 10 01:39:00.728534 env[1363]: 2025-07-10 01:39:00.624 [INFO][9638] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 10 01:39:00.728534 env[1363]: 2025-07-10 01:39:00.724 [INFO][9638] ipam/ipam_plugin.go 431: Released address using handleID ContainerID="131d31244e534a733a530103ddea3666cd2eb72fb0933d89a095d6d044cd52d3" HandleID="k8s-pod-network.131d31244e534a733a530103ddea3666cd2eb72fb0933d89a095d6d044cd52d3" Workload="localhost-k8s-calico--apiserver--6d44674bc4--w2f48-eth0" Jul 10 01:39:00.728534 env[1363]: 2025-07-10 01:39:00.724 [INFO][9638] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="131d31244e534a733a530103ddea3666cd2eb72fb0933d89a095d6d044cd52d3" HandleID="k8s-pod-network.131d31244e534a733a530103ddea3666cd2eb72fb0933d89a095d6d044cd52d3" Workload="localhost-k8s-calico--apiserver--6d44674bc4--w2f48-eth0" Jul 10 01:39:00.728534 env[1363]: 2025-07-10 01:39:00.725 [INFO][9638] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 10 01:39:00.728534 env[1363]: 2025-07-10 01:39:00.726 [INFO][9616] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="131d31244e534a733a530103ddea3666cd2eb72fb0933d89a095d6d044cd52d3" Jul 10 01:39:00.731163 systemd[1]: run-netns-cni\x2d751cfb90\x2dd57c\x2decad\x2d7787\x2dc75fcf884af1.mount: Deactivated successfully. 
Jul 10 01:39:00.731944 env[1363]: time="2025-07-10T01:39:00.731681943Z" level=info msg="TearDown network for sandbox \"131d31244e534a733a530103ddea3666cd2eb72fb0933d89a095d6d044cd52d3\" successfully" Jul 10 01:39:00.732027 env[1363]: time="2025-07-10T01:39:00.732009466Z" level=info msg="StopPodSandbox for \"131d31244e534a733a530103ddea3666cd2eb72fb0933d89a095d6d044cd52d3\" returns successfully" Jul 10 01:39:00.784176 kubelet[2299]: I0710 01:39:00.784141 2299 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r5vvj\" (UniqueName: \"kubernetes.io/projected/8e8146e9-6407-49b7-8cef-e26dac385734-kube-api-access-r5vvj\") pod \"8e8146e9-6407-49b7-8cef-e26dac385734\" (UID: \"8e8146e9-6407-49b7-8cef-e26dac385734\") " Jul 10 01:39:00.784176 kubelet[2299]: I0710 01:39:00.784178 2299 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/8e8146e9-6407-49b7-8cef-e26dac385734-calico-apiserver-certs\") pod \"8e8146e9-6407-49b7-8cef-e26dac385734\" (UID: \"8e8146e9-6407-49b7-8cef-e26dac385734\") " Jul 10 01:39:00.784318 kubelet[2299]: I0710 01:39:00.784244 2299 reconciler_common.go:293] "Volume detached for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5f01bcaa-ff1c-4bd5-988b-d3c60c6abdcc-tigera-ca-bundle\") on node \"localhost\" DevicePath \"\"" Jul 10 01:39:00.784318 kubelet[2299]: I0710 01:39:00.784253 2299 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rpnqs\" (UniqueName: \"kubernetes.io/projected/5f01bcaa-ff1c-4bd5-988b-d3c60c6abdcc-kube-api-access-rpnqs\") on node \"localhost\" DevicePath \"\"" Jul 10 01:39:00.789162 systemd[1]: var-lib-kubelet-pods-8e8146e9\x2d6407\x2d49b7\x2d8cef\x2de26dac385734-volumes-kubernetes.io\x7esecret-calico\x2dapiserver\x2dcerts.mount: Deactivated successfully. Jul 10 01:39:00.790574 kubelet[2299]: I0710 01:39:00.790551 2299 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8e8146e9-6407-49b7-8cef-e26dac385734-calico-apiserver-certs" (OuterVolumeSpecName: "calico-apiserver-certs") pod "8e8146e9-6407-49b7-8cef-e26dac385734" (UID: "8e8146e9-6407-49b7-8cef-e26dac385734"). InnerVolumeSpecName "calico-apiserver-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jul 10 01:39:00.791199 kubelet[2299]: I0710 01:39:00.791178 2299 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8e8146e9-6407-49b7-8cef-e26dac385734-kube-api-access-r5vvj" (OuterVolumeSpecName: "kube-api-access-r5vvj") pod "8e8146e9-6407-49b7-8cef-e26dac385734" (UID: "8e8146e9-6407-49b7-8cef-e26dac385734"). InnerVolumeSpecName "kube-api-access-r5vvj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jul 10 01:39:00.885037 kubelet[2299]: I0710 01:39:00.885013 2299 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r5vvj\" (UniqueName: \"kubernetes.io/projected/8e8146e9-6407-49b7-8cef-e26dac385734-kube-api-access-r5vvj\") on node \"localhost\" DevicePath \"\"" Jul 10 01:39:00.885037 kubelet[2299]: I0710 01:39:00.885035 2299 reconciler_common.go:293] "Volume detached for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/8e8146e9-6407-49b7-8cef-e26dac385734-calico-apiserver-certs\") on node \"localhost\" DevicePath \"\"" Jul 10 01:39:01.637152 systemd[1]: var-lib-kubelet-pods-8e8146e9\x2d6407\x2d49b7\x2d8cef\x2de26dac385734-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dr5vvj.mount: Deactivated successfully. Jul 10 01:39:02.496631 env[1363]: time="2025-07-10T01:39:02.495087310Z" level=info msg="StopPodSandbox for \"faf470fc452c2f07757eeeb2a3a0f4d17d9a92da7cefb8e597308394b6823856\"" Jul 10 01:39:02.496631 env[1363]: time="2025-07-10T01:39:02.495150100Z" level=info msg="Container to stop \"a200d687dc84da70963635919a56406c3ab3b1d9e93e3d78979e61b2e309695b\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 10 01:39:02.498154 env[1363]: time="2025-07-10T01:39:02.498128464Z" level=info msg="StopPodSandbox for \"47772743ab806984f8c08f88def502ffe4f7fc6e574fb3f0d5b58c702f3e79ff\"" Jul 10 01:39:02.498282 env[1363]: time="2025-07-10T01:39:02.498262803Z" level=info msg="Container to stop \"9613f2d808f30fc610330008107d20687f76b9c5168e9ed86b6bbe227c241755\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 10 01:39:02.498358 env[1363]: time="2025-07-10T01:39:02.498341241Z" level=info msg="Container to stop \"846639043e3e3375edb5ca693ab2e7bd950e8cc7bfe6f9bd5620ee47769cd79c\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 10 01:39:02.499117 kubelet[2299]: I0710 01:39:02.499092 2299 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5f01bcaa-ff1c-4bd5-988b-d3c60c6abdcc" path="/var/lib/kubelet/pods/5f01bcaa-ff1c-4bd5-988b-d3c60c6abdcc/volumes" Jul 10 01:39:02.501084 kubelet[2299]: I0710 01:39:02.501068 2299 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8e8146e9-6407-49b7-8cef-e26dac385734" path="/var/lib/kubelet/pods/8e8146e9-6407-49b7-8cef-e26dac385734/volumes" Jul 10 01:39:02.528743 systemd-networkd[1114]: cali8a6829b181a: Link DOWN Jul 10 01:39:02.528748 systemd-networkd[1114]: cali8a6829b181a: Lost carrier Jul 10 01:39:02.547072 systemd-networkd[1114]: cali7a4d6dda698: Link DOWN Jul 10 01:39:02.547077 systemd-networkd[1114]: cali7a4d6dda698: Lost carrier Jul 10 01:39:02.584092 env[1363]: 2025-07-10 01:39:02.527 [INFO][9688] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="faf470fc452c2f07757eeeb2a3a0f4d17d9a92da7cefb8e597308394b6823856" Jul 10 01:39:02.584092 env[1363]: 2025-07-10 01:39:02.527 [INFO][9688] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="faf470fc452c2f07757eeeb2a3a0f4d17d9a92da7cefb8e597308394b6823856" iface="eth0" netns="/var/run/netns/cni-78a49a94-aac9-0686-56b9-0b7adecadf78" Jul 10 01:39:02.584092 env[1363]: 2025-07-10 01:39:02.528 [INFO][9688] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="faf470fc452c2f07757eeeb2a3a0f4d17d9a92da7cefb8e597308394b6823856" iface="eth0" netns="/var/run/netns/cni-78a49a94-aac9-0686-56b9-0b7adecadf78" Jul 10 01:39:02.584092 env[1363]: 2025-07-10 01:39:02.533 [INFO][9688] cni-plugin/dataplane_linux.go 604: Deleted device in netns. ContainerID="faf470fc452c2f07757eeeb2a3a0f4d17d9a92da7cefb8e597308394b6823856" after=5.768139ms iface="eth0" netns="/var/run/netns/cni-78a49a94-aac9-0686-56b9-0b7adecadf78" Jul 10 01:39:02.584092 env[1363]: 2025-07-10 01:39:02.533 [INFO][9688] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="faf470fc452c2f07757eeeb2a3a0f4d17d9a92da7cefb8e597308394b6823856" Jul 10 01:39:02.584092 env[1363]: 2025-07-10 01:39:02.533 [INFO][9688] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="faf470fc452c2f07757eeeb2a3a0f4d17d9a92da7cefb8e597308394b6823856" Jul 10 01:39:02.584092 env[1363]: 2025-07-10 01:39:02.555 [INFO][9710] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="faf470fc452c2f07757eeeb2a3a0f4d17d9a92da7cefb8e597308394b6823856" HandleID="k8s-pod-network.faf470fc452c2f07757eeeb2a3a0f4d17d9a92da7cefb8e597308394b6823856" Workload="localhost-k8s-calico--apiserver--6d44674bc4--b2wqb-eth0" Jul 10 01:39:02.584092 env[1363]: 2025-07-10 01:39:02.555 [INFO][9710] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 10 01:39:02.584092 env[1363]: 2025-07-10 01:39:02.556 [INFO][9710] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 10 01:39:02.584092 env[1363]: 2025-07-10 01:39:02.580 [INFO][9710] ipam/ipam_plugin.go 431: Released address using handleID ContainerID="faf470fc452c2f07757eeeb2a3a0f4d17d9a92da7cefb8e597308394b6823856" HandleID="k8s-pod-network.faf470fc452c2f07757eeeb2a3a0f4d17d9a92da7cefb8e597308394b6823856" Workload="localhost-k8s-calico--apiserver--6d44674bc4--b2wqb-eth0" Jul 10 01:39:02.584092 env[1363]: 2025-07-10 01:39:02.580 [INFO][9710] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="faf470fc452c2f07757eeeb2a3a0f4d17d9a92da7cefb8e597308394b6823856" HandleID="k8s-pod-network.faf470fc452c2f07757eeeb2a3a0f4d17d9a92da7cefb8e597308394b6823856" Workload="localhost-k8s-calico--apiserver--6d44674bc4--b2wqb-eth0" Jul 10 01:39:02.584092 env[1363]: 2025-07-10 01:39:02.581 [INFO][9710] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 10 01:39:02.584092 env[1363]: 2025-07-10 01:39:02.582 [INFO][9688] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="faf470fc452c2f07757eeeb2a3a0f4d17d9a92da7cefb8e597308394b6823856" Jul 10 01:39:02.586183 systemd[1]: run-netns-cni\x2d78a49a94\x2daac9\x2d0686\x2d56b9\x2d0b7adecadf78.mount: Deactivated successfully. Jul 10 01:39:02.587150 env[1363]: time="2025-07-10T01:39:02.587124527Z" level=info msg="TearDown network for sandbox \"faf470fc452c2f07757eeeb2a3a0f4d17d9a92da7cefb8e597308394b6823856\" successfully" Jul 10 01:39:02.587216 env[1363]: time="2025-07-10T01:39:02.587202442Z" level=info msg="StopPodSandbox for \"faf470fc452c2f07757eeeb2a3a0f4d17d9a92da7cefb8e597308394b6823856\" returns successfully" Jul 10 01:39:02.684397 env[1363]: 2025-07-10 01:39:02.545 [INFO][9701] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="47772743ab806984f8c08f88def502ffe4f7fc6e574fb3f0d5b58c702f3e79ff" Jul 10 01:39:02.684397 env[1363]: 2025-07-10 01:39:02.546 [INFO][9701] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="47772743ab806984f8c08f88def502ffe4f7fc6e574fb3f0d5b58c702f3e79ff" iface="eth0" netns="/var/run/netns/cni-33d71880-8337-40f1-daec-7ca762497177" Jul 10 01:39:02.684397 env[1363]: 2025-07-10 01:39:02.546 [INFO][9701] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="47772743ab806984f8c08f88def502ffe4f7fc6e574fb3f0d5b58c702f3e79ff" iface="eth0" netns="/var/run/netns/cni-33d71880-8337-40f1-daec-7ca762497177" Jul 10 01:39:02.684397 env[1363]: 2025-07-10 01:39:02.560 [INFO][9701] cni-plugin/dataplane_linux.go 604: Deleted device in netns. ContainerID="47772743ab806984f8c08f88def502ffe4f7fc6e574fb3f0d5b58c702f3e79ff" after=13.934923ms iface="eth0" netns="/var/run/netns/cni-33d71880-8337-40f1-daec-7ca762497177" Jul 10 01:39:02.684397 env[1363]: 2025-07-10 01:39:02.560 [INFO][9701] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="47772743ab806984f8c08f88def502ffe4f7fc6e574fb3f0d5b58c702f3e79ff" Jul 10 01:39:02.684397 env[1363]: 2025-07-10 01:39:02.560 [INFO][9701] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="47772743ab806984f8c08f88def502ffe4f7fc6e574fb3f0d5b58c702f3e79ff" Jul 10 01:39:02.684397 env[1363]: 2025-07-10 01:39:02.601 [INFO][9719] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="47772743ab806984f8c08f88def502ffe4f7fc6e574fb3f0d5b58c702f3e79ff" HandleID="k8s-pod-network.47772743ab806984f8c08f88def502ffe4f7fc6e574fb3f0d5b58c702f3e79ff" Workload="localhost-k8s-whisker--5bc4d9bd7d--nwwj6-eth0" Jul 10 01:39:02.684397 env[1363]: 2025-07-10 01:39:02.601 [INFO][9719] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 10 01:39:02.684397 env[1363]: 2025-07-10 01:39:02.601 [INFO][9719] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 10 01:39:02.684397 env[1363]: 2025-07-10 01:39:02.680 [INFO][9719] ipam/ipam_plugin.go 431: Released address using handleID ContainerID="47772743ab806984f8c08f88def502ffe4f7fc6e574fb3f0d5b58c702f3e79ff" HandleID="k8s-pod-network.47772743ab806984f8c08f88def502ffe4f7fc6e574fb3f0d5b58c702f3e79ff" Workload="localhost-k8s-whisker--5bc4d9bd7d--nwwj6-eth0" Jul 10 01:39:02.684397 env[1363]: 2025-07-10 01:39:02.680 [INFO][9719] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="47772743ab806984f8c08f88def502ffe4f7fc6e574fb3f0d5b58c702f3e79ff" HandleID="k8s-pod-network.47772743ab806984f8c08f88def502ffe4f7fc6e574fb3f0d5b58c702f3e79ff" Workload="localhost-k8s-whisker--5bc4d9bd7d--nwwj6-eth0" Jul 10 01:39:02.684397 env[1363]: 2025-07-10 01:39:02.681 [INFO][9719] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 10 01:39:02.684397 env[1363]: 2025-07-10 01:39:02.683 [INFO][9701] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="47772743ab806984f8c08f88def502ffe4f7fc6e574fb3f0d5b58c702f3e79ff" Jul 10 01:39:02.686411 systemd[1]: run-netns-cni\x2d33d71880\x2d8337\x2d40f1\x2ddaec\x2d7ca762497177.mount: Deactivated successfully. 
Jul 10 01:39:02.687007 env[1363]: time="2025-07-10T01:39:02.686981557Z" level=info msg="TearDown network for sandbox \"47772743ab806984f8c08f88def502ffe4f7fc6e574fb3f0d5b58c702f3e79ff\" successfully" Jul 10 01:39:02.687070 env[1363]: time="2025-07-10T01:39:02.687057084Z" level=info msg="StopPodSandbox for \"47772743ab806984f8c08f88def502ffe4f7fc6e574fb3f0d5b58c702f3e79ff\" returns successfully" Jul 10 01:39:02.697537 kubelet[2299]: I0710 01:39:02.697512 2299 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/74cf1bc5-5d5a-4dc7-850a-71013984af05-calico-apiserver-certs\") pod \"74cf1bc5-5d5a-4dc7-850a-71013984af05\" (UID: \"74cf1bc5-5d5a-4dc7-850a-71013984af05\") " Jul 10 01:39:02.697646 kubelet[2299]: I0710 01:39:02.697544 2299 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-47zqf\" (UniqueName: \"kubernetes.io/projected/74cf1bc5-5d5a-4dc7-850a-71013984af05-kube-api-access-47zqf\") pod \"74cf1bc5-5d5a-4dc7-850a-71013984af05\" (UID: \"74cf1bc5-5d5a-4dc7-850a-71013984af05\") " Jul 10 01:39:02.704284 systemd[1]: var-lib-kubelet-pods-74cf1bc5\x2d5d5a\x2d4dc7\x2d850a\x2d71013984af05-volumes-kubernetes.io\x7esecret-calico\x2dapiserver\x2dcerts.mount: Deactivated successfully. Jul 10 01:39:02.707667 systemd[1]: var-lib-kubelet-pods-74cf1bc5\x2d5d5a\x2d4dc7\x2d850a\x2d71013984af05-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d47zqf.mount: Deactivated successfully. Jul 10 01:39:02.708222 kubelet[2299]: I0710 01:39:02.707974 2299 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/74cf1bc5-5d5a-4dc7-850a-71013984af05-calico-apiserver-certs" (OuterVolumeSpecName: "calico-apiserver-certs") pod "74cf1bc5-5d5a-4dc7-850a-71013984af05" (UID: "74cf1bc5-5d5a-4dc7-850a-71013984af05"). InnerVolumeSpecName "calico-apiserver-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jul 10 01:39:02.709157 kubelet[2299]: I0710 01:39:02.709129 2299 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/74cf1bc5-5d5a-4dc7-850a-71013984af05-kube-api-access-47zqf" (OuterVolumeSpecName: "kube-api-access-47zqf") pod "74cf1bc5-5d5a-4dc7-850a-71013984af05" (UID: "74cf1bc5-5d5a-4dc7-850a-71013984af05"). InnerVolumeSpecName "kube-api-access-47zqf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jul 10 01:39:02.798336 kubelet[2299]: I0710 01:39:02.798314 2299 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9dmsf\" (UniqueName: \"kubernetes.io/projected/c3f9faf5-cc25-4483-beb9-5dea29a71367-kube-api-access-9dmsf\") pod \"c3f9faf5-cc25-4483-beb9-5dea29a71367\" (UID: \"c3f9faf5-cc25-4483-beb9-5dea29a71367\") " Jul 10 01:39:02.798508 kubelet[2299]: I0710 01:39:02.798497 2299 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c3f9faf5-cc25-4483-beb9-5dea29a71367-whisker-ca-bundle\") pod \"c3f9faf5-cc25-4483-beb9-5dea29a71367\" (UID: \"c3f9faf5-cc25-4483-beb9-5dea29a71367\") " Jul 10 01:39:02.798592 kubelet[2299]: I0710 01:39:02.798583 2299 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/c3f9faf5-cc25-4483-beb9-5dea29a71367-whisker-backend-key-pair\") pod \"c3f9faf5-cc25-4483-beb9-5dea29a71367\" (UID: \"c3f9faf5-cc25-4483-beb9-5dea29a71367\") " Jul 10 01:39:02.798707 kubelet[2299]: I0710 01:39:02.798697 2299 reconciler_common.go:293] "Volume detached for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/74cf1bc5-5d5a-4dc7-850a-71013984af05-calico-apiserver-certs\") on node \"localhost\" DevicePath \"\"" Jul 10 01:39:02.798768 kubelet[2299]: I0710 01:39:02.798759 2299 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-47zqf\" (UniqueName: \"kubernetes.io/projected/74cf1bc5-5d5a-4dc7-850a-71013984af05-kube-api-access-47zqf\") on node \"localhost\" DevicePath \"\"" Jul 10 01:39:02.799851 kubelet[2299]: I0710 01:39:02.799837 2299 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c3f9faf5-cc25-4483-beb9-5dea29a71367-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "c3f9faf5-cc25-4483-beb9-5dea29a71367" (UID: "c3f9faf5-cc25-4483-beb9-5dea29a71367"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jul 10 01:39:02.801814 systemd[1]: var-lib-kubelet-pods-c3f9faf5\x2dcc25\x2d4483\x2dbeb9\x2d5dea29a71367-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d9dmsf.mount: Deactivated successfully. Jul 10 01:39:02.804259 systemd[1]: var-lib-kubelet-pods-c3f9faf5\x2dcc25\x2d4483\x2dbeb9\x2d5dea29a71367-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Jul 10 01:39:02.805170 kubelet[2299]: I0710 01:39:02.804887 2299 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c3f9faf5-cc25-4483-beb9-5dea29a71367-kube-api-access-9dmsf" (OuterVolumeSpecName: "kube-api-access-9dmsf") pod "c3f9faf5-cc25-4483-beb9-5dea29a71367" (UID: "c3f9faf5-cc25-4483-beb9-5dea29a71367"). InnerVolumeSpecName "kube-api-access-9dmsf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jul 10 01:39:02.806161 kubelet[2299]: I0710 01:39:02.805898 2299 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c3f9faf5-cc25-4483-beb9-5dea29a71367-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "c3f9faf5-cc25-4483-beb9-5dea29a71367" (UID: "c3f9faf5-cc25-4483-beb9-5dea29a71367"). InnerVolumeSpecName "whisker-backend-key-pair". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jul 10 01:39:02.899226 kubelet[2299]: I0710 01:39:02.899175 2299 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9dmsf\" (UniqueName: \"kubernetes.io/projected/c3f9faf5-cc25-4483-beb9-5dea29a71367-kube-api-access-9dmsf\") on node \"localhost\" DevicePath \"\"" Jul 10 01:39:02.899226 kubelet[2299]: I0710 01:39:02.899207 2299 reconciler_common.go:293] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c3f9faf5-cc25-4483-beb9-5dea29a71367-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Jul 10 01:39:02.899226 kubelet[2299]: I0710 01:39:02.899216 2299 reconciler_common.go:293] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/c3f9faf5-cc25-4483-beb9-5dea29a71367-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Jul 10 01:39:04.496650 kubelet[2299]: I0710 01:39:04.496609 2299 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="74cf1bc5-5d5a-4dc7-850a-71013984af05" path="/var/lib/kubelet/pods/74cf1bc5-5d5a-4dc7-850a-71013984af05/volumes" Jul 10 01:39:04.497728 kubelet[2299]: I0710 01:39:04.497708 2299 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c3f9faf5-cc25-4483-beb9-5dea29a71367" path="/var/lib/kubelet/pods/c3f9faf5-cc25-4483-beb9-5dea29a71367/volumes" Jul 10 01:39:04.913594 systemd[1]: Started sshd@40-139.178.70.102:22-139.178.68.195:45258.service. Jul 10 01:39:04.913000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@40-139.178.70.102:22-139.178.68.195:45258 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 01:39:04.917409 kernel: kauditd_printk_skb: 1 callbacks suppressed Jul 10 01:39:04.917451 kernel: audit: type=1130 audit(1752111544.913:767): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@40-139.178.70.102:22-139.178.68.195:45258 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 10 01:39:04.971000 audit[9786]: USER_ACCT pid=9786 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:39:04.971951 sshd[9786]: Accepted publickey for core from 139.178.68.195 port 45258 ssh2: RSA SHA256:NVpdRDPpwzjVTzi6orhe1cA9BvcYymCSReGH8myOy/Q Jul 10 01:39:04.976715 kernel: audit: type=1101 audit(1752111544.971:768): pid=9786 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:39:04.976000 audit[9786]: CRED_ACQ pid=9786 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:39:04.981228 sshd[9786]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 10 01:39:04.983569 kernel: audit: type=1103 audit(1752111544.976:769): pid=9786 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:39:04.983610 kernel: audit: type=1006 audit(1752111544.977:770): pid=9786 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=42 res=1 Jul 10 01:39:04.977000 audit[9786]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffdb6cd5190 a2=3 a3=0 items=0 ppid=1 pid=9786 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=42 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:39:04.977000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jul 10 01:39:04.989607 kernel: audit: type=1300 audit(1752111544.977:770): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffdb6cd5190 a2=3 a3=0 items=0 ppid=1 pid=9786 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=42 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:39:04.989653 kernel: audit: type=1327 audit(1752111544.977:770): proctitle=737368643A20636F7265205B707269765D Jul 10 01:39:04.991239 systemd[1]: Started session-42.scope. Jul 10 01:39:04.992013 systemd-logind[1351]: New session 42 of user core. 
Jul 10 01:39:04.994000 audit[9786]: USER_START pid=9786 uid=0 auid=500 ses=42 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:39:04.998000 audit[9789]: CRED_ACQ pid=9789 uid=0 auid=500 ses=42 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:39:05.002515 kernel: audit: type=1105 audit(1752111544.994:771): pid=9786 uid=0 auid=500 ses=42 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:39:05.002540 kernel: audit: type=1103 audit(1752111544.998:772): pid=9789 uid=0 auid=500 ses=42 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:39:05.122001 sshd[9786]: pam_unix(sshd:session): session closed for user core Jul 10 01:39:05.122000 audit[9786]: USER_END pid=9786 uid=0 auid=500 ses=42 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:39:05.126473 systemd[1]: sshd@40-139.178.70.102:22-139.178.68.195:45258.service: Deactivated successfully. Jul 10 01:39:05.126692 kernel: audit: type=1106 audit(1752111545.122:773): pid=9786 uid=0 auid=500 ses=42 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:39:05.127136 systemd[1]: session-42.scope: Deactivated successfully. Jul 10 01:39:05.122000 audit[9786]: CRED_DISP pid=9786 uid=0 auid=500 ses=42 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:39:05.127254 systemd-logind[1351]: Session 42 logged out. Waiting for processes to exit. Jul 10 01:39:05.126000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@40-139.178.70.102:22-139.178.68.195:45258 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 01:39:05.130735 kernel: audit: type=1104 audit(1752111545.122:774): pid=9786 uid=0 auid=500 ses=42 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:39:05.130586 systemd-logind[1351]: Removed session 42. 
Jul 10 01:39:05.495593 env[1363]: time="2025-07-10T01:39:05.495547437Z" level=info msg="StopPodSandbox for \"5e9aedbb1d15e1d7bd8b79126017424346117b11833100260ee33d8092673319\"" Jul 10 01:39:05.495920 env[1363]: time="2025-07-10T01:39:05.495607423Z" level=info msg="Container to stop \"1e5279231269dfc537e88aa5fdcd50f6d78342ad66bf00ac46bda4314ea2b591\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 10 01:39:05.526657 systemd-networkd[1114]: cali7006602a141: Link DOWN Jul 10 01:39:05.526662 systemd-networkd[1114]: cali7006602a141: Lost carrier Jul 10 01:39:05.577390 env[1363]: 2025-07-10 01:39:05.525 [INFO][9810] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="5e9aedbb1d15e1d7bd8b79126017424346117b11833100260ee33d8092673319" Jul 10 01:39:05.577390 env[1363]: 2025-07-10 01:39:05.526 [INFO][9810] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="5e9aedbb1d15e1d7bd8b79126017424346117b11833100260ee33d8092673319" iface="eth0" netns="/var/run/netns/cni-e4584383-8d3a-f37d-3f9e-191559ae94a9" Jul 10 01:39:05.577390 env[1363]: 2025-07-10 01:39:05.526 [INFO][9810] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="5e9aedbb1d15e1d7bd8b79126017424346117b11833100260ee33d8092673319" iface="eth0" netns="/var/run/netns/cni-e4584383-8d3a-f37d-3f9e-191559ae94a9" Jul 10 01:39:05.577390 env[1363]: 2025-07-10 01:39:05.534 [INFO][9810] cni-plugin/dataplane_linux.go 604: Deleted device in netns. ContainerID="5e9aedbb1d15e1d7bd8b79126017424346117b11833100260ee33d8092673319" after=8.592608ms iface="eth0" netns="/var/run/netns/cni-e4584383-8d3a-f37d-3f9e-191559ae94a9" Jul 10 01:39:05.577390 env[1363]: 2025-07-10 01:39:05.534 [INFO][9810] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="5e9aedbb1d15e1d7bd8b79126017424346117b11833100260ee33d8092673319" Jul 10 01:39:05.577390 env[1363]: 2025-07-10 01:39:05.534 [INFO][9810] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5e9aedbb1d15e1d7bd8b79126017424346117b11833100260ee33d8092673319" Jul 10 01:39:05.577390 env[1363]: 2025-07-10 01:39:05.550 [INFO][9817] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="5e9aedbb1d15e1d7bd8b79126017424346117b11833100260ee33d8092673319" HandleID="k8s-pod-network.5e9aedbb1d15e1d7bd8b79126017424346117b11833100260ee33d8092673319" Workload="localhost-k8s-coredns--7c65d6cfc9--4k5ld-eth0" Jul 10 01:39:05.577390 env[1363]: 2025-07-10 01:39:05.550 [INFO][9817] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 10 01:39:05.577390 env[1363]: 2025-07-10 01:39:05.550 [INFO][9817] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 10 01:39:05.577390 env[1363]: 2025-07-10 01:39:05.572 [INFO][9817] ipam/ipam_plugin.go 431: Released address using handleID ContainerID="5e9aedbb1d15e1d7bd8b79126017424346117b11833100260ee33d8092673319" HandleID="k8s-pod-network.5e9aedbb1d15e1d7bd8b79126017424346117b11833100260ee33d8092673319" Workload="localhost-k8s-coredns--7c65d6cfc9--4k5ld-eth0" Jul 10 01:39:05.577390 env[1363]: 2025-07-10 01:39:05.572 [INFO][9817] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="5e9aedbb1d15e1d7bd8b79126017424346117b11833100260ee33d8092673319" HandleID="k8s-pod-network.5e9aedbb1d15e1d7bd8b79126017424346117b11833100260ee33d8092673319" Workload="localhost-k8s-coredns--7c65d6cfc9--4k5ld-eth0" Jul 10 01:39:05.577390 env[1363]: 2025-07-10 01:39:05.573 [INFO][9817] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 10 01:39:05.577390 env[1363]: 2025-07-10 01:39:05.575 [INFO][9810] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="5e9aedbb1d15e1d7bd8b79126017424346117b11833100260ee33d8092673319" Jul 10 01:39:05.579580 systemd[1]: run-netns-cni\x2de4584383\x2d8d3a\x2df37d\x2d3f9e\x2d191559ae94a9.mount: Deactivated successfully. Jul 10 01:39:05.580762 env[1363]: time="2025-07-10T01:39:05.580741682Z" level=info msg="TearDown network for sandbox \"5e9aedbb1d15e1d7bd8b79126017424346117b11833100260ee33d8092673319\" successfully" Jul 10 01:39:05.581710 env[1363]: time="2025-07-10T01:39:05.580894744Z" level=info msg="StopPodSandbox for \"5e9aedbb1d15e1d7bd8b79126017424346117b11833100260ee33d8092673319\" returns successfully" Jul 10 01:39:05.619470 kubelet[2299]: I0710 01:39:05.619432 2299 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a29ef6dc-4246-436d-87dd-9c8e96247aeb-config-volume\") pod \"a29ef6dc-4246-436d-87dd-9c8e96247aeb\" (UID: \"a29ef6dc-4246-436d-87dd-9c8e96247aeb\") " Jul 10 01:39:05.619765 kubelet[2299]: I0710 01:39:05.619536 2299 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4bl2z\" (UniqueName: \"kubernetes.io/projected/a29ef6dc-4246-436d-87dd-9c8e96247aeb-kube-api-access-4bl2z\") pod \"a29ef6dc-4246-436d-87dd-9c8e96247aeb\" (UID: \"a29ef6dc-4246-436d-87dd-9c8e96247aeb\") " Jul 10 01:39:05.622586 kubelet[2299]: I0710 01:39:05.622444 2299 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a29ef6dc-4246-436d-87dd-9c8e96247aeb-config-volume" (OuterVolumeSpecName: "config-volume") pod "a29ef6dc-4246-436d-87dd-9c8e96247aeb" (UID: "a29ef6dc-4246-436d-87dd-9c8e96247aeb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jul 10 01:39:05.624365 systemd[1]: var-lib-kubelet-pods-a29ef6dc\x2d4246\x2d436d\x2d87dd\x2d9c8e96247aeb-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d4bl2z.mount: Deactivated successfully. Jul 10 01:39:05.627382 kubelet[2299]: I0710 01:39:05.627357 2299 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a29ef6dc-4246-436d-87dd-9c8e96247aeb-kube-api-access-4bl2z" (OuterVolumeSpecName: "kube-api-access-4bl2z") pod "a29ef6dc-4246-436d-87dd-9c8e96247aeb" (UID: "a29ef6dc-4246-436d-87dd-9c8e96247aeb"). InnerVolumeSpecName "kube-api-access-4bl2z". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jul 10 01:39:05.719730 kubelet[2299]: I0710 01:39:05.719701 2299 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a29ef6dc-4246-436d-87dd-9c8e96247aeb-config-volume\") on node \"localhost\" DevicePath \"\"" Jul 10 01:39:05.719730 kubelet[2299]: I0710 01:39:05.719722 2299 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4bl2z\" (UniqueName: \"kubernetes.io/projected/a29ef6dc-4246-436d-87dd-9c8e96247aeb-kube-api-access-4bl2z\") on node \"localhost\" DevicePath \"\"" Jul 10 01:39:06.495426 kubelet[2299]: I0710 01:39:06.495406 2299 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a29ef6dc-4246-436d-87dd-9c8e96247aeb" path="/var/lib/kubelet/pods/a29ef6dc-4246-436d-87dd-9c8e96247aeb/volumes" Jul 10 01:39:07.219000 audit[9869]: NETFILTER_CFG table=filter:167 family=2 entries=10 op=nft_register_rule pid=9869 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 10 01:39:07.219000 audit[9869]: SYSCALL arch=c000003e syscall=46 success=yes exit=3760 a0=3 a1=7ffd51b96920 a2=0 a3=7ffd51b9690c items=0 ppid=2398 pid=9869 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:39:07.219000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 10 01:39:07.224000 audit[9869]: NETFILTER_CFG table=nat:168 family=2 entries=96 op=nft_register_rule pid=9869 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 10 01:39:07.224000 audit[9869]: SYSCALL arch=c000003e syscall=46 success=yes exit=31708 a0=3 a1=7ffd51b96920 a2=0 a3=7ffd51b9690c items=0 ppid=2398 pid=9869 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:39:07.224000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 10 01:39:07.238000 audit[9871]: NETFILTER_CFG table=filter:169 family=2 entries=10 op=nft_register_rule pid=9871 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 10 01:39:07.238000 audit[9871]: SYSCALL arch=c000003e syscall=46 success=yes exit=3760 a0=3 a1=7ffd31231610 a2=0 a3=7ffd312315fc items=0 ppid=2398 pid=9871 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:39:07.238000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 10 01:39:07.242000 audit[9871]: NETFILTER_CFG table=nat:170 family=2 entries=24 op=nft_register_rule pid=9871 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 10 01:39:07.242000 audit[9871]: SYSCALL arch=c000003e syscall=46 success=yes exit=7308 a0=3 a1=7ffd31231610 a2=0 a3=7ffd312315fc items=0 ppid=2398 pid=9871 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:39:07.242000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 10 01:39:08.247000 audit[9891]: NETFILTER_CFG table=filter:171 family=2 entries=9 op=nft_register_rule pid=9891 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 10 01:39:08.247000 audit[9891]: SYSCALL arch=c000003e syscall=46 success=yes exit=3016 a0=3 a1=7fffb65d3cf0 a2=0 a3=7fffb65d3cdc items=0 ppid=2398 pid=9891 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:39:08.247000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 10 01:39:08.252000 audit[9891]: NETFILTER_CFG table=nat:172 family=2 entries=31 op=nft_register_chain pid=9891 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 10 01:39:08.252000 audit[9891]: SYSCALL arch=c000003e syscall=46 success=yes exit=10884 a0=3 a1=7fffb65d3cf0 a2=0 a3=7fffb65d3cdc items=0 ppid=2398 pid=9891 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:39:08.252000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 10 01:39:08.542556 kubelet[2299]: I0710 01:39:08.541716 2299 scope.go:117] "RemoveContainer" containerID="846639043e3e3375edb5ca693ab2e7bd950e8cc7bfe6f9bd5620ee47769cd79c" Jul 10 01:39:08.545995 env[1363]: time="2025-07-10T01:39:08.545970670Z" level=info msg="RemoveContainer for \"846639043e3e3375edb5ca693ab2e7bd950e8cc7bfe6f9bd5620ee47769cd79c\"" Jul 10 01:39:08.548343 env[1363]: time="2025-07-10T01:39:08.548318115Z" level=info msg="RemoveContainer for \"846639043e3e3375edb5ca693ab2e7bd950e8cc7bfe6f9bd5620ee47769cd79c\" returns successfully" Jul 10 01:39:08.548523 kubelet[2299]: I0710 01:39:08.548493 2299 scope.go:117] "RemoveContainer" containerID="0a7b9b0ea47aa889b6d5597d41d9f5ecf3ccc392f2d5f74cd7be134b392cec28" Jul 10 01:39:08.549454 env[1363]: time="2025-07-10T01:39:08.549437303Z" level=info msg="RemoveContainer for \"0a7b9b0ea47aa889b6d5597d41d9f5ecf3ccc392f2d5f74cd7be134b392cec28\"" Jul 10 01:39:08.551620 env[1363]: time="2025-07-10T01:39:08.551597614Z" level=info msg="RemoveContainer for \"0a7b9b0ea47aa889b6d5597d41d9f5ecf3ccc392f2d5f74cd7be134b392cec28\" returns successfully" Jul 10 01:39:08.551861 kubelet[2299]: I0710 01:39:08.551736 2299 scope.go:117] "RemoveContainer" containerID="a200d687dc84da70963635919a56406c3ab3b1d9e93e3d78979e61b2e309695b" Jul 10 01:39:08.553816 env[1363]: time="2025-07-10T01:39:08.553798809Z" level=info msg="RemoveContainer for \"a200d687dc84da70963635919a56406c3ab3b1d9e93e3d78979e61b2e309695b\"" Jul 10 01:39:08.555256 env[1363]: time="2025-07-10T01:39:08.555241408Z" level=info msg="RemoveContainer for \"a200d687dc84da70963635919a56406c3ab3b1d9e93e3d78979e61b2e309695b\" returns successfully" Jul 10 01:39:08.555422 kubelet[2299]: I0710 01:39:08.555398 2299 scope.go:117] "RemoveContainer" containerID="9613f2d808f30fc610330008107d20687f76b9c5168e9ed86b6bbe227c241755" Jul 10 01:39:08.556216 env[1363]: time="2025-07-10T01:39:08.556196096Z" level=info msg="RemoveContainer for \"9613f2d808f30fc610330008107d20687f76b9c5168e9ed86b6bbe227c241755\"" Jul 10 01:39:08.557715 env[1363]: 
time="2025-07-10T01:39:08.557696057Z" level=info msg="RemoveContainer for \"9613f2d808f30fc610330008107d20687f76b9c5168e9ed86b6bbe227c241755\" returns successfully" Jul 10 01:39:08.557802 kubelet[2299]: I0710 01:39:08.557788 2299 scope.go:117] "RemoveContainer" containerID="1e5279231269dfc537e88aa5fdcd50f6d78342ad66bf00ac46bda4314ea2b591" Jul 10 01:39:08.558356 env[1363]: time="2025-07-10T01:39:08.558341256Z" level=info msg="RemoveContainer for \"1e5279231269dfc537e88aa5fdcd50f6d78342ad66bf00ac46bda4314ea2b591\"" Jul 10 01:39:08.559567 env[1363]: time="2025-07-10T01:39:08.559549122Z" level=info msg="RemoveContainer for \"1e5279231269dfc537e88aa5fdcd50f6d78342ad66bf00ac46bda4314ea2b591\" returns successfully" Jul 10 01:39:08.559693 kubelet[2299]: I0710 01:39:08.559678 2299 scope.go:117] "RemoveContainer" containerID="915c58c03353ee54736489abf3797867734b173634b282af0191665aad606e66" Jul 10 01:39:08.560752 env[1363]: time="2025-07-10T01:39:08.560732693Z" level=info msg="RemoveContainer for \"915c58c03353ee54736489abf3797867734b173634b282af0191665aad606e66\"" Jul 10 01:39:08.562502 env[1363]: time="2025-07-10T01:39:08.562476526Z" level=info msg="RemoveContainer for \"915c58c03353ee54736489abf3797867734b173634b282af0191665aad606e66\" returns successfully" Jul 10 01:39:08.562636 kubelet[2299]: I0710 01:39:08.562620 2299 scope.go:117] "RemoveContainer" containerID="8b834cb8605645f5c7c427dfcf3dcbb8b3ef5c7c0f8f023ef0d1fbc5a5c10bd4" Jul 10 01:39:08.563439 env[1363]: time="2025-07-10T01:39:08.563419805Z" level=info msg="RemoveContainer for \"8b834cb8605645f5c7c427dfcf3dcbb8b3ef5c7c0f8f023ef0d1fbc5a5c10bd4\"" Jul 10 01:39:08.568431 env[1363]: time="2025-07-10T01:39:08.568194250Z" level=info msg="RemoveContainer for \"8b834cb8605645f5c7c427dfcf3dcbb8b3ef5c7c0f8f023ef0d1fbc5a5c10bd4\" returns successfully" Jul 10 01:39:08.568562 kubelet[2299]: I0710 01:39:08.568544 2299 scope.go:117] "RemoveContainer" containerID="225609f895efa2c766fff4c357d326a94aeff3f2dc9267546e9096e0fdcbf87a" Jul 10 01:39:08.569407 env[1363]: time="2025-07-10T01:39:08.569391484Z" level=info msg="RemoveContainer for \"225609f895efa2c766fff4c357d326a94aeff3f2dc9267546e9096e0fdcbf87a\"" Jul 10 01:39:08.570701 env[1363]: time="2025-07-10T01:39:08.570687027Z" level=info msg="RemoveContainer for \"225609f895efa2c766fff4c357d326a94aeff3f2dc9267546e9096e0fdcbf87a\" returns successfully" Jul 10 01:39:08.572813 env[1363]: time="2025-07-10T01:39:08.572788998Z" level=info msg="StopPodSandbox for \"faf470fc452c2f07757eeeb2a3a0f4d17d9a92da7cefb8e597308394b6823856\"" Jul 10 01:39:08.676741 env[1363]: 2025-07-10 01:39:08.646 [WARNING][9909] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="faf470fc452c2f07757eeeb2a3a0f4d17d9a92da7cefb8e597308394b6823856" WorkloadEndpoint="localhost-k8s-calico--apiserver--6d44674bc4--b2wqb-eth0" Jul 10 01:39:08.676741 env[1363]: 2025-07-10 01:39:08.646 [INFO][9909] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="faf470fc452c2f07757eeeb2a3a0f4d17d9a92da7cefb8e597308394b6823856" Jul 10 01:39:08.676741 env[1363]: 2025-07-10 01:39:08.646 [INFO][9909] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="faf470fc452c2f07757eeeb2a3a0f4d17d9a92da7cefb8e597308394b6823856" iface="eth0" netns="" Jul 10 01:39:08.676741 env[1363]: 2025-07-10 01:39:08.646 [INFO][9909] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="faf470fc452c2f07757eeeb2a3a0f4d17d9a92da7cefb8e597308394b6823856" Jul 10 01:39:08.676741 env[1363]: 2025-07-10 01:39:08.646 [INFO][9909] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="faf470fc452c2f07757eeeb2a3a0f4d17d9a92da7cefb8e597308394b6823856" Jul 10 01:39:08.676741 env[1363]: 2025-07-10 01:39:08.663 [INFO][9920] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="faf470fc452c2f07757eeeb2a3a0f4d17d9a92da7cefb8e597308394b6823856" HandleID="k8s-pod-network.faf470fc452c2f07757eeeb2a3a0f4d17d9a92da7cefb8e597308394b6823856" Workload="localhost-k8s-calico--apiserver--6d44674bc4--b2wqb-eth0" Jul 10 01:39:08.676741 env[1363]: 2025-07-10 01:39:08.663 [INFO][9920] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 10 01:39:08.676741 env[1363]: 2025-07-10 01:39:08.663 [INFO][9920] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 10 01:39:08.676741 env[1363]: 2025-07-10 01:39:08.670 [WARNING][9920] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="faf470fc452c2f07757eeeb2a3a0f4d17d9a92da7cefb8e597308394b6823856" HandleID="k8s-pod-network.faf470fc452c2f07757eeeb2a3a0f4d17d9a92da7cefb8e597308394b6823856" Workload="localhost-k8s-calico--apiserver--6d44674bc4--b2wqb-eth0" Jul 10 01:39:08.676741 env[1363]: 2025-07-10 01:39:08.670 [INFO][9920] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="faf470fc452c2f07757eeeb2a3a0f4d17d9a92da7cefb8e597308394b6823856" HandleID="k8s-pod-network.faf470fc452c2f07757eeeb2a3a0f4d17d9a92da7cefb8e597308394b6823856" Workload="localhost-k8s-calico--apiserver--6d44674bc4--b2wqb-eth0" Jul 10 01:39:08.676741 env[1363]: 2025-07-10 01:39:08.671 [INFO][9920] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 10 01:39:08.676741 env[1363]: 2025-07-10 01:39:08.673 [INFO][9909] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="faf470fc452c2f07757eeeb2a3a0f4d17d9a92da7cefb8e597308394b6823856" Jul 10 01:39:08.677314 env[1363]: time="2025-07-10T01:39:08.677076196Z" level=info msg="TearDown network for sandbox \"faf470fc452c2f07757eeeb2a3a0f4d17d9a92da7cefb8e597308394b6823856\" successfully" Jul 10 01:39:08.677314 env[1363]: time="2025-07-10T01:39:08.677096199Z" level=info msg="StopPodSandbox for \"faf470fc452c2f07757eeeb2a3a0f4d17d9a92da7cefb8e597308394b6823856\" returns successfully" Jul 10 01:39:08.677508 env[1363]: time="2025-07-10T01:39:08.677469813Z" level=info msg="RemovePodSandbox for \"faf470fc452c2f07757eeeb2a3a0f4d17d9a92da7cefb8e597308394b6823856\"" Jul 10 01:39:08.677593 env[1363]: time="2025-07-10T01:39:08.677561949Z" level=info msg="Forcibly stopping sandbox \"faf470fc452c2f07757eeeb2a3a0f4d17d9a92da7cefb8e597308394b6823856\"" Jul 10 01:39:08.688000 audit[9945]: AVC avc: denied { bpf } for pid=9945 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:39:08.688000 audit[9945]: AVC avc: denied { bpf } for pid=9945 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:39:08.688000 audit[9945]: AVC avc: denied { perfmon } for pid=9945 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:39:08.688000 audit[9945]: AVC avc: denied { perfmon } for pid=9945 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:39:08.688000 audit[9945]: AVC avc: denied { perfmon } for pid=9945 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:39:08.688000 audit[9945]: AVC avc: denied { perfmon } for pid=9945 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:39:08.688000 audit[9945]: AVC avc: denied { perfmon } for pid=9945 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:39:08.688000 audit[9945]: AVC avc: denied { bpf } for pid=9945 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:39:08.688000 audit[9945]: AVC avc: denied { bpf } for pid=9945 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:39:08.688000 audit: BPF prog-id=32 op=LOAD Jul 10 01:39:08.688000 audit[9945]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffd52537210 a2=98 a3=1fffffffffffffff items=0 ppid=9892 pid=9945 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:39:08.688000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jul 10 01:39:08.688000 audit: BPF prog-id=32 op=UNLOAD Jul 10 01:39:08.689000 
audit[9945]: AVC avc: denied { bpf } for pid=9945 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:39:08.689000 audit[9945]: AVC avc: denied { bpf } for pid=9945 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:39:08.689000 audit[9945]: AVC avc: denied { perfmon } for pid=9945 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:39:08.689000 audit[9945]: AVC avc: denied { perfmon } for pid=9945 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:39:08.689000 audit[9945]: AVC avc: denied { perfmon } for pid=9945 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:39:08.689000 audit[9945]: AVC avc: denied { perfmon } for pid=9945 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:39:08.689000 audit[9945]: AVC avc: denied { perfmon } for pid=9945 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:39:08.689000 audit[9945]: AVC avc: denied { bpf } for pid=9945 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:39:08.689000 audit[9945]: AVC avc: denied { bpf } for pid=9945 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:39:08.689000 audit: BPF prog-id=33 op=LOAD Jul 10 01:39:08.689000 audit[9945]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffd525370f0 a2=94 a3=3 items=0 ppid=9892 pid=9945 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:39:08.689000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jul 10 01:39:08.689000 audit: BPF prog-id=33 op=UNLOAD Jul 10 01:39:08.689000 audit[9945]: AVC avc: denied { bpf } for pid=9945 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:39:08.689000 audit[9945]: AVC avc: denied { bpf } for pid=9945 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:39:08.689000 audit[9945]: AVC avc: denied { perfmon } for pid=9945 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:39:08.689000 audit[9945]: AVC avc: denied { perfmon } for pid=9945 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:39:08.689000 audit[9945]: AVC avc: denied 
{ perfmon } for pid=9945 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:39:08.689000 audit[9945]: AVC avc: denied { perfmon } for pid=9945 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:39:08.689000 audit[9945]: AVC avc: denied { perfmon } for pid=9945 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:39:08.689000 audit[9945]: AVC avc: denied { bpf } for pid=9945 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:39:08.689000 audit[9945]: AVC avc: denied { bpf } for pid=9945 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:39:08.689000 audit: BPF prog-id=34 op=LOAD Jul 10 01:39:08.689000 audit[9945]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffd52537130 a2=94 a3=7ffd52537310 items=0 ppid=9892 pid=9945 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:39:08.689000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jul 10 01:39:08.689000 audit: BPF prog-id=34 op=UNLOAD Jul 10 01:39:08.689000 audit[9945]: AVC avc: denied { perfmon } for pid=9945 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:39:08.689000 audit[9945]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=0 a1=7ffd52537200 a2=50 a3=a000000085 items=0 ppid=9892 pid=9945 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:39:08.689000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jul 10 01:39:08.690000 audit[9946]: AVC avc: denied { bpf } for pid=9946 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:39:08.690000 audit[9946]: AVC avc: denied { bpf } for pid=9946 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:39:08.690000 audit[9946]: AVC avc: denied { perfmon } for pid=9946 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:39:08.690000 audit[9946]: AVC avc: denied { perfmon } for pid=9946 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:39:08.690000 audit[9946]: AVC avc: denied { perfmon } for 
pid=9946 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:39:08.690000 audit[9946]: AVC avc: denied { perfmon } for pid=9946 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:39:08.690000 audit[9946]: AVC avc: denied { perfmon } for pid=9946 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:39:08.690000 audit[9946]: AVC avc: denied { bpf } for pid=9946 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:39:08.690000 audit[9946]: AVC avc: denied { bpf } for pid=9946 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:39:08.690000 audit: BPF prog-id=35 op=LOAD Jul 10 01:39:08.690000 audit[9946]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fffdcce6e80 a2=98 a3=3 items=0 ppid=9892 pid=9946 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:39:08.690000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jul 10 01:39:08.690000 audit: BPF prog-id=35 op=UNLOAD Jul 10 01:39:08.691000 audit[9946]: AVC avc: denied { bpf } for pid=9946 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:39:08.691000 audit[9946]: AVC avc: denied { bpf } for pid=9946 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:39:08.691000 audit[9946]: AVC avc: denied { perfmon } for pid=9946 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:39:08.691000 audit[9946]: AVC avc: denied { perfmon } for pid=9946 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:39:08.691000 audit[9946]: AVC avc: denied { perfmon } for pid=9946 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:39:08.691000 audit[9946]: AVC avc: denied { perfmon } for pid=9946 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:39:08.691000 audit[9946]: AVC avc: denied { perfmon } for pid=9946 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:39:08.691000 audit[9946]: AVC avc: denied { bpf } for pid=9946 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:39:08.691000 audit[9946]: AVC avc: denied { bpf } for pid=9946 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:39:08.691000 audit: BPF prog-id=36 op=LOAD Jul 10 01:39:08.691000 
audit[9946]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7fffdcce6c70 a2=94 a3=54428f items=0 ppid=9892 pid=9946 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:39:08.691000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jul 10 01:39:08.691000 audit: BPF prog-id=36 op=UNLOAD Jul 10 01:39:08.691000 audit[9946]: AVC avc: denied { bpf } for pid=9946 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:39:08.691000 audit[9946]: AVC avc: denied { bpf } for pid=9946 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:39:08.691000 audit[9946]: AVC avc: denied { perfmon } for pid=9946 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:39:08.691000 audit[9946]: AVC avc: denied { perfmon } for pid=9946 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:39:08.691000 audit[9946]: AVC avc: denied { perfmon } for pid=9946 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:39:08.691000 audit[9946]: AVC avc: denied { perfmon } for pid=9946 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:39:08.691000 audit[9946]: AVC avc: denied { perfmon } for pid=9946 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:39:08.691000 audit[9946]: AVC avc: denied { bpf } for pid=9946 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:39:08.691000 audit[9946]: AVC avc: denied { bpf } for pid=9946 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:39:08.691000 audit: BPF prog-id=37 op=LOAD Jul 10 01:39:08.691000 audit[9946]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7fffdcce6ca0 a2=94 a3=2 items=0 ppid=9892 pid=9946 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:39:08.691000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jul 10 01:39:08.691000 audit: BPF prog-id=37 op=UNLOAD Jul 10 01:39:08.741445 env[1363]: 2025-07-10 01:39:08.708 [WARNING][9939] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="faf470fc452c2f07757eeeb2a3a0f4d17d9a92da7cefb8e597308394b6823856" WorkloadEndpoint="localhost-k8s-calico--apiserver--6d44674bc4--b2wqb-eth0" Jul 10 01:39:08.741445 env[1363]: 2025-07-10 01:39:08.708 [INFO][9939] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="faf470fc452c2f07757eeeb2a3a0f4d17d9a92da7cefb8e597308394b6823856" Jul 10 01:39:08.741445 env[1363]: 2025-07-10 01:39:08.708 [INFO][9939] cni-plugin/dataplane_linux.go 
555: CleanUpNamespace called with no netns name, ignoring. ContainerID="faf470fc452c2f07757eeeb2a3a0f4d17d9a92da7cefb8e597308394b6823856" iface="eth0" netns="" Jul 10 01:39:08.741445 env[1363]: 2025-07-10 01:39:08.708 [INFO][9939] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="faf470fc452c2f07757eeeb2a3a0f4d17d9a92da7cefb8e597308394b6823856" Jul 10 01:39:08.741445 env[1363]: 2025-07-10 01:39:08.708 [INFO][9939] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="faf470fc452c2f07757eeeb2a3a0f4d17d9a92da7cefb8e597308394b6823856" Jul 10 01:39:08.741445 env[1363]: 2025-07-10 01:39:08.728 [INFO][9950] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="faf470fc452c2f07757eeeb2a3a0f4d17d9a92da7cefb8e597308394b6823856" HandleID="k8s-pod-network.faf470fc452c2f07757eeeb2a3a0f4d17d9a92da7cefb8e597308394b6823856" Workload="localhost-k8s-calico--apiserver--6d44674bc4--b2wqb-eth0" Jul 10 01:39:08.741445 env[1363]: 2025-07-10 01:39:08.728 [INFO][9950] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 10 01:39:08.741445 env[1363]: 2025-07-10 01:39:08.728 [INFO][9950] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 10 01:39:08.741445 env[1363]: 2025-07-10 01:39:08.732 [WARNING][9950] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="faf470fc452c2f07757eeeb2a3a0f4d17d9a92da7cefb8e597308394b6823856" HandleID="k8s-pod-network.faf470fc452c2f07757eeeb2a3a0f4d17d9a92da7cefb8e597308394b6823856" Workload="localhost-k8s-calico--apiserver--6d44674bc4--b2wqb-eth0" Jul 10 01:39:08.741445 env[1363]: 2025-07-10 01:39:08.732 [INFO][9950] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="faf470fc452c2f07757eeeb2a3a0f4d17d9a92da7cefb8e597308394b6823856" HandleID="k8s-pod-network.faf470fc452c2f07757eeeb2a3a0f4d17d9a92da7cefb8e597308394b6823856" Workload="localhost-k8s-calico--apiserver--6d44674bc4--b2wqb-eth0" Jul 10 01:39:08.741445 env[1363]: 2025-07-10 01:39:08.733 [INFO][9950] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 10 01:39:08.741445 env[1363]: 2025-07-10 01:39:08.737 [INFO][9939] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="faf470fc452c2f07757eeeb2a3a0f4d17d9a92da7cefb8e597308394b6823856" Jul 10 01:39:08.741445 env[1363]: time="2025-07-10T01:39:08.741006071Z" level=info msg="TearDown network for sandbox \"faf470fc452c2f07757eeeb2a3a0f4d17d9a92da7cefb8e597308394b6823856\" successfully" Jul 10 01:39:08.743428 env[1363]: time="2025-07-10T01:39:08.743411132Z" level=info msg="RemovePodSandbox \"faf470fc452c2f07757eeeb2a3a0f4d17d9a92da7cefb8e597308394b6823856\" returns successfully" Jul 10 01:39:08.743819 env[1363]: time="2025-07-10T01:39:08.743799073Z" level=info msg="StopPodSandbox for \"6503e247e079e9b1040ac4f9c23ba0f9f2bd42e5328355dba03928c27dd6e73b\"" Jul 10 01:39:08.780000 audit[9946]: AVC avc: denied { bpf } for pid=9946 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:39:08.780000 audit[9946]: AVC avc: denied { bpf } for pid=9946 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:39:08.780000 audit[9946]: AVC avc: denied { perfmon } for pid=9946 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:39:08.780000 audit[9946]: AVC avc: denied { perfmon } for pid=9946 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:39:08.780000 audit[9946]: AVC avc: denied { perfmon } for pid=9946 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:39:08.780000 audit[9946]: AVC avc: denied { perfmon } for pid=9946 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:39:08.780000 audit[9946]: AVC avc: denied { perfmon } for pid=9946 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:39:08.780000 audit[9946]: AVC avc: denied { bpf } for pid=9946 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:39:08.780000 audit[9946]: AVC avc: denied { bpf } for pid=9946 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:39:08.780000 audit: BPF prog-id=38 op=LOAD Jul 10 01:39:08.780000 audit[9946]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7fffdcce6b60 a2=94 a3=1 items=0 ppid=9892 pid=9946 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:39:08.780000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jul 10 01:39:08.780000 audit: BPF prog-id=38 op=UNLOAD Jul 10 01:39:08.780000 audit[9946]: AVC avc: denied { perfmon } for pid=9946 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:39:08.780000 audit[9946]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=0 a1=7fffdcce6c30 a2=50 a3=7fffdcce6d10 items=0 ppid=9892 pid=9946 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 
fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:39:08.780000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jul 10 01:39:08.789000 audit[9946]: AVC avc: denied { bpf } for pid=9946 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:39:08.789000 audit[9946]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7fffdcce6b70 a2=28 a3=0 items=0 ppid=9892 pid=9946 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:39:08.789000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jul 10 01:39:08.790000 audit[9946]: AVC avc: denied { bpf } for pid=9946 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:39:08.790000 audit[9946]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7fffdcce6ba0 a2=28 a3=0 items=0 ppid=9892 pid=9946 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:39:08.790000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jul 10 01:39:08.790000 audit[9946]: AVC avc: denied { bpf } for pid=9946 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:39:08.790000 audit[9946]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7fffdcce6ab0 a2=28 a3=0 items=0 ppid=9892 pid=9946 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:39:08.790000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jul 10 01:39:08.790000 audit[9946]: AVC avc: denied { bpf } for pid=9946 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:39:08.790000 audit[9946]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7fffdcce6bc0 a2=28 a3=0 items=0 ppid=9892 pid=9946 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:39:08.790000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jul 10 01:39:08.790000 audit[9946]: AVC avc: denied { bpf } for pid=9946 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:39:08.790000 audit[9946]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7fffdcce6ba0 a2=28 a3=0 items=0 ppid=9892 pid=9946 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:39:08.790000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jul 10 01:39:08.790000 audit[9946]: AVC avc: denied { bpf } for pid=9946 comm="bpftool" capability=39 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:39:08.790000 audit[9946]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7fffdcce6b90 a2=28 a3=0 items=0 ppid=9892 pid=9946 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:39:08.790000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jul 10 01:39:08.790000 audit[9946]: AVC avc: denied { bpf } for pid=9946 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:39:08.790000 audit[9946]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7fffdcce6bc0 a2=28 a3=0 items=0 ppid=9892 pid=9946 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:39:08.790000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jul 10 01:39:08.790000 audit[9946]: AVC avc: denied { bpf } for pid=9946 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:39:08.790000 audit[9946]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7fffdcce6ba0 a2=28 a3=0 items=0 ppid=9892 pid=9946 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:39:08.790000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jul 10 01:39:08.790000 audit[9946]: AVC avc: denied { bpf } for pid=9946 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:39:08.790000 audit[9946]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7fffdcce6bc0 a2=28 a3=0 items=0 ppid=9892 pid=9946 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:39:08.790000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jul 10 01:39:08.790000 audit[9946]: AVC avc: denied { bpf } for pid=9946 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:39:08.790000 audit[9946]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7fffdcce6b90 a2=28 a3=0 items=0 ppid=9892 pid=9946 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:39:08.790000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jul 10 01:39:08.790000 audit[9946]: AVC avc: denied { bpf } for pid=9946 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:39:08.790000 audit[9946]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7fffdcce6c00 a2=28 a3=0 items=0 ppid=9892 pid=9946 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:39:08.790000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jul 10 01:39:08.790000 audit[9946]: AVC avc: denied { perfmon } for pid=9946 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:39:08.790000 audit[9946]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=0 a1=7fffdcce69b0 a2=50 a3=1 items=0 ppid=9892 pid=9946 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:39:08.790000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jul 10 01:39:08.790000 audit[9946]: AVC avc: denied { bpf } for pid=9946 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:39:08.790000 audit[9946]: AVC avc: denied { bpf } for pid=9946 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:39:08.790000 audit[9946]: AVC avc: denied { perfmon } for pid=9946 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:39:08.790000 audit[9946]: AVC avc: denied { perfmon } for pid=9946 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:39:08.790000 audit[9946]: AVC avc: denied { perfmon } for pid=9946 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:39:08.790000 audit[9946]: AVC avc: denied { perfmon } for pid=9946 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:39:08.790000 audit[9946]: AVC avc: denied { perfmon } for pid=9946 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:39:08.790000 audit[9946]: AVC avc: denied { bpf } for pid=9946 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:39:08.790000 audit[9946]: AVC avc: denied { bpf } for pid=9946 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:39:08.790000 audit: BPF prog-id=39 op=LOAD Jul 10 01:39:08.790000 audit[9946]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7fffdcce69b0 a2=94 a3=5 items=0 ppid=9892 pid=9946 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:39:08.790000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jul 10 01:39:08.790000 audit: BPF prog-id=39 op=UNLOAD Jul 10 01:39:08.790000 audit[9946]: AVC avc: denied { perfmon } for pid=9946 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:39:08.790000 audit[9946]: 
SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=0 a1=7fffdcce6a60 a2=50 a3=1 items=0 ppid=9892 pid=9946 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:39:08.790000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jul 10 01:39:08.790000 audit[9946]: AVC avc: denied { bpf } for pid=9946 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:39:08.790000 audit[9946]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=16 a1=7fffdcce6b80 a2=4 a3=38 items=0 ppid=9892 pid=9946 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:39:08.790000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jul 10 01:39:08.790000 audit[9946]: AVC avc: denied { bpf } for pid=9946 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:39:08.790000 audit[9946]: AVC avc: denied { bpf } for pid=9946 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:39:08.790000 audit[9946]: AVC avc: denied { perfmon } for pid=9946 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:39:08.790000 audit[9946]: AVC avc: denied { bpf } for pid=9946 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:39:08.790000 audit[9946]: AVC avc: denied { perfmon } for pid=9946 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:39:08.790000 audit[9946]: AVC avc: denied { perfmon } for pid=9946 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:39:08.790000 audit[9946]: AVC avc: denied { perfmon } for pid=9946 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:39:08.790000 audit[9946]: AVC avc: denied { perfmon } for pid=9946 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:39:08.790000 audit[9946]: AVC avc: denied { perfmon } for pid=9946 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:39:08.790000 audit[9946]: AVC avc: denied { bpf } for pid=9946 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:39:08.790000 audit[9946]: AVC avc: denied { confidentiality } for pid=9946 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Jul 10 01:39:08.790000 audit[9946]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=5 a1=7fffdcce6bd0 a2=94 a3=6 
items=0 ppid=9892 pid=9946 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:39:08.790000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jul 10 01:39:08.790000 audit[9946]: AVC avc: denied { bpf } for pid=9946 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:39:08.790000 audit[9946]: AVC avc: denied { bpf } for pid=9946 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:39:08.790000 audit[9946]: AVC avc: denied { perfmon } for pid=9946 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:39:08.790000 audit[9946]: AVC avc: denied { bpf } for pid=9946 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:39:08.790000 audit[9946]: AVC avc: denied { perfmon } for pid=9946 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:39:08.790000 audit[9946]: AVC avc: denied { perfmon } for pid=9946 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:39:08.790000 audit[9946]: AVC avc: denied { perfmon } for pid=9946 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:39:08.790000 audit[9946]: AVC avc: denied { perfmon } for pid=9946 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:39:08.790000 audit[9946]: AVC avc: denied { perfmon } for pid=9946 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:39:08.790000 audit[9946]: AVC avc: denied { bpf } for pid=9946 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:39:08.790000 audit[9946]: AVC avc: denied { confidentiality } for pid=9946 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Jul 10 01:39:08.790000 audit[9946]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=5 a1=7fffdcce6380 a2=94 a3=88 items=0 ppid=9892 pid=9946 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:39:08.790000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jul 10 01:39:08.790000 audit[9946]: AVC avc: denied { bpf } for pid=9946 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:39:08.790000 audit[9946]: AVC avc: denied { bpf } for pid=9946 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 
permissive=0 Jul 10 01:39:08.790000 audit[9946]: AVC avc: denied { perfmon } for pid=9946 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:39:08.790000 audit[9946]: AVC avc: denied { bpf } for pid=9946 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:39:08.790000 audit[9946]: AVC avc: denied { perfmon } for pid=9946 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:39:08.790000 audit[9946]: AVC avc: denied { perfmon } for pid=9946 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:39:08.790000 audit[9946]: AVC avc: denied { perfmon } for pid=9946 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:39:08.790000 audit[9946]: AVC avc: denied { perfmon } for pid=9946 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:39:08.790000 audit[9946]: AVC avc: denied { perfmon } for pid=9946 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:39:08.790000 audit[9946]: AVC avc: denied { bpf } for pid=9946 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:39:08.790000 audit[9946]: AVC avc: denied { confidentiality } for pid=9946 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Jul 10 01:39:08.790000 audit[9946]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=5 a1=7fffdcce6380 a2=94 a3=88 items=0 ppid=9892 pid=9946 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:39:08.790000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jul 10 01:39:08.815928 env[1363]: 2025-07-10 01:39:08.774 [WARNING][9966] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="6503e247e079e9b1040ac4f9c23ba0f9f2bd42e5328355dba03928c27dd6e73b" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5477ff879d--j2p5q-eth0" Jul 10 01:39:08.815928 env[1363]: 2025-07-10 01:39:08.775 [INFO][9966] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="6503e247e079e9b1040ac4f9c23ba0f9f2bd42e5328355dba03928c27dd6e73b" Jul 10 01:39:08.815928 env[1363]: 2025-07-10 01:39:08.775 [INFO][9966] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="6503e247e079e9b1040ac4f9c23ba0f9f2bd42e5328355dba03928c27dd6e73b" iface="eth0" netns="" Jul 10 01:39:08.815928 env[1363]: 2025-07-10 01:39:08.775 [INFO][9966] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="6503e247e079e9b1040ac4f9c23ba0f9f2bd42e5328355dba03928c27dd6e73b" Jul 10 01:39:08.815928 env[1363]: 2025-07-10 01:39:08.775 [INFO][9966] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6503e247e079e9b1040ac4f9c23ba0f9f2bd42e5328355dba03928c27dd6e73b" Jul 10 01:39:08.815928 env[1363]: 2025-07-10 01:39:08.807 [INFO][9973] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="6503e247e079e9b1040ac4f9c23ba0f9f2bd42e5328355dba03928c27dd6e73b" HandleID="k8s-pod-network.6503e247e079e9b1040ac4f9c23ba0f9f2bd42e5328355dba03928c27dd6e73b" Workload="localhost-k8s-calico--kube--controllers--5477ff879d--j2p5q-eth0" Jul 10 01:39:08.815928 env[1363]: 2025-07-10 01:39:08.807 [INFO][9973] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 10 01:39:08.815928 env[1363]: 2025-07-10 01:39:08.807 [INFO][9973] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 10 01:39:08.815928 env[1363]: 2025-07-10 01:39:08.811 [WARNING][9973] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="6503e247e079e9b1040ac4f9c23ba0f9f2bd42e5328355dba03928c27dd6e73b" HandleID="k8s-pod-network.6503e247e079e9b1040ac4f9c23ba0f9f2bd42e5328355dba03928c27dd6e73b" Workload="localhost-k8s-calico--kube--controllers--5477ff879d--j2p5q-eth0" Jul 10 01:39:08.815928 env[1363]: 2025-07-10 01:39:08.811 [INFO][9973] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="6503e247e079e9b1040ac4f9c23ba0f9f2bd42e5328355dba03928c27dd6e73b" HandleID="k8s-pod-network.6503e247e079e9b1040ac4f9c23ba0f9f2bd42e5328355dba03928c27dd6e73b" Workload="localhost-k8s-calico--kube--controllers--5477ff879d--j2p5q-eth0" Jul 10 01:39:08.815928 env[1363]: 2025-07-10 01:39:08.812 [INFO][9973] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 10 01:39:08.815928 env[1363]: 2025-07-10 01:39:08.814 [INFO][9966] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="6503e247e079e9b1040ac4f9c23ba0f9f2bd42e5328355dba03928c27dd6e73b" Jul 10 01:39:08.817010 env[1363]: time="2025-07-10T01:39:08.816193887Z" level=info msg="TearDown network for sandbox \"6503e247e079e9b1040ac4f9c23ba0f9f2bd42e5328355dba03928c27dd6e73b\" successfully" Jul 10 01:39:08.817010 env[1363]: time="2025-07-10T01:39:08.816214243Z" level=info msg="StopPodSandbox for \"6503e247e079e9b1040ac4f9c23ba0f9f2bd42e5328355dba03928c27dd6e73b\" returns successfully" Jul 10 01:39:08.817010 env[1363]: time="2025-07-10T01:39:08.816545891Z" level=info msg="RemovePodSandbox for \"6503e247e079e9b1040ac4f9c23ba0f9f2bd42e5328355dba03928c27dd6e73b\"" Jul 10 01:39:08.817010 env[1363]: time="2025-07-10T01:39:08.816563512Z" level=info msg="Forcibly stopping sandbox \"6503e247e079e9b1040ac4f9c23ba0f9f2bd42e5328355dba03928c27dd6e73b\"" Jul 10 01:39:08.824000 audit[9994]: AVC avc: denied { bpf } for pid=9994 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:39:08.824000 audit[9994]: AVC avc: denied { bpf } for pid=9994 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:39:08.824000 audit[9994]: AVC avc: denied { perfmon } for pid=9994 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:39:08.824000 audit[9994]: AVC avc: denied { perfmon } for pid=9994 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:39:08.824000 audit[9994]: AVC avc: denied { perfmon } for pid=9994 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:39:08.824000 audit[9994]: AVC avc: denied { perfmon } for pid=9994 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:39:08.824000 audit[9994]: AVC avc: denied { perfmon } for pid=9994 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:39:08.824000 audit[9994]: AVC avc: denied { bpf } for pid=9994 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:39:08.824000 audit[9994]: AVC avc: denied { bpf } for pid=9994 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:39:08.824000 audit: BPF prog-id=40 op=LOAD Jul 10 01:39:08.824000 audit[9994]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffeaf110700 a2=98 a3=1999999999999999 items=0 ppid=9892 pid=9994 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:39:08.824000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jul 10 01:39:08.824000 audit: BPF prog-id=40 op=UNLOAD Jul 10 01:39:08.824000 
audit[9994]: AVC avc: denied { bpf } for pid=9994 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:39:08.824000 audit[9994]: AVC avc: denied { bpf } for pid=9994 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:39:08.824000 audit[9994]: AVC avc: denied { perfmon } for pid=9994 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:39:08.824000 audit[9994]: AVC avc: denied { perfmon } for pid=9994 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:39:08.824000 audit[9994]: AVC avc: denied { perfmon } for pid=9994 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:39:08.824000 audit[9994]: AVC avc: denied { perfmon } for pid=9994 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:39:08.824000 audit[9994]: AVC avc: denied { perfmon } for pid=9994 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:39:08.824000 audit[9994]: AVC avc: denied { bpf } for pid=9994 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:39:08.824000 audit[9994]: AVC avc: denied { bpf } for pid=9994 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:39:08.824000 audit: BPF prog-id=41 op=LOAD Jul 10 01:39:08.824000 audit[9994]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffeaf1105e0 a2=94 a3=ffff items=0 ppid=9892 pid=9994 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:39:08.824000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jul 10 01:39:08.824000 audit: BPF prog-id=41 op=UNLOAD Jul 10 01:39:08.824000 audit[9994]: AVC avc: denied { bpf } for pid=9994 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:39:08.824000 audit[9994]: AVC avc: denied { bpf } for pid=9994 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:39:08.824000 audit[9994]: AVC avc: denied { perfmon } for pid=9994 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:39:08.824000 audit[9994]: AVC avc: denied { perfmon } for pid=9994 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:39:08.824000 audit[9994]: AVC avc: 
denied { perfmon } for pid=9994 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:39:08.824000 audit[9994]: AVC avc: denied { perfmon } for pid=9994 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:39:08.824000 audit[9994]: AVC avc: denied { perfmon } for pid=9994 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:39:08.824000 audit[9994]: AVC avc: denied { bpf } for pid=9994 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:39:08.824000 audit[9994]: AVC avc: denied { bpf } for pid=9994 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:39:08.824000 audit: BPF prog-id=42 op=LOAD Jul 10 01:39:08.824000 audit[9994]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffeaf110620 a2=94 a3=7ffeaf110800 items=0 ppid=9892 pid=9994 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:39:08.824000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jul 10 01:39:08.824000 audit: BPF prog-id=42 op=UNLOAD Jul 10 01:39:08.897875 env[1363]: 2025-07-10 01:39:08.859 [WARNING][9990] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="6503e247e079e9b1040ac4f9c23ba0f9f2bd42e5328355dba03928c27dd6e73b" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5477ff879d--j2p5q-eth0" Jul 10 01:39:08.897875 env[1363]: 2025-07-10 01:39:08.859 [INFO][9990] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="6503e247e079e9b1040ac4f9c23ba0f9f2bd42e5328355dba03928c27dd6e73b" Jul 10 01:39:08.897875 env[1363]: 2025-07-10 01:39:08.859 [INFO][9990] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="6503e247e079e9b1040ac4f9c23ba0f9f2bd42e5328355dba03928c27dd6e73b" iface="eth0" netns="" Jul 10 01:39:08.897875 env[1363]: 2025-07-10 01:39:08.859 [INFO][9990] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="6503e247e079e9b1040ac4f9c23ba0f9f2bd42e5328355dba03928c27dd6e73b" Jul 10 01:39:08.897875 env[1363]: 2025-07-10 01:39:08.859 [INFO][9990] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6503e247e079e9b1040ac4f9c23ba0f9f2bd42e5328355dba03928c27dd6e73b" Jul 10 01:39:08.897875 env[1363]: 2025-07-10 01:39:08.890 [INFO][10007] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="6503e247e079e9b1040ac4f9c23ba0f9f2bd42e5328355dba03928c27dd6e73b" HandleID="k8s-pod-network.6503e247e079e9b1040ac4f9c23ba0f9f2bd42e5328355dba03928c27dd6e73b" Workload="localhost-k8s-calico--kube--controllers--5477ff879d--j2p5q-eth0" Jul 10 01:39:08.897875 env[1363]: 2025-07-10 01:39:08.890 [INFO][10007] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
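The repeated AVC denials above are bpftool invocations from the Calico dataplane hitting capability2 checks: capability=38 is CAP_PERFMON and capability=39 is CAP_BPF, syscall=321 is bpf(2) on x86_64, and the "confidentiality" lockdown denial corresponds to the logged lockdown_reason, kernel lockdown blocking bpf reads of kernel RAM. The proctitle field is the process argv, hex-encoded because the arguments are NUL-separated. A minimal sketch of decoding it (the sample hex is copied verbatim from the pid 9946 record above; the helper name is illustrative, not from the log):

def decode_proctitle(hex_value: str) -> str:
    # The audit PROCTITLE value is argv joined with NUL bytes, hex-encoded.
    raw = bytes.fromhex(hex_value)
    return " ".join(arg.decode(errors="replace") for arg in raw.split(b"\x00") if arg)

# Sample copied verbatim from the pid 9946 PROCTITLE record above.
sample = "627066746F6F6C006D6170006C697374002D2D6A736F6E"
print(decode_proctitle(sample))  # -> bpftool map list --json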
Jul 10 01:39:08.897875 env[1363]: 2025-07-10 01:39:08.890 [INFO][10007] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 10 01:39:08.897875 env[1363]: 2025-07-10 01:39:08.894 [WARNING][10007] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="6503e247e079e9b1040ac4f9c23ba0f9f2bd42e5328355dba03928c27dd6e73b" HandleID="k8s-pod-network.6503e247e079e9b1040ac4f9c23ba0f9f2bd42e5328355dba03928c27dd6e73b" Workload="localhost-k8s-calico--kube--controllers--5477ff879d--j2p5q-eth0" Jul 10 01:39:08.897875 env[1363]: 2025-07-10 01:39:08.894 [INFO][10007] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="6503e247e079e9b1040ac4f9c23ba0f9f2bd42e5328355dba03928c27dd6e73b" HandleID="k8s-pod-network.6503e247e079e9b1040ac4f9c23ba0f9f2bd42e5328355dba03928c27dd6e73b" Workload="localhost-k8s-calico--kube--controllers--5477ff879d--j2p5q-eth0" Jul 10 01:39:08.897875 env[1363]: 2025-07-10 01:39:08.894 [INFO][10007] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 10 01:39:08.897875 env[1363]: 2025-07-10 01:39:08.896 [INFO][9990] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="6503e247e079e9b1040ac4f9c23ba0f9f2bd42e5328355dba03928c27dd6e73b" Jul 10 01:39:08.898289 env[1363]: time="2025-07-10T01:39:08.898265022Z" level=info msg="TearDown network for sandbox \"6503e247e079e9b1040ac4f9c23ba0f9f2bd42e5328355dba03928c27dd6e73b\" successfully" Jul 10 01:39:08.900477 env[1363]: time="2025-07-10T01:39:08.900462480Z" level=info msg="RemovePodSandbox \"6503e247e079e9b1040ac4f9c23ba0f9f2bd42e5328355dba03928c27dd6e73b\" returns successfully" Jul 10 01:39:08.907812 env[1363]: time="2025-07-10T01:39:08.907783970Z" level=info msg="StopPodSandbox for \"d50fd4405e1f03ed2cdfbef802c2261b6b6ef77dbd652ba6fa35f73abffba742\"" Jul 10 01:39:08.913000 audit[10034]: AVC avc: denied { bpf } for pid=10034 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:39:08.913000 audit[10034]: AVC avc: denied { bpf } for pid=10034 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:39:08.913000 audit[10034]: AVC avc: denied { perfmon } for pid=10034 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:39:08.913000 audit[10034]: AVC avc: denied { perfmon } for pid=10034 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:39:08.913000 audit[10034]: AVC avc: denied { perfmon } for pid=10034 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:39:08.913000 audit[10034]: AVC avc: denied { perfmon } for pid=10034 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:39:08.913000 audit[10034]: AVC avc: denied { perfmon } for pid=10034 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:39:08.913000 audit[10034]: AVC avc: denied { bpf } for pid=10034 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 
01:39:08.913000 audit[10034]: AVC avc: denied { bpf } for pid=10034 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:39:08.913000 audit: BPF prog-id=43 op=LOAD Jul 10 01:39:08.913000 audit[10034]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fff8e7a7b50 a2=98 a3=0 items=0 ppid=9892 pid=10034 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:39:08.913000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jul 10 01:39:08.914000 audit: BPF prog-id=43 op=UNLOAD Jul 10 01:39:08.918000 audit[10034]: AVC avc: denied { bpf } for pid=10034 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:39:08.918000 audit[10034]: AVC avc: denied { bpf } for pid=10034 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:39:08.918000 audit[10034]: AVC avc: denied { perfmon } for pid=10034 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:39:08.918000 audit[10034]: AVC avc: denied { perfmon } for pid=10034 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:39:08.918000 audit[10034]: AVC avc: denied { perfmon } for pid=10034 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:39:08.918000 audit[10034]: AVC avc: denied { perfmon } for pid=10034 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:39:08.918000 audit[10034]: AVC avc: denied { perfmon } for pid=10034 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:39:08.918000 audit[10034]: AVC avc: denied { bpf } for pid=10034 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:39:08.918000 audit[10034]: AVC avc: denied { bpf } for pid=10034 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:39:08.918000 audit: BPF prog-id=44 op=LOAD Jul 10 01:39:08.918000 audit[10034]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fff8e7a7960 a2=94 a3=54428f items=0 ppid=9892 pid=10034 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:39:08.918000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jul 10 01:39:08.918000 audit: BPF prog-id=44 
op=UNLOAD Jul 10 01:39:08.918000 audit[10034]: AVC avc: denied { bpf } for pid=10034 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:39:08.918000 audit[10034]: AVC avc: denied { bpf } for pid=10034 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:39:08.918000 audit[10034]: AVC avc: denied { perfmon } for pid=10034 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:39:08.918000 audit[10034]: AVC avc: denied { perfmon } for pid=10034 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:39:08.918000 audit[10034]: AVC avc: denied { perfmon } for pid=10034 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:39:08.918000 audit[10034]: AVC avc: denied { perfmon } for pid=10034 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:39:08.918000 audit[10034]: AVC avc: denied { perfmon } for pid=10034 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:39:08.918000 audit[10034]: AVC avc: denied { bpf } for pid=10034 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:39:08.918000 audit[10034]: AVC avc: denied { bpf } for pid=10034 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:39:08.918000 audit: BPF prog-id=45 op=LOAD Jul 10 01:39:08.918000 audit[10034]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7fff8e7a7990 a2=94 a3=2 items=0 ppid=9892 pid=10034 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:39:08.918000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jul 10 01:39:08.918000 audit: BPF prog-id=45 op=UNLOAD Jul 10 01:39:08.918000 audit[10034]: AVC avc: denied { bpf } for pid=10034 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:39:08.918000 audit[10034]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=12 a1=7fff8e7a7860 a2=28 a3=0 items=0 ppid=9892 pid=10034 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:39:08.918000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jul 10 01:39:08.918000 audit[10034]: AVC avc: denied { bpf } for pid=10034 comm="bpftool" 
capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:39:08.918000 audit[10034]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7fff8e7a7890 a2=28 a3=0 items=0 ppid=9892 pid=10034 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:39:08.918000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jul 10 01:39:08.918000 audit[10034]: AVC avc: denied { bpf } for pid=10034 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:39:08.918000 audit[10034]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7fff8e7a77a0 a2=28 a3=0 items=0 ppid=9892 pid=10034 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:39:08.918000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jul 10 01:39:08.918000 audit[10034]: AVC avc: denied { bpf } for pid=10034 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:39:08.918000 audit[10034]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=12 a1=7fff8e7a78b0 a2=28 a3=0 items=0 ppid=9892 pid=10034 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:39:08.918000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jul 10 01:39:08.918000 audit[10034]: AVC avc: denied { bpf } for pid=10034 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:39:08.918000 audit[10034]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=12 a1=7fff8e7a7890 a2=28 a3=0 items=0 ppid=9892 pid=10034 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:39:08.918000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jul 10 01:39:08.918000 audit[10034]: AVC avc: denied { bpf } for pid=10034 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:39:08.918000 audit[10034]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=12 a1=7fff8e7a7880 a2=28 a3=0 items=0 ppid=9892 pid=10034 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 
egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:39:08.918000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jul 10 01:39:08.918000 audit[10034]: AVC avc: denied { bpf } for pid=10034 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:39:08.918000 audit[10034]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=12 a1=7fff8e7a78b0 a2=28 a3=0 items=0 ppid=9892 pid=10034 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:39:08.918000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jul 10 01:39:08.918000 audit[10034]: AVC avc: denied { bpf } for pid=10034 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:39:08.918000 audit[10034]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7fff8e7a7890 a2=28 a3=0 items=0 ppid=9892 pid=10034 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:39:08.918000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jul 10 01:39:08.918000 audit[10034]: AVC avc: denied { bpf } for pid=10034 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:39:08.918000 audit[10034]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7fff8e7a78b0 a2=28 a3=0 items=0 ppid=9892 pid=10034 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:39:08.918000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jul 10 01:39:08.918000 audit[10034]: AVC avc: denied { bpf } for pid=10034 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:39:08.918000 audit[10034]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7fff8e7a7880 a2=28 a3=0 items=0 ppid=9892 pid=10034 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:39:08.918000 audit: PROCTITLE 
proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jul 10 01:39:08.918000 audit[10034]: AVC avc: denied { bpf } for pid=10034 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:39:08.918000 audit[10034]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=12 a1=7fff8e7a78f0 a2=28 a3=0 items=0 ppid=9892 pid=10034 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:39:08.918000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jul 10 01:39:08.918000 audit[10034]: AVC avc: denied { bpf } for pid=10034 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:39:08.918000 audit[10034]: AVC avc: denied { bpf } for pid=10034 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:39:08.918000 audit[10034]: AVC avc: denied { perfmon } for pid=10034 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:39:08.918000 audit[10034]: AVC avc: denied { perfmon } for pid=10034 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:39:08.918000 audit[10034]: AVC avc: denied { perfmon } for pid=10034 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:39:08.918000 audit[10034]: AVC avc: denied { perfmon } for pid=10034 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:39:08.918000 audit[10034]: AVC avc: denied { perfmon } for pid=10034 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:39:08.918000 audit[10034]: AVC avc: denied { bpf } for pid=10034 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:39:08.918000 audit[10034]: AVC avc: denied { bpf } for pid=10034 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:39:08.918000 audit: BPF prog-id=46 op=LOAD Jul 10 01:39:08.918000 audit[10034]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7fff8e7a7760 a2=94 a3=0 items=0 ppid=9892 pid=10034 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:39:08.918000 audit: PROCTITLE 
proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jul 10 01:39:08.918000 audit: BPF prog-id=46 op=UNLOAD Jul 10 01:39:08.919000 audit[10034]: AVC avc: denied { bpf } for pid=10034 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:39:08.919000 audit[10034]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=0 a1=7fff8e7a7750 a2=50 a3=2800 items=0 ppid=9892 pid=10034 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:39:08.919000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jul 10 01:39:08.919000 audit[10034]: AVC avc: denied { bpf } for pid=10034 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:39:08.919000 audit[10034]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=0 a1=7fff8e7a7750 a2=50 a3=2800 items=0 ppid=9892 pid=10034 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:39:08.919000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jul 10 01:39:08.919000 audit[10034]: AVC avc: denied { bpf } for pid=10034 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:39:08.919000 audit[10034]: AVC avc: denied { bpf } for pid=10034 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:39:08.919000 audit[10034]: AVC avc: denied { bpf } for pid=10034 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:39:08.919000 audit[10034]: AVC avc: denied { perfmon } for pid=10034 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:39:08.919000 audit[10034]: AVC avc: denied { perfmon } for pid=10034 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:39:08.919000 audit[10034]: AVC avc: denied { perfmon } for pid=10034 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:39:08.919000 audit[10034]: AVC avc: denied { perfmon } for pid=10034 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:39:08.919000 audit[10034]: AVC avc: denied { perfmon } for pid=10034 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:39:08.919000 audit[10034]: AVC avc: denied { bpf } for pid=10034 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:39:08.919000 audit[10034]: AVC avc: denied { bpf } for pid=10034 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:39:08.919000 audit: BPF prog-id=47 op=LOAD Jul 10 01:39:08.919000 audit[10034]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7fff8e7a6f70 a2=94 a3=2 items=0 ppid=9892 pid=10034 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:39:08.919000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jul 10 01:39:08.919000 audit: BPF prog-id=47 op=UNLOAD Jul 10 01:39:08.919000 audit[10034]: AVC avc: denied { bpf } for pid=10034 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:39:08.919000 audit[10034]: AVC avc: denied { bpf } for pid=10034 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:39:08.919000 audit[10034]: AVC avc: denied { bpf } for pid=10034 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:39:08.919000 audit[10034]: AVC avc: denied { perfmon } for pid=10034 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:39:08.919000 audit[10034]: AVC avc: denied { perfmon } for pid=10034 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:39:08.919000 audit[10034]: AVC avc: denied { perfmon } for pid=10034 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:39:08.919000 audit[10034]: AVC avc: denied { perfmon } for pid=10034 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:39:08.919000 audit[10034]: AVC avc: denied { perfmon } for pid=10034 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:39:08.919000 audit[10034]: AVC avc: denied { bpf } for pid=10034 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:39:08.919000 audit[10034]: AVC avc: denied { bpf } for pid=10034 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:39:08.919000 audit: BPF prog-id=48 op=LOAD Jul 10 01:39:08.919000 audit[10034]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7fff8e7a7070 a2=94 a3=30 items=0 ppid=9892 
pid=10034 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:39:08.919000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jul 10 01:39:08.935000 audit[10051]: AVC avc: denied { bpf } for pid=10051 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:39:08.935000 audit[10051]: AVC avc: denied { bpf } for pid=10051 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:39:08.935000 audit[10051]: AVC avc: denied { perfmon } for pid=10051 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:39:08.935000 audit[10051]: AVC avc: denied { perfmon } for pid=10051 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:39:08.935000 audit[10051]: AVC avc: denied { perfmon } for pid=10051 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:39:08.935000 audit[10051]: AVC avc: denied { perfmon } for pid=10051 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:39:08.935000 audit[10051]: AVC avc: denied { perfmon } for pid=10051 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:39:08.935000 audit[10051]: AVC avc: denied { bpf } for pid=10051 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:39:08.935000 audit[10051]: AVC avc: denied { bpf } for pid=10051 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:39:08.935000 audit: BPF prog-id=49 op=LOAD Jul 10 01:39:08.935000 audit[10051]: SYSCALL arch=c000003e syscall=321 success=yes exit=3 a0=5 a1=7ffd83661310 a2=98 a3=0 items=0 ppid=9892 pid=10051 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:39:08.935000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jul 10 01:39:08.935000 audit: BPF prog-id=49 op=UNLOAD Jul 10 01:39:08.935000 audit[10051]: AVC avc: denied { bpf } for pid=10051 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:39:08.935000 audit[10051]: AVC avc: denied { bpf } for pid=10051 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:39:08.935000 audit[10051]: AVC avc: denied { 
perfmon } for pid=10051 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:39:08.935000 audit[10051]: AVC avc: denied { perfmon } for pid=10051 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:39:08.935000 audit[10051]: AVC avc: denied { perfmon } for pid=10051 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:39:08.935000 audit[10051]: AVC avc: denied { perfmon } for pid=10051 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:39:08.935000 audit[10051]: AVC avc: denied { perfmon } for pid=10051 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:39:08.935000 audit[10051]: AVC avc: denied { bpf } for pid=10051 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:39:08.935000 audit[10051]: AVC avc: denied { bpf } for pid=10051 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:39:08.935000 audit: BPF prog-id=50 op=LOAD Jul 10 01:39:08.935000 audit[10051]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffd83661100 a2=94 a3=54428f items=0 ppid=9892 pid=10051 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:39:08.935000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jul 10 01:39:08.935000 audit: BPF prog-id=50 op=UNLOAD Jul 10 01:39:08.935000 audit[10051]: AVC avc: denied { bpf } for pid=10051 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:39:08.935000 audit[10051]: AVC avc: denied { bpf } for pid=10051 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:39:08.935000 audit[10051]: AVC avc: denied { perfmon } for pid=10051 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:39:08.935000 audit[10051]: AVC avc: denied { perfmon } for pid=10051 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:39:08.935000 audit[10051]: AVC avc: denied { perfmon } for pid=10051 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:39:08.935000 audit[10051]: AVC avc: denied { perfmon } for pid=10051 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:39:08.935000 audit[10051]: AVC avc: denied { perfmon } for pid=10051 comm="bpftool" capability=38 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:39:08.935000 audit[10051]: AVC avc: denied { bpf } for pid=10051 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:39:08.935000 audit[10051]: AVC avc: denied { bpf } for pid=10051 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:39:08.935000 audit: BPF prog-id=51 op=LOAD Jul 10 01:39:08.935000 audit[10051]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffd83661130 a2=94 a3=2 items=0 ppid=9892 pid=10051 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:39:08.935000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jul 10 01:39:08.935000 audit: BPF prog-id=51 op=UNLOAD Jul 10 01:39:09.036142 env[1363]: 2025-07-10 01:39:08.956 [WARNING][10035] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="d50fd4405e1f03ed2cdfbef802c2261b6b6ef77dbd652ba6fa35f73abffba742" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--zxwst-eth0" Jul 10 01:39:09.036142 env[1363]: 2025-07-10 01:39:08.956 [INFO][10035] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d50fd4405e1f03ed2cdfbef802c2261b6b6ef77dbd652ba6fa35f73abffba742" Jul 10 01:39:09.036142 env[1363]: 2025-07-10 01:39:08.956 [INFO][10035] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d50fd4405e1f03ed2cdfbef802c2261b6b6ef77dbd652ba6fa35f73abffba742" iface="eth0" netns="" Jul 10 01:39:09.036142 env[1363]: 2025-07-10 01:39:08.956 [INFO][10035] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d50fd4405e1f03ed2cdfbef802c2261b6b6ef77dbd652ba6fa35f73abffba742" Jul 10 01:39:09.036142 env[1363]: 2025-07-10 01:39:08.956 [INFO][10035] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d50fd4405e1f03ed2cdfbef802c2261b6b6ef77dbd652ba6fa35f73abffba742" Jul 10 01:39:09.036142 env[1363]: 2025-07-10 01:39:09.019 [INFO][10058] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d50fd4405e1f03ed2cdfbef802c2261b6b6ef77dbd652ba6fa35f73abffba742" HandleID="k8s-pod-network.d50fd4405e1f03ed2cdfbef802c2261b6b6ef77dbd652ba6fa35f73abffba742" Workload="localhost-k8s-goldmane--58fd7646b9--zxwst-eth0" Jul 10 01:39:09.036142 env[1363]: 2025-07-10 01:39:09.019 [INFO][10058] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 10 01:39:09.036142 env[1363]: 2025-07-10 01:39:09.019 [INFO][10058] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 10 01:39:09.036142 env[1363]: 2025-07-10 01:39:09.032 [WARNING][10058] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d50fd4405e1f03ed2cdfbef802c2261b6b6ef77dbd652ba6fa35f73abffba742" HandleID="k8s-pod-network.d50fd4405e1f03ed2cdfbef802c2261b6b6ef77dbd652ba6fa35f73abffba742" Workload="localhost-k8s-goldmane--58fd7646b9--zxwst-eth0" Jul 10 01:39:09.036142 env[1363]: 2025-07-10 01:39:09.032 [INFO][10058] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d50fd4405e1f03ed2cdfbef802c2261b6b6ef77dbd652ba6fa35f73abffba742" HandleID="k8s-pod-network.d50fd4405e1f03ed2cdfbef802c2261b6b6ef77dbd652ba6fa35f73abffba742" Workload="localhost-k8s-goldmane--58fd7646b9--zxwst-eth0" Jul 10 01:39:09.036142 env[1363]: 2025-07-10 01:39:09.033 [INFO][10058] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 10 01:39:09.036142 env[1363]: 2025-07-10 01:39:09.034 [INFO][10035] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="d50fd4405e1f03ed2cdfbef802c2261b6b6ef77dbd652ba6fa35f73abffba742" Jul 10 01:39:09.036142 env[1363]: time="2025-07-10T01:39:09.036064467Z" level=info msg="TearDown network for sandbox \"d50fd4405e1f03ed2cdfbef802c2261b6b6ef77dbd652ba6fa35f73abffba742\" successfully" Jul 10 01:39:09.036142 env[1363]: time="2025-07-10T01:39:09.036087311Z" level=info msg="StopPodSandbox for \"d50fd4405e1f03ed2cdfbef802c2261b6b6ef77dbd652ba6fa35f73abffba742\" returns successfully" Jul 10 01:39:09.037103 env[1363]: time="2025-07-10T01:39:09.036781037Z" level=info msg="RemovePodSandbox for \"d50fd4405e1f03ed2cdfbef802c2261b6b6ef77dbd652ba6fa35f73abffba742\"" Jul 10 01:39:09.037103 env[1363]: time="2025-07-10T01:39:09.036800490Z" level=info msg="Forcibly stopping sandbox \"d50fd4405e1f03ed2cdfbef802c2261b6b6ef77dbd652ba6fa35f73abffba742\"" Jul 10 01:39:09.084000 audit[10051]: AVC avc: denied { bpf } for pid=10051 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:39:09.084000 audit[10051]: AVC avc: denied { bpf } for pid=10051 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:39:09.084000 audit[10051]: AVC avc: denied { perfmon } for pid=10051 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:39:09.084000 audit[10051]: AVC avc: denied { perfmon } for pid=10051 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:39:09.084000 audit[10051]: AVC avc: denied { perfmon } for pid=10051 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:39:09.084000 audit[10051]: AVC avc: denied { perfmon } for pid=10051 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:39:09.084000 audit[10051]: AVC avc: denied { perfmon } for pid=10051 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:39:09.084000 audit[10051]: AVC avc: denied { bpf } for pid=10051 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:39:09.084000 audit[10051]: AVC avc: denied { bpf } for pid=10051 comm="bpftool" capability=39 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:39:09.084000 audit: BPF prog-id=52 op=LOAD Jul 10 01:39:09.084000 audit[10051]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=5 a1=7ffd83660ff0 a2=94 a3=1 items=0 ppid=9892 pid=10051 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:39:09.084000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jul 10 01:39:09.084000 audit: BPF prog-id=52 op=UNLOAD Jul 10 01:39:09.084000 audit[10051]: AVC avc: denied { perfmon } for pid=10051 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:39:09.084000 audit[10051]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=0 a1=7ffd836610c0 a2=50 a3=7ffd836611a0 items=0 ppid=9892 pid=10051 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:39:09.084000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jul 10 01:39:09.117000 audit[10051]: AVC avc: denied { bpf } for pid=10051 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:39:09.117000 audit[10051]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffd83661000 a2=28 a3=0 items=0 ppid=9892 pid=10051 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:39:09.117000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jul 10 01:39:09.117000 audit[10051]: AVC avc: denied { bpf } for pid=10051 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:39:09.117000 audit[10051]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffd83661030 a2=28 a3=0 items=0 ppid=9892 pid=10051 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:39:09.117000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jul 10 01:39:09.117000 audit[10051]: AVC avc: denied { bpf } for pid=10051 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:39:09.117000 audit[10051]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffd83660f40 a2=28 a3=0 items=0 ppid=9892 pid=10051 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 
fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:39:09.117000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jul 10 01:39:09.117000 audit[10051]: AVC avc: denied { bpf } for pid=10051 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:39:09.117000 audit[10051]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffd83661050 a2=28 a3=0 items=0 ppid=9892 pid=10051 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:39:09.117000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jul 10 01:39:09.117000 audit[10051]: AVC avc: denied { bpf } for pid=10051 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:39:09.117000 audit[10051]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffd83661030 a2=28 a3=0 items=0 ppid=9892 pid=10051 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:39:09.117000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jul 10 01:39:09.117000 audit[10051]: AVC avc: denied { bpf } for pid=10051 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:39:09.117000 audit[10051]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffd83661020 a2=28 a3=0 items=0 ppid=9892 pid=10051 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:39:09.117000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jul 10 01:39:09.117000 audit[10051]: AVC avc: denied { bpf } for pid=10051 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:39:09.117000 audit[10051]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffd83661050 a2=28 a3=0 items=0 ppid=9892 pid=10051 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:39:09.117000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jul 10 01:39:09.117000 audit[10051]: AVC avc: denied { bpf } for pid=10051 comm="bpftool" capability=39 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:39:09.117000 audit[10051]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffd83661030 a2=28 a3=0 items=0 ppid=9892 pid=10051 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:39:09.117000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jul 10 01:39:09.117000 audit[10051]: AVC avc: denied { bpf } for pid=10051 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:39:09.117000 audit[10051]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffd83661050 a2=28 a3=0 items=0 ppid=9892 pid=10051 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:39:09.117000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jul 10 01:39:09.117000 audit[10051]: AVC avc: denied { bpf } for pid=10051 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:39:09.117000 audit[10051]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=12 a1=7ffd83661020 a2=28 a3=0 items=0 ppid=9892 pid=10051 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:39:09.117000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jul 10 01:39:09.117000 audit[10051]: AVC avc: denied { bpf } for pid=10051 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:39:09.117000 audit[10051]: SYSCALL arch=c000003e syscall=321 success=yes exit=4 a0=12 a1=7ffd83661090 a2=28 a3=0 items=0 ppid=9892 pid=10051 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:39:09.117000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jul 10 01:39:09.117000 audit[10051]: AVC avc: denied { perfmon } for pid=10051 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:39:09.117000 audit[10051]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=0 a1=7ffd83660e40 a2=50 a3=1 items=0 ppid=9892 pid=10051 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) 
Jul 10 01:39:09.117000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jul 10 01:39:09.117000 audit[10051]: AVC avc: denied { bpf } for pid=10051 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:39:09.117000 audit[10051]: AVC avc: denied { bpf } for pid=10051 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:39:09.117000 audit[10051]: AVC avc: denied { perfmon } for pid=10051 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:39:09.117000 audit[10051]: AVC avc: denied { perfmon } for pid=10051 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:39:09.117000 audit[10051]: AVC avc: denied { perfmon } for pid=10051 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:39:09.117000 audit[10051]: AVC avc: denied { perfmon } for pid=10051 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:39:09.117000 audit[10051]: AVC avc: denied { perfmon } for pid=10051 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:39:09.117000 audit[10051]: AVC avc: denied { bpf } for pid=10051 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:39:09.117000 audit[10051]: AVC avc: denied { bpf } for pid=10051 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:39:09.117000 audit: BPF prog-id=53 op=LOAD Jul 10 01:39:09.117000 audit[10051]: SYSCALL arch=c000003e syscall=321 success=yes exit=6 a0=5 a1=7ffd83660e40 a2=94 a3=5 items=0 ppid=9892 pid=10051 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:39:09.117000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jul 10 01:39:09.117000 audit: BPF prog-id=53 op=UNLOAD Jul 10 01:39:09.117000 audit[10051]: AVC avc: denied { perfmon } for pid=10051 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:39:09.117000 audit[10051]: SYSCALL arch=c000003e syscall=321 success=yes exit=5 a0=0 a1=7ffd83660ef0 a2=50 a3=1 items=0 ppid=9892 pid=10051 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:39:09.117000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jul 10 01:39:09.117000 audit[10051]: AVC avc: denied { bpf } for pid=10051 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:39:09.117000 audit[10051]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=16 a1=7ffd83661010 a2=4 a3=38 items=0 ppid=9892 pid=10051 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:39:09.117000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jul 10 01:39:09.117000 audit[10051]: AVC avc: denied { bpf } for pid=10051 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:39:09.117000 audit[10051]: AVC avc: denied { bpf } for pid=10051 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:39:09.117000 audit[10051]: AVC avc: denied { perfmon } for pid=10051 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:39:09.117000 audit[10051]: AVC avc: denied { bpf } for pid=10051 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:39:09.117000 audit[10051]: AVC avc: denied { perfmon } for pid=10051 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:39:09.117000 audit[10051]: AVC avc: denied { perfmon } for pid=10051 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:39:09.117000 audit[10051]: AVC avc: denied { perfmon } for pid=10051 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:39:09.117000 audit[10051]: AVC avc: denied { perfmon } for pid=10051 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:39:09.117000 audit[10051]: AVC avc: denied { perfmon } for pid=10051 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:39:09.117000 audit[10051]: AVC avc: denied { bpf } for pid=10051 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:39:09.117000 audit[10051]: AVC avc: denied { confidentiality } for pid=10051 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Jul 10 01:39:09.117000 audit[10051]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=5 a1=7ffd83661060 a2=94 a3=6 items=0 ppid=9892 pid=10051 auid=4294967295 
uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:39:09.117000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jul 10 01:39:09.120000 audit[10051]: AVC avc: denied { bpf } for pid=10051 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:39:09.120000 audit[10051]: AVC avc: denied { bpf } for pid=10051 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:39:09.120000 audit[10051]: AVC avc: denied { perfmon } for pid=10051 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:39:09.120000 audit[10051]: AVC avc: denied { bpf } for pid=10051 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:39:09.120000 audit[10051]: AVC avc: denied { perfmon } for pid=10051 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:39:09.120000 audit[10051]: AVC avc: denied { perfmon } for pid=10051 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:39:09.120000 audit[10051]: AVC avc: denied { perfmon } for pid=10051 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:39:09.120000 audit[10051]: AVC avc: denied { perfmon } for pid=10051 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:39:09.120000 audit[10051]: AVC avc: denied { perfmon } for pid=10051 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:39:09.120000 audit[10051]: AVC avc: denied { bpf } for pid=10051 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:39:09.120000 audit[10051]: AVC avc: denied { confidentiality } for pid=10051 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Jul 10 01:39:09.120000 audit[10051]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=5 a1=7ffd83660810 a2=94 a3=88 items=0 ppid=9892 pid=10051 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:39:09.120000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jul 10 01:39:09.120000 audit[10051]: AVC avc: denied { bpf } for pid=10051 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:39:09.120000 audit[10051]: AVC avc: denied { bpf } for pid=10051 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:39:09.120000 audit[10051]: AVC avc: denied { perfmon } for pid=10051 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:39:09.120000 audit[10051]: AVC avc: denied { bpf } for pid=10051 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:39:09.120000 audit[10051]: AVC avc: denied { perfmon } for pid=10051 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:39:09.120000 audit[10051]: AVC avc: denied { perfmon } for pid=10051 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:39:09.120000 audit[10051]: AVC avc: denied { perfmon } for pid=10051 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:39:09.120000 audit[10051]: AVC avc: denied { perfmon } for pid=10051 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:39:09.120000 audit[10051]: AVC avc: denied { perfmon } for pid=10051 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:39:09.120000 audit[10051]: AVC avc: denied { bpf } for pid=10051 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:39:09.120000 audit[10051]: AVC avc: denied { confidentiality } for pid=10051 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Jul 10 01:39:09.120000 audit[10051]: SYSCALL arch=c000003e syscall=321 success=no exit=-22 a0=5 a1=7ffd83660810 a2=94 a3=88 items=0 ppid=9892 pid=10051 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:39:09.120000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jul 10 01:39:09.193000 audit[10051]: AVC avc: denied { bpf } for pid=10051 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:39:09.193000 audit[10051]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=7ffd83662240 a2=10 a3=f8f00800 items=0 ppid=9892 pid=10051 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:39:09.193000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jul 10 01:39:09.194000 audit[10051]: AVC avc: denied { bpf } for pid=10051 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:39:09.194000 audit[10051]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=7ffd836620e0 a2=10 a3=3 items=0 ppid=9892 pid=10051 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:39:09.194000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jul 10 01:39:09.194000 audit[10051]: AVC avc: denied { bpf } for pid=10051 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:39:09.194000 audit[10051]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=7ffd83662080 a2=10 a3=3 items=0 ppid=9892 pid=10051 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:39:09.194000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jul 10 01:39:09.194000 audit[10051]: AVC avc: denied { bpf } for pid=10051 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Jul 10 01:39:09.194000 audit[10051]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=7ffd83662080 a2=10 a3=7 items=0 ppid=9892 pid=10051 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:39:09.194000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jul 10 01:39:09.202000 audit: BPF prog-id=48 op=UNLOAD Jul 10 01:39:09.202000 audit[9177]: SYSCALL arch=c000003e syscall=35 success=yes exit=0 a0=c000083f18 a1=0 a2=0 a3=7fffdd7f0080 items=0 ppid=2030 pid=9177 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:39:09.202000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D3133392E3137382E37302E313032002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B Jul 10 01:39:09.213914 env[1363]: 2025-07-10 01:39:09.148 [WARNING][10078] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="d50fd4405e1f03ed2cdfbef802c2261b6b6ef77dbd652ba6fa35f73abffba742" 
WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--zxwst-eth0" Jul 10 01:39:09.213914 env[1363]: 2025-07-10 01:39:09.148 [INFO][10078] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d50fd4405e1f03ed2cdfbef802c2261b6b6ef77dbd652ba6fa35f73abffba742" Jul 10 01:39:09.213914 env[1363]: 2025-07-10 01:39:09.148 [INFO][10078] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d50fd4405e1f03ed2cdfbef802c2261b6b6ef77dbd652ba6fa35f73abffba742" iface="eth0" netns="" Jul 10 01:39:09.213914 env[1363]: 2025-07-10 01:39:09.148 [INFO][10078] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d50fd4405e1f03ed2cdfbef802c2261b6b6ef77dbd652ba6fa35f73abffba742" Jul 10 01:39:09.213914 env[1363]: 2025-07-10 01:39:09.148 [INFO][10078] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d50fd4405e1f03ed2cdfbef802c2261b6b6ef77dbd652ba6fa35f73abffba742" Jul 10 01:39:09.213914 env[1363]: 2025-07-10 01:39:09.199 [INFO][10096] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d50fd4405e1f03ed2cdfbef802c2261b6b6ef77dbd652ba6fa35f73abffba742" HandleID="k8s-pod-network.d50fd4405e1f03ed2cdfbef802c2261b6b6ef77dbd652ba6fa35f73abffba742" Workload="localhost-k8s-goldmane--58fd7646b9--zxwst-eth0" Jul 10 01:39:09.213914 env[1363]: 2025-07-10 01:39:09.199 [INFO][10096] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 10 01:39:09.213914 env[1363]: 2025-07-10 01:39:09.201 [INFO][10096] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 10 01:39:09.213914 env[1363]: 2025-07-10 01:39:09.206 [WARNING][10096] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="d50fd4405e1f03ed2cdfbef802c2261b6b6ef77dbd652ba6fa35f73abffba742" HandleID="k8s-pod-network.d50fd4405e1f03ed2cdfbef802c2261b6b6ef77dbd652ba6fa35f73abffba742" Workload="localhost-k8s-goldmane--58fd7646b9--zxwst-eth0" Jul 10 01:39:09.213914 env[1363]: 2025-07-10 01:39:09.206 [INFO][10096] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d50fd4405e1f03ed2cdfbef802c2261b6b6ef77dbd652ba6fa35f73abffba742" HandleID="k8s-pod-network.d50fd4405e1f03ed2cdfbef802c2261b6b6ef77dbd652ba6fa35f73abffba742" Workload="localhost-k8s-goldmane--58fd7646b9--zxwst-eth0" Jul 10 01:39:09.213914 env[1363]: 2025-07-10 01:39:09.208 [INFO][10096] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 10 01:39:09.213914 env[1363]: 2025-07-10 01:39:09.212 [INFO][10078] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="d50fd4405e1f03ed2cdfbef802c2261b6b6ef77dbd652ba6fa35f73abffba742" Jul 10 01:39:09.214354 env[1363]: time="2025-07-10T01:39:09.214324083Z" level=info msg="TearDown network for sandbox \"d50fd4405e1f03ed2cdfbef802c2261b6b6ef77dbd652ba6fa35f73abffba742\" successfully" Jul 10 01:39:09.216462 env[1363]: time="2025-07-10T01:39:09.216446024Z" level=info msg="RemovePodSandbox \"d50fd4405e1f03ed2cdfbef802c2261b6b6ef77dbd652ba6fa35f73abffba742\" returns successfully" Jul 10 01:39:09.216859 env[1363]: time="2025-07-10T01:39:09.216843547Z" level=info msg="StopPodSandbox for \"3e37249528bb3e0be92befd65b6647a34c4c854d8942b3cdda871096eeadbddb\"" Jul 10 01:39:09.300070 env[1363]: 2025-07-10 01:39:09.262 [WARNING][10115] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="3e37249528bb3e0be92befd65b6647a34c4c854d8942b3cdda871096eeadbddb" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--zxwst-eth0" Jul 10 01:39:09.300070 env[1363]: 2025-07-10 01:39:09.262 [INFO][10115] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="3e37249528bb3e0be92befd65b6647a34c4c854d8942b3cdda871096eeadbddb" Jul 10 01:39:09.300070 env[1363]: 2025-07-10 01:39:09.262 [INFO][10115] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="3e37249528bb3e0be92befd65b6647a34c4c854d8942b3cdda871096eeadbddb" iface="eth0" netns="" Jul 10 01:39:09.300070 env[1363]: 2025-07-10 01:39:09.262 [INFO][10115] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="3e37249528bb3e0be92befd65b6647a34c4c854d8942b3cdda871096eeadbddb" Jul 10 01:39:09.300070 env[1363]: 2025-07-10 01:39:09.262 [INFO][10115] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3e37249528bb3e0be92befd65b6647a34c4c854d8942b3cdda871096eeadbddb" Jul 10 01:39:09.300070 env[1363]: 2025-07-10 01:39:09.288 [INFO][10128] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3e37249528bb3e0be92befd65b6647a34c4c854d8942b3cdda871096eeadbddb" HandleID="k8s-pod-network.3e37249528bb3e0be92befd65b6647a34c4c854d8942b3cdda871096eeadbddb" Workload="localhost-k8s-goldmane--58fd7646b9--zxwst-eth0" Jul 10 01:39:09.300070 env[1363]: 2025-07-10 01:39:09.289 [INFO][10128] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 10 01:39:09.300070 env[1363]: 2025-07-10 01:39:09.289 [INFO][10128] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 10 01:39:09.300070 env[1363]: 2025-07-10 01:39:09.294 [WARNING][10128] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="3e37249528bb3e0be92befd65b6647a34c4c854d8942b3cdda871096eeadbddb" HandleID="k8s-pod-network.3e37249528bb3e0be92befd65b6647a34c4c854d8942b3cdda871096eeadbddb" Workload="localhost-k8s-goldmane--58fd7646b9--zxwst-eth0" Jul 10 01:39:09.300070 env[1363]: 2025-07-10 01:39:09.295 [INFO][10128] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3e37249528bb3e0be92befd65b6647a34c4c854d8942b3cdda871096eeadbddb" HandleID="k8s-pod-network.3e37249528bb3e0be92befd65b6647a34c4c854d8942b3cdda871096eeadbddb" Workload="localhost-k8s-goldmane--58fd7646b9--zxwst-eth0" Jul 10 01:39:09.300070 env[1363]: 2025-07-10 01:39:09.295 [INFO][10128] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 10 01:39:09.300070 env[1363]: 2025-07-10 01:39:09.298 [INFO][10115] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="3e37249528bb3e0be92befd65b6647a34c4c854d8942b3cdda871096eeadbddb" Jul 10 01:39:09.300446 env[1363]: time="2025-07-10T01:39:09.300424497Z" level=info msg="TearDown network for sandbox \"3e37249528bb3e0be92befd65b6647a34c4c854d8942b3cdda871096eeadbddb\" successfully" Jul 10 01:39:09.300507 env[1363]: time="2025-07-10T01:39:09.300494570Z" level=info msg="StopPodSandbox for \"3e37249528bb3e0be92befd65b6647a34c4c854d8942b3cdda871096eeadbddb\" returns successfully" Jul 10 01:39:09.300974 env[1363]: time="2025-07-10T01:39:09.300939167Z" level=info msg="RemovePodSandbox for \"3e37249528bb3e0be92befd65b6647a34c4c854d8942b3cdda871096eeadbddb\"" Jul 10 01:39:09.301052 env[1363]: time="2025-07-10T01:39:09.301029237Z" level=info msg="Forcibly stopping sandbox \"3e37249528bb3e0be92befd65b6647a34c4c854d8942b3cdda871096eeadbddb\"" Jul 10 01:39:09.428413 env[1363]: 2025-07-10 01:39:09.374 [WARNING][10150] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="3e37249528bb3e0be92befd65b6647a34c4c854d8942b3cdda871096eeadbddb" WorkloadEndpoint="localhost-k8s-goldmane--58fd7646b9--zxwst-eth0" Jul 10 01:39:09.428413 env[1363]: 2025-07-10 01:39:09.374 [INFO][10150] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="3e37249528bb3e0be92befd65b6647a34c4c854d8942b3cdda871096eeadbddb" Jul 10 01:39:09.428413 env[1363]: 2025-07-10 01:39:09.374 [INFO][10150] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="3e37249528bb3e0be92befd65b6647a34c4c854d8942b3cdda871096eeadbddb" iface="eth0" netns="" Jul 10 01:39:09.428413 env[1363]: 2025-07-10 01:39:09.374 [INFO][10150] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="3e37249528bb3e0be92befd65b6647a34c4c854d8942b3cdda871096eeadbddb" Jul 10 01:39:09.428413 env[1363]: 2025-07-10 01:39:09.374 [INFO][10150] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3e37249528bb3e0be92befd65b6647a34c4c854d8942b3cdda871096eeadbddb" Jul 10 01:39:09.428413 env[1363]: 2025-07-10 01:39:09.415 [INFO][10169] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3e37249528bb3e0be92befd65b6647a34c4c854d8942b3cdda871096eeadbddb" HandleID="k8s-pod-network.3e37249528bb3e0be92befd65b6647a34c4c854d8942b3cdda871096eeadbddb" Workload="localhost-k8s-goldmane--58fd7646b9--zxwst-eth0" Jul 10 01:39:09.428413 env[1363]: 2025-07-10 01:39:09.415 [INFO][10169] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 10 01:39:09.428413 env[1363]: 2025-07-10 01:39:09.416 [INFO][10169] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 10 01:39:09.428413 env[1363]: 2025-07-10 01:39:09.423 [WARNING][10169] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="3e37249528bb3e0be92befd65b6647a34c4c854d8942b3cdda871096eeadbddb" HandleID="k8s-pod-network.3e37249528bb3e0be92befd65b6647a34c4c854d8942b3cdda871096eeadbddb" Workload="localhost-k8s-goldmane--58fd7646b9--zxwst-eth0" Jul 10 01:39:09.428413 env[1363]: 2025-07-10 01:39:09.423 [INFO][10169] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3e37249528bb3e0be92befd65b6647a34c4c854d8942b3cdda871096eeadbddb" HandleID="k8s-pod-network.3e37249528bb3e0be92befd65b6647a34c4c854d8942b3cdda871096eeadbddb" Workload="localhost-k8s-goldmane--58fd7646b9--zxwst-eth0" Jul 10 01:39:09.428413 env[1363]: 2025-07-10 01:39:09.424 [INFO][10169] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 10 01:39:09.428413 env[1363]: 2025-07-10 01:39:09.425 [INFO][10150] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="3e37249528bb3e0be92befd65b6647a34c4c854d8942b3cdda871096eeadbddb" Jul 10 01:39:09.429319 env[1363]: time="2025-07-10T01:39:09.429293464Z" level=info msg="TearDown network for sandbox \"3e37249528bb3e0be92befd65b6647a34c4c854d8942b3cdda871096eeadbddb\" successfully" Jul 10 01:39:09.431813 env[1363]: time="2025-07-10T01:39:09.431798482Z" level=info msg="RemovePodSandbox \"3e37249528bb3e0be92befd65b6647a34c4c854d8942b3cdda871096eeadbddb\" returns successfully" Jul 10 01:39:09.433284 env[1363]: time="2025-07-10T01:39:09.433269736Z" level=info msg="StopPodSandbox for \"47b065192ffd0b7504649af3406bb653c598c34d33430dd9e03fcdcb34aca714\"" Jul 10 01:39:09.437000 audit[10192]: NETFILTER_CFG table=filter:173 family=2 entries=226 op=nft_register_rule pid=10192 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jul 10 01:39:09.437000 audit[10192]: SYSCALL arch=c000003e syscall=46 success=yes exit=5484 a0=3 a1=7ffd4683d2d0 a2=0 a3=7ffd4683d2bc items=0 ppid=9892 pid=10192 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:39:09.437000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jul 10 01:39:09.438000 audit[10192]: NETFILTER_CFG table=filter:174 family=2 entries=36 op=nft_unregister_chain pid=10192 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jul 10 01:39:09.438000 audit[10192]: SYSCALL arch=c000003e syscall=46 success=yes exit=4928 a0=3 a1=7ffd4683d2d0 a2=0 a3=7ffd4683d2bc items=0 ppid=9892 pid=10192 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:39:09.438000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jul 10 01:39:09.493942 env[1363]: 2025-07-10 01:39:09.470 [WARNING][10205] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="47b065192ffd0b7504649af3406bb653c598c34d33430dd9e03fcdcb34aca714" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--snhl5-eth0" Jul 10 01:39:09.493942 env[1363]: 2025-07-10 01:39:09.471 [INFO][10205] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="47b065192ffd0b7504649af3406bb653c598c34d33430dd9e03fcdcb34aca714" Jul 10 01:39:09.493942 env[1363]: 2025-07-10 01:39:09.471 [INFO][10205] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="47b065192ffd0b7504649af3406bb653c598c34d33430dd9e03fcdcb34aca714" iface="eth0" netns="" Jul 10 01:39:09.493942 env[1363]: 2025-07-10 01:39:09.471 [INFO][10205] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="47b065192ffd0b7504649af3406bb653c598c34d33430dd9e03fcdcb34aca714" Jul 10 01:39:09.493942 env[1363]: 2025-07-10 01:39:09.471 [INFO][10205] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="47b065192ffd0b7504649af3406bb653c598c34d33430dd9e03fcdcb34aca714" Jul 10 01:39:09.493942 env[1363]: 2025-07-10 01:39:09.485 [INFO][10215] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="47b065192ffd0b7504649af3406bb653c598c34d33430dd9e03fcdcb34aca714" HandleID="k8s-pod-network.47b065192ffd0b7504649af3406bb653c598c34d33430dd9e03fcdcb34aca714" Workload="localhost-k8s-coredns--7c65d6cfc9--snhl5-eth0" Jul 10 01:39:09.493942 env[1363]: 2025-07-10 01:39:09.486 [INFO][10215] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 10 01:39:09.493942 env[1363]: 2025-07-10 01:39:09.486 [INFO][10215] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 10 01:39:09.493942 env[1363]: 2025-07-10 01:39:09.489 [WARNING][10215] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="47b065192ffd0b7504649af3406bb653c598c34d33430dd9e03fcdcb34aca714" HandleID="k8s-pod-network.47b065192ffd0b7504649af3406bb653c598c34d33430dd9e03fcdcb34aca714" Workload="localhost-k8s-coredns--7c65d6cfc9--snhl5-eth0" Jul 10 01:39:09.493942 env[1363]: 2025-07-10 01:39:09.489 [INFO][10215] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="47b065192ffd0b7504649af3406bb653c598c34d33430dd9e03fcdcb34aca714" HandleID="k8s-pod-network.47b065192ffd0b7504649af3406bb653c598c34d33430dd9e03fcdcb34aca714" Workload="localhost-k8s-coredns--7c65d6cfc9--snhl5-eth0" Jul 10 01:39:09.493942 env[1363]: 2025-07-10 01:39:09.490 [INFO][10215] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 10 01:39:09.493942 env[1363]: 2025-07-10 01:39:09.492 [INFO][10205] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="47b065192ffd0b7504649af3406bb653c598c34d33430dd9e03fcdcb34aca714" Jul 10 01:39:09.494478 env[1363]: time="2025-07-10T01:39:09.494455892Z" level=info msg="TearDown network for sandbox \"47b065192ffd0b7504649af3406bb653c598c34d33430dd9e03fcdcb34aca714\" successfully" Jul 10 01:39:09.494543 env[1363]: time="2025-07-10T01:39:09.494530983Z" level=info msg="StopPodSandbox for \"47b065192ffd0b7504649af3406bb653c598c34d33430dd9e03fcdcb34aca714\" returns successfully" Jul 10 01:39:09.494938 env[1363]: time="2025-07-10T01:39:09.494922913Z" level=info msg="RemovePodSandbox for \"47b065192ffd0b7504649af3406bb653c598c34d33430dd9e03fcdcb34aca714\"" Jul 10 01:39:09.495066 env[1363]: time="2025-07-10T01:39:09.495024278Z" level=info msg="Forcibly stopping sandbox \"47b065192ffd0b7504649af3406bb653c598c34d33430dd9e03fcdcb34aca714\"" Jul 10 01:39:09.543825 env[1363]: 2025-07-10 01:39:09.520 [WARNING][10229] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="47b065192ffd0b7504649af3406bb653c598c34d33430dd9e03fcdcb34aca714" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--snhl5-eth0" Jul 10 01:39:09.543825 env[1363]: 2025-07-10 01:39:09.520 [INFO][10229] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="47b065192ffd0b7504649af3406bb653c598c34d33430dd9e03fcdcb34aca714" Jul 10 01:39:09.543825 env[1363]: 2025-07-10 01:39:09.520 [INFO][10229] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="47b065192ffd0b7504649af3406bb653c598c34d33430dd9e03fcdcb34aca714" iface="eth0" netns="" Jul 10 01:39:09.543825 env[1363]: 2025-07-10 01:39:09.520 [INFO][10229] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="47b065192ffd0b7504649af3406bb653c598c34d33430dd9e03fcdcb34aca714" Jul 10 01:39:09.543825 env[1363]: 2025-07-10 01:39:09.520 [INFO][10229] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="47b065192ffd0b7504649af3406bb653c598c34d33430dd9e03fcdcb34aca714" Jul 10 01:39:09.543825 env[1363]: 2025-07-10 01:39:09.536 [INFO][10236] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="47b065192ffd0b7504649af3406bb653c598c34d33430dd9e03fcdcb34aca714" HandleID="k8s-pod-network.47b065192ffd0b7504649af3406bb653c598c34d33430dd9e03fcdcb34aca714" Workload="localhost-k8s-coredns--7c65d6cfc9--snhl5-eth0" Jul 10 01:39:09.543825 env[1363]: 2025-07-10 01:39:09.536 [INFO][10236] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 10 01:39:09.543825 env[1363]: 2025-07-10 01:39:09.536 [INFO][10236] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 10 01:39:09.543825 env[1363]: 2025-07-10 01:39:09.540 [WARNING][10236] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="47b065192ffd0b7504649af3406bb653c598c34d33430dd9e03fcdcb34aca714" HandleID="k8s-pod-network.47b065192ffd0b7504649af3406bb653c598c34d33430dd9e03fcdcb34aca714" Workload="localhost-k8s-coredns--7c65d6cfc9--snhl5-eth0" Jul 10 01:39:09.543825 env[1363]: 2025-07-10 01:39:09.540 [INFO][10236] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="47b065192ffd0b7504649af3406bb653c598c34d33430dd9e03fcdcb34aca714" HandleID="k8s-pod-network.47b065192ffd0b7504649af3406bb653c598c34d33430dd9e03fcdcb34aca714" Workload="localhost-k8s-coredns--7c65d6cfc9--snhl5-eth0" Jul 10 01:39:09.543825 env[1363]: 2025-07-10 01:39:09.540 [INFO][10236] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 10 01:39:09.543825 env[1363]: 2025-07-10 01:39:09.542 [INFO][10229] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="47b065192ffd0b7504649af3406bb653c598c34d33430dd9e03fcdcb34aca714" Jul 10 01:39:09.544470 env[1363]: time="2025-07-10T01:39:09.544027805Z" level=info msg="TearDown network for sandbox \"47b065192ffd0b7504649af3406bb653c598c34d33430dd9e03fcdcb34aca714\" successfully" Jul 10 01:39:09.545923 env[1363]: time="2025-07-10T01:39:09.545909439Z" level=info msg="RemovePodSandbox \"47b065192ffd0b7504649af3406bb653c598c34d33430dd9e03fcdcb34aca714\" returns successfully" Jul 10 01:39:09.546284 env[1363]: time="2025-07-10T01:39:09.546271314Z" level=info msg="StopPodSandbox for \"47772743ab806984f8c08f88def502ffe4f7fc6e574fb3f0d5b58c702f3e79ff\"" Jul 10 01:39:09.593996 env[1363]: 2025-07-10 01:39:09.571 [WARNING][10251] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="47772743ab806984f8c08f88def502ffe4f7fc6e574fb3f0d5b58c702f3e79ff" WorkloadEndpoint="localhost-k8s-whisker--5bc4d9bd7d--nwwj6-eth0" Jul 10 01:39:09.593996 env[1363]: 2025-07-10 01:39:09.571 [INFO][10251] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="47772743ab806984f8c08f88def502ffe4f7fc6e574fb3f0d5b58c702f3e79ff" Jul 10 01:39:09.593996 env[1363]: 2025-07-10 01:39:09.572 [INFO][10251] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="47772743ab806984f8c08f88def502ffe4f7fc6e574fb3f0d5b58c702f3e79ff" iface="eth0" netns="" Jul 10 01:39:09.593996 env[1363]: 2025-07-10 01:39:09.572 [INFO][10251] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="47772743ab806984f8c08f88def502ffe4f7fc6e574fb3f0d5b58c702f3e79ff" Jul 10 01:39:09.593996 env[1363]: 2025-07-10 01:39:09.572 [INFO][10251] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="47772743ab806984f8c08f88def502ffe4f7fc6e574fb3f0d5b58c702f3e79ff" Jul 10 01:39:09.593996 env[1363]: 2025-07-10 01:39:09.586 [INFO][10258] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="47772743ab806984f8c08f88def502ffe4f7fc6e574fb3f0d5b58c702f3e79ff" HandleID="k8s-pod-network.47772743ab806984f8c08f88def502ffe4f7fc6e574fb3f0d5b58c702f3e79ff" Workload="localhost-k8s-whisker--5bc4d9bd7d--nwwj6-eth0" Jul 10 01:39:09.593996 env[1363]: 2025-07-10 01:39:09.586 [INFO][10258] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 10 01:39:09.593996 env[1363]: 2025-07-10 01:39:09.587 [INFO][10258] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 10 01:39:09.593996 env[1363]: 2025-07-10 01:39:09.590 [WARNING][10258] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="47772743ab806984f8c08f88def502ffe4f7fc6e574fb3f0d5b58c702f3e79ff" HandleID="k8s-pod-network.47772743ab806984f8c08f88def502ffe4f7fc6e574fb3f0d5b58c702f3e79ff" Workload="localhost-k8s-whisker--5bc4d9bd7d--nwwj6-eth0" Jul 10 01:39:09.593996 env[1363]: 2025-07-10 01:39:09.590 [INFO][10258] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="47772743ab806984f8c08f88def502ffe4f7fc6e574fb3f0d5b58c702f3e79ff" HandleID="k8s-pod-network.47772743ab806984f8c08f88def502ffe4f7fc6e574fb3f0d5b58c702f3e79ff" Workload="localhost-k8s-whisker--5bc4d9bd7d--nwwj6-eth0" Jul 10 01:39:09.593996 env[1363]: 2025-07-10 01:39:09.591 [INFO][10258] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 10 01:39:09.593996 env[1363]: 2025-07-10 01:39:09.592 [INFO][10251] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="47772743ab806984f8c08f88def502ffe4f7fc6e574fb3f0d5b58c702f3e79ff" Jul 10 01:39:09.594482 env[1363]: time="2025-07-10T01:39:09.594458333Z" level=info msg="TearDown network for sandbox \"47772743ab806984f8c08f88def502ffe4f7fc6e574fb3f0d5b58c702f3e79ff\" successfully" Jul 10 01:39:09.594547 env[1363]: time="2025-07-10T01:39:09.594534441Z" level=info msg="StopPodSandbox for \"47772743ab806984f8c08f88def502ffe4f7fc6e574fb3f0d5b58c702f3e79ff\" returns successfully" Jul 10 01:39:09.594960 env[1363]: time="2025-07-10T01:39:09.594941900Z" level=info msg="RemovePodSandbox for \"47772743ab806984f8c08f88def502ffe4f7fc6e574fb3f0d5b58c702f3e79ff\"" Jul 10 01:39:09.595005 env[1363]: time="2025-07-10T01:39:09.594964384Z" level=info msg="Forcibly stopping sandbox \"47772743ab806984f8c08f88def502ffe4f7fc6e574fb3f0d5b58c702f3e79ff\"" Jul 10 01:39:09.645486 env[1363]: 2025-07-10 01:39:09.622 [WARNING][10272] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="47772743ab806984f8c08f88def502ffe4f7fc6e574fb3f0d5b58c702f3e79ff" WorkloadEndpoint="localhost-k8s-whisker--5bc4d9bd7d--nwwj6-eth0" Jul 10 01:39:09.645486 env[1363]: 2025-07-10 01:39:09.622 [INFO][10272] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="47772743ab806984f8c08f88def502ffe4f7fc6e574fb3f0d5b58c702f3e79ff" Jul 10 01:39:09.645486 env[1363]: 2025-07-10 01:39:09.622 [INFO][10272] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="47772743ab806984f8c08f88def502ffe4f7fc6e574fb3f0d5b58c702f3e79ff" iface="eth0" netns="" Jul 10 01:39:09.645486 env[1363]: 2025-07-10 01:39:09.622 [INFO][10272] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="47772743ab806984f8c08f88def502ffe4f7fc6e574fb3f0d5b58c702f3e79ff" Jul 10 01:39:09.645486 env[1363]: 2025-07-10 01:39:09.622 [INFO][10272] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="47772743ab806984f8c08f88def502ffe4f7fc6e574fb3f0d5b58c702f3e79ff" Jul 10 01:39:09.645486 env[1363]: 2025-07-10 01:39:09.638 [INFO][10279] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="47772743ab806984f8c08f88def502ffe4f7fc6e574fb3f0d5b58c702f3e79ff" HandleID="k8s-pod-network.47772743ab806984f8c08f88def502ffe4f7fc6e574fb3f0d5b58c702f3e79ff" Workload="localhost-k8s-whisker--5bc4d9bd7d--nwwj6-eth0" Jul 10 01:39:09.645486 env[1363]: 2025-07-10 01:39:09.638 [INFO][10279] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 10 01:39:09.645486 env[1363]: 2025-07-10 01:39:09.638 [INFO][10279] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 10 01:39:09.645486 env[1363]: 2025-07-10 01:39:09.641 [WARNING][10279] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="47772743ab806984f8c08f88def502ffe4f7fc6e574fb3f0d5b58c702f3e79ff" HandleID="k8s-pod-network.47772743ab806984f8c08f88def502ffe4f7fc6e574fb3f0d5b58c702f3e79ff" Workload="localhost-k8s-whisker--5bc4d9bd7d--nwwj6-eth0" Jul 10 01:39:09.645486 env[1363]: 2025-07-10 01:39:09.641 [INFO][10279] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="47772743ab806984f8c08f88def502ffe4f7fc6e574fb3f0d5b58c702f3e79ff" HandleID="k8s-pod-network.47772743ab806984f8c08f88def502ffe4f7fc6e574fb3f0d5b58c702f3e79ff" Workload="localhost-k8s-whisker--5bc4d9bd7d--nwwj6-eth0" Jul 10 01:39:09.645486 env[1363]: 2025-07-10 01:39:09.642 [INFO][10279] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 10 01:39:09.645486 env[1363]: 2025-07-10 01:39:09.643 [INFO][10272] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="47772743ab806984f8c08f88def502ffe4f7fc6e574fb3f0d5b58c702f3e79ff" Jul 10 01:39:09.645867 env[1363]: time="2025-07-10T01:39:09.645843896Z" level=info msg="TearDown network for sandbox \"47772743ab806984f8c08f88def502ffe4f7fc6e574fb3f0d5b58c702f3e79ff\" successfully" Jul 10 01:39:09.667735 env[1363]: time="2025-07-10T01:39:09.667708149Z" level=info msg="RemovePodSandbox \"47772743ab806984f8c08f88def502ffe4f7fc6e574fb3f0d5b58c702f3e79ff\" returns successfully" Jul 10 01:39:09.668260 env[1363]: time="2025-07-10T01:39:09.668241198Z" level=info msg="StopPodSandbox for \"131d31244e534a733a530103ddea3666cd2eb72fb0933d89a095d6d044cd52d3\"" Jul 10 01:39:09.715105 env[1363]: 2025-07-10 01:39:09.690 [WARNING][10293] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="131d31244e534a733a530103ddea3666cd2eb72fb0933d89a095d6d044cd52d3" WorkloadEndpoint="localhost-k8s-calico--apiserver--6d44674bc4--w2f48-eth0" Jul 10 01:39:09.715105 env[1363]: 2025-07-10 01:39:09.690 [INFO][10293] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="131d31244e534a733a530103ddea3666cd2eb72fb0933d89a095d6d044cd52d3" Jul 10 01:39:09.715105 env[1363]: 2025-07-10 01:39:09.690 [INFO][10293] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="131d31244e534a733a530103ddea3666cd2eb72fb0933d89a095d6d044cd52d3" iface="eth0" netns="" Jul 10 01:39:09.715105 env[1363]: 2025-07-10 01:39:09.690 [INFO][10293] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="131d31244e534a733a530103ddea3666cd2eb72fb0933d89a095d6d044cd52d3" Jul 10 01:39:09.715105 env[1363]: 2025-07-10 01:39:09.690 [INFO][10293] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="131d31244e534a733a530103ddea3666cd2eb72fb0933d89a095d6d044cd52d3" Jul 10 01:39:09.715105 env[1363]: 2025-07-10 01:39:09.708 [INFO][10301] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="131d31244e534a733a530103ddea3666cd2eb72fb0933d89a095d6d044cd52d3" HandleID="k8s-pod-network.131d31244e534a733a530103ddea3666cd2eb72fb0933d89a095d6d044cd52d3" Workload="localhost-k8s-calico--apiserver--6d44674bc4--w2f48-eth0" Jul 10 01:39:09.715105 env[1363]: 2025-07-10 01:39:09.708 [INFO][10301] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 10 01:39:09.715105 env[1363]: 2025-07-10 01:39:09.708 [INFO][10301] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 10 01:39:09.715105 env[1363]: 2025-07-10 01:39:09.711 [WARNING][10301] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="131d31244e534a733a530103ddea3666cd2eb72fb0933d89a095d6d044cd52d3" HandleID="k8s-pod-network.131d31244e534a733a530103ddea3666cd2eb72fb0933d89a095d6d044cd52d3" Workload="localhost-k8s-calico--apiserver--6d44674bc4--w2f48-eth0" Jul 10 01:39:09.715105 env[1363]: 2025-07-10 01:39:09.711 [INFO][10301] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="131d31244e534a733a530103ddea3666cd2eb72fb0933d89a095d6d044cd52d3" HandleID="k8s-pod-network.131d31244e534a733a530103ddea3666cd2eb72fb0933d89a095d6d044cd52d3" Workload="localhost-k8s-calico--apiserver--6d44674bc4--w2f48-eth0" Jul 10 01:39:09.715105 env[1363]: 2025-07-10 01:39:09.712 [INFO][10301] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 10 01:39:09.715105 env[1363]: 2025-07-10 01:39:09.713 [INFO][10293] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="131d31244e534a733a530103ddea3666cd2eb72fb0933d89a095d6d044cd52d3" Jul 10 01:39:09.716265 env[1363]: time="2025-07-10T01:39:09.716241671Z" level=info msg="TearDown network for sandbox \"131d31244e534a733a530103ddea3666cd2eb72fb0933d89a095d6d044cd52d3\" successfully" Jul 10 01:39:09.716318 env[1363]: time="2025-07-10T01:39:09.716306050Z" level=info msg="StopPodSandbox for \"131d31244e534a733a530103ddea3666cd2eb72fb0933d89a095d6d044cd52d3\" returns successfully" Jul 10 01:39:09.716801 env[1363]: time="2025-07-10T01:39:09.716781439Z" level=info msg="RemovePodSandbox for \"131d31244e534a733a530103ddea3666cd2eb72fb0933d89a095d6d044cd52d3\"" Jul 10 01:39:09.716841 env[1363]: time="2025-07-10T01:39:09.716805126Z" level=info msg="Forcibly stopping sandbox \"131d31244e534a733a530103ddea3666cd2eb72fb0933d89a095d6d044cd52d3\"" Jul 10 01:39:09.766436 env[1363]: 2025-07-10 01:39:09.739 [WARNING][10315] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="131d31244e534a733a530103ddea3666cd2eb72fb0933d89a095d6d044cd52d3" WorkloadEndpoint="localhost-k8s-calico--apiserver--6d44674bc4--w2f48-eth0" Jul 10 01:39:09.766436 env[1363]: 2025-07-10 01:39:09.739 [INFO][10315] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="131d31244e534a733a530103ddea3666cd2eb72fb0933d89a095d6d044cd52d3" Jul 10 01:39:09.766436 env[1363]: 2025-07-10 01:39:09.739 [INFO][10315] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="131d31244e534a733a530103ddea3666cd2eb72fb0933d89a095d6d044cd52d3" iface="eth0" netns="" Jul 10 01:39:09.766436 env[1363]: 2025-07-10 01:39:09.739 [INFO][10315] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="131d31244e534a733a530103ddea3666cd2eb72fb0933d89a095d6d044cd52d3" Jul 10 01:39:09.766436 env[1363]: 2025-07-10 01:39:09.739 [INFO][10315] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="131d31244e534a733a530103ddea3666cd2eb72fb0933d89a095d6d044cd52d3" Jul 10 01:39:09.766436 env[1363]: 2025-07-10 01:39:09.758 [INFO][10322] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="131d31244e534a733a530103ddea3666cd2eb72fb0933d89a095d6d044cd52d3" HandleID="k8s-pod-network.131d31244e534a733a530103ddea3666cd2eb72fb0933d89a095d6d044cd52d3" Workload="localhost-k8s-calico--apiserver--6d44674bc4--w2f48-eth0" Jul 10 01:39:09.766436 env[1363]: 2025-07-10 01:39:09.759 [INFO][10322] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 10 01:39:09.766436 env[1363]: 2025-07-10 01:39:09.759 [INFO][10322] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 10 01:39:09.766436 env[1363]: 2025-07-10 01:39:09.762 [WARNING][10322] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="131d31244e534a733a530103ddea3666cd2eb72fb0933d89a095d6d044cd52d3" HandleID="k8s-pod-network.131d31244e534a733a530103ddea3666cd2eb72fb0933d89a095d6d044cd52d3" Workload="localhost-k8s-calico--apiserver--6d44674bc4--w2f48-eth0" Jul 10 01:39:09.766436 env[1363]: 2025-07-10 01:39:09.763 [INFO][10322] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="131d31244e534a733a530103ddea3666cd2eb72fb0933d89a095d6d044cd52d3" HandleID="k8s-pod-network.131d31244e534a733a530103ddea3666cd2eb72fb0933d89a095d6d044cd52d3" Workload="localhost-k8s-calico--apiserver--6d44674bc4--w2f48-eth0" Jul 10 01:39:09.766436 env[1363]: 2025-07-10 01:39:09.763 [INFO][10322] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 10 01:39:09.766436 env[1363]: 2025-07-10 01:39:09.765 [INFO][10315] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="131d31244e534a733a530103ddea3666cd2eb72fb0933d89a095d6d044cd52d3" Jul 10 01:39:09.775868 env[1363]: time="2025-07-10T01:39:09.766456049Z" level=info msg="TearDown network for sandbox \"131d31244e534a733a530103ddea3666cd2eb72fb0933d89a095d6d044cd52d3\" successfully" Jul 10 01:39:09.787402 env[1363]: time="2025-07-10T01:39:09.787374057Z" level=info msg="RemovePodSandbox \"131d31244e534a733a530103ddea3666cd2eb72fb0933d89a095d6d044cd52d3\" returns successfully" Jul 10 01:39:09.787852 env[1363]: time="2025-07-10T01:39:09.787837107Z" level=info msg="StopPodSandbox for \"5e9aedbb1d15e1d7bd8b79126017424346117b11833100260ee33d8092673319\"" Jul 10 01:39:09.833358 env[1363]: 2025-07-10 01:39:09.810 [WARNING][10336] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="5e9aedbb1d15e1d7bd8b79126017424346117b11833100260ee33d8092673319" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--4k5ld-eth0" Jul 10 01:39:09.833358 env[1363]: 2025-07-10 01:39:09.810 [INFO][10336] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="5e9aedbb1d15e1d7bd8b79126017424346117b11833100260ee33d8092673319" Jul 10 01:39:09.833358 env[1363]: 2025-07-10 01:39:09.810 [INFO][10336] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="5e9aedbb1d15e1d7bd8b79126017424346117b11833100260ee33d8092673319" iface="eth0" netns="" Jul 10 01:39:09.833358 env[1363]: 2025-07-10 01:39:09.810 [INFO][10336] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="5e9aedbb1d15e1d7bd8b79126017424346117b11833100260ee33d8092673319" Jul 10 01:39:09.833358 env[1363]: 2025-07-10 01:39:09.810 [INFO][10336] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5e9aedbb1d15e1d7bd8b79126017424346117b11833100260ee33d8092673319" Jul 10 01:39:09.833358 env[1363]: 2025-07-10 01:39:09.824 [INFO][10343] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="5e9aedbb1d15e1d7bd8b79126017424346117b11833100260ee33d8092673319" HandleID="k8s-pod-network.5e9aedbb1d15e1d7bd8b79126017424346117b11833100260ee33d8092673319" Workload="localhost-k8s-coredns--7c65d6cfc9--4k5ld-eth0" Jul 10 01:39:09.833358 env[1363]: 2025-07-10 01:39:09.824 [INFO][10343] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 10 01:39:09.833358 env[1363]: 2025-07-10 01:39:09.825 [INFO][10343] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 10 01:39:09.833358 env[1363]: 2025-07-10 01:39:09.829 [WARNING][10343] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="5e9aedbb1d15e1d7bd8b79126017424346117b11833100260ee33d8092673319" HandleID="k8s-pod-network.5e9aedbb1d15e1d7bd8b79126017424346117b11833100260ee33d8092673319" Workload="localhost-k8s-coredns--7c65d6cfc9--4k5ld-eth0" Jul 10 01:39:09.833358 env[1363]: 2025-07-10 01:39:09.829 [INFO][10343] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="5e9aedbb1d15e1d7bd8b79126017424346117b11833100260ee33d8092673319" HandleID="k8s-pod-network.5e9aedbb1d15e1d7bd8b79126017424346117b11833100260ee33d8092673319" Workload="localhost-k8s-coredns--7c65d6cfc9--4k5ld-eth0" Jul 10 01:39:09.833358 env[1363]: 2025-07-10 01:39:09.830 [INFO][10343] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 10 01:39:09.833358 env[1363]: 2025-07-10 01:39:09.832 [INFO][10336] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="5e9aedbb1d15e1d7bd8b79126017424346117b11833100260ee33d8092673319" Jul 10 01:39:09.833785 env[1363]: time="2025-07-10T01:39:09.833761979Z" level=info msg="TearDown network for sandbox \"5e9aedbb1d15e1d7bd8b79126017424346117b11833100260ee33d8092673319\" successfully" Jul 10 01:39:09.833851 env[1363]: time="2025-07-10T01:39:09.833837814Z" level=info msg="StopPodSandbox for \"5e9aedbb1d15e1d7bd8b79126017424346117b11833100260ee33d8092673319\" returns successfully" Jul 10 01:39:09.834218 env[1363]: time="2025-07-10T01:39:09.834205755Z" level=info msg="RemovePodSandbox for \"5e9aedbb1d15e1d7bd8b79126017424346117b11833100260ee33d8092673319\"" Jul 10 01:39:09.834304 env[1363]: time="2025-07-10T01:39:09.834279971Z" level=info msg="Forcibly stopping sandbox \"5e9aedbb1d15e1d7bd8b79126017424346117b11833100260ee33d8092673319\"" Jul 10 01:39:09.876465 env[1363]: 2025-07-10 01:39:09.855 [WARNING][10357] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="5e9aedbb1d15e1d7bd8b79126017424346117b11833100260ee33d8092673319" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--4k5ld-eth0" Jul 10 01:39:09.876465 env[1363]: 2025-07-10 01:39:09.855 [INFO][10357] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="5e9aedbb1d15e1d7bd8b79126017424346117b11833100260ee33d8092673319" Jul 10 01:39:09.876465 env[1363]: 2025-07-10 01:39:09.855 [INFO][10357] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="5e9aedbb1d15e1d7bd8b79126017424346117b11833100260ee33d8092673319" iface="eth0" netns="" Jul 10 01:39:09.876465 env[1363]: 2025-07-10 01:39:09.855 [INFO][10357] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="5e9aedbb1d15e1d7bd8b79126017424346117b11833100260ee33d8092673319" Jul 10 01:39:09.876465 env[1363]: 2025-07-10 01:39:09.855 [INFO][10357] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5e9aedbb1d15e1d7bd8b79126017424346117b11833100260ee33d8092673319" Jul 10 01:39:09.876465 env[1363]: 2025-07-10 01:39:09.869 [INFO][10364] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="5e9aedbb1d15e1d7bd8b79126017424346117b11833100260ee33d8092673319" HandleID="k8s-pod-network.5e9aedbb1d15e1d7bd8b79126017424346117b11833100260ee33d8092673319" Workload="localhost-k8s-coredns--7c65d6cfc9--4k5ld-eth0" Jul 10 01:39:09.876465 env[1363]: 2025-07-10 01:39:09.869 [INFO][10364] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Jul 10 01:39:09.876465 env[1363]: 2025-07-10 01:39:09.869 [INFO][10364] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 10 01:39:09.876465 env[1363]: 2025-07-10 01:39:09.873 [WARNING][10364] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="5e9aedbb1d15e1d7bd8b79126017424346117b11833100260ee33d8092673319" HandleID="k8s-pod-network.5e9aedbb1d15e1d7bd8b79126017424346117b11833100260ee33d8092673319" Workload="localhost-k8s-coredns--7c65d6cfc9--4k5ld-eth0" Jul 10 01:39:09.876465 env[1363]: 2025-07-10 01:39:09.873 [INFO][10364] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="5e9aedbb1d15e1d7bd8b79126017424346117b11833100260ee33d8092673319" HandleID="k8s-pod-network.5e9aedbb1d15e1d7bd8b79126017424346117b11833100260ee33d8092673319" Workload="localhost-k8s-coredns--7c65d6cfc9--4k5ld-eth0" Jul 10 01:39:09.876465 env[1363]: 2025-07-10 01:39:09.873 [INFO][10364] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 10 01:39:09.876465 env[1363]: 2025-07-10 01:39:09.875 [INFO][10357] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="5e9aedbb1d15e1d7bd8b79126017424346117b11833100260ee33d8092673319" Jul 10 01:39:09.877367 env[1363]: time="2025-07-10T01:39:09.876767662Z" level=info msg="TearDown network for sandbox \"5e9aedbb1d15e1d7bd8b79126017424346117b11833100260ee33d8092673319\" successfully" Jul 10 01:39:09.878395 env[1363]: time="2025-07-10T01:39:09.878381015Z" level=info msg="RemovePodSandbox \"5e9aedbb1d15e1d7bd8b79126017424346117b11833100260ee33d8092673319\" returns successfully" Jul 10 01:39:10.125000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@41-139.178.70.102:22-139.178.68.195:38584 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 01:39:10.125971 systemd[1]: Started sshd@41-139.178.70.102:22-139.178.68.195:38584.service. Jul 10 01:39:10.130301 kernel: kauditd_printk_skb: 537 callbacks suppressed Jul 10 01:39:10.132152 kernel: audit: type=1130 audit(1752111550.125:882): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@41-139.178.70.102:22-139.178.68.195:38584 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 10 01:39:10.198000 audit[10372]: USER_ACCT pid=10372 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:39:10.206004 kernel: audit: type=1101 audit(1752111550.198:883): pid=10372 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:39:10.206038 kernel: audit: type=1103 audit(1752111550.202:884): pid=10372 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:39:10.206053 kernel: audit: type=1006 audit(1752111550.202:885): pid=10372 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=43 res=1 Jul 10 01:39:10.207841 kernel: audit: type=1300 audit(1752111550.202:885): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffecc0de320 a2=3 a3=0 items=0 ppid=1 pid=10372 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=43 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:39:10.202000 audit[10372]: CRED_ACQ pid=10372 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:39:10.202000 audit[10372]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffecc0de320 a2=3 a3=0 items=0 ppid=1 pid=10372 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=43 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:39:10.207990 sshd[10372]: Accepted publickey for core from 139.178.68.195 port 38584 ssh2: RSA SHA256:NVpdRDPpwzjVTzi6orhe1cA9BvcYymCSReGH8myOy/Q Jul 10 01:39:10.202000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jul 10 01:39:10.212158 kernel: audit: type=1327 audit(1752111550.202:885): proctitle=737368643A20636F7265205B707269765D Jul 10 01:39:10.212322 sshd[10372]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 10 01:39:10.220391 systemd[1]: Started session-43.scope. Jul 10 01:39:10.221026 systemd-logind[1351]: New session 43 of user core. 
Jul 10 01:39:10.223000 audit[10372]: USER_START pid=10372 uid=0 auid=500 ses=43 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:39:10.228988 kernel: audit: type=1105 audit(1752111550.223:886): pid=10372 uid=0 auid=500 ses=43 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:39:10.229031 kernel: audit: type=1103 audit(1752111550.228:887): pid=10375 uid=0 auid=500 ses=43 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:39:10.228000 audit[10375]: CRED_ACQ pid=10375 uid=0 auid=500 ses=43 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:39:10.756920 sshd[10372]: pam_unix(sshd:session): session closed for user core Jul 10 01:39:10.756000 audit[10372]: USER_END pid=10372 uid=0 auid=500 ses=43 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:39:10.761000 audit[10372]: CRED_DISP pid=10372 uid=0 auid=500 ses=43 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:39:10.764408 kernel: audit: type=1106 audit(1752111550.756:888): pid=10372 uid=0 auid=500 ses=43 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:39:10.764444 kernel: audit: type=1104 audit(1752111550.761:889): pid=10372 uid=0 auid=500 ses=43 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:39:10.764000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@41-139.178.70.102:22-139.178.68.195:38584 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 01:39:10.765380 systemd[1]: sshd@41-139.178.70.102:22-139.178.68.195:38584.service: Deactivated successfully. Jul 10 01:39:10.766227 systemd[1]: session-43.scope: Deactivated successfully. Jul 10 01:39:10.766577 systemd-logind[1351]: Session 43 logged out. Waiting for processes to exit. Jul 10 01:39:10.767258 systemd-logind[1351]: Removed session 43. Jul 10 01:39:15.759448 systemd[1]: Started sshd@42-139.178.70.102:22-139.178.68.195:38600.service. 
Jul 10 01:39:15.763000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@42-139.178.70.102:22-139.178.68.195:38600 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 01:39:15.765017 kernel: kauditd_printk_skb: 1 callbacks suppressed Jul 10 01:39:15.765076 kernel: audit: type=1130 audit(1752111555.763:891): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@42-139.178.70.102:22-139.178.68.195:38600 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 01:39:15.834000 audit[10395]: USER_ACCT pid=10395 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:39:15.839568 sshd[10395]: Accepted publickey for core from 139.178.68.195 port 38600 ssh2: RSA SHA256:NVpdRDPpwzjVTzi6orhe1cA9BvcYymCSReGH8myOy/Q Jul 10 01:39:15.839832 kernel: audit: type=1101 audit(1752111555.834:892): pid=10395 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:39:15.844068 kernel: audit: type=1103 audit(1752111555.839:893): pid=10395 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:39:15.839000 audit[10395]: CRED_ACQ pid=10395 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:39:15.846661 kernel: audit: type=1006 audit(1752111555.839:894): pid=10395 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=44 res=1 Jul 10 01:39:15.851057 kernel: audit: type=1300 audit(1752111555.839:894): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff3a01bdc0 a2=3 a3=0 items=0 ppid=1 pid=10395 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=44 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:39:15.851092 kernel: audit: type=1327 audit(1752111555.839:894): proctitle=737368643A20636F7265205B707269765D Jul 10 01:39:15.839000 audit[10395]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff3a01bdc0 a2=3 a3=0 items=0 ppid=1 pid=10395 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=44 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:39:15.839000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jul 10 01:39:15.852710 sshd[10395]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 10 01:39:15.859466 systemd[1]: Started session-44.scope. Jul 10 01:39:15.859769 systemd-logind[1351]: New session 44 of user core. 
Jul 10 01:39:15.863000 audit[10395]: USER_START pid=10395 uid=0 auid=500 ses=44 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:39:15.867000 audit[10398]: CRED_ACQ pid=10398 uid=0 auid=500 ses=44 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:39:15.871557 kernel: audit: type=1105 audit(1752111555.863:895): pid=10395 uid=0 auid=500 ses=44 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:39:15.871592 kernel: audit: type=1103 audit(1752111555.867:896): pid=10398 uid=0 auid=500 ses=44 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:39:16.228798 sshd[10395]: pam_unix(sshd:session): session closed for user core Jul 10 01:39:16.230614 systemd[1]: Started sshd@43-139.178.70.102:22-139.178.68.195:38616.service. Jul 10 01:39:16.231000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@43-139.178.70.102:22-139.178.68.195:38616 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 01:39:16.233000 audit[10395]: USER_END pid=10395 uid=0 auid=500 ses=44 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:39:16.234624 systemd-logind[1351]: Session 44 logged out. Waiting for processes to exit. Jul 10 01:39:16.238929 kernel: audit: type=1130 audit(1752111556.231:897): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@43-139.178.70.102:22-139.178.68.195:38616 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 01:39:16.238957 kernel: audit: type=1106 audit(1752111556.233:898): pid=10395 uid=0 auid=500 ses=44 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:39:16.235405 systemd[1]: sshd@42-139.178.70.102:22-139.178.68.195:38600.service: Deactivated successfully. Jul 10 01:39:16.235858 systemd[1]: session-44.scope: Deactivated successfully. 
Jul 10 01:39:16.233000 audit[10395]: CRED_DISP pid=10395 uid=0 auid=500 ses=44 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:39:16.234000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@42-139.178.70.102:22-139.178.68.195:38600 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 01:39:16.236775 systemd-logind[1351]: Removed session 44. Jul 10 01:39:16.277000 audit[10405]: USER_ACCT pid=10405 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:39:16.278667 sshd[10405]: Accepted publickey for core from 139.178.68.195 port 38616 ssh2: RSA SHA256:NVpdRDPpwzjVTzi6orhe1cA9BvcYymCSReGH8myOy/Q Jul 10 01:39:16.279000 audit[10405]: CRED_ACQ pid=10405 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:39:16.279000 audit[10405]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7fff3a7d68b0 a2=3 a3=0 items=0 ppid=1 pid=10405 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=45 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:39:16.279000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jul 10 01:39:16.280263 sshd[10405]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 10 01:39:16.284049 systemd[1]: Started session-45.scope. Jul 10 01:39:16.284233 systemd-logind[1351]: New session 45 of user core. Jul 10 01:39:16.288000 audit[10405]: USER_START pid=10405 uid=0 auid=500 ses=45 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:39:16.289000 audit[10410]: CRED_ACQ pid=10410 uid=0 auid=500 ses=45 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:39:16.898988 sshd[10405]: pam_unix(sshd:session): session closed for user core Jul 10 01:39:16.900000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@44-139.178.70.102:22-139.178.68.195:38624 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 01:39:16.901205 systemd[1]: Started sshd@44-139.178.70.102:22-139.178.68.195:38624.service. 
Jul 10 01:39:16.902000 audit[10405]: USER_END pid=10405 uid=0 auid=500 ses=45 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:39:16.902000 audit[10405]: CRED_DISP pid=10405 uid=0 auid=500 ses=45 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:39:16.906034 systemd[1]: sshd@43-139.178.70.102:22-139.178.68.195:38616.service: Deactivated successfully. Jul 10 01:39:16.905000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@43-139.178.70.102:22-139.178.68.195:38616 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 01:39:16.906680 systemd[1]: session-45.scope: Deactivated successfully. Jul 10 01:39:16.907283 systemd-logind[1351]: Session 45 logged out. Waiting for processes to exit. Jul 10 01:39:16.908854 systemd-logind[1351]: Removed session 45. Jul 10 01:39:16.959000 audit[10416]: USER_ACCT pid=10416 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:39:16.960086 sshd[10416]: Accepted publickey for core from 139.178.68.195 port 38624 ssh2: RSA SHA256:NVpdRDPpwzjVTzi6orhe1cA9BvcYymCSReGH8myOy/Q Jul 10 01:39:16.960000 audit[10416]: CRED_ACQ pid=10416 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:39:16.960000 audit[10416]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffedb6ef150 a2=3 a3=0 items=0 ppid=1 pid=10416 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=46 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:39:16.960000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jul 10 01:39:16.961152 sshd[10416]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 10 01:39:16.964132 systemd[1]: Started session-46.scope. Jul 10 01:39:16.964436 systemd-logind[1351]: New session 46 of user core. Jul 10 01:39:16.967000 audit[10416]: USER_START pid=10416 uid=0 auid=500 ses=46 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:39:16.968000 audit[10421]: CRED_ACQ pid=10421 uid=0 auid=500 ses=46 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:39:18.688458 systemd[1]: Started sshd@45-139.178.70.102:22-139.178.68.195:60212.service. 
Jul 10 01:39:18.687000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@45-139.178.70.102:22-139.178.68.195:60212 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 01:39:18.706814 sshd[10416]: pam_unix(sshd:session): session closed for user core Jul 10 01:39:18.722000 audit[10416]: USER_END pid=10416 uid=0 auid=500 ses=46 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:39:18.723000 audit[10416]: CRED_DISP pid=10416 uid=0 auid=500 ses=46 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:39:18.727000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@44-139.178.70.102:22-139.178.68.195:38624 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 01:39:18.727937 systemd[1]: sshd@44-139.178.70.102:22-139.178.68.195:38624.service: Deactivated successfully. Jul 10 01:39:18.728539 systemd[1]: session-46.scope: Deactivated successfully. Jul 10 01:39:18.729118 systemd-logind[1351]: Session 46 logged out. Waiting for processes to exit. Jul 10 01:39:18.730975 systemd-logind[1351]: Removed session 46. Jul 10 01:39:18.742000 audit[10436]: NETFILTER_CFG table=filter:175 family=2 entries=20 op=nft_register_rule pid=10436 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 10 01:39:18.742000 audit[10436]: SYSCALL arch=c000003e syscall=46 success=yes exit=11944 a0=3 a1=7ffe81b78140 a2=0 a3=7ffe81b7812c items=0 ppid=2398 pid=10436 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:39:18.742000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 10 01:39:18.747000 audit[10436]: NETFILTER_CFG table=nat:176 family=2 entries=26 op=nft_register_rule pid=10436 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 10 01:39:18.747000 audit[10436]: SYSCALL arch=c000003e syscall=46 success=yes exit=8076 a0=3 a1=7ffe81b78140 a2=0 a3=0 items=0 ppid=2398 pid=10436 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:39:18.747000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 10 01:39:18.759000 audit[10438]: NETFILTER_CFG table=filter:177 family=2 entries=32 op=nft_register_rule pid=10438 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 10 01:39:18.759000 audit[10438]: SYSCALL arch=c000003e syscall=46 success=yes exit=11944 a0=3 a1=7fff8f4f40e0 a2=0 a3=7fff8f4f40cc items=0 ppid=2398 pid=10438 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:39:18.759000 
audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 10 01:39:18.762000 audit[10438]: NETFILTER_CFG table=nat:178 family=2 entries=26 op=nft_register_rule pid=10438 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 10 01:39:18.762000 audit[10438]: SYSCALL arch=c000003e syscall=46 success=yes exit=8076 a0=3 a1=7fff8f4f40e0 a2=0 a3=0 items=0 ppid=2398 pid=10438 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:39:18.762000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 10 01:39:18.771000 audit[10432]: USER_ACCT pid=10432 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:39:18.773007 sshd[10432]: Accepted publickey for core from 139.178.68.195 port 60212 ssh2: RSA SHA256:NVpdRDPpwzjVTzi6orhe1cA9BvcYymCSReGH8myOy/Q Jul 10 01:39:18.772000 audit[10432]: CRED_ACQ pid=10432 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:39:18.772000 audit[10432]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd26be7c70 a2=3 a3=0 items=0 ppid=1 pid=10432 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=47 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:39:18.772000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jul 10 01:39:18.775820 sshd[10432]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 10 01:39:18.780771 systemd[1]: Started session-47.scope. Jul 10 01:39:18.781591 systemd-logind[1351]: New session 47 of user core. Jul 10 01:39:18.790000 audit[10432]: USER_START pid=10432 uid=0 auid=500 ses=47 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:39:18.791000 audit[10440]: CRED_ACQ pid=10440 uid=0 auid=500 ses=47 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:39:19.554535 systemd[1]: Started sshd@46-139.178.70.102:22-139.178.68.195:60228.service. Jul 10 01:39:19.554000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@46-139.178.70.102:22-139.178.68.195:60228 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 10 01:39:19.556407 sshd[10432]: pam_unix(sshd:session): session closed for user core Jul 10 01:39:19.557000 audit[10432]: USER_END pid=10432 uid=0 auid=500 ses=47 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:39:19.558000 audit[10432]: CRED_DISP pid=10432 uid=0 auid=500 ses=47 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:39:19.559000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@45-139.178.70.102:22-139.178.68.195:60212 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 01:39:19.559881 systemd[1]: sshd@45-139.178.70.102:22-139.178.68.195:60212.service: Deactivated successfully. Jul 10 01:39:19.560420 systemd[1]: session-47.scope: Deactivated successfully. Jul 10 01:39:19.561125 systemd-logind[1351]: Session 47 logged out. Waiting for processes to exit. Jul 10 01:39:19.561790 systemd-logind[1351]: Removed session 47. Jul 10 01:39:19.619000 audit[10453]: USER_ACCT pid=10453 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:39:19.620593 sshd[10453]: Accepted publickey for core from 139.178.68.195 port 60228 ssh2: RSA SHA256:NVpdRDPpwzjVTzi6orhe1cA9BvcYymCSReGH8myOy/Q Jul 10 01:39:19.620000 audit[10453]: CRED_ACQ pid=10453 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:39:19.621000 audit[10453]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffea0f8ba00 a2=3 a3=0 items=0 ppid=1 pid=10453 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=48 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:39:19.621000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jul 10 01:39:19.622245 sshd[10453]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 10 01:39:19.626172 systemd[1]: Started session-48.scope. Jul 10 01:39:19.626414 systemd-logind[1351]: New session 48 of user core. 
Jul 10 01:39:19.629000 audit[10453]: USER_START pid=10453 uid=0 auid=500 ses=48 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:39:19.630000 audit[10458]: CRED_ACQ pid=10458 uid=0 auid=500 ses=48 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:39:19.741790 sshd[10453]: pam_unix(sshd:session): session closed for user core Jul 10 01:39:19.742000 audit[10453]: USER_END pid=10453 uid=0 auid=500 ses=48 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:39:19.742000 audit[10453]: CRED_DISP pid=10453 uid=0 auid=500 ses=48 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:39:19.743980 systemd[1]: sshd@46-139.178.70.102:22-139.178.68.195:60228.service: Deactivated successfully. Jul 10 01:39:19.743000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@46-139.178.70.102:22-139.178.68.195:60228 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 01:39:19.744851 systemd[1]: session-48.scope: Deactivated successfully. Jul 10 01:39:19.745162 systemd-logind[1351]: Session 48 logged out. Waiting for processes to exit. Jul 10 01:39:19.745698 systemd-logind[1351]: Removed session 48. Jul 10 01:39:20.123384 systemd[1]: Started sshd@47-139.178.70.102:22-139.59.71.224:51696.service. Jul 10 01:39:20.122000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@47-139.178.70.102:22-139.59.71.224:51696 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 10 01:39:21.154054 sshd[10468]: Invalid user from 139.59.71.224 port 51696 Jul 10 01:39:23.369665 kernel: kauditd_printk_skb: 58 callbacks suppressed Jul 10 01:39:23.383624 kernel: audit: type=1325 audit(1752111563.366:941): table=filter:179 family=2 entries=20 op=nft_register_rule pid=10471 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 10 01:39:23.383700 kernel: audit: type=1300 audit(1752111563.366:941): arch=c000003e syscall=46 success=yes exit=3016 a0=3 a1=7ffc73ccf0c0 a2=0 a3=7ffc73ccf0ac items=0 ppid=2398 pid=10471 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:39:23.384467 kernel: audit: type=1327 audit(1752111563.366:941): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 10 01:39:23.384496 kernel: audit: type=1325 audit(1752111563.373:942): table=nat:180 family=2 entries=110 op=nft_register_chain pid=10471 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 10 01:39:23.384546 kernel: audit: type=1300 audit(1752111563.373:942): arch=c000003e syscall=46 success=yes exit=50988 a0=3 a1=7ffc73ccf0c0 a2=0 a3=7ffc73ccf0ac items=0 ppid=2398 pid=10471 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:39:23.384564 kernel: audit: type=1327 audit(1752111563.373:942): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 10 01:39:23.366000 audit[10471]: NETFILTER_CFG table=filter:179 family=2 entries=20 op=nft_register_rule pid=10471 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 10 01:39:23.366000 audit[10471]: SYSCALL arch=c000003e syscall=46 success=yes exit=3016 a0=3 a1=7ffc73ccf0c0 a2=0 a3=7ffc73ccf0ac items=0 ppid=2398 pid=10471 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:39:23.366000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 10 01:39:23.373000 audit[10471]: NETFILTER_CFG table=nat:180 family=2 entries=110 op=nft_register_chain pid=10471 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jul 10 01:39:23.373000 audit[10471]: SYSCALL arch=c000003e syscall=46 success=yes exit=50988 a0=3 a1=7ffc73ccf0c0 a2=0 a3=7ffc73ccf0ac items=0 ppid=2398 pid=10471 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:39:23.373000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jul 10 01:39:24.744687 systemd[1]: Started sshd@48-139.178.70.102:22-139.178.68.195:60236.service. Jul 10 01:39:24.744000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@48-139.178.70.102:22-139.178.68.195:60236 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 10 01:39:24.749679 kernel: audit: type=1130 audit(1752111564.744:943): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@48-139.178.70.102:22-139.178.68.195:60236 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 01:39:24.813000 audit[10473]: USER_ACCT pid=10473 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:39:24.817507 sshd[10473]: Accepted publickey for core from 139.178.68.195 port 60236 ssh2: RSA SHA256:NVpdRDPpwzjVTzi6orhe1cA9BvcYymCSReGH8myOy/Q Jul 10 01:39:24.818688 kernel: audit: type=1101 audit(1752111564.813:944): pid=10473 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:39:24.818000 audit[10473]: CRED_ACQ pid=10473 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:39:24.825987 kernel: audit: type=1103 audit(1752111564.818:945): pid=10473 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:39:24.827746 kernel: audit: type=1006 audit(1752111564.818:946): pid=10473 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=49 res=1 Jul 10 01:39:24.818000 audit[10473]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffe42d9a170 a2=3 a3=0 items=0 ppid=1 pid=10473 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=49 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:39:24.818000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jul 10 01:39:24.826332 sshd[10473]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 10 01:39:24.830636 systemd-logind[1351]: New session 49 of user core. Jul 10 01:39:24.831022 systemd[1]: Started session-49.scope. 
Jul 10 01:39:24.834000 audit[10473]: USER_START pid=10473 uid=0 auid=500 ses=49 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:39:24.835000 audit[10476]: CRED_ACQ pid=10476 uid=0 auid=500 ses=49 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:39:24.945005 sshd[10473]: pam_unix(sshd:session): session closed for user core Jul 10 01:39:24.945000 audit[10473]: USER_END pid=10473 uid=0 auid=500 ses=49 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:39:24.945000 audit[10473]: CRED_DISP pid=10473 uid=0 auid=500 ses=49 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:39:24.946821 systemd-logind[1351]: Session 49 logged out. Waiting for processes to exit. Jul 10 01:39:24.946916 systemd[1]: sshd@48-139.178.70.102:22-139.178.68.195:60236.service: Deactivated successfully. Jul 10 01:39:24.947422 systemd[1]: session-49.scope: Deactivated successfully. Jul 10 01:39:24.947767 systemd-logind[1351]: Removed session 49. Jul 10 01:39:24.946000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@48-139.178.70.102:22-139.178.68.195:60236 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 01:39:28.129992 sshd[10468]: Connection closed by invalid user 139.59.71.224 port 51696 [preauth] Jul 10 01:39:28.131000 audit[10468]: USER_ERR pid=10468 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:bad_ident grantors=? acct="?" exe="/usr/sbin/sshd" hostname=139.59.71.224 addr=139.59.71.224 terminal=ssh res=failed' Jul 10 01:39:28.139000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@47-139.178.70.102:22-139.59.71.224:51696 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 01:39:28.139489 systemd[1]: sshd@47-139.178.70.102:22-139.59.71.224:51696.service: Deactivated successfully. Jul 10 01:39:29.947708 systemd[1]: Started sshd@49-139.178.70.102:22-139.178.68.195:58996.service. Jul 10 01:39:29.947000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@49-139.178.70.102:22-139.178.68.195:58996 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 01:39:29.951345 kernel: kauditd_printk_skb: 9 callbacks suppressed Jul 10 01:39:29.951382 kernel: audit: type=1130 audit(1752111569.947:954): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@49-139.178.70.102:22-139.178.68.195:58996 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 10 01:39:30.025000 audit[10519]: USER_ACCT pid=10519 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:39:30.027680 sshd[10519]: Accepted publickey for core from 139.178.68.195 port 58996 ssh2: RSA SHA256:NVpdRDPpwzjVTzi6orhe1cA9BvcYymCSReGH8myOy/Q Jul 10 01:39:30.030665 kernel: audit: type=1101 audit(1752111570.025:955): pid=10519 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:39:30.030000 audit[10519]: CRED_ACQ pid=10519 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:39:30.035675 kernel: audit: type=1103 audit(1752111570.030:956): pid=10519 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:39:30.035739 kernel: audit: type=1006 audit(1752111570.030:957): pid=10519 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=50 res=1 Jul 10 01:39:30.030000 audit[10519]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc0d0e5180 a2=3 a3=0 items=0 ppid=1 pid=10519 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=50 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:39:30.038548 sshd[10519]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 10 01:39:30.042652 kernel: audit: type=1300 audit(1752111570.030:957): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc0d0e5180 a2=3 a3=0 items=0 ppid=1 pid=10519 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=50 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:39:30.042944 kernel: audit: type=1327 audit(1752111570.030:957): proctitle=737368643A20636F7265205B707269765D Jul 10 01:39:30.030000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jul 10 01:39:30.048798 systemd[1]: Started session-50.scope. Jul 10 01:39:30.049731 systemd-logind[1351]: New session 50 of user core. 
Jul 10 01:39:30.053000 audit[10519]: USER_START pid=10519 uid=0 auid=500 ses=50 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:39:30.054000 audit[10522]: CRED_ACQ pid=10522 uid=0 auid=500 ses=50 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:39:30.060722 kernel: audit: type=1105 audit(1752111570.053:958): pid=10519 uid=0 auid=500 ses=50 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:39:30.060771 kernel: audit: type=1103 audit(1752111570.054:959): pid=10522 uid=0 auid=500 ses=50 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:39:30.189277 sshd[10519]: pam_unix(sshd:session): session closed for user core Jul 10 01:39:30.193000 audit[10519]: USER_END pid=10519 uid=0 auid=500 ses=50 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:39:30.197689 kernel: audit: type=1106 audit(1752111570.193:960): pid=10519 uid=0 auid=500 ses=50 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:39:30.197000 audit[10519]: CRED_DISP pid=10519 uid=0 auid=500 ses=50 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:39:30.202347 kernel: audit: type=1104 audit(1752111570.197:961): pid=10519 uid=0 auid=500 ses=50 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:39:30.201000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@49-139.178.70.102:22-139.178.68.195:58996 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 01:39:30.201829 systemd[1]: sshd@49-139.178.70.102:22-139.178.68.195:58996.service: Deactivated successfully. Jul 10 01:39:30.202770 systemd[1]: session-50.scope: Deactivated successfully. Jul 10 01:39:30.203156 systemd-logind[1351]: Session 50 logged out. Waiting for processes to exit. Jul 10 01:39:30.204015 systemd-logind[1351]: Removed session 50. Jul 10 01:39:35.189611 systemd[1]: Started sshd@50-139.178.70.102:22-139.178.68.195:59008.service. 
Jul 10 01:39:35.195303 kernel: kauditd_printk_skb: 1 callbacks suppressed Jul 10 01:39:35.198735 kernel: audit: type=1130 audit(1752111575.189:963): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@50-139.178.70.102:22-139.178.68.195:59008 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 01:39:35.189000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@50-139.178.70.102:22-139.178.68.195:59008 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 01:39:35.292047 sshd[10533]: Accepted publickey for core from 139.178.68.195 port 59008 ssh2: RSA SHA256:NVpdRDPpwzjVTzi6orhe1cA9BvcYymCSReGH8myOy/Q Jul 10 01:39:35.291000 audit[10533]: USER_ACCT pid=10533 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:39:35.299947 kernel: audit: type=1101 audit(1752111575.291:964): pid=10533 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:39:35.299984 kernel: audit: type=1103 audit(1752111575.296:965): pid=10533 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:39:35.296000 audit[10533]: CRED_ACQ pid=10533 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:39:35.296673 sshd[10533]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 10 01:39:35.306289 kernel: audit: type=1006 audit(1752111575.296:966): pid=10533 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=51 res=1 Jul 10 01:39:35.306326 kernel: audit: type=1300 audit(1752111575.296:966): arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd2964a150 a2=3 a3=0 items=0 ppid=1 pid=10533 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=51 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:39:35.296000 audit[10533]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd2964a150 a2=3 a3=0 items=0 ppid=1 pid=10533 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=51 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 10 01:39:35.308158 systemd-logind[1351]: New session 51 of user core. Jul 10 01:39:35.308506 systemd[1]: Started session-51.scope. 
Jul 10 01:39:35.296000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jul 10 01:39:35.325207 kernel: audit: type=1327 audit(1752111575.296:966): proctitle=737368643A20636F7265205B707269765D Jul 10 01:39:35.325763 kernel: audit: type=1105 audit(1752111575.311:967): pid=10533 uid=0 auid=500 ses=51 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:39:35.311000 audit[10533]: USER_START pid=10533 uid=0 auid=500 ses=51 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:39:35.312000 audit[10536]: CRED_ACQ pid=10536 uid=0 auid=500 ses=51 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:39:35.336041 kernel: audit: type=1103 audit(1752111575.312:968): pid=10536 uid=0 auid=500 ses=51 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:39:35.516873 sshd[10533]: pam_unix(sshd:session): session closed for user core Jul 10 01:39:35.524732 kernel: audit: type=1106 audit(1752111575.520:969): pid=10533 uid=0 auid=500 ses=51 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:39:35.520000 audit[10533]: USER_END pid=10533 uid=0 auid=500 ses=51 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:39:35.525000 audit[10533]: CRED_DISP pid=10533 uid=0 auid=500 ses=51 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:39:35.529696 kernel: audit: type=1104 audit(1752111575.525:970): pid=10533 uid=0 auid=500 ses=51 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jul 10 01:39:35.530529 systemd[1]: sshd@50-139.178.70.102:22-139.178.68.195:59008.service: Deactivated successfully. Jul 10 01:39:35.531319 systemd[1]: session-51.scope: Deactivated successfully. Jul 10 01:39:35.531339 systemd-logind[1351]: Session 51 logged out. Waiting for processes to exit. Jul 10 01:39:35.530000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@50-139.178.70.102:22-139.178.68.195:59008 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 10 01:39:35.531960 systemd-logind[1351]: Removed session 51.