Mar 17 17:42:16.927763 kernel: Linux version 6.6.83-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.43 p3) 2.43.1) #1 SMP PREEMPT_DYNAMIC Mon Mar 17 16:09:25 -00 2025
Mar 17 17:42:16.927788 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=2a4a0f64c0160ed10b339be09fdc9d7e265b13f78aefc87616e79bf13c00bb1c
Mar 17 17:42:16.927800 kernel: BIOS-provided physical RAM map:
Mar 17 17:42:16.927807 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Mar 17 17:42:16.927813 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable
Mar 17 17:42:16.927819 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Mar 17 17:42:16.927827 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable
Mar 17 17:42:16.927847 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Mar 17 17:42:16.927853 kernel: BIOS-e820: [mem 0x000000000080c000-0x0000000000810fff] usable
Mar 17 17:42:16.927860 kernel: BIOS-e820: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS
Mar 17 17:42:16.927867 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009bd3efff] usable
Mar 17 17:42:16.927880 kernel: BIOS-e820: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved
Mar 17 17:42:16.927887 kernel: BIOS-e820: [mem 0x000000009be00000-0x000000009c8ecfff] usable
Mar 17 17:42:16.927894 kernel: BIOS-e820: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved
Mar 17 17:42:16.927905 kernel: BIOS-e820: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data
Mar 17 17:42:16.927913 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Mar 17 17:42:16.927923 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009ce91fff] usable
Mar 17 17:42:16.927930 kernel: BIOS-e820: [mem 0x000000009ce92000-0x000000009ce95fff] reserved
Mar 17 17:42:16.927937 kernel: BIOS-e820: [mem 0x000000009ce96000-0x000000009ce97fff] ACPI NVS
Mar 17 17:42:16.927945 kernel: BIOS-e820: [mem 0x000000009ce98000-0x000000009cedbfff] usable
Mar 17 17:42:16.927952 kernel: BIOS-e820: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved
Mar 17 17:42:16.927959 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Mar 17 17:42:16.927966 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Mar 17 17:42:16.927973 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Mar 17 17:42:16.927980 kernel: BIOS-e820: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved
Mar 17 17:42:16.927987 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Mar 17 17:42:16.927994 kernel: NX (Execute Disable) protection: active
Mar 17 17:42:16.928004 kernel: APIC: Static calls initialized
Mar 17 17:42:16.928011 kernel: e820: update [mem 0x9b351018-0x9b35ac57] usable ==> usable
Mar 17 17:42:16.928018 kernel: e820: update [mem 0x9b351018-0x9b35ac57] usable ==> usable
Mar 17 17:42:16.928025 kernel: e820: update [mem 0x9b314018-0x9b350e57] usable ==> usable
Mar 17 17:42:16.928032 kernel: e820: update [mem 0x9b314018-0x9b350e57] usable ==> usable
Mar 17 17:42:16.928039 kernel: extended physical RAM map:
Mar 17 17:42:16.928046 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000009ffff] usable
Mar 17 17:42:16.928054 kernel: reserve setup_data: [mem 0x0000000000100000-0x00000000007fffff] usable
Mar 17 17:42:16.928061 kernel: reserve setup_data: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Mar 17 17:42:16.928068 kernel: reserve setup_data: [mem 0x0000000000808000-0x000000000080afff] usable
Mar 17 17:42:16.928075 kernel: reserve setup_data: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Mar 17 17:42:16.928082 kernel: reserve setup_data: [mem 0x000000000080c000-0x0000000000810fff] usable
Mar 17 17:42:16.928092 kernel: reserve setup_data: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS
Mar 17 17:42:16.928103 kernel: reserve setup_data: [mem 0x0000000000900000-0x000000009b314017] usable
Mar 17 17:42:16.928110 kernel: reserve setup_data: [mem 0x000000009b314018-0x000000009b350e57] usable
Mar 17 17:42:16.928120 kernel: reserve setup_data: [mem 0x000000009b350e58-0x000000009b351017] usable
Mar 17 17:42:16.928127 kernel: reserve setup_data: [mem 0x000000009b351018-0x000000009b35ac57] usable
Mar 17 17:42:16.928135 kernel: reserve setup_data: [mem 0x000000009b35ac58-0x000000009bd3efff] usable
Mar 17 17:42:16.928145 kernel: reserve setup_data: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved
Mar 17 17:42:16.928152 kernel: reserve setup_data: [mem 0x000000009be00000-0x000000009c8ecfff] usable
Mar 17 17:42:16.928160 kernel: reserve setup_data: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved
Mar 17 17:42:16.928167 kernel: reserve setup_data: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data
Mar 17 17:42:16.928174 kernel: reserve setup_data: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Mar 17 17:42:16.928182 kernel: reserve setup_data: [mem 0x000000009cbff000-0x000000009ce91fff] usable
Mar 17 17:42:16.928189 kernel: reserve setup_data: [mem 0x000000009ce92000-0x000000009ce95fff] reserved
Mar 17 17:42:16.928197 kernel: reserve setup_data: [mem 0x000000009ce96000-0x000000009ce97fff] ACPI NVS
Mar 17 17:42:16.928204 kernel: reserve setup_data: [mem 0x000000009ce98000-0x000000009cedbfff] usable
Mar 17 17:42:16.928214 kernel: reserve setup_data: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved
Mar 17 17:42:16.928221 kernel: reserve setup_data: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Mar 17 17:42:16.928229 kernel: reserve setup_data: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Mar 17 17:42:16.928236 kernel: reserve setup_data: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Mar 17 17:42:16.928246 kernel: reserve setup_data: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved
Mar 17 17:42:16.928271 kernel: reserve setup_data: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Mar 17 17:42:16.928279 kernel: efi: EFI v2.7 by EDK II
Mar 17 17:42:16.928287 kernel: efi: SMBIOS=0x9c988000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9ba0d198 RNG=0x9cb73018
Mar 17 17:42:16.928294 kernel: random: crng init done
Mar 17 17:42:16.928302 kernel: efi: Remove mem142: MMIO range=[0xffc00000-0xffffffff] (4MB) from e820 map
Mar 17 17:42:16.928309 kernel: e820: remove [mem 0xffc00000-0xffffffff] reserved
Mar 17 17:42:16.928423 kernel: secureboot: Secure boot disabled
Mar 17 17:42:16.928435 kernel: SMBIOS 2.8 present.
Mar 17 17:42:16.928443 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS unknown 02/02/2022
Mar 17 17:42:16.928450 kernel: Hypervisor detected: KVM
Mar 17 17:42:16.928457 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Mar 17 17:42:16.928465 kernel: kvm-clock: using sched offset of 3860179805 cycles
Mar 17 17:42:16.928473 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Mar 17 17:42:16.928481 kernel: tsc: Detected 2794.748 MHz processor
Mar 17 17:42:16.928488 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Mar 17 17:42:16.928496 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Mar 17 17:42:16.928504 kernel: last_pfn = 0x9cedc max_arch_pfn = 0x400000000
Mar 17 17:42:16.928514 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Mar 17 17:42:16.928522 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Mar 17 17:42:16.928529 kernel: Using GB pages for direct mapping
Mar 17 17:42:16.928537 kernel: ACPI: Early table checksum verification disabled
Mar 17 17:42:16.928544 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS )
Mar 17 17:42:16.928552 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013)
Mar 17 17:42:16.928560 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Mar 17 17:42:16.928567 kernel: ACPI: DSDT 0x000000009CB7A000 0021A8 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 17 17:42:16.928575 kernel: ACPI: FACS 0x000000009CBDD000 000040
Mar 17 17:42:16.928586 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 17 17:42:16.928594 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 17 17:42:16.928601 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 17 17:42:16.928609 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 17 17:42:16.928616 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013)
Mar 17 17:42:16.928624 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3]
Mar 17 17:42:16.928631 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1a7]
Mar 17 17:42:16.928639 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f]
Mar 17 17:42:16.928649 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f]
Mar 17 17:42:16.928656 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037]
Mar 17 17:42:16.928664 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b]
Mar 17 17:42:16.928671 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027]
Mar 17 17:42:16.928679 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037]
Mar 17 17:42:16.928686 kernel: No NUMA configuration found
Mar 17 17:42:16.928694 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cedbfff]
Mar 17 17:42:16.928701 kernel: NODE_DATA(0) allocated [mem 0x9ce3a000-0x9ce3ffff]
Mar 17 17:42:16.928709 kernel: Zone ranges:
Mar 17 17:42:16.928717 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Mar 17 17:42:16.928727 kernel: DMA32 [mem 0x0000000001000000-0x000000009cedbfff]
Mar 17 17:42:16.928737 kernel: Normal empty
Mar 17 17:42:16.928744 kernel: Movable zone start for each node
Mar 17 17:42:16.928752 kernel: Early memory node ranges
Mar 17 17:42:16.928759 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Mar 17 17:42:16.928767 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff]
Mar 17 17:42:16.928774 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff]
Mar 17 17:42:16.928782 kernel: node 0: [mem 0x000000000080c000-0x0000000000810fff]
Mar 17 17:42:16.928789 kernel: node 0: [mem 0x0000000000900000-0x000000009bd3efff]
Mar 17 17:42:16.928800 kernel: node 0: [mem 0x000000009be00000-0x000000009c8ecfff]
Mar 17 17:42:16.928807 kernel: node 0: [mem 0x000000009cbff000-0x000000009ce91fff]
Mar 17 17:42:16.928815 kernel: node 0: [mem 0x000000009ce98000-0x000000009cedbfff]
Mar 17 17:42:16.928822 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cedbfff]
Mar 17 17:42:16.928830 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Mar 17 17:42:16.928838 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Mar 17 17:42:16.928853 kernel: On node 0, zone DMA: 8 pages in unavailable ranges
Mar 17 17:42:16.928863 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Mar 17 17:42:16.928871 kernel: On node 0, zone DMA: 239 pages in unavailable ranges
Mar 17 17:42:16.928879 kernel: On node 0, zone DMA32: 193 pages in unavailable ranges
Mar 17 17:42:16.928887 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges
Mar 17 17:42:16.928897 kernel: On node 0, zone DMA32: 6 pages in unavailable ranges
Mar 17 17:42:16.928907 kernel: On node 0, zone DMA32: 12580 pages in unavailable ranges
Mar 17 17:42:16.928915 kernel: ACPI: PM-Timer IO Port: 0x608
Mar 17 17:42:16.928922 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Mar 17 17:42:16.928930 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Mar 17 17:42:16.928938 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Mar 17 17:42:16.928948 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Mar 17 17:42:16.928956 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Mar 17 17:42:16.928964 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Mar 17 17:42:16.928972 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Mar 17 17:42:16.928980 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Mar 17 17:42:16.928988 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Mar 17 17:42:16.928996 kernel: TSC deadline timer available
Mar 17 17:42:16.929003 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
Mar 17 17:42:16.929011 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Mar 17 17:42:16.929021 kernel: kvm-guest: KVM setup pv remote TLB flush
Mar 17 17:42:16.929029 kernel: kvm-guest: setup PV sched yield
Mar 17 17:42:16.929037 kernel: [mem 0x9d000000-0xdfffffff] available for PCI devices
Mar 17 17:42:16.929045 kernel: Booting paravirtualized kernel on KVM
Mar 17 17:42:16.929053 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Mar 17 17:42:16.929061 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Mar 17 17:42:16.929069 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u524288
Mar 17 17:42:16.929077 kernel: pcpu-alloc: s197032 r8192 d32344 u524288 alloc=1*2097152
Mar 17 17:42:16.929084 kernel: pcpu-alloc: [0] 0 1 2 3
Mar 17 17:42:16.929094 kernel: kvm-guest: PV spinlocks enabled
Mar 17 17:42:16.929102 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Mar 17 17:42:16.929111 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=2a4a0f64c0160ed10b339be09fdc9d7e265b13f78aefc87616e79bf13c00bb1c
Mar 17 17:42:16.929119 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Mar 17 17:42:16.929130 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Mar 17 17:42:16.929138 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Mar 17 17:42:16.929146 kernel: Fallback order for Node 0: 0
Mar 17 17:42:16.929154 kernel: Built 1 zonelists, mobility grouping on. Total pages: 629460
Mar 17 17:42:16.929164 kernel: Policy zone: DMA32
Mar 17 17:42:16.929172 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Mar 17 17:42:16.929180 kernel: Memory: 2387720K/2565800K available (14336K kernel code, 2303K rwdata, 22860K rodata, 43476K init, 1596K bss, 177824K reserved, 0K cma-reserved)
Mar 17 17:42:16.929188 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Mar 17 17:42:16.929196 kernel: ftrace: allocating 37910 entries in 149 pages
Mar 17 17:42:16.929204 kernel: ftrace: allocated 149 pages with 4 groups
Mar 17 17:42:16.929212 kernel: Dynamic Preempt: voluntary
Mar 17 17:42:16.929220 kernel: rcu: Preemptible hierarchical RCU implementation.
Mar 17 17:42:16.929228 kernel: rcu: RCU event tracing is enabled.
Mar 17 17:42:16.929238 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Mar 17 17:42:16.929246 kernel: Trampoline variant of Tasks RCU enabled.
Mar 17 17:42:16.929264 kernel: Rude variant of Tasks RCU enabled.
Mar 17 17:42:16.929272 kernel: Tracing variant of Tasks RCU enabled.
Mar 17 17:42:16.929280 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Mar 17 17:42:16.929287 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Mar 17 17:42:16.929295 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Mar 17 17:42:16.929303 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Mar 17 17:42:16.929311 kernel: Console: colour dummy device 80x25
Mar 17 17:42:16.929330 kernel: printk: console [ttyS0] enabled
Mar 17 17:42:16.929354 kernel: ACPI: Core revision 20230628
Mar 17 17:42:16.929362 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Mar 17 17:42:16.929370 kernel: APIC: Switch to symmetric I/O mode setup
Mar 17 17:42:16.929378 kernel: x2apic enabled
Mar 17 17:42:16.929385 kernel: APIC: Switched APIC routing to: physical x2apic
Mar 17 17:42:16.929397 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Mar 17 17:42:16.929405 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Mar 17 17:42:16.929413 kernel: kvm-guest: setup PV IPIs
Mar 17 17:42:16.929421 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Mar 17 17:42:16.929431 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Mar 17 17:42:16.929439 kernel: Calibrating delay loop (skipped) preset value.. 5589.49 BogoMIPS (lpj=2794748)
Mar 17 17:42:16.929447 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Mar 17 17:42:16.929455 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Mar 17 17:42:16.929463 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Mar 17 17:42:16.929471 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Mar 17 17:42:16.929479 kernel: Spectre V2 : Mitigation: Retpolines
Mar 17 17:42:16.929487 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Mar 17 17:42:16.929495 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Mar 17 17:42:16.929505 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Mar 17 17:42:16.929513 kernel: RETBleed: Mitigation: untrained return thunk
Mar 17 17:42:16.929521 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Mar 17 17:42:16.929529 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Mar 17 17:42:16.929537 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Mar 17 17:42:16.929548 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Mar 17 17:42:16.929556 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Mar 17 17:42:16.929564 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Mar 17 17:42:16.929575 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Mar 17 17:42:16.929582 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Mar 17 17:42:16.929590 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Mar 17 17:42:16.929598 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Mar 17 17:42:16.929606 kernel: Freeing SMP alternatives memory: 32K
Mar 17 17:42:16.929614 kernel: pid_max: default: 32768 minimum: 301
Mar 17 17:42:16.929622 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Mar 17 17:42:16.929630 kernel: landlock: Up and running.
Mar 17 17:42:16.929638 kernel: SELinux: Initializing.
Mar 17 17:42:16.929648 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Mar 17 17:42:16.929656 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Mar 17 17:42:16.929664 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Mar 17 17:42:16.929672 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Mar 17 17:42:16.929680 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Mar 17 17:42:16.929688 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Mar 17 17:42:16.929696 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Mar 17 17:42:16.929704 kernel: ... version: 0
Mar 17 17:42:16.929714 kernel: ... bit width: 48
Mar 17 17:42:16.929722 kernel: ... generic registers: 6
Mar 17 17:42:16.929730 kernel: ... value mask: 0000ffffffffffff
Mar 17 17:42:16.929738 kernel: ... max period: 00007fffffffffff
Mar 17 17:42:16.929746 kernel: ... fixed-purpose events: 0
Mar 17 17:42:16.929753 kernel: ... event mask: 000000000000003f
Mar 17 17:42:16.929761 kernel: signal: max sigframe size: 1776
Mar 17 17:42:16.929769 kernel: rcu: Hierarchical SRCU implementation.
Mar 17 17:42:16.929777 kernel: rcu: Max phase no-delay instances is 400.
Mar 17 17:42:16.929785 kernel: smp: Bringing up secondary CPUs ...
Mar 17 17:42:16.929795 kernel: smpboot: x86: Booting SMP configuration:
Mar 17 17:42:16.929803 kernel: .... node #0, CPUs: #1 #2 #3
Mar 17 17:42:16.929811 kernel: smp: Brought up 1 node, 4 CPUs
Mar 17 17:42:16.929819 kernel: smpboot: Max logical packages: 1
Mar 17 17:42:16.929827 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS)
Mar 17 17:42:16.929835 kernel: devtmpfs: initialized
Mar 17 17:42:16.929842 kernel: x86/mm: Memory block size: 128MB
Mar 17 17:42:16.929850 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes)
Mar 17 17:42:16.929858 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes)
Mar 17 17:42:16.929869 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00811000-0x008fffff] (978944 bytes)
Mar 17 17:42:16.929877 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes)
Mar 17 17:42:16.929885 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9ce96000-0x9ce97fff] (8192 bytes)
Mar 17 17:42:16.929893 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes)
Mar 17 17:42:16.929901 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Mar 17 17:42:16.929909 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Mar 17 17:42:16.929917 kernel: pinctrl core: initialized pinctrl subsystem
Mar 17 17:42:16.929924 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Mar 17 17:42:16.929933 kernel: audit: initializing netlink subsys (disabled)
Mar 17 17:42:16.929943 kernel: audit: type=2000 audit(1742233335.754:1): state=initialized audit_enabled=0 res=1
Mar 17 17:42:16.929951 kernel: thermal_sys: Registered thermal governor 'step_wise'
Mar 17 17:42:16.929959 kernel: thermal_sys: Registered thermal governor 'user_space'
Mar 17 17:42:16.929967 kernel: cpuidle: using governor menu
Mar 17 17:42:16.929975 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Mar 17 17:42:16.929982 kernel: dca service started, version 1.12.1
Mar 17 17:42:16.929990 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xe0000000-0xefffffff] (base 0xe0000000)
Mar 17 17:42:16.929998 kernel: PCI: Using configuration type 1 for base access
Mar 17 17:42:16.930006 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Mar 17 17:42:16.930017 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Mar 17 17:42:16.930025 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Mar 17 17:42:16.930033 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Mar 17 17:42:16.930041 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Mar 17 17:42:16.930049 kernel: ACPI: Added _OSI(Module Device)
Mar 17 17:42:16.930056 kernel: ACPI: Added _OSI(Processor Device)
Mar 17 17:42:16.930064 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Mar 17 17:42:16.930072 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Mar 17 17:42:16.930080 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Mar 17 17:42:16.930090 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Mar 17 17:42:16.930098 kernel: ACPI: Interpreter enabled
Mar 17 17:42:16.930106 kernel: ACPI: PM: (supports S0 S3 S5)
Mar 17 17:42:16.930114 kernel: ACPI: Using IOAPIC for interrupt routing
Mar 17 17:42:16.930122 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Mar 17 17:42:16.930129 kernel: PCI: Using E820 reservations for host bridge windows
Mar 17 17:42:16.930137 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Mar 17 17:42:16.930145 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Mar 17 17:42:16.930393 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Mar 17 17:42:16.930546 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Mar 17 17:42:16.930677 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Mar 17 17:42:16.930688 kernel: PCI host bridge to bus 0000:00
Mar 17 17:42:16.930835 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Mar 17 17:42:16.930957 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Mar 17 17:42:16.931076 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Mar 17 17:42:16.931200 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xdfffffff window]
Mar 17 17:42:16.931347 kernel: pci_bus 0000:00: root bus resource [mem 0xf0000000-0xfebfffff window]
Mar 17 17:42:16.931486 kernel: pci_bus 0000:00: root bus resource [mem 0x380000000000-0x3807ffffffff window]
Mar 17 17:42:16.931607 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Mar 17 17:42:16.931774 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Mar 17 17:42:16.931925 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
Mar 17 17:42:16.932056 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xc0000000-0xc0ffffff pref]
Mar 17 17:42:16.932192 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xc1044000-0xc1044fff]
Mar 17 17:42:16.932352 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xffff0000-0xffffffff pref]
Mar 17 17:42:16.932487 kernel: pci 0000:00:01.0: BAR 0: assigned to efifb
Mar 17 17:42:16.932617 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Mar 17 17:42:16.932777 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00
Mar 17 17:42:16.932918 kernel: pci 0000:00:02.0: reg 0x10: [io 0x6100-0x611f]
Mar 17 17:42:16.933058 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xc1043000-0xc1043fff]
Mar 17 17:42:16.933188 kernel: pci 0000:00:02.0: reg 0x20: [mem 0x380000000000-0x380000003fff 64bit pref]
Mar 17 17:42:16.933382 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000
Mar 17 17:42:16.933544 kernel: pci 0000:00:03.0: reg 0x10: [io 0x6000-0x607f]
Mar 17 17:42:16.933677 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc1042000-0xc1042fff]
Mar 17 17:42:16.933809 kernel: pci 0000:00:03.0: reg 0x20: [mem 0x380000004000-0x380000007fff 64bit pref]
Mar 17 17:42:16.933984 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Mar 17 17:42:16.934123 kernel: pci 0000:00:04.0: reg 0x10: [io 0x60e0-0x60ff]
Mar 17 17:42:16.934279 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc1041000-0xc1041fff]
Mar 17 17:42:16.934464 kernel: pci 0000:00:04.0: reg 0x20: [mem 0x380000008000-0x38000000bfff 64bit pref]
Mar 17 17:42:16.934595 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfffc0000-0xffffffff pref]
Mar 17 17:42:16.934740 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Mar 17 17:42:16.934869 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Mar 17 17:42:16.935093 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Mar 17 17:42:16.935235 kernel: pci 0000:00:1f.2: reg 0x20: [io 0x60c0-0x60df]
Mar 17 17:42:16.935393 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xc1040000-0xc1040fff]
Mar 17 17:42:16.935567 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Mar 17 17:42:16.935700 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x6080-0x60bf]
Mar 17 17:42:16.935711 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Mar 17 17:42:16.935719 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Mar 17 17:42:16.935728 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Mar 17 17:42:16.935740 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Mar 17 17:42:16.935748 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Mar 17 17:42:16.935756 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Mar 17 17:42:16.935764 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Mar 17 17:42:16.935772 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Mar 17 17:42:16.935780 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Mar 17 17:42:16.935788 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Mar 17 17:42:16.935796 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Mar 17 17:42:16.935804 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Mar 17 17:42:16.935815 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Mar 17 17:42:16.935823 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Mar 17 17:42:16.935831 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Mar 17 17:42:16.935839 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Mar 17 17:42:16.935847 kernel: iommu: Default domain type: Translated
Mar 17 17:42:16.935854 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Mar 17 17:42:16.935863 kernel: efivars: Registered efivars operations
Mar 17 17:42:16.935871 kernel: PCI: Using ACPI for IRQ routing
Mar 17 17:42:16.935879 kernel: PCI: pci_cache_line_size set to 64 bytes
Mar 17 17:42:16.935889 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff]
Mar 17 17:42:16.935897 kernel: e820: reserve RAM buffer [mem 0x00811000-0x008fffff]
Mar 17 17:42:16.935905 kernel: e820: reserve RAM buffer [mem 0x9b314018-0x9bffffff]
Mar 17 17:42:16.935913 kernel: e820: reserve RAM buffer [mem 0x9b351018-0x9bffffff]
Mar 17 17:42:16.935921 kernel: e820: reserve RAM buffer [mem 0x9bd3f000-0x9bffffff]
Mar 17 17:42:16.935929 kernel: e820: reserve RAM buffer [mem 0x9c8ed000-0x9fffffff]
Mar 17 17:42:16.935937 kernel: e820: reserve RAM buffer [mem 0x9ce92000-0x9fffffff]
Mar 17 17:42:16.935945 kernel: e820: reserve RAM buffer [mem 0x9cedc000-0x9fffffff]
Mar 17 17:42:16.936076 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Mar 17 17:42:16.936210 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Mar 17 17:42:16.936414 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Mar 17 17:42:16.936427 kernel: vgaarb: loaded
Mar 17 17:42:16.936436 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Mar 17 17:42:16.936444 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Mar 17 17:42:16.936452 kernel: clocksource: Switched to clocksource kvm-clock
Mar 17 17:42:16.936460 kernel: VFS: Disk quotas dquot_6.6.0
Mar 17 17:42:16.936468 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Mar 17 17:42:16.936477 kernel: pnp: PnP ACPI init
Mar 17 17:42:16.936656 kernel: system 00:05: [mem 0xe0000000-0xefffffff window] has been reserved
Mar 17 17:42:16.936669 kernel: pnp: PnP ACPI: found 6 devices
Mar 17 17:42:16.936678 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Mar 17 17:42:16.936686 kernel: NET: Registered PF_INET protocol family
Mar 17 17:42:16.936715 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Mar 17 17:42:16.936725 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Mar 17 17:42:16.936734 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Mar 17 17:42:16.936742 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Mar 17 17:42:16.936753 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Mar 17 17:42:16.936761 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Mar 17 17:42:16.936769 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Mar 17 17:42:16.936810 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Mar 17 17:42:16.936818 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Mar 17 17:42:16.936826 kernel: NET: Registered PF_XDP protocol family
Mar 17 17:42:16.936968 kernel: pci 0000:00:04.0: can't claim BAR 6 [mem 0xfffc0000-0xffffffff pref]: no compatible bridge window
Mar 17 17:42:16.937106 kernel: pci 0000:00:04.0: BAR 6: assigned [mem 0x9d000000-0x9d03ffff pref]
Mar 17 17:42:16.939241 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Mar 17 17:42:16.939480 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Mar 17 17:42:16.939646 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Mar 17 17:42:16.939805 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xdfffffff window]
Mar 17 17:42:16.939961 kernel: pci_bus 0000:00: resource 8 [mem 0xf0000000-0xfebfffff window]
Mar 17 17:42:16.940115 kernel: pci_bus 0000:00: resource 9 [mem 0x380000000000-0x3807ffffffff window]
Mar 17 17:42:16.940133 kernel: PCI: CLS 0 bytes, default 64
Mar 17 17:42:16.940145 kernel: Initialise system trusted keyrings
Mar 17 17:42:16.940168 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Mar 17 17:42:16.940180 kernel: Key type asymmetric registered
Mar 17 17:42:16.940191 kernel: Asymmetric key parser 'x509' registered
Mar 17 17:42:16.940202 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Mar 17 17:42:16.940214 kernel: io scheduler mq-deadline registered
Mar 17 17:42:16.940225 kernel: io scheduler kyber registered
Mar 17 17:42:16.940237 kernel: io scheduler bfq registered
Mar 17 17:42:16.940249 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Mar 17 17:42:16.940283 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Mar 17 17:42:16.940327 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Mar 17 17:42:16.940348 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Mar 17 17:42:16.940359 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Mar 17 17:42:16.940371 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Mar 17 17:42:16.940383 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Mar 17 17:42:16.940395 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Mar 17 17:42:16.940410 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Mar 17 17:42:16.940422 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Mar 17 17:42:16.940627 kernel: rtc_cmos 00:04: RTC can wake from S4
Mar 17 17:42:16.940789 kernel: rtc_cmos 00:04: registered as rtc0
Mar 17 17:42:16.940949 kernel: rtc_cmos 00:04: setting system clock to 2025-03-17T17:42:16 UTC (1742233336)
Mar 17 17:42:16.941108 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Mar 17 17:42:16.941125 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Mar 17 17:42:16.941138 kernel: efifb: probing for efifb
Mar 17 17:42:16.941156 kernel: efifb: framebuffer at 0xc0000000, using 4000k, total 4000k
Mar 17 17:42:16.941167 kernel: efifb: mode is 1280x800x32, linelength=5120, pages=1
Mar 17 17:42:16.941179 kernel: efifb: scrolling: redraw
Mar 17 17:42:16.941191 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Mar 17 17:42:16.941203 kernel: Console: switching to colour frame buffer device 160x50
Mar 17 17:42:16.941216 kernel: fb0: EFI VGA frame buffer device
Mar 17 17:42:16.941228 kernel: pstore: Using crash dump compression: deflate
Mar 17 17:42:16.941241 kernel: pstore: Registered efi_pstore as persistent store backend
Mar 17 17:42:16.941262 kernel: NET: Registered PF_INET6 protocol family
Mar 17 17:42:16.941280 kernel: Segment Routing with IPv6
Mar 17 17:42:16.941292 kernel: In-situ OAM (IOAM) with IPv6
Mar 17 17:42:16.941304 kernel: NET: Registered PF_PACKET protocol family
Mar 17 17:42:16.941353 kernel: Key type dns_resolver registered
Mar 17 17:42:16.941365 kernel: IPI shorthand broadcast: enabled
Mar 17 17:42:16.941376 kernel: sched_clock: Marking stable (1249003083, 161196253)->(1451401856, -41202520)
Mar 17 17:42:16.941388 kernel: registered taskstats version 1
Mar 17 17:42:16.941399 kernel: Loading compiled-in X.509 certificates
Mar 17 17:42:16.941411 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.83-flatcar: 2d438fc13e28f87f3f580874887bade2e2b0c7dd'
Mar 17 17:42:16.941427 kernel: Key type .fscrypt registered
Mar 17 17:42:16.941438 kernel: Key type fscrypt-provisioning registered
Mar 17 17:42:16.941449 kernel: ima: No TPM chip found, activating TPM-bypass!
Mar 17 17:42:16.941461 kernel: ima: Allocated hash algorithm: sha1 Mar 17 17:42:16.941472 kernel: ima: No architecture policies found Mar 17 17:42:16.941483 kernel: clk: Disabling unused clocks Mar 17 17:42:16.941494 kernel: Freeing unused kernel image (initmem) memory: 43476K Mar 17 17:42:16.941506 kernel: Write protecting the kernel read-only data: 38912k Mar 17 17:42:16.941517 kernel: Freeing unused kernel image (rodata/data gap) memory: 1716K Mar 17 17:42:16.941567 kernel: Run /init as init process Mar 17 17:42:16.941597 kernel: with arguments: Mar 17 17:42:16.941609 kernel: /init Mar 17 17:42:16.941621 kernel: with environment: Mar 17 17:42:16.941632 kernel: HOME=/ Mar 17 17:42:16.941644 kernel: TERM=linux Mar 17 17:42:16.941655 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Mar 17 17:42:16.941669 systemd[1]: Successfully made /usr/ read-only. Mar 17 17:42:16.941698 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Mar 17 17:42:16.941711 systemd[1]: Detected virtualization kvm. Mar 17 17:42:16.941723 systemd[1]: Detected architecture x86-64. Mar 17 17:42:16.941735 systemd[1]: Running in initrd. Mar 17 17:42:16.941747 systemd[1]: No hostname configured, using default hostname. Mar 17 17:42:16.941759 systemd[1]: Hostname set to . Mar 17 17:42:16.941771 systemd[1]: Initializing machine ID from VM UUID. Mar 17 17:42:16.941782 systemd[1]: Queued start job for default target initrd.target. Mar 17 17:42:16.941798 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Mar 17 17:42:16.941810 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. 
Mar 17 17:42:16.941824 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Mar 17 17:42:16.941836 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Mar 17 17:42:16.941849 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Mar 17 17:42:16.941862 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Mar 17 17:42:16.941878 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Mar 17 17:42:16.941895 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Mar 17 17:42:16.941908 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Mar 17 17:42:16.941922 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Mar 17 17:42:16.941936 systemd[1]: Reached target paths.target - Path Units. Mar 17 17:42:16.941948 systemd[1]: Reached target slices.target - Slice Units. Mar 17 17:42:16.941961 systemd[1]: Reached target swap.target - Swaps. Mar 17 17:42:16.941973 systemd[1]: Reached target timers.target - Timer Units. Mar 17 17:42:16.941986 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Mar 17 17:42:16.942003 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Mar 17 17:42:16.942017 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Mar 17 17:42:16.942029 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Mar 17 17:42:16.942043 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Mar 17 17:42:16.942056 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Mar 17 17:42:16.942069 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. 
Mar 17 17:42:16.942081 systemd[1]: Reached target sockets.target - Socket Units. Mar 17 17:42:16.942094 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Mar 17 17:42:16.942107 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Mar 17 17:42:16.942124 systemd[1]: Finished network-cleanup.service - Network Cleanup. Mar 17 17:42:16.942137 systemd[1]: Starting systemd-fsck-usr.service... Mar 17 17:42:16.942150 systemd[1]: Starting systemd-journald.service - Journal Service... Mar 17 17:42:16.942163 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Mar 17 17:42:16.942175 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 17 17:42:16.942188 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Mar 17 17:42:16.942201 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Mar 17 17:42:16.942219 systemd[1]: Finished systemd-fsck-usr.service. Mar 17 17:42:16.942233 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Mar 17 17:42:16.942298 systemd-journald[194]: Collecting audit messages is disabled. Mar 17 17:42:16.942357 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Mar 17 17:42:16.942371 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Mar 17 17:42:16.942383 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Mar 17 17:42:16.942395 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Mar 17 17:42:16.942407 systemd-journald[194]: Journal started Mar 17 17:42:16.942437 systemd-journald[194]: Runtime Journal (/run/log/journal/7764b01272004fc9bebfbdf7a7c2a527) is 6M, max 48.2M, 42.2M free. 
Mar 17 17:42:16.940350 systemd-modules-load[195]: Inserted module 'overlay' Mar 17 17:42:16.946336 systemd[1]: Started systemd-journald.service - Journal Service. Mar 17 17:42:16.949596 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Mar 17 17:42:16.953305 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 17 17:42:16.955149 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Mar 17 17:42:16.957867 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Mar 17 17:42:16.968945 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Mar 17 17:42:16.976617 dracut-cmdline[222]: dracut-dracut-053 Mar 17 17:42:16.980765 dracut-cmdline[222]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=2a4a0f64c0160ed10b339be09fdc9d7e265b13f78aefc87616e79bf13c00bb1c Mar 17 17:42:16.986048 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Mar 17 17:42:16.988606 systemd-modules-load[195]: Inserted module 'br_netfilter' Mar 17 17:42:16.989543 kernel: Bridge firewalling registered Mar 17 17:42:16.990754 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Mar 17 17:42:16.998574 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Mar 17 17:42:17.010225 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Mar 17 17:42:17.021562 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... 
Mar 17 17:42:17.060782 systemd-resolved[264]: Positive Trust Anchors: Mar 17 17:42:17.060804 systemd-resolved[264]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Mar 17 17:42:17.060842 systemd-resolved[264]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Mar 17 17:42:17.064021 systemd-resolved[264]: Defaulting to hostname 'linux'. Mar 17 17:42:17.069549 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Mar 17 17:42:17.072539 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Mar 17 17:42:17.079341 kernel: SCSI subsystem initialized Mar 17 17:42:17.088342 kernel: Loading iSCSI transport class v2.0-870. Mar 17 17:42:17.099349 kernel: iscsi: registered transport (tcp) Mar 17 17:42:17.121367 kernel: iscsi: registered transport (qla4xxx) Mar 17 17:42:17.121452 kernel: QLogic iSCSI HBA Driver Mar 17 17:42:17.273955 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Mar 17 17:42:17.284541 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Mar 17 17:42:17.314403 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
Mar 17 17:42:17.314502 kernel: device-mapper: uevent: version 1.0.3 Mar 17 17:42:17.314516 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Mar 17 17:42:17.358371 kernel: raid6: avx2x4 gen() 29519 MB/s Mar 17 17:42:17.375349 kernel: raid6: avx2x2 gen() 31283 MB/s Mar 17 17:42:17.392442 kernel: raid6: avx2x1 gen() 25858 MB/s Mar 17 17:42:17.392474 kernel: raid6: using algorithm avx2x2 gen() 31283 MB/s Mar 17 17:42:17.410456 kernel: raid6: .... xor() 19863 MB/s, rmw enabled Mar 17 17:42:17.410484 kernel: raid6: using avx2x2 recovery algorithm Mar 17 17:42:17.433349 kernel: xor: automatically using best checksumming function avx Mar 17 17:42:17.585356 kernel: Btrfs loaded, zoned=no, fsverity=no Mar 17 17:42:17.601040 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Mar 17 17:42:17.612651 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Mar 17 17:42:17.629370 systemd-udevd[413]: Using default interface naming scheme 'v255'. Mar 17 17:42:17.636038 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Mar 17 17:42:17.649549 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Mar 17 17:42:17.665962 dracut-pre-trigger[421]: rd.md=0: removing MD RAID activation Mar 17 17:42:17.707499 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Mar 17 17:42:17.721853 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Mar 17 17:42:17.790779 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Mar 17 17:42:17.801699 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Mar 17 17:42:17.816685 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Mar 17 17:42:17.818865 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. 
Mar 17 17:42:17.821179 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Mar 17 17:42:17.822768 systemd[1]: Reached target remote-fs.target - Remote File Systems. Mar 17 17:42:17.832356 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Mar 17 17:42:17.863484 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Mar 17 17:42:17.863685 kernel: cryptd: max_cpu_qlen set to 1000 Mar 17 17:42:17.863700 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Mar 17 17:42:17.863722 kernel: GPT:9289727 != 19775487 Mar 17 17:42:17.863734 kernel: GPT:Alternate GPT header not at the end of the disk. Mar 17 17:42:17.863746 kernel: GPT:9289727 != 19775487 Mar 17 17:42:17.863758 kernel: GPT: Use GNU Parted to correct GPT errors. Mar 17 17:42:17.863769 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Mar 17 17:42:17.863782 kernel: AVX2 version of gcm_enc/dec engaged. Mar 17 17:42:17.836483 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Mar 17 17:42:17.847156 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Mar 17 17:42:17.869345 kernel: AES CTR mode by8 optimization enabled Mar 17 17:42:17.869383 kernel: libata version 3.00 loaded. 
Mar 17 17:42:17.878340 kernel: ahci 0000:00:1f.2: version 3.0 Mar 17 17:42:17.912472 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Mar 17 17:42:17.912489 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Mar 17 17:42:17.912652 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Mar 17 17:42:17.912811 kernel: scsi host0: ahci Mar 17 17:42:17.912978 kernel: scsi host1: ahci Mar 17 17:42:17.913136 kernel: scsi host2: ahci Mar 17 17:42:17.913342 kernel: scsi host3: ahci Mar 17 17:42:17.913507 kernel: scsi host4: ahci Mar 17 17:42:17.913708 kernel: BTRFS: device fsid 16b3954e-2e86-4c7f-a948-d3d3817b1bdc devid 1 transid 42 /dev/vda3 scanned by (udev-worker) (461) Mar 17 17:42:17.913726 kernel: scsi host5: ahci Mar 17 17:42:17.913890 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 34 Mar 17 17:42:17.913903 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 34 Mar 17 17:42:17.913913 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by (udev-worker) (466) Mar 17 17:42:17.913924 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 34 Mar 17 17:42:17.913939 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 34 Mar 17 17:42:17.913950 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 34 Mar 17 17:42:17.913960 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 34 Mar 17 17:42:17.881257 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Mar 17 17:42:17.881558 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 17 17:42:17.885073 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Mar 17 17:42:17.886628 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Mar 17 17:42:17.887685 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. 
Mar 17 17:42:17.890414 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Mar 17 17:42:17.900966 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 17 17:42:17.918917 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Mar 17 17:42:17.939943 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Mar 17 17:42:17.951621 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Mar 17 17:42:17.967699 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Mar 17 17:42:17.975972 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Mar 17 17:42:17.977247 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Mar 17 17:42:17.992487 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Mar 17 17:42:17.994641 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Mar 17 17:42:18.006611 disk-uuid[568]: Primary Header is updated. Mar 17 17:42:18.006611 disk-uuid[568]: Secondary Entries is updated. Mar 17 17:42:18.006611 disk-uuid[568]: Secondary Header is updated. Mar 17 17:42:18.011359 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Mar 17 17:42:18.016378 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Mar 17 17:42:18.019107 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Mar 17 17:42:18.224355 kernel: ata5: SATA link down (SStatus 0 SControl 300) Mar 17 17:42:18.224437 kernel: ata4: SATA link down (SStatus 0 SControl 300) Mar 17 17:42:18.225338 kernel: ata1: SATA link down (SStatus 0 SControl 300) Mar 17 17:42:18.225367 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Mar 17 17:42:18.226808 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Mar 17 17:42:18.226878 kernel: ata3.00: applying bridge limits Mar 17 17:42:18.228342 kernel: ata2: SATA link down (SStatus 0 SControl 300) Mar 17 17:42:18.228355 kernel: ata3.00: configured for UDMA/100 Mar 17 17:42:18.229348 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Mar 17 17:42:18.232343 kernel: ata6: SATA link down (SStatus 0 SControl 300) Mar 17 17:42:18.282356 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Mar 17 17:42:18.296260 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Mar 17 17:42:18.296289 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Mar 17 17:42:19.020084 disk-uuid[570]: The operation has completed successfully. Mar 17 17:42:19.021767 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Mar 17 17:42:19.051722 systemd[1]: disk-uuid.service: Deactivated successfully. Mar 17 17:42:19.051890 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Mar 17 17:42:19.105616 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Mar 17 17:42:19.113118 sh[592]: Success Mar 17 17:42:19.128352 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" Mar 17 17:42:19.172588 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Mar 17 17:42:19.186301 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Mar 17 17:42:19.188689 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Mar 17 17:42:19.202129 kernel: BTRFS info (device dm-0): first mount of filesystem 16b3954e-2e86-4c7f-a948-d3d3817b1bdc Mar 17 17:42:19.202180 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Mar 17 17:42:19.202199 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Mar 17 17:42:19.203139 kernel: BTRFS info (device dm-0): disabling log replay at mount time Mar 17 17:42:19.203935 kernel: BTRFS info (device dm-0): using free space tree Mar 17 17:42:19.208625 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Mar 17 17:42:19.209459 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Mar 17 17:42:19.219492 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Mar 17 17:42:19.221393 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Mar 17 17:42:19.237005 kernel: BTRFS info (device vda6): first mount of filesystem e64ce651-fa93-44de-893d-ff1e0bc9061f Mar 17 17:42:19.237072 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Mar 17 17:42:19.237088 kernel: BTRFS info (device vda6): using free space tree Mar 17 17:42:19.241404 kernel: BTRFS info (device vda6): auto enabling async discard Mar 17 17:42:19.251593 systemd[1]: mnt-oem.mount: Deactivated successfully. Mar 17 17:42:19.254347 kernel: BTRFS info (device vda6): last unmount of filesystem e64ce651-fa93-44de-893d-ff1e0bc9061f Mar 17 17:42:19.270737 systemd[1]: Finished ignition-setup.service - Ignition (setup). Mar 17 17:42:19.278701 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Mar 17 17:42:19.328632 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Mar 17 17:42:19.334165 systemd[1]: Starting systemd-networkd.service - Network Configuration... 
Mar 17 17:42:19.398059 systemd-networkd[776]: lo: Link UP Mar 17 17:42:19.398072 systemd-networkd[776]: lo: Gained carrier Mar 17 17:42:19.401512 systemd-networkd[776]: Enumeration completed Mar 17 17:42:19.404078 systemd[1]: Started systemd-networkd.service - Network Configuration. Mar 17 17:42:19.404177 systemd-networkd[776]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 17 17:42:19.404183 systemd-networkd[776]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Mar 17 17:42:19.404244 systemd[1]: Reached target network.target - Network. Mar 17 17:42:19.407290 systemd-networkd[776]: eth0: Link UP Mar 17 17:42:19.407296 systemd-networkd[776]: eth0: Gained carrier Mar 17 17:42:19.407308 systemd-networkd[776]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 17 17:42:19.429469 systemd-networkd[776]: eth0: DHCPv4 address 10.0.0.14/16, gateway 10.0.0.1 acquired from 10.0.0.1 Mar 17 17:42:19.464981 ignition[710]: Ignition 2.20.0 Mar 17 17:42:19.464996 ignition[710]: Stage: fetch-offline Mar 17 17:42:19.465053 ignition[710]: no configs at "/usr/lib/ignition/base.d" Mar 17 17:42:19.465067 ignition[710]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 17 17:42:19.465198 ignition[710]: parsed url from cmdline: "" Mar 17 17:42:19.465203 ignition[710]: no config URL provided Mar 17 17:42:19.465211 ignition[710]: reading system config file "/usr/lib/ignition/user.ign" Mar 17 17:42:19.465223 ignition[710]: no config at "/usr/lib/ignition/user.ign" Mar 17 17:42:19.465259 ignition[710]: op(1): [started] loading QEMU firmware config module Mar 17 17:42:19.465265 ignition[710]: op(1): executing: "modprobe" "qemu_fw_cfg" Mar 17 17:42:19.472467 ignition[710]: op(1): [finished] loading QEMU firmware config module Mar 17 17:42:19.511842 ignition[710]: parsing config with SHA512: 
d0a2706f6f1e5084cc1956f187c1e11d531e1f20c9bd596f69b15ebffd32c9197abc6170a815828a3f3b1b166dab9de419e0cc97a6229e34fef0ed9fa67e8d90 Mar 17 17:42:19.517132 unknown[710]: fetched base config from "system" Mar 17 17:42:19.517150 unknown[710]: fetched user config from "qemu" Mar 17 17:42:19.517632 ignition[710]: fetch-offline: fetch-offline passed Mar 17 17:42:19.517719 ignition[710]: Ignition finished successfully Mar 17 17:42:19.520169 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Mar 17 17:42:19.522043 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Mar 17 17:42:19.533630 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Mar 17 17:42:19.549808 ignition[789]: Ignition 2.20.0 Mar 17 17:42:19.549820 ignition[789]: Stage: kargs Mar 17 17:42:19.549974 ignition[789]: no configs at "/usr/lib/ignition/base.d" Mar 17 17:42:19.549985 ignition[789]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 17 17:42:19.550763 ignition[789]: kargs: kargs passed Mar 17 17:42:19.550809 ignition[789]: Ignition finished successfully Mar 17 17:42:19.557663 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Mar 17 17:42:19.572626 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Mar 17 17:42:19.592452 ignition[796]: Ignition 2.20.0 Mar 17 17:42:19.592467 ignition[796]: Stage: disks Mar 17 17:42:19.592636 ignition[796]: no configs at "/usr/lib/ignition/base.d" Mar 17 17:42:19.592647 ignition[796]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 17 17:42:19.596395 ignition[796]: disks: disks passed Mar 17 17:42:19.597031 ignition[796]: Ignition finished successfully Mar 17 17:42:19.600303 systemd[1]: Finished ignition-disks.service - Ignition (disks). Mar 17 17:42:19.602499 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. 
Mar 17 17:42:19.604732 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Mar 17 17:42:19.607109 systemd[1]: Reached target local-fs.target - Local File Systems. Mar 17 17:42:19.609070 systemd[1]: Reached target sysinit.target - System Initialization. Mar 17 17:42:19.611269 systemd[1]: Reached target basic.target - Basic System. Mar 17 17:42:19.630742 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Mar 17 17:42:19.644976 systemd-fsck[806]: ROOT: clean, 14/553520 files, 52654/553472 blocks Mar 17 17:42:19.651896 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Mar 17 17:42:20.244414 systemd[1]: Mounting sysroot.mount - /sysroot... Mar 17 17:42:20.349371 kernel: EXT4-fs (vda9): mounted filesystem 21764504-a65e-45eb-84e1-376b55b62aba r/w with ordered data mode. Quota mode: none. Mar 17 17:42:20.349964 systemd[1]: Mounted sysroot.mount - /sysroot. Mar 17 17:42:20.352139 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Mar 17 17:42:20.375460 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Mar 17 17:42:20.378212 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Mar 17 17:42:20.380858 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Mar 17 17:42:20.380946 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Mar 17 17:42:20.383196 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Mar 17 17:42:20.389556 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by mount (814) Mar 17 17:42:20.391022 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. 
Mar 17 17:42:20.395829 kernel: BTRFS info (device vda6): first mount of filesystem e64ce651-fa93-44de-893d-ff1e0bc9061f Mar 17 17:42:20.395859 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Mar 17 17:42:20.395870 kernel: BTRFS info (device vda6): using free space tree Mar 17 17:42:20.395881 kernel: BTRFS info (device vda6): auto enabling async discard Mar 17 17:42:20.397352 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Mar 17 17:42:20.409578 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Mar 17 17:42:20.450196 initrd-setup-root[838]: cut: /sysroot/etc/passwd: No such file or directory Mar 17 17:42:20.455599 initrd-setup-root[845]: cut: /sysroot/etc/group: No such file or directory Mar 17 17:42:20.461122 initrd-setup-root[852]: cut: /sysroot/etc/shadow: No such file or directory Mar 17 17:42:20.465917 initrd-setup-root[859]: cut: /sysroot/etc/gshadow: No such file or directory Mar 17 17:42:20.649127 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Mar 17 17:42:20.658481 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Mar 17 17:42:20.661989 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Mar 17 17:42:20.668353 kernel: BTRFS info (device vda6): last unmount of filesystem e64ce651-fa93-44de-893d-ff1e0bc9061f Mar 17 17:42:20.749041 ignition[927]: INFO : Ignition 2.20.0 Mar 17 17:42:20.749041 ignition[927]: INFO : Stage: mount Mar 17 17:42:20.751196 ignition[927]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 17 17:42:20.751196 ignition[927]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 17 17:42:20.750501 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Mar 17 17:42:20.755811 ignition[927]: INFO : mount: mount passed Mar 17 17:42:20.756666 ignition[927]: INFO : Ignition finished successfully Mar 17 17:42:20.758997 systemd[1]: Finished ignition-mount.service - Ignition (mount). 
Mar 17 17:42:20.769435 systemd[1]: Starting ignition-files.service - Ignition (files)...
Mar 17 17:42:21.202038 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Mar 17 17:42:21.219613 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Mar 17 17:42:21.228349 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/vda6 scanned by mount (941)
Mar 17 17:42:21.228400 kernel: BTRFS info (device vda6): first mount of filesystem e64ce651-fa93-44de-893d-ff1e0bc9061f
Mar 17 17:42:21.230859 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Mar 17 17:42:21.230873 kernel: BTRFS info (device vda6): using free space tree
Mar 17 17:42:21.233347 kernel: BTRFS info (device vda6): auto enabling async discard
Mar 17 17:42:21.235998 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Mar 17 17:42:21.267062 ignition[958]: INFO : Ignition 2.20.0
Mar 17 17:42:21.267062 ignition[958]: INFO : Stage: files
Mar 17 17:42:21.269089 ignition[958]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 17 17:42:21.269089 ignition[958]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 17 17:42:21.272376 ignition[958]: DEBUG : files: compiled without relabeling support, skipping
Mar 17 17:42:21.274247 ignition[958]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Mar 17 17:42:21.274247 ignition[958]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Mar 17 17:42:21.278714 ignition[958]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Mar 17 17:42:21.280284 ignition[958]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Mar 17 17:42:21.282387 unknown[958]: wrote ssh authorized keys file for user: core
Mar 17 17:42:21.283696 ignition[958]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Mar 17 17:42:21.286219 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Mar 17 17:42:21.288708 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Mar 17 17:42:21.334239 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Mar 17 17:42:21.433490 systemd-networkd[776]: eth0: Gained IPv6LL
Mar 17 17:42:21.496716 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Mar 17 17:42:21.496716 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Mar 17 17:42:21.501294 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Mar 17 17:42:21.501294 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Mar 17 17:42:21.501294 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Mar 17 17:42:21.501294 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 17 17:42:21.501294 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 17 17:42:21.501294 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 17 17:42:21.501294 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 17 17:42:21.501294 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Mar 17 17:42:21.501294 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Mar 17 17:42:21.501294 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Mar 17 17:42:21.501294 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Mar 17 17:42:21.501294 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Mar 17 17:42:21.501294 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-x86-64.raw: attempt #1
Mar 17 17:42:22.010285 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Mar 17 17:42:23.217769 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Mar 17 17:42:23.217769 ignition[958]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Mar 17 17:42:23.284688 ignition[958]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 17 17:42:23.287307 ignition[958]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 17 17:42:23.287307 ignition[958]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Mar 17 17:42:23.287307 ignition[958]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Mar 17 17:42:23.287307 ignition[958]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Mar 17 17:42:23.287307 ignition[958]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Mar 17 17:42:23.287307 ignition[958]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Mar 17 17:42:23.287307 ignition[958]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service"
Mar 17 17:42:23.307899 ignition[958]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service"
Mar 17 17:42:23.311866 ignition[958]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Mar 17 17:42:23.313472 ignition[958]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service"
Mar 17 17:42:23.313472 ignition[958]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
Mar 17 17:42:23.313472 ignition[958]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
Mar 17 17:42:23.313472 ignition[958]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
Mar 17 17:42:23.313472 ignition[958]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
Mar 17 17:42:23.313472 ignition[958]: INFO : files: files passed
Mar 17 17:42:23.313472 ignition[958]: INFO : Ignition finished successfully
Mar 17 17:42:23.316394 systemd[1]: Finished ignition-files.service - Ignition (files).
Mar 17 17:42:23.340533 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Mar 17 17:42:23.343860 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Mar 17 17:42:23.346193 systemd[1]: ignition-quench.service: Deactivated successfully.
Mar 17 17:42:23.346345 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Mar 17 17:42:23.354925 initrd-setup-root-after-ignition[986]: grep: /sysroot/oem/oem-release: No such file or directory
Mar 17 17:42:23.357963 initrd-setup-root-after-ignition[988]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Mar 17 17:42:23.357963 initrd-setup-root-after-ignition[988]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Mar 17 17:42:23.361131 initrd-setup-root-after-ignition[992]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Mar 17 17:42:23.360592 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Mar 17 17:42:23.362692 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Mar 17 17:42:23.372568 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Mar 17 17:42:23.405033 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Mar 17 17:42:23.405232 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Mar 17 17:42:23.407212 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Mar 17 17:42:23.408707 systemd[1]: Reached target initrd.target - Initrd Default Target.
Mar 17 17:42:23.412567 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Mar 17 17:42:23.414311 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Mar 17 17:42:23.435572 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Mar 17 17:42:23.447520 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Mar 17 17:42:23.456894 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Mar 17 17:42:23.458440 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 17 17:42:23.461176 systemd[1]: Stopped target timers.target - Timer Units.
Mar 17 17:42:23.463681 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Mar 17 17:42:23.463833 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Mar 17 17:42:23.466292 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Mar 17 17:42:23.468352 systemd[1]: Stopped target basic.target - Basic System.
Mar 17 17:42:23.470814 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Mar 17 17:42:23.473304 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Mar 17 17:42:23.475737 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Mar 17 17:42:23.478290 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Mar 17 17:42:23.480855 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Mar 17 17:42:23.483633 systemd[1]: Stopped target sysinit.target - System Initialization.
Mar 17 17:42:23.486073 systemd[1]: Stopped target local-fs.target - Local File Systems.
Mar 17 17:42:23.488565 systemd[1]: Stopped target swap.target - Swaps.
Mar 17 17:42:23.490756 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Mar 17 17:42:23.490909 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Mar 17 17:42:23.493746 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Mar 17 17:42:23.495467 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 17 17:42:23.497951 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Mar 17 17:42:23.498132 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 17 17:42:23.500646 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Mar 17 17:42:23.500778 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Mar 17 17:42:23.503712 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Mar 17 17:42:23.503833 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Mar 17 17:42:23.506031 systemd[1]: Stopped target paths.target - Path Units.
Mar 17 17:42:23.508210 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Mar 17 17:42:23.511400 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 17 17:42:23.513944 systemd[1]: Stopped target slices.target - Slice Units.
Mar 17 17:42:23.516398 systemd[1]: Stopped target sockets.target - Socket Units.
Mar 17 17:42:23.518898 systemd[1]: iscsid.socket: Deactivated successfully.
Mar 17 17:42:23.518999 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Mar 17 17:42:23.521127 systemd[1]: iscsiuio.socket: Deactivated successfully.
Mar 17 17:42:23.521214 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Mar 17 17:42:23.523660 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Mar 17 17:42:23.523779 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Mar 17 17:42:23.526807 systemd[1]: ignition-files.service: Deactivated successfully.
Mar 17 17:42:23.526920 systemd[1]: Stopped ignition-files.service - Ignition (files).
Mar 17 17:42:23.541528 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Mar 17 17:42:23.542657 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Mar 17 17:42:23.542807 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 17 17:42:23.546235 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Mar 17 17:42:23.547544 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Mar 17 17:42:23.547866 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 17 17:42:23.555465 ignition[1012]: INFO : Ignition 2.20.0
Mar 17 17:42:23.555465 ignition[1012]: INFO : Stage: umount
Mar 17 17:42:23.555465 ignition[1012]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 17 17:42:23.555465 ignition[1012]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 17 17:42:23.550645 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Mar 17 17:42:23.564580 ignition[1012]: INFO : umount: umount passed
Mar 17 17:42:23.564580 ignition[1012]: INFO : Ignition finished successfully
Mar 17 17:42:23.550954 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Mar 17 17:42:23.559117 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Mar 17 17:42:23.559298 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Mar 17 17:42:23.560930 systemd[1]: ignition-mount.service: Deactivated successfully.
Mar 17 17:42:23.561069 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Mar 17 17:42:23.564547 systemd[1]: Stopped target network.target - Network.
Mar 17 17:42:23.565748 systemd[1]: ignition-disks.service: Deactivated successfully.
Mar 17 17:42:23.565857 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Mar 17 17:42:23.568218 systemd[1]: ignition-kargs.service: Deactivated successfully.
Mar 17 17:42:23.568303 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Mar 17 17:42:23.569416 systemd[1]: ignition-setup.service: Deactivated successfully.
Mar 17 17:42:23.569507 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Mar 17 17:42:23.569884 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Mar 17 17:42:23.569947 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Mar 17 17:42:23.570587 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Mar 17 17:42:23.571199 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Mar 17 17:42:23.579232 systemd[1]: systemd-resolved.service: Deactivated successfully.
Mar 17 17:42:23.579424 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Mar 17 17:42:23.586068 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Mar 17 17:42:23.586190 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Mar 17 17:42:23.586989 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Mar 17 17:42:23.587058 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 17 17:42:23.591215 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Mar 17 17:42:23.591563 systemd[1]: systemd-networkd.service: Deactivated successfully.
Mar 17 17:42:23.591702 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Mar 17 17:42:23.594900 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Mar 17 17:42:23.595204 systemd[1]: sysroot-boot.service: Deactivated successfully.
Mar 17 17:42:23.595340 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Mar 17 17:42:23.597929 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Mar 17 17:42:23.598021 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Mar 17 17:42:23.600264 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Mar 17 17:42:23.600427 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Mar 17 17:42:23.615577 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Mar 17 17:42:23.617513 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Mar 17 17:42:23.617619 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Mar 17 17:42:23.619739 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Mar 17 17:42:23.619796 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Mar 17 17:42:23.622155 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Mar 17 17:42:23.622218 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Mar 17 17:42:23.624161 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 17 17:42:23.627915 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Mar 17 17:42:23.645706 systemd[1]: network-cleanup.service: Deactivated successfully.
Mar 17 17:42:23.645871 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Mar 17 17:42:23.648048 systemd[1]: systemd-udevd.service: Deactivated successfully.
Mar 17 17:42:23.648255 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 17 17:42:23.651796 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Mar 17 17:42:23.651904 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Mar 17 17:42:23.653175 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Mar 17 17:42:23.653239 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 17 17:42:23.655173 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Mar 17 17:42:23.655259 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Mar 17 17:42:23.657724 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Mar 17 17:42:23.657809 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Mar 17 17:42:23.659308 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Mar 17 17:42:23.659381 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 17 17:42:23.670584 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Mar 17 17:42:23.672736 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Mar 17 17:42:23.672808 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 17 17:42:23.676404 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 17 17:42:23.676466 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 17 17:42:23.680589 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Mar 17 17:42:23.681963 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Mar 17 17:42:23.683590 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Mar 17 17:42:23.684703 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Mar 17 17:42:23.687464 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Mar 17 17:42:23.702451 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Mar 17 17:42:23.712430 systemd[1]: Switching root.
Mar 17 17:42:23.746267 systemd-journald[194]: Journal stopped
Mar 17 17:42:24.977046 systemd-journald[194]: Received SIGTERM from PID 1 (systemd).
Mar 17 17:42:24.977133 kernel: SELinux: policy capability network_peer_controls=1
Mar 17 17:42:24.977166 kernel: SELinux: policy capability open_perms=1
Mar 17 17:42:24.977187 kernel: SELinux: policy capability extended_socket_class=1
Mar 17 17:42:24.977203 kernel: SELinux: policy capability always_check_network=0
Mar 17 17:42:24.977218 kernel: SELinux: policy capability cgroup_seclabel=1
Mar 17 17:42:24.977235 kernel: SELinux: policy capability nnp_nosuid_transition=1
Mar 17 17:42:24.977251 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Mar 17 17:42:24.977268 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Mar 17 17:42:24.977286 kernel: audit: type=1403 audit(1742233344.157:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Mar 17 17:42:24.977304 systemd[1]: Successfully loaded SELinux policy in 43.189ms.
Mar 17 17:42:24.977382 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 13.614ms.
Mar 17 17:42:24.977415 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Mar 17 17:42:24.977433 systemd[1]: Detected virtualization kvm.
Mar 17 17:42:24.977450 systemd[1]: Detected architecture x86-64.
Mar 17 17:42:24.977466 systemd[1]: Detected first boot.
Mar 17 17:42:24.977483 systemd[1]: Initializing machine ID from VM UUID.
Mar 17 17:42:24.977507 zram_generator::config[1058]: No configuration found.
Mar 17 17:42:24.977526 kernel: Guest personality initialized and is inactive
Mar 17 17:42:24.977541 kernel: VMCI host device registered (name=vmci, major=10, minor=125)
Mar 17 17:42:24.977568 kernel: Initialized host personality
Mar 17 17:42:24.977584 kernel: NET: Registered PF_VSOCK protocol family
Mar 17 17:42:24.977600 systemd[1]: Populated /etc with preset unit settings.
Mar 17 17:42:24.977618 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Mar 17 17:42:24.977635 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Mar 17 17:42:24.977652 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Mar 17 17:42:24.977668 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Mar 17 17:42:24.977685 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Mar 17 17:42:24.977703 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Mar 17 17:42:24.977724 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Mar 17 17:42:24.977743 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Mar 17 17:42:24.977761 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Mar 17 17:42:24.977778 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Mar 17 17:42:24.977794 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Mar 17 17:42:24.977811 systemd[1]: Created slice user.slice - User and Session Slice.
Mar 17 17:42:24.977828 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 17 17:42:24.977845 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 17 17:42:24.977862 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Mar 17 17:42:24.977884 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Mar 17 17:42:24.977901 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Mar 17 17:42:24.977919 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Mar 17 17:42:24.977936 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Mar 17 17:42:24.977953 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 17 17:42:24.977969 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Mar 17 17:42:24.977986 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Mar 17 17:42:24.978007 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Mar 17 17:42:24.978024 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Mar 17 17:42:24.978053 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 17 17:42:24.978071 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Mar 17 17:42:24.978088 systemd[1]: Reached target slices.target - Slice Units.
Mar 17 17:42:24.978105 systemd[1]: Reached target swap.target - Swaps.
Mar 17 17:42:24.978124 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Mar 17 17:42:24.978141 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Mar 17 17:42:24.978158 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Mar 17 17:42:24.978174 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Mar 17 17:42:24.978196 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Mar 17 17:42:24.978213 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 17 17:42:24.978230 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Mar 17 17:42:24.978246 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Mar 17 17:42:24.978263 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Mar 17 17:42:24.978280 systemd[1]: Mounting media.mount - External Media Directory...
Mar 17 17:42:24.978297 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 17 17:42:24.981072 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Mar 17 17:42:24.981108 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Mar 17 17:42:24.981128 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Mar 17 17:42:24.981146 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Mar 17 17:42:24.981162 systemd[1]: Reached target machines.target - Containers.
Mar 17 17:42:24.981179 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Mar 17 17:42:24.981196 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 17 17:42:24.981213 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Mar 17 17:42:24.981230 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Mar 17 17:42:24.981246 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 17 17:42:24.981266 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Mar 17 17:42:24.981282 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 17 17:42:24.981299 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Mar 17 17:42:24.981331 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 17 17:42:24.981355 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Mar 17 17:42:24.981371 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Mar 17 17:42:24.981387 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Mar 17 17:42:24.981403 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Mar 17 17:42:24.981423 systemd[1]: Stopped systemd-fsck-usr.service.
Mar 17 17:42:24.981441 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Mar 17 17:42:24.981457 kernel: loop: module loaded
Mar 17 17:42:24.981473 kernel: fuse: init (API version 7.39)
Mar 17 17:42:24.981488 systemd[1]: Starting systemd-journald.service - Journal Service...
Mar 17 17:42:24.981506 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Mar 17 17:42:24.981523 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Mar 17 17:42:24.981539 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Mar 17 17:42:24.981555 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Mar 17 17:42:24.981576 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Mar 17 17:42:24.981622 systemd-journald[1129]: Collecting audit messages is disabled.
Mar 17 17:42:24.981652 kernel: ACPI: bus type drm_connector registered
Mar 17 17:42:24.981673 systemd[1]: verity-setup.service: Deactivated successfully.
Mar 17 17:42:24.981690 systemd[1]: Stopped verity-setup.service.
Mar 17 17:42:24.981707 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 17 17:42:24.981723 systemd-journald[1129]: Journal started
Mar 17 17:42:24.981752 systemd-journald[1129]: Runtime Journal (/run/log/journal/7764b01272004fc9bebfbdf7a7c2a527) is 6M, max 48.2M, 42.2M free.
Mar 17 17:42:24.731729 systemd[1]: Queued start job for default target multi-user.target.
Mar 17 17:42:24.745283 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Mar 17 17:42:24.745758 systemd[1]: systemd-journald.service: Deactivated successfully.
Mar 17 17:42:24.990349 systemd[1]: Started systemd-journald.service - Journal Service.
Mar 17 17:42:24.992594 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Mar 17 17:42:24.993992 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Mar 17 17:42:24.995452 systemd[1]: Mounted media.mount - External Media Directory.
Mar 17 17:42:24.996736 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Mar 17 17:42:24.998207 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Mar 17 17:42:24.999678 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Mar 17 17:42:25.001233 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Mar 17 17:42:25.003011 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 17 17:42:25.004949 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Mar 17 17:42:25.005228 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Mar 17 17:42:25.006994 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 17 17:42:25.007262 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 17 17:42:25.008975 systemd[1]: modprobe@drm.service: Deactivated successfully.
Mar 17 17:42:25.009255 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Mar 17 17:42:25.010945 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 17 17:42:25.011218 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 17 17:42:25.012980 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Mar 17 17:42:25.013239 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Mar 17 17:42:25.014885 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 17 17:42:25.015126 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 17 17:42:25.016786 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Mar 17 17:42:25.018725 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Mar 17 17:42:25.020862 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Mar 17 17:42:25.022763 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Mar 17 17:42:25.037613 systemd[1]: Reached target network-pre.target - Preparation for Network.
Mar 17 17:42:25.052556 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Mar 17 17:42:25.055823 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Mar 17 17:42:25.057508 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Mar 17 17:42:25.057557 systemd[1]: Reached target local-fs.target - Local File Systems.
Mar 17 17:42:25.060272 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Mar 17 17:42:25.063421 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Mar 17 17:42:25.066463 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Mar 17 17:42:25.068270 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 17 17:42:25.072056 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Mar 17 17:42:25.077264 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Mar 17 17:42:25.079081 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Mar 17 17:42:25.081189 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Mar 17 17:42:25.082406 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Mar 17 17:42:25.085809 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 17 17:42:25.090564 systemd-journald[1129]: Time spent on flushing to /var/log/journal/7764b01272004fc9bebfbdf7a7c2a527 is 32.859ms for 1051 entries.
Mar 17 17:42:25.090564 systemd-journald[1129]: System Journal (/var/log/journal/7764b01272004fc9bebfbdf7a7c2a527) is 8M, max 195.6M, 187.6M free.
Mar 17 17:42:25.148971 systemd-journald[1129]: Received client request to flush runtime journal.
Mar 17 17:42:25.149129 kernel: loop0: detected capacity change from 0 to 138176
Mar 17 17:42:25.092526 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Mar 17 17:42:25.098652 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Mar 17 17:42:25.103491 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Mar 17 17:42:25.105224 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Mar 17 17:42:25.106967 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Mar 17 17:42:25.108754 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 17 17:42:25.110482 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Mar 17 17:42:25.121499 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Mar 17 17:42:25.134943 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Mar 17 17:42:25.138065 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Mar 17 17:42:25.152839 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Mar 17 17:42:25.155271 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 17 17:42:25.167093 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Mar 17 17:42:25.173672 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Mar 17 17:42:25.173619 udevadm[1189]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Mar 17 17:42:25.189004 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Mar 17 17:42:25.198350 kernel: loop1: detected capacity change from 0 to 205544
Mar 17 17:42:25.198640 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Mar 17 17:42:25.223369 systemd-tmpfiles[1197]: ACLs are not supported, ignoring.
Mar 17 17:42:25.223388 systemd-tmpfiles[1197]: ACLs are not supported, ignoring.
Mar 17 17:42:25.230130 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 17 17:42:25.238360 kernel: loop2: detected capacity change from 0 to 147912
Mar 17 17:42:25.283665 kernel: loop3: detected capacity change from 0 to 138176
Mar 17 17:42:25.296613 kernel: loop4: detected capacity change from 0 to 205544
Mar 17 17:42:25.312382 kernel: loop5: detected capacity change from 0 to 147912
Mar 17 17:42:25.325567 (sd-merge)[1202]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Mar 17 17:42:25.326415 (sd-merge)[1202]: Merged extensions into '/usr'.
Mar 17 17:42:25.332344 systemd[1]: Reload requested from client PID 1178 ('systemd-sysext') (unit systemd-sysext.service)...
Mar 17 17:42:25.332368 systemd[1]: Reloading...
Mar 17 17:42:25.412547 zram_generator::config[1233]: No configuration found.
Mar 17 17:42:25.461671 ldconfig[1173]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Mar 17 17:42:25.541303 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 17 17:42:25.608280 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Mar 17 17:42:25.608467 systemd[1]: Reloading finished in 274 ms.
Mar 17 17:42:25.633100 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Mar 17 17:42:25.634831 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Mar 17 17:42:25.663145 systemd[1]: Starting ensure-sysext.service...
Mar 17 17:42:25.665367 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Mar 17 17:42:25.679373 systemd[1]: Reload requested from client PID 1267 ('systemctl') (unit ensure-sysext.service)...
Mar 17 17:42:25.679395 systemd[1]: Reloading...
Mar 17 17:42:25.691311 systemd-tmpfiles[1268]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Mar 17 17:42:25.691610 systemd-tmpfiles[1268]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Mar 17 17:42:25.692585 systemd-tmpfiles[1268]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Mar 17 17:42:25.692863 systemd-tmpfiles[1268]: ACLs are not supported, ignoring.
Mar 17 17:42:25.692946 systemd-tmpfiles[1268]: ACLs are not supported, ignoring.
Mar 17 17:42:25.697412 systemd-tmpfiles[1268]: Detected autofs mount point /boot during canonicalization of boot.
Mar 17 17:42:25.697424 systemd-tmpfiles[1268]: Skipping /boot
Mar 17 17:42:25.711125 systemd-tmpfiles[1268]: Detected autofs mount point /boot during canonicalization of boot.
Mar 17 17:42:25.711141 systemd-tmpfiles[1268]: Skipping /boot
Mar 17 17:42:25.743903 zram_generator::config[1297]: No configuration found.
Mar 17 17:42:25.867852 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 17 17:42:25.942425 systemd[1]: Reloading finished in 262 ms.
Mar 17 17:42:25.957213 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Mar 17 17:42:25.976433 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 17 17:42:25.986254 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Mar 17 17:42:25.988834 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Mar 17 17:42:25.991365 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Mar 17 17:42:25.995540 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Mar 17 17:42:26.001211 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 17 17:42:26.005762 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Mar 17 17:42:26.012778 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 17 17:42:26.013041 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 17 17:42:26.014680 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 17 17:42:26.017280 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 17 17:42:26.023036 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 17 17:42:26.024312 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 17 17:42:26.024471 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Mar 17 17:42:26.029500 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Mar 17 17:42:26.030515 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 17 17:42:26.032523 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Mar 17 17:42:26.034859 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 17 17:42:26.035416 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 17 17:42:26.037241 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 17 17:42:26.037476 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 17 17:42:26.040753 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 17 17:42:26.041051 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 17 17:42:26.050925 systemd-udevd[1341]: Using default interface naming scheme 'v255'.
Mar 17 17:42:26.050948 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Mar 17 17:42:26.051163 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Mar 17 17:42:26.056570 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Mar 17 17:42:26.058633 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Mar 17 17:42:26.065551 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 17 17:42:26.065856 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 17 17:42:26.069880 augenrules[1370]: No rules
Mar 17 17:42:26.076576 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 17 17:42:26.080220 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 17 17:42:26.085094 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 17 17:42:26.086296 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 17 17:42:26.086444 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Mar 17 17:42:26.086555 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 17 17:42:26.089989 systemd[1]: audit-rules.service: Deactivated successfully.
Mar 17 17:42:26.090301 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Mar 17 17:42:26.092026 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Mar 17 17:42:26.094226 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Mar 17 17:42:26.096635 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 17 17:42:26.096914 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 17 17:42:26.100073 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 17 17:42:26.100397 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 17 17:42:26.102028 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 17 17:42:26.104195 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 17 17:42:26.104471 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 17 17:42:26.120905 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Mar 17 17:42:26.131606 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 17 17:42:26.140531 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Mar 17 17:42:26.142034 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 17 17:42:26.143500 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 17 17:42:26.156500 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Mar 17 17:42:26.159488 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 17 17:42:26.164484 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 17 17:42:26.165955 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 17 17:42:26.166025 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Mar 17 17:42:26.168599 augenrules[1407]: /sbin/augenrules: No change
Mar 17 17:42:26.168434 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Mar 17 17:42:26.169615 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Mar 17 17:42:26.169643 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 17 17:42:26.170419 systemd[1]: Finished ensure-sysext.service.
Mar 17 17:42:26.171922 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 17 17:42:26.172199 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 17 17:42:26.176772 systemd[1]: modprobe@drm.service: Deactivated successfully.
Mar 17 17:42:26.177021 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Mar 17 17:42:26.194056 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 42 scanned by (udev-worker) (1394)
Mar 17 17:42:26.195594 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Mar 17 17:42:26.198915 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 17 17:42:26.199178 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 17 17:42:26.202676 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Mar 17 17:42:26.203703 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Mar 17 17:42:26.216508 augenrules[1442]: No rules
Mar 17 17:42:26.218090 systemd[1]: audit-rules.service: Deactivated successfully.
Mar 17 17:42:26.218397 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Mar 17 17:42:26.219984 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 17 17:42:26.220252 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 17 17:42:26.221104 systemd-resolved[1339]: Positive Trust Anchors:
Mar 17 17:42:26.222528 systemd-resolved[1339]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Mar 17 17:42:26.222564 systemd-resolved[1339]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Mar 17 17:42:26.228745 systemd-resolved[1339]: Defaulting to hostname 'linux'.
Mar 17 17:42:26.235611 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Mar 17 17:42:26.243002 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Mar 17 17:42:26.244435 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Mar 17 17:42:26.265504 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Mar 17 17:42:26.267385 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
Mar 17 17:42:26.276371 kernel: ACPI: button: Power Button [PWRF]
Mar 17 17:42:26.282775 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Mar 17 17:42:26.288424 systemd-networkd[1424]: lo: Link UP
Mar 17 17:42:26.288439 systemd-networkd[1424]: lo: Gained carrier
Mar 17 17:42:26.292116 systemd-networkd[1424]: Enumeration completed
Mar 17 17:42:26.292232 systemd[1]: Started systemd-networkd.service - Network Configuration.
Mar 17 17:42:26.293662 systemd[1]: Reached target network.target - Network.
Mar 17 17:42:26.295115 systemd-networkd[1424]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 17 17:42:26.295129 systemd-networkd[1424]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Mar 17 17:42:26.299336 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3
Mar 17 17:42:26.300647 systemd-networkd[1424]: eth0: Link UP
Mar 17 17:42:26.300660 systemd-networkd[1424]: eth0: Gained carrier
Mar 17 17:42:26.300680 systemd-networkd[1424]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 17 17:42:26.308564 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Mar 17 17:42:26.321441 systemd-networkd[1424]: eth0: DHCPv4 address 10.0.0.14/16, gateway 10.0.0.1 acquired from 10.0.0.1
Mar 17 17:42:26.323906 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Mar 17 17:42:26.327892 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Mar 17 17:42:26.341414 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device
Mar 17 17:42:26.347156 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Mar 17 17:42:26.347459 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Mar 17 17:42:26.347717 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Mar 17 17:42:26.357937 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 17 17:42:26.375114 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Mar 17 17:42:26.793264 systemd-resolved[1339]: Clock change detected. Flushing caches.
Mar 17 17:42:26.794335 systemd-timesyncd[1436]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Mar 17 17:42:26.794385 systemd-timesyncd[1436]: Initial clock synchronization to Mon 2025-03-17 17:42:26.793192 UTC.
Mar 17 17:42:26.794910 systemd[1]: Reached target time-set.target - System Time Set.
Mar 17 17:42:26.798766 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Mar 17 17:42:26.807212 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 17 17:42:26.807955 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 17 17:42:26.812861 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 17 17:42:26.825443 kernel: mousedev: PS/2 mouse device common for all mice
Mar 17 17:42:26.837338 kernel: kvm_amd: TSC scaling supported
Mar 17 17:42:26.837419 kernel: kvm_amd: Nested Virtualization enabled
Mar 17 17:42:26.837433 kernel: kvm_amd: Nested Paging enabled
Mar 17 17:42:26.838398 kernel: kvm_amd: LBR virtualization supported
Mar 17 17:42:26.838427 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported
Mar 17 17:42:26.839511 kernel: kvm_amd: Virtual GIF supported
Mar 17 17:42:26.860278 kernel: EDAC MC: Ver: 3.0.0
Mar 17 17:42:26.882315 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 17 17:42:26.899018 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Mar 17 17:42:26.908402 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Mar 17 17:42:26.918594 lvm[1475]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Mar 17 17:42:26.956927 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Mar 17 17:42:26.958591 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Mar 17 17:42:26.959765 systemd[1]: Reached target sysinit.target - System Initialization.
Mar 17 17:42:26.960982 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Mar 17 17:42:26.962349 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Mar 17 17:42:26.963848 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Mar 17 17:42:26.965143 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Mar 17 17:42:26.966464 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Mar 17 17:42:26.967750 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Mar 17 17:42:26.967786 systemd[1]: Reached target paths.target - Path Units.
Mar 17 17:42:26.968748 systemd[1]: Reached target timers.target - Timer Units.
Mar 17 17:42:26.970654 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Mar 17 17:42:26.974067 systemd[1]: Starting docker.socket - Docker Socket for the API...
Mar 17 17:42:26.978656 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Mar 17 17:42:26.980305 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Mar 17 17:42:26.981606 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Mar 17 17:42:26.989305 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Mar 17 17:42:26.991071 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Mar 17 17:42:26.994037 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Mar 17 17:42:26.995869 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Mar 17 17:42:26.997148 systemd[1]: Reached target sockets.target - Socket Units.
Mar 17 17:42:26.998164 systemd[1]: Reached target basic.target - Basic System.
Mar 17 17:42:26.999284 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Mar 17 17:42:26.999320 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Mar 17 17:42:27.000503 systemd[1]: Starting containerd.service - containerd container runtime...
Mar 17 17:42:27.002803 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Mar 17 17:42:27.006799 lvm[1479]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Mar 17 17:42:27.007370 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Mar 17 17:42:27.010833 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Mar 17 17:42:27.012141 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Mar 17 17:42:27.014596 jq[1482]: false
Mar 17 17:42:27.015482 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Mar 17 17:42:27.021891 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Mar 17 17:42:27.025462 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Mar 17 17:42:27.028473 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Mar 17 17:42:27.036969 systemd[1]: Starting systemd-logind.service - User Login Management...
Mar 17 17:42:27.038369 extend-filesystems[1483]: Found loop3
Mar 17 17:42:27.038369 extend-filesystems[1483]: Found loop4
Mar 17 17:42:27.038369 extend-filesystems[1483]: Found loop5
Mar 17 17:42:27.038369 extend-filesystems[1483]: Found sr0
Mar 17 17:42:27.038369 extend-filesystems[1483]: Found vda
Mar 17 17:42:27.038369 extend-filesystems[1483]: Found vda1
Mar 17 17:42:27.038369 extend-filesystems[1483]: Found vda2
Mar 17 17:42:27.038369 extend-filesystems[1483]: Found vda3
Mar 17 17:42:27.038369 extend-filesystems[1483]: Found usr
Mar 17 17:42:27.038369 extend-filesystems[1483]: Found vda4
Mar 17 17:42:27.038369 extend-filesystems[1483]: Found vda6
Mar 17 17:42:27.038369 extend-filesystems[1483]: Found vda7
Mar 17 17:42:27.038369 extend-filesystems[1483]: Found vda9
Mar 17 17:42:27.038369 extend-filesystems[1483]: Checking size of /dev/vda9
Mar 17 17:42:27.102030 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Mar 17 17:42:27.103591 extend-filesystems[1483]: Resized partition /dev/vda9
Mar 17 17:42:27.054794 dbus-daemon[1481]: [system] SELinux support is enabled
Mar 17 17:42:27.041099 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Mar 17 17:42:27.105608 extend-filesystems[1504]: resize2fs 1.47.1 (20-May-2024)
Mar 17 17:42:27.134968 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Mar 17 17:42:27.135030 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 42 scanned by (udev-worker) (1402)
Mar 17 17:42:27.041700 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Mar 17 17:42:27.136563 extend-filesystems[1504]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Mar 17 17:42:27.136563 extend-filesystems[1504]: old_desc_blocks = 1, new_desc_blocks = 1
Mar 17 17:42:27.136563 extend-filesystems[1504]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Mar 17 17:42:27.046083 systemd[1]: Starting update-engine.service - Update Engine...
Mar 17 17:42:27.162838 jq[1501]: true
Mar 17 17:42:27.163078 sshd_keygen[1503]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Mar 17 17:42:27.163201 extend-filesystems[1483]: Resized filesystem in /dev/vda9
Mar 17 17:42:27.050998 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Mar 17 17:42:27.175265 update_engine[1498]: I20250317 17:42:27.080734 1498 main.cc:92] Flatcar Update Engine starting
Mar 17 17:42:27.175265 update_engine[1498]: I20250317 17:42:27.086652 1498 update_check_scheduler.cc:74] Next update check in 2m0s
Mar 17 17:42:27.053815 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Mar 17 17:42:27.055380 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Mar 17 17:42:27.177255 tar[1506]: linux-amd64/helm
Mar 17 17:42:27.062020 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Mar 17 17:42:27.177622 jq[1507]: true
Mar 17 17:42:27.062319 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Mar 17 17:42:27.062705 systemd[1]: motdgen.service: Deactivated successfully.
Mar 17 17:42:27.063006 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Mar 17 17:42:27.067195 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Mar 17 17:42:27.067615 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Mar 17 17:42:27.082976 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Mar 17 17:42:27.083000 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Mar 17 17:42:27.086388 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Mar 17 17:42:27.086413 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Mar 17 17:42:27.088198 systemd[1]: Started update-engine.service - Update Engine.
Mar 17 17:42:27.089071 (ntainerd)[1511]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Mar 17 17:42:27.094988 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Mar 17 17:42:27.142752 systemd[1]: extend-filesystems.service: Deactivated successfully.
Mar 17 17:42:27.143111 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Mar 17 17:42:27.166681 systemd-logind[1494]: Watching system buttons on /dev/input/event1 (Power Button)
Mar 17 17:42:27.166714 systemd-logind[1494]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Mar 17 17:42:27.179640 systemd-logind[1494]: New seat seat0.
Mar 17 17:42:27.183325 bash[1538]: Updated "/home/core/.ssh/authorized_keys"
Mar 17 17:42:27.186745 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Mar 17 17:42:27.188551 systemd[1]: Started systemd-logind.service - User Login Management.
Mar 17 17:42:27.190342 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Mar 17 17:42:27.199335 locksmithd[1516]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Mar 17 17:42:27.210688 systemd[1]: Starting issuegen.service - Generate /run/issue...
Mar 17 17:42:27.212472 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Mar 17 17:42:27.225553 systemd[1]: issuegen.service: Deactivated successfully.
Mar 17 17:42:27.225989 systemd[1]: Finished issuegen.service - Generate /run/issue.
Mar 17 17:42:27.235579 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Mar 17 17:42:27.249974 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Mar 17 17:42:27.258748 systemd[1]: Started getty@tty1.service - Getty on tty1.
Mar 17 17:42:27.261695 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Mar 17 17:42:27.262967 systemd[1]: Reached target getty.target - Login Prompts.
Mar 17 17:42:27.337840 containerd[1511]: time="2025-03-17T17:42:27.335463038Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23
Mar 17 17:42:27.365733 containerd[1511]: time="2025-03-17T17:42:27.365662683Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Mar 17 17:42:27.368340 containerd[1511]: time="2025-03-17T17:42:27.367681579Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.83-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Mar 17 17:42:27.368340 containerd[1511]: time="2025-03-17T17:42:27.367714831Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Mar 17 17:42:27.368340 containerd[1511]: time="2025-03-17T17:42:27.367732414Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Mar 17 17:42:27.368340 containerd[1511]: time="2025-03-17T17:42:27.367930476Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Mar 17 17:42:27.368340 containerd[1511]: time="2025-03-17T17:42:27.367945103Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Mar 17 17:42:27.368340 containerd[1511]: time="2025-03-17T17:42:27.368024692Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Mar 17 17:42:27.368340 containerd[1511]: time="2025-03-17T17:42:27.368042255Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Mar 17 17:42:27.368613 containerd[1511]: time="2025-03-17T17:42:27.368592257Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Mar 17 17:42:27.368684 containerd[1511]: time="2025-03-17T17:42:27.368669081Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Mar 17 17:42:27.368737 containerd[1511]: time="2025-03-17T17:42:27.368723844Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Mar 17 17:42:27.368781 containerd[1511]: time="2025-03-17T17:42:27.368769810Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Mar 17 17:42:27.368931 containerd[1511]: time="2025-03-17T17:42:27.368904082Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Mar 17 17:42:27.369348 containerd[1511]: time="2025-03-17T17:42:27.369318869Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Mar 17 17:42:27.369621 containerd[1511]: time="2025-03-17T17:42:27.369598394Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Mar 17 17:42:27.369682 containerd[1511]: time="2025-03-17T17:42:27.369668565Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Mar 17 17:42:27.369823 containerd[1511]: time="2025-03-17T17:42:27.369807546Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Mar 17 17:42:27.369941 containerd[1511]: time="2025-03-17T17:42:27.369920838Z" level=info msg="metadata content store policy set" policy=shared
Mar 17 17:42:27.377796 containerd[1511]: time="2025-03-17T17:42:27.377773044Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Mar 17 17:42:27.377896 containerd[1511]: time="2025-03-17T17:42:27.377880556Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Mar 17 17:42:27.377964 containerd[1511]: time="2025-03-17T17:42:27.377949866Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Mar 17 17:42:27.378019 containerd[1511]: time="2025-03-17T17:42:27.378007474Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Mar 17 17:42:27.378071 containerd[1511]: time="2025-03-17T17:42:27.378059181Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..."
type=io.containerd.runtime.v1 Mar 17 17:42:27.378273 containerd[1511]: time="2025-03-17T17:42:27.378255168Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Mar 17 17:42:27.378637 containerd[1511]: time="2025-03-17T17:42:27.378597240Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Mar 17 17:42:27.378799 containerd[1511]: time="2025-03-17T17:42:27.378770975Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Mar 17 17:42:27.378827 containerd[1511]: time="2025-03-17T17:42:27.378798036Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Mar 17 17:42:27.378827 containerd[1511]: time="2025-03-17T17:42:27.378817272Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Mar 17 17:42:27.378863 containerd[1511]: time="2025-03-17T17:42:27.378834965Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Mar 17 17:42:27.378863 containerd[1511]: time="2025-03-17T17:42:27.378850054Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Mar 17 17:42:27.378929 containerd[1511]: time="2025-03-17T17:42:27.378865352Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Mar 17 17:42:27.378929 containerd[1511]: time="2025-03-17T17:42:27.378881162Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Mar 17 17:42:27.378995 containerd[1511]: time="2025-03-17T17:42:27.378928160Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." 
type=io.containerd.service.v1 Mar 17 17:42:27.378995 containerd[1511]: time="2025-03-17T17:42:27.378969568Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Mar 17 17:42:27.378995 containerd[1511]: time="2025-03-17T17:42:27.378987161Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Mar 17 17:42:27.379052 containerd[1511]: time="2025-03-17T17:42:27.379001778Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Mar 17 17:42:27.379052 containerd[1511]: time="2025-03-17T17:42:27.379024130Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Mar 17 17:42:27.379052 containerd[1511]: time="2025-03-17T17:42:27.379039810Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Mar 17 17:42:27.379114 containerd[1511]: time="2025-03-17T17:42:27.379053465Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Mar 17 17:42:27.379114 containerd[1511]: time="2025-03-17T17:42:27.379067041Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Mar 17 17:42:27.379114 containerd[1511]: time="2025-03-17T17:42:27.379080566Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Mar 17 17:42:27.379114 containerd[1511]: time="2025-03-17T17:42:27.379096596Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Mar 17 17:42:27.379187 containerd[1511]: time="2025-03-17T17:42:27.379111765Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Mar 17 17:42:27.379187 containerd[1511]: time="2025-03-17T17:42:27.379127434Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." 
type=io.containerd.grpc.v1 Mar 17 17:42:27.379187 containerd[1511]: time="2025-03-17T17:42:27.379142903Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Mar 17 17:42:27.379187 containerd[1511]: time="2025-03-17T17:42:27.379160756Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Mar 17 17:42:27.379187 containerd[1511]: time="2025-03-17T17:42:27.379176316Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Mar 17 17:42:27.379311 containerd[1511]: time="2025-03-17T17:42:27.379191454Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Mar 17 17:42:27.379311 containerd[1511]: time="2025-03-17T17:42:27.379206422Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Mar 17 17:42:27.379311 containerd[1511]: time="2025-03-17T17:42:27.379224306Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Mar 17 17:42:27.379311 containerd[1511]: time="2025-03-17T17:42:27.379264130Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Mar 17 17:42:27.379311 containerd[1511]: time="2025-03-17T17:42:27.379296892Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Mar 17 17:42:27.379400 containerd[1511]: time="2025-03-17T17:42:27.379311970Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Mar 17 17:42:27.380285 containerd[1511]: time="2025-03-17T17:42:27.380261591Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Mar 17 17:42:27.380329 containerd[1511]: time="2025-03-17T17:42:27.380294032Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." 
error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Mar 17 17:42:27.380329 containerd[1511]: time="2025-03-17T17:42:27.380307998Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Mar 17 17:42:27.380329 containerd[1511]: time="2025-03-17T17:42:27.380321744Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Mar 17 17:42:27.380385 containerd[1511]: time="2025-03-17T17:42:27.380332955Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Mar 17 17:42:27.380385 containerd[1511]: time="2025-03-17T17:42:27.380347662Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Mar 17 17:42:27.380385 containerd[1511]: time="2025-03-17T17:42:27.380359064Z" level=info msg="NRI interface is disabled by configuration." Mar 17 17:42:27.380385 containerd[1511]: time="2025-03-17T17:42:27.380370595Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Mar 17 17:42:27.380754 containerd[1511]: time="2025-03-17T17:42:27.380682600Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false 
UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Mar 17 17:42:27.380754 containerd[1511]: time="2025-03-17T17:42:27.380753503Z" level=info msg="Connect containerd service" Mar 17 17:42:27.380920 containerd[1511]: time="2025-03-17T17:42:27.380785724Z" level=info msg="using legacy CRI server" Mar 17 17:42:27.380920 containerd[1511]: time="2025-03-17T17:42:27.380795121Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Mar 17 17:42:27.380959 containerd[1511]: time="2025-03-17T17:42:27.380916529Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Mar 17 17:42:27.381766 containerd[1511]: time="2025-03-17T17:42:27.381731718Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Mar 17 17:42:27.382192 containerd[1511]: time="2025-03-17T17:42:27.381924860Z" level=info msg="Start subscribing containerd event" Mar 17 17:42:27.382192 containerd[1511]: time="2025-03-17T17:42:27.382070093Z" level=info msg="Start recovering state" Mar 17 17:42:27.382192 containerd[1511]: time="2025-03-17T17:42:27.382130215Z" level=info msg="Start event monitor" Mar 17 17:42:27.382192 containerd[1511]: time="2025-03-17T17:42:27.382147999Z" level=info msg="Start 
snapshots syncer" Mar 17 17:42:27.382192 containerd[1511]: time="2025-03-17T17:42:27.382068469Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Mar 17 17:42:27.382326 containerd[1511]: time="2025-03-17T17:42:27.382156124Z" level=info msg="Start cni network conf syncer for default" Mar 17 17:42:27.382326 containerd[1511]: time="2025-03-17T17:42:27.382280998Z" level=info msg="Start streaming server" Mar 17 17:42:27.382404 containerd[1511]: time="2025-03-17T17:42:27.382225193Z" level=info msg=serving... address=/run/containerd/containerd.sock Mar 17 17:42:27.382533 containerd[1511]: time="2025-03-17T17:42:27.382517922Z" level=info msg="containerd successfully booted in 0.048576s" Mar 17 17:42:27.382650 systemd[1]: Started containerd.service - containerd container runtime. Mar 17 17:42:27.562157 tar[1506]: linux-amd64/LICENSE Mar 17 17:42:27.562157 tar[1506]: linux-amd64/README.md Mar 17 17:42:27.581735 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Mar 17 17:42:28.633640 systemd-networkd[1424]: eth0: Gained IPv6LL Mar 17 17:42:28.638098 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Mar 17 17:42:28.640446 systemd[1]: Reached target network-online.target - Network is Online. Mar 17 17:42:28.655670 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Mar 17 17:42:28.659407 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 17 17:42:28.662869 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Mar 17 17:42:28.682322 systemd[1]: coreos-metadata.service: Deactivated successfully. Mar 17 17:42:28.682618 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Mar 17 17:42:28.685157 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Mar 17 17:42:28.691111 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. 
Mar 17 17:42:30.106747 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 17 17:42:30.108668 systemd[1]: Reached target multi-user.target - Multi-User System. Mar 17 17:42:30.110133 systemd[1]: Startup finished in 1.393s (kernel) + 7.434s (initrd) + 5.577s (userspace) = 14.405s. Mar 17 17:42:30.113273 (kubelet)[1594]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 17 17:42:30.714015 kubelet[1594]: E0317 17:42:30.713939 1594 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 17 17:42:30.719120 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 17 17:42:30.719404 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 17 17:42:30.719938 systemd[1]: kubelet.service: Consumed 1.914s CPU time, 236.3M memory peak. Mar 17 17:42:31.610937 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Mar 17 17:42:31.623702 systemd[1]: Started sshd@0-10.0.0.14:22-10.0.0.1:49462.service - OpenSSH per-connection server daemon (10.0.0.1:49462). Mar 17 17:42:31.675371 sshd[1607]: Accepted publickey for core from 10.0.0.1 port 49462 ssh2: RSA SHA256:sryA1eJfWIez1lK1HnLUziwsxLkQurNjq13plyk0DV0 Mar 17 17:42:31.678018 sshd-session[1607]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:42:31.690492 systemd-logind[1494]: New session 1 of user core. Mar 17 17:42:31.692003 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Mar 17 17:42:31.699564 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... 
Mar 17 17:42:31.712335 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Mar 17 17:42:31.733540 systemd[1]: Starting user@500.service - User Manager for UID 500... Mar 17 17:42:31.736733 (systemd)[1611]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Mar 17 17:42:31.739068 systemd-logind[1494]: New session c1 of user core. Mar 17 17:42:31.907073 systemd[1611]: Queued start job for default target default.target. Mar 17 17:42:31.921157 systemd[1611]: Created slice app.slice - User Application Slice. Mar 17 17:42:31.921191 systemd[1611]: Reached target paths.target - Paths. Mar 17 17:42:31.921261 systemd[1611]: Reached target timers.target - Timers. Mar 17 17:42:31.923225 systemd[1611]: Starting dbus.socket - D-Bus User Message Bus Socket... Mar 17 17:42:31.934783 systemd[1611]: Listening on dbus.socket - D-Bus User Message Bus Socket. Mar 17 17:42:31.934955 systemd[1611]: Reached target sockets.target - Sockets. Mar 17 17:42:31.935007 systemd[1611]: Reached target basic.target - Basic System. Mar 17 17:42:31.935053 systemd[1611]: Reached target default.target - Main User Target. Mar 17 17:42:31.935088 systemd[1611]: Startup finished in 189ms. Mar 17 17:42:31.935668 systemd[1]: Started user@500.service - User Manager for UID 500. Mar 17 17:42:31.938003 systemd[1]: Started session-1.scope - Session 1 of User core. Mar 17 17:42:32.007709 systemd[1]: Started sshd@1-10.0.0.14:22-10.0.0.1:49478.service - OpenSSH per-connection server daemon (10.0.0.1:49478). Mar 17 17:42:32.048806 sshd[1622]: Accepted publickey for core from 10.0.0.1 port 49478 ssh2: RSA SHA256:sryA1eJfWIez1lK1HnLUziwsxLkQurNjq13plyk0DV0 Mar 17 17:42:32.051185 sshd-session[1622]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:42:32.057598 systemd-logind[1494]: New session 2 of user core. Mar 17 17:42:32.067603 systemd[1]: Started session-2.scope - Session 2 of User core. 
Mar 17 17:42:32.123531 sshd[1624]: Connection closed by 10.0.0.1 port 49478 Mar 17 17:42:32.123933 sshd-session[1622]: pam_unix(sshd:session): session closed for user core Mar 17 17:42:32.137352 systemd[1]: sshd@1-10.0.0.14:22-10.0.0.1:49478.service: Deactivated successfully. Mar 17 17:42:32.139426 systemd[1]: session-2.scope: Deactivated successfully. Mar 17 17:42:32.141068 systemd-logind[1494]: Session 2 logged out. Waiting for processes to exit. Mar 17 17:42:32.151525 systemd[1]: Started sshd@2-10.0.0.14:22-10.0.0.1:49492.service - OpenSSH per-connection server daemon (10.0.0.1:49492). Mar 17 17:42:32.152746 systemd-logind[1494]: Removed session 2. Mar 17 17:42:32.187511 sshd[1629]: Accepted publickey for core from 10.0.0.1 port 49492 ssh2: RSA SHA256:sryA1eJfWIez1lK1HnLUziwsxLkQurNjq13plyk0DV0 Mar 17 17:42:32.189188 sshd-session[1629]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:42:32.194205 systemd-logind[1494]: New session 3 of user core. Mar 17 17:42:32.203461 systemd[1]: Started session-3.scope - Session 3 of User core. Mar 17 17:42:32.254600 sshd[1632]: Connection closed by 10.0.0.1 port 49492 Mar 17 17:42:32.255057 sshd-session[1629]: pam_unix(sshd:session): session closed for user core Mar 17 17:42:32.268829 systemd[1]: sshd@2-10.0.0.14:22-10.0.0.1:49492.service: Deactivated successfully. Mar 17 17:42:32.271190 systemd[1]: session-3.scope: Deactivated successfully. Mar 17 17:42:32.273151 systemd-logind[1494]: Session 3 logged out. Waiting for processes to exit. Mar 17 17:42:32.274595 systemd[1]: Started sshd@3-10.0.0.14:22-10.0.0.1:49506.service - OpenSSH per-connection server daemon (10.0.0.1:49506). Mar 17 17:42:32.275473 systemd-logind[1494]: Removed session 3. 
Mar 17 17:42:32.316529 sshd[1637]: Accepted publickey for core from 10.0.0.1 port 49506 ssh2: RSA SHA256:sryA1eJfWIez1lK1HnLUziwsxLkQurNjq13plyk0DV0 Mar 17 17:42:32.318353 sshd-session[1637]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:42:32.322936 systemd-logind[1494]: New session 4 of user core. Mar 17 17:42:32.333382 systemd[1]: Started session-4.scope - Session 4 of User core. Mar 17 17:42:32.387968 sshd[1640]: Connection closed by 10.0.0.1 port 49506 Mar 17 17:42:32.388426 sshd-session[1637]: pam_unix(sshd:session): session closed for user core Mar 17 17:42:32.406589 systemd[1]: sshd@3-10.0.0.14:22-10.0.0.1:49506.service: Deactivated successfully. Mar 17 17:42:32.408828 systemd[1]: session-4.scope: Deactivated successfully. Mar 17 17:42:32.410428 systemd-logind[1494]: Session 4 logged out. Waiting for processes to exit. Mar 17 17:42:32.426750 systemd[1]: Started sshd@4-10.0.0.14:22-10.0.0.1:49516.service - OpenSSH per-connection server daemon (10.0.0.1:49516). Mar 17 17:42:32.428052 systemd-logind[1494]: Removed session 4. Mar 17 17:42:32.465068 sshd[1645]: Accepted publickey for core from 10.0.0.1 port 49516 ssh2: RSA SHA256:sryA1eJfWIez1lK1HnLUziwsxLkQurNjq13plyk0DV0 Mar 17 17:42:32.467216 sshd-session[1645]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:42:32.472221 systemd-logind[1494]: New session 5 of user core. Mar 17 17:42:32.482372 systemd[1]: Started session-5.scope - Session 5 of User core. 
Mar 17 17:42:32.592576 sudo[1649]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Mar 17 17:42:32.592939 sudo[1649]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 17 17:42:32.615350 sudo[1649]: pam_unix(sudo:session): session closed for user root Mar 17 17:42:32.617210 sshd[1648]: Connection closed by 10.0.0.1 port 49516 Mar 17 17:42:32.617826 sshd-session[1645]: pam_unix(sshd:session): session closed for user core Mar 17 17:42:32.642319 systemd[1]: sshd@4-10.0.0.14:22-10.0.0.1:49516.service: Deactivated successfully. Mar 17 17:42:32.645061 systemd[1]: session-5.scope: Deactivated successfully. Mar 17 17:42:32.646113 systemd-logind[1494]: Session 5 logged out. Waiting for processes to exit. Mar 17 17:42:32.657673 systemd[1]: Started sshd@5-10.0.0.14:22-10.0.0.1:49524.service - OpenSSH per-connection server daemon (10.0.0.1:49524). Mar 17 17:42:32.658796 systemd-logind[1494]: Removed session 5. Mar 17 17:42:32.695356 sshd[1654]: Accepted publickey for core from 10.0.0.1 port 49524 ssh2: RSA SHA256:sryA1eJfWIez1lK1HnLUziwsxLkQurNjq13plyk0DV0 Mar 17 17:42:32.696956 sshd-session[1654]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:42:32.701749 systemd-logind[1494]: New session 6 of user core. Mar 17 17:42:32.720460 systemd[1]: Started session-6.scope - Session 6 of User core. 
Mar 17 17:42:32.777431 sudo[1659]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Mar 17 17:42:32.777912 sudo[1659]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 17 17:42:32.781926 sudo[1659]: pam_unix(sudo:session): session closed for user root Mar 17 17:42:32.789590 sudo[1658]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Mar 17 17:42:32.790026 sudo[1658]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 17 17:42:32.810636 systemd[1]: Starting audit-rules.service - Load Audit Rules... Mar 17 17:42:32.848722 augenrules[1681]: No rules Mar 17 17:42:32.850262 systemd[1]: audit-rules.service: Deactivated successfully. Mar 17 17:42:32.850584 systemd[1]: Finished audit-rules.service - Load Audit Rules. Mar 17 17:42:32.852067 sudo[1658]: pam_unix(sudo:session): session closed for user root Mar 17 17:42:32.854039 sshd[1657]: Connection closed by 10.0.0.1 port 49524 Mar 17 17:42:32.854425 sshd-session[1654]: pam_unix(sshd:session): session closed for user core Mar 17 17:42:32.863906 systemd[1]: sshd@5-10.0.0.14:22-10.0.0.1:49524.service: Deactivated successfully. Mar 17 17:42:32.866353 systemd[1]: session-6.scope: Deactivated successfully. Mar 17 17:42:32.868653 systemd-logind[1494]: Session 6 logged out. Waiting for processes to exit. Mar 17 17:42:32.879785 systemd[1]: Started sshd@6-10.0.0.14:22-10.0.0.1:49538.service - OpenSSH per-connection server daemon (10.0.0.1:49538). Mar 17 17:42:32.881345 systemd-logind[1494]: Removed session 6. Mar 17 17:42:32.919016 sshd[1689]: Accepted publickey for core from 10.0.0.1 port 49538 ssh2: RSA SHA256:sryA1eJfWIez1lK1HnLUziwsxLkQurNjq13plyk0DV0 Mar 17 17:42:32.920702 sshd-session[1689]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:42:32.926447 systemd-logind[1494]: New session 7 of user core. 
Mar 17 17:42:32.940561 systemd[1]: Started session-7.scope - Session 7 of User core. Mar 17 17:42:32.999428 sudo[1694]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Mar 17 17:42:32.999894 sudo[1694]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 17 17:42:33.551485 systemd[1]: Starting docker.service - Docker Application Container Engine... Mar 17 17:42:33.551707 (dockerd)[1714]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Mar 17 17:42:34.174115 dockerd[1714]: time="2025-03-17T17:42:34.173996729Z" level=info msg="Starting up" Mar 17 17:42:34.581523 dockerd[1714]: time="2025-03-17T17:42:34.581351385Z" level=info msg="Loading containers: start." Mar 17 17:42:34.774279 kernel: Initializing XFRM netlink socket Mar 17 17:42:34.893190 systemd-networkd[1424]: docker0: Link UP Mar 17 17:42:34.941348 dockerd[1714]: time="2025-03-17T17:42:34.941285279Z" level=info msg="Loading containers: done." Mar 17 17:42:34.968496 dockerd[1714]: time="2025-03-17T17:42:34.968433943Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Mar 17 17:42:34.968697 dockerd[1714]: time="2025-03-17T17:42:34.968573454Z" level=info msg="Docker daemon" commit=41ca978a0a5400cc24b274137efa9f25517fcc0b containerd-snapshotter=false storage-driver=overlay2 version=27.3.1 Mar 17 17:42:34.968743 dockerd[1714]: time="2025-03-17T17:42:34.968715220Z" level=info msg="Daemon has completed initialization" Mar 17 17:42:35.043497 dockerd[1714]: time="2025-03-17T17:42:35.043400482Z" level=info msg="API listen on /run/docker.sock" Mar 17 17:42:35.043631 systemd[1]: Started docker.service - Docker Application Container Engine. 
Mar 17 17:42:35.886745 containerd[1511]: time="2025-03-17T17:42:35.886694459Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.7\""
Mar 17 17:42:36.536867 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount80014985.mount: Deactivated successfully.
Mar 17 17:42:37.821151 containerd[1511]: time="2025-03-17T17:42:37.821085883Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 17 17:42:37.821999 containerd[1511]: time="2025-03-17T17:42:37.821968649Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.7: active requests=0, bytes read=27959268"
Mar 17 17:42:37.823306 containerd[1511]: time="2025-03-17T17:42:37.823215317Z" level=info msg="ImageCreate event name:\"sha256:f084bc047a8cf7c8484d47c51e70e646dde3977d916f282feb99207b7b9241af\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 17 17:42:37.827993 containerd[1511]: time="2025-03-17T17:42:37.827929057Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:22c19cc70fe5806d0a2cb28a6b6b33fd34e6f9e50616bdf6d53649bcfafbc277\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 17 17:42:37.829516 containerd[1511]: time="2025-03-17T17:42:37.829459918Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.7\" with image id \"sha256:f084bc047a8cf7c8484d47c51e70e646dde3977d916f282feb99207b7b9241af\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.7\", repo digest \"registry.k8s.io/kube-apiserver@sha256:22c19cc70fe5806d0a2cb28a6b6b33fd34e6f9e50616bdf6d53649bcfafbc277\", size \"27956068\" in 1.942716888s"
Mar 17 17:42:37.829516 containerd[1511]: time="2025-03-17T17:42:37.829513328Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.7\" returns image reference \"sha256:f084bc047a8cf7c8484d47c51e70e646dde3977d916f282feb99207b7b9241af\""
Mar 17 17:42:37.831274 containerd[1511]: time="2025-03-17T17:42:37.831223325Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.7\""
Mar 17 17:42:39.240376 containerd[1511]: time="2025-03-17T17:42:39.240317379Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 17 17:42:39.241087 containerd[1511]: time="2025-03-17T17:42:39.241045274Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.7: active requests=0, bytes read=24713776"
Mar 17 17:42:39.242170 containerd[1511]: time="2025-03-17T17:42:39.242145598Z" level=info msg="ImageCreate event name:\"sha256:652dcad615a9a0c252c253860d5b5b7bfebd3efe159dc033a8555bc15a6d1985\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 17 17:42:39.244961 containerd[1511]: time="2025-03-17T17:42:39.244920101Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:6abe7a0accecf29db6ebab18a10f844678ffed693d79e2e51a18a6f2b4530cbb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 17 17:42:39.245981 containerd[1511]: time="2025-03-17T17:42:39.245947588Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.7\" with image id \"sha256:652dcad615a9a0c252c253860d5b5b7bfebd3efe159dc033a8555bc15a6d1985\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.7\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:6abe7a0accecf29db6ebab18a10f844678ffed693d79e2e51a18a6f2b4530cbb\", size \"26201384\" in 1.41467501s"
Mar 17 17:42:39.246015 containerd[1511]: time="2025-03-17T17:42:39.245984307Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.7\" returns image reference \"sha256:652dcad615a9a0c252c253860d5b5b7bfebd3efe159dc033a8555bc15a6d1985\""
Mar 17 17:42:39.246558 containerd[1511]: time="2025-03-17T17:42:39.246412099Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.7\""
Mar 17 17:42:40.639538 containerd[1511]: time="2025-03-17T17:42:40.639473336Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 17 17:42:40.640565 containerd[1511]: time="2025-03-17T17:42:40.640514569Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.7: active requests=0, bytes read=18780368"
Mar 17 17:42:40.641902 containerd[1511]: time="2025-03-17T17:42:40.641870672Z" level=info msg="ImageCreate event name:\"sha256:7f1f6a63d8aa14cf61d0045e912ad312b4ade24637cecccc933b163582eae68c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 17 17:42:40.644814 containerd[1511]: time="2025-03-17T17:42:40.644760080Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:fb80249bcb77ee72b1c9fa5b70bc28a83ed107c9ca71957841ad91db379963bf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 17 17:42:40.646101 containerd[1511]: time="2025-03-17T17:42:40.646032306Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.7\" with image id \"sha256:7f1f6a63d8aa14cf61d0045e912ad312b4ade24637cecccc933b163582eae68c\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.7\", repo digest \"registry.k8s.io/kube-scheduler@sha256:fb80249bcb77ee72b1c9fa5b70bc28a83ed107c9ca71957841ad91db379963bf\", size \"20267994\" in 1.399594058s"
Mar 17 17:42:40.646101 containerd[1511]: time="2025-03-17T17:42:40.646068254Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.7\" returns image reference \"sha256:7f1f6a63d8aa14cf61d0045e912ad312b4ade24637cecccc933b163582eae68c\""
Mar 17 17:42:40.646905 containerd[1511]: time="2025-03-17T17:42:40.646873534Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.7\""
Mar 17 17:42:40.894771 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Mar 17 17:42:40.911533 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 17 17:42:41.125237 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 17 17:42:41.130592 (kubelet)[1983]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 17 17:42:41.302979 kubelet[1983]: E0317 17:42:41.302788 1983 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 17 17:42:41.309594 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 17 17:42:41.309871 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 17 17:42:41.310293 systemd[1]: kubelet.service: Consumed 317ms CPU time, 98.6M memory peak.
Mar 17 17:42:42.188888 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2663406516.mount: Deactivated successfully.
Mar 17 17:42:43.123167 containerd[1511]: time="2025-03-17T17:42:43.123078081Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 17 17:42:43.124211 containerd[1511]: time="2025-03-17T17:42:43.124161253Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.7: active requests=0, bytes read=30354630"
Mar 17 17:42:43.125301 containerd[1511]: time="2025-03-17T17:42:43.125265614Z" level=info msg="ImageCreate event name:\"sha256:dcfc039c372ea285997a302d60e58a75b80905b4c4dba969993b9b22e8ac66d1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 17 17:42:43.127353 containerd[1511]: time="2025-03-17T17:42:43.127303375Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:e5839270c96c3ad1bea1dce4935126d3281297527f3655408d2970aa4b5cf178\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 17 17:42:43.128106 containerd[1511]: time="2025-03-17T17:42:43.128060034Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.7\" with image id \"sha256:dcfc039c372ea285997a302d60e58a75b80905b4c4dba969993b9b22e8ac66d1\", repo tag \"registry.k8s.io/kube-proxy:v1.31.7\", repo digest \"registry.k8s.io/kube-proxy@sha256:e5839270c96c3ad1bea1dce4935126d3281297527f3655408d2970aa4b5cf178\", size \"30353649\" in 2.481156304s"
Mar 17 17:42:43.128106 containerd[1511]: time="2025-03-17T17:42:43.128095110Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.7\" returns image reference \"sha256:dcfc039c372ea285997a302d60e58a75b80905b4c4dba969993b9b22e8ac66d1\""
Mar 17 17:42:43.128767 containerd[1511]: time="2025-03-17T17:42:43.128709683Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\""
Mar 17 17:42:43.614435 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1018655608.mount: Deactivated successfully.
Mar 17 17:42:44.682506 containerd[1511]: time="2025-03-17T17:42:44.682431275Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 17 17:42:44.683382 containerd[1511]: time="2025-03-17T17:42:44.683331133Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185761"
Mar 17 17:42:44.684581 containerd[1511]: time="2025-03-17T17:42:44.684542084Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 17 17:42:44.687541 containerd[1511]: time="2025-03-17T17:42:44.687512073Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 17 17:42:44.688732 containerd[1511]: time="2025-03-17T17:42:44.688698879Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 1.559939053s"
Mar 17 17:42:44.688732 containerd[1511]: time="2025-03-17T17:42:44.688729627Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\""
Mar 17 17:42:44.689300 containerd[1511]: time="2025-03-17T17:42:44.689273216Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Mar 17 17:42:45.334009 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3596054733.mount: Deactivated successfully.
Mar 17 17:42:45.515263 containerd[1511]: time="2025-03-17T17:42:45.515192590Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 17 17:42:45.529281 containerd[1511]: time="2025-03-17T17:42:45.529223753Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138"
Mar 17 17:42:45.642560 containerd[1511]: time="2025-03-17T17:42:45.642351318Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 17 17:42:45.676015 containerd[1511]: time="2025-03-17T17:42:45.675936833Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 17 17:42:45.676973 containerd[1511]: time="2025-03-17T17:42:45.676937489Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 987.555319ms"
Mar 17 17:42:45.676973 containerd[1511]: time="2025-03-17T17:42:45.676966754Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\""
Mar 17 17:42:45.677495 containerd[1511]: time="2025-03-17T17:42:45.677470789Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\""
Mar 17 17:42:46.969020 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2861553411.mount: Deactivated successfully.
Mar 17 17:42:49.887538 containerd[1511]: time="2025-03-17T17:42:49.887457848Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 17 17:42:49.888179 containerd[1511]: time="2025-03-17T17:42:49.888126442Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=56779973"
Mar 17 17:42:49.889959 containerd[1511]: time="2025-03-17T17:42:49.889902052Z" level=info msg="ImageCreate event name:\"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 17 17:42:49.894040 containerd[1511]: time="2025-03-17T17:42:49.894002682Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 17 17:42:49.895665 containerd[1511]: time="2025-03-17T17:42:49.895607622Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"56909194\" in 4.218109111s"
Mar 17 17:42:49.895734 containerd[1511]: time="2025-03-17T17:42:49.895675669Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\""
Mar 17 17:42:51.394609 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Mar 17 17:42:51.404489 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 17 17:42:51.554710 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 17 17:42:51.559168 (kubelet)[2133]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 17 17:42:51.597868 kubelet[2133]: E0317 17:42:51.597795 2133 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 17 17:42:51.602165 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 17 17:42:51.602421 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 17 17:42:51.602794 systemd[1]: kubelet.service: Consumed 197ms CPU time, 97.3M memory peak.
Mar 17 17:42:52.464955 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 17 17:42:52.465126 systemd[1]: kubelet.service: Consumed 197ms CPU time, 97.3M memory peak.
Mar 17 17:42:52.476495 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 17 17:42:52.502822 systemd[1]: Reload requested from client PID 2148 ('systemctl') (unit session-7.scope)...
Mar 17 17:42:52.502838 systemd[1]: Reloading...
Mar 17 17:42:52.605885 zram_generator::config[2192]: No configuration found.
Mar 17 17:42:53.156891 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 17 17:42:53.260352 systemd[1]: Reloading finished in 757 ms.
Mar 17 17:42:53.309847 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 17 17:42:53.313684 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 17 17:42:53.315903 systemd[1]: kubelet.service: Deactivated successfully.
Mar 17 17:42:53.316207 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 17 17:42:53.316310 systemd[1]: kubelet.service: Consumed 147ms CPU time, 83.6M memory peak.
Mar 17 17:42:53.317965 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 17 17:42:53.464104 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 17 17:42:53.468376 (kubelet)[2242]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Mar 17 17:42:53.507593 kubelet[2242]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 17 17:42:53.507593 kubelet[2242]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Mar 17 17:42:53.507593 kubelet[2242]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 17 17:42:53.508034 kubelet[2242]: I0317 17:42:53.507672 2242 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Mar 17 17:42:53.888785 kubelet[2242]: I0317 17:42:53.888635 2242 server.go:486] "Kubelet version" kubeletVersion="v1.31.0"
Mar 17 17:42:53.888785 kubelet[2242]: I0317 17:42:53.888686 2242 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Mar 17 17:42:53.889584 kubelet[2242]: I0317 17:42:53.888977 2242 server.go:929] "Client rotation is on, will bootstrap in background"
Mar 17 17:42:53.910611 kubelet[2242]: I0317 17:42:53.910563 2242 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Mar 17 17:42:53.913798 kubelet[2242]: E0317 17:42:53.913757 2242 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.14:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.14:6443: connect: connection refused" logger="UnhandledError"
Mar 17 17:42:53.920830 kubelet[2242]: E0317 17:42:53.920782 2242 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Mar 17 17:42:53.920830 kubelet[2242]: I0317 17:42:53.920822 2242 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Mar 17 17:42:53.927806 kubelet[2242]: I0317 17:42:53.927756 2242 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Mar 17 17:42:53.929340 kubelet[2242]: I0317 17:42:53.929303 2242 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Mar 17 17:42:53.929548 kubelet[2242]: I0317 17:42:53.929497 2242 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Mar 17 17:42:53.929764 kubelet[2242]: I0317 17:42:53.929537 2242 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Mar 17 17:42:53.929764 kubelet[2242]: I0317 17:42:53.929762 2242 topology_manager.go:138] "Creating topology manager with none policy"
Mar 17 17:42:53.929901 kubelet[2242]: I0317 17:42:53.929772 2242 container_manager_linux.go:300] "Creating device plugin manager"
Mar 17 17:42:53.929928 kubelet[2242]: I0317 17:42:53.929920 2242 state_mem.go:36] "Initialized new in-memory state store"
Mar 17 17:42:53.932179 kubelet[2242]: I0317 17:42:53.932132 2242 kubelet.go:408] "Attempting to sync node with API server"
Mar 17 17:42:53.932179 kubelet[2242]: I0317 17:42:53.932157 2242 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests"
Mar 17 17:42:53.932288 kubelet[2242]: I0317 17:42:53.932218 2242 kubelet.go:314] "Adding apiserver pod source"
Mar 17 17:42:53.932288 kubelet[2242]: I0317 17:42:53.932259 2242 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Mar 17 17:42:53.935809 kubelet[2242]: W0317 17:42:53.935740 2242 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.14:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.14:6443: connect: connection refused
Mar 17 17:42:53.935809 kubelet[2242]: E0317 17:42:53.935800 2242 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.14:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.14:6443: connect: connection refused" logger="UnhandledError"
Mar 17 17:42:53.937344 kubelet[2242]: W0317 17:42:53.937260 2242 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.14:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.14:6443: connect: connection refused
Mar 17 17:42:53.937503 kubelet[2242]: E0317 17:42:53.937354 2242 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.14:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.14:6443: connect: connection refused" logger="UnhandledError"
Mar 17 17:42:53.939556 kubelet[2242]: I0317 17:42:53.939534 2242 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1"
Mar 17 17:42:53.941776 kubelet[2242]: I0317 17:42:53.941755 2242 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Mar 17 17:42:53.942557 kubelet[2242]: W0317 17:42:53.942529 2242 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Mar 17 17:42:53.943252 kubelet[2242]: I0317 17:42:53.943224 2242 server.go:1269] "Started kubelet"
Mar 17 17:42:53.944281 kubelet[2242]: I0317 17:42:53.943941 2242 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Mar 17 17:42:53.944646 kubelet[2242]: I0317 17:42:53.944619 2242 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Mar 17 17:42:53.944753 kubelet[2242]: I0317 17:42:53.944719 2242 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Mar 17 17:42:53.944984 kubelet[2242]: I0317 17:42:53.944953 2242 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Mar 17 17:42:53.945182 kubelet[2242]: I0317 17:42:53.945101 2242 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Mar 17 17:42:53.946175 kubelet[2242]: I0317 17:42:53.946138 2242 server.go:460] "Adding debug handlers to kubelet server"
Mar 17 17:42:53.953009 kubelet[2242]: I0317 17:42:53.952978 2242 volume_manager.go:289] "Starting Kubelet Volume Manager"
Mar 17 17:42:53.953166 kubelet[2242]: I0317 17:42:53.953145 2242 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Mar 17 17:42:53.953333 kubelet[2242]: I0317 17:42:53.953312 2242 reconciler.go:26] "Reconciler: start to sync state"
Mar 17 17:42:53.954436 kubelet[2242]: W0317 17:42:53.954380 2242 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.14:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.14:6443: connect: connection refused
Mar 17 17:42:53.954505 kubelet[2242]: E0317 17:42:53.954452 2242 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.14:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.14:6443: connect: connection refused" logger="UnhandledError"
Mar 17 17:42:53.954944 kubelet[2242]: E0317 17:42:53.954908 2242 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Mar 17 17:42:53.956466 kubelet[2242]: E0317 17:42:53.956315 2242 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Mar 17 17:42:53.956603 kubelet[2242]: E0317 17:42:53.956570 2242 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.14:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.14:6443: connect: connection refused" interval="200ms"
Mar 17 17:42:53.957714 kubelet[2242]: E0317 17:42:53.954718 2242 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.14:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.14:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.182da805d9b6dd57 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-03-17 17:42:53.943192919 +0000 UTC m=+0.471004943,LastTimestamp:2025-03-17 17:42:53.943192919 +0000 UTC m=+0.471004943,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Mar 17 17:42:53.959559 kubelet[2242]: I0317 17:42:53.959445 2242 factory.go:221] Registration of the containerd container factory successfully
Mar 17 17:42:53.959559 kubelet[2242]: I0317 17:42:53.959468 2242 factory.go:221] Registration of the systemd container factory successfully
Mar 17 17:42:53.959746 kubelet[2242]: I0317 17:42:53.959583 2242 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Mar 17 17:42:53.970499 kubelet[2242]: I0317 17:42:53.970423 2242 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Mar 17 17:42:53.972292 kubelet[2242]: I0317 17:42:53.972221 2242 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Mar 17 17:42:53.972479 kubelet[2242]: I0317 17:42:53.972350 2242 status_manager.go:217] "Starting to sync pod status with apiserver"
Mar 17 17:42:53.972582 kubelet[2242]: I0317 17:42:53.972545 2242 kubelet.go:2321] "Starting kubelet main sync loop"
Mar 17 17:42:53.972631 kubelet[2242]: E0317 17:42:53.972610 2242 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Mar 17 17:42:53.973223 kubelet[2242]: W0317 17:42:53.973176 2242 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.14:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.14:6443: connect: connection refused
Mar 17 17:42:53.973394 kubelet[2242]: E0317 17:42:53.973230 2242 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.14:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.14:6443: connect: connection refused" logger="UnhandledError"
Mar 17 17:42:53.975626 kubelet[2242]: I0317 17:42:53.975604 2242 cpu_manager.go:214] "Starting CPU manager" policy="none"
Mar 17 17:42:53.975626 kubelet[2242]: I0317 17:42:53.975622 2242 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Mar 17 17:42:53.975801 kubelet[2242]: I0317 17:42:53.975648 2242 state_mem.go:36] "Initialized new in-memory state store"
Mar 17 17:42:54.057357 kubelet[2242]: E0317 17:42:54.057285 2242 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Mar 17 17:42:54.073578 kubelet[2242]: E0317 17:42:54.073508 2242 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Mar 17 17:42:54.157455 kubelet[2242]: E0317 17:42:54.157418 2242 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Mar 17 17:42:54.157599 kubelet[2242]: E0317 17:42:54.157485 2242 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.14:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.14:6443: connect: connection refused" interval="400ms"
Mar 17 17:42:54.258084 kubelet[2242]: E0317 17:42:54.257999 2242 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Mar 17 17:42:54.274261 kubelet[2242]: E0317 17:42:54.274176 2242 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Mar 17 17:42:54.284980 kubelet[2242]: I0317 17:42:54.284922 2242 policy_none.go:49] "None policy: Start"
Mar 17 17:42:54.285675 kubelet[2242]: I0317 17:42:54.285651 2242 memory_manager.go:170] "Starting memorymanager" policy="None"
Mar 17 17:42:54.285727 kubelet[2242]: I0317 17:42:54.285682 2242 state_mem.go:35] "Initializing new in-memory state store"
Mar 17 17:42:54.296749 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Mar 17 17:42:54.314930 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Mar 17 17:42:54.318282 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Mar 17 17:42:54.329254 kubelet[2242]: I0317 17:42:54.329200 2242 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Mar 17 17:42:54.329475 kubelet[2242]: I0317 17:42:54.329457 2242 eviction_manager.go:189] "Eviction manager: starting control loop"
Mar 17 17:42:54.329529 kubelet[2242]: I0317 17:42:54.329477 2242 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Mar 17 17:42:54.330167 kubelet[2242]: I0317 17:42:54.329783 2242 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Mar 17 17:42:54.330619 kubelet[2242]: E0317 17:42:54.330577 2242 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Mar 17 17:42:54.431987 kubelet[2242]: I0317 17:42:54.431863 2242 kubelet_node_status.go:72] "Attempting to register node" node="localhost"
Mar 17 17:42:54.432354 kubelet[2242]: E0317 17:42:54.432321 2242 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.14:6443/api/v1/nodes\": dial tcp 10.0.0.14:6443: connect: connection refused" node="localhost"
Mar 17 17:42:54.559153 kubelet[2242]: E0317 17:42:54.559084 2242 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.14:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.14:6443: connect: connection refused" interval="800ms"
Mar 17 17:42:54.634473 kubelet[2242]: I0317 17:42:54.634431 2242 kubelet_node_status.go:72] "Attempting to register node" node="localhost"
Mar 17 17:42:54.634778 kubelet[2242]: E0317 17:42:54.634754 2242 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.14:6443/api/v1/nodes\": dial tcp 10.0.0.14:6443: connect: connection refused" node="localhost"
Mar 17 17:42:54.685055 systemd[1]: Created slice kubepods-burstable-pod935eb794a6b847325bf60ffc25f7baf5.slice - libcontainer container kubepods-burstable-pod935eb794a6b847325bf60ffc25f7baf5.slice.
Mar 17 17:42:54.699410 systemd[1]: Created slice kubepods-burstable-pod60762308083b5ef6c837b1be48ec53d6.slice - libcontainer container kubepods-burstable-pod60762308083b5ef6c837b1be48ec53d6.slice.
Mar 17 17:42:54.710901 systemd[1]: Created slice kubepods-burstable-pod6f32907a07e55aea05abdc5cd284a8d5.slice - libcontainer container kubepods-burstable-pod6f32907a07e55aea05abdc5cd284a8d5.slice.
Mar 17 17:42:54.758314 kubelet[2242]: I0317 17:42:54.758259 2242 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/935eb794a6b847325bf60ffc25f7baf5-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"935eb794a6b847325bf60ffc25f7baf5\") " pod="kube-system/kube-apiserver-localhost"
Mar 17 17:42:54.758314 kubelet[2242]: I0317 17:42:54.758312 2242 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/60762308083b5ef6c837b1be48ec53d6-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"60762308083b5ef6c837b1be48ec53d6\") " pod="kube-system/kube-controller-manager-localhost"
Mar 17 17:42:54.758573 kubelet[2242]: I0317 17:42:54.758330 2242 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/60762308083b5ef6c837b1be48ec53d6-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"60762308083b5ef6c837b1be48ec53d6\") " pod="kube-system/kube-controller-manager-localhost"
Mar 17 17:42:54.758573 kubelet[2242]: I0317 17:42:54.758350 2242 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/60762308083b5ef6c837b1be48ec53d6-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"60762308083b5ef6c837b1be48ec53d6\") " pod="kube-system/kube-controller-manager-localhost"
Mar 17 17:42:54.758573 kubelet[2242]: I0317 17:42:54.758401 2242 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/60762308083b5ef6c837b1be48ec53d6-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"60762308083b5ef6c837b1be48ec53d6\") " pod="kube-system/kube-controller-manager-localhost"
Mar 17 17:42:54.758573 kubelet[2242]: I0317 17:42:54.758435 2242 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6f32907a07e55aea05abdc5cd284a8d5-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"6f32907a07e55aea05abdc5cd284a8d5\") " pod="kube-system/kube-scheduler-localhost"
Mar 17 17:42:54.758573 kubelet[2242]: I0317 17:42:54.758470 2242 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/935eb794a6b847325bf60ffc25f7baf5-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"935eb794a6b847325bf60ffc25f7baf5\") " pod="kube-system/kube-apiserver-localhost"
Mar 17 17:42:54.758744 kubelet[2242]: I0317 17:42:54.758491 2242 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/935eb794a6b847325bf60ffc25f7baf5-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"935eb794a6b847325bf60ffc25f7baf5\") " pod="kube-system/kube-apiserver-localhost"
Mar 17 17:42:54.758744 kubelet[2242]: I0317 17:42:54.758512 2242 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/60762308083b5ef6c837b1be48ec53d6-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"60762308083b5ef6c837b1be48ec53d6\") " pod="kube-system/kube-controller-manager-localhost"
Mar 17 17:42:54.810034 kubelet[2242]: W0317 17:42:54.809956 2242 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.14:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.14:6443: connect: connection refused
Mar 17 17:42:54.810034 kubelet[2242]: E0317 17:42:54.810024 2242 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.14:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.14:6443: connect: connection refused" logger="UnhandledError"
Mar 17 17:42:54.999737 kubelet[2242]: E0317 17:42:54.999556 2242 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 17:42:55.000567 containerd[1511]: time="2025-03-17T17:42:55.000513673Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:935eb794a6b847325bf60ffc25f7baf5,Namespace:kube-system,Attempt:0,}"
Mar 17 17:42:55.009762 kubelet[2242]: E0317 17:42:55.009720 2242 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 17:42:55.010300 containerd[1511]: time="2025-03-17T17:42:55.010227650Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:60762308083b5ef6c837b1be48ec53d6,Namespace:kube-system,Attempt:0,}"
Mar 17 17:42:55.013575 kubelet[2242]: E0317 17:42:55.013538 2242 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were
exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:42:55.014051 containerd[1511]: time="2025-03-17T17:42:55.013997239Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:6f32907a07e55aea05abdc5cd284a8d5,Namespace:kube-system,Attempt:0,}" Mar 17 17:42:55.036615 kubelet[2242]: I0317 17:42:55.036567 2242 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Mar 17 17:42:55.037060 kubelet[2242]: E0317 17:42:55.036993 2242 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.14:6443/api/v1/nodes\": dial tcp 10.0.0.14:6443: connect: connection refused" node="localhost" Mar 17 17:42:55.274797 kubelet[2242]: W0317 17:42:55.274566 2242 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.14:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.14:6443: connect: connection refused Mar 17 17:42:55.274797 kubelet[2242]: E0317 17:42:55.274673 2242 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.14:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.14:6443: connect: connection refused" logger="UnhandledError" Mar 17 17:42:55.279546 kubelet[2242]: W0317 17:42:55.279519 2242 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.14:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.14:6443: connect: connection refused Mar 17 17:42:55.279640 kubelet[2242]: E0317 17:42:55.279552 2242 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get 
\"https://10.0.0.14:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.14:6443: connect: connection refused" logger="UnhandledError" Mar 17 17:42:55.359806 kubelet[2242]: E0317 17:42:55.359732 2242 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.14:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.14:6443: connect: connection refused" interval="1.6s" Mar 17 17:42:55.508124 kubelet[2242]: W0317 17:42:55.508016 2242 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.14:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.14:6443: connect: connection refused Mar 17 17:42:55.508124 kubelet[2242]: E0317 17:42:55.508099 2242 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.14:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.14:6443: connect: connection refused" logger="UnhandledError" Mar 17 17:42:55.839053 kubelet[2242]: I0317 17:42:55.839007 2242 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Mar 17 17:42:55.839652 kubelet[2242]: E0317 17:42:55.839374 2242 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.14:6443/api/v1/nodes\": dial tcp 10.0.0.14:6443: connect: connection refused" node="localhost" Mar 17 17:42:55.938782 kubelet[2242]: E0317 17:42:55.937485 2242 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.14:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.14:6443: connect: connection 
refused" logger="UnhandledError" Mar 17 17:42:56.934563 kubelet[2242]: W0317 17:42:56.934451 2242 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.14:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.14:6443: connect: connection refused Mar 17 17:42:56.935050 kubelet[2242]: E0317 17:42:56.934565 2242 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.14:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.14:6443: connect: connection refused" logger="UnhandledError" Mar 17 17:42:56.960951 kubelet[2242]: E0317 17:42:56.960872 2242 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.14:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.14:6443: connect: connection refused" interval="3.2s" Mar 17 17:42:57.211227 kubelet[2242]: W0317 17:42:57.211030 2242 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.14:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.14:6443: connect: connection refused Mar 17 17:42:57.211227 kubelet[2242]: E0317 17:42:57.211123 2242 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.14:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.14:6443: connect: connection refused" logger="UnhandledError" Mar 17 17:42:57.441424 kubelet[2242]: I0317 17:42:57.441351 2242 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Mar 17 17:42:57.441693 kubelet[2242]: E0317 17:42:57.441655 2242 kubelet_node_status.go:95] "Unable to register node with API 
server" err="Post \"https://10.0.0.14:6443/api/v1/nodes\": dial tcp 10.0.0.14:6443: connect: connection refused" node="localhost" Mar 17 17:42:57.699005 kubelet[2242]: W0317 17:42:57.698889 2242 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.14:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.14:6443: connect: connection refused Mar 17 17:42:57.699005 kubelet[2242]: E0317 17:42:57.698990 2242 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.14:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.14:6443: connect: connection refused" logger="UnhandledError" Mar 17 17:42:57.730551 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3855506649.mount: Deactivated successfully. Mar 17 17:42:57.736654 containerd[1511]: time="2025-03-17T17:42:57.736612955Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 17 17:42:57.740489 containerd[1511]: time="2025-03-17T17:42:57.740416989Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Mar 17 17:42:57.741659 containerd[1511]: time="2025-03-17T17:42:57.741602503Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 17 17:42:57.742528 containerd[1511]: time="2025-03-17T17:42:57.742474869Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 17 17:42:57.743543 containerd[1511]: 
time="2025-03-17T17:42:57.743490193Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 17 17:42:57.744360 containerd[1511]: time="2025-03-17T17:42:57.744323265Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Mar 17 17:42:57.745391 containerd[1511]: time="2025-03-17T17:42:57.745340413Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Mar 17 17:42:57.747319 containerd[1511]: time="2025-03-17T17:42:57.747226620Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 17 17:42:57.750574 containerd[1511]: time="2025-03-17T17:42:57.750518063Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 2.736410486s" Mar 17 17:42:57.752129 containerd[1511]: time="2025-03-17T17:42:57.752067769Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 2.751438108s" Mar 17 17:42:57.754461 containerd[1511]: time="2025-03-17T17:42:57.754408959Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id 
\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 2.744061716s" Mar 17 17:42:58.042166 containerd[1511]: time="2025-03-17T17:42:58.040762014Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 17:42:58.042166 containerd[1511]: time="2025-03-17T17:42:58.040843116Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 17:42:58.042166 containerd[1511]: time="2025-03-17T17:42:58.040857382Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:42:58.042166 containerd[1511]: time="2025-03-17T17:42:58.040993528Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:42:58.042792 containerd[1511]: time="2025-03-17T17:42:58.039945843Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 17:42:58.042792 containerd[1511]: time="2025-03-17T17:42:58.042558032Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 17:42:58.042792 containerd[1511]: time="2025-03-17T17:42:58.042581035Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:42:58.042792 containerd[1511]: time="2025-03-17T17:42:58.042700128Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:42:58.043018 containerd[1511]: time="2025-03-17T17:42:58.042943565Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 17:42:58.043147 containerd[1511]: time="2025-03-17T17:42:58.043009468Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 17:42:58.043147 containerd[1511]: time="2025-03-17T17:42:58.043026981Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:42:58.043343 containerd[1511]: time="2025-03-17T17:42:58.043136917Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:42:58.079495 systemd[1]: Started cri-containerd-142fed5e1b923b128fc998211ccf393f4daefde2921c77834fa1872f7ec25920.scope - libcontainer container 142fed5e1b923b128fc998211ccf393f4daefde2921c77834fa1872f7ec25920. Mar 17 17:42:58.087023 systemd[1]: Started cri-containerd-b3d85fee4ae12d5c3f4d4cab407b55ec2846039a25801cb02f1ae53fdf079ebc.scope - libcontainer container b3d85fee4ae12d5c3f4d4cab407b55ec2846039a25801cb02f1ae53fdf079ebc. Mar 17 17:42:58.099206 systemd[1]: Started cri-containerd-0a0603c0325f7e8d7e8f75566cc595c25bff83ad78a72266a128fb848a913449.scope - libcontainer container 0a0603c0325f7e8d7e8f75566cc595c25bff83ad78a72266a128fb848a913449. 
Mar 17 17:42:58.179615 containerd[1511]: time="2025-03-17T17:42:58.179537828Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:6f32907a07e55aea05abdc5cd284a8d5,Namespace:kube-system,Attempt:0,} returns sandbox id \"0a0603c0325f7e8d7e8f75566cc595c25bff83ad78a72266a128fb848a913449\""
Mar 17 17:42:58.181307 kubelet[2242]: E0317 17:42:58.181214 2242 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 17:42:58.183760 containerd[1511]: time="2025-03-17T17:42:58.183728928Z" level=info msg="CreateContainer within sandbox \"0a0603c0325f7e8d7e8f75566cc595c25bff83ad78a72266a128fb848a913449\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Mar 17 17:42:58.185473 containerd[1511]: time="2025-03-17T17:42:58.185427834Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:935eb794a6b847325bf60ffc25f7baf5,Namespace:kube-system,Attempt:0,} returns sandbox id \"142fed5e1b923b128fc998211ccf393f4daefde2921c77834fa1872f7ec25920\""
Mar 17 17:42:58.186200 kubelet[2242]: E0317 17:42:58.185989 2242 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 17:42:58.187940 containerd[1511]: time="2025-03-17T17:42:58.187915068Z" level=info msg="CreateContainer within sandbox \"142fed5e1b923b128fc998211ccf393f4daefde2921c77834fa1872f7ec25920\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Mar 17 17:42:58.204691 containerd[1511]: time="2025-03-17T17:42:58.204638290Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:60762308083b5ef6c837b1be48ec53d6,Namespace:kube-system,Attempt:0,} returns sandbox id \"b3d85fee4ae12d5c3f4d4cab407b55ec2846039a25801cb02f1ae53fdf079ebc\""
Mar 17 17:42:58.205658 kubelet[2242]: E0317 17:42:58.205625 2242 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 17:42:58.207338 containerd[1511]: time="2025-03-17T17:42:58.207292668Z" level=info msg="CreateContainer within sandbox \"b3d85fee4ae12d5c3f4d4cab407b55ec2846039a25801cb02f1ae53fdf079ebc\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Mar 17 17:42:58.399519 kubelet[2242]: W0317 17:42:58.399435 2242 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.14:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.14:6443: connect: connection refused
Mar 17 17:42:58.399683 kubelet[2242]: E0317 17:42:58.399526 2242 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.14:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.14:6443: connect: connection refused" logger="UnhandledError"
Mar 17 17:42:58.452920 containerd[1511]: time="2025-03-17T17:42:58.452830188Z" level=info msg="CreateContainer within sandbox \"0a0603c0325f7e8d7e8f75566cc595c25bff83ad78a72266a128fb848a913449\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"1974f2bbb43c57aa354e12a50f6a8712157d74ea1dba635039898d1da502f7a7\""
Mar 17 17:42:58.453703 containerd[1511]: time="2025-03-17T17:42:58.453674221Z" level=info msg="StartContainer for \"1974f2bbb43c57aa354e12a50f6a8712157d74ea1dba635039898d1da502f7a7\""
Mar 17 17:42:58.461594 containerd[1511]: time="2025-03-17T17:42:58.461503865Z" level=info msg="CreateContainer within sandbox \"b3d85fee4ae12d5c3f4d4cab407b55ec2846039a25801cb02f1ae53fdf079ebc\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"7691b9ef70f6caf8e36a14b039f0c1a98c86aa968f81139ec247ae7f30d46ce7\""
Mar 17 17:42:58.462380 containerd[1511]: time="2025-03-17T17:42:58.462279910Z" level=info msg="StartContainer for \"7691b9ef70f6caf8e36a14b039f0c1a98c86aa968f81139ec247ae7f30d46ce7\""
Mar 17 17:42:58.462793 containerd[1511]: time="2025-03-17T17:42:58.462752246Z" level=info msg="CreateContainer within sandbox \"142fed5e1b923b128fc998211ccf393f4daefde2921c77834fa1872f7ec25920\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"e303a31fa313b64a9fe465f7488ee746cf2178acc570092faf62fafc835b9ae7\""
Mar 17 17:42:58.463260 containerd[1511]: time="2025-03-17T17:42:58.463211267Z" level=info msg="StartContainer for \"e303a31fa313b64a9fe465f7488ee746cf2178acc570092faf62fafc835b9ae7\""
Mar 17 17:42:58.489526 systemd[1]: Started cri-containerd-1974f2bbb43c57aa354e12a50f6a8712157d74ea1dba635039898d1da502f7a7.scope - libcontainer container 1974f2bbb43c57aa354e12a50f6a8712157d74ea1dba635039898d1da502f7a7.
Mar 17 17:42:58.494549 systemd[1]: Started cri-containerd-7691b9ef70f6caf8e36a14b039f0c1a98c86aa968f81139ec247ae7f30d46ce7.scope - libcontainer container 7691b9ef70f6caf8e36a14b039f0c1a98c86aa968f81139ec247ae7f30d46ce7.
Mar 17 17:42:58.498791 systemd[1]: Started cri-containerd-e303a31fa313b64a9fe465f7488ee746cf2178acc570092faf62fafc835b9ae7.scope - libcontainer container e303a31fa313b64a9fe465f7488ee746cf2178acc570092faf62fafc835b9ae7.
Mar 17 17:42:58.561445 containerd[1511]: time="2025-03-17T17:42:58.561372975Z" level=info msg="StartContainer for \"1974f2bbb43c57aa354e12a50f6a8712157d74ea1dba635039898d1da502f7a7\" returns successfully"
Mar 17 17:42:58.561445 containerd[1511]: time="2025-03-17T17:42:58.561407349Z" level=info msg="StartContainer for \"7691b9ef70f6caf8e36a14b039f0c1a98c86aa968f81139ec247ae7f30d46ce7\" returns successfully"
Mar 17 17:42:58.561960 containerd[1511]: time="2025-03-17T17:42:58.561402641Z" level=info msg="StartContainer for \"e303a31fa313b64a9fe465f7488ee746cf2178acc570092faf62fafc835b9ae7\" returns successfully"
Mar 17 17:42:58.988207 kubelet[2242]: E0317 17:42:58.988164 2242 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 17:42:58.989614 kubelet[2242]: E0317 17:42:58.989581 2242 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 17:42:58.990379 kubelet[2242]: E0317 17:42:58.990358 2242 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 17:42:59.992461 kubelet[2242]: E0317 17:42:59.992424 2242 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 17:43:00.417664 kubelet[2242]: E0317 17:43:00.417617 2242 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost"
Mar 17 17:43:00.469367 kubelet[2242]: E0317 17:43:00.469221 2242 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.182da805d9b6dd57 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-03-17 17:42:53.943192919 +0000 UTC m=+0.471004943,LastTimestamp:2025-03-17 17:42:53.943192919 +0000 UTC m=+0.471004943,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Mar 17 17:43:00.643264 kubelet[2242]: I0317 17:43:00.643201 2242 kubelet_node_status.go:72] "Attempting to register node" node="localhost"
Mar 17 17:43:00.649452 kubelet[2242]: I0317 17:43:00.649420 2242 kubelet_node_status.go:75] "Successfully registered node" node="localhost"
Mar 17 17:43:00.649452 kubelet[2242]: E0317 17:43:00.649451 2242 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found"
Mar 17 17:43:00.657371 kubelet[2242]: E0317 17:43:00.657325 2242 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Mar 17 17:43:00.757918 kubelet[2242]: E0317 17:43:00.757757 2242 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Mar 17 17:43:00.858359 kubelet[2242]: E0317 17:43:00.858303 2242 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Mar 17 17:43:00.958868 kubelet[2242]: E0317 17:43:00.958809 2242 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Mar 17 17:43:01.059390 kubelet[2242]: E0317 17:43:01.059162 2242 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Mar 17 17:43:01.159877 kubelet[2242]: E0317 17:43:01.159809 2242 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Mar 17 17:43:01.260827 kubelet[2242]: E0317 17:43:01.260741 2242 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Mar 17 17:43:01.361711 kubelet[2242]: E0317 17:43:01.361577 2242 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Mar 17 17:43:01.462778 kubelet[2242]: E0317 17:43:01.462712 2242 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Mar 17 17:43:01.563419 kubelet[2242]: E0317 17:43:01.563334 2242 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Mar 17 17:43:01.664071 kubelet[2242]: E0317 17:43:01.663994 2242 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Mar 17 17:43:01.764731 kubelet[2242]: E0317 17:43:01.764662 2242 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Mar 17 17:43:01.977913 kubelet[2242]: I0317 17:43:01.977651 2242 apiserver.go:52] "Watching apiserver"
Mar 17 17:43:02.054302 kubelet[2242]: I0317 17:43:02.054215 2242 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
Mar 17 17:43:02.967255 systemd[1]: Reload requested from client PID 2519 ('systemctl') (unit session-7.scope)...
Mar 17 17:43:02.967275 systemd[1]: Reloading...
Mar 17 17:43:03.080268 zram_generator::config[2569]: No configuration found.
Mar 17 17:43:03.207601 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 17 17:43:03.337689 systemd[1]: Reloading finished in 369 ms.
Mar 17 17:43:03.367121 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 17 17:43:03.381790 systemd[1]: kubelet.service: Deactivated successfully.
Mar 17 17:43:03.382119 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 17 17:43:03.382193 systemd[1]: kubelet.service: Consumed 1.051s CPU time, 123.6M memory peak.
Mar 17 17:43:03.393607 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 17 17:43:03.579170 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 17 17:43:03.584991 (kubelet)[2608]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Mar 17 17:43:03.625291 kubelet[2608]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 17 17:43:03.625291 kubelet[2608]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Mar 17 17:43:03.625291 kubelet[2608]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 17 17:43:03.625896 kubelet[2608]: I0317 17:43:03.625209 2608 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Mar 17 17:43:03.633218 kubelet[2608]: I0317 17:43:03.633155 2608 server.go:486] "Kubelet version" kubeletVersion="v1.31.0"
Mar 17 17:43:03.633218 kubelet[2608]: I0317 17:43:03.633198 2608 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Mar 17 17:43:03.633516 kubelet[2608]: I0317 17:43:03.633486 2608 server.go:929] "Client rotation is on, will bootstrap in background"
Mar 17 17:43:03.634963 kubelet[2608]: I0317 17:43:03.634937 2608 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Mar 17 17:43:03.637172 kubelet[2608]: I0317 17:43:03.637133 2608 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Mar 17 17:43:03.640874 kubelet[2608]: E0317 17:43:03.640824 2608 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Mar 17 17:43:03.640955 kubelet[2608]: I0317 17:43:03.640875 2608 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Mar 17 17:43:03.648148 kubelet[2608]: I0317 17:43:03.648109 2608 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Mar 17 17:43:03.648353 kubelet[2608]: I0317 17:43:03.648303 2608 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Mar 17 17:43:03.648522 kubelet[2608]: I0317 17:43:03.648473 2608 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Mar 17 17:43:03.648707 kubelet[2608]: I0317 17:43:03.648513 2608 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Mar 17 17:43:03.648821 kubelet[2608]: I0317 17:43:03.648709 2608 topology_manager.go:138] "Creating topology manager with none policy"
Mar 17 17:43:03.648821 kubelet[2608]: I0317 17:43:03.648719 2608 container_manager_linux.go:300] "Creating device plugin manager"
Mar 17 17:43:03.648904 kubelet[2608]: I0317 17:43:03.648837 2608 state_mem.go:36] "Initialized new in-memory state store"
Mar 17 17:43:03.649044 kubelet[2608]: I0317 17:43:03.649011 2608 kubelet.go:408] "Attempting to sync node with API server"
Mar 17 17:43:03.649044 kubelet[2608]: I0317 17:43:03.649034 2608 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests"
Mar 17 17:43:03.649102 kubelet[2608]: I0317 17:43:03.649086 2608 kubelet.go:314] "Adding apiserver pod source"
Mar 17 17:43:03.649127 kubelet[2608]: I0317 17:43:03.649109 2608 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Mar 17 17:43:03.651047 kubelet[2608]: I0317 17:43:03.651017 2608 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1"
Mar 17 17:43:03.651928 kubelet[2608]: I0317 17:43:03.651896 2608 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Mar 17 17:43:03.656724 kubelet[2608]: I0317 17:43:03.656555 2608 server.go:1269] "Started kubelet"
Mar 17 17:43:03.656957 kubelet[2608]: I0317 17:43:03.656877 2608 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Mar 17 17:43:03.658200 kubelet[2608]: I0317 17:43:03.658151 2608 server.go:460] "Adding debug handlers to kubelet server"
Mar 17 17:43:03.658557 kubelet[2608]: I0317 17:43:03.658490 2608 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Mar 17 17:43:03.658865 kubelet[2608]: I0317 17:43:03.658846 2608 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Mar 17 17:43:03.659537
kubelet[2608]: I0317 17:43:03.659510 2608 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Mar 17 17:43:03.661781 kubelet[2608]: I0317 17:43:03.661748 2608 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Mar 17 17:43:03.662146 kubelet[2608]: I0317 17:43:03.662107 2608 volume_manager.go:289] "Starting Kubelet Volume Manager" Mar 17 17:43:03.662551 kubelet[2608]: I0317 17:43:03.662223 2608 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Mar 17 17:43:03.662933 kubelet[2608]: I0317 17:43:03.662903 2608 reconciler.go:26] "Reconciler: start to sync state" Mar 17 17:43:03.664079 kubelet[2608]: E0317 17:43:03.664027 2608 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Mar 17 17:43:03.664806 kubelet[2608]: I0317 17:43:03.664674 2608 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 17 17:43:03.666452 kubelet[2608]: I0317 17:43:03.666421 2608 factory.go:221] Registration of the containerd container factory successfully Mar 17 17:43:03.667306 kubelet[2608]: I0317 17:43:03.666560 2608 factory.go:221] Registration of the systemd container factory successfully Mar 17 17:43:03.674225 kubelet[2608]: I0317 17:43:03.674052 2608 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Mar 17 17:43:03.675429 kubelet[2608]: I0317 17:43:03.675409 2608 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Mar 17 17:43:03.675576 kubelet[2608]: I0317 17:43:03.675559 2608 status_manager.go:217] "Starting to sync pod status with apiserver" Mar 17 17:43:03.675690 kubelet[2608]: I0317 17:43:03.675676 2608 kubelet.go:2321] "Starting kubelet main sync loop" Mar 17 17:43:03.676308 kubelet[2608]: E0317 17:43:03.676280 2608 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 17 17:43:03.757758 kubelet[2608]: I0317 17:43:03.757725 2608 cpu_manager.go:214] "Starting CPU manager" policy="none" Mar 17 17:43:03.757758 kubelet[2608]: I0317 17:43:03.757745 2608 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Mar 17 17:43:03.757758 kubelet[2608]: I0317 17:43:03.757769 2608 state_mem.go:36] "Initialized new in-memory state store" Mar 17 17:43:03.758000 kubelet[2608]: I0317 17:43:03.757930 2608 state_mem.go:88] "Updated default CPUSet" cpuSet="" Mar 17 17:43:03.758000 kubelet[2608]: I0317 17:43:03.757946 2608 state_mem.go:96] "Updated CPUSet assignments" assignments={} Mar 17 17:43:03.758000 kubelet[2608]: I0317 17:43:03.757965 2608 policy_none.go:49] "None policy: Start" Mar 17 17:43:03.758622 kubelet[2608]: I0317 17:43:03.758597 2608 memory_manager.go:170] "Starting memorymanager" policy="None" Mar 17 17:43:03.758683 kubelet[2608]: I0317 17:43:03.758634 2608 state_mem.go:35] "Initializing new in-memory state store" Mar 17 17:43:03.758879 kubelet[2608]: I0317 17:43:03.758851 2608 state_mem.go:75] "Updated machine memory state" Mar 17 17:43:03.763337 kubelet[2608]: I0317 17:43:03.763320 2608 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Mar 17 17:43:03.763628 kubelet[2608]: I0317 17:43:03.763504 2608 eviction_manager.go:189] "Eviction manager: starting control loop" Mar 17 17:43:03.763628 kubelet[2608]: I0317 17:43:03.763516 2608 container_log_manager.go:189] 
"Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 17 17:43:03.763752 kubelet[2608]: I0317 17:43:03.763738 2608 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 17 17:43:03.863368 kubelet[2608]: I0317 17:43:03.863329 2608 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/935eb794a6b847325bf60ffc25f7baf5-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"935eb794a6b847325bf60ffc25f7baf5\") " pod="kube-system/kube-apiserver-localhost" Mar 17 17:43:03.863368 kubelet[2608]: I0317 17:43:03.863363 2608 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/935eb794a6b847325bf60ffc25f7baf5-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"935eb794a6b847325bf60ffc25f7baf5\") " pod="kube-system/kube-apiserver-localhost" Mar 17 17:43:03.863368 kubelet[2608]: I0317 17:43:03.863384 2608 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/60762308083b5ef6c837b1be48ec53d6-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"60762308083b5ef6c837b1be48ec53d6\") " pod="kube-system/kube-controller-manager-localhost" Mar 17 17:43:03.863566 kubelet[2608]: I0317 17:43:03.863399 2608 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/60762308083b5ef6c837b1be48ec53d6-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"60762308083b5ef6c837b1be48ec53d6\") " pod="kube-system/kube-controller-manager-localhost" Mar 17 17:43:03.863566 kubelet[2608]: I0317 17:43:03.863416 2608 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: 
\"kubernetes.io/host-path/6f32907a07e55aea05abdc5cd284a8d5-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"6f32907a07e55aea05abdc5cd284a8d5\") " pod="kube-system/kube-scheduler-localhost" Mar 17 17:43:03.863566 kubelet[2608]: I0317 17:43:03.863447 2608 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/935eb794a6b847325bf60ffc25f7baf5-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"935eb794a6b847325bf60ffc25f7baf5\") " pod="kube-system/kube-apiserver-localhost" Mar 17 17:43:03.863566 kubelet[2608]: I0317 17:43:03.863465 2608 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/60762308083b5ef6c837b1be48ec53d6-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"60762308083b5ef6c837b1be48ec53d6\") " pod="kube-system/kube-controller-manager-localhost" Mar 17 17:43:03.863566 kubelet[2608]: I0317 17:43:03.863537 2608 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/60762308083b5ef6c837b1be48ec53d6-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"60762308083b5ef6c837b1be48ec53d6\") " pod="kube-system/kube-controller-manager-localhost" Mar 17 17:43:03.863689 kubelet[2608]: I0317 17:43:03.863603 2608 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/60762308083b5ef6c837b1be48ec53d6-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"60762308083b5ef6c837b1be48ec53d6\") " pod="kube-system/kube-controller-manager-localhost" Mar 17 17:43:03.871414 kubelet[2608]: I0317 17:43:03.871378 2608 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Mar 17 
17:43:03.877561 kubelet[2608]: I0317 17:43:03.877465 2608 kubelet_node_status.go:111] "Node was previously registered" node="localhost" Mar 17 17:43:03.877561 kubelet[2608]: I0317 17:43:03.877536 2608 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Mar 17 17:43:04.111322 kubelet[2608]: E0317 17:43:04.111267 2608 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:43:04.111573 kubelet[2608]: E0317 17:43:04.111538 2608 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:43:04.111745 kubelet[2608]: E0317 17:43:04.111714 2608 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:43:04.652544 kubelet[2608]: I0317 17:43:04.652470 2608 apiserver.go:52] "Watching apiserver" Mar 17 17:43:04.663165 kubelet[2608]: I0317 17:43:04.663012 2608 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Mar 17 17:43:04.733283 kubelet[2608]: E0317 17:43:04.732231 2608 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:43:04.733283 kubelet[2608]: E0317 17:43:04.732928 2608 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:43:04.739783 kubelet[2608]: E0317 17:43:04.739738 2608 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Mar 17 17:43:04.740545 kubelet[2608]: E0317 17:43:04.740405 
2608 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:43:04.887961 kubelet[2608]: I0317 17:43:04.885749 2608 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.885730152 podStartE2EDuration="1.885730152s" podCreationTimestamp="2025-03-17 17:43:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 17:43:04.876760685 +0000 UTC m=+1.286120498" watchObservedRunningTime="2025-03-17 17:43:04.885730152 +0000 UTC m=+1.295089965" Mar 17 17:43:04.896695 kubelet[2608]: I0317 17:43:04.896612 2608 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.8965903819999999 podStartE2EDuration="1.896590382s" podCreationTimestamp="2025-03-17 17:43:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 17:43:04.886020729 +0000 UTC m=+1.295380542" watchObservedRunningTime="2025-03-17 17:43:04.896590382 +0000 UTC m=+1.305950195" Mar 17 17:43:04.907618 kubelet[2608]: I0317 17:43:04.907394 2608 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.90734063 podStartE2EDuration="1.90734063s" podCreationTimestamp="2025-03-17 17:43:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 17:43:04.896826805 +0000 UTC m=+1.306186618" watchObservedRunningTime="2025-03-17 17:43:04.90734063 +0000 UTC m=+1.316700443" Mar 17 17:43:05.733435 kubelet[2608]: E0317 17:43:05.733394 2608 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits 
were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:43:05.733946 kubelet[2608]: E0317 17:43:05.733909 2608 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:43:07.037280 kubelet[2608]: E0317 17:43:07.037204 2608 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:43:07.272403 systemd[1]: Created slice kubepods-besteffort-pod2d71053a_3ebb_463c_b163_80d48dea8d1d.slice - libcontainer container kubepods-besteffort-pod2d71053a_3ebb_463c_b163_80d48dea8d1d.slice. Mar 17 17:43:07.325933 kubelet[2608]: I0317 17:43:07.325788 2608 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Mar 17 17:43:07.326149 containerd[1511]: time="2025-03-17T17:43:07.326109473Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Mar 17 17:43:07.326517 kubelet[2608]: I0317 17:43:07.326291 2608 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Mar 17 17:43:07.376514 kubelet[2608]: I0317 17:43:07.376442 2608 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/2d71053a-3ebb-463c-b163-80d48dea8d1d-kube-proxy\") pod \"kube-proxy-gwnlv\" (UID: \"2d71053a-3ebb-463c-b163-80d48dea8d1d\") " pod="kube-system/kube-proxy-gwnlv" Mar 17 17:43:07.376514 kubelet[2608]: I0317 17:43:07.376519 2608 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2d71053a-3ebb-463c-b163-80d48dea8d1d-xtables-lock\") pod \"kube-proxy-gwnlv\" (UID: \"2d71053a-3ebb-463c-b163-80d48dea8d1d\") " pod="kube-system/kube-proxy-gwnlv" Mar 17 17:43:07.376675 kubelet[2608]: I0317 17:43:07.376543 2608 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2d71053a-3ebb-463c-b163-80d48dea8d1d-lib-modules\") pod \"kube-proxy-gwnlv\" (UID: \"2d71053a-3ebb-463c-b163-80d48dea8d1d\") " pod="kube-system/kube-proxy-gwnlv" Mar 17 17:43:07.376675 kubelet[2608]: I0317 17:43:07.376563 2608 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xrphb\" (UniqueName: \"kubernetes.io/projected/2d71053a-3ebb-463c-b163-80d48dea8d1d-kube-api-access-xrphb\") pod \"kube-proxy-gwnlv\" (UID: \"2d71053a-3ebb-463c-b163-80d48dea8d1d\") " pod="kube-system/kube-proxy-gwnlv" Mar 17 17:43:07.483169 kubelet[2608]: E0317 17:43:07.483133 2608 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Mar 17 17:43:07.483169 kubelet[2608]: E0317 17:43:07.483161 2608 projected.go:194] Error preparing data for projected volume 
kube-api-access-xrphb for pod kube-system/kube-proxy-gwnlv: configmap "kube-root-ca.crt" not found Mar 17 17:43:07.483411 kubelet[2608]: E0317 17:43:07.483232 2608 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/2d71053a-3ebb-463c-b163-80d48dea8d1d-kube-api-access-xrphb podName:2d71053a-3ebb-463c-b163-80d48dea8d1d nodeName:}" failed. No retries permitted until 2025-03-17 17:43:07.983197314 +0000 UTC m=+4.392557117 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-xrphb" (UniqueName: "kubernetes.io/projected/2d71053a-3ebb-463c-b163-80d48dea8d1d-kube-api-access-xrphb") pod "kube-proxy-gwnlv" (UID: "2d71053a-3ebb-463c-b163-80d48dea8d1d") : configmap "kube-root-ca.crt" not found Mar 17 17:43:08.182413 kubelet[2608]: E0317 17:43:08.182354 2608 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:43:08.186569 containerd[1511]: time="2025-03-17T17:43:08.186488938Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-gwnlv,Uid:2d71053a-3ebb-463c-b163-80d48dea8d1d,Namespace:kube-system,Attempt:0,}" Mar 17 17:43:08.224553 containerd[1511]: time="2025-03-17T17:43:08.224397333Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 17:43:08.224851 containerd[1511]: time="2025-03-17T17:43:08.224499438Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 17:43:08.224851 containerd[1511]: time="2025-03-17T17:43:08.224526459Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:43:08.226015 containerd[1511]: time="2025-03-17T17:43:08.225825416Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:43:08.248667 systemd[1]: Started cri-containerd-ae734bdc7b837cdd7b61f4daf6a52ab5fb6b8b509d4cd1a383e1599eef68b9dd.scope - libcontainer container ae734bdc7b837cdd7b61f4daf6a52ab5fb6b8b509d4cd1a383e1599eef68b9dd. Mar 17 17:43:08.271371 containerd[1511]: time="2025-03-17T17:43:08.271314557Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-gwnlv,Uid:2d71053a-3ebb-463c-b163-80d48dea8d1d,Namespace:kube-system,Attempt:0,} returns sandbox id \"ae734bdc7b837cdd7b61f4daf6a52ab5fb6b8b509d4cd1a383e1599eef68b9dd\"" Mar 17 17:43:08.272510 kubelet[2608]: E0317 17:43:08.272481 2608 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:43:08.275078 containerd[1511]: time="2025-03-17T17:43:08.275024501Z" level=info msg="CreateContainer within sandbox \"ae734bdc7b837cdd7b61f4daf6a52ab5fb6b8b509d4cd1a383e1599eef68b9dd\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Mar 17 17:43:08.299393 containerd[1511]: time="2025-03-17T17:43:08.299327342Z" level=info msg="CreateContainer within sandbox \"ae734bdc7b837cdd7b61f4daf6a52ab5fb6b8b509d4cd1a383e1599eef68b9dd\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"1e9501c7fc96535c73cdd02f7094b970c8be667a4b4d2d53be2c929de490b11c\"" Mar 17 17:43:08.300100 containerd[1511]: time="2025-03-17T17:43:08.300061844Z" level=info msg="StartContainer for \"1e9501c7fc96535c73cdd02f7094b970c8be667a4b4d2d53be2c929de490b11c\"" Mar 17 17:43:08.338603 systemd[1]: Started cri-containerd-1e9501c7fc96535c73cdd02f7094b970c8be667a4b4d2d53be2c929de490b11c.scope - libcontainer container 1e9501c7fc96535c73cdd02f7094b970c8be667a4b4d2d53be2c929de490b11c. 
Mar 17 17:43:08.386817 containerd[1511]: time="2025-03-17T17:43:08.386762507Z" level=info msg="StartContainer for \"1e9501c7fc96535c73cdd02f7094b970c8be667a4b4d2d53be2c929de490b11c\" returns successfully" Mar 17 17:43:08.398628 systemd[1]: Created slice kubepods-besteffort-pod658fc924_de0b_4aa6_bd51_320c46b585e6.slice - libcontainer container kubepods-besteffort-pod658fc924_de0b_4aa6_bd51_320c46b585e6.slice. Mar 17 17:43:08.584063 kubelet[2608]: I0317 17:43:08.583881 2608 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-psz4d\" (UniqueName: \"kubernetes.io/projected/658fc924-de0b-4aa6-bd51-320c46b585e6-kube-api-access-psz4d\") pod \"tigera-operator-64ff5465b7-gqvgj\" (UID: \"658fc924-de0b-4aa6-bd51-320c46b585e6\") " pod="tigera-operator/tigera-operator-64ff5465b7-gqvgj" Mar 17 17:43:08.584063 kubelet[2608]: I0317 17:43:08.583935 2608 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/658fc924-de0b-4aa6-bd51-320c46b585e6-var-lib-calico\") pod \"tigera-operator-64ff5465b7-gqvgj\" (UID: \"658fc924-de0b-4aa6-bd51-320c46b585e6\") " pod="tigera-operator/tigera-operator-64ff5465b7-gqvgj" Mar 17 17:43:08.703124 containerd[1511]: time="2025-03-17T17:43:08.703065107Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-64ff5465b7-gqvgj,Uid:658fc924-de0b-4aa6-bd51-320c46b585e6,Namespace:tigera-operator,Attempt:0,}" Mar 17 17:43:08.741337 kubelet[2608]: E0317 17:43:08.740036 2608 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:43:08.745186 containerd[1511]: time="2025-03-17T17:43:08.744865984Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 17:43:08.745186 containerd[1511]: time="2025-03-17T17:43:08.744932811Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 17:43:08.745186 containerd[1511]: time="2025-03-17T17:43:08.744946828Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:43:08.745186 containerd[1511]: time="2025-03-17T17:43:08.745054794Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:43:08.755061 kubelet[2608]: I0317 17:43:08.754970 2608 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-gwnlv" podStartSLOduration=1.7549474269999998 podStartE2EDuration="1.754947427s" podCreationTimestamp="2025-03-17 17:43:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 17:43:08.754710697 +0000 UTC m=+5.164070510" watchObservedRunningTime="2025-03-17 17:43:08.754947427 +0000 UTC m=+5.164307231" Mar 17 17:43:08.771653 systemd[1]: Started cri-containerd-7d6a99b97dce6ba088be7f243d0f96ccb378a6f2f961f26152d611da0dac98b3.scope - libcontainer container 7d6a99b97dce6ba088be7f243d0f96ccb378a6f2f961f26152d611da0dac98b3. 
Mar 17 17:43:08.820076 containerd[1511]: time="2025-03-17T17:43:08.819936301Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-64ff5465b7-gqvgj,Uid:658fc924-de0b-4aa6-bd51-320c46b585e6,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"7d6a99b97dce6ba088be7f243d0f96ccb378a6f2f961f26152d611da0dac98b3\"" Mar 17 17:43:08.824723 containerd[1511]: time="2025-03-17T17:43:08.824645832Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.5\"" Mar 17 17:43:08.914452 sudo[1694]: pam_unix(sudo:session): session closed for user root Mar 17 17:43:08.916400 sshd[1693]: Connection closed by 10.0.0.1 port 49538 Mar 17 17:43:08.917071 sshd-session[1689]: pam_unix(sshd:session): session closed for user core Mar 17 17:43:08.921942 systemd[1]: sshd@6-10.0.0.14:22-10.0.0.1:49538.service: Deactivated successfully. Mar 17 17:43:08.924435 systemd[1]: session-7.scope: Deactivated successfully. Mar 17 17:43:08.924670 systemd[1]: session-7.scope: Consumed 5.513s CPU time, 210.9M memory peak. Mar 17 17:43:08.925981 systemd-logind[1494]: Session 7 logged out. Waiting for processes to exit. Mar 17 17:43:08.926989 systemd-logind[1494]: Removed session 7. Mar 17 17:43:09.981660 kubelet[2608]: E0317 17:43:09.981597 2608 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:43:10.744544 kubelet[2608]: E0317 17:43:10.744486 2608 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:43:12.708894 update_engine[1498]: I20250317 17:43:12.708814 1498 update_attempter.cc:509] Updating boot flags... 
Mar 17 17:43:12.792274 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 42 scanned by (udev-worker) (2953) Mar 17 17:43:12.868332 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 42 scanned by (udev-worker) (2954) Mar 17 17:43:12.949281 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 42 scanned by (udev-worker) (2954) Mar 17 17:43:15.285158 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount87916423.mount: Deactivated successfully. Mar 17 17:43:15.608153 kubelet[2608]: E0317 17:43:15.607665 2608 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:43:16.176847 containerd[1511]: time="2025-03-17T17:43:16.176755314Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:43:16.184557 containerd[1511]: time="2025-03-17T17:43:16.184455259Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.5: active requests=0, bytes read=21945008" Mar 17 17:43:16.191440 containerd[1511]: time="2025-03-17T17:43:16.191346902Z" level=info msg="ImageCreate event name:\"sha256:dc4a8a56c133edb1bc4c3d6bc94bcd96f2bde82413370cb1783ac2d7f3a46d53\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:43:16.199378 containerd[1511]: time="2025-03-17T17:43:16.199306840Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:3341fa9475c0325b86228c8726389f9bae9fd6c430c66fe5cd5dc39d7bb6ad4b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:43:16.200426 containerd[1511]: time="2025-03-17T17:43:16.200360305Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.5\" with image id \"sha256:dc4a8a56c133edb1bc4c3d6bc94bcd96f2bde82413370cb1783ac2d7f3a46d53\", repo tag \"quay.io/tigera/operator:v1.36.5\", repo digest 
\"quay.io/tigera/operator@sha256:3341fa9475c0325b86228c8726389f9bae9fd6c430c66fe5cd5dc39d7bb6ad4b\", size \"21941003\" in 7.375631697s" Mar 17 17:43:16.200426 containerd[1511]: time="2025-03-17T17:43:16.200422363Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.5\" returns image reference \"sha256:dc4a8a56c133edb1bc4c3d6bc94bcd96f2bde82413370cb1783ac2d7f3a46d53\"" Mar 17 17:43:16.203191 containerd[1511]: time="2025-03-17T17:43:16.203145501Z" level=info msg="CreateContainer within sandbox \"7d6a99b97dce6ba088be7f243d0f96ccb378a6f2f961f26152d611da0dac98b3\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Mar 17 17:43:16.445913 containerd[1511]: time="2025-03-17T17:43:16.445729360Z" level=info msg="CreateContainer within sandbox \"7d6a99b97dce6ba088be7f243d0f96ccb378a6f2f961f26152d611da0dac98b3\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"9af48eb5a3ddeee36b7e246e670d7dd93fff6f6e7d7359b056c5f6b9e6e304bb\"" Mar 17 17:43:16.446528 containerd[1511]: time="2025-03-17T17:43:16.446482226Z" level=info msg="StartContainer for \"9af48eb5a3ddeee36b7e246e670d7dd93fff6f6e7d7359b056c5f6b9e6e304bb\"" Mar 17 17:43:16.478525 systemd[1]: Started cri-containerd-9af48eb5a3ddeee36b7e246e670d7dd93fff6f6e7d7359b056c5f6b9e6e304bb.scope - libcontainer container 9af48eb5a3ddeee36b7e246e670d7dd93fff6f6e7d7359b056c5f6b9e6e304bb. 
Mar 17 17:43:16.545607 containerd[1511]: time="2025-03-17T17:43:16.545535647Z" level=info msg="StartContainer for \"9af48eb5a3ddeee36b7e246e670d7dd93fff6f6e7d7359b056c5f6b9e6e304bb\" returns successfully" Mar 17 17:43:16.764716 kubelet[2608]: I0317 17:43:16.764540 2608 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-64ff5465b7-gqvgj" podStartSLOduration=1.385855921 podStartE2EDuration="8.764519805s" podCreationTimestamp="2025-03-17 17:43:08 +0000 UTC" firstStartedPulling="2025-03-17 17:43:08.822743134 +0000 UTC m=+5.232102947" lastFinishedPulling="2025-03-17 17:43:16.201407028 +0000 UTC m=+12.610766831" observedRunningTime="2025-03-17 17:43:16.764449682 +0000 UTC m=+13.173809525" watchObservedRunningTime="2025-03-17 17:43:16.764519805 +0000 UTC m=+13.173879628" Mar 17 17:43:17.041841 kubelet[2608]: E0317 17:43:17.041694 2608 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:43:19.356587 systemd[1]: Created slice kubepods-besteffort-podd655b896_0dfb_4fb1_aa8d_660e846e83c4.slice - libcontainer container kubepods-besteffort-podd655b896_0dfb_4fb1_aa8d_660e846e83c4.slice. Mar 17 17:43:19.420094 systemd[1]: Created slice kubepods-besteffort-poda3a60e31_9c2d_4283_976d_e6a92c69ec09.slice - libcontainer container kubepods-besteffort-poda3a60e31_9c2d_4283_976d_e6a92c69ec09.slice. 
Mar 17 17:43:19.452520 kubelet[2608]: I0317 17:43:19.452447 2608 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/d655b896-0dfb-4fb1-aa8d-660e846e83c4-typha-certs\") pod \"calico-typha-5f877c6479-fdsgz\" (UID: \"d655b896-0dfb-4fb1-aa8d-660e846e83c4\") " pod="calico-system/calico-typha-5f877c6479-fdsgz" Mar 17 17:43:19.453035 kubelet[2608]: I0317 17:43:19.452502 2608 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xrzrj\" (UniqueName: \"kubernetes.io/projected/d655b896-0dfb-4fb1-aa8d-660e846e83c4-kube-api-access-xrzrj\") pod \"calico-typha-5f877c6479-fdsgz\" (UID: \"d655b896-0dfb-4fb1-aa8d-660e846e83c4\") " pod="calico-system/calico-typha-5f877c6479-fdsgz" Mar 17 17:43:19.453035 kubelet[2608]: I0317 17:43:19.452599 2608 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d655b896-0dfb-4fb1-aa8d-660e846e83c4-tigera-ca-bundle\") pod \"calico-typha-5f877c6479-fdsgz\" (UID: \"d655b896-0dfb-4fb1-aa8d-660e846e83c4\") " pod="calico-system/calico-typha-5f877c6479-fdsgz" Mar 17 17:43:19.522939 kubelet[2608]: E0317 17:43:19.522546 2608 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-j8ss7" podUID="8ed5b12a-6d88-43a8-8215-c1e4e9724067" Mar 17 17:43:19.553724 kubelet[2608]: I0317 17:43:19.553650 2608 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a3a60e31-9c2d-4283-976d-e6a92c69ec09-tigera-ca-bundle\") pod \"calico-node-hkj5m\" (UID: \"a3a60e31-9c2d-4283-976d-e6a92c69ec09\") " 
pod="calico-system/calico-node-hkj5m" Mar 17 17:43:19.553724 kubelet[2608]: I0317 17:43:19.553724 2608 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/8ed5b12a-6d88-43a8-8215-c1e4e9724067-socket-dir\") pod \"csi-node-driver-j8ss7\" (UID: \"8ed5b12a-6d88-43a8-8215-c1e4e9724067\") " pod="calico-system/csi-node-driver-j8ss7" Mar 17 17:43:19.553945 kubelet[2608]: I0317 17:43:19.553755 2608 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a3a60e31-9c2d-4283-976d-e6a92c69ec09-lib-modules\") pod \"calico-node-hkj5m\" (UID: \"a3a60e31-9c2d-4283-976d-e6a92c69ec09\") " pod="calico-system/calico-node-hkj5m" Mar 17 17:43:19.553945 kubelet[2608]: I0317 17:43:19.553791 2608 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/a3a60e31-9c2d-4283-976d-e6a92c69ec09-var-lib-calico\") pod \"calico-node-hkj5m\" (UID: \"a3a60e31-9c2d-4283-976d-e6a92c69ec09\") " pod="calico-system/calico-node-hkj5m" Mar 17 17:43:19.553945 kubelet[2608]: I0317 17:43:19.553812 2608 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/8ed5b12a-6d88-43a8-8215-c1e4e9724067-registration-dir\") pod \"csi-node-driver-j8ss7\" (UID: \"8ed5b12a-6d88-43a8-8215-c1e4e9724067\") " pod="calico-system/csi-node-driver-j8ss7" Mar 17 17:43:19.553945 kubelet[2608]: I0317 17:43:19.553840 2608 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/a3a60e31-9c2d-4283-976d-e6a92c69ec09-cni-log-dir\") pod \"calico-node-hkj5m\" (UID: \"a3a60e31-9c2d-4283-976d-e6a92c69ec09\") " pod="calico-system/calico-node-hkj5m" Mar 17 17:43:19.553945 
kubelet[2608]: I0317 17:43:19.553883 2608 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/a3a60e31-9c2d-4283-976d-e6a92c69ec09-node-certs\") pod \"calico-node-hkj5m\" (UID: \"a3a60e31-9c2d-4283-976d-e6a92c69ec09\") " pod="calico-system/calico-node-hkj5m" Mar 17 17:43:19.554113 kubelet[2608]: I0317 17:43:19.553962 2608 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/a3a60e31-9c2d-4283-976d-e6a92c69ec09-cni-net-dir\") pod \"calico-node-hkj5m\" (UID: \"a3a60e31-9c2d-4283-976d-e6a92c69ec09\") " pod="calico-system/calico-node-hkj5m" Mar 17 17:43:19.554113 kubelet[2608]: I0317 17:43:19.553994 2608 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/8ed5b12a-6d88-43a8-8215-c1e4e9724067-varrun\") pod \"csi-node-driver-j8ss7\" (UID: \"8ed5b12a-6d88-43a8-8215-c1e4e9724067\") " pod="calico-system/csi-node-driver-j8ss7" Mar 17 17:43:19.554113 kubelet[2608]: I0317 17:43:19.554038 2608 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a3a60e31-9c2d-4283-976d-e6a92c69ec09-xtables-lock\") pod \"calico-node-hkj5m\" (UID: \"a3a60e31-9c2d-4283-976d-e6a92c69ec09\") " pod="calico-system/calico-node-hkj5m" Mar 17 17:43:19.554113 kubelet[2608]: I0317 17:43:19.554064 2608 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/a3a60e31-9c2d-4283-976d-e6a92c69ec09-policysync\") pod \"calico-node-hkj5m\" (UID: \"a3a60e31-9c2d-4283-976d-e6a92c69ec09\") " pod="calico-system/calico-node-hkj5m" Mar 17 17:43:19.554113 kubelet[2608]: I0317 17:43:19.554085 2608 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/a3a60e31-9c2d-4283-976d-e6a92c69ec09-flexvol-driver-host\") pod \"calico-node-hkj5m\" (UID: \"a3a60e31-9c2d-4283-976d-e6a92c69ec09\") " pod="calico-system/calico-node-hkj5m" Mar 17 17:43:19.554329 kubelet[2608]: I0317 17:43:19.554109 2608 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w89mp\" (UniqueName: \"kubernetes.io/projected/a3a60e31-9c2d-4283-976d-e6a92c69ec09-kube-api-access-w89mp\") pod \"calico-node-hkj5m\" (UID: \"a3a60e31-9c2d-4283-976d-e6a92c69ec09\") " pod="calico-system/calico-node-hkj5m" Mar 17 17:43:19.554329 kubelet[2608]: I0317 17:43:19.554136 2608 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/a3a60e31-9c2d-4283-976d-e6a92c69ec09-cni-bin-dir\") pod \"calico-node-hkj5m\" (UID: \"a3a60e31-9c2d-4283-976d-e6a92c69ec09\") " pod="calico-system/calico-node-hkj5m" Mar 17 17:43:19.554329 kubelet[2608]: I0317 17:43:19.554168 2608 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/8ed5b12a-6d88-43a8-8215-c1e4e9724067-kubelet-dir\") pod \"csi-node-driver-j8ss7\" (UID: \"8ed5b12a-6d88-43a8-8215-c1e4e9724067\") " pod="calico-system/csi-node-driver-j8ss7" Mar 17 17:43:19.554329 kubelet[2608]: I0317 17:43:19.554204 2608 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gwgml\" (UniqueName: \"kubernetes.io/projected/8ed5b12a-6d88-43a8-8215-c1e4e9724067-kube-api-access-gwgml\") pod \"csi-node-driver-j8ss7\" (UID: \"8ed5b12a-6d88-43a8-8215-c1e4e9724067\") " pod="calico-system/csi-node-driver-j8ss7" Mar 17 17:43:19.554329 kubelet[2608]: I0317 17:43:19.554279 2608 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/a3a60e31-9c2d-4283-976d-e6a92c69ec09-var-run-calico\") pod \"calico-node-hkj5m\" (UID: \"a3a60e31-9c2d-4283-976d-e6a92c69ec09\") " pod="calico-system/calico-node-hkj5m" Mar 17 17:43:19.656739 kubelet[2608]: E0317 17:43:19.656217 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:43:19.656739 kubelet[2608]: W0317 17:43:19.656261 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:43:19.656739 kubelet[2608]: E0317 17:43:19.656311 2608 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:43:19.656739 kubelet[2608]: E0317 17:43:19.656549 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:43:19.656739 kubelet[2608]: W0317 17:43:19.656557 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:43:19.656739 kubelet[2608]: E0317 17:43:19.656579 2608 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 17:43:19.657107 kubelet[2608]: E0317 17:43:19.656784 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:43:19.657107 kubelet[2608]: W0317 17:43:19.656792 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:43:19.657107 kubelet[2608]: E0317 17:43:19.656812 2608 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:43:19.657107 kubelet[2608]: E0317 17:43:19.657066 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:43:19.657107 kubelet[2608]: W0317 17:43:19.657079 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:43:19.657312 kubelet[2608]: E0317 17:43:19.657110 2608 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 17:43:19.657642 kubelet[2608]: E0317 17:43:19.657624 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:43:19.657642 kubelet[2608]: W0317 17:43:19.657635 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:43:19.657703 kubelet[2608]: E0317 17:43:19.657663 2608 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:43:19.658541 kubelet[2608]: E0317 17:43:19.657882 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:43:19.658541 kubelet[2608]: W0317 17:43:19.657894 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:43:19.658541 kubelet[2608]: E0317 17:43:19.657924 2608 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 17:43:19.658541 kubelet[2608]: E0317 17:43:19.658301 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:43:19.658541 kubelet[2608]: W0317 17:43:19.658310 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:43:19.658541 kubelet[2608]: E0317 17:43:19.658349 2608 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:43:19.658785 kubelet[2608]: E0317 17:43:19.658551 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:43:19.658785 kubelet[2608]: W0317 17:43:19.658559 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:43:19.658785 kubelet[2608]: E0317 17:43:19.658691 2608 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 17:43:19.659262 kubelet[2608]: E0317 17:43:19.658917 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:43:19.659262 kubelet[2608]: W0317 17:43:19.658930 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:43:19.659262 kubelet[2608]: E0317 17:43:19.659028 2608 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:43:19.659353 kubelet[2608]: E0317 17:43:19.659285 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:43:19.659353 kubelet[2608]: W0317 17:43:19.659294 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:43:19.659396 kubelet[2608]: E0317 17:43:19.659381 2608 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 17:43:19.659610 kubelet[2608]: E0317 17:43:19.659574 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:43:19.659610 kubelet[2608]: W0317 17:43:19.659590 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:43:19.659695 kubelet[2608]: E0317 17:43:19.659677 2608 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:43:19.659957 kubelet[2608]: E0317 17:43:19.659927 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:43:19.659957 kubelet[2608]: W0317 17:43:19.659939 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:43:19.660070 kubelet[2608]: E0317 17:43:19.660047 2608 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 17:43:19.660630 kubelet[2608]: E0317 17:43:19.660579 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:43:19.660630 kubelet[2608]: W0317 17:43:19.660614 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:43:19.660722 kubelet[2608]: E0317 17:43:19.660702 2608 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:43:19.660943 kubelet[2608]: E0317 17:43:19.660851 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:43:19.660943 kubelet[2608]: W0317 17:43:19.660867 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:43:19.662306 kubelet[2608]: E0317 17:43:19.661106 2608 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 17:43:19.662306 kubelet[2608]: E0317 17:43:19.661110 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:43:19.662533 kubelet[2608]: W0317 17:43:19.661387 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:43:19.663827 kubelet[2608]: E0317 17:43:19.662796 2608 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:43:19.664291 kubelet[2608]: E0317 17:43:19.664265 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:43:19.664291 kubelet[2608]: W0317 17:43:19.664286 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:43:19.664495 kubelet[2608]: E0317 17:43:19.664471 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:43:19.664495 kubelet[2608]: W0317 17:43:19.664488 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:43:19.665106 kubelet[2608]: E0317 17:43:19.664644 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:43:19.665106 kubelet[2608]: W0317 17:43:19.664657 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: 
[init], error: executable file not found in $PATH, output: "" Mar 17 17:43:19.665106 kubelet[2608]: E0317 17:43:19.664788 2608 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:43:19.665106 kubelet[2608]: E0317 17:43:19.664803 2608 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:43:19.665106 kubelet[2608]: E0317 17:43:19.664813 2608 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:43:19.665106 kubelet[2608]: E0317 17:43:19.664824 2608 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:43:19.665106 kubelet[2608]: E0317 17:43:19.664886 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:43:19.665106 kubelet[2608]: W0317 17:43:19.664899 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:43:19.665106 kubelet[2608]: E0317 17:43:19.665004 2608 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 17:43:19.665430 containerd[1511]: time="2025-03-17T17:43:19.664780456Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5f877c6479-fdsgz,Uid:d655b896-0dfb-4fb1-aa8d-660e846e83c4,Namespace:calico-system,Attempt:0,}" Mar 17 17:43:19.666512 kubelet[2608]: E0317 17:43:19.665338 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:43:19.666512 kubelet[2608]: W0317 17:43:19.665352 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:43:19.666512 kubelet[2608]: E0317 17:43:19.665455 2608 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:43:19.666512 kubelet[2608]: E0317 17:43:19.665594 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:43:19.666512 kubelet[2608]: W0317 17:43:19.665604 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:43:19.666512 kubelet[2608]: E0317 17:43:19.665932 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:43:19.666512 kubelet[2608]: W0317 17:43:19.665943 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:43:19.666512 kubelet[2608]: E0317 17:43:19.666408 2608 plugins.go:691] "Error dynamically probing plugins" err="error creating 
Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:43:19.666512 kubelet[2608]: E0317 17:43:19.666465 2608 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:43:19.666733 kubelet[2608]: E0317 17:43:19.666605 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:43:19.666733 kubelet[2608]: W0317 17:43:19.666614 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:43:19.667357 kubelet[2608]: E0317 17:43:19.667290 2608 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:43:19.669513 kubelet[2608]: E0317 17:43:19.669490 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:43:19.669513 kubelet[2608]: W0317 17:43:19.669509 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:43:19.669617 kubelet[2608]: E0317 17:43:19.669564 2608 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 17:43:19.669900 kubelet[2608]: E0317 17:43:19.669814 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:43:19.669900 kubelet[2608]: W0317 17:43:19.669823 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:43:19.669900 kubelet[2608]: E0317 17:43:19.669878 2608 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:43:19.670312 kubelet[2608]: E0317 17:43:19.670126 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:43:19.670312 kubelet[2608]: W0317 17:43:19.670134 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:43:19.670312 kubelet[2608]: E0317 17:43:19.670247 2608 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 17:43:19.670651 kubelet[2608]: E0317 17:43:19.670411 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:43:19.670651 kubelet[2608]: W0317 17:43:19.670423 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:43:19.670651 kubelet[2608]: E0317 17:43:19.670479 2608 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:43:19.671064 kubelet[2608]: E0317 17:43:19.671048 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:43:19.671064 kubelet[2608]: W0317 17:43:19.671060 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:43:19.671134 kubelet[2608]: E0317 17:43:19.671109 2608 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 17:43:19.671436 kubelet[2608]: E0317 17:43:19.671407 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:43:19.671436 kubelet[2608]: W0317 17:43:19.671418 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:43:19.671591 kubelet[2608]: E0317 17:43:19.671568 2608 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:43:19.671834 kubelet[2608]: E0317 17:43:19.671816 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:43:19.671834 kubelet[2608]: W0317 17:43:19.671830 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:43:19.672042 kubelet[2608]: E0317 17:43:19.672027 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:43:19.672153 kubelet[2608]: W0317 17:43:19.672096 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:43:19.672276 kubelet[2608]: E0317 17:43:19.672029 2608 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 17:43:19.672276 kubelet[2608]: E0317 17:43:19.672231 2608 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:43:19.672628 kubelet[2608]: E0317 17:43:19.672532 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:43:19.672628 kubelet[2608]: W0317 17:43:19.672545 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:43:19.672779 kubelet[2608]: E0317 17:43:19.672731 2608 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:43:19.673686 kubelet[2608]: E0317 17:43:19.673581 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:43:19.673686 kubelet[2608]: W0317 17:43:19.673595 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:43:19.673686 kubelet[2608]: E0317 17:43:19.673626 2608 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 17:43:19.673857 kubelet[2608]: E0317 17:43:19.673845 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:43:19.673968 kubelet[2608]: W0317 17:43:19.673954 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:43:19.674286 kubelet[2608]: E0317 17:43:19.674264 2608 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:43:19.674756 kubelet[2608]: E0317 17:43:19.674567 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:43:19.674756 kubelet[2608]: W0317 17:43:19.674590 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:43:19.674756 kubelet[2608]: E0317 17:43:19.674632 2608 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 17:43:19.674890 kubelet[2608]: E0317 17:43:19.674866 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:43:19.674890 kubelet[2608]: W0317 17:43:19.674886 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:43:19.674960 kubelet[2608]: E0317 17:43:19.674943 2608 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:43:19.675221 kubelet[2608]: E0317 17:43:19.675191 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:43:19.675221 kubelet[2608]: W0317 17:43:19.675206 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:43:19.675536 kubelet[2608]: E0317 17:43:19.675361 2608 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 17:43:19.682532 kubelet[2608]: E0317 17:43:19.682500 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:43:19.682532 kubelet[2608]: W0317 17:43:19.682525 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:43:19.682663 kubelet[2608]: E0317 17:43:19.682544 2608 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:43:19.699648 containerd[1511]: time="2025-03-17T17:43:19.699304705Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 17:43:19.699648 containerd[1511]: time="2025-03-17T17:43:19.699374537Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 17:43:19.699648 containerd[1511]: time="2025-03-17T17:43:19.699397551Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:43:19.699648 containerd[1511]: time="2025-03-17T17:43:19.699479786Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:43:19.719396 systemd[1]: Started cri-containerd-24ace7056f5ad0fd01d22f11331152563b1e6e5cc748fa0d982783159a4b17f0.scope - libcontainer container 24ace7056f5ad0fd01d22f11331152563b1e6e5cc748fa0d982783159a4b17f0. 
Mar 17 17:43:19.725132 kubelet[2608]: E0317 17:43:19.724834 2608 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:43:19.725655 containerd[1511]: time="2025-03-17T17:43:19.725317261Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-hkj5m,Uid:a3a60e31-9c2d-4283-976d-e6a92c69ec09,Namespace:calico-system,Attempt:0,}" Mar 17 17:43:19.759615 containerd[1511]: time="2025-03-17T17:43:19.759540111Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5f877c6479-fdsgz,Uid:d655b896-0dfb-4fb1-aa8d-660e846e83c4,Namespace:calico-system,Attempt:0,} returns sandbox id \"24ace7056f5ad0fd01d22f11331152563b1e6e5cc748fa0d982783159a4b17f0\"" Mar 17 17:43:19.760880 kubelet[2608]: E0317 17:43:19.760555 2608 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:43:19.761756 containerd[1511]: time="2025-03-17T17:43:19.761640753Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.2\"" Mar 17 17:43:20.003929 containerd[1511]: time="2025-03-17T17:43:20.003719301Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 17:43:20.003929 containerd[1511]: time="2025-03-17T17:43:20.003784134Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 17:43:20.003929 containerd[1511]: time="2025-03-17T17:43:20.003797970Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:43:20.003929 containerd[1511]: time="2025-03-17T17:43:20.003908609Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:43:20.025424 systemd[1]: Started cri-containerd-2c98777e56d29a8dd0ce920669c4df7a9bf6cd4047b73997c53484383de257e7.scope - libcontainer container 2c98777e56d29a8dd0ce920669c4df7a9bf6cd4047b73997c53484383de257e7. Mar 17 17:43:20.049779 containerd[1511]: time="2025-03-17T17:43:20.049732681Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-hkj5m,Uid:a3a60e31-9c2d-4283-976d-e6a92c69ec09,Namespace:calico-system,Attempt:0,} returns sandbox id \"2c98777e56d29a8dd0ce920669c4df7a9bf6cd4047b73997c53484383de257e7\"" Mar 17 17:43:20.050418 kubelet[2608]: E0317 17:43:20.050395 2608 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:43:20.676781 kubelet[2608]: E0317 17:43:20.676718 2608 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-j8ss7" podUID="8ed5b12a-6d88-43a8-8215-c1e4e9724067" Mar 17 17:43:22.676594 kubelet[2608]: E0317 17:43:22.676511 2608 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-j8ss7" podUID="8ed5b12a-6d88-43a8-8215-c1e4e9724067" Mar 17 17:43:22.980733 containerd[1511]: time="2025-03-17T17:43:22.980580183Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:43:22.981505 containerd[1511]: time="2025-03-17T17:43:22.981377618Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.2: active requests=0, bytes 
read=30414075" Mar 17 17:43:22.982469 containerd[1511]: time="2025-03-17T17:43:22.982427000Z" level=info msg="ImageCreate event name:\"sha256:1d6f9d005866d74e6f0a8b0b8b743d0eaf4efcb7c7032fd2215da9c6ca131cb5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:43:22.984476 containerd[1511]: time="2025-03-17T17:43:22.984439350Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:9839fd34b4c1bad50beed72aec59c64893487a46eea57dc2d7d66c3041d7bcce\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:43:22.985044 containerd[1511]: time="2025-03-17T17:43:22.985009336Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.2\" with image id \"sha256:1d6f9d005866d74e6f0a8b0b8b743d0eaf4efcb7c7032fd2215da9c6ca131cb5\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.2\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:9839fd34b4c1bad50beed72aec59c64893487a46eea57dc2d7d66c3041d7bcce\", size \"31907171\" in 3.223336533s" Mar 17 17:43:22.985044 containerd[1511]: time="2025-03-17T17:43:22.985036266Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.2\" returns image reference \"sha256:1d6f9d005866d74e6f0a8b0b8b743d0eaf4efcb7c7032fd2215da9c6ca131cb5\"" Mar 17 17:43:22.986016 containerd[1511]: time="2025-03-17T17:43:22.985983465Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.2\"" Mar 17 17:43:22.993868 containerd[1511]: time="2025-03-17T17:43:22.993807658Z" level=info msg="CreateContainer within sandbox \"24ace7056f5ad0fd01d22f11331152563b1e6e5cc748fa0d982783159a4b17f0\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Mar 17 17:43:23.010965 containerd[1511]: time="2025-03-17T17:43:23.010929207Z" level=info msg="CreateContainer within sandbox \"24ace7056f5ad0fd01d22f11331152563b1e6e5cc748fa0d982783159a4b17f0\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id 
\"f965da7b9f022b6447707c5c0a70cbe0fe22465be3e55c379ca863331a4d9c5f\"" Mar 17 17:43:23.011301 containerd[1511]: time="2025-03-17T17:43:23.011275051Z" level=info msg="StartContainer for \"f965da7b9f022b6447707c5c0a70cbe0fe22465be3e55c379ca863331a4d9c5f\"" Mar 17 17:43:23.047706 systemd[1]: Started cri-containerd-f965da7b9f022b6447707c5c0a70cbe0fe22465be3e55c379ca863331a4d9c5f.scope - libcontainer container f965da7b9f022b6447707c5c0a70cbe0fe22465be3e55c379ca863331a4d9c5f. Mar 17 17:43:23.090775 containerd[1511]: time="2025-03-17T17:43:23.090722543Z" level=info msg="StartContainer for \"f965da7b9f022b6447707c5c0a70cbe0fe22465be3e55c379ca863331a4d9c5f\" returns successfully" Mar 17 17:43:23.769756 kubelet[2608]: E0317 17:43:23.769714 2608 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:43:23.783270 kubelet[2608]: E0317 17:43:23.783204 2608 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:43:23.783712 kubelet[2608]: W0317 17:43:23.783375 2608 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:43:23.783712 kubelet[2608]: E0317 17:43:23.783522 2608 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 17:43:24.519231 containerd[1511]: time="2025-03-17T17:43:24.519174284Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:43:24.519903 containerd[1511]: time="2025-03-17T17:43:24.519840522Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.2: active requests=0, bytes read=5364011" Mar 17 17:43:24.520806 containerd[1511]: time="2025-03-17T17:43:24.520773752Z" level=info msg="ImageCreate event name:\"sha256:441bf8ace5b7fa3742b7fafaf6cd60fea340dd307169a18c75a1d78cba3a8365\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:43:24.522784 containerd[1511]: time="2025-03-17T17:43:24.522753076Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:51d9341a4a37e278a906f40ecc73f5076e768612c21621f1b1d4f2b2f0735a1d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:43:24.523408 containerd[1511]: time="2025-03-17T17:43:24.523374528Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.2\" with image id \"sha256:441bf8ace5b7fa3742b7fafaf6cd60fea340dd307169a18c75a1d78cba3a8365\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.2\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:51d9341a4a37e278a906f40ecc73f5076e768612c21621f1b1d4f2b2f0735a1d\", size \"6857075\" in 1.537355505s" Mar 17 17:43:24.523408 containerd[1511]: time="2025-03-17T17:43:24.523404235Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.2\" returns image reference \"sha256:441bf8ace5b7fa3742b7fafaf6cd60fea340dd307169a18c75a1d78cba3a8365\"" Mar 17 17:43:24.525152 containerd[1511]: time="2025-03-17T17:43:24.525087821Z" level=info msg="CreateContainer within sandbox \"2c98777e56d29a8dd0ce920669c4df7a9bf6cd4047b73997c53484383de257e7\" for container 
&ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Mar 17 17:43:24.545097 containerd[1511]: time="2025-03-17T17:43:24.545018671Z" level=info msg="CreateContainer within sandbox \"2c98777e56d29a8dd0ce920669c4df7a9bf6cd4047b73997c53484383de257e7\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"ccb6e0d2c27e5f5d811007877a1b08eff6a5e4b4df08e20cd8ce637304f9a903\"" Mar 17 17:43:24.546579 containerd[1511]: time="2025-03-17T17:43:24.545804213Z" level=info msg="StartContainer for \"ccb6e0d2c27e5f5d811007877a1b08eff6a5e4b4df08e20cd8ce637304f9a903\"" Mar 17 17:43:24.579387 systemd[1]: Started cri-containerd-ccb6e0d2c27e5f5d811007877a1b08eff6a5e4b4df08e20cd8ce637304f9a903.scope - libcontainer container ccb6e0d2c27e5f5d811007877a1b08eff6a5e4b4df08e20cd8ce637304f9a903. Mar 17 17:43:24.615853 containerd[1511]: time="2025-03-17T17:43:24.615800025Z" level=info msg="StartContainer for \"ccb6e0d2c27e5f5d811007877a1b08eff6a5e4b4df08e20cd8ce637304f9a903\" returns successfully" Mar 17 17:43:24.630520 systemd[1]: cri-containerd-ccb6e0d2c27e5f5d811007877a1b08eff6a5e4b4df08e20cd8ce637304f9a903.scope: Deactivated successfully. 
Mar 17 17:43:24.677311 kubelet[2608]: E0317 17:43:24.677168 2608 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-j8ss7" podUID="8ed5b12a-6d88-43a8-8215-c1e4e9724067" Mar 17 17:43:24.772809 kubelet[2608]: I0317 17:43:24.772389 2608 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 17 17:43:24.772809 kubelet[2608]: E0317 17:43:24.772643 2608 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:43:24.772809 kubelet[2608]: E0317 17:43:24.772650 2608 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:43:24.842829 kubelet[2608]: I0317 17:43:24.842572 2608 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-5f877c6479-fdsgz" podStartSLOduration=2.6179295959999997 podStartE2EDuration="5.842550675s" podCreationTimestamp="2025-03-17 17:43:19 +0000 UTC" firstStartedPulling="2025-03-17 17:43:19.76122137 +0000 UTC m=+16.170581183" lastFinishedPulling="2025-03-17 17:43:22.985842459 +0000 UTC m=+19.395202262" observedRunningTime="2025-03-17 17:43:23.779506355 +0000 UTC m=+20.188866158" watchObservedRunningTime="2025-03-17 17:43:24.842550675 +0000 UTC m=+21.251910489" Mar 17 17:43:24.904407 containerd[1511]: time="2025-03-17T17:43:24.904317303Z" level=info msg="shim disconnected" id=ccb6e0d2c27e5f5d811007877a1b08eff6a5e4b4df08e20cd8ce637304f9a903 namespace=k8s.io Mar 17 17:43:24.904407 containerd[1511]: time="2025-03-17T17:43:24.904397003Z" level=warning msg="cleaning up after shim disconnected" 
id=ccb6e0d2c27e5f5d811007877a1b08eff6a5e4b4df08e20cd8ce637304f9a903 namespace=k8s.io Mar 17 17:43:24.904407 containerd[1511]: time="2025-03-17T17:43:24.904409697Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 17 17:43:24.991740 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ccb6e0d2c27e5f5d811007877a1b08eff6a5e4b4df08e20cd8ce637304f9a903-rootfs.mount: Deactivated successfully. Mar 17 17:43:25.777151 kubelet[2608]: E0317 17:43:25.777108 2608 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:43:25.777770 containerd[1511]: time="2025-03-17T17:43:25.777657996Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.2\"" Mar 17 17:43:26.676413 kubelet[2608]: E0317 17:43:26.676350 2608 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-j8ss7" podUID="8ed5b12a-6d88-43a8-8215-c1e4e9724067" Mar 17 17:43:28.676734 kubelet[2608]: E0317 17:43:28.676670 2608 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-j8ss7" podUID="8ed5b12a-6d88-43a8-8215-c1e4e9724067" Mar 17 17:43:30.474404 containerd[1511]: time="2025-03-17T17:43:30.474306356Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:43:30.476464 containerd[1511]: time="2025-03-17T17:43:30.476400470Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.2: active requests=0, bytes read=97781477" Mar 17 17:43:30.477553 
containerd[1511]: time="2025-03-17T17:43:30.477220113Z" level=info msg="ImageCreate event name:\"sha256:cda13293c895a8a3b06c1e190b70fb6fe61036db2e59764036fc6e65ec374693\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:43:30.481223 containerd[1511]: time="2025-03-17T17:43:30.481174909Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:890e1db6ae363695cfc23ffae4d612cc85cdd99d759bd539af6683969d0c3c25\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:43:30.481896 containerd[1511]: time="2025-03-17T17:43:30.481862134Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.2\" with image id \"sha256:cda13293c895a8a3b06c1e190b70fb6fe61036db2e59764036fc6e65ec374693\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.2\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:890e1db6ae363695cfc23ffae4d612cc85cdd99d759bd539af6683969d0c3c25\", size \"99274581\" in 4.704168592s" Mar 17 17:43:30.481974 containerd[1511]: time="2025-03-17T17:43:30.481894114Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.2\" returns image reference \"sha256:cda13293c895a8a3b06c1e190b70fb6fe61036db2e59764036fc6e65ec374693\"" Mar 17 17:43:30.504093 containerd[1511]: time="2025-03-17T17:43:30.504038531Z" level=info msg="CreateContainer within sandbox \"2c98777e56d29a8dd0ce920669c4df7a9bf6cd4047b73997c53484383de257e7\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Mar 17 17:43:30.524922 containerd[1511]: time="2025-03-17T17:43:30.524866630Z" level=info msg="CreateContainer within sandbox \"2c98777e56d29a8dd0ce920669c4df7a9bf6cd4047b73997c53484383de257e7\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"d01d36069e2b08f43cc849a1da074bbcccefd38efe9222dc26484bc7bce92138\"" Mar 17 17:43:30.528103 containerd[1511]: time="2025-03-17T17:43:30.528071535Z" level=info msg="StartContainer for \"d01d36069e2b08f43cc849a1da074bbcccefd38efe9222dc26484bc7bce92138\"" Mar 17 17:43:30.569602 
systemd[1]: Started cri-containerd-d01d36069e2b08f43cc849a1da074bbcccefd38efe9222dc26484bc7bce92138.scope - libcontainer container d01d36069e2b08f43cc849a1da074bbcccefd38efe9222dc26484bc7bce92138. Mar 17 17:43:30.622668 containerd[1511]: time="2025-03-17T17:43:30.622614417Z" level=info msg="StartContainer for \"d01d36069e2b08f43cc849a1da074bbcccefd38efe9222dc26484bc7bce92138\" returns successfully" Mar 17 17:43:30.694837 kubelet[2608]: E0317 17:43:30.694658 2608 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-j8ss7" podUID="8ed5b12a-6d88-43a8-8215-c1e4e9724067" Mar 17 17:43:30.794175 kubelet[2608]: E0317 17:43:30.794028 2608 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:43:31.796120 kubelet[2608]: E0317 17:43:31.796068 2608 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:43:32.202173 systemd[1]: cri-containerd-d01d36069e2b08f43cc849a1da074bbcccefd38efe9222dc26484bc7bce92138.scope: Deactivated successfully. Mar 17 17:43:32.202543 systemd[1]: cri-containerd-d01d36069e2b08f43cc849a1da074bbcccefd38efe9222dc26484bc7bce92138.scope: Consumed 586ms CPU time, 163M memory peak, 4K read from disk, 154M written to disk. Mar 17 17:43:32.222420 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d01d36069e2b08f43cc849a1da074bbcccefd38efe9222dc26484bc7bce92138-rootfs.mount: Deactivated successfully. 
Mar 17 17:43:32.237759 kubelet[2608]: I0317 17:43:32.237707 2608 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Mar 17 17:43:32.339259 kubelet[2608]: I0317 17:43:32.338303 2608 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/b74841bb-3d21-4734-85da-f48ab60f9d98-calico-apiserver-certs\") pod \"calico-apiserver-6cbddc9666-fgfrs\" (UID: \"b74841bb-3d21-4734-85da-f48ab60f9d98\") " pod="calico-apiserver/calico-apiserver-6cbddc9666-fgfrs" Mar 17 17:43:32.339503 kubelet[2608]: I0317 17:43:32.339446 2608 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zwtn6\" (UniqueName: \"kubernetes.io/projected/b74841bb-3d21-4734-85da-f48ab60f9d98-kube-api-access-zwtn6\") pod \"calico-apiserver-6cbddc9666-fgfrs\" (UID: \"b74841bb-3d21-4734-85da-f48ab60f9d98\") " pod="calico-apiserver/calico-apiserver-6cbddc9666-fgfrs" Mar 17 17:43:32.340276 containerd[1511]: time="2025-03-17T17:43:32.340008248Z" level=info msg="shim disconnected" id=d01d36069e2b08f43cc849a1da074bbcccefd38efe9222dc26484bc7bce92138 namespace=k8s.io Mar 17 17:43:32.340608 containerd[1511]: time="2025-03-17T17:43:32.340325665Z" level=warning msg="cleaning up after shim disconnected" id=d01d36069e2b08f43cc849a1da074bbcccefd38efe9222dc26484bc7bce92138 namespace=k8s.io Mar 17 17:43:32.340608 containerd[1511]: time="2025-03-17T17:43:32.340387972Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 17 17:43:32.348922 systemd[1]: Created slice kubepods-burstable-pod9de23c76_70f0_4fa7_aa26_f471719ff480.slice - libcontainer container kubepods-burstable-pod9de23c76_70f0_4fa7_aa26_f471719ff480.slice. Mar 17 17:43:32.356887 systemd[1]: Created slice kubepods-besteffort-podbb999740_3eb7_4d5d_b75d_a6ed26b4fcf7.slice - libcontainer container kubepods-besteffort-podbb999740_3eb7_4d5d_b75d_a6ed26b4fcf7.slice. 
Mar 17 17:43:32.367524 systemd[1]: Created slice kubepods-besteffort-podc27ab8c8_4886_4cfa_ac38_ef82b827b394.slice - libcontainer container kubepods-besteffort-podc27ab8c8_4886_4cfa_ac38_ef82b827b394.slice. Mar 17 17:43:32.374845 systemd[1]: Created slice kubepods-besteffort-podb74841bb_3d21_4734_85da_f48ab60f9d98.slice - libcontainer container kubepods-besteffort-podb74841bb_3d21_4734_85da_f48ab60f9d98.slice. Mar 17 17:43:32.384817 systemd[1]: Created slice kubepods-burstable-pod92c87a7e_fef7_4c26_ab3b_4e94dca0e582.slice - libcontainer container kubepods-burstable-pod92c87a7e_fef7_4c26_ab3b_4e94dca0e582.slice. Mar 17 17:43:32.440372 kubelet[2608]: I0317 17:43:32.440303 2608 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j7l8r\" (UniqueName: \"kubernetes.io/projected/c27ab8c8-4886-4cfa-ac38-ef82b827b394-kube-api-access-j7l8r\") pod \"calico-apiserver-6cbddc9666-8jnnv\" (UID: \"c27ab8c8-4886-4cfa-ac38-ef82b827b394\") " pod="calico-apiserver/calico-apiserver-6cbddc9666-8jnnv" Mar 17 17:43:32.440372 kubelet[2608]: I0317 17:43:32.440366 2608 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/92c87a7e-fef7-4c26-ab3b-4e94dca0e582-config-volume\") pod \"coredns-6f6b679f8f-wp522\" (UID: \"92c87a7e-fef7-4c26-ab3b-4e94dca0e582\") " pod="kube-system/coredns-6f6b679f8f-wp522" Mar 17 17:43:32.440572 kubelet[2608]: I0317 17:43:32.440393 2608 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9de23c76-70f0-4fa7-aa26-f471719ff480-config-volume\") pod \"coredns-6f6b679f8f-qmtlq\" (UID: \"9de23c76-70f0-4fa7-aa26-f471719ff480\") " pod="kube-system/coredns-6f6b679f8f-qmtlq" Mar 17 17:43:32.440572 kubelet[2608]: I0317 17:43:32.440418 2608 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/c27ab8c8-4886-4cfa-ac38-ef82b827b394-calico-apiserver-certs\") pod \"calico-apiserver-6cbddc9666-8jnnv\" (UID: \"c27ab8c8-4886-4cfa-ac38-ef82b827b394\") " pod="calico-apiserver/calico-apiserver-6cbddc9666-8jnnv" Mar 17 17:43:32.440572 kubelet[2608]: I0317 17:43:32.440460 2608 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-htnzf\" (UniqueName: \"kubernetes.io/projected/9de23c76-70f0-4fa7-aa26-f471719ff480-kube-api-access-htnzf\") pod \"coredns-6f6b679f8f-qmtlq\" (UID: \"9de23c76-70f0-4fa7-aa26-f471719ff480\") " pod="kube-system/coredns-6f6b679f8f-qmtlq" Mar 17 17:43:32.440776 kubelet[2608]: I0317 17:43:32.440701 2608 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wfg9v\" (UniqueName: \"kubernetes.io/projected/bb999740-3eb7-4d5d-b75d-a6ed26b4fcf7-kube-api-access-wfg9v\") pod \"calico-kube-controllers-86f7c466bd-bf4pr\" (UID: \"bb999740-3eb7-4d5d-b75d-a6ed26b4fcf7\") " pod="calico-system/calico-kube-controllers-86f7c466bd-bf4pr" Mar 17 17:43:32.440776 kubelet[2608]: I0317 17:43:32.440735 2608 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bb999740-3eb7-4d5d-b75d-a6ed26b4fcf7-tigera-ca-bundle\") pod \"calico-kube-controllers-86f7c466bd-bf4pr\" (UID: \"bb999740-3eb7-4d5d-b75d-a6ed26b4fcf7\") " pod="calico-system/calico-kube-controllers-86f7c466bd-bf4pr" Mar 17 17:43:32.440776 kubelet[2608]: I0317 17:43:32.440756 2608 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qdfxj\" (UniqueName: \"kubernetes.io/projected/92c87a7e-fef7-4c26-ab3b-4e94dca0e582-kube-api-access-qdfxj\") pod \"coredns-6f6b679f8f-wp522\" (UID: \"92c87a7e-fef7-4c26-ab3b-4e94dca0e582\") " pod="kube-system/coredns-6f6b679f8f-wp522" Mar 
17 17:43:32.653807 kubelet[2608]: E0317 17:43:32.653751 2608 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:43:32.654912 containerd[1511]: time="2025-03-17T17:43:32.654843352Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-qmtlq,Uid:9de23c76-70f0-4fa7-aa26-f471719ff480,Namespace:kube-system,Attempt:0,}" Mar 17 17:43:32.663718 containerd[1511]: time="2025-03-17T17:43:32.663673641Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-86f7c466bd-bf4pr,Uid:bb999740-3eb7-4d5d-b75d-a6ed26b4fcf7,Namespace:calico-system,Attempt:0,}" Mar 17 17:43:32.671634 containerd[1511]: time="2025-03-17T17:43:32.671593547Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6cbddc9666-8jnnv,Uid:c27ab8c8-4886-4cfa-ac38-ef82b827b394,Namespace:calico-apiserver,Attempt:0,}" Mar 17 17:43:32.679387 containerd[1511]: time="2025-03-17T17:43:32.679344404Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6cbddc9666-fgfrs,Uid:b74841bb-3d21-4734-85da-f48ab60f9d98,Namespace:calico-apiserver,Attempt:0,}" Mar 17 17:43:32.682455 systemd[1]: Created slice kubepods-besteffort-pod8ed5b12a_6d88_43a8_8215_c1e4e9724067.slice - libcontainer container kubepods-besteffort-pod8ed5b12a_6d88_43a8_8215_c1e4e9724067.slice. 
Mar 17 17:43:32.684304 containerd[1511]: time="2025-03-17T17:43:32.684253914Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-j8ss7,Uid:8ed5b12a-6d88-43a8-8215-c1e4e9724067,Namespace:calico-system,Attempt:0,}" Mar 17 17:43:32.688667 kubelet[2608]: E0317 17:43:32.688640 2608 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:43:32.689060 containerd[1511]: time="2025-03-17T17:43:32.689037267Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-wp522,Uid:92c87a7e-fef7-4c26-ab3b-4e94dca0e582,Namespace:kube-system,Attempt:0,}" Mar 17 17:43:32.799855 kubelet[2608]: E0317 17:43:32.799803 2608 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:43:32.801177 containerd[1511]: time="2025-03-17T17:43:32.801111225Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.2\"" Mar 17 17:43:33.062730 containerd[1511]: time="2025-03-17T17:43:33.062472462Z" level=error msg="Failed to destroy network for sandbox \"a76d3257dd8be43a7cc3bd31473414beabbebb92e09df84988345f517c7acdb6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:43:33.064174 containerd[1511]: time="2025-03-17T17:43:33.063996170Z" level=error msg="encountered an error cleaning up failed sandbox \"a76d3257dd8be43a7cc3bd31473414beabbebb92e09df84988345f517c7acdb6\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:43:33.064174 containerd[1511]: 
time="2025-03-17T17:43:33.064113731Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6cbddc9666-fgfrs,Uid:b74841bb-3d21-4734-85da-f48ab60f9d98,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"a76d3257dd8be43a7cc3bd31473414beabbebb92e09df84988345f517c7acdb6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:43:33.065440 kubelet[2608]: E0317 17:43:33.065023 2608 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a76d3257dd8be43a7cc3bd31473414beabbebb92e09df84988345f517c7acdb6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:43:33.065440 kubelet[2608]: E0317 17:43:33.065118 2608 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a76d3257dd8be43a7cc3bd31473414beabbebb92e09df84988345f517c7acdb6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6cbddc9666-fgfrs" Mar 17 17:43:33.065440 kubelet[2608]: E0317 17:43:33.065147 2608 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a76d3257dd8be43a7cc3bd31473414beabbebb92e09df84988345f517c7acdb6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6cbddc9666-fgfrs" Mar 17 17:43:33.065594 
kubelet[2608]: E0317 17:43:33.065204 2608 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6cbddc9666-fgfrs_calico-apiserver(b74841bb-3d21-4734-85da-f48ab60f9d98)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6cbddc9666-fgfrs_calico-apiserver(b74841bb-3d21-4734-85da-f48ab60f9d98)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a76d3257dd8be43a7cc3bd31473414beabbebb92e09df84988345f517c7acdb6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6cbddc9666-fgfrs" podUID="b74841bb-3d21-4734-85da-f48ab60f9d98" Mar 17 17:43:33.085919 containerd[1511]: time="2025-03-17T17:43:33.085858005Z" level=error msg="Failed to destroy network for sandbox \"c0357f0524099885f06d4a7418831e7579b998c419311c26aa34d8779c42dc52\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:43:33.086675 containerd[1511]: time="2025-03-17T17:43:33.086620080Z" level=error msg="encountered an error cleaning up failed sandbox \"c0357f0524099885f06d4a7418831e7579b998c419311c26aa34d8779c42dc52\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:43:33.086814 containerd[1511]: time="2025-03-17T17:43:33.086771124Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-qmtlq,Uid:9de23c76-70f0-4fa7-aa26-f471719ff480,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox 
\"c0357f0524099885f06d4a7418831e7579b998c419311c26aa34d8779c42dc52\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:43:33.087931 kubelet[2608]: E0317 17:43:33.087077 2608 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c0357f0524099885f06d4a7418831e7579b998c419311c26aa34d8779c42dc52\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:43:33.087931 kubelet[2608]: E0317 17:43:33.087147 2608 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c0357f0524099885f06d4a7418831e7579b998c419311c26aa34d8779c42dc52\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-qmtlq" Mar 17 17:43:33.087931 kubelet[2608]: E0317 17:43:33.087168 2608 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c0357f0524099885f06d4a7418831e7579b998c419311c26aa34d8779c42dc52\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-qmtlq" Mar 17 17:43:33.088071 kubelet[2608]: E0317 17:43:33.087212 2608 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-qmtlq_kube-system(9de23c76-70f0-4fa7-aa26-f471719ff480)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"coredns-6f6b679f8f-qmtlq_kube-system(9de23c76-70f0-4fa7-aa26-f471719ff480)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c0357f0524099885f06d4a7418831e7579b998c419311c26aa34d8779c42dc52\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-qmtlq" podUID="9de23c76-70f0-4fa7-aa26-f471719ff480" Mar 17 17:43:33.090868 systemd[1]: Started sshd@7-10.0.0.14:22-10.0.0.1:44184.service - OpenSSH per-connection server daemon (10.0.0.1:44184). Mar 17 17:43:33.100895 containerd[1511]: time="2025-03-17T17:43:33.100815550Z" level=error msg="Failed to destroy network for sandbox \"b3416c553b8ee2ddd7115ce7b04c86d519dee7b4f6a9ee2e1a0c06779a132290\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:43:33.102899 containerd[1511]: time="2025-03-17T17:43:33.101636114Z" level=error msg="Failed to destroy network for sandbox \"22d2b9a804aff601d7932c4460691afa93c66169fac5c4cfe51532162fb0b3e0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:43:33.102899 containerd[1511]: time="2025-03-17T17:43:33.102347603Z" level=error msg="encountered an error cleaning up failed sandbox \"22d2b9a804aff601d7932c4460691afa93c66169fac5c4cfe51532162fb0b3e0\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:43:33.102899 containerd[1511]: time="2025-03-17T17:43:33.102401775Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-kube-controllers-86f7c466bd-bf4pr,Uid:bb999740-3eb7-4d5d-b75d-a6ed26b4fcf7,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"22d2b9a804aff601d7932c4460691afa93c66169fac5c4cfe51532162fb0b3e0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:43:33.103279 containerd[1511]: time="2025-03-17T17:43:33.103207060Z" level=error msg="Failed to destroy network for sandbox \"4ce107ba3ec9f31d7b3420825039f0a23e3438346875ccd104dc7f519f537446\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:43:33.103376 kubelet[2608]: E0317 17:43:33.103221 2608 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"22d2b9a804aff601d7932c4460691afa93c66169fac5c4cfe51532162fb0b3e0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:43:33.103586 containerd[1511]: time="2025-03-17T17:43:33.103458383Z" level=error msg="encountered an error cleaning up failed sandbox \"b3416c553b8ee2ddd7115ce7b04c86d519dee7b4f6a9ee2e1a0c06779a132290\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:43:33.103586 containerd[1511]: time="2025-03-17T17:43:33.103529617Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-j8ss7,Uid:8ed5b12a-6d88-43a8-8215-c1e4e9724067,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup 
network for sandbox \"b3416c553b8ee2ddd7115ce7b04c86d519dee7b4f6a9ee2e1a0c06779a132290\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:43:33.103686 kubelet[2608]: E0317 17:43:33.103479 2608 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"22d2b9a804aff601d7932c4460691afa93c66169fac5c4cfe51532162fb0b3e0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-86f7c466bd-bf4pr" Mar 17 17:43:33.103686 kubelet[2608]: E0317 17:43:33.103502 2608 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"22d2b9a804aff601d7932c4460691afa93c66169fac5c4cfe51532162fb0b3e0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-86f7c466bd-bf4pr" Mar 17 17:43:33.103686 kubelet[2608]: E0317 17:43:33.103548 2608 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-86f7c466bd-bf4pr_calico-system(bb999740-3eb7-4d5d-b75d-a6ed26b4fcf7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-86f7c466bd-bf4pr_calico-system(bb999740-3eb7-4d5d-b75d-a6ed26b4fcf7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"22d2b9a804aff601d7932c4460691afa93c66169fac5c4cfe51532162fb0b3e0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and 
has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-86f7c466bd-bf4pr" podUID="bb999740-3eb7-4d5d-b75d-a6ed26b4fcf7" Mar 17 17:43:33.104608 kubelet[2608]: E0317 17:43:33.104488 2608 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b3416c553b8ee2ddd7115ce7b04c86d519dee7b4f6a9ee2e1a0c06779a132290\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:43:33.104608 kubelet[2608]: E0317 17:43:33.104520 2608 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b3416c553b8ee2ddd7115ce7b04c86d519dee7b4f6a9ee2e1a0c06779a132290\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-j8ss7" Mar 17 17:43:33.104608 kubelet[2608]: E0317 17:43:33.104534 2608 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b3416c553b8ee2ddd7115ce7b04c86d519dee7b4f6a9ee2e1a0c06779a132290\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-j8ss7" Mar 17 17:43:33.104703 kubelet[2608]: E0317 17:43:33.104561 2608 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-j8ss7_calico-system(8ed5b12a-6d88-43a8-8215-c1e4e9724067)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-j8ss7_calico-system(8ed5b12a-6d88-43a8-8215-c1e4e9724067)\\\": rpc error: code = Unknown desc = failed to setup network 
for sandbox \\\"b3416c553b8ee2ddd7115ce7b04c86d519dee7b4f6a9ee2e1a0c06779a132290\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-j8ss7" podUID="8ed5b12a-6d88-43a8-8215-c1e4e9724067" Mar 17 17:43:33.105481 containerd[1511]: time="2025-03-17T17:43:33.105449139Z" level=error msg="encountered an error cleaning up failed sandbox \"4ce107ba3ec9f31d7b3420825039f0a23e3438346875ccd104dc7f519f537446\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:43:33.105531 containerd[1511]: time="2025-03-17T17:43:33.105506728Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6cbddc9666-8jnnv,Uid:c27ab8c8-4886-4cfa-ac38-ef82b827b394,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"4ce107ba3ec9f31d7b3420825039f0a23e3438346875ccd104dc7f519f537446\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:43:33.106557 kubelet[2608]: E0317 17:43:33.105798 2608 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4ce107ba3ec9f31d7b3420825039f0a23e3438346875ccd104dc7f519f537446\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:43:33.106557 kubelet[2608]: E0317 17:43:33.106406 2608 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"4ce107ba3ec9f31d7b3420825039f0a23e3438346875ccd104dc7f519f537446\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6cbddc9666-8jnnv" Mar 17 17:43:33.106557 kubelet[2608]: E0317 17:43:33.106428 2608 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4ce107ba3ec9f31d7b3420825039f0a23e3438346875ccd104dc7f519f537446\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6cbddc9666-8jnnv" Mar 17 17:43:33.106711 kubelet[2608]: E0317 17:43:33.106488 2608 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6cbddc9666-8jnnv_calico-apiserver(c27ab8c8-4886-4cfa-ac38-ef82b827b394)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6cbddc9666-8jnnv_calico-apiserver(c27ab8c8-4886-4cfa-ac38-ef82b827b394)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4ce107ba3ec9f31d7b3420825039f0a23e3438346875ccd104dc7f519f537446\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6cbddc9666-8jnnv" podUID="c27ab8c8-4886-4cfa-ac38-ef82b827b394" Mar 17 17:43:33.107426 containerd[1511]: time="2025-03-17T17:43:33.107395822Z" level=error msg="Failed to destroy network for sandbox \"605a348218e0107d12ff4cea59eb6ff512155f4808ce20dbf8845e1c2ca94a38\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is 
running and has mounted /var/lib/calico/" Mar 17 17:43:33.107796 containerd[1511]: time="2025-03-17T17:43:33.107772110Z" level=error msg="encountered an error cleaning up failed sandbox \"605a348218e0107d12ff4cea59eb6ff512155f4808ce20dbf8845e1c2ca94a38\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:43:33.107872 containerd[1511]: time="2025-03-17T17:43:33.107817926Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-wp522,Uid:92c87a7e-fef7-4c26-ab3b-4e94dca0e582,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"605a348218e0107d12ff4cea59eb6ff512155f4808ce20dbf8845e1c2ca94a38\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:43:33.108109 kubelet[2608]: E0317 17:43:33.108061 2608 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"605a348218e0107d12ff4cea59eb6ff512155f4808ce20dbf8845e1c2ca94a38\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:43:33.108273 kubelet[2608]: E0317 17:43:33.108102 2608 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"605a348218e0107d12ff4cea59eb6ff512155f4808ce20dbf8845e1c2ca94a38\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-wp522" Mar 17 17:43:33.108273 kubelet[2608]: 
E0317 17:43:33.108141 2608 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"605a348218e0107d12ff4cea59eb6ff512155f4808ce20dbf8845e1c2ca94a38\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-wp522" Mar 17 17:43:33.108273 kubelet[2608]: E0317 17:43:33.108176 2608 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-wp522_kube-system(92c87a7e-fef7-4c26-ab3b-4e94dca0e582)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-wp522_kube-system(92c87a7e-fef7-4c26-ab3b-4e94dca0e582)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"605a348218e0107d12ff4cea59eb6ff512155f4808ce20dbf8845e1c2ca94a38\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-wp522" podUID="92c87a7e-fef7-4c26-ab3b-4e94dca0e582" Mar 17 17:43:33.130078 sshd[3609]: Accepted publickey for core from 10.0.0.1 port 44184 ssh2: RSA SHA256:sryA1eJfWIez1lK1HnLUziwsxLkQurNjq13plyk0DV0 Mar 17 17:43:33.131663 sshd-session[3609]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:43:33.136480 systemd-logind[1494]: New session 8 of user core. Mar 17 17:43:33.144386 systemd[1]: Started session-8.scope - Session 8 of User core. Mar 17 17:43:33.268123 sshd[3616]: Connection closed by 10.0.0.1 port 44184 Mar 17 17:43:33.268572 sshd-session[3609]: pam_unix(sshd:session): session closed for user core Mar 17 17:43:33.273222 systemd[1]: sshd@7-10.0.0.14:22-10.0.0.1:44184.service: Deactivated successfully. 
Mar 17 17:43:33.275571 systemd[1]: session-8.scope: Deactivated successfully. Mar 17 17:43:33.276370 systemd-logind[1494]: Session 8 logged out. Waiting for processes to exit. Mar 17 17:43:33.277319 systemd-logind[1494]: Removed session 8. Mar 17 17:43:33.802629 kubelet[2608]: I0317 17:43:33.802584 2608 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a76d3257dd8be43a7cc3bd31473414beabbebb92e09df84988345f517c7acdb6" Mar 17 17:43:33.803210 containerd[1511]: time="2025-03-17T17:43:33.803172866Z" level=info msg="StopPodSandbox for \"a76d3257dd8be43a7cc3bd31473414beabbebb92e09df84988345f517c7acdb6\"" Mar 17 17:43:33.805143 containerd[1511]: time="2025-03-17T17:43:33.803457271Z" level=info msg="Ensure that sandbox a76d3257dd8be43a7cc3bd31473414beabbebb92e09df84988345f517c7acdb6 in task-service has been cleanup successfully" Mar 17 17:43:33.805143 containerd[1511]: time="2025-03-17T17:43:33.804088819Z" level=info msg="TearDown network for sandbox \"a76d3257dd8be43a7cc3bd31473414beabbebb92e09df84988345f517c7acdb6\" successfully" Mar 17 17:43:33.805143 containerd[1511]: time="2025-03-17T17:43:33.804113385Z" level=info msg="StopPodSandbox for \"a76d3257dd8be43a7cc3bd31473414beabbebb92e09df84988345f517c7acdb6\" returns successfully" Mar 17 17:43:33.805143 containerd[1511]: time="2025-03-17T17:43:33.804537223Z" level=info msg="StopPodSandbox for \"c0357f0524099885f06d4a7418831e7579b998c419311c26aa34d8779c42dc52\"" Mar 17 17:43:33.805143 containerd[1511]: time="2025-03-17T17:43:33.804781193Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6cbddc9666-fgfrs,Uid:b74841bb-3d21-4734-85da-f48ab60f9d98,Namespace:calico-apiserver,Attempt:1,}" Mar 17 17:43:33.805333 kubelet[2608]: I0317 17:43:33.803568 2608 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c0357f0524099885f06d4a7418831e7579b998c419311c26aa34d8779c42dc52" Mar 17 17:43:33.805549 containerd[1511]: 
time="2025-03-17T17:43:33.805481891Z" level=info msg="Ensure that sandbox c0357f0524099885f06d4a7418831e7579b998c419311c26aa34d8779c42dc52 in task-service has been cleanup successfully" Mar 17 17:43:33.805937 containerd[1511]: time="2025-03-17T17:43:33.805804648Z" level=info msg="TearDown network for sandbox \"c0357f0524099885f06d4a7418831e7579b998c419311c26aa34d8779c42dc52\" successfully" Mar 17 17:43:33.805937 containerd[1511]: time="2025-03-17T17:43:33.805826409Z" level=info msg="StopPodSandbox for \"c0357f0524099885f06d4a7418831e7579b998c419311c26aa34d8779c42dc52\" returns successfully" Mar 17 17:43:33.806644 kubelet[2608]: E0317 17:43:33.806211 2608 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:43:33.807326 containerd[1511]: time="2025-03-17T17:43:33.806473687Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-qmtlq,Uid:9de23c76-70f0-4fa7-aa26-f471719ff480,Namespace:kube-system,Attempt:1,}" Mar 17 17:43:33.807369 kubelet[2608]: I0317 17:43:33.807290 2608 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b3416c553b8ee2ddd7115ce7b04c86d519dee7b4f6a9ee2e1a0c06779a132290" Mar 17 17:43:33.807872 containerd[1511]: time="2025-03-17T17:43:33.807820301Z" level=info msg="StopPodSandbox for \"b3416c553b8ee2ddd7115ce7b04c86d519dee7b4f6a9ee2e1a0c06779a132290\"" Mar 17 17:43:33.808082 containerd[1511]: time="2025-03-17T17:43:33.808053730Z" level=info msg="Ensure that sandbox b3416c553b8ee2ddd7115ce7b04c86d519dee7b4f6a9ee2e1a0c06779a132290 in task-service has been cleanup successfully" Mar 17 17:43:33.808331 systemd[1]: run-netns-cni\x2da952efa1\x2d0382\x2d21d6\x2d7c97\x2dc21fa576fae2.mount: Deactivated successfully. 
Mar 17 17:43:33.809601 kubelet[2608]: I0317 17:43:33.808711 2608 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4ce107ba3ec9f31d7b3420825039f0a23e3438346875ccd104dc7f519f537446" Mar 17 17:43:33.809673 containerd[1511]: time="2025-03-17T17:43:33.808711788Z" level=info msg="TearDown network for sandbox \"b3416c553b8ee2ddd7115ce7b04c86d519dee7b4f6a9ee2e1a0c06779a132290\" successfully" Mar 17 17:43:33.809673 containerd[1511]: time="2025-03-17T17:43:33.808732958Z" level=info msg="StopPodSandbox for \"b3416c553b8ee2ddd7115ce7b04c86d519dee7b4f6a9ee2e1a0c06779a132290\" returns successfully" Mar 17 17:43:33.810209 containerd[1511]: time="2025-03-17T17:43:33.810088799Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-j8ss7,Uid:8ed5b12a-6d88-43a8-8215-c1e4e9724067,Namespace:calico-system,Attempt:1,}" Mar 17 17:43:33.810913 containerd[1511]: time="2025-03-17T17:43:33.810636569Z" level=info msg="StopPodSandbox for \"4ce107ba3ec9f31d7b3420825039f0a23e3438346875ccd104dc7f519f537446\"" Mar 17 17:43:33.811054 containerd[1511]: time="2025-03-17T17:43:33.810898352Z" level=info msg="Ensure that sandbox 4ce107ba3ec9f31d7b3420825039f0a23e3438346875ccd104dc7f519f537446 in task-service has been cleanup successfully" Mar 17 17:43:33.813071 containerd[1511]: time="2025-03-17T17:43:33.811335375Z" level=info msg="TearDown network for sandbox \"4ce107ba3ec9f31d7b3420825039f0a23e3438346875ccd104dc7f519f537446\" successfully" Mar 17 17:43:33.813071 containerd[1511]: time="2025-03-17T17:43:33.811359961Z" level=info msg="StopPodSandbox for \"4ce107ba3ec9f31d7b3420825039f0a23e3438346875ccd104dc7f519f537446\" returns successfully" Mar 17 17:43:33.813071 containerd[1511]: time="2025-03-17T17:43:33.812650649Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6cbddc9666-8jnnv,Uid:c27ab8c8-4886-4cfa-ac38-ef82b827b394,Namespace:calico-apiserver,Attempt:1,}" Mar 17 17:43:33.813195 kubelet[2608]: I0317 
17:43:33.811988 2608 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="22d2b9a804aff601d7932c4460691afa93c66169fac5c4cfe51532162fb0b3e0" Mar 17 17:43:33.813805 systemd[1]: run-netns-cni\x2d5417ba7c\x2d471e\x2d0b69\x2da164\x2dc05675cdf00a.mount: Deactivated successfully. Mar 17 17:43:33.813968 containerd[1511]: time="2025-03-17T17:43:33.813816312Z" level=info msg="StopPodSandbox for \"22d2b9a804aff601d7932c4460691afa93c66169fac5c4cfe51532162fb0b3e0\"" Mar 17 17:43:33.814057 containerd[1511]: time="2025-03-17T17:43:33.814021429Z" level=info msg="Ensure that sandbox 22d2b9a804aff601d7932c4460691afa93c66169fac5c4cfe51532162fb0b3e0 in task-service has been cleanup successfully" Mar 17 17:43:33.814320 systemd[1]: run-netns-cni\x2dc0332a5e\x2de1c4\x2d3ee1\x2d3e3a\x2d6ef324792757.mount: Deactivated successfully. Mar 17 17:43:33.814640 systemd[1]: run-netns-cni\x2d59638db4\x2d2f79\x2d9ecf\x2d70fd\x2d497802582a7d.mount: Deactivated successfully. Mar 17 17:43:33.814751 containerd[1511]: time="2025-03-17T17:43:33.814720544Z" level=info msg="TearDown network for sandbox \"22d2b9a804aff601d7932c4460691afa93c66169fac5c4cfe51532162fb0b3e0\" successfully" Mar 17 17:43:33.814751 containerd[1511]: time="2025-03-17T17:43:33.814741764Z" level=info msg="StopPodSandbox for \"22d2b9a804aff601d7932c4460691afa93c66169fac5c4cfe51532162fb0b3e0\" returns successfully" Mar 17 17:43:33.815342 kubelet[2608]: I0317 17:43:33.814983 2608 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="605a348218e0107d12ff4cea59eb6ff512155f4808ce20dbf8845e1c2ca94a38" Mar 17 17:43:33.816144 containerd[1511]: time="2025-03-17T17:43:33.816103066Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-86f7c466bd-bf4pr,Uid:bb999740-3eb7-4d5d-b75d-a6ed26b4fcf7,Namespace:calico-system,Attempt:1,}" Mar 17 17:43:33.816436 containerd[1511]: time="2025-03-17T17:43:33.816377022Z" level=info msg="StopPodSandbox for 
\"605a348218e0107d12ff4cea59eb6ff512155f4808ce20dbf8845e1c2ca94a38\"" Mar 17 17:43:33.816636 containerd[1511]: time="2025-03-17T17:43:33.816581646Z" level=info msg="Ensure that sandbox 605a348218e0107d12ff4cea59eb6ff512155f4808ce20dbf8845e1c2ca94a38 in task-service has been cleanup successfully" Mar 17 17:43:33.817129 containerd[1511]: time="2025-03-17T17:43:33.817011486Z" level=info msg="TearDown network for sandbox \"605a348218e0107d12ff4cea59eb6ff512155f4808ce20dbf8845e1c2ca94a38\" successfully" Mar 17 17:43:33.817129 containerd[1511]: time="2025-03-17T17:43:33.817040570Z" level=info msg="StopPodSandbox for \"605a348218e0107d12ff4cea59eb6ff512155f4808ce20dbf8845e1c2ca94a38\" returns successfully" Mar 17 17:43:33.818337 kubelet[2608]: E0317 17:43:33.818171 2608 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:43:33.819050 containerd[1511]: time="2025-03-17T17:43:33.819032267Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-wp522,Uid:92c87a7e-fef7-4c26-ab3b-4e94dca0e582,Namespace:kube-system,Attempt:1,}" Mar 17 17:43:33.819793 systemd[1]: run-netns-cni\x2d97302e69\x2dfc6b\x2d48f6\x2dab1b\x2dd26cc47bfb78.mount: Deactivated successfully. Mar 17 17:43:33.819941 systemd[1]: run-netns-cni\x2d784d6e45\x2da4aa\x2d5d89\x2d1cb5\x2db6a34b10366e.mount: Deactivated successfully. 
Mar 17 17:43:33.952107 containerd[1511]: time="2025-03-17T17:43:33.952048485Z" level=error msg="Failed to destroy network for sandbox \"2cb67c065fae8b5c1bdba86659bed6455408568df13f798feb182fef9902a6fd\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:43:33.952637 containerd[1511]: time="2025-03-17T17:43:33.952613999Z" level=error msg="encountered an error cleaning up failed sandbox \"2cb67c065fae8b5c1bdba86659bed6455408568df13f798feb182fef9902a6fd\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:43:33.953099 containerd[1511]: time="2025-03-17T17:43:33.952913131Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6cbddc9666-fgfrs,Uid:b74841bb-3d21-4734-85da-f48ab60f9d98,Namespace:calico-apiserver,Attempt:1,} failed, error" error="failed to setup network for sandbox \"2cb67c065fae8b5c1bdba86659bed6455408568df13f798feb182fef9902a6fd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:43:33.953533 kubelet[2608]: E0317 17:43:33.953492 2608 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2cb67c065fae8b5c1bdba86659bed6455408568df13f798feb182fef9902a6fd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:43:33.953597 kubelet[2608]: E0317 17:43:33.953561 2608 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = 
failed to setup network for sandbox \"2cb67c065fae8b5c1bdba86659bed6455408568df13f798feb182fef9902a6fd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6cbddc9666-fgfrs" Mar 17 17:43:33.953597 kubelet[2608]: E0317 17:43:33.953581 2608 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2cb67c065fae8b5c1bdba86659bed6455408568df13f798feb182fef9902a6fd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6cbddc9666-fgfrs" Mar 17 17:43:33.953658 kubelet[2608]: E0317 17:43:33.953629 2608 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6cbddc9666-fgfrs_calico-apiserver(b74841bb-3d21-4734-85da-f48ab60f9d98)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6cbddc9666-fgfrs_calico-apiserver(b74841bb-3d21-4734-85da-f48ab60f9d98)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2cb67c065fae8b5c1bdba86659bed6455408568df13f798feb182fef9902a6fd\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6cbddc9666-fgfrs" podUID="b74841bb-3d21-4734-85da-f48ab60f9d98" Mar 17 17:43:33.969925 containerd[1511]: time="2025-03-17T17:43:33.969842737Z" level=error msg="Failed to destroy network for sandbox \"b9037660650fef2882c30e4a390d90ea5d95ff351ca1fe4fef4c05e1892a40bf\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check 
that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:43:33.970847 containerd[1511]: time="2025-03-17T17:43:33.970732751Z" level=error msg="encountered an error cleaning up failed sandbox \"b9037660650fef2882c30e4a390d90ea5d95ff351ca1fe4fef4c05e1892a40bf\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:43:33.970847 containerd[1511]: time="2025-03-17T17:43:33.970803715Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-j8ss7,Uid:8ed5b12a-6d88-43a8-8215-c1e4e9724067,Namespace:calico-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"b9037660650fef2882c30e4a390d90ea5d95ff351ca1fe4fef4c05e1892a40bf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:43:33.972338 kubelet[2608]: E0317 17:43:33.971897 2608 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b9037660650fef2882c30e4a390d90ea5d95ff351ca1fe4fef4c05e1892a40bf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:43:33.972338 kubelet[2608]: E0317 17:43:33.971971 2608 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b9037660650fef2882c30e4a390d90ea5d95ff351ca1fe4fef4c05e1892a40bf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-j8ss7" Mar 17 
17:43:33.972338 kubelet[2608]: E0317 17:43:33.971993 2608 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b9037660650fef2882c30e4a390d90ea5d95ff351ca1fe4fef4c05e1892a40bf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-j8ss7" Mar 17 17:43:33.972464 kubelet[2608]: E0317 17:43:33.972043 2608 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-j8ss7_calico-system(8ed5b12a-6d88-43a8-8215-c1e4e9724067)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-j8ss7_calico-system(8ed5b12a-6d88-43a8-8215-c1e4e9724067)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b9037660650fef2882c30e4a390d90ea5d95ff351ca1fe4fef4c05e1892a40bf\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-j8ss7" podUID="8ed5b12a-6d88-43a8-8215-c1e4e9724067" Mar 17 17:43:33.981171 containerd[1511]: time="2025-03-17T17:43:33.981113244Z" level=error msg="Failed to destroy network for sandbox \"a6fd5ae8d8c37d968bd04700971eb2886495554432a091f2befb520f8b34c501\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:43:33.982151 containerd[1511]: time="2025-03-17T17:43:33.982127853Z" level=error msg="encountered an error cleaning up failed sandbox \"a6fd5ae8d8c37d968bd04700971eb2886495554432a091f2befb520f8b34c501\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:43:33.982302 containerd[1511]: time="2025-03-17T17:43:33.982283516Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-qmtlq,Uid:9de23c76-70f0-4fa7-aa26-f471719ff480,Namespace:kube-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"a6fd5ae8d8c37d968bd04700971eb2886495554432a091f2befb520f8b34c501\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:43:33.982679 kubelet[2608]: E0317 17:43:33.982620 2608 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a6fd5ae8d8c37d968bd04700971eb2886495554432a091f2befb520f8b34c501\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:43:33.982731 kubelet[2608]: E0317 17:43:33.982711 2608 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a6fd5ae8d8c37d968bd04700971eb2886495554432a091f2befb520f8b34c501\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-qmtlq" Mar 17 17:43:33.982755 kubelet[2608]: E0317 17:43:33.982737 2608 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a6fd5ae8d8c37d968bd04700971eb2886495554432a091f2befb520f8b34c501\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-qmtlq" Mar 17 17:43:33.982878 kubelet[2608]: E0317 17:43:33.982801 2608 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-qmtlq_kube-system(9de23c76-70f0-4fa7-aa26-f471719ff480)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-qmtlq_kube-system(9de23c76-70f0-4fa7-aa26-f471719ff480)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a6fd5ae8d8c37d968bd04700971eb2886495554432a091f2befb520f8b34c501\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-qmtlq" podUID="9de23c76-70f0-4fa7-aa26-f471719ff480" Mar 17 17:43:33.984861 containerd[1511]: time="2025-03-17T17:43:33.984809799Z" level=error msg="Failed to destroy network for sandbox \"e1822912a2888b680ea7f9bf3d6dfd91c2c6eda94d136c98ac1343d40214f70b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:43:33.985146 containerd[1511]: time="2025-03-17T17:43:33.985085799Z" level=error msg="Failed to destroy network for sandbox \"5e63d78a3d354e79550c8cdff0a8e91e3e009993d305083208f0e5ce79b3b5d3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:43:33.986036 containerd[1511]: time="2025-03-17T17:43:33.986004847Z" level=error msg="encountered an error cleaning up failed sandbox \"e1822912a2888b680ea7f9bf3d6dfd91c2c6eda94d136c98ac1343d40214f70b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:43:33.986113 containerd[1511]: time="2025-03-17T17:43:33.986069830Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6cbddc9666-8jnnv,Uid:c27ab8c8-4886-4cfa-ac38-ef82b827b394,Namespace:calico-apiserver,Attempt:1,} failed, error" error="failed to setup network for sandbox \"e1822912a2888b680ea7f9bf3d6dfd91c2c6eda94d136c98ac1343d40214f70b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:43:33.986203 containerd[1511]: time="2025-03-17T17:43:33.986074679Z" level=error msg="encountered an error cleaning up failed sandbox \"5e63d78a3d354e79550c8cdff0a8e91e3e009993d305083208f0e5ce79b3b5d3\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:43:33.986203 containerd[1511]: time="2025-03-17T17:43:33.986175829Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-wp522,Uid:92c87a7e-fef7-4c26-ab3b-4e94dca0e582,Namespace:kube-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"5e63d78a3d354e79550c8cdff0a8e91e3e009993d305083208f0e5ce79b3b5d3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:43:33.986385 kubelet[2608]: E0317 17:43:33.986354 2608 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e1822912a2888b680ea7f9bf3d6dfd91c2c6eda94d136c98ac1343d40214f70b\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:43:33.986430 kubelet[2608]: E0317 17:43:33.986408 2608 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e1822912a2888b680ea7f9bf3d6dfd91c2c6eda94d136c98ac1343d40214f70b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6cbddc9666-8jnnv" Mar 17 17:43:33.986462 kubelet[2608]: E0317 17:43:33.986433 2608 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e1822912a2888b680ea7f9bf3d6dfd91c2c6eda94d136c98ac1343d40214f70b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6cbddc9666-8jnnv" Mar 17 17:43:33.986520 kubelet[2608]: E0317 17:43:33.986477 2608 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6cbddc9666-8jnnv_calico-apiserver(c27ab8c8-4886-4cfa-ac38-ef82b827b394)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6cbddc9666-8jnnv_calico-apiserver(c27ab8c8-4886-4cfa-ac38-ef82b827b394)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e1822912a2888b680ea7f9bf3d6dfd91c2c6eda94d136c98ac1343d40214f70b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6cbddc9666-8jnnv" podUID="c27ab8c8-4886-4cfa-ac38-ef82b827b394" Mar 17 
17:43:33.987276 kubelet[2608]: E0317 17:43:33.986352 2608 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5e63d78a3d354e79550c8cdff0a8e91e3e009993d305083208f0e5ce79b3b5d3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:43:33.987276 kubelet[2608]: E0317 17:43:33.986698 2608 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5e63d78a3d354e79550c8cdff0a8e91e3e009993d305083208f0e5ce79b3b5d3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-wp522" Mar 17 17:43:33.987276 kubelet[2608]: E0317 17:43:33.986717 2608 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5e63d78a3d354e79550c8cdff0a8e91e3e009993d305083208f0e5ce79b3b5d3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-wp522" Mar 17 17:43:33.987432 kubelet[2608]: E0317 17:43:33.986748 2608 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-wp522_kube-system(92c87a7e-fef7-4c26-ab3b-4e94dca0e582)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-wp522_kube-system(92c87a7e-fef7-4c26-ab3b-4e94dca0e582)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5e63d78a3d354e79550c8cdff0a8e91e3e009993d305083208f0e5ce79b3b5d3\\\": plugin type=\\\"calico\\\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-wp522" podUID="92c87a7e-fef7-4c26-ab3b-4e94dca0e582" Mar 17 17:43:33.993015 containerd[1511]: time="2025-03-17T17:43:33.992974211Z" level=error msg="Failed to destroy network for sandbox \"c727faf86979f098f3854e7a292376485b8fdcffd8e4a361726d0eb4ed6c4541\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:43:33.993375 containerd[1511]: time="2025-03-17T17:43:33.993348146Z" level=error msg="encountered an error cleaning up failed sandbox \"c727faf86979f098f3854e7a292376485b8fdcffd8e4a361726d0eb4ed6c4541\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:43:33.993419 containerd[1511]: time="2025-03-17T17:43:33.993399382Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-86f7c466bd-bf4pr,Uid:bb999740-3eb7-4d5d-b75d-a6ed26b4fcf7,Namespace:calico-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"c727faf86979f098f3854e7a292376485b8fdcffd8e4a361726d0eb4ed6c4541\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:43:33.993620 kubelet[2608]: E0317 17:43:33.993581 2608 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c727faf86979f098f3854e7a292376485b8fdcffd8e4a361726d0eb4ed6c4541\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:43:33.993706 kubelet[2608]: E0317 17:43:33.993645 2608 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c727faf86979f098f3854e7a292376485b8fdcffd8e4a361726d0eb4ed6c4541\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-86f7c466bd-bf4pr" Mar 17 17:43:33.993706 kubelet[2608]: E0317 17:43:33.993670 2608 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c727faf86979f098f3854e7a292376485b8fdcffd8e4a361726d0eb4ed6c4541\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-86f7c466bd-bf4pr" Mar 17 17:43:33.993772 kubelet[2608]: E0317 17:43:33.993724 2608 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-86f7c466bd-bf4pr_calico-system(bb999740-3eb7-4d5d-b75d-a6ed26b4fcf7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-86f7c466bd-bf4pr_calico-system(bb999740-3eb7-4d5d-b75d-a6ed26b4fcf7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c727faf86979f098f3854e7a292376485b8fdcffd8e4a361726d0eb4ed6c4541\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-86f7c466bd-bf4pr" podUID="bb999740-3eb7-4d5d-b75d-a6ed26b4fcf7" Mar 17 17:43:34.819204 kubelet[2608]: I0317 17:43:34.819156 
2608 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5e63d78a3d354e79550c8cdff0a8e91e3e009993d305083208f0e5ce79b3b5d3" Mar 17 17:43:34.819933 containerd[1511]: time="2025-03-17T17:43:34.819859191Z" level=info msg="StopPodSandbox for \"5e63d78a3d354e79550c8cdff0a8e91e3e009993d305083208f0e5ce79b3b5d3\"" Mar 17 17:43:34.820362 containerd[1511]: time="2025-03-17T17:43:34.820336148Z" level=info msg="Ensure that sandbox 5e63d78a3d354e79550c8cdff0a8e91e3e009993d305083208f0e5ce79b3b5d3 in task-service has been cleanup successfully" Mar 17 17:43:34.825301 containerd[1511]: time="2025-03-17T17:43:34.823360697Z" level=info msg="TearDown network for sandbox \"5e63d78a3d354e79550c8cdff0a8e91e3e009993d305083208f0e5ce79b3b5d3\" successfully" Mar 17 17:43:34.825301 containerd[1511]: time="2025-03-17T17:43:34.823423075Z" level=info msg="StopPodSandbox for \"5e63d78a3d354e79550c8cdff0a8e91e3e009993d305083208f0e5ce79b3b5d3\" returns successfully" Mar 17 17:43:34.825301 containerd[1511]: time="2025-03-17T17:43:34.824634905Z" level=info msg="StopPodSandbox for \"605a348218e0107d12ff4cea59eb6ff512155f4808ce20dbf8845e1c2ca94a38\"" Mar 17 17:43:34.825301 containerd[1511]: time="2025-03-17T17:43:34.824751775Z" level=info msg="TearDown network for sandbox \"605a348218e0107d12ff4cea59eb6ff512155f4808ce20dbf8845e1c2ca94a38\" successfully" Mar 17 17:43:34.825301 containerd[1511]: time="2025-03-17T17:43:34.824768266Z" level=info msg="StopPodSandbox for \"605a348218e0107d12ff4cea59eb6ff512155f4808ce20dbf8845e1c2ca94a38\" returns successfully" Mar 17 17:43:34.825530 kubelet[2608]: I0317 17:43:34.824516 2608 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b9037660650fef2882c30e4a390d90ea5d95ff351ca1fe4fef4c05e1892a40bf" Mar 17 17:43:34.825530 kubelet[2608]: E0317 17:43:34.825099 2608 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line 
is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:43:34.825438 systemd[1]: run-netns-cni\x2d96162eaf\x2d2ce8\x2dfca6\x2d92dc\x2d934ba8ea82fc.mount: Deactivated successfully. Mar 17 17:43:34.825996 containerd[1511]: time="2025-03-17T17:43:34.825564323Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-wp522,Uid:92c87a7e-fef7-4c26-ab3b-4e94dca0e582,Namespace:kube-system,Attempt:2,}" Mar 17 17:43:34.825996 containerd[1511]: time="2025-03-17T17:43:34.825589972Z" level=info msg="StopPodSandbox for \"b9037660650fef2882c30e4a390d90ea5d95ff351ca1fe4fef4c05e1892a40bf\"" Mar 17 17:43:34.825996 containerd[1511]: time="2025-03-17T17:43:34.825888733Z" level=info msg="Ensure that sandbox b9037660650fef2882c30e4a390d90ea5d95ff351ca1fe4fef4c05e1892a40bf in task-service has been cleanup successfully" Mar 17 17:43:34.826750 containerd[1511]: time="2025-03-17T17:43:34.826716561Z" level=info msg="TearDown network for sandbox \"b9037660650fef2882c30e4a390d90ea5d95ff351ca1fe4fef4c05e1892a40bf\" successfully" Mar 17 17:43:34.826750 containerd[1511]: time="2025-03-17T17:43:34.826743040Z" level=info msg="StopPodSandbox for \"b9037660650fef2882c30e4a390d90ea5d95ff351ca1fe4fef4c05e1892a40bf\" returns successfully" Mar 17 17:43:34.829945 kubelet[2608]: I0317 17:43:34.827829 2608 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e1822912a2888b680ea7f9bf3d6dfd91c2c6eda94d136c98ac1343d40214f70b" Mar 17 17:43:34.830029 containerd[1511]: time="2025-03-17T17:43:34.828440193Z" level=info msg="StopPodSandbox for \"e1822912a2888b680ea7f9bf3d6dfd91c2c6eda94d136c98ac1343d40214f70b\"" Mar 17 17:43:34.830029 containerd[1511]: time="2025-03-17T17:43:34.828697257Z" level=info msg="Ensure that sandbox e1822912a2888b680ea7f9bf3d6dfd91c2c6eda94d136c98ac1343d40214f70b in task-service has been cleanup successfully" Mar 17 17:43:34.830029 containerd[1511]: time="2025-03-17T17:43:34.828955092Z" level=info msg="StopPodSandbox for 
\"b3416c553b8ee2ddd7115ce7b04c86d519dee7b4f6a9ee2e1a0c06779a132290\"" Mar 17 17:43:34.830029 containerd[1511]: time="2025-03-17T17:43:34.829049700Z" level=info msg="TearDown network for sandbox \"b3416c553b8ee2ddd7115ce7b04c86d519dee7b4f6a9ee2e1a0c06779a132290\" successfully" Mar 17 17:43:34.830029 containerd[1511]: time="2025-03-17T17:43:34.829063927Z" level=info msg="StopPodSandbox for \"b3416c553b8ee2ddd7115ce7b04c86d519dee7b4f6a9ee2e1a0c06779a132290\" returns successfully" Mar 17 17:43:34.830029 containerd[1511]: time="2025-03-17T17:43:34.829704602Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-j8ss7,Uid:8ed5b12a-6d88-43a8-8215-c1e4e9724067,Namespace:calico-system,Attempt:2,}" Mar 17 17:43:34.830415 systemd[1]: run-netns-cni\x2dba55fd00\x2d7854\x2df031\x2d6ff5\x2d8de54c8c3b1f.mount: Deactivated successfully. Mar 17 17:43:34.830791 containerd[1511]: time="2025-03-17T17:43:34.830759577Z" level=info msg="TearDown network for sandbox \"e1822912a2888b680ea7f9bf3d6dfd91c2c6eda94d136c98ac1343d40214f70b\" successfully" Mar 17 17:43:34.830855 containerd[1511]: time="2025-03-17T17:43:34.830789172Z" level=info msg="StopPodSandbox for \"e1822912a2888b680ea7f9bf3d6dfd91c2c6eda94d136c98ac1343d40214f70b\" returns successfully" Mar 17 17:43:34.831896 kubelet[2608]: I0317 17:43:34.831855 2608 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c727faf86979f098f3854e7a292376485b8fdcffd8e4a361726d0eb4ed6c4541" Mar 17 17:43:34.832325 containerd[1511]: time="2025-03-17T17:43:34.832093286Z" level=info msg="StopPodSandbox for \"4ce107ba3ec9f31d7b3420825039f0a23e3438346875ccd104dc7f519f537446\"" Mar 17 17:43:34.832325 containerd[1511]: time="2025-03-17T17:43:34.832194386Z" level=info msg="TearDown network for sandbox \"4ce107ba3ec9f31d7b3420825039f0a23e3438346875ccd104dc7f519f537446\" successfully" Mar 17 17:43:34.832325 containerd[1511]: time="2025-03-17T17:43:34.832207451Z" level=info msg="StopPodSandbox for 
\"4ce107ba3ec9f31d7b3420825039f0a23e3438346875ccd104dc7f519f537446\" returns successfully" Mar 17 17:43:34.834274 containerd[1511]: time="2025-03-17T17:43:34.832834881Z" level=info msg="StopPodSandbox for \"c727faf86979f098f3854e7a292376485b8fdcffd8e4a361726d0eb4ed6c4541\"" Mar 17 17:43:34.834274 containerd[1511]: time="2025-03-17T17:43:34.832970276Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6cbddc9666-8jnnv,Uid:c27ab8c8-4886-4cfa-ac38-ef82b827b394,Namespace:calico-apiserver,Attempt:2,}" Mar 17 17:43:34.834274 containerd[1511]: time="2025-03-17T17:43:34.833098717Z" level=info msg="Ensure that sandbox c727faf86979f098f3854e7a292376485b8fdcffd8e4a361726d0eb4ed6c4541 in task-service has been cleanup successfully" Mar 17 17:43:34.834274 containerd[1511]: time="2025-03-17T17:43:34.833351372Z" level=info msg="TearDown network for sandbox \"c727faf86979f098f3854e7a292376485b8fdcffd8e4a361726d0eb4ed6c4541\" successfully" Mar 17 17:43:34.834274 containerd[1511]: time="2025-03-17T17:43:34.833365399Z" level=info msg="StopPodSandbox for \"c727faf86979f098f3854e7a292376485b8fdcffd8e4a361726d0eb4ed6c4541\" returns successfully" Mar 17 17:43:34.834274 containerd[1511]: time="2025-03-17T17:43:34.833824322Z" level=info msg="StopPodSandbox for \"22d2b9a804aff601d7932c4460691afa93c66169fac5c4cfe51532162fb0b3e0\"" Mar 17 17:43:34.834274 containerd[1511]: time="2025-03-17T17:43:34.833937996Z" level=info msg="TearDown network for sandbox \"22d2b9a804aff601d7932c4460691afa93c66169fac5c4cfe51532162fb0b3e0\" successfully" Mar 17 17:43:34.834274 containerd[1511]: time="2025-03-17T17:43:34.833952724Z" level=info msg="StopPodSandbox for \"22d2b9a804aff601d7932c4460691afa93c66169fac5c4cfe51532162fb0b3e0\" returns successfully" Mar 17 17:43:34.834710 containerd[1511]: time="2025-03-17T17:43:34.834678900Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-kube-controllers-86f7c466bd-bf4pr,Uid:bb999740-3eb7-4d5d-b75d-a6ed26b4fcf7,Namespace:calico-system,Attempt:2,}" Mar 17 17:43:34.835052 kubelet[2608]: I0317 17:43:34.835010 2608 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2cb67c065fae8b5c1bdba86659bed6455408568df13f798feb182fef9902a6fd" Mar 17 17:43:34.835582 containerd[1511]: time="2025-03-17T17:43:34.835560779Z" level=info msg="StopPodSandbox for \"2cb67c065fae8b5c1bdba86659bed6455408568df13f798feb182fef9902a6fd\"" Mar 17 17:43:34.836563 containerd[1511]: time="2025-03-17T17:43:34.835884408Z" level=info msg="Ensure that sandbox 2cb67c065fae8b5c1bdba86659bed6455408568df13f798feb182fef9902a6fd in task-service has been cleanup successfully" Mar 17 17:43:34.836061 systemd[1]: run-netns-cni\x2db278460e\x2d2225\x2d161e\x2d171e\x2d050a0d25786f.mount: Deactivated successfully. Mar 17 17:43:34.836274 systemd[1]: run-netns-cni\x2d94350bb1\x2da816\x2dce95\x2d118d\x2d2df4b1948f32.mount: Deactivated successfully. 
Mar 17 17:43:34.836858 containerd[1511]: time="2025-03-17T17:43:34.836824547Z" level=info msg="TearDown network for sandbox \"2cb67c065fae8b5c1bdba86659bed6455408568df13f798feb182fef9902a6fd\" successfully" Mar 17 17:43:34.836858 containerd[1511]: time="2025-03-17T17:43:34.836843843Z" level=info msg="StopPodSandbox for \"2cb67c065fae8b5c1bdba86659bed6455408568df13f798feb182fef9902a6fd\" returns successfully" Mar 17 17:43:34.838281 containerd[1511]: time="2025-03-17T17:43:34.837525645Z" level=info msg="StopPodSandbox for \"a76d3257dd8be43a7cc3bd31473414beabbebb92e09df84988345f517c7acdb6\"" Mar 17 17:43:34.838281 containerd[1511]: time="2025-03-17T17:43:34.837629992Z" level=info msg="TearDown network for sandbox \"a76d3257dd8be43a7cc3bd31473414beabbebb92e09df84988345f517c7acdb6\" successfully" Mar 17 17:43:34.838281 containerd[1511]: time="2025-03-17T17:43:34.837645261Z" level=info msg="StopPodSandbox for \"a76d3257dd8be43a7cc3bd31473414beabbebb92e09df84988345f517c7acdb6\" returns successfully" Mar 17 17:43:34.840538 systemd[1]: run-netns-cni\x2dee20de40\x2dc719\x2d04c5\x2d45e2\x2db8fa32294c5b.mount: Deactivated successfully. 
Mar 17 17:43:34.843755 kubelet[2608]: I0317 17:43:34.843719 2608 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a6fd5ae8d8c37d968bd04700971eb2886495554432a091f2befb520f8b34c501" Mar 17 17:43:34.844480 containerd[1511]: time="2025-03-17T17:43:34.844442197Z" level=info msg="StopPodSandbox for \"a6fd5ae8d8c37d968bd04700971eb2886495554432a091f2befb520f8b34c501\"" Mar 17 17:43:34.844849 containerd[1511]: time="2025-03-17T17:43:34.844696235Z" level=info msg="Ensure that sandbox a6fd5ae8d8c37d968bd04700971eb2886495554432a091f2befb520f8b34c501 in task-service has been cleanup successfully" Mar 17 17:43:34.844941 containerd[1511]: time="2025-03-17T17:43:34.844759744Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6cbddc9666-fgfrs,Uid:b74841bb-3d21-4734-85da-f48ab60f9d98,Namespace:calico-apiserver,Attempt:2,}" Mar 17 17:43:34.845015 containerd[1511]: time="2025-03-17T17:43:34.844952397Z" level=info msg="TearDown network for sandbox \"a6fd5ae8d8c37d968bd04700971eb2886495554432a091f2befb520f8b34c501\" successfully" Mar 17 17:43:34.845015 containerd[1511]: time="2025-03-17T17:43:34.844970541Z" level=info msg="StopPodSandbox for \"a6fd5ae8d8c37d968bd04700971eb2886495554432a091f2befb520f8b34c501\" returns successfully" Mar 17 17:43:34.845332 containerd[1511]: time="2025-03-17T17:43:34.845304369Z" level=info msg="StopPodSandbox for \"c0357f0524099885f06d4a7418831e7579b998c419311c26aa34d8779c42dc52\"" Mar 17 17:43:34.845782 containerd[1511]: time="2025-03-17T17:43:34.845435605Z" level=info msg="TearDown network for sandbox \"c0357f0524099885f06d4a7418831e7579b998c419311c26aa34d8779c42dc52\" successfully" Mar 17 17:43:34.845782 containerd[1511]: time="2025-03-17T17:43:34.845451376Z" level=info msg="StopPodSandbox for \"c0357f0524099885f06d4a7418831e7579b998c419311c26aa34d8779c42dc52\" returns successfully" Mar 17 17:43:34.845868 kubelet[2608]: E0317 17:43:34.845662 2608 dns.go:153] "Nameserver limits exceeded" 
err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:43:34.846079 containerd[1511]: time="2025-03-17T17:43:34.845940856Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-qmtlq,Uid:9de23c76-70f0-4fa7-aa26-f471719ff480,Namespace:kube-system,Attempt:2,}" Mar 17 17:43:35.222714 systemd[1]: run-netns-cni\x2def30201c\x2de777\x2d0451\x2d575e\x2d853ba0a1d4a3.mount: Deactivated successfully. Mar 17 17:43:35.492921 containerd[1511]: time="2025-03-17T17:43:35.492665799Z" level=error msg="Failed to destroy network for sandbox \"3bd9d52ffb21c34bbcd23a5be2a60801430f0a97bb4c73a345aebc960a93ff4a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:43:35.493730 containerd[1511]: time="2025-03-17T17:43:35.493235110Z" level=error msg="encountered an error cleaning up failed sandbox \"3bd9d52ffb21c34bbcd23a5be2a60801430f0a97bb4c73a345aebc960a93ff4a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:43:35.493730 containerd[1511]: time="2025-03-17T17:43:35.493332893Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-wp522,Uid:92c87a7e-fef7-4c26-ab3b-4e94dca0e582,Namespace:kube-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"3bd9d52ffb21c34bbcd23a5be2a60801430f0a97bb4c73a345aebc960a93ff4a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:43:35.493826 kubelet[2608]: E0317 17:43:35.493711 2608 log.go:32] "RunPodSandbox from runtime service 
failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3bd9d52ffb21c34bbcd23a5be2a60801430f0a97bb4c73a345aebc960a93ff4a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:43:35.493826 kubelet[2608]: E0317 17:43:35.493798 2608 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3bd9d52ffb21c34bbcd23a5be2a60801430f0a97bb4c73a345aebc960a93ff4a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-wp522" Mar 17 17:43:35.493826 kubelet[2608]: E0317 17:43:35.493821 2608 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3bd9d52ffb21c34bbcd23a5be2a60801430f0a97bb4c73a345aebc960a93ff4a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-wp522" Mar 17 17:43:35.493982 kubelet[2608]: E0317 17:43:35.493866 2608 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-wp522_kube-system(92c87a7e-fef7-4c26-ab3b-4e94dca0e582)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-wp522_kube-system(92c87a7e-fef7-4c26-ab3b-4e94dca0e582)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3bd9d52ffb21c34bbcd23a5be2a60801430f0a97bb4c73a345aebc960a93ff4a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-wp522" podUID="92c87a7e-fef7-4c26-ab3b-4e94dca0e582" Mar 17 17:43:35.521401 containerd[1511]: time="2025-03-17T17:43:35.521338859Z" level=error msg="Failed to destroy network for sandbox \"26ba0aa2321805999e40ba2fc413b837bfc85df793e284ed1b86c369f534a215\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:43:35.521824 containerd[1511]: time="2025-03-17T17:43:35.521793254Z" level=error msg="encountered an error cleaning up failed sandbox \"26ba0aa2321805999e40ba2fc413b837bfc85df793e284ed1b86c369f534a215\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:43:35.521886 containerd[1511]: time="2025-03-17T17:43:35.521858266Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-j8ss7,Uid:8ed5b12a-6d88-43a8-8215-c1e4e9724067,Namespace:calico-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"26ba0aa2321805999e40ba2fc413b837bfc85df793e284ed1b86c369f534a215\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:43:35.522131 kubelet[2608]: E0317 17:43:35.522088 2608 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"26ba0aa2321805999e40ba2fc413b837bfc85df793e284ed1b86c369f534a215\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:43:35.522183 kubelet[2608]: E0317 17:43:35.522169 2608 
kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"26ba0aa2321805999e40ba2fc413b837bfc85df793e284ed1b86c369f534a215\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-j8ss7" Mar 17 17:43:35.522221 kubelet[2608]: E0317 17:43:35.522193 2608 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"26ba0aa2321805999e40ba2fc413b837bfc85df793e284ed1b86c369f534a215\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-j8ss7" Mar 17 17:43:35.522276 kubelet[2608]: E0317 17:43:35.522252 2608 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-j8ss7_calico-system(8ed5b12a-6d88-43a8-8215-c1e4e9724067)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-j8ss7_calico-system(8ed5b12a-6d88-43a8-8215-c1e4e9724067)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"26ba0aa2321805999e40ba2fc413b837bfc85df793e284ed1b86c369f534a215\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-j8ss7" podUID="8ed5b12a-6d88-43a8-8215-c1e4e9724067" Mar 17 17:43:35.531857 containerd[1511]: time="2025-03-17T17:43:35.531708222Z" level=error msg="Failed to destroy network for sandbox \"505868a5c076596de5c7575902f86b043b6cdab84fc363153c09421557adfba8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such 
file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:43:35.532233 containerd[1511]: time="2025-03-17T17:43:35.532210157Z" level=error msg="encountered an error cleaning up failed sandbox \"505868a5c076596de5c7575902f86b043b6cdab84fc363153c09421557adfba8\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:43:35.532367 containerd[1511]: time="2025-03-17T17:43:35.532347525Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6cbddc9666-8jnnv,Uid:c27ab8c8-4886-4cfa-ac38-ef82b827b394,Namespace:calico-apiserver,Attempt:2,} failed, error" error="failed to setup network for sandbox \"505868a5c076596de5c7575902f86b043b6cdab84fc363153c09421557adfba8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:43:35.532765 kubelet[2608]: E0317 17:43:35.532727 2608 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"505868a5c076596de5c7575902f86b043b6cdab84fc363153c09421557adfba8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:43:35.533036 kubelet[2608]: E0317 17:43:35.532902 2608 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"505868a5c076596de5c7575902f86b043b6cdab84fc363153c09421557adfba8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-apiserver/calico-apiserver-6cbddc9666-8jnnv" Mar 17 17:43:35.533036 kubelet[2608]: E0317 17:43:35.532937 2608 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"505868a5c076596de5c7575902f86b043b6cdab84fc363153c09421557adfba8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6cbddc9666-8jnnv" Mar 17 17:43:35.533036 kubelet[2608]: E0317 17:43:35.533001 2608 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6cbddc9666-8jnnv_calico-apiserver(c27ab8c8-4886-4cfa-ac38-ef82b827b394)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6cbddc9666-8jnnv_calico-apiserver(c27ab8c8-4886-4cfa-ac38-ef82b827b394)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"505868a5c076596de5c7575902f86b043b6cdab84fc363153c09421557adfba8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6cbddc9666-8jnnv" podUID="c27ab8c8-4886-4cfa-ac38-ef82b827b394" Mar 17 17:43:35.545779 containerd[1511]: time="2025-03-17T17:43:35.545551300Z" level=error msg="Failed to destroy network for sandbox \"af73a63aa645f1d51633cc194edb5fce001c89b5bb20b391b65014c31e4f0c4f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:43:35.546056 containerd[1511]: time="2025-03-17T17:43:35.545990516Z" level=error msg="encountered an error cleaning up failed sandbox 
\"af73a63aa645f1d51633cc194edb5fce001c89b5bb20b391b65014c31e4f0c4f\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:43:35.546115 containerd[1511]: time="2025-03-17T17:43:35.546087278Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6cbddc9666-fgfrs,Uid:b74841bb-3d21-4734-85da-f48ab60f9d98,Namespace:calico-apiserver,Attempt:2,} failed, error" error="failed to setup network for sandbox \"af73a63aa645f1d51633cc194edb5fce001c89b5bb20b391b65014c31e4f0c4f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:43:35.546410 kubelet[2608]: E0317 17:43:35.546356 2608 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"af73a63aa645f1d51633cc194edb5fce001c89b5bb20b391b65014c31e4f0c4f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:43:35.546472 kubelet[2608]: E0317 17:43:35.546437 2608 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"af73a63aa645f1d51633cc194edb5fce001c89b5bb20b391b65014c31e4f0c4f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6cbddc9666-fgfrs" Mar 17 17:43:35.546472 kubelet[2608]: E0317 17:43:35.546466 2608 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for 
sandbox \"af73a63aa645f1d51633cc194edb5fce001c89b5bb20b391b65014c31e4f0c4f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6cbddc9666-fgfrs" Mar 17 17:43:35.546592 kubelet[2608]: E0317 17:43:35.546518 2608 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6cbddc9666-fgfrs_calico-apiserver(b74841bb-3d21-4734-85da-f48ab60f9d98)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6cbddc9666-fgfrs_calico-apiserver(b74841bb-3d21-4734-85da-f48ab60f9d98)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"af73a63aa645f1d51633cc194edb5fce001c89b5bb20b391b65014c31e4f0c4f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6cbddc9666-fgfrs" podUID="b74841bb-3d21-4734-85da-f48ab60f9d98" Mar 17 17:43:35.564982 containerd[1511]: time="2025-03-17T17:43:35.564934287Z" level=error msg="Failed to destroy network for sandbox \"9247361ac2ade69977479b8eda935f31e0c380b93fd00fee4265ac9e0a0ed408\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:43:35.565583 containerd[1511]: time="2025-03-17T17:43:35.565552619Z" level=error msg="encountered an error cleaning up failed sandbox \"9247361ac2ade69977479b8eda935f31e0c380b93fd00fee4265ac9e0a0ed408\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 
17:43:35.565696 containerd[1511]: time="2025-03-17T17:43:35.565677364Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-86f7c466bd-bf4pr,Uid:bb999740-3eb7-4d5d-b75d-a6ed26b4fcf7,Namespace:calico-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"9247361ac2ade69977479b8eda935f31e0c380b93fd00fee4265ac9e0a0ed408\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:43:35.566067 kubelet[2608]: E0317 17:43:35.566015 2608 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9247361ac2ade69977479b8eda935f31e0c380b93fd00fee4265ac9e0a0ed408\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:43:35.566136 kubelet[2608]: E0317 17:43:35.566096 2608 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9247361ac2ade69977479b8eda935f31e0c380b93fd00fee4265ac9e0a0ed408\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-86f7c466bd-bf4pr" Mar 17 17:43:35.566136 kubelet[2608]: E0317 17:43:35.566117 2608 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9247361ac2ade69977479b8eda935f31e0c380b93fd00fee4265ac9e0a0ed408\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-system/calico-kube-controllers-86f7c466bd-bf4pr" Mar 17 17:43:35.566191 kubelet[2608]: E0317 17:43:35.566163 2608 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-86f7c466bd-bf4pr_calico-system(bb999740-3eb7-4d5d-b75d-a6ed26b4fcf7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-86f7c466bd-bf4pr_calico-system(bb999740-3eb7-4d5d-b75d-a6ed26b4fcf7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9247361ac2ade69977479b8eda935f31e0c380b93fd00fee4265ac9e0a0ed408\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-86f7c466bd-bf4pr" podUID="bb999740-3eb7-4d5d-b75d-a6ed26b4fcf7" Mar 17 17:43:35.576896 containerd[1511]: time="2025-03-17T17:43:35.576718431Z" level=error msg="Failed to destroy network for sandbox \"f00b51c06514b707b4929dbc9613099efe7622f3f5df69b8af03440ae18de24e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:43:35.577341 containerd[1511]: time="2025-03-17T17:43:35.577319161Z" level=error msg="encountered an error cleaning up failed sandbox \"f00b51c06514b707b4929dbc9613099efe7622f3f5df69b8af03440ae18de24e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:43:35.577568 containerd[1511]: time="2025-03-17T17:43:35.577432654Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-qmtlq,Uid:9de23c76-70f0-4fa7-aa26-f471719ff480,Namespace:kube-system,Attempt:2,} failed, error" 
error="failed to setup network for sandbox \"f00b51c06514b707b4929dbc9613099efe7622f3f5df69b8af03440ae18de24e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:43:35.577753 kubelet[2608]: E0317 17:43:35.577718 2608 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f00b51c06514b707b4929dbc9613099efe7622f3f5df69b8af03440ae18de24e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:43:35.577823 kubelet[2608]: E0317 17:43:35.577777 2608 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f00b51c06514b707b4929dbc9613099efe7622f3f5df69b8af03440ae18de24e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-qmtlq" Mar 17 17:43:35.577823 kubelet[2608]: E0317 17:43:35.577796 2608 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f00b51c06514b707b4929dbc9613099efe7622f3f5df69b8af03440ae18de24e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-qmtlq" Mar 17 17:43:35.577882 kubelet[2608]: E0317 17:43:35.577836 2608 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-qmtlq_kube-system(9de23c76-70f0-4fa7-aa26-f471719ff480)\" with CreatePodSandboxError: \"Failed to create 
sandbox for pod \\\"coredns-6f6b679f8f-qmtlq_kube-system(9de23c76-70f0-4fa7-aa26-f471719ff480)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f00b51c06514b707b4929dbc9613099efe7622f3f5df69b8af03440ae18de24e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-qmtlq" podUID="9de23c76-70f0-4fa7-aa26-f471719ff480" Mar 17 17:43:35.848858 kubelet[2608]: I0317 17:43:35.848678 2608 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3bd9d52ffb21c34bbcd23a5be2a60801430f0a97bb4c73a345aebc960a93ff4a" Mar 17 17:43:35.849757 containerd[1511]: time="2025-03-17T17:43:35.849621716Z" level=info msg="StopPodSandbox for \"3bd9d52ffb21c34bbcd23a5be2a60801430f0a97bb4c73a345aebc960a93ff4a\"" Mar 17 17:43:35.850262 containerd[1511]: time="2025-03-17T17:43:35.849844505Z" level=info msg="Ensure that sandbox 3bd9d52ffb21c34bbcd23a5be2a60801430f0a97bb4c73a345aebc960a93ff4a in task-service has been cleanup successfully" Mar 17 17:43:35.850512 containerd[1511]: time="2025-03-17T17:43:35.850476655Z" level=info msg="TearDown network for sandbox \"3bd9d52ffb21c34bbcd23a5be2a60801430f0a97bb4c73a345aebc960a93ff4a\" successfully" Mar 17 17:43:35.850512 containerd[1511]: time="2025-03-17T17:43:35.850495720Z" level=info msg="StopPodSandbox for \"3bd9d52ffb21c34bbcd23a5be2a60801430f0a97bb4c73a345aebc960a93ff4a\" returns successfully" Mar 17 17:43:35.851273 containerd[1511]: time="2025-03-17T17:43:35.851123931Z" level=info msg="StopPodSandbox for \"5e63d78a3d354e79550c8cdff0a8e91e3e009993d305083208f0e5ce79b3b5d3\"" Mar 17 17:43:35.851693 containerd[1511]: time="2025-03-17T17:43:35.851304772Z" level=info msg="TearDown network for sandbox \"5e63d78a3d354e79550c8cdff0a8e91e3e009993d305083208f0e5ce79b3b5d3\" successfully" Mar 17 17:43:35.851693 containerd[1511]: 
time="2025-03-17T17:43:35.851317996Z" level=info msg="StopPodSandbox for \"5e63d78a3d354e79550c8cdff0a8e91e3e009993d305083208f0e5ce79b3b5d3\" returns successfully" Mar 17 17:43:35.851753 kubelet[2608]: I0317 17:43:35.851363 2608 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="26ba0aa2321805999e40ba2fc413b837bfc85df793e284ed1b86c369f534a215" Mar 17 17:43:35.851920 containerd[1511]: time="2025-03-17T17:43:35.851882779Z" level=info msg="StopPodSandbox for \"26ba0aa2321805999e40ba2fc413b837bfc85df793e284ed1b86c369f534a215\"" Mar 17 17:43:35.852157 containerd[1511]: time="2025-03-17T17:43:35.852122200Z" level=info msg="Ensure that sandbox 26ba0aa2321805999e40ba2fc413b837bfc85df793e284ed1b86c369f534a215 in task-service has been cleanup successfully" Mar 17 17:43:35.852338 containerd[1511]: time="2025-03-17T17:43:35.852318388Z" level=info msg="TearDown network for sandbox \"26ba0aa2321805999e40ba2fc413b837bfc85df793e284ed1b86c369f534a215\" successfully" Mar 17 17:43:35.852338 containerd[1511]: time="2025-03-17T17:43:35.852335641Z" level=info msg="StopPodSandbox for \"26ba0aa2321805999e40ba2fc413b837bfc85df793e284ed1b86c369f534a215\" returns successfully" Mar 17 17:43:35.852751 containerd[1511]: time="2025-03-17T17:43:35.852701218Z" level=info msg="StopPodSandbox for \"b9037660650fef2882c30e4a390d90ea5d95ff351ca1fe4fef4c05e1892a40bf\"" Mar 17 17:43:35.852793 containerd[1511]: time="2025-03-17T17:43:35.852774486Z" level=info msg="TearDown network for sandbox \"b9037660650fef2882c30e4a390d90ea5d95ff351ca1fe4fef4c05e1892a40bf\" successfully" Mar 17 17:43:35.852824 containerd[1511]: time="2025-03-17T17:43:35.852791970Z" level=info msg="StopPodSandbox for \"b9037660650fef2882c30e4a390d90ea5d95ff351ca1fe4fef4c05e1892a40bf\" returns successfully" Mar 17 17:43:35.853229 containerd[1511]: time="2025-03-17T17:43:35.853195919Z" level=info msg="StopPodSandbox for \"b3416c553b8ee2ddd7115ce7b04c86d519dee7b4f6a9ee2e1a0c06779a132290\"" Mar 17 
17:43:35.853872 containerd[1511]: time="2025-03-17T17:43:35.853303551Z" level=info msg="TearDown network for sandbox \"b3416c553b8ee2ddd7115ce7b04c86d519dee7b4f6a9ee2e1a0c06779a132290\" successfully" Mar 17 17:43:35.853872 containerd[1511]: time="2025-03-17T17:43:35.853318629Z" level=info msg="StopPodSandbox for \"b3416c553b8ee2ddd7115ce7b04c86d519dee7b4f6a9ee2e1a0c06779a132290\" returns successfully" Mar 17 17:43:35.853996 kubelet[2608]: I0317 17:43:35.853536 2608 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="505868a5c076596de5c7575902f86b043b6cdab84fc363153c09421557adfba8" Mar 17 17:43:35.854706 containerd[1511]: time="2025-03-17T17:43:35.854110850Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-j8ss7,Uid:8ed5b12a-6d88-43a8-8215-c1e4e9724067,Namespace:calico-system,Attempt:3,}" Mar 17 17:43:35.854706 containerd[1511]: time="2025-03-17T17:43:35.854418608Z" level=info msg="StopPodSandbox for \"505868a5c076596de5c7575902f86b043b6cdab84fc363153c09421557adfba8\"" Mar 17 17:43:35.854706 containerd[1511]: time="2025-03-17T17:43:35.854574301Z" level=info msg="Ensure that sandbox 505868a5c076596de5c7575902f86b043b6cdab84fc363153c09421557adfba8 in task-service has been cleanup successfully" Mar 17 17:43:35.854867 containerd[1511]: time="2025-03-17T17:43:35.854847095Z" level=info msg="TearDown network for sandbox \"505868a5c076596de5c7575902f86b043b6cdab84fc363153c09421557adfba8\" successfully" Mar 17 17:43:35.854936 containerd[1511]: time="2025-03-17T17:43:35.854922105Z" level=info msg="StopPodSandbox for \"505868a5c076596de5c7575902f86b043b6cdab84fc363153c09421557adfba8\" returns successfully" Mar 17 17:43:35.857000 containerd[1511]: time="2025-03-17T17:43:35.856967353Z" level=info msg="StopPodSandbox for \"e1822912a2888b680ea7f9bf3d6dfd91c2c6eda94d136c98ac1343d40214f70b\"" Mar 17 17:43:35.857094 containerd[1511]: time="2025-03-17T17:43:35.857053935Z" level=info msg="TearDown network for sandbox 
\"e1822912a2888b680ea7f9bf3d6dfd91c2c6eda94d136c98ac1343d40214f70b\" successfully" Mar 17 17:43:35.857094 containerd[1511]: time="2025-03-17T17:43:35.857064956Z" level=info msg="StopPodSandbox for \"e1822912a2888b680ea7f9bf3d6dfd91c2c6eda94d136c98ac1343d40214f70b\" returns successfully" Mar 17 17:43:35.857587 containerd[1511]: time="2025-03-17T17:43:35.857517297Z" level=info msg="StopPodSandbox for \"4ce107ba3ec9f31d7b3420825039f0a23e3438346875ccd104dc7f519f537446\"" Mar 17 17:43:35.857647 containerd[1511]: time="2025-03-17T17:43:35.857630489Z" level=info msg="TearDown network for sandbox \"4ce107ba3ec9f31d7b3420825039f0a23e3438346875ccd104dc7f519f537446\" successfully" Mar 17 17:43:35.857647 containerd[1511]: time="2025-03-17T17:43:35.857642462Z" level=info msg="StopPodSandbox for \"4ce107ba3ec9f31d7b3420825039f0a23e3438346875ccd104dc7f519f537446\" returns successfully" Mar 17 17:43:35.858518 containerd[1511]: time="2025-03-17T17:43:35.858265384Z" level=info msg="StopPodSandbox for \"605a348218e0107d12ff4cea59eb6ff512155f4808ce20dbf8845e1c2ca94a38\"" Mar 17 17:43:35.858518 containerd[1511]: time="2025-03-17T17:43:35.858347318Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6cbddc9666-8jnnv,Uid:c27ab8c8-4886-4cfa-ac38-ef82b827b394,Namespace:calico-apiserver,Attempt:3,}" Mar 17 17:43:35.858691 kubelet[2608]: I0317 17:43:35.858313 2608 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9247361ac2ade69977479b8eda935f31e0c380b93fd00fee4265ac9e0a0ed408" Mar 17 17:43:35.864925 kubelet[2608]: I0317 17:43:35.864886 2608 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="af73a63aa645f1d51633cc194edb5fce001c89b5bb20b391b65014c31e4f0c4f" Mar 17 17:43:35.867175 containerd[1511]: time="2025-03-17T17:43:35.858428571Z" level=info msg="TearDown network for sandbox \"605a348218e0107d12ff4cea59eb6ff512155f4808ce20dbf8845e1c2ca94a38\" successfully" Mar 17 17:43:35.867175 containerd[1511]: 
time="2025-03-17T17:43:35.867047311Z" level=info msg="StopPodSandbox for \"605a348218e0107d12ff4cea59eb6ff512155f4808ce20dbf8845e1c2ca94a38\" returns successfully" Mar 17 17:43:35.867175 containerd[1511]: time="2025-03-17T17:43:35.860524142Z" level=info msg="StopPodSandbox for \"9247361ac2ade69977479b8eda935f31e0c380b93fd00fee4265ac9e0a0ed408\"" Mar 17 17:43:35.868191 containerd[1511]: time="2025-03-17T17:43:35.867358407Z" level=info msg="Ensure that sandbox 9247361ac2ade69977479b8eda935f31e0c380b93fd00fee4265ac9e0a0ed408 in task-service has been cleanup successfully" Mar 17 17:43:35.868191 containerd[1511]: time="2025-03-17T17:43:35.867665014Z" level=info msg="TearDown network for sandbox \"9247361ac2ade69977479b8eda935f31e0c380b93fd00fee4265ac9e0a0ed408\" successfully" Mar 17 17:43:35.868191 containerd[1511]: time="2025-03-17T17:43:35.867677788Z" level=info msg="StopPodSandbox for \"9247361ac2ade69977479b8eda935f31e0c380b93fd00fee4265ac9e0a0ed408\" returns successfully" Mar 17 17:43:35.868191 containerd[1511]: time="2025-03-17T17:43:35.865427224Z" level=info msg="StopPodSandbox for \"af73a63aa645f1d51633cc194edb5fce001c89b5bb20b391b65014c31e4f0c4f\"" Mar 17 17:43:35.868191 containerd[1511]: time="2025-03-17T17:43:35.867839081Z" level=info msg="Ensure that sandbox af73a63aa645f1d51633cc194edb5fce001c89b5bb20b391b65014c31e4f0c4f in task-service has been cleanup successfully" Mar 17 17:43:35.868191 containerd[1511]: time="2025-03-17T17:43:35.868178720Z" level=info msg="TearDown network for sandbox \"af73a63aa645f1d51633cc194edb5fce001c89b5bb20b391b65014c31e4f0c4f\" successfully" Mar 17 17:43:35.868191 containerd[1511]: time="2025-03-17T17:43:35.868191314Z" level=info msg="StopPodSandbox for \"af73a63aa645f1d51633cc194edb5fce001c89b5bb20b391b65014c31e4f0c4f\" returns successfully" Mar 17 17:43:35.868413 kubelet[2608]: E0317 17:43:35.868019 2608 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the 
applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:43:35.868920 containerd[1511]: time="2025-03-17T17:43:35.868758971Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-wp522,Uid:92c87a7e-fef7-4c26-ab3b-4e94dca0e582,Namespace:kube-system,Attempt:3,}" Mar 17 17:43:35.868920 containerd[1511]: time="2025-03-17T17:43:35.868883906Z" level=info msg="StopPodSandbox for \"c727faf86979f098f3854e7a292376485b8fdcffd8e4a361726d0eb4ed6c4541\"" Mar 17 17:43:35.869002 containerd[1511]: time="2025-03-17T17:43:35.868975839Z" level=info msg="TearDown network for sandbox \"c727faf86979f098f3854e7a292376485b8fdcffd8e4a361726d0eb4ed6c4541\" successfully" Mar 17 17:43:35.869002 containerd[1511]: time="2025-03-17T17:43:35.868987932Z" level=info msg="StopPodSandbox for \"c727faf86979f098f3854e7a292376485b8fdcffd8e4a361726d0eb4ed6c4541\" returns successfully" Mar 17 17:43:35.870429 containerd[1511]: time="2025-03-17T17:43:35.870296042Z" level=info msg="StopPodSandbox for \"22d2b9a804aff601d7932c4460691afa93c66169fac5c4cfe51532162fb0b3e0\"" Mar 17 17:43:35.870429 containerd[1511]: time="2025-03-17T17:43:35.870344342Z" level=info msg="StopPodSandbox for \"2cb67c065fae8b5c1bdba86659bed6455408568df13f798feb182fef9902a6fd\"" Mar 17 17:43:35.870500 containerd[1511]: time="2025-03-17T17:43:35.870438219Z" level=info msg="TearDown network for sandbox \"2cb67c065fae8b5c1bdba86659bed6455408568df13f798feb182fef9902a6fd\" successfully" Mar 17 17:43:35.870500 containerd[1511]: time="2025-03-17T17:43:35.870450492Z" level=info msg="StopPodSandbox for \"2cb67c065fae8b5c1bdba86659bed6455408568df13f798feb182fef9902a6fd\" returns successfully" Mar 17 17:43:35.870921 containerd[1511]: time="2025-03-17T17:43:35.870718627Z" level=info msg="TearDown network for sandbox \"22d2b9a804aff601d7932c4460691afa93c66169fac5c4cfe51532162fb0b3e0\" successfully" Mar 17 17:43:35.870921 containerd[1511]: time="2025-03-17T17:43:35.870749916Z" level=info msg="StopPodSandbox for 
\"22d2b9a804aff601d7932c4460691afa93c66169fac5c4cfe51532162fb0b3e0\" returns successfully" Mar 17 17:43:35.870921 containerd[1511]: time="2025-03-17T17:43:35.870729608Z" level=info msg="StopPodSandbox for \"a76d3257dd8be43a7cc3bd31473414beabbebb92e09df84988345f517c7acdb6\"" Mar 17 17:43:35.870921 containerd[1511]: time="2025-03-17T17:43:35.870860063Z" level=info msg="TearDown network for sandbox \"a76d3257dd8be43a7cc3bd31473414beabbebb92e09df84988345f517c7acdb6\" successfully" Mar 17 17:43:35.870921 containerd[1511]: time="2025-03-17T17:43:35.870874830Z" level=info msg="StopPodSandbox for \"a76d3257dd8be43a7cc3bd31473414beabbebb92e09df84988345f517c7acdb6\" returns successfully" Mar 17 17:43:35.871173 kubelet[2608]: I0317 17:43:35.871143 2608 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f00b51c06514b707b4929dbc9613099efe7622f3f5df69b8af03440ae18de24e" Mar 17 17:43:35.871566 containerd[1511]: time="2025-03-17T17:43:35.871535293Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-86f7c466bd-bf4pr,Uid:bb999740-3eb7-4d5d-b75d-a6ed26b4fcf7,Namespace:calico-system,Attempt:3,}" Mar 17 17:43:35.871785 containerd[1511]: time="2025-03-17T17:43:35.871763041Z" level=info msg="StopPodSandbox for \"f00b51c06514b707b4929dbc9613099efe7622f3f5df69b8af03440ae18de24e\"" Mar 17 17:43:35.871833 containerd[1511]: time="2025-03-17T17:43:35.871806153Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6cbddc9666-fgfrs,Uid:b74841bb-3d21-4734-85da-f48ab60f9d98,Namespace:calico-apiserver,Attempt:3,}" Mar 17 17:43:35.872052 containerd[1511]: time="2025-03-17T17:43:35.871944803Z" level=info msg="Ensure that sandbox f00b51c06514b707b4929dbc9613099efe7622f3f5df69b8af03440ae18de24e in task-service has been cleanup successfully" Mar 17 17:43:35.873573 containerd[1511]: time="2025-03-17T17:43:35.873533702Z" level=info msg="TearDown network for sandbox 
\"f00b51c06514b707b4929dbc9613099efe7622f3f5df69b8af03440ae18de24e\" successfully" Mar 17 17:43:35.873573 containerd[1511]: time="2025-03-17T17:43:35.873560993Z" level=info msg="StopPodSandbox for \"f00b51c06514b707b4929dbc9613099efe7622f3f5df69b8af03440ae18de24e\" returns successfully" Mar 17 17:43:35.874218 containerd[1511]: time="2025-03-17T17:43:35.874191950Z" level=info msg="StopPodSandbox for \"a6fd5ae8d8c37d968bd04700971eb2886495554432a091f2befb520f8b34c501\"" Mar 17 17:43:35.874390 containerd[1511]: time="2025-03-17T17:43:35.874347463Z" level=info msg="TearDown network for sandbox \"a6fd5ae8d8c37d968bd04700971eb2886495554432a091f2befb520f8b34c501\" successfully" Mar 17 17:43:35.874390 containerd[1511]: time="2025-03-17T17:43:35.874367490Z" level=info msg="StopPodSandbox for \"a6fd5ae8d8c37d968bd04700971eb2886495554432a091f2befb520f8b34c501\" returns successfully" Mar 17 17:43:35.874819 containerd[1511]: time="2025-03-17T17:43:35.874774826Z" level=info msg="StopPodSandbox for \"c0357f0524099885f06d4a7418831e7579b998c419311c26aa34d8779c42dc52\"" Mar 17 17:43:35.874916 containerd[1511]: time="2025-03-17T17:43:35.874885394Z" level=info msg="TearDown network for sandbox \"c0357f0524099885f06d4a7418831e7579b998c419311c26aa34d8779c42dc52\" successfully" Mar 17 17:43:35.874989 containerd[1511]: time="2025-03-17T17:43:35.874913958Z" level=info msg="StopPodSandbox for \"c0357f0524099885f06d4a7418831e7579b998c419311c26aa34d8779c42dc52\" returns successfully" Mar 17 17:43:35.875300 kubelet[2608]: E0317 17:43:35.875227 2608 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:43:35.875651 containerd[1511]: time="2025-03-17T17:43:35.875628221Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-qmtlq,Uid:9de23c76-70f0-4fa7-aa26-f471719ff480,Namespace:kube-system,Attempt:3,}" Mar 17 17:43:35.941674 containerd[1511]: 
time="2025-03-17T17:43:35.941586811Z" level=error msg="Failed to destroy network for sandbox \"4995d670e3f4ae5f83d2ae9fdfeb42a7c2819afc980a6f20207b35f48e319200\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:43:35.942708 containerd[1511]: time="2025-03-17T17:43:35.942671571Z" level=error msg="encountered an error cleaning up failed sandbox \"4995d670e3f4ae5f83d2ae9fdfeb42a7c2819afc980a6f20207b35f48e319200\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:43:35.944906 containerd[1511]: time="2025-03-17T17:43:35.942816935Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-j8ss7,Uid:8ed5b12a-6d88-43a8-8215-c1e4e9724067,Namespace:calico-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"4995d670e3f4ae5f83d2ae9fdfeb42a7c2819afc980a6f20207b35f48e319200\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:43:35.944982 kubelet[2608]: E0317 17:43:35.943061 2608 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4995d670e3f4ae5f83d2ae9fdfeb42a7c2819afc980a6f20207b35f48e319200\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:43:35.944982 kubelet[2608]: E0317 17:43:35.943127 2608 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"4995d670e3f4ae5f83d2ae9fdfeb42a7c2819afc980a6f20207b35f48e319200\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-j8ss7" Mar 17 17:43:35.944982 kubelet[2608]: E0317 17:43:35.943149 2608 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4995d670e3f4ae5f83d2ae9fdfeb42a7c2819afc980a6f20207b35f48e319200\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-j8ss7" Mar 17 17:43:35.945118 kubelet[2608]: E0317 17:43:35.943208 2608 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-j8ss7_calico-system(8ed5b12a-6d88-43a8-8215-c1e4e9724067)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-j8ss7_calico-system(8ed5b12a-6d88-43a8-8215-c1e4e9724067)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4995d670e3f4ae5f83d2ae9fdfeb42a7c2819afc980a6f20207b35f48e319200\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-j8ss7" podUID="8ed5b12a-6d88-43a8-8215-c1e4e9724067" Mar 17 17:43:36.223986 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-26ba0aa2321805999e40ba2fc413b837bfc85df793e284ed1b86c369f534a215-shm.mount: Deactivated successfully. Mar 17 17:43:36.224147 systemd[1]: run-netns-cni\x2d6ee72e0b\x2db196\x2d8109\x2d72b5\x2d2f97f46c4ddc.mount: Deactivated successfully. 
Mar 17 17:43:36.224285 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-3bd9d52ffb21c34bbcd23a5be2a60801430f0a97bb4c73a345aebc960a93ff4a-shm.mount: Deactivated successfully. Mar 17 17:43:36.759832 containerd[1511]: time="2025-03-17T17:43:36.759664215Z" level=error msg="Failed to destroy network for sandbox \"de0bb5eed656cfb50f81396bff87ac24dd133f83c813548d59ca3102fe00ab61\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:43:36.760756 containerd[1511]: time="2025-03-17T17:43:36.760612178Z" level=error msg="encountered an error cleaning up failed sandbox \"de0bb5eed656cfb50f81396bff87ac24dd133f83c813548d59ca3102fe00ab61\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:43:36.760756 containerd[1511]: time="2025-03-17T17:43:36.760694653Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6cbddc9666-8jnnv,Uid:c27ab8c8-4886-4cfa-ac38-ef82b827b394,Namespace:calico-apiserver,Attempt:3,} failed, error" error="failed to setup network for sandbox \"de0bb5eed656cfb50f81396bff87ac24dd133f83c813548d59ca3102fe00ab61\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:43:36.761117 kubelet[2608]: E0317 17:43:36.761014 2608 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"de0bb5eed656cfb50f81396bff87ac24dd133f83c813548d59ca3102fe00ab61\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has 
mounted /var/lib/calico/" Mar 17 17:43:36.761117 kubelet[2608]: E0317 17:43:36.761088 2608 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"de0bb5eed656cfb50f81396bff87ac24dd133f83c813548d59ca3102fe00ab61\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6cbddc9666-8jnnv" Mar 17 17:43:36.761117 kubelet[2608]: E0317 17:43:36.761107 2608 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"de0bb5eed656cfb50f81396bff87ac24dd133f83c813548d59ca3102fe00ab61\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6cbddc9666-8jnnv" Mar 17 17:43:36.761302 kubelet[2608]: E0317 17:43:36.761162 2608 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6cbddc9666-8jnnv_calico-apiserver(c27ab8c8-4886-4cfa-ac38-ef82b827b394)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6cbddc9666-8jnnv_calico-apiserver(c27ab8c8-4886-4cfa-ac38-ef82b827b394)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"de0bb5eed656cfb50f81396bff87ac24dd133f83c813548d59ca3102fe00ab61\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6cbddc9666-8jnnv" podUID="c27ab8c8-4886-4cfa-ac38-ef82b827b394" Mar 17 17:43:36.761356 containerd[1511]: time="2025-03-17T17:43:36.761191597Z" level=error msg="Failed to destroy network 
for sandbox \"d6067ee3ef3f1c8458c7806e1fd60848edb9e6130cf9c3a2b62cb333064bae52\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:43:36.762190 containerd[1511]: time="2025-03-17T17:43:36.761975151Z" level=error msg="encountered an error cleaning up failed sandbox \"d6067ee3ef3f1c8458c7806e1fd60848edb9e6130cf9c3a2b62cb333064bae52\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:43:36.762190 containerd[1511]: time="2025-03-17T17:43:36.762033281Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6cbddc9666-fgfrs,Uid:b74841bb-3d21-4734-85da-f48ab60f9d98,Namespace:calico-apiserver,Attempt:3,} failed, error" error="failed to setup network for sandbox \"d6067ee3ef3f1c8458c7806e1fd60848edb9e6130cf9c3a2b62cb333064bae52\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:43:36.762407 kubelet[2608]: E0317 17:43:36.762167 2608 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d6067ee3ef3f1c8458c7806e1fd60848edb9e6130cf9c3a2b62cb333064bae52\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:43:36.762407 kubelet[2608]: E0317 17:43:36.762205 2608 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d6067ee3ef3f1c8458c7806e1fd60848edb9e6130cf9c3a2b62cb333064bae52\": plugin 
type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6cbddc9666-fgfrs" Mar 17 17:43:36.762407 kubelet[2608]: E0317 17:43:36.762227 2608 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d6067ee3ef3f1c8458c7806e1fd60848edb9e6130cf9c3a2b62cb333064bae52\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6cbddc9666-fgfrs" Mar 17 17:43:36.762489 kubelet[2608]: E0317 17:43:36.762285 2608 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6cbddc9666-fgfrs_calico-apiserver(b74841bb-3d21-4734-85da-f48ab60f9d98)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6cbddc9666-fgfrs_calico-apiserver(b74841bb-3d21-4734-85da-f48ab60f9d98)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d6067ee3ef3f1c8458c7806e1fd60848edb9e6130cf9c3a2b62cb333064bae52\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6cbddc9666-fgfrs" podUID="b74841bb-3d21-4734-85da-f48ab60f9d98" Mar 17 17:43:36.789222 containerd[1511]: time="2025-03-17T17:43:36.788783194Z" level=error msg="Failed to destroy network for sandbox \"f76b26f678734a68282a2eaf69c606018f2f0eab395982a5f7f178d61349920e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:43:36.789512 
containerd[1511]: time="2025-03-17T17:43:36.789467361Z" level=error msg="encountered an error cleaning up failed sandbox \"f76b26f678734a68282a2eaf69c606018f2f0eab395982a5f7f178d61349920e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:43:36.789588 containerd[1511]: time="2025-03-17T17:43:36.789548974Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-wp522,Uid:92c87a7e-fef7-4c26-ab3b-4e94dca0e582,Namespace:kube-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"f76b26f678734a68282a2eaf69c606018f2f0eab395982a5f7f178d61349920e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:43:36.790282 kubelet[2608]: E0317 17:43:36.789843 2608 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f76b26f678734a68282a2eaf69c606018f2f0eab395982a5f7f178d61349920e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:43:36.790282 kubelet[2608]: E0317 17:43:36.789921 2608 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f76b26f678734a68282a2eaf69c606018f2f0eab395982a5f7f178d61349920e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-wp522" Mar 17 17:43:36.790282 kubelet[2608]: E0317 17:43:36.789943 2608 kuberuntime_manager.go:1168] 
"CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f76b26f678734a68282a2eaf69c606018f2f0eab395982a5f7f178d61349920e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-wp522" Mar 17 17:43:36.790476 kubelet[2608]: E0317 17:43:36.789992 2608 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-wp522_kube-system(92c87a7e-fef7-4c26-ab3b-4e94dca0e582)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-wp522_kube-system(92c87a7e-fef7-4c26-ab3b-4e94dca0e582)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f76b26f678734a68282a2eaf69c606018f2f0eab395982a5f7f178d61349920e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-wp522" podUID="92c87a7e-fef7-4c26-ab3b-4e94dca0e582" Mar 17 17:43:36.792551 containerd[1511]: time="2025-03-17T17:43:36.792491227Z" level=error msg="Failed to destroy network for sandbox \"ad136815a7bcf9fb1ccac4aab59793996e94f29292589abc3eed912ef7bb109d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:43:36.793067 containerd[1511]: time="2025-03-17T17:43:36.793038326Z" level=error msg="encountered an error cleaning up failed sandbox \"ad136815a7bcf9fb1ccac4aab59793996e94f29292589abc3eed912ef7bb109d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and 
has mounted /var/lib/calico/" Mar 17 17:43:36.794402 containerd[1511]: time="2025-03-17T17:43:36.794361334Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-qmtlq,Uid:9de23c76-70f0-4fa7-aa26-f471719ff480,Namespace:kube-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"ad136815a7bcf9fb1ccac4aab59793996e94f29292589abc3eed912ef7bb109d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:43:36.794994 kubelet[2608]: E0317 17:43:36.794939 2608 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ad136815a7bcf9fb1ccac4aab59793996e94f29292589abc3eed912ef7bb109d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:43:36.795049 kubelet[2608]: E0317 17:43:36.795033 2608 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ad136815a7bcf9fb1ccac4aab59793996e94f29292589abc3eed912ef7bb109d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-qmtlq" Mar 17 17:43:36.795078 kubelet[2608]: E0317 17:43:36.795057 2608 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ad136815a7bcf9fb1ccac4aab59793996e94f29292589abc3eed912ef7bb109d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-qmtlq" 
Mar 17 17:43:36.795155 kubelet[2608]: E0317 17:43:36.795117 2608 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-qmtlq_kube-system(9de23c76-70f0-4fa7-aa26-f471719ff480)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-qmtlq_kube-system(9de23c76-70f0-4fa7-aa26-f471719ff480)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ad136815a7bcf9fb1ccac4aab59793996e94f29292589abc3eed912ef7bb109d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-qmtlq" podUID="9de23c76-70f0-4fa7-aa26-f471719ff480" Mar 17 17:43:36.799020 containerd[1511]: time="2025-03-17T17:43:36.798989818Z" level=error msg="Failed to destroy network for sandbox \"a0631bf50bf3a143732dff262d9b57d7825ff44db988989892670e8676df101b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:43:36.799488 containerd[1511]: time="2025-03-17T17:43:36.799454281Z" level=error msg="encountered an error cleaning up failed sandbox \"a0631bf50bf3a143732dff262d9b57d7825ff44db988989892670e8676df101b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:43:36.799662 containerd[1511]: time="2025-03-17T17:43:36.799500589Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-86f7c466bd-bf4pr,Uid:bb999740-3eb7-4d5d-b75d-a6ed26b4fcf7,Namespace:calico-system,Attempt:3,} failed, error" error="failed to setup network for sandbox 
\"a0631bf50bf3a143732dff262d9b57d7825ff44db988989892670e8676df101b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:43:36.799732 kubelet[2608]: E0317 17:43:36.799659 2608 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a0631bf50bf3a143732dff262d9b57d7825ff44db988989892670e8676df101b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:43:36.799732 kubelet[2608]: E0317 17:43:36.799711 2608 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a0631bf50bf3a143732dff262d9b57d7825ff44db988989892670e8676df101b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-86f7c466bd-bf4pr" Mar 17 17:43:36.799809 kubelet[2608]: E0317 17:43:36.799731 2608 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a0631bf50bf3a143732dff262d9b57d7825ff44db988989892670e8676df101b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-86f7c466bd-bf4pr" Mar 17 17:43:36.799809 kubelet[2608]: E0317 17:43:36.799780 2608 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-86f7c466bd-bf4pr_calico-system(bb999740-3eb7-4d5d-b75d-a6ed26b4fcf7)\" with CreatePodSandboxError: \"Failed to 
create sandbox for pod \\\"calico-kube-controllers-86f7c466bd-bf4pr_calico-system(bb999740-3eb7-4d5d-b75d-a6ed26b4fcf7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a0631bf50bf3a143732dff262d9b57d7825ff44db988989892670e8676df101b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-86f7c466bd-bf4pr" podUID="bb999740-3eb7-4d5d-b75d-a6ed26b4fcf7" Mar 17 17:43:36.876673 kubelet[2608]: I0317 17:43:36.876627 2608 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a0631bf50bf3a143732dff262d9b57d7825ff44db988989892670e8676df101b" Mar 17 17:43:36.878144 containerd[1511]: time="2025-03-17T17:43:36.878104566Z" level=info msg="StopPodSandbox for \"a0631bf50bf3a143732dff262d9b57d7825ff44db988989892670e8676df101b\"" Mar 17 17:43:36.879025 containerd[1511]: time="2025-03-17T17:43:36.879001081Z" level=info msg="Ensure that sandbox a0631bf50bf3a143732dff262d9b57d7825ff44db988989892670e8676df101b in task-service has been cleanup successfully" Mar 17 17:43:36.879440 containerd[1511]: time="2025-03-17T17:43:36.879420600Z" level=info msg="TearDown network for sandbox \"a0631bf50bf3a143732dff262d9b57d7825ff44db988989892670e8676df101b\" successfully" Mar 17 17:43:36.879476 containerd[1511]: time="2025-03-17T17:43:36.879438203Z" level=info msg="StopPodSandbox for \"a0631bf50bf3a143732dff262d9b57d7825ff44db988989892670e8676df101b\" returns successfully" Mar 17 17:43:36.880619 containerd[1511]: time="2025-03-17T17:43:36.880574700Z" level=info msg="StopPodSandbox for \"9247361ac2ade69977479b8eda935f31e0c380b93fd00fee4265ac9e0a0ed408\"" Mar 17 17:43:36.880783 containerd[1511]: time="2025-03-17T17:43:36.880685489Z" level=info msg="TearDown network for sandbox \"9247361ac2ade69977479b8eda935f31e0c380b93fd00fee4265ac9e0a0ed408\" successfully" Mar 17 
17:43:36.880783 containerd[1511]: time="2025-03-17T17:43:36.880700226Z" level=info msg="StopPodSandbox for \"9247361ac2ade69977479b8eda935f31e0c380b93fd00fee4265ac9e0a0ed408\" returns successfully" Mar 17 17:43:36.882031 containerd[1511]: time="2025-03-17T17:43:36.881176161Z" level=info msg="StopPodSandbox for \"c727faf86979f098f3854e7a292376485b8fdcffd8e4a361726d0eb4ed6c4541\"" Mar 17 17:43:36.882031 containerd[1511]: time="2025-03-17T17:43:36.881318860Z" level=info msg="TearDown network for sandbox \"c727faf86979f098f3854e7a292376485b8fdcffd8e4a361726d0eb4ed6c4541\" successfully" Mar 17 17:43:36.882031 containerd[1511]: time="2025-03-17T17:43:36.881335000Z" level=info msg="StopPodSandbox for \"c727faf86979f098f3854e7a292376485b8fdcffd8e4a361726d0eb4ed6c4541\" returns successfully" Mar 17 17:43:36.882031 containerd[1511]: time="2025-03-17T17:43:36.881767463Z" level=info msg="StopPodSandbox for \"22d2b9a804aff601d7932c4460691afa93c66169fac5c4cfe51532162fb0b3e0\"" Mar 17 17:43:36.882168 containerd[1511]: time="2025-03-17T17:43:36.882074821Z" level=info msg="TearDown network for sandbox \"22d2b9a804aff601d7932c4460691afa93c66169fac5c4cfe51532162fb0b3e0\" successfully" Mar 17 17:43:36.882168 containerd[1511]: time="2025-03-17T17:43:36.882086243Z" level=info msg="StopPodSandbox for \"22d2b9a804aff601d7932c4460691afa93c66169fac5c4cfe51532162fb0b3e0\" returns successfully" Mar 17 17:43:36.882652 kubelet[2608]: I0317 17:43:36.882627 2608 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f76b26f678734a68282a2eaf69c606018f2f0eab395982a5f7f178d61349920e" Mar 17 17:43:36.883376 containerd[1511]: time="2025-03-17T17:43:36.883311147Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-86f7c466bd-bf4pr,Uid:bb999740-3eb7-4d5d-b75d-a6ed26b4fcf7,Namespace:calico-system,Attempt:4,}" Mar 17 17:43:36.884511 containerd[1511]: time="2025-03-17T17:43:36.883964786Z" level=info msg="StopPodSandbox for 
\"f76b26f678734a68282a2eaf69c606018f2f0eab395982a5f7f178d61349920e\"" Mar 17 17:43:36.884511 containerd[1511]: time="2025-03-17T17:43:36.884314483Z" level=info msg="Ensure that sandbox f76b26f678734a68282a2eaf69c606018f2f0eab395982a5f7f178d61349920e in task-service has been cleanup successfully" Mar 17 17:43:36.884618 containerd[1511]: time="2025-03-17T17:43:36.884585713Z" level=info msg="TearDown network for sandbox \"f76b26f678734a68282a2eaf69c606018f2f0eab395982a5f7f178d61349920e\" successfully" Mar 17 17:43:36.884618 containerd[1511]: time="2025-03-17T17:43:36.884612644Z" level=info msg="StopPodSandbox for \"f76b26f678734a68282a2eaf69c606018f2f0eab395982a5f7f178d61349920e\" returns successfully" Mar 17 17:43:36.885689 containerd[1511]: time="2025-03-17T17:43:36.885114698Z" level=info msg="StopPodSandbox for \"3bd9d52ffb21c34bbcd23a5be2a60801430f0a97bb4c73a345aebc960a93ff4a\"" Mar 17 17:43:36.885689 containerd[1511]: time="2025-03-17T17:43:36.885212632Z" level=info msg="TearDown network for sandbox \"3bd9d52ffb21c34bbcd23a5be2a60801430f0a97bb4c73a345aebc960a93ff4a\" successfully" Mar 17 17:43:36.885689 containerd[1511]: time="2025-03-17T17:43:36.885226528Z" level=info msg="StopPodSandbox for \"3bd9d52ffb21c34bbcd23a5be2a60801430f0a97bb4c73a345aebc960a93ff4a\" returns successfully" Mar 17 17:43:36.886161 containerd[1511]: time="2025-03-17T17:43:36.886137832Z" level=info msg="StopPodSandbox for \"5e63d78a3d354e79550c8cdff0a8e91e3e009993d305083208f0e5ce79b3b5d3\"" Mar 17 17:43:36.886265 containerd[1511]: time="2025-03-17T17:43:36.886225237Z" level=info msg="TearDown network for sandbox \"5e63d78a3d354e79550c8cdff0a8e91e3e009993d305083208f0e5ce79b3b5d3\" successfully" Mar 17 17:43:36.886265 containerd[1511]: time="2025-03-17T17:43:36.886256155Z" level=info msg="StopPodSandbox for \"5e63d78a3d354e79550c8cdff0a8e91e3e009993d305083208f0e5ce79b3b5d3\" returns successfully" Mar 17 17:43:36.887029 containerd[1511]: time="2025-03-17T17:43:36.886652380Z" level=info 
msg="StopPodSandbox for \"605a348218e0107d12ff4cea59eb6ff512155f4808ce20dbf8845e1c2ca94a38\"" Mar 17 17:43:36.887029 containerd[1511]: time="2025-03-17T17:43:36.886731498Z" level=info msg="TearDown network for sandbox \"605a348218e0107d12ff4cea59eb6ff512155f4808ce20dbf8845e1c2ca94a38\" successfully" Mar 17 17:43:36.887029 containerd[1511]: time="2025-03-17T17:43:36.886741327Z" level=info msg="StopPodSandbox for \"605a348218e0107d12ff4cea59eb6ff512155f4808ce20dbf8845e1c2ca94a38\" returns successfully" Mar 17 17:43:36.888974 kubelet[2608]: E0317 17:43:36.887294 2608 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:43:36.889382 containerd[1511]: time="2025-03-17T17:43:36.887547723Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-wp522,Uid:92c87a7e-fef7-4c26-ab3b-4e94dca0e582,Namespace:kube-system,Attempt:4,}" Mar 17 17:43:36.923691 kubelet[2608]: I0317 17:43:36.923642 2608 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4995d670e3f4ae5f83d2ae9fdfeb42a7c2819afc980a6f20207b35f48e319200" Mar 17 17:43:36.925489 containerd[1511]: time="2025-03-17T17:43:36.925451803Z" level=info msg="StopPodSandbox for \"4995d670e3f4ae5f83d2ae9fdfeb42a7c2819afc980a6f20207b35f48e319200\"" Mar 17 17:43:36.925750 containerd[1511]: time="2025-03-17T17:43:36.925676486Z" level=info msg="Ensure that sandbox 4995d670e3f4ae5f83d2ae9fdfeb42a7c2819afc980a6f20207b35f48e319200 in task-service has been cleanup successfully" Mar 17 17:43:36.925999 containerd[1511]: time="2025-03-17T17:43:36.925963405Z" level=info msg="TearDown network for sandbox \"4995d670e3f4ae5f83d2ae9fdfeb42a7c2819afc980a6f20207b35f48e319200\" successfully" Mar 17 17:43:36.926053 containerd[1511]: time="2025-03-17T17:43:36.925996737Z" level=info msg="StopPodSandbox for \"4995d670e3f4ae5f83d2ae9fdfeb42a7c2819afc980a6f20207b35f48e319200\" 
returns successfully" Mar 17 17:43:36.926951 containerd[1511]: time="2025-03-17T17:43:36.926767807Z" level=info msg="StopPodSandbox for \"26ba0aa2321805999e40ba2fc413b837bfc85df793e284ed1b86c369f534a215\"" Mar 17 17:43:36.928091 containerd[1511]: time="2025-03-17T17:43:36.928053225Z" level=info msg="TearDown network for sandbox \"26ba0aa2321805999e40ba2fc413b837bfc85df793e284ed1b86c369f534a215\" successfully" Mar 17 17:43:36.928091 containerd[1511]: time="2025-03-17T17:43:36.928084553Z" level=info msg="StopPodSandbox for \"26ba0aa2321805999e40ba2fc413b837bfc85df793e284ed1b86c369f534a215\" returns successfully" Mar 17 17:43:36.929574 containerd[1511]: time="2025-03-17T17:43:36.929550451Z" level=info msg="StopPodSandbox for \"b9037660650fef2882c30e4a390d90ea5d95ff351ca1fe4fef4c05e1892a40bf\"" Mar 17 17:43:36.929654 containerd[1511]: time="2025-03-17T17:43:36.929637534Z" level=info msg="TearDown network for sandbox \"b9037660650fef2882c30e4a390d90ea5d95ff351ca1fe4fef4c05e1892a40bf\" successfully" Mar 17 17:43:36.929654 containerd[1511]: time="2025-03-17T17:43:36.929651320Z" level=info msg="StopPodSandbox for \"b9037660650fef2882c30e4a390d90ea5d95ff351ca1fe4fef4c05e1892a40bf\" returns successfully" Mar 17 17:43:36.931624 containerd[1511]: time="2025-03-17T17:43:36.931392554Z" level=info msg="StopPodSandbox for \"b3416c553b8ee2ddd7115ce7b04c86d519dee7b4f6a9ee2e1a0c06779a132290\"" Mar 17 17:43:36.931624 containerd[1511]: time="2025-03-17T17:43:36.931540613Z" level=info msg="TearDown network for sandbox \"b3416c553b8ee2ddd7115ce7b04c86d519dee7b4f6a9ee2e1a0c06779a132290\" successfully" Mar 17 17:43:36.931624 containerd[1511]: time="2025-03-17T17:43:36.931552685Z" level=info msg="StopPodSandbox for \"b3416c553b8ee2ddd7115ce7b04c86d519dee7b4f6a9ee2e1a0c06779a132290\" returns successfully" Mar 17 17:43:36.931722 kubelet[2608]: I0317 17:43:36.931705 2608 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="de0bb5eed656cfb50f81396bff87ac24dd133f83c813548d59ca3102fe00ab61" Mar 17 17:43:36.932950 containerd[1511]: time="2025-03-17T17:43:36.932922101Z" level=info msg="StopPodSandbox for \"de0bb5eed656cfb50f81396bff87ac24dd133f83c813548d59ca3102fe00ab61\"" Mar 17 17:43:36.933563 containerd[1511]: time="2025-03-17T17:43:36.933527529Z" level=info msg="Ensure that sandbox de0bb5eed656cfb50f81396bff87ac24dd133f83c813548d59ca3102fe00ab61 in task-service has been cleanup successfully" Mar 17 17:43:36.933851 containerd[1511]: time="2025-03-17T17:43:36.933829898Z" level=info msg="TearDown network for sandbox \"de0bb5eed656cfb50f81396bff87ac24dd133f83c813548d59ca3102fe00ab61\" successfully" Mar 17 17:43:36.933955 containerd[1511]: time="2025-03-17T17:43:36.933935387Z" level=info msg="StopPodSandbox for \"de0bb5eed656cfb50f81396bff87ac24dd133f83c813548d59ca3102fe00ab61\" returns successfully" Mar 17 17:43:36.934897 containerd[1511]: time="2025-03-17T17:43:36.934841079Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-j8ss7,Uid:8ed5b12a-6d88-43a8-8215-c1e4e9724067,Namespace:calico-system,Attempt:4,}" Mar 17 17:43:36.936137 containerd[1511]: time="2025-03-17T17:43:36.936110116Z" level=info msg="StopPodSandbox for \"505868a5c076596de5c7575902f86b043b6cdab84fc363153c09421557adfba8\"" Mar 17 17:43:36.936250 containerd[1511]: time="2025-03-17T17:43:36.936220904Z" level=info msg="TearDown network for sandbox \"505868a5c076596de5c7575902f86b043b6cdab84fc363153c09421557adfba8\" successfully" Mar 17 17:43:36.936284 containerd[1511]: time="2025-03-17T17:43:36.936248747Z" level=info msg="StopPodSandbox for \"505868a5c076596de5c7575902f86b043b6cdab84fc363153c09421557adfba8\" returns successfully" Mar 17 17:43:36.938054 containerd[1511]: time="2025-03-17T17:43:36.937850058Z" level=info msg="StopPodSandbox for \"e1822912a2888b680ea7f9bf3d6dfd91c2c6eda94d136c98ac1343d40214f70b\"" Mar 17 17:43:36.938054 containerd[1511]: time="2025-03-17T17:43:36.937975474Z" 
level=info msg="TearDown network for sandbox \"e1822912a2888b680ea7f9bf3d6dfd91c2c6eda94d136c98ac1343d40214f70b\" successfully" Mar 17 17:43:36.938054 containerd[1511]: time="2025-03-17T17:43:36.937989300Z" level=info msg="StopPodSandbox for \"e1822912a2888b680ea7f9bf3d6dfd91c2c6eda94d136c98ac1343d40214f70b\" returns successfully" Mar 17 17:43:36.938559 containerd[1511]: time="2025-03-17T17:43:36.938535116Z" level=info msg="StopPodSandbox for \"4ce107ba3ec9f31d7b3420825039f0a23e3438346875ccd104dc7f519f537446\"" Mar 17 17:43:36.938747 containerd[1511]: time="2025-03-17T17:43:36.938729011Z" level=info msg="TearDown network for sandbox \"4ce107ba3ec9f31d7b3420825039f0a23e3438346875ccd104dc7f519f537446\" successfully" Mar 17 17:43:36.938809 containerd[1511]: time="2025-03-17T17:43:36.938797290Z" level=info msg="StopPodSandbox for \"4ce107ba3ec9f31d7b3420825039f0a23e3438346875ccd104dc7f519f537446\" returns successfully" Mar 17 17:43:36.939684 kubelet[2608]: I0317 17:43:36.939644 2608 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d6067ee3ef3f1c8458c7806e1fd60848edb9e6130cf9c3a2b62cb333064bae52" Mar 17 17:43:36.940155 containerd[1511]: time="2025-03-17T17:43:36.940133803Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6cbddc9666-8jnnv,Uid:c27ab8c8-4886-4cfa-ac38-ef82b827b394,Namespace:calico-apiserver,Attempt:4,}" Mar 17 17:43:36.941767 containerd[1511]: time="2025-03-17T17:43:36.940677705Z" level=info msg="StopPodSandbox for \"d6067ee3ef3f1c8458c7806e1fd60848edb9e6130cf9c3a2b62cb333064bae52\"" Mar 17 17:43:36.941961 containerd[1511]: time="2025-03-17T17:43:36.941932555Z" level=info msg="Ensure that sandbox d6067ee3ef3f1c8458c7806e1fd60848edb9e6130cf9c3a2b62cb333064bae52 in task-service has been cleanup successfully" Mar 17 17:43:36.942174 containerd[1511]: time="2025-03-17T17:43:36.942143121Z" level=info msg="TearDown network for sandbox 
\"d6067ee3ef3f1c8458c7806e1fd60848edb9e6130cf9c3a2b62cb333064bae52\" successfully" Mar 17 17:43:36.942257 containerd[1511]: time="2025-03-17T17:43:36.942179289Z" level=info msg="StopPodSandbox for \"d6067ee3ef3f1c8458c7806e1fd60848edb9e6130cf9c3a2b62cb333064bae52\" returns successfully" Mar 17 17:43:36.943659 containerd[1511]: time="2025-03-17T17:43:36.943623606Z" level=info msg="StopPodSandbox for \"af73a63aa645f1d51633cc194edb5fce001c89b5bb20b391b65014c31e4f0c4f\"" Mar 17 17:43:36.943723 containerd[1511]: time="2025-03-17T17:43:36.943702413Z" level=info msg="TearDown network for sandbox \"af73a63aa645f1d51633cc194edb5fce001c89b5bb20b391b65014c31e4f0c4f\" successfully" Mar 17 17:43:36.943723 containerd[1511]: time="2025-03-17T17:43:36.943712392Z" level=info msg="StopPodSandbox for \"af73a63aa645f1d51633cc194edb5fce001c89b5bb20b391b65014c31e4f0c4f\" returns successfully" Mar 17 17:43:36.948656 containerd[1511]: time="2025-03-17T17:43:36.948615533Z" level=info msg="StopPodSandbox for \"2cb67c065fae8b5c1bdba86659bed6455408568df13f798feb182fef9902a6fd\"" Mar 17 17:43:36.948888 kubelet[2608]: I0317 17:43:36.948775 2608 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ad136815a7bcf9fb1ccac4aab59793996e94f29292589abc3eed912ef7bb109d" Mar 17 17:43:36.950259 containerd[1511]: time="2025-03-17T17:43:36.950174605Z" level=info msg="StopPodSandbox for \"ad136815a7bcf9fb1ccac4aab59793996e94f29292589abc3eed912ef7bb109d\"" Mar 17 17:43:36.950552 containerd[1511]: time="2025-03-17T17:43:36.950523310Z" level=info msg="Ensure that sandbox ad136815a7bcf9fb1ccac4aab59793996e94f29292589abc3eed912ef7bb109d in task-service has been cleanup successfully" Mar 17 17:43:36.950589 containerd[1511]: time="2025-03-17T17:43:36.950562665Z" level=info msg="TearDown network for sandbox \"2cb67c065fae8b5c1bdba86659bed6455408568df13f798feb182fef9902a6fd\" successfully" Mar 17 17:43:36.950589 containerd[1511]: time="2025-03-17T17:43:36.950576120Z" level=info 
msg="StopPodSandbox for \"2cb67c065fae8b5c1bdba86659bed6455408568df13f798feb182fef9902a6fd\" returns successfully" Mar 17 17:43:36.950933 containerd[1511]: time="2025-03-17T17:43:36.950900730Z" level=info msg="TearDown network for sandbox \"ad136815a7bcf9fb1ccac4aab59793996e94f29292589abc3eed912ef7bb109d\" successfully" Mar 17 17:43:36.950971 containerd[1511]: time="2025-03-17T17:43:36.950922231Z" level=info msg="StopPodSandbox for \"ad136815a7bcf9fb1ccac4aab59793996e94f29292589abc3eed912ef7bb109d\" returns successfully" Mar 17 17:43:36.951776 containerd[1511]: time="2025-03-17T17:43:36.951581971Z" level=info msg="StopPodSandbox for \"f00b51c06514b707b4929dbc9613099efe7622f3f5df69b8af03440ae18de24e\"" Mar 17 17:43:36.951776 containerd[1511]: time="2025-03-17T17:43:36.951669185Z" level=info msg="StopPodSandbox for \"a76d3257dd8be43a7cc3bd31473414beabbebb92e09df84988345f517c7acdb6\"" Mar 17 17:43:36.951776 containerd[1511]: time="2025-03-17T17:43:36.951724409Z" level=info msg="TearDown network for sandbox \"f00b51c06514b707b4929dbc9613099efe7622f3f5df69b8af03440ae18de24e\" successfully" Mar 17 17:43:36.951776 containerd[1511]: time="2025-03-17T17:43:36.951764915Z" level=info msg="StopPodSandbox for \"f00b51c06514b707b4929dbc9613099efe7622f3f5df69b8af03440ae18de24e\" returns successfully" Mar 17 17:43:36.952134 containerd[1511]: time="2025-03-17T17:43:36.951784182Z" level=info msg="TearDown network for sandbox \"a76d3257dd8be43a7cc3bd31473414beabbebb92e09df84988345f517c7acdb6\" successfully" Mar 17 17:43:36.952134 containerd[1511]: time="2025-03-17T17:43:36.951807385Z" level=info msg="StopPodSandbox for \"a76d3257dd8be43a7cc3bd31473414beabbebb92e09df84988345f517c7acdb6\" returns successfully" Mar 17 17:43:36.952302 containerd[1511]: time="2025-03-17T17:43:36.952223367Z" level=info msg="StopPodSandbox for \"a6fd5ae8d8c37d968bd04700971eb2886495554432a091f2befb520f8b34c501\"" Mar 17 17:43:36.952380 containerd[1511]: time="2025-03-17T17:43:36.952347070Z" level=info 
msg="TearDown network for sandbox \"a6fd5ae8d8c37d968bd04700971eb2886495554432a091f2befb520f8b34c501\" successfully"
Mar 17 17:43:36.952380 containerd[1511]: time="2025-03-17T17:43:36.952372778Z" level=info msg="StopPodSandbox for \"a6fd5ae8d8c37d968bd04700971eb2886495554432a091f2befb520f8b34c501\" returns successfully"
Mar 17 17:43:36.952549 containerd[1511]: time="2025-03-17T17:43:36.952520216Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6cbddc9666-fgfrs,Uid:b74841bb-3d21-4734-85da-f48ab60f9d98,Namespace:calico-apiserver,Attempt:4,}"
Mar 17 17:43:36.953036 containerd[1511]: time="2025-03-17T17:43:36.952841380Z" level=info msg="StopPodSandbox for \"c0357f0524099885f06d4a7418831e7579b998c419311c26aa34d8779c42dc52\""
Mar 17 17:43:36.953036 containerd[1511]: time="2025-03-17T17:43:36.953012120Z" level=info msg="TearDown network for sandbox \"c0357f0524099885f06d4a7418831e7579b998c419311c26aa34d8779c42dc52\" successfully"
Mar 17 17:43:36.953036 containerd[1511]: time="2025-03-17T17:43:36.953023381Z" level=info msg="StopPodSandbox for \"c0357f0524099885f06d4a7418831e7579b998c419311c26aa34d8779c42dc52\" returns successfully"
Mar 17 17:43:36.953234 kubelet[2608]: E0317 17:43:36.953208 2608 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 17:43:36.954280 containerd[1511]: time="2025-03-17T17:43:36.954255479Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-qmtlq,Uid:9de23c76-70f0-4fa7-aa26-f471719ff480,Namespace:kube-system,Attempt:4,}"
Mar 17 17:43:37.037487 containerd[1511]: time="2025-03-17T17:43:37.034287510Z" level=error msg="Failed to destroy network for sandbox \"46d27bee2aac9f8ea5afa69e476f1d7708c4be9375e81449399c6da9d57409f0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 17 17:43:37.037487 containerd[1511]: time="2025-03-17T17:43:37.034976886Z" level=error msg="encountered an error cleaning up failed sandbox \"46d27bee2aac9f8ea5afa69e476f1d7708c4be9375e81449399c6da9d57409f0\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 17 17:43:37.037487 containerd[1511]: time="2025-03-17T17:43:37.035028774Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-86f7c466bd-bf4pr,Uid:bb999740-3eb7-4d5d-b75d-a6ed26b4fcf7,Namespace:calico-system,Attempt:4,} failed, error" error="failed to setup network for sandbox \"46d27bee2aac9f8ea5afa69e476f1d7708c4be9375e81449399c6da9d57409f0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 17 17:43:37.037487 containerd[1511]: time="2025-03-17T17:43:37.035727738Z" level=error msg="Failed to destroy network for sandbox \"13120ec95c9d550030350e8d113fa1530bf320f5057a27110b89fb5064c968f0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 17 17:43:37.037702 kubelet[2608]: E0317 17:43:37.035378 2608 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"46d27bee2aac9f8ea5afa69e476f1d7708c4be9375e81449399c6da9d57409f0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 17 17:43:37.037702 kubelet[2608]: E0317 17:43:37.035445 2608 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"46d27bee2aac9f8ea5afa69e476f1d7708c4be9375e81449399c6da9d57409f0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-86f7c466bd-bf4pr"
Mar 17 17:43:37.037702 kubelet[2608]: E0317 17:43:37.035470 2608 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"46d27bee2aac9f8ea5afa69e476f1d7708c4be9375e81449399c6da9d57409f0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-86f7c466bd-bf4pr"
Mar 17 17:43:37.037790 kubelet[2608]: E0317 17:43:37.035515 2608 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-86f7c466bd-bf4pr_calico-system(bb999740-3eb7-4d5d-b75d-a6ed26b4fcf7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-86f7c466bd-bf4pr_calico-system(bb999740-3eb7-4d5d-b75d-a6ed26b4fcf7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"46d27bee2aac9f8ea5afa69e476f1d7708c4be9375e81449399c6da9d57409f0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-86f7c466bd-bf4pr" podUID="bb999740-3eb7-4d5d-b75d-a6ed26b4fcf7"
Mar 17 17:43:37.038624 containerd[1511]: time="2025-03-17T17:43:37.037856961Z" level=error msg="encountered an error cleaning up failed sandbox \"13120ec95c9d550030350e8d113fa1530bf320f5057a27110b89fb5064c968f0\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 17 17:43:37.038624 containerd[1511]: time="2025-03-17T17:43:37.038171553Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-wp522,Uid:92c87a7e-fef7-4c26-ab3b-4e94dca0e582,Namespace:kube-system,Attempt:4,} failed, error" error="failed to setup network for sandbox \"13120ec95c9d550030350e8d113fa1530bf320f5057a27110b89fb5064c968f0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 17 17:43:37.038690 kubelet[2608]: E0317 17:43:37.038367 2608 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"13120ec95c9d550030350e8d113fa1530bf320f5057a27110b89fb5064c968f0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 17 17:43:37.038690 kubelet[2608]: E0317 17:43:37.038401 2608 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"13120ec95c9d550030350e8d113fa1530bf320f5057a27110b89fb5064c968f0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-wp522"
Mar 17 17:43:37.038690 kubelet[2608]: E0317 17:43:37.038418 2608 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"13120ec95c9d550030350e8d113fa1530bf320f5057a27110b89fb5064c968f0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-wp522"
Mar 17 17:43:37.038769 kubelet[2608]: E0317 17:43:37.038445 2608 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-wp522_kube-system(92c87a7e-fef7-4c26-ab3b-4e94dca0e582)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-wp522_kube-system(92c87a7e-fef7-4c26-ab3b-4e94dca0e582)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"13120ec95c9d550030350e8d113fa1530bf320f5057a27110b89fb5064c968f0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-wp522" podUID="92c87a7e-fef7-4c26-ab3b-4e94dca0e582"
Mar 17 17:43:37.088234 containerd[1511]: time="2025-03-17T17:43:37.088182444Z" level=error msg="Failed to destroy network for sandbox \"dadd7feac938af28c5af276dc4bc8b7db7afc807fcad2a12367eab948774b513\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 17 17:43:37.088910 containerd[1511]: time="2025-03-17T17:43:37.088867742Z" level=error msg="encountered an error cleaning up failed sandbox \"dadd7feac938af28c5af276dc4bc8b7db7afc807fcad2a12367eab948774b513\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 17 17:43:37.088984 containerd[1511]: time="2025-03-17T17:43:37.088936330Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6cbddc9666-8jnnv,Uid:c27ab8c8-4886-4cfa-ac38-ef82b827b394,Namespace:calico-apiserver,Attempt:4,} failed, error" error="failed to setup network for sandbox \"dadd7feac938af28c5af276dc4bc8b7db7afc807fcad2a12367eab948774b513\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 17 17:43:37.089423 kubelet[2608]: E0317 17:43:37.089382 2608 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dadd7feac938af28c5af276dc4bc8b7db7afc807fcad2a12367eab948774b513\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 17 17:43:37.089548 kubelet[2608]: E0317 17:43:37.089455 2608 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dadd7feac938af28c5af276dc4bc8b7db7afc807fcad2a12367eab948774b513\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6cbddc9666-8jnnv"
Mar 17 17:43:37.089548 kubelet[2608]: E0317 17:43:37.089475 2608 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dadd7feac938af28c5af276dc4bc8b7db7afc807fcad2a12367eab948774b513\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6cbddc9666-8jnnv"
Mar 17 17:43:37.089548 kubelet[2608]: E0317 17:43:37.089516 2608 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6cbddc9666-8jnnv_calico-apiserver(c27ab8c8-4886-4cfa-ac38-ef82b827b394)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6cbddc9666-8jnnv_calico-apiserver(c27ab8c8-4886-4cfa-ac38-ef82b827b394)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"dadd7feac938af28c5af276dc4bc8b7db7afc807fcad2a12367eab948774b513\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6cbddc9666-8jnnv" podUID="c27ab8c8-4886-4cfa-ac38-ef82b827b394"
Mar 17 17:43:37.093970 containerd[1511]: time="2025-03-17T17:43:37.093925412Z" level=error msg="Failed to destroy network for sandbox \"3c32fc94639c6ccbeaf0ea399aeb844a6b759e7294e28e5ddf561deaf656ac0a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 17 17:43:37.094461 containerd[1511]: time="2025-03-17T17:43:37.094410163Z" level=error msg="encountered an error cleaning up failed sandbox \"3c32fc94639c6ccbeaf0ea399aeb844a6b759e7294e28e5ddf561deaf656ac0a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 17 17:43:37.094642 containerd[1511]: time="2025-03-17T17:43:37.094570835Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-j8ss7,Uid:8ed5b12a-6d88-43a8-8215-c1e4e9724067,Namespace:calico-system,Attempt:4,} failed, error" error="failed to setup network for sandbox \"3c32fc94639c6ccbeaf0ea399aeb844a6b759e7294e28e5ddf561deaf656ac0a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 17 17:43:37.094766 kubelet[2608]: E0317 17:43:37.094733 2608 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3c32fc94639c6ccbeaf0ea399aeb844a6b759e7294e28e5ddf561deaf656ac0a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 17 17:43:37.094841 kubelet[2608]: E0317 17:43:37.094785 2608 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3c32fc94639c6ccbeaf0ea399aeb844a6b759e7294e28e5ddf561deaf656ac0a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-j8ss7"
Mar 17 17:43:37.094841 kubelet[2608]: E0317 17:43:37.094804 2608 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3c32fc94639c6ccbeaf0ea399aeb844a6b759e7294e28e5ddf561deaf656ac0a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-j8ss7"
Mar 17 17:43:37.094897 kubelet[2608]: E0317 17:43:37.094837 2608 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-j8ss7_calico-system(8ed5b12a-6d88-43a8-8215-c1e4e9724067)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-j8ss7_calico-system(8ed5b12a-6d88-43a8-8215-c1e4e9724067)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3c32fc94639c6ccbeaf0ea399aeb844a6b759e7294e28e5ddf561deaf656ac0a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-j8ss7" podUID="8ed5b12a-6d88-43a8-8215-c1e4e9724067"
Mar 17 17:43:37.100505 containerd[1511]: time="2025-03-17T17:43:37.100379436Z" level=error msg="Failed to destroy network for sandbox \"2b9c986ecac5a820435cbfa9e312ad5f5736fd10e27620edeccecc76eb005ab1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 17 17:43:37.100922 containerd[1511]: time="2025-03-17T17:43:37.100863867Z" level=error msg="encountered an error cleaning up failed sandbox \"2b9c986ecac5a820435cbfa9e312ad5f5736fd10e27620edeccecc76eb005ab1\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 17 17:43:37.100984 containerd[1511]: time="2025-03-17T17:43:37.100957263Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6cbddc9666-fgfrs,Uid:b74841bb-3d21-4734-85da-f48ab60f9d98,Namespace:calico-apiserver,Attempt:4,} failed, error" error="failed to setup network for sandbox \"2b9c986ecac5a820435cbfa9e312ad5f5736fd10e27620edeccecc76eb005ab1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 17 17:43:37.101839 kubelet[2608]: E0317 17:43:37.101302 2608 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2b9c986ecac5a820435cbfa9e312ad5f5736fd10e27620edeccecc76eb005ab1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 17 17:43:37.101839 kubelet[2608]: E0317 17:43:37.101421 2608 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2b9c986ecac5a820435cbfa9e312ad5f5736fd10e27620edeccecc76eb005ab1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6cbddc9666-fgfrs"
Mar 17 17:43:37.101839 kubelet[2608]: E0317 17:43:37.101460 2608 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2b9c986ecac5a820435cbfa9e312ad5f5736fd10e27620edeccecc76eb005ab1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6cbddc9666-fgfrs"
Mar 17 17:43:37.102076 kubelet[2608]: E0317 17:43:37.101516 2608 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6cbddc9666-fgfrs_calico-apiserver(b74841bb-3d21-4734-85da-f48ab60f9d98)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6cbddc9666-fgfrs_calico-apiserver(b74841bb-3d21-4734-85da-f48ab60f9d98)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2b9c986ecac5a820435cbfa9e312ad5f5736fd10e27620edeccecc76eb005ab1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6cbddc9666-fgfrs" podUID="b74841bb-3d21-4734-85da-f48ab60f9d98"
Mar 17 17:43:37.113220 containerd[1511]: time="2025-03-17T17:43:37.113164395Z" level=error msg="Failed to destroy network for sandbox \"f5f9f9783e0b5b866f4eaa0562e32aef18c69ba4d0cce64b35940cd8c51f233a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 17 17:43:37.113763 containerd[1511]: time="2025-03-17T17:43:37.113731701Z" level=error msg="encountered an error cleaning up failed sandbox \"f5f9f9783e0b5b866f4eaa0562e32aef18c69ba4d0cce64b35940cd8c51f233a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 17 17:43:37.113839 containerd[1511]: time="2025-03-17T17:43:37.113793317Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-qmtlq,Uid:9de23c76-70f0-4fa7-aa26-f471719ff480,Namespace:kube-system,Attempt:4,} failed, error" error="failed to setup network for sandbox \"f5f9f9783e0b5b866f4eaa0562e32aef18c69ba4d0cce64b35940cd8c51f233a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 17 17:43:37.114069 kubelet[2608]: E0317 17:43:37.114025 2608 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f5f9f9783e0b5b866f4eaa0562e32aef18c69ba4d0cce64b35940cd8c51f233a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 17 17:43:37.114188 kubelet[2608]: E0317 17:43:37.114096 2608 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f5f9f9783e0b5b866f4eaa0562e32aef18c69ba4d0cce64b35940cd8c51f233a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-qmtlq"
Mar 17 17:43:37.114188 kubelet[2608]: E0317 17:43:37.114116 2608 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f5f9f9783e0b5b866f4eaa0562e32aef18c69ba4d0cce64b35940cd8c51f233a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-qmtlq"
Mar 17 17:43:37.114188 kubelet[2608]: E0317 17:43:37.114157 2608 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-qmtlq_kube-system(9de23c76-70f0-4fa7-aa26-f471719ff480)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-qmtlq_kube-system(9de23c76-70f0-4fa7-aa26-f471719ff480)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f5f9f9783e0b5b866f4eaa0562e32aef18c69ba4d0cce64b35940cd8c51f233a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-qmtlq" podUID="9de23c76-70f0-4fa7-aa26-f471719ff480"
Mar 17 17:43:37.226050 systemd[1]: run-netns-cni\x2dc603562c\x2d26bf\x2d38ba\x2d7c80\x2da7c275c9f4fb.mount: Deactivated successfully.
Mar 17 17:43:37.226158 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-ad136815a7bcf9fb1ccac4aab59793996e94f29292589abc3eed912ef7bb109d-shm.mount: Deactivated successfully.
Mar 17 17:43:37.226251 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a0631bf50bf3a143732dff262d9b57d7825ff44db988989892670e8676df101b-shm.mount: Deactivated successfully.
Mar 17 17:43:37.226330 systemd[1]: run-netns-cni\x2db629d1fc\x2d7dd1\x2d66ee\x2dc731\x2dd244f21229a8.mount: Deactivated successfully.
Mar 17 17:43:37.226403 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-d6067ee3ef3f1c8458c7806e1fd60848edb9e6130cf9c3a2b62cb333064bae52-shm.mount: Deactivated successfully.
Mar 17 17:43:37.226478 systemd[1]: run-netns-cni\x2d08ab914b\x2d7356\x2d9d23\x2dfa14\x2d29b69a7879c5.mount: Deactivated successfully.
Mar 17 17:43:37.226552 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-de0bb5eed656cfb50f81396bff87ac24dd133f83c813548d59ca3102fe00ab61-shm.mount: Deactivated successfully.
Mar 17 17:43:37.226629 systemd[1]: run-netns-cni\x2db8336772\x2db4d0\x2ddf5e\x2d2750\x2d3dc12a1a247b.mount: Deactivated successfully.
Mar 17 17:43:37.585915 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1655924560.mount: Deactivated successfully.
Mar 17 17:43:37.858955 containerd[1511]: time="2025-03-17T17:43:37.858791874Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 17 17:43:37.859982 containerd[1511]: time="2025-03-17T17:43:37.859926909Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.2: active requests=0, bytes read=142241445"
Mar 17 17:43:37.861411 containerd[1511]: time="2025-03-17T17:43:37.861372737Z" level=info msg="ImageCreate event name:\"sha256:048bf7af1f8c697d151dbecc478a18e89d89ed8627da98e17a56c11b3d45d351\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 17 17:43:37.863539 containerd[1511]: time="2025-03-17T17:43:37.863503283Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:d9a21be37fe591ee5ab5a2e3dc26408ea165a44a55705102ffaa002de9908b32\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 17 17:43:37.869832 containerd[1511]: time="2025-03-17T17:43:37.869786277Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.2\" with image id \"sha256:048bf7af1f8c697d151dbecc478a18e89d89ed8627da98e17a56c11b3d45d351\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.2\", repo digest \"ghcr.io/flatcar/calico/node@sha256:d9a21be37fe591ee5ab5a2e3dc26408ea165a44a55705102ffaa002de9908b32\", size \"142241307\" in 5.068631971s"
Mar 17 17:43:37.869832 containerd[1511]: time="2025-03-17T17:43:37.869827664Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.2\" returns image reference \"sha256:048bf7af1f8c697d151dbecc478a18e89d89ed8627da98e17a56c11b3d45d351\""
Mar 17 17:43:37.887881 containerd[1511]: time="2025-03-17T17:43:37.887812359Z" level=info msg="CreateContainer within sandbox \"2c98777e56d29a8dd0ce920669c4df7a9bf6cd4047b73997c53484383de257e7\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}"
Mar 17 17:43:37.952580 kubelet[2608]: I0317 17:43:37.952548 2608 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="13120ec95c9d550030350e8d113fa1530bf320f5057a27110b89fb5064c968f0"
Mar 17 17:43:37.953302 containerd[1511]: time="2025-03-17T17:43:37.953150959Z" level=info msg="StopPodSandbox for \"13120ec95c9d550030350e8d113fa1530bf320f5057a27110b89fb5064c968f0\""
Mar 17 17:43:37.953390 containerd[1511]: time="2025-03-17T17:43:37.953365202Z" level=info msg="Ensure that sandbox 13120ec95c9d550030350e8d113fa1530bf320f5057a27110b89fb5064c968f0 in task-service has been cleanup successfully"
Mar 17 17:43:37.953817 containerd[1511]: time="2025-03-17T17:43:37.953706874Z" level=info msg="TearDown network for sandbox \"13120ec95c9d550030350e8d113fa1530bf320f5057a27110b89fb5064c968f0\" successfully"
Mar 17 17:43:37.953817 containerd[1511]: time="2025-03-17T17:43:37.953740327Z" level=info msg="StopPodSandbox for \"13120ec95c9d550030350e8d113fa1530bf320f5057a27110b89fb5064c968f0\" returns successfully"
Mar 17 17:43:37.954069 containerd[1511]: time="2025-03-17T17:43:37.954049348Z" level=info msg="StopPodSandbox for \"f76b26f678734a68282a2eaf69c606018f2f0eab395982a5f7f178d61349920e\""
Mar 17 17:43:37.954147 containerd[1511]: time="2025-03-17T17:43:37.954132264Z" level=info msg="TearDown network for sandbox \"f76b26f678734a68282a2eaf69c606018f2f0eab395982a5f7f178d61349920e\" successfully"
Mar 17 17:43:37.954147 containerd[1511]: time="2025-03-17T17:43:37.954144978Z" level=info msg="StopPodSandbox for \"f76b26f678734a68282a2eaf69c606018f2f0eab395982a5f7f178d61349920e\" returns successfully"
Mar 17 17:43:37.954563 containerd[1511]: time="2025-03-17T17:43:37.954505806Z" level=info msg="StopPodSandbox for \"3bd9d52ffb21c34bbcd23a5be2a60801430f0a97bb4c73a345aebc960a93ff4a\""
Mar 17 17:43:37.954886 containerd[1511]: time="2025-03-17T17:43:37.954843011Z" level=info msg="TearDown network for sandbox \"3bd9d52ffb21c34bbcd23a5be2a60801430f0a97bb4c73a345aebc960a93ff4a\" successfully"
Mar 17 17:43:37.954886 containerd[1511]: time="2025-03-17T17:43:37.954873277Z" level=info msg="StopPodSandbox for \"3bd9d52ffb21c34bbcd23a5be2a60801430f0a97bb4c73a345aebc960a93ff4a\" returns successfully"
Mar 17 17:43:37.955512 containerd[1511]: time="2025-03-17T17:43:37.955491470Z" level=info msg="StopPodSandbox for \"5e63d78a3d354e79550c8cdff0a8e91e3e009993d305083208f0e5ce79b3b5d3\""
Mar 17 17:43:37.955618 containerd[1511]: time="2025-03-17T17:43:37.955598200Z" level=info msg="TearDown network for sandbox \"5e63d78a3d354e79550c8cdff0a8e91e3e009993d305083208f0e5ce79b3b5d3\" successfully"
Mar 17 17:43:37.955618 containerd[1511]: time="2025-03-17T17:43:37.955613970Z" level=info msg="StopPodSandbox for \"5e63d78a3d354e79550c8cdff0a8e91e3e009993d305083208f0e5ce79b3b5d3\" returns successfully"
Mar 17 17:43:37.955838 containerd[1511]: time="2025-03-17T17:43:37.955814386Z" level=info msg="StopPodSandbox for \"605a348218e0107d12ff4cea59eb6ff512155f4808ce20dbf8845e1c2ca94a38\""
Mar 17 17:43:37.955923 containerd[1511]: time="2025-03-17T17:43:37.955901731Z" level=info msg="TearDown network for sandbox \"605a348218e0107d12ff4cea59eb6ff512155f4808ce20dbf8845e1c2ca94a38\" successfully"
Mar 17 17:43:37.955923 containerd[1511]: time="2025-03-17T17:43:37.955914856Z" level=info msg="StopPodSandbox for \"605a348218e0107d12ff4cea59eb6ff512155f4808ce20dbf8845e1c2ca94a38\" returns successfully"
Mar 17 17:43:37.956018 kubelet[2608]: I0317 17:43:37.955995 2608 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3c32fc94639c6ccbeaf0ea399aeb844a6b759e7294e28e5ddf561deaf656ac0a"
Mar 17 17:43:37.956408 kubelet[2608]: E0317 17:43:37.956377 2608 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 17:43:37.956452 containerd[1511]: time="2025-03-17T17:43:37.956388316Z" level=info msg="StopPodSandbox for \"3c32fc94639c6ccbeaf0ea399aeb844a6b759e7294e28e5ddf561deaf656ac0a\""
Mar 17 17:43:37.956818 containerd[1511]: time="2025-03-17T17:43:37.956536204Z" level=info msg="Ensure that sandbox 3c32fc94639c6ccbeaf0ea399aeb844a6b759e7294e28e5ddf561deaf656ac0a in task-service has been cleanup successfully"
Mar 17 17:43:37.956818 containerd[1511]: time="2025-03-17T17:43:37.956587330Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-wp522,Uid:92c87a7e-fef7-4c26-ab3b-4e94dca0e582,Namespace:kube-system,Attempt:5,}"
Mar 17 17:43:37.956818 containerd[1511]: time="2025-03-17T17:43:37.956684984Z" level=info msg="TearDown network for sandbox \"3c32fc94639c6ccbeaf0ea399aeb844a6b759e7294e28e5ddf561deaf656ac0a\" successfully"
Mar 17 17:43:37.956818 containerd[1511]: time="2025-03-17T17:43:37.956696185Z" level=info msg="StopPodSandbox for \"3c32fc94639c6ccbeaf0ea399aeb844a6b759e7294e28e5ddf561deaf656ac0a\" returns successfully"
Mar 17 17:43:37.957032 containerd[1511]: time="2025-03-17T17:43:37.957015615Z" level=info msg="StopPodSandbox for \"4995d670e3f4ae5f83d2ae9fdfeb42a7c2819afc980a6f20207b35f48e319200\""
Mar 17 17:43:37.957168 containerd[1511]: time="2025-03-17T17:43:37.957087441Z" level=info msg="TearDown network for sandbox \"4995d670e3f4ae5f83d2ae9fdfeb42a7c2819afc980a6f20207b35f48e319200\" successfully"
Mar 17 17:43:37.957168 containerd[1511]: time="2025-03-17T17:43:37.957099333Z" level=info msg="StopPodSandbox for \"4995d670e3f4ae5f83d2ae9fdfeb42a7c2819afc980a6f20207b35f48e319200\" returns successfully"
Mar 17 17:43:37.957348 containerd[1511]: time="2025-03-17T17:43:37.957327241Z" level=info msg="StopPodSandbox for \"26ba0aa2321805999e40ba2fc413b837bfc85df793e284ed1b86c369f534a215\""
Mar 17 17:43:37.957412 containerd[1511]: time="2025-03-17T17:43:37.957398786Z" level=info msg="TearDown network for sandbox \"26ba0aa2321805999e40ba2fc413b837bfc85df793e284ed1b86c369f534a215\" successfully"
Mar 17 17:43:37.957412 containerd[1511]: time="2025-03-17T17:43:37.957409967Z" level=info msg="StopPodSandbox for \"26ba0aa2321805999e40ba2fc413b837bfc85df793e284ed1b86c369f534a215\" returns successfully"
Mar 17 17:43:37.957578 containerd[1511]: time="2025-03-17T17:43:37.957553347Z" level=info msg="StopPodSandbox for \"b9037660650fef2882c30e4a390d90ea5d95ff351ca1fe4fef4c05e1892a40bf\""
Mar 17 17:43:37.957652 containerd[1511]: time="2025-03-17T17:43:37.957628297Z" level=info msg="TearDown network for sandbox \"b9037660650fef2882c30e4a390d90ea5d95ff351ca1fe4fef4c05e1892a40bf\" successfully"
Mar 17 17:43:37.957652 containerd[1511]: time="2025-03-17T17:43:37.957640139Z" level=info msg="StopPodSandbox for \"b9037660650fef2882c30e4a390d90ea5d95ff351ca1fe4fef4c05e1892a40bf\" returns successfully"
Mar 17 17:43:37.958115 containerd[1511]: time="2025-03-17T17:43:37.957910859Z" level=info msg="StopPodSandbox for \"b3416c553b8ee2ddd7115ce7b04c86d519dee7b4f6a9ee2e1a0c06779a132290\""
Mar 17 17:43:37.958115 containerd[1511]: time="2025-03-17T17:43:37.957986300Z" level=info msg="TearDown network for sandbox \"b3416c553b8ee2ddd7115ce7b04c86d519dee7b4f6a9ee2e1a0c06779a132290\" successfully"
Mar 17 17:43:37.958115 containerd[1511]: time="2025-03-17T17:43:37.957995488Z" level=info msg="StopPodSandbox for \"b3416c553b8ee2ddd7115ce7b04c86d519dee7b4f6a9ee2e1a0c06779a132290\" returns successfully"
Mar 17 17:43:37.958489 containerd[1511]: time="2025-03-17T17:43:37.958464780Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-j8ss7,Uid:8ed5b12a-6d88-43a8-8215-c1e4e9724067,Namespace:calico-system,Attempt:5,}"
Mar 17 17:43:37.959324 kubelet[2608]: I0317 17:43:37.959270 2608 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dadd7feac938af28c5af276dc4bc8b7db7afc807fcad2a12367eab948774b513"
Mar 17 17:43:37.959809 containerd[1511]: time="2025-03-17T17:43:37.959643366Z" level=info msg="StopPodSandbox for \"dadd7feac938af28c5af276dc4bc8b7db7afc807fcad2a12367eab948774b513\""
Mar 17 17:43:37.960015 containerd[1511]: time="2025-03-17T17:43:37.959996320Z" level=info msg="Ensure that sandbox dadd7feac938af28c5af276dc4bc8b7db7afc807fcad2a12367eab948774b513 in task-service has been cleanup successfully"
Mar 17 17:43:37.960340 containerd[1511]: time="2025-03-17T17:43:37.960321411Z" level=info msg="TearDown network for sandbox \"dadd7feac938af28c5af276dc4bc8b7db7afc807fcad2a12367eab948774b513\" successfully"
Mar 17 17:43:37.960340 containerd[1511]: time="2025-03-17T17:43:37.960333864Z" level=info msg="StopPodSandbox for \"dadd7feac938af28c5af276dc4bc8b7db7afc807fcad2a12367eab948774b513\" returns successfully"
Mar 17 17:43:37.961115 containerd[1511]: time="2025-03-17T17:43:37.960912662Z" level=info msg="StopPodSandbox for \"de0bb5eed656cfb50f81396bff87ac24dd133f83c813548d59ca3102fe00ab61\""
Mar 17 17:43:37.961115 containerd[1511]: time="2025-03-17T17:43:37.961012921Z" level=info msg="TearDown network for sandbox \"de0bb5eed656cfb50f81396bff87ac24dd133f83c813548d59ca3102fe00ab61\" successfully"
Mar 17 17:43:37.961115 containerd[1511]: time="2025-03-17T17:43:37.961024543Z" level=info msg="StopPodSandbox for \"de0bb5eed656cfb50f81396bff87ac24dd133f83c813548d59ca3102fe00ab61\" returns successfully"
Mar 17 17:43:37.961287 containerd[1511]: time="2025-03-17T17:43:37.961265566Z" level=info msg="StopPodSandbox for \"505868a5c076596de5c7575902f86b043b6cdab84fc363153c09421557adfba8\""
Mar 17 17:43:37.961353 containerd[1511]: time="2025-03-17T17:43:37.961339896Z" level=info msg="TearDown network for sandbox \"505868a5c076596de5c7575902f86b043b6cdab84fc363153c09421557adfba8\" successfully"
Mar 17 17:43:37.961353 containerd[1511]: time="2025-03-17T17:43:37.961351117Z" level=info msg="StopPodSandbox for \"505868a5c076596de5c7575902f86b043b6cdab84fc363153c09421557adfba8\" returns successfully"
Mar 17 17:43:37.961406 kubelet[2608]: I0317 17:43:37.961347 2608 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="46d27bee2aac9f8ea5afa69e476f1d7708c4be9375e81449399c6da9d57409f0"
Mar 17 17:43:37.961764 containerd[1511]: time="2025-03-17T17:43:37.961707607Z" level=info msg="StopPodSandbox for \"46d27bee2aac9f8ea5afa69e476f1d7708c4be9375e81449399c6da9d57409f0\""
Mar 17 17:43:37.961811 containerd[1511]: time="2025-03-17T17:43:37.961712296Z" level=info msg="StopPodSandbox for \"e1822912a2888b680ea7f9bf3d6dfd91c2c6eda94d136c98ac1343d40214f70b\""
Mar 17 17:43:37.961929 containerd[1511]: time="2025-03-17T17:43:37.961900790Z" level=info msg="TearDown network for sandbox \"e1822912a2888b680ea7f9bf3d6dfd91c2c6eda94d136c98ac1343d40214f70b\" successfully"
Mar 17 17:43:37.961929 containerd[1511]: time="2025-03-17T17:43:37.961913094Z" level=info msg="StopPodSandbox for \"e1822912a2888b680ea7f9bf3d6dfd91c2c6eda94d136c98ac1343d40214f70b\" returns successfully"
Mar 17 17:43:37.962042 containerd[1511]: time="2025-03-17T17:43:37.962015195Z" level=info msg="Ensure that sandbox 46d27bee2aac9f8ea5afa69e476f1d7708c4be9375e81449399c6da9d57409f0 in task-service has been cleanup successfully"
Mar 17 17:43:37.962406 containerd[1511]: time="2025-03-17T17:43:37.962309669Z" level=info msg="StopPodSandbox for \"4ce107ba3ec9f31d7b3420825039f0a23e3438346875ccd104dc7f519f537446\""
Mar 17 17:43:37.962406 containerd[1511]: time="2025-03-17T17:43:37.962393607Z" level=info msg="TearDown network for sandbox \"4ce107ba3ec9f31d7b3420825039f0a23e3438346875ccd104dc7f519f537446\" successfully"
Mar 17 17:43:37.962406 containerd[1511]: time="2025-03-17T17:43:37.962402153Z" level=info msg="StopPodSandbox for \"4ce107ba3ec9f31d7b3420825039f0a23e3438346875ccd104dc7f519f537446\" returns successfully"
Mar 17 17:43:37.962571 containerd[1511]: time="2025-03-17T17:43:37.962314348Z" level=info msg="TearDown network for sandbox \"46d27bee2aac9f8ea5afa69e476f1d7708c4be9375e81449399c6da9d57409f0\" successfully"
Mar 17 17:43:37.962571 containerd[1511]: time="2025-03-17T17:43:37.962449773Z" level=info msg="StopPodSandbox for \"46d27bee2aac9f8ea5afa69e476f1d7708c4be9375e81449399c6da9d57409f0\" returns successfully"
Mar 17
17:43:37.963060 containerd[1511]: time="2025-03-17T17:43:37.962867098Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6cbddc9666-8jnnv,Uid:c27ab8c8-4886-4cfa-ac38-ef82b827b394,Namespace:calico-apiserver,Attempt:5,}" Mar 17 17:43:37.963060 containerd[1511]: time="2025-03-17T17:43:37.962917041Z" level=info msg="StopPodSandbox for \"a0631bf50bf3a143732dff262d9b57d7825ff44db988989892670e8676df101b\"" Mar 17 17:43:37.963060 containerd[1511]: time="2025-03-17T17:43:37.963004716Z" level=info msg="TearDown network for sandbox \"a0631bf50bf3a143732dff262d9b57d7825ff44db988989892670e8676df101b\" successfully" Mar 17 17:43:37.963060 containerd[1511]: time="2025-03-17T17:43:37.963014043Z" level=info msg="StopPodSandbox for \"a0631bf50bf3a143732dff262d9b57d7825ff44db988989892670e8676df101b\" returns successfully" Mar 17 17:43:37.963401 containerd[1511]: time="2025-03-17T17:43:37.963292437Z" level=info msg="StopPodSandbox for \"9247361ac2ade69977479b8eda935f31e0c380b93fd00fee4265ac9e0a0ed408\"" Mar 17 17:43:37.963449 containerd[1511]: time="2025-03-17T17:43:37.963410228Z" level=info msg="TearDown network for sandbox \"9247361ac2ade69977479b8eda935f31e0c380b93fd00fee4265ac9e0a0ed408\" successfully" Mar 17 17:43:37.963449 containerd[1511]: time="2025-03-17T17:43:37.963423994Z" level=info msg="StopPodSandbox for \"9247361ac2ade69977479b8eda935f31e0c380b93fd00fee4265ac9e0a0ed408\" returns successfully" Mar 17 17:43:37.963658 kubelet[2608]: I0317 17:43:37.963599 2608 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f5f9f9783e0b5b866f4eaa0562e32aef18c69ba4d0cce64b35940cd8c51f233a" Mar 17 17:43:37.963731 containerd[1511]: time="2025-03-17T17:43:37.963706285Z" level=info msg="StopPodSandbox for \"c727faf86979f098f3854e7a292376485b8fdcffd8e4a361726d0eb4ed6c4541\"" Mar 17 17:43:37.963824 containerd[1511]: time="2025-03-17T17:43:37.963804339Z" level=info msg="TearDown network for sandbox 
\"c727faf86979f098f3854e7a292376485b8fdcffd8e4a361726d0eb4ed6c4541\" successfully" Mar 17 17:43:37.963851 containerd[1511]: time="2025-03-17T17:43:37.963822483Z" level=info msg="StopPodSandbox for \"c727faf86979f098f3854e7a292376485b8fdcffd8e4a361726d0eb4ed6c4541\" returns successfully" Mar 17 17:43:37.964084 containerd[1511]: time="2025-03-17T17:43:37.964062995Z" level=info msg="StopPodSandbox for \"f5f9f9783e0b5b866f4eaa0562e32aef18c69ba4d0cce64b35940cd8c51f233a\"" Mar 17 17:43:37.964389 containerd[1511]: time="2025-03-17T17:43:37.964261058Z" level=info msg="StopPodSandbox for \"22d2b9a804aff601d7932c4460691afa93c66169fac5c4cfe51532162fb0b3e0\"" Mar 17 17:43:37.964389 containerd[1511]: time="2025-03-17T17:43:37.964290063Z" level=info msg="Ensure that sandbox f5f9f9783e0b5b866f4eaa0562e32aef18c69ba4d0cce64b35940cd8c51f233a in task-service has been cleanup successfully" Mar 17 17:43:37.964389 containerd[1511]: time="2025-03-17T17:43:37.964342071Z" level=info msg="TearDown network for sandbox \"22d2b9a804aff601d7932c4460691afa93c66169fac5c4cfe51532162fb0b3e0\" successfully" Mar 17 17:43:37.964389 containerd[1511]: time="2025-03-17T17:43:37.964351909Z" level=info msg="StopPodSandbox for \"22d2b9a804aff601d7932c4460691afa93c66169fac5c4cfe51532162fb0b3e0\" returns successfully" Mar 17 17:43:37.964520 containerd[1511]: time="2025-03-17T17:43:37.964473768Z" level=info msg="TearDown network for sandbox \"f5f9f9783e0b5b866f4eaa0562e32aef18c69ba4d0cce64b35940cd8c51f233a\" successfully" Mar 17 17:43:37.964520 containerd[1511]: time="2025-03-17T17:43:37.964488245Z" level=info msg="StopPodSandbox for \"f5f9f9783e0b5b866f4eaa0562e32aef18c69ba4d0cce64b35940cd8c51f233a\" returns successfully" Mar 17 17:43:37.964739 containerd[1511]: time="2025-03-17T17:43:37.964718689Z" level=info msg="StopPodSandbox for \"ad136815a7bcf9fb1ccac4aab59793996e94f29292589abc3eed912ef7bb109d\"" Mar 17 17:43:37.964813 containerd[1511]: time="2025-03-17T17:43:37.964799591Z" level=info msg="TearDown 
network for sandbox \"ad136815a7bcf9fb1ccac4aab59793996e94f29292589abc3eed912ef7bb109d\" successfully" Mar 17 17:43:37.964846 containerd[1511]: time="2025-03-17T17:43:37.964812635Z" level=info msg="StopPodSandbox for \"ad136815a7bcf9fb1ccac4aab59793996e94f29292589abc3eed912ef7bb109d\" returns successfully" Mar 17 17:43:37.964894 containerd[1511]: time="2025-03-17T17:43:37.964844124Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-86f7c466bd-bf4pr,Uid:bb999740-3eb7-4d5d-b75d-a6ed26b4fcf7,Namespace:calico-system,Attempt:5,}" Mar 17 17:43:37.965690 containerd[1511]: time="2025-03-17T17:43:37.965315140Z" level=info msg="StopPodSandbox for \"f00b51c06514b707b4929dbc9613099efe7622f3f5df69b8af03440ae18de24e\"" Mar 17 17:43:37.965690 containerd[1511]: time="2025-03-17T17:43:37.965598804Z" level=info msg="TearDown network for sandbox \"f00b51c06514b707b4929dbc9613099efe7622f3f5df69b8af03440ae18de24e\" successfully" Mar 17 17:43:37.965690 containerd[1511]: time="2025-03-17T17:43:37.965611107Z" level=info msg="StopPodSandbox for \"f00b51c06514b707b4929dbc9613099efe7622f3f5df69b8af03440ae18de24e\" returns successfully" Mar 17 17:43:37.965893 containerd[1511]: time="2025-03-17T17:43:37.965872277Z" level=info msg="StopPodSandbox for \"a6fd5ae8d8c37d968bd04700971eb2886495554432a091f2befb520f8b34c501\"" Mar 17 17:43:37.966066 containerd[1511]: time="2025-03-17T17:43:37.966047497Z" level=info msg="TearDown network for sandbox \"a6fd5ae8d8c37d968bd04700971eb2886495554432a091f2befb520f8b34c501\" successfully" Mar 17 17:43:37.966183 containerd[1511]: time="2025-03-17T17:43:37.966124391Z" level=info msg="StopPodSandbox for \"a6fd5ae8d8c37d968bd04700971eb2886495554432a091f2befb520f8b34c501\" returns successfully" Mar 17 17:43:37.966275 kubelet[2608]: I0317 17:43:37.966226 2608 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2b9c986ecac5a820435cbfa9e312ad5f5736fd10e27620edeccecc76eb005ab1" Mar 17 17:43:37.966424 
containerd[1511]: time="2025-03-17T17:43:37.966387166Z" level=info msg="StopPodSandbox for \"c0357f0524099885f06d4a7418831e7579b998c419311c26aa34d8779c42dc52\"" Mar 17 17:43:37.966802 containerd[1511]: time="2025-03-17T17:43:37.966492994Z" level=info msg="TearDown network for sandbox \"c0357f0524099885f06d4a7418831e7579b998c419311c26aa34d8779c42dc52\" successfully" Mar 17 17:43:37.966802 containerd[1511]: time="2025-03-17T17:43:37.966511469Z" level=info msg="StopPodSandbox for \"c0357f0524099885f06d4a7418831e7579b998c419311c26aa34d8779c42dc52\" returns successfully" Mar 17 17:43:37.966802 containerd[1511]: time="2025-03-17T17:43:37.966552356Z" level=info msg="StopPodSandbox for \"2b9c986ecac5a820435cbfa9e312ad5f5736fd10e27620edeccecc76eb005ab1\"" Mar 17 17:43:37.966802 containerd[1511]: time="2025-03-17T17:43:37.966690255Z" level=info msg="Ensure that sandbox 2b9c986ecac5a820435cbfa9e312ad5f5736fd10e27620edeccecc76eb005ab1 in task-service has been cleanup successfully" Mar 17 17:43:37.966932 kubelet[2608]: E0317 17:43:37.966748 2608 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:43:37.967040 containerd[1511]: time="2025-03-17T17:43:37.967004867Z" level=info msg="TearDown network for sandbox \"2b9c986ecac5a820435cbfa9e312ad5f5736fd10e27620edeccecc76eb005ab1\" successfully" Mar 17 17:43:37.967040 containerd[1511]: time="2025-03-17T17:43:37.967018022Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-qmtlq,Uid:9de23c76-70f0-4fa7-aa26-f471719ff480,Namespace:kube-system,Attempt:5,}" Mar 17 17:43:37.967275 containerd[1511]: time="2025-03-17T17:43:37.967022891Z" level=info msg="StopPodSandbox for \"2b9c986ecac5a820435cbfa9e312ad5f5736fd10e27620edeccecc76eb005ab1\" returns successfully" Mar 17 17:43:37.967392 containerd[1511]: time="2025-03-17T17:43:37.967353522Z" level=info msg="StopPodSandbox for 
\"d6067ee3ef3f1c8458c7806e1fd60848edb9e6130cf9c3a2b62cb333064bae52\"" Mar 17 17:43:37.967472 containerd[1511]: time="2025-03-17T17:43:37.967429846Z" level=info msg="TearDown network for sandbox \"d6067ee3ef3f1c8458c7806e1fd60848edb9e6130cf9c3a2b62cb333064bae52\" successfully" Mar 17 17:43:37.967472 containerd[1511]: time="2025-03-17T17:43:37.967438543Z" level=info msg="StopPodSandbox for \"d6067ee3ef3f1c8458c7806e1fd60848edb9e6130cf9c3a2b62cb333064bae52\" returns successfully" Mar 17 17:43:37.967659 containerd[1511]: time="2025-03-17T17:43:37.967632778Z" level=info msg="StopPodSandbox for \"af73a63aa645f1d51633cc194edb5fce001c89b5bb20b391b65014c31e4f0c4f\"" Mar 17 17:43:37.967738 containerd[1511]: time="2025-03-17T17:43:37.967719360Z" level=info msg="TearDown network for sandbox \"af73a63aa645f1d51633cc194edb5fce001c89b5bb20b391b65014c31e4f0c4f\" successfully" Mar 17 17:43:37.967738 containerd[1511]: time="2025-03-17T17:43:37.967733707Z" level=info msg="StopPodSandbox for \"af73a63aa645f1d51633cc194edb5fce001c89b5bb20b391b65014c31e4f0c4f\" returns successfully" Mar 17 17:43:37.969865 containerd[1511]: time="2025-03-17T17:43:37.969808839Z" level=info msg="StopPodSandbox for \"2cb67c065fae8b5c1bdba86659bed6455408568df13f798feb182fef9902a6fd\"" Mar 17 17:43:37.969932 containerd[1511]: time="2025-03-17T17:43:37.969916131Z" level=info msg="TearDown network for sandbox \"2cb67c065fae8b5c1bdba86659bed6455408568df13f798feb182fef9902a6fd\" successfully" Mar 17 17:43:37.969955 containerd[1511]: time="2025-03-17T17:43:37.969930017Z" level=info msg="StopPodSandbox for \"2cb67c065fae8b5c1bdba86659bed6455408568df13f798feb182fef9902a6fd\" returns successfully" Mar 17 17:43:37.970189 containerd[1511]: time="2025-03-17T17:43:37.970170138Z" level=info msg="StopPodSandbox for \"a76d3257dd8be43a7cc3bd31473414beabbebb92e09df84988345f517c7acdb6\"" Mar 17 17:43:37.970287 containerd[1511]: time="2025-03-17T17:43:37.970258614Z" level=info msg="TearDown network for sandbox 
\"a76d3257dd8be43a7cc3bd31473414beabbebb92e09df84988345f517c7acdb6\" successfully" Mar 17 17:43:37.970287 containerd[1511]: time="2025-03-17T17:43:37.970272171Z" level=info msg="StopPodSandbox for \"a76d3257dd8be43a7cc3bd31473414beabbebb92e09df84988345f517c7acdb6\" returns successfully" Mar 17 17:43:37.970680 containerd[1511]: time="2025-03-17T17:43:37.970656993Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6cbddc9666-fgfrs,Uid:b74841bb-3d21-4734-85da-f48ab60f9d98,Namespace:calico-apiserver,Attempt:5,}" Mar 17 17:43:38.048332 containerd[1511]: time="2025-03-17T17:43:38.048232046Z" level=info msg="CreateContainer within sandbox \"2c98777e56d29a8dd0ce920669c4df7a9bf6cd4047b73997c53484383de257e7\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"25dbdd932159fe57474932200b671866109421afcc56b903349ba4289b55f8a8\"" Mar 17 17:43:38.048782 containerd[1511]: time="2025-03-17T17:43:38.048732457Z" level=info msg="StartContainer for \"25dbdd932159fe57474932200b671866109421afcc56b903349ba4289b55f8a8\"" Mar 17 17:43:38.161422 systemd[1]: Started cri-containerd-25dbdd932159fe57474932200b671866109421afcc56b903349ba4289b55f8a8.scope - libcontainer container 25dbdd932159fe57474932200b671866109421afcc56b903349ba4289b55f8a8. Mar 17 17:43:38.234839 systemd[1]: run-netns-cni\x2d20d756e7\x2d434b\x2d865e\x2dbe62\x2deda46692a765.mount: Deactivated successfully. Mar 17 17:43:38.234980 systemd[1]: run-netns-cni\x2d3bfdbd51\x2d6a23\x2df2b8\x2d5388\x2d23a625a965d8.mount: Deactivated successfully. Mar 17 17:43:38.235056 systemd[1]: run-netns-cni\x2d45e69775\x2dd52d\x2db64d\x2de419\x2d2f3ed0338335.mount: Deactivated successfully. Mar 17 17:43:38.235130 systemd[1]: run-netns-cni\x2d287b80ce\x2ddecc\x2d0d09\x2dd573\x2d490550ededff.mount: Deactivated successfully. Mar 17 17:43:38.235205 systemd[1]: run-netns-cni\x2dc64bf3f4\x2d666b\x2d6029\x2d98dd\x2d3ee391da6e1b.mount: Deactivated successfully. 
Mar 17 17:43:38.235319 systemd[1]: run-netns-cni\x2dbaeccf81\x2d3919\x2d0c5e\x2dd5ae\x2dd99957a6eeec.mount: Deactivated successfully. Mar 17 17:43:38.266312 containerd[1511]: time="2025-03-17T17:43:38.265504967Z" level=error msg="Failed to destroy network for sandbox \"b70888017c9929fd44585839e79bef8b8018f5bb950a2aa4f960f242ce269b91\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:43:38.266312 containerd[1511]: time="2025-03-17T17:43:38.265989317Z" level=error msg="encountered an error cleaning up failed sandbox \"b70888017c9929fd44585839e79bef8b8018f5bb950a2aa4f960f242ce269b91\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:43:38.266312 containerd[1511]: time="2025-03-17T17:43:38.266049151Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6cbddc9666-8jnnv,Uid:c27ab8c8-4886-4cfa-ac38-ef82b827b394,Namespace:calico-apiserver,Attempt:5,} failed, error" error="failed to setup network for sandbox \"b70888017c9929fd44585839e79bef8b8018f5bb950a2aa4f960f242ce269b91\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:43:38.266708 kubelet[2608]: E0317 17:43:38.266370 2608 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b70888017c9929fd44585839e79bef8b8018f5bb950a2aa4f960f242ce269b91\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:43:38.266708 
kubelet[2608]: E0317 17:43:38.266448 2608 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b70888017c9929fd44585839e79bef8b8018f5bb950a2aa4f960f242ce269b91\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6cbddc9666-8jnnv" Mar 17 17:43:38.266708 kubelet[2608]: E0317 17:43:38.266471 2608 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b70888017c9929fd44585839e79bef8b8018f5bb950a2aa4f960f242ce269b91\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6cbddc9666-8jnnv" Mar 17 17:43:38.266953 kubelet[2608]: E0317 17:43:38.266510 2608 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6cbddc9666-8jnnv_calico-apiserver(c27ab8c8-4886-4cfa-ac38-ef82b827b394)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6cbddc9666-8jnnv_calico-apiserver(c27ab8c8-4886-4cfa-ac38-ef82b827b394)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b70888017c9929fd44585839e79bef8b8018f5bb950a2aa4f960f242ce269b91\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6cbddc9666-8jnnv" podUID="c27ab8c8-4886-4cfa-ac38-ef82b827b394" Mar 17 17:43:38.270011 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-b70888017c9929fd44585839e79bef8b8018f5bb950a2aa4f960f242ce269b91-shm.mount: 
Deactivated successfully. Mar 17 17:43:38.289043 containerd[1511]: time="2025-03-17T17:43:38.288827581Z" level=error msg="Failed to destroy network for sandbox \"73bb621c1b0eab504f437f6a4408c5e08f9fdfca9359426c070502f43d2e111f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:43:38.291364 containerd[1511]: time="2025-03-17T17:43:38.291326530Z" level=error msg="Failed to destroy network for sandbox \"03f1b883f3ce8c49936e808421e7df92873d621d70bfc03a9ce94e5d0180a08c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:43:38.291899 containerd[1511]: time="2025-03-17T17:43:38.291875711Z" level=error msg="encountered an error cleaning up failed sandbox \"03f1b883f3ce8c49936e808421e7df92873d621d70bfc03a9ce94e5d0180a08c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:43:38.292052 containerd[1511]: time="2025-03-17T17:43:38.292024321Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-86f7c466bd-bf4pr,Uid:bb999740-3eb7-4d5d-b75d-a6ed26b4fcf7,Namespace:calico-system,Attempt:5,} failed, error" error="failed to setup network for sandbox \"03f1b883f3ce8c49936e808421e7df92873d621d70bfc03a9ce94e5d0180a08c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:43:38.295225 containerd[1511]: time="2025-03-17T17:43:38.294430454Z" level=error msg="encountered an error cleaning up failed sandbox 
\"73bb621c1b0eab504f437f6a4408c5e08f9fdfca9359426c070502f43d2e111f\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:43:38.295225 containerd[1511]: time="2025-03-17T17:43:38.294518459Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-j8ss7,Uid:8ed5b12a-6d88-43a8-8215-c1e4e9724067,Namespace:calico-system,Attempt:5,} failed, error" error="failed to setup network for sandbox \"73bb621c1b0eab504f437f6a4408c5e08f9fdfca9359426c070502f43d2e111f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:43:38.295376 kubelet[2608]: E0317 17:43:38.292558 2608 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"03f1b883f3ce8c49936e808421e7df92873d621d70bfc03a9ce94e5d0180a08c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:43:38.295376 kubelet[2608]: E0317 17:43:38.292707 2608 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"03f1b883f3ce8c49936e808421e7df92873d621d70bfc03a9ce94e5d0180a08c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-86f7c466bd-bf4pr" Mar 17 17:43:38.295376 kubelet[2608]: E0317 17:43:38.292757 2608 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"03f1b883f3ce8c49936e808421e7df92873d621d70bfc03a9ce94e5d0180a08c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-86f7c466bd-bf4pr" Mar 17 17:43:38.292671 systemd[1]: Started sshd@8-10.0.0.14:22-10.0.0.1:57830.service - OpenSSH per-connection server daemon (10.0.0.1:57830). Mar 17 17:43:38.295714 kubelet[2608]: E0317 17:43:38.292858 2608 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-86f7c466bd-bf4pr_calico-system(bb999740-3eb7-4d5d-b75d-a6ed26b4fcf7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-86f7c466bd-bf4pr_calico-system(bb999740-3eb7-4d5d-b75d-a6ed26b4fcf7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"03f1b883f3ce8c49936e808421e7df92873d621d70bfc03a9ce94e5d0180a08c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-86f7c466bd-bf4pr" podUID="bb999740-3eb7-4d5d-b75d-a6ed26b4fcf7" Mar 17 17:43:38.295575 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-03f1b883f3ce8c49936e808421e7df92873d621d70bfc03a9ce94e5d0180a08c-shm.mount: Deactivated successfully. Mar 17 17:43:38.295698 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-73bb621c1b0eab504f437f6a4408c5e08f9fdfca9359426c070502f43d2e111f-shm.mount: Deactivated successfully. 
Mar 17 17:43:38.295886 kubelet[2608]: E0317 17:43:38.295784 2608 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"73bb621c1b0eab504f437f6a4408c5e08f9fdfca9359426c070502f43d2e111f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:43:38.295886 kubelet[2608]: E0317 17:43:38.295842 2608 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"73bb621c1b0eab504f437f6a4408c5e08f9fdfca9359426c070502f43d2e111f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-j8ss7" Mar 17 17:43:38.295987 kubelet[2608]: E0317 17:43:38.295875 2608 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"73bb621c1b0eab504f437f6a4408c5e08f9fdfca9359426c070502f43d2e111f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-j8ss7" Mar 17 17:43:38.296096 kubelet[2608]: E0317 17:43:38.296063 2608 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-j8ss7_calico-system(8ed5b12a-6d88-43a8-8215-c1e4e9724067)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-j8ss7_calico-system(8ed5b12a-6d88-43a8-8215-c1e4e9724067)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"73bb621c1b0eab504f437f6a4408c5e08f9fdfca9359426c070502f43d2e111f\\\": plugin type=\\\"calico\\\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-j8ss7" podUID="8ed5b12a-6d88-43a8-8215-c1e4e9724067" Mar 17 17:43:38.336963 containerd[1511]: time="2025-03-17T17:43:38.336901235Z" level=info msg="StartContainer for \"25dbdd932159fe57474932200b671866109421afcc56b903349ba4289b55f8a8\" returns successfully" Mar 17 17:43:38.352953 containerd[1511]: time="2025-03-17T17:43:38.352886165Z" level=error msg="Failed to destroy network for sandbox \"a625cb55d3f0798411969a54cfa3b06acd6337b71463192ab198ee24f2b40587\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:43:38.355900 containerd[1511]: time="2025-03-17T17:43:38.353688022Z" level=error msg="encountered an error cleaning up failed sandbox \"a625cb55d3f0798411969a54cfa3b06acd6337b71463192ab198ee24f2b40587\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:43:38.355900 containerd[1511]: time="2025-03-17T17:43:38.353761841Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6cbddc9666-fgfrs,Uid:b74841bb-3d21-4734-85da-f48ab60f9d98,Namespace:calico-apiserver,Attempt:5,} failed, error" error="failed to setup network for sandbox \"a625cb55d3f0798411969a54cfa3b06acd6337b71463192ab198ee24f2b40587\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:43:38.355659 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a625cb55d3f0798411969a54cfa3b06acd6337b71463192ab198ee24f2b40587-shm.mount: Deactivated 
successfully. Mar 17 17:43:38.356178 kubelet[2608]: E0317 17:43:38.354049 2608 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a625cb55d3f0798411969a54cfa3b06acd6337b71463192ab198ee24f2b40587\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:43:38.356178 kubelet[2608]: E0317 17:43:38.354126 2608 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a625cb55d3f0798411969a54cfa3b06acd6337b71463192ab198ee24f2b40587\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6cbddc9666-fgfrs" Mar 17 17:43:38.356178 kubelet[2608]: E0317 17:43:38.354149 2608 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a625cb55d3f0798411969a54cfa3b06acd6337b71463192ab198ee24f2b40587\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6cbddc9666-fgfrs" Mar 17 17:43:38.356489 kubelet[2608]: E0317 17:43:38.354198 2608 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6cbddc9666-fgfrs_calico-apiserver(b74841bb-3d21-4734-85da-f48ab60f9d98)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6cbddc9666-fgfrs_calico-apiserver(b74841bb-3d21-4734-85da-f48ab60f9d98)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"a625cb55d3f0798411969a54cfa3b06acd6337b71463192ab198ee24f2b40587\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6cbddc9666-fgfrs" podUID="b74841bb-3d21-4734-85da-f48ab60f9d98" Mar 17 17:43:38.362282 containerd[1511]: time="2025-03-17T17:43:38.362190626Z" level=error msg="Failed to destroy network for sandbox \"d02b3133b2cc39eef5164fd20986920d81038c967b1c062931d800ccbd0fd0e2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:43:38.365892 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-d02b3133b2cc39eef5164fd20986920d81038c967b1c062931d800ccbd0fd0e2-shm.mount: Deactivated successfully. Mar 17 17:43:38.366556 containerd[1511]: time="2025-03-17T17:43:38.366507913Z" level=error msg="encountered an error cleaning up failed sandbox \"d02b3133b2cc39eef5164fd20986920d81038c967b1c062931d800ccbd0fd0e2\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:43:38.367192 containerd[1511]: time="2025-03-17T17:43:38.367141734Z" level=error msg="Failed to destroy network for sandbox \"b354ba4755a488c2027d000cfd3bd6e154695e02f9e7e99277c2a943c716e919\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:43:38.367416 containerd[1511]: time="2025-03-17T17:43:38.367148497Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-6f6b679f8f-qmtlq,Uid:9de23c76-70f0-4fa7-aa26-f471719ff480,Namespace:kube-system,Attempt:5,} failed, error" error="failed to setup network for sandbox \"d02b3133b2cc39eef5164fd20986920d81038c967b1c062931d800ccbd0fd0e2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:43:38.368432 kubelet[2608]: E0317 17:43:38.367828 2608 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d02b3133b2cc39eef5164fd20986920d81038c967b1c062931d800ccbd0fd0e2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:43:38.368432 kubelet[2608]: E0317 17:43:38.367911 2608 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d02b3133b2cc39eef5164fd20986920d81038c967b1c062931d800ccbd0fd0e2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-qmtlq" Mar 17 17:43:38.368432 kubelet[2608]: E0317 17:43:38.367935 2608 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d02b3133b2cc39eef5164fd20986920d81038c967b1c062931d800ccbd0fd0e2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-qmtlq" Mar 17 17:43:38.368613 containerd[1511]: time="2025-03-17T17:43:38.368087151Z" level=error msg="encountered an error cleaning up failed sandbox 
\"b354ba4755a488c2027d000cfd3bd6e154695e02f9e7e99277c2a943c716e919\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:43:38.368613 containerd[1511]: time="2025-03-17T17:43:38.368148356Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-wp522,Uid:92c87a7e-fef7-4c26-ab3b-4e94dca0e582,Namespace:kube-system,Attempt:5,} failed, error" error="failed to setup network for sandbox \"b354ba4755a488c2027d000cfd3bd6e154695e02f9e7e99277c2a943c716e919\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:43:38.368684 kubelet[2608]: E0317 17:43:38.367980 2608 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-qmtlq_kube-system(9de23c76-70f0-4fa7-aa26-f471719ff480)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-qmtlq_kube-system(9de23c76-70f0-4fa7-aa26-f471719ff480)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d02b3133b2cc39eef5164fd20986920d81038c967b1c062931d800ccbd0fd0e2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-qmtlq" podUID="9de23c76-70f0-4fa7-aa26-f471719ff480" Mar 17 17:43:38.368684 kubelet[2608]: E0317 17:43:38.368418 2608 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b354ba4755a488c2027d000cfd3bd6e154695e02f9e7e99277c2a943c716e919\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: 
check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:43:38.368684 kubelet[2608]: E0317 17:43:38.368599 2608 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b354ba4755a488c2027d000cfd3bd6e154695e02f9e7e99277c2a943c716e919\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-wp522" Mar 17 17:43:38.368807 kubelet[2608]: E0317 17:43:38.368622 2608 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b354ba4755a488c2027d000cfd3bd6e154695e02f9e7e99277c2a943c716e919\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-wp522" Mar 17 17:43:38.368807 kubelet[2608]: E0317 17:43:38.368672 2608 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-wp522_kube-system(92c87a7e-fef7-4c26-ab3b-4e94dca0e582)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-wp522_kube-system(92c87a7e-fef7-4c26-ab3b-4e94dca0e582)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b354ba4755a488c2027d000cfd3bd6e154695e02f9e7e99277c2a943c716e919\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-wp522" podUID="92c87a7e-fef7-4c26-ab3b-4e94dca0e582" Mar 17 17:43:38.377163 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. 
Mar 17 17:43:38.377257 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Mar 17 17:43:38.381943 sshd[4774]: Accepted publickey for core from 10.0.0.1 port 57830 ssh2: RSA SHA256:sryA1eJfWIez1lK1HnLUziwsxLkQurNjq13plyk0DV0 Mar 17 17:43:38.383920 sshd-session[4774]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:43:38.389199 systemd-logind[1494]: New session 9 of user core. Mar 17 17:43:38.394436 systemd[1]: Started session-9.scope - Session 9 of User core. Mar 17 17:43:38.531826 sshd[4789]: Connection closed by 10.0.0.1 port 57830 Mar 17 17:43:38.531312 sshd-session[4774]: pam_unix(sshd:session): session closed for user core Mar 17 17:43:38.537766 systemd[1]: sshd@8-10.0.0.14:22-10.0.0.1:57830.service: Deactivated successfully. Mar 17 17:43:38.540363 systemd[1]: session-9.scope: Deactivated successfully. Mar 17 17:43:38.541227 systemd-logind[1494]: Session 9 logged out. Waiting for processes to exit. Mar 17 17:43:38.542504 systemd-logind[1494]: Removed session 9. 
Mar 17 17:43:38.977555 kubelet[2608]: I0317 17:43:38.977516 2608 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b354ba4755a488c2027d000cfd3bd6e154695e02f9e7e99277c2a943c716e919" Mar 17 17:43:38.978158 containerd[1511]: time="2025-03-17T17:43:38.978116545Z" level=info msg="StopPodSandbox for \"b354ba4755a488c2027d000cfd3bd6e154695e02f9e7e99277c2a943c716e919\"" Mar 17 17:43:38.978465 containerd[1511]: time="2025-03-17T17:43:38.978335236Z" level=info msg="Ensure that sandbox b354ba4755a488c2027d000cfd3bd6e154695e02f9e7e99277c2a943c716e919 in task-service has been cleanup successfully" Mar 17 17:43:38.978556 containerd[1511]: time="2025-03-17T17:43:38.978534410Z" level=info msg="TearDown network for sandbox \"b354ba4755a488c2027d000cfd3bd6e154695e02f9e7e99277c2a943c716e919\" successfully" Mar 17 17:43:38.978593 containerd[1511]: time="2025-03-17T17:43:38.978550761Z" level=info msg="StopPodSandbox for \"b354ba4755a488c2027d000cfd3bd6e154695e02f9e7e99277c2a943c716e919\" returns successfully" Mar 17 17:43:38.979025 containerd[1511]: time="2025-03-17T17:43:38.978952876Z" level=info msg="StopPodSandbox for \"13120ec95c9d550030350e8d113fa1530bf320f5057a27110b89fb5064c968f0\"" Mar 17 17:43:38.979181 containerd[1511]: time="2025-03-17T17:43:38.979095325Z" level=info msg="TearDown network for sandbox \"13120ec95c9d550030350e8d113fa1530bf320f5057a27110b89fb5064c968f0\" successfully" Mar 17 17:43:38.979181 containerd[1511]: time="2025-03-17T17:43:38.979107588Z" level=info msg="StopPodSandbox for \"13120ec95c9d550030350e8d113fa1530bf320f5057a27110b89fb5064c968f0\" returns successfully" Mar 17 17:43:38.979759 containerd[1511]: time="2025-03-17T17:43:38.979633706Z" level=info msg="StopPodSandbox for \"f76b26f678734a68282a2eaf69c606018f2f0eab395982a5f7f178d61349920e\"" Mar 17 17:43:38.979759 containerd[1511]: time="2025-03-17T17:43:38.979718637Z" level=info msg="TearDown network for sandbox 
\"f76b26f678734a68282a2eaf69c606018f2f0eab395982a5f7f178d61349920e\" successfully" Mar 17 17:43:38.979759 containerd[1511]: time="2025-03-17T17:43:38.979729327Z" level=info msg="StopPodSandbox for \"f76b26f678734a68282a2eaf69c606018f2f0eab395982a5f7f178d61349920e\" returns successfully" Mar 17 17:43:38.979985 containerd[1511]: time="2025-03-17T17:43:38.979952646Z" level=info msg="StopPodSandbox for \"3bd9d52ffb21c34bbcd23a5be2a60801430f0a97bb4c73a345aebc960a93ff4a\"" Mar 17 17:43:38.980068 containerd[1511]: time="2025-03-17T17:43:38.980049618Z" level=info msg="TearDown network for sandbox \"3bd9d52ffb21c34bbcd23a5be2a60801430f0a97bb4c73a345aebc960a93ff4a\" successfully" Mar 17 17:43:38.980068 containerd[1511]: time="2025-03-17T17:43:38.980060348Z" level=info msg="StopPodSandbox for \"3bd9d52ffb21c34bbcd23a5be2a60801430f0a97bb4c73a345aebc960a93ff4a\" returns successfully" Mar 17 17:43:38.980880 kubelet[2608]: I0317 17:43:38.980383 2608 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="73bb621c1b0eab504f437f6a4408c5e08f9fdfca9359426c070502f43d2e111f" Mar 17 17:43:38.980939 containerd[1511]: time="2025-03-17T17:43:38.980484907Z" level=info msg="StopPodSandbox for \"5e63d78a3d354e79550c8cdff0a8e91e3e009993d305083208f0e5ce79b3b5d3\"" Mar 17 17:43:38.980939 containerd[1511]: time="2025-03-17T17:43:38.980565098Z" level=info msg="TearDown network for sandbox \"5e63d78a3d354e79550c8cdff0a8e91e3e009993d305083208f0e5ce79b3b5d3\" successfully" Mar 17 17:43:38.980939 containerd[1511]: time="2025-03-17T17:43:38.980574265Z" level=info msg="StopPodSandbox for \"5e63d78a3d354e79550c8cdff0a8e91e3e009993d305083208f0e5ce79b3b5d3\" returns successfully" Mar 17 17:43:38.981362 containerd[1511]: time="2025-03-17T17:43:38.981331037Z" level=info msg="StopPodSandbox for \"605a348218e0107d12ff4cea59eb6ff512155f4808ce20dbf8845e1c2ca94a38\"" Mar 17 17:43:38.981499 containerd[1511]: time="2025-03-17T17:43:38.981466492Z" level=info msg="StopPodSandbox for 
\"73bb621c1b0eab504f437f6a4408c5e08f9fdfca9359426c070502f43d2e111f\"" Mar 17 17:43:38.981732 containerd[1511]: time="2025-03-17T17:43:38.981700913Z" level=info msg="Ensure that sandbox 73bb621c1b0eab504f437f6a4408c5e08f9fdfca9359426c070502f43d2e111f in task-service has been cleanup successfully" Mar 17 17:43:38.981845 containerd[1511]: time="2025-03-17T17:43:38.981472494Z" level=info msg="TearDown network for sandbox \"605a348218e0107d12ff4cea59eb6ff512155f4808ce20dbf8845e1c2ca94a38\" successfully" Mar 17 17:43:38.981879 containerd[1511]: time="2025-03-17T17:43:38.981842939Z" level=info msg="StopPodSandbox for \"605a348218e0107d12ff4cea59eb6ff512155f4808ce20dbf8845e1c2ca94a38\" returns successfully" Mar 17 17:43:38.982045 kubelet[2608]: E0317 17:43:38.982022 2608 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:43:38.984257 containerd[1511]: time="2025-03-17T17:43:38.982255525Z" level=info msg="TearDown network for sandbox \"73bb621c1b0eab504f437f6a4408c5e08f9fdfca9359426c070502f43d2e111f\" successfully" Mar 17 17:43:38.984257 containerd[1511]: time="2025-03-17T17:43:38.982275292Z" level=info msg="StopPodSandbox for \"73bb621c1b0eab504f437f6a4408c5e08f9fdfca9359426c070502f43d2e111f\" returns successfully" Mar 17 17:43:38.984257 containerd[1511]: time="2025-03-17T17:43:38.982702475Z" level=info msg="StopPodSandbox for \"3c32fc94639c6ccbeaf0ea399aeb844a6b759e7294e28e5ddf561deaf656ac0a\"" Mar 17 17:43:38.984257 containerd[1511]: time="2025-03-17T17:43:38.982807183Z" level=info msg="TearDown network for sandbox \"3c32fc94639c6ccbeaf0ea399aeb844a6b759e7294e28e5ddf561deaf656ac0a\" successfully" Mar 17 17:43:38.984257 containerd[1511]: time="2025-03-17T17:43:38.982818835Z" level=info msg="StopPodSandbox for \"3c32fc94639c6ccbeaf0ea399aeb844a6b759e7294e28e5ddf561deaf656ac0a\" returns successfully" Mar 17 17:43:38.984257 containerd[1511]: 
time="2025-03-17T17:43:38.983000847Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-wp522,Uid:92c87a7e-fef7-4c26-ab3b-4e94dca0e582,Namespace:kube-system,Attempt:6,}" Mar 17 17:43:38.984257 containerd[1511]: time="2025-03-17T17:43:38.983763700Z" level=info msg="StopPodSandbox for \"4995d670e3f4ae5f83d2ae9fdfeb42a7c2819afc980a6f20207b35f48e319200\"" Mar 17 17:43:38.984257 containerd[1511]: time="2025-03-17T17:43:38.983859331Z" level=info msg="TearDown network for sandbox \"4995d670e3f4ae5f83d2ae9fdfeb42a7c2819afc980a6f20207b35f48e319200\" successfully" Mar 17 17:43:38.984257 containerd[1511]: time="2025-03-17T17:43:38.983875391Z" level=info msg="StopPodSandbox for \"4995d670e3f4ae5f83d2ae9fdfeb42a7c2819afc980a6f20207b35f48e319200\" returns successfully" Mar 17 17:43:38.984501 containerd[1511]: time="2025-03-17T17:43:38.984306081Z" level=info msg="StopPodSandbox for \"26ba0aa2321805999e40ba2fc413b837bfc85df793e284ed1b86c369f534a215\"" Mar 17 17:43:38.984501 containerd[1511]: time="2025-03-17T17:43:38.984406128Z" level=info msg="TearDown network for sandbox \"26ba0aa2321805999e40ba2fc413b837bfc85df793e284ed1b86c369f534a215\" successfully" Mar 17 17:43:38.984501 containerd[1511]: time="2025-03-17T17:43:38.984417840Z" level=info msg="StopPodSandbox for \"26ba0aa2321805999e40ba2fc413b837bfc85df793e284ed1b86c369f534a215\" returns successfully" Mar 17 17:43:38.984808 containerd[1511]: time="2025-03-17T17:43:38.984779891Z" level=info msg="StopPodSandbox for \"b9037660650fef2882c30e4a390d90ea5d95ff351ca1fe4fef4c05e1892a40bf\"" Mar 17 17:43:38.984932 containerd[1511]: time="2025-03-17T17:43:38.984870832Z" level=info msg="TearDown network for sandbox \"b9037660650fef2882c30e4a390d90ea5d95ff351ca1fe4fef4c05e1892a40bf\" successfully" Mar 17 17:43:38.984932 containerd[1511]: time="2025-03-17T17:43:38.984887513Z" level=info msg="StopPodSandbox for \"b9037660650fef2882c30e4a390d90ea5d95ff351ca1fe4fef4c05e1892a40bf\" returns successfully" Mar 17 
17:43:38.989325 containerd[1511]: time="2025-03-17T17:43:38.988778618Z" level=info msg="StopPodSandbox for \"b3416c553b8ee2ddd7115ce7b04c86d519dee7b4f6a9ee2e1a0c06779a132290\"" Mar 17 17:43:38.989325 containerd[1511]: time="2025-03-17T17:43:38.988894927Z" level=info msg="TearDown network for sandbox \"b3416c553b8ee2ddd7115ce7b04c86d519dee7b4f6a9ee2e1a0c06779a132290\" successfully" Mar 17 17:43:38.989325 containerd[1511]: time="2025-03-17T17:43:38.988935674Z" level=info msg="StopPodSandbox for \"b3416c553b8ee2ddd7115ce7b04c86d519dee7b4f6a9ee2e1a0c06779a132290\" returns successfully" Mar 17 17:43:38.989568 containerd[1511]: time="2025-03-17T17:43:38.989445301Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-j8ss7,Uid:8ed5b12a-6d88-43a8-8215-c1e4e9724067,Namespace:calico-system,Attempt:6,}" Mar 17 17:43:38.990322 kubelet[2608]: I0317 17:43:38.989919 2608 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b70888017c9929fd44585839e79bef8b8018f5bb950a2aa4f960f242ce269b91" Mar 17 17:43:38.990813 containerd[1511]: time="2025-03-17T17:43:38.990603619Z" level=info msg="StopPodSandbox for \"b70888017c9929fd44585839e79bef8b8018f5bb950a2aa4f960f242ce269b91\"" Mar 17 17:43:38.990883 containerd[1511]: time="2025-03-17T17:43:38.990821919Z" level=info msg="Ensure that sandbox b70888017c9929fd44585839e79bef8b8018f5bb950a2aa4f960f242ce269b91 in task-service has been cleanup successfully" Mar 17 17:43:38.991036 containerd[1511]: time="2025-03-17T17:43:38.991018188Z" level=info msg="TearDown network for sandbox \"b70888017c9929fd44585839e79bef8b8018f5bb950a2aa4f960f242ce269b91\" successfully" Mar 17 17:43:38.991036 containerd[1511]: time="2025-03-17T17:43:38.991033467Z" level=info msg="StopPodSandbox for \"b70888017c9929fd44585839e79bef8b8018f5bb950a2aa4f960f242ce269b91\" returns successfully" Mar 17 17:43:38.991469 containerd[1511]: time="2025-03-17T17:43:38.991429401Z" level=info msg="StopPodSandbox for 
\"dadd7feac938af28c5af276dc4bc8b7db7afc807fcad2a12367eab948774b513\"" Mar 17 17:43:38.991582 containerd[1511]: time="2025-03-17T17:43:38.991530672Z" level=info msg="TearDown network for sandbox \"dadd7feac938af28c5af276dc4bc8b7db7afc807fcad2a12367eab948774b513\" successfully" Mar 17 17:43:38.991582 containerd[1511]: time="2025-03-17T17:43:38.991577600Z" level=info msg="StopPodSandbox for \"dadd7feac938af28c5af276dc4bc8b7db7afc807fcad2a12367eab948774b513\" returns successfully" Mar 17 17:43:38.991916 containerd[1511]: time="2025-03-17T17:43:38.991890819Z" level=info msg="StopPodSandbox for \"de0bb5eed656cfb50f81396bff87ac24dd133f83c813548d59ca3102fe00ab61\"" Mar 17 17:43:38.991994 containerd[1511]: time="2025-03-17T17:43:38.991974446Z" level=info msg="TearDown network for sandbox \"de0bb5eed656cfb50f81396bff87ac24dd133f83c813548d59ca3102fe00ab61\" successfully" Mar 17 17:43:38.992024 containerd[1511]: time="2025-03-17T17:43:38.991990456Z" level=info msg="StopPodSandbox for \"de0bb5eed656cfb50f81396bff87ac24dd133f83c813548d59ca3102fe00ab61\" returns successfully" Mar 17 17:43:38.992384 containerd[1511]: time="2025-03-17T17:43:38.992346004Z" level=info msg="StopPodSandbox for \"505868a5c076596de5c7575902f86b043b6cdab84fc363153c09421557adfba8\"" Mar 17 17:43:38.992464 containerd[1511]: time="2025-03-17T17:43:38.992444680Z" level=info msg="TearDown network for sandbox \"505868a5c076596de5c7575902f86b043b6cdab84fc363153c09421557adfba8\" successfully" Mar 17 17:43:38.992464 containerd[1511]: time="2025-03-17T17:43:38.992460500Z" level=info msg="StopPodSandbox for \"505868a5c076596de5c7575902f86b043b6cdab84fc363153c09421557adfba8\" returns successfully" Mar 17 17:43:38.992712 containerd[1511]: time="2025-03-17T17:43:38.992679532Z" level=info msg="StopPodSandbox for \"e1822912a2888b680ea7f9bf3d6dfd91c2c6eda94d136c98ac1343d40214f70b\"" Mar 17 17:43:38.992780 containerd[1511]: time="2025-03-17T17:43:38.992760514Z" level=info msg="TearDown network for sandbox 
\"e1822912a2888b680ea7f9bf3d6dfd91c2c6eda94d136c98ac1343d40214f70b\" successfully" Mar 17 17:43:38.992780 containerd[1511]: time="2025-03-17T17:43:38.992772787Z" level=info msg="StopPodSandbox for \"e1822912a2888b680ea7f9bf3d6dfd91c2c6eda94d136c98ac1343d40214f70b\" returns successfully" Mar 17 17:43:38.993162 containerd[1511]: time="2025-03-17T17:43:38.993110442Z" level=info msg="StopPodSandbox for \"4ce107ba3ec9f31d7b3420825039f0a23e3438346875ccd104dc7f519f537446\"" Mar 17 17:43:38.993307 containerd[1511]: time="2025-03-17T17:43:38.993255474Z" level=info msg="TearDown network for sandbox \"4ce107ba3ec9f31d7b3420825039f0a23e3438346875ccd104dc7f519f537446\" successfully" Mar 17 17:43:38.993307 containerd[1511]: time="2025-03-17T17:43:38.993302924Z" level=info msg="StopPodSandbox for \"4ce107ba3ec9f31d7b3420825039f0a23e3438346875ccd104dc7f519f537446\" returns successfully" Mar 17 17:43:38.993726 containerd[1511]: time="2025-03-17T17:43:38.993687165Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6cbddc9666-8jnnv,Uid:c27ab8c8-4886-4cfa-ac38-ef82b827b394,Namespace:calico-apiserver,Attempt:6,}" Mar 17 17:43:38.994281 kubelet[2608]: I0317 17:43:38.993919 2608 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="03f1b883f3ce8c49936e808421e7df92873d621d70bfc03a9ce94e5d0180a08c" Mar 17 17:43:38.994489 containerd[1511]: time="2025-03-17T17:43:38.994455590Z" level=info msg="StopPodSandbox for \"03f1b883f3ce8c49936e808421e7df92873d621d70bfc03a9ce94e5d0180a08c\"" Mar 17 17:43:38.994641 containerd[1511]: time="2025-03-17T17:43:38.994621122Z" level=info msg="Ensure that sandbox 03f1b883f3ce8c49936e808421e7df92873d621d70bfc03a9ce94e5d0180a08c in task-service has been cleanup successfully" Mar 17 17:43:38.994824 containerd[1511]: time="2025-03-17T17:43:38.994805038Z" level=info msg="TearDown network for sandbox \"03f1b883f3ce8c49936e808421e7df92873d621d70bfc03a9ce94e5d0180a08c\" successfully" Mar 17 17:43:38.994824 
containerd[1511]: time="2025-03-17T17:43:38.994821809Z" level=info msg="StopPodSandbox for \"03f1b883f3ce8c49936e808421e7df92873d621d70bfc03a9ce94e5d0180a08c\" returns successfully" Mar 17 17:43:38.995096 containerd[1511]: time="2025-03-17T17:43:38.995066930Z" level=info msg="StopPodSandbox for \"46d27bee2aac9f8ea5afa69e476f1d7708c4be9375e81449399c6da9d57409f0\"" Mar 17 17:43:38.995670 containerd[1511]: time="2025-03-17T17:43:38.995162259Z" level=info msg="TearDown network for sandbox \"46d27bee2aac9f8ea5afa69e476f1d7708c4be9375e81449399c6da9d57409f0\" successfully" Mar 17 17:43:38.995670 containerd[1511]: time="2025-03-17T17:43:38.995179060Z" level=info msg="StopPodSandbox for \"46d27bee2aac9f8ea5afa69e476f1d7708c4be9375e81449399c6da9d57409f0\" returns successfully" Mar 17 17:43:38.995670 containerd[1511]: time="2025-03-17T17:43:38.995443557Z" level=info msg="StopPodSandbox for \"a0631bf50bf3a143732dff262d9b57d7825ff44db988989892670e8676df101b\"" Mar 17 17:43:38.995670 containerd[1511]: time="2025-03-17T17:43:38.995554266Z" level=info msg="TearDown network for sandbox \"a0631bf50bf3a143732dff262d9b57d7825ff44db988989892670e8676df101b\" successfully" Mar 17 17:43:38.995670 containerd[1511]: time="2025-03-17T17:43:38.995571548Z" level=info msg="StopPodSandbox for \"a0631bf50bf3a143732dff262d9b57d7825ff44db988989892670e8676df101b\" returns successfully" Mar 17 17:43:38.995901 containerd[1511]: time="2025-03-17T17:43:38.995877092Z" level=info msg="StopPodSandbox for \"9247361ac2ade69977479b8eda935f31e0c380b93fd00fee4265ac9e0a0ed408\"" Mar 17 17:43:38.996153 containerd[1511]: time="2025-03-17T17:43:38.996116182Z" level=info msg="TearDown network for sandbox \"9247361ac2ade69977479b8eda935f31e0c380b93fd00fee4265ac9e0a0ed408\" successfully" Mar 17 17:43:38.996211 containerd[1511]: time="2025-03-17T17:43:38.996169693Z" level=info msg="StopPodSandbox for \"9247361ac2ade69977479b8eda935f31e0c380b93fd00fee4265ac9e0a0ed408\" returns successfully" Mar 17 17:43:38.996804 
kubelet[2608]: E0317 17:43:38.996767 2608 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:43:39.000904 containerd[1511]: time="2025-03-17T17:43:39.000843579Z" level=info msg="StopPodSandbox for \"c727faf86979f098f3854e7a292376485b8fdcffd8e4a361726d0eb4ed6c4541\"" Mar 17 17:43:39.001023 containerd[1511]: time="2025-03-17T17:43:39.000975977Z" level=info msg="TearDown network for sandbox \"c727faf86979f098f3854e7a292376485b8fdcffd8e4a361726d0eb4ed6c4541\" successfully" Mar 17 17:43:39.001023 containerd[1511]: time="2025-03-17T17:43:39.000988822Z" level=info msg="StopPodSandbox for \"c727faf86979f098f3854e7a292376485b8fdcffd8e4a361726d0eb4ed6c4541\" returns successfully" Mar 17 17:43:39.001885 containerd[1511]: time="2025-03-17T17:43:39.001856342Z" level=info msg="StopPodSandbox for \"22d2b9a804aff601d7932c4460691afa93c66169fac5c4cfe51532162fb0b3e0\"" Mar 17 17:43:39.001976 containerd[1511]: time="2025-03-17T17:43:39.001947393Z" level=info msg="TearDown network for sandbox \"22d2b9a804aff601d7932c4460691afa93c66169fac5c4cfe51532162fb0b3e0\" successfully" Mar 17 17:43:39.001976 containerd[1511]: time="2025-03-17T17:43:39.001958634Z" level=info msg="StopPodSandbox for \"22d2b9a804aff601d7932c4460691afa93c66169fac5c4cfe51532162fb0b3e0\" returns successfully" Mar 17 17:43:39.003178 containerd[1511]: time="2025-03-17T17:43:39.003058702Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-86f7c466bd-bf4pr,Uid:bb999740-3eb7-4d5d-b75d-a6ed26b4fcf7,Namespace:calico-system,Attempt:6,}" Mar 17 17:43:39.004864 kubelet[2608]: I0317 17:43:39.004799 2608 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a625cb55d3f0798411969a54cfa3b06acd6337b71463192ab198ee24f2b40587" Mar 17 17:43:39.005888 containerd[1511]: time="2025-03-17T17:43:39.005818740Z" level=info msg="StopPodSandbox for 
\"a625cb55d3f0798411969a54cfa3b06acd6337b71463192ab198ee24f2b40587\"" Mar 17 17:43:39.007331 containerd[1511]: time="2025-03-17T17:43:39.007299513Z" level=info msg="Ensure that sandbox a625cb55d3f0798411969a54cfa3b06acd6337b71463192ab198ee24f2b40587 in task-service has been cleanup successfully" Mar 17 17:43:39.007784 containerd[1511]: time="2025-03-17T17:43:39.007686160Z" level=info msg="TearDown network for sandbox \"a625cb55d3f0798411969a54cfa3b06acd6337b71463192ab198ee24f2b40587\" successfully" Mar 17 17:43:39.007784 containerd[1511]: time="2025-03-17T17:43:39.007705156Z" level=info msg="StopPodSandbox for \"a625cb55d3f0798411969a54cfa3b06acd6337b71463192ab198ee24f2b40587\" returns successfully" Mar 17 17:43:39.008261 containerd[1511]: time="2025-03-17T17:43:39.008118153Z" level=info msg="StopPodSandbox for \"2b9c986ecac5a820435cbfa9e312ad5f5736fd10e27620edeccecc76eb005ab1\"" Mar 17 17:43:39.008382 containerd[1511]: time="2025-03-17T17:43:39.008364626Z" level=info msg="TearDown network for sandbox \"2b9c986ecac5a820435cbfa9e312ad5f5736fd10e27620edeccecc76eb005ab1\" successfully" Mar 17 17:43:39.008591 containerd[1511]: time="2025-03-17T17:43:39.008548381Z" level=info msg="StopPodSandbox for \"2b9c986ecac5a820435cbfa9e312ad5f5736fd10e27620edeccecc76eb005ab1\" returns successfully" Mar 17 17:43:39.008760 kubelet[2608]: I0317 17:43:39.008718 2608 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d02b3133b2cc39eef5164fd20986920d81038c967b1c062931d800ccbd0fd0e2" Mar 17 17:43:39.009014 containerd[1511]: time="2025-03-17T17:43:39.008886527Z" level=info msg="StopPodSandbox for \"d6067ee3ef3f1c8458c7806e1fd60848edb9e6130cf9c3a2b62cb333064bae52\"" Mar 17 17:43:39.009014 containerd[1511]: time="2025-03-17T17:43:39.008965034Z" level=info msg="TearDown network for sandbox \"d6067ee3ef3f1c8458c7806e1fd60848edb9e6130cf9c3a2b62cb333064bae52\" successfully" Mar 17 17:43:39.009014 containerd[1511]: time="2025-03-17T17:43:39.008975293Z" 
level=info msg="StopPodSandbox for \"d6067ee3ef3f1c8458c7806e1fd60848edb9e6130cf9c3a2b62cb333064bae52\" returns successfully" Mar 17 17:43:39.009222 containerd[1511]: time="2025-03-17T17:43:39.009192491Z" level=info msg="StopPodSandbox for \"d02b3133b2cc39eef5164fd20986920d81038c967b1c062931d800ccbd0fd0e2\"" Mar 17 17:43:39.009436 containerd[1511]: time="2025-03-17T17:43:39.009416102Z" level=info msg="Ensure that sandbox d02b3133b2cc39eef5164fd20986920d81038c967b1c062931d800ccbd0fd0e2 in task-service has been cleanup successfully" Mar 17 17:43:39.009732 containerd[1511]: time="2025-03-17T17:43:39.009656654Z" level=info msg="TearDown network for sandbox \"d02b3133b2cc39eef5164fd20986920d81038c967b1c062931d800ccbd0fd0e2\" successfully" Mar 17 17:43:39.009732 containerd[1511]: time="2025-03-17T17:43:39.009720835Z" level=info msg="StopPodSandbox for \"d02b3133b2cc39eef5164fd20986920d81038c967b1c062931d800ccbd0fd0e2\" returns successfully" Mar 17 17:43:39.010025 containerd[1511]: time="2025-03-17T17:43:39.009870776Z" level=info msg="StopPodSandbox for \"af73a63aa645f1d51633cc194edb5fce001c89b5bb20b391b65014c31e4f0c4f\"" Mar 17 17:43:39.010050 containerd[1511]: time="2025-03-17T17:43:39.010022802Z" level=info msg="StopPodSandbox for \"f5f9f9783e0b5b866f4eaa0562e32aef18c69ba4d0cce64b35940cd8c51f233a\"" Mar 17 17:43:39.010651 containerd[1511]: time="2025-03-17T17:43:39.010110657Z" level=info msg="TearDown network for sandbox \"f5f9f9783e0b5b866f4eaa0562e32aef18c69ba4d0cce64b35940cd8c51f233a\" successfully" Mar 17 17:43:39.010651 containerd[1511]: time="2025-03-17T17:43:39.010165261Z" level=info msg="StopPodSandbox for \"f5f9f9783e0b5b866f4eaa0562e32aef18c69ba4d0cce64b35940cd8c51f233a\" returns successfully" Mar 17 17:43:39.010651 containerd[1511]: time="2025-03-17T17:43:39.010433755Z" level=info msg="StopPodSandbox for \"ad136815a7bcf9fb1ccac4aab59793996e94f29292589abc3eed912ef7bb109d\"" Mar 17 17:43:39.010651 containerd[1511]: time="2025-03-17T17:43:39.010523844Z" 
level=info msg="TearDown network for sandbox \"ad136815a7bcf9fb1ccac4aab59793996e94f29292589abc3eed912ef7bb109d\" successfully" Mar 17 17:43:39.010651 containerd[1511]: time="2025-03-17T17:43:39.010535235Z" level=info msg="StopPodSandbox for \"ad136815a7bcf9fb1ccac4aab59793996e94f29292589abc3eed912ef7bb109d\" returns successfully" Mar 17 17:43:39.010843 containerd[1511]: time="2025-03-17T17:43:39.010768153Z" level=info msg="StopPodSandbox for \"f00b51c06514b707b4929dbc9613099efe7622f3f5df69b8af03440ae18de24e\"" Mar 17 17:43:39.010877 containerd[1511]: time="2025-03-17T17:43:39.010864675Z" level=info msg="TearDown network for sandbox \"f00b51c06514b707b4929dbc9613099efe7622f3f5df69b8af03440ae18de24e\" successfully" Mar 17 17:43:39.010912 containerd[1511]: time="2025-03-17T17:43:39.010877379Z" level=info msg="StopPodSandbox for \"f00b51c06514b707b4929dbc9613099efe7622f3f5df69b8af03440ae18de24e\" returns successfully" Mar 17 17:43:39.011123 containerd[1511]: time="2025-03-17T17:43:39.011087964Z" level=info msg="StopPodSandbox for \"a6fd5ae8d8c37d968bd04700971eb2886495554432a091f2befb520f8b34c501\"" Mar 17 17:43:39.011228 containerd[1511]: time="2025-03-17T17:43:39.011181561Z" level=info msg="TearDown network for sandbox \"a6fd5ae8d8c37d968bd04700971eb2886495554432a091f2befb520f8b34c501\" successfully" Mar 17 17:43:39.011228 containerd[1511]: time="2025-03-17T17:43:39.011199434Z" level=info msg="StopPodSandbox for \"a6fd5ae8d8c37d968bd04700971eb2886495554432a091f2befb520f8b34c501\" returns successfully" Mar 17 17:43:39.011878 containerd[1511]: time="2025-03-17T17:43:39.011854906Z" level=info msg="StopPodSandbox for \"c0357f0524099885f06d4a7418831e7579b998c419311c26aa34d8779c42dc52\"" Mar 17 17:43:39.012004 containerd[1511]: time="2025-03-17T17:43:39.011970353Z" level=info msg="TearDown network for sandbox \"c0357f0524099885f06d4a7418831e7579b998c419311c26aa34d8779c42dc52\" successfully" Mar 17 17:43:39.012004 containerd[1511]: time="2025-03-17T17:43:39.011981504Z" 
level=info msg="StopPodSandbox for \"c0357f0524099885f06d4a7418831e7579b998c419311c26aa34d8779c42dc52\" returns successfully" Mar 17 17:43:39.012299 kubelet[2608]: E0317 17:43:39.012268 2608 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:43:39.012754 containerd[1511]: time="2025-03-17T17:43:39.012590429Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-qmtlq,Uid:9de23c76-70f0-4fa7-aa26-f471719ff480,Namespace:kube-system,Attempt:6,}" Mar 17 17:43:39.019054 containerd[1511]: time="2025-03-17T17:43:39.018897283Z" level=info msg="TearDown network for sandbox \"af73a63aa645f1d51633cc194edb5fce001c89b5bb20b391b65014c31e4f0c4f\" successfully" Mar 17 17:43:39.019054 containerd[1511]: time="2025-03-17T17:43:39.018926448Z" level=info msg="StopPodSandbox for \"af73a63aa645f1d51633cc194edb5fce001c89b5bb20b391b65014c31e4f0c4f\" returns successfully" Mar 17 17:43:39.019346 containerd[1511]: time="2025-03-17T17:43:39.019304539Z" level=info msg="StopPodSandbox for \"2cb67c065fae8b5c1bdba86659bed6455408568df13f798feb182fef9902a6fd\"" Mar 17 17:43:39.019434 containerd[1511]: time="2025-03-17T17:43:39.019405949Z" level=info msg="TearDown network for sandbox \"2cb67c065fae8b5c1bdba86659bed6455408568df13f798feb182fef9902a6fd\" successfully" Mar 17 17:43:39.019434 containerd[1511]: time="2025-03-17T17:43:39.019425055Z" level=info msg="StopPodSandbox for \"2cb67c065fae8b5c1bdba86659bed6455408568df13f798feb182fef9902a6fd\" returns successfully" Mar 17 17:43:39.019731 containerd[1511]: time="2025-03-17T17:43:39.019709209Z" level=info msg="StopPodSandbox for \"a76d3257dd8be43a7cc3bd31473414beabbebb92e09df84988345f517c7acdb6\"" Mar 17 17:43:39.019815 containerd[1511]: time="2025-03-17T17:43:39.019794089Z" level=info msg="TearDown network for sandbox \"a76d3257dd8be43a7cc3bd31473414beabbebb92e09df84988345f517c7acdb6\" 
successfully" Mar 17 17:43:39.019815 containerd[1511]: time="2025-03-17T17:43:39.019808736Z" level=info msg="StopPodSandbox for \"a76d3257dd8be43a7cc3bd31473414beabbebb92e09df84988345f517c7acdb6\" returns successfully" Mar 17 17:43:39.020388 containerd[1511]: time="2025-03-17T17:43:39.020340115Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6cbddc9666-fgfrs,Uid:b74841bb-3d21-4734-85da-f48ab60f9d98,Namespace:calico-apiserver,Attempt:6,}" Mar 17 17:43:39.231602 systemd[1]: run-netns-cni\x2d4a56454d\x2dfca9\x2d322a\x2d5089\x2d88d005ab0d87.mount: Deactivated successfully. Mar 17 17:43:39.231714 systemd[1]: run-netns-cni\x2d7e69cb3b\x2d6622\x2d1d34\x2d0bcc\x2d4ef526154734.mount: Deactivated successfully. Mar 17 17:43:39.231788 systemd[1]: run-netns-cni\x2df6883836\x2d483d\x2d2a19\x2d8633\x2da2685ec399d4.mount: Deactivated successfully. Mar 17 17:43:39.231871 systemd[1]: run-netns-cni\x2d38305f81\x2dc3a4\x2dff76\x2d7cb1\x2da4740dbce94c.mount: Deactivated successfully. Mar 17 17:43:39.231947 systemd[1]: run-netns-cni\x2da3baf4f3\x2d631b\x2d9624\x2d4fc4\x2d57938c0a092b.mount: Deactivated successfully. Mar 17 17:43:39.232026 systemd[1]: run-netns-cni\x2d78e3ee33\x2dc0ca\x2d5ca2\x2dcdf7\x2d96e5eeba7cc2.mount: Deactivated successfully. Mar 17 17:43:39.232098 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-b354ba4755a488c2027d000cfd3bd6e154695e02f9e7e99277c2a943c716e919-shm.mount: Deactivated successfully. 
Mar 17 17:43:39.479040 systemd-networkd[1424]: cali1f184a4eeaa: Link UP Mar 17 17:43:39.479719 systemd-networkd[1424]: cali1f184a4eeaa: Gained carrier Mar 17 17:43:39.488884 kubelet[2608]: I0317 17:43:39.488309 2608 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-hkj5m" podStartSLOduration=2.667785727 podStartE2EDuration="20.488289018s" podCreationTimestamp="2025-03-17 17:43:19 +0000 UTC" firstStartedPulling="2025-03-17 17:43:20.050868147 +0000 UTC m=+16.460227960" lastFinishedPulling="2025-03-17 17:43:37.871371437 +0000 UTC m=+34.280731251" observedRunningTime="2025-03-17 17:43:39.060961447 +0000 UTC m=+35.470321290" watchObservedRunningTime="2025-03-17 17:43:39.488289018 +0000 UTC m=+35.897648832" Mar 17 17:43:39.491809 containerd[1511]: 2025-03-17 17:43:39.199 [INFO][4870] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Mar 17 17:43:39.491809 containerd[1511]: 2025-03-17 17:43:39.308 [INFO][4870] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--j8ss7-eth0 csi-node-driver- calico-system 8ed5b12a-6d88-43a8-8215-c1e4e9724067 647 0 2025-03-17 17:43:19 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:568c96974f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-j8ss7 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali1f184a4eeaa [] []}} ContainerID="a09ea0fe61c0ec448dd343589a360e3f4483cec524922b4aa012048549484fe9" Namespace="calico-system" Pod="csi-node-driver-j8ss7" WorkloadEndpoint="localhost-k8s-csi--node--driver--j8ss7-" Mar 17 17:43:39.491809 containerd[1511]: 2025-03-17 17:43:39.308 [INFO][4870] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s 
ContainerID="a09ea0fe61c0ec448dd343589a360e3f4483cec524922b4aa012048549484fe9" Namespace="calico-system" Pod="csi-node-driver-j8ss7" WorkloadEndpoint="localhost-k8s-csi--node--driver--j8ss7-eth0" Mar 17 17:43:39.491809 containerd[1511]: 2025-03-17 17:43:39.398 [INFO][4937] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a09ea0fe61c0ec448dd343589a360e3f4483cec524922b4aa012048549484fe9" HandleID="k8s-pod-network.a09ea0fe61c0ec448dd343589a360e3f4483cec524922b4aa012048549484fe9" Workload="localhost-k8s-csi--node--driver--j8ss7-eth0" Mar 17 17:43:39.491809 containerd[1511]: 2025-03-17 17:43:39.407 [INFO][4937] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="a09ea0fe61c0ec448dd343589a360e3f4483cec524922b4aa012048549484fe9" HandleID="k8s-pod-network.a09ea0fe61c0ec448dd343589a360e3f4483cec524922b4aa012048549484fe9" Workload="localhost-k8s-csi--node--driver--j8ss7-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000265d20), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-j8ss7", "timestamp":"2025-03-17 17:43:39.398081443 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Mar 17 17:43:39.491809 containerd[1511]: 2025-03-17 17:43:39.407 [INFO][4937] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Mar 17 17:43:39.491809 containerd[1511]: 2025-03-17 17:43:39.407 [INFO][4937] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Mar 17 17:43:39.491809 containerd[1511]: 2025-03-17 17:43:39.407 [INFO][4937] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 17 17:43:39.491809 containerd[1511]: 2025-03-17 17:43:39.410 [INFO][4937] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.a09ea0fe61c0ec448dd343589a360e3f4483cec524922b4aa012048549484fe9" host="localhost" Mar 17 17:43:39.491809 containerd[1511]: 2025-03-17 17:43:39.414 [INFO][4937] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Mar 17 17:43:39.491809 containerd[1511]: 2025-03-17 17:43:39.418 [INFO][4937] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Mar 17 17:43:39.491809 containerd[1511]: 2025-03-17 17:43:39.420 [INFO][4937] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 17 17:43:39.491809 containerd[1511]: 2025-03-17 17:43:39.421 [INFO][4937] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 17 17:43:39.491809 containerd[1511]: 2025-03-17 17:43:39.421 [INFO][4937] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.a09ea0fe61c0ec448dd343589a360e3f4483cec524922b4aa012048549484fe9" host="localhost" Mar 17 17:43:39.491809 containerd[1511]: 2025-03-17 17:43:39.422 [INFO][4937] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.a09ea0fe61c0ec448dd343589a360e3f4483cec524922b4aa012048549484fe9 Mar 17 17:43:39.491809 containerd[1511]: 2025-03-17 17:43:39.456 [INFO][4937] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.a09ea0fe61c0ec448dd343589a360e3f4483cec524922b4aa012048549484fe9" host="localhost" Mar 17 17:43:39.491809 containerd[1511]: 2025-03-17 17:43:39.462 [INFO][4937] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 
handle="k8s-pod-network.a09ea0fe61c0ec448dd343589a360e3f4483cec524922b4aa012048549484fe9" host="localhost" Mar 17 17:43:39.491809 containerd[1511]: 2025-03-17 17:43:39.462 [INFO][4937] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.a09ea0fe61c0ec448dd343589a360e3f4483cec524922b4aa012048549484fe9" host="localhost" Mar 17 17:43:39.491809 containerd[1511]: 2025-03-17 17:43:39.462 [INFO][4937] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Mar 17 17:43:39.491809 containerd[1511]: 2025-03-17 17:43:39.462 [INFO][4937] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="a09ea0fe61c0ec448dd343589a360e3f4483cec524922b4aa012048549484fe9" HandleID="k8s-pod-network.a09ea0fe61c0ec448dd343589a360e3f4483cec524922b4aa012048549484fe9" Workload="localhost-k8s-csi--node--driver--j8ss7-eth0" Mar 17 17:43:39.492597 containerd[1511]: 2025-03-17 17:43:39.466 [INFO][4870] cni-plugin/k8s.go 386: Populated endpoint ContainerID="a09ea0fe61c0ec448dd343589a360e3f4483cec524922b4aa012048549484fe9" Namespace="calico-system" Pod="csi-node-driver-j8ss7" WorkloadEndpoint="localhost-k8s-csi--node--driver--j8ss7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--j8ss7-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"8ed5b12a-6d88-43a8-8215-c1e4e9724067", ResourceVersion:"647", Generation:0, CreationTimestamp:time.Date(2025, time.March, 17, 17, 43, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"568c96974f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-j8ss7", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali1f184a4eeaa", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 17 17:43:39.492597 containerd[1511]: 2025-03-17 17:43:39.466 [INFO][4870] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.129/32] ContainerID="a09ea0fe61c0ec448dd343589a360e3f4483cec524922b4aa012048549484fe9" Namespace="calico-system" Pod="csi-node-driver-j8ss7" WorkloadEndpoint="localhost-k8s-csi--node--driver--j8ss7-eth0" Mar 17 17:43:39.492597 containerd[1511]: 2025-03-17 17:43:39.466 [INFO][4870] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1f184a4eeaa ContainerID="a09ea0fe61c0ec448dd343589a360e3f4483cec524922b4aa012048549484fe9" Namespace="calico-system" Pod="csi-node-driver-j8ss7" WorkloadEndpoint="localhost-k8s-csi--node--driver--j8ss7-eth0" Mar 17 17:43:39.492597 containerd[1511]: 2025-03-17 17:43:39.479 [INFO][4870] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a09ea0fe61c0ec448dd343589a360e3f4483cec524922b4aa012048549484fe9" Namespace="calico-system" Pod="csi-node-driver-j8ss7" WorkloadEndpoint="localhost-k8s-csi--node--driver--j8ss7-eth0" Mar 17 17:43:39.492597 containerd[1511]: 2025-03-17 17:43:39.479 [INFO][4870] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="a09ea0fe61c0ec448dd343589a360e3f4483cec524922b4aa012048549484fe9" Namespace="calico-system" 
Pod="csi-node-driver-j8ss7" WorkloadEndpoint="localhost-k8s-csi--node--driver--j8ss7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--j8ss7-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"8ed5b12a-6d88-43a8-8215-c1e4e9724067", ResourceVersion:"647", Generation:0, CreationTimestamp:time.Date(2025, time.March, 17, 17, 43, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"568c96974f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"a09ea0fe61c0ec448dd343589a360e3f4483cec524922b4aa012048549484fe9", Pod:"csi-node-driver-j8ss7", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali1f184a4eeaa", MAC:"36:1d:69:0e:4b:a3", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 17 17:43:39.492597 containerd[1511]: 2025-03-17 17:43:39.488 [INFO][4870] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="a09ea0fe61c0ec448dd343589a360e3f4483cec524922b4aa012048549484fe9" Namespace="calico-system" Pod="csi-node-driver-j8ss7" WorkloadEndpoint="localhost-k8s-csi--node--driver--j8ss7-eth0" Mar 17 17:43:39.541027 systemd-networkd[1424]: 
calic065378b178: Link UP Mar 17 17:43:39.541864 systemd-networkd[1424]: calic065378b178: Gained carrier Mar 17 17:43:39.543206 containerd[1511]: time="2025-03-17T17:43:39.542976670Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 17:43:39.543206 containerd[1511]: time="2025-03-17T17:43:39.543049087Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 17:43:39.543206 containerd[1511]: time="2025-03-17T17:43:39.543059637Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:43:39.543206 containerd[1511]: time="2025-03-17T17:43:39.543135058Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:43:39.561099 containerd[1511]: 2025-03-17 17:43:39.218 [INFO][4880] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Mar 17 17:43:39.561099 containerd[1511]: 2025-03-17 17:43:39.309 [INFO][4880] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--6cbddc9666--8jnnv-eth0 calico-apiserver-6cbddc9666- calico-apiserver c27ab8c8-4886-4cfa-ac38-ef82b827b394 748 0 2025-03-17 17:43:19 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6cbddc9666 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-6cbddc9666-8jnnv eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calic065378b178 [] []}} ContainerID="9c04aba957ad78ee15946206524797e571ffe1d57450fe2ddb40591fe5cf6a4e" Namespace="calico-apiserver" 
Pod="calico-apiserver-6cbddc9666-8jnnv" WorkloadEndpoint="localhost-k8s-calico--apiserver--6cbddc9666--8jnnv-" Mar 17 17:43:39.561099 containerd[1511]: 2025-03-17 17:43:39.309 [INFO][4880] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="9c04aba957ad78ee15946206524797e571ffe1d57450fe2ddb40591fe5cf6a4e" Namespace="calico-apiserver" Pod="calico-apiserver-6cbddc9666-8jnnv" WorkloadEndpoint="localhost-k8s-calico--apiserver--6cbddc9666--8jnnv-eth0" Mar 17 17:43:39.561099 containerd[1511]: 2025-03-17 17:43:39.394 [INFO][4940] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9c04aba957ad78ee15946206524797e571ffe1d57450fe2ddb40591fe5cf6a4e" HandleID="k8s-pod-network.9c04aba957ad78ee15946206524797e571ffe1d57450fe2ddb40591fe5cf6a4e" Workload="localhost-k8s-calico--apiserver--6cbddc9666--8jnnv-eth0" Mar 17 17:43:39.561099 containerd[1511]: 2025-03-17 17:43:39.409 [INFO][4940] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="9c04aba957ad78ee15946206524797e571ffe1d57450fe2ddb40591fe5cf6a4e" HandleID="k8s-pod-network.9c04aba957ad78ee15946206524797e571ffe1d57450fe2ddb40591fe5cf6a4e" Workload="localhost-k8s-calico--apiserver--6cbddc9666--8jnnv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003c4a80), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-6cbddc9666-8jnnv", "timestamp":"2025-03-17 17:43:39.394216198 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Mar 17 17:43:39.561099 containerd[1511]: 2025-03-17 17:43:39.409 [INFO][4940] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Mar 17 17:43:39.561099 containerd[1511]: 2025-03-17 17:43:39.462 [INFO][4940] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Mar 17 17:43:39.561099 containerd[1511]: 2025-03-17 17:43:39.462 [INFO][4940] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 17 17:43:39.561099 containerd[1511]: 2025-03-17 17:43:39.511 [INFO][4940] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.9c04aba957ad78ee15946206524797e571ffe1d57450fe2ddb40591fe5cf6a4e" host="localhost" Mar 17 17:43:39.561099 containerd[1511]: 2025-03-17 17:43:39.516 [INFO][4940] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Mar 17 17:43:39.561099 containerd[1511]: 2025-03-17 17:43:39.519 [INFO][4940] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Mar 17 17:43:39.561099 containerd[1511]: 2025-03-17 17:43:39.521 [INFO][4940] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 17 17:43:39.561099 containerd[1511]: 2025-03-17 17:43:39.523 [INFO][4940] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 17 17:43:39.561099 containerd[1511]: 2025-03-17 17:43:39.523 [INFO][4940] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.9c04aba957ad78ee15946206524797e571ffe1d57450fe2ddb40591fe5cf6a4e" host="localhost" Mar 17 17:43:39.561099 containerd[1511]: 2025-03-17 17:43:39.524 [INFO][4940] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.9c04aba957ad78ee15946206524797e571ffe1d57450fe2ddb40591fe5cf6a4e Mar 17 17:43:39.561099 containerd[1511]: 2025-03-17 17:43:39.528 [INFO][4940] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.9c04aba957ad78ee15946206524797e571ffe1d57450fe2ddb40591fe5cf6a4e" host="localhost" Mar 17 17:43:39.561099 containerd[1511]: 2025-03-17 17:43:39.535 [INFO][4940] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 
handle="k8s-pod-network.9c04aba957ad78ee15946206524797e571ffe1d57450fe2ddb40591fe5cf6a4e" host="localhost" Mar 17 17:43:39.561099 containerd[1511]: 2025-03-17 17:43:39.535 [INFO][4940] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.9c04aba957ad78ee15946206524797e571ffe1d57450fe2ddb40591fe5cf6a4e" host="localhost" Mar 17 17:43:39.561099 containerd[1511]: 2025-03-17 17:43:39.535 [INFO][4940] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Mar 17 17:43:39.561099 containerd[1511]: 2025-03-17 17:43:39.535 [INFO][4940] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="9c04aba957ad78ee15946206524797e571ffe1d57450fe2ddb40591fe5cf6a4e" HandleID="k8s-pod-network.9c04aba957ad78ee15946206524797e571ffe1d57450fe2ddb40591fe5cf6a4e" Workload="localhost-k8s-calico--apiserver--6cbddc9666--8jnnv-eth0" Mar 17 17:43:39.561689 containerd[1511]: 2025-03-17 17:43:39.538 [INFO][4880] cni-plugin/k8s.go 386: Populated endpoint ContainerID="9c04aba957ad78ee15946206524797e571ffe1d57450fe2ddb40591fe5cf6a4e" Namespace="calico-apiserver" Pod="calico-apiserver-6cbddc9666-8jnnv" WorkloadEndpoint="localhost-k8s-calico--apiserver--6cbddc9666--8jnnv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6cbddc9666--8jnnv-eth0", GenerateName:"calico-apiserver-6cbddc9666-", Namespace:"calico-apiserver", SelfLink:"", UID:"c27ab8c8-4886-4cfa-ac38-ef82b827b394", ResourceVersion:"748", Generation:0, CreationTimestamp:time.Date(2025, time.March, 17, 17, 43, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6cbddc9666", "projectcalico.org/namespace":"calico-apiserver", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-6cbddc9666-8jnnv", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic065378b178", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 17 17:43:39.561689 containerd[1511]: 2025-03-17 17:43:39.538 [INFO][4880] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.130/32] ContainerID="9c04aba957ad78ee15946206524797e571ffe1d57450fe2ddb40591fe5cf6a4e" Namespace="calico-apiserver" Pod="calico-apiserver-6cbddc9666-8jnnv" WorkloadEndpoint="localhost-k8s-calico--apiserver--6cbddc9666--8jnnv-eth0" Mar 17 17:43:39.561689 containerd[1511]: 2025-03-17 17:43:39.538 [INFO][4880] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic065378b178 ContainerID="9c04aba957ad78ee15946206524797e571ffe1d57450fe2ddb40591fe5cf6a4e" Namespace="calico-apiserver" Pod="calico-apiserver-6cbddc9666-8jnnv" WorkloadEndpoint="localhost-k8s-calico--apiserver--6cbddc9666--8jnnv-eth0" Mar 17 17:43:39.561689 containerd[1511]: 2025-03-17 17:43:39.541 [INFO][4880] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9c04aba957ad78ee15946206524797e571ffe1d57450fe2ddb40591fe5cf6a4e" Namespace="calico-apiserver" Pod="calico-apiserver-6cbddc9666-8jnnv" WorkloadEndpoint="localhost-k8s-calico--apiserver--6cbddc9666--8jnnv-eth0" Mar 17 17:43:39.561689 containerd[1511]: 2025-03-17 17:43:39.542 [INFO][4880] cni-plugin/k8s.go 414: Added Mac, interface name, and active 
container ID to endpoint ContainerID="9c04aba957ad78ee15946206524797e571ffe1d57450fe2ddb40591fe5cf6a4e" Namespace="calico-apiserver" Pod="calico-apiserver-6cbddc9666-8jnnv" WorkloadEndpoint="localhost-k8s-calico--apiserver--6cbddc9666--8jnnv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6cbddc9666--8jnnv-eth0", GenerateName:"calico-apiserver-6cbddc9666-", Namespace:"calico-apiserver", SelfLink:"", UID:"c27ab8c8-4886-4cfa-ac38-ef82b827b394", ResourceVersion:"748", Generation:0, CreationTimestamp:time.Date(2025, time.March, 17, 17, 43, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6cbddc9666", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"9c04aba957ad78ee15946206524797e571ffe1d57450fe2ddb40591fe5cf6a4e", Pod:"calico-apiserver-6cbddc9666-8jnnv", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic065378b178", MAC:"86:44:49:f2:84:af", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 17 17:43:39.561689 containerd[1511]: 2025-03-17 17:43:39.557 [INFO][4880] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore 
ContainerID="9c04aba957ad78ee15946206524797e571ffe1d57450fe2ddb40591fe5cf6a4e" Namespace="calico-apiserver" Pod="calico-apiserver-6cbddc9666-8jnnv" WorkloadEndpoint="localhost-k8s-calico--apiserver--6cbddc9666--8jnnv-eth0" Mar 17 17:43:39.576396 systemd[1]: Started cri-containerd-a09ea0fe61c0ec448dd343589a360e3f4483cec524922b4aa012048549484fe9.scope - libcontainer container a09ea0fe61c0ec448dd343589a360e3f4483cec524922b4aa012048549484fe9. Mar 17 17:43:39.586684 containerd[1511]: time="2025-03-17T17:43:39.585761960Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 17:43:39.586889 containerd[1511]: time="2025-03-17T17:43:39.586856508Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 17:43:39.586987 containerd[1511]: time="2025-03-17T17:43:39.586965011Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:43:39.587142 containerd[1511]: time="2025-03-17T17:43:39.587116667Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:43:39.592472 systemd-resolved[1339]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 17 17:43:39.607466 containerd[1511]: time="2025-03-17T17:43:39.607427322Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-j8ss7,Uid:8ed5b12a-6d88-43a8-8215-c1e4e9724067,Namespace:calico-system,Attempt:6,} returns sandbox id \"a09ea0fe61c0ec448dd343589a360e3f4483cec524922b4aa012048549484fe9\"" Mar 17 17:43:39.608912 containerd[1511]: time="2025-03-17T17:43:39.608805673Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.2\"" Mar 17 17:43:39.613517 systemd[1]: Started cri-containerd-9c04aba957ad78ee15946206524797e571ffe1d57450fe2ddb40591fe5cf6a4e.scope - libcontainer container 9c04aba957ad78ee15946206524797e571ffe1d57450fe2ddb40591fe5cf6a4e. Mar 17 17:43:39.627575 systemd-resolved[1339]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 17 17:43:39.638043 systemd-networkd[1424]: cali92c0b90e5f0: Link UP Mar 17 17:43:39.638735 systemd-networkd[1424]: cali92c0b90e5f0: Gained carrier Mar 17 17:43:39.653285 containerd[1511]: 2025-03-17 17:43:39.186 [INFO][4858] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Mar 17 17:43:39.653285 containerd[1511]: 2025-03-17 17:43:39.308 [INFO][4858] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--86f7c466bd--bf4pr-eth0 calico-kube-controllers-86f7c466bd- calico-system bb999740-3eb7-4d5d-b75d-a6ed26b4fcf7 745 0 2025-03-17 17:43:19 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:86f7c466bd projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost 
calico-kube-controllers-86f7c466bd-bf4pr eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali92c0b90e5f0 [] []}} ContainerID="7c33bda6b4a4167d8fed9d06828ce10cd2144e893db16b3359c90a1fd82241df" Namespace="calico-system" Pod="calico-kube-controllers-86f7c466bd-bf4pr" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--86f7c466bd--bf4pr-" Mar 17 17:43:39.653285 containerd[1511]: 2025-03-17 17:43:39.308 [INFO][4858] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="7c33bda6b4a4167d8fed9d06828ce10cd2144e893db16b3359c90a1fd82241df" Namespace="calico-system" Pod="calico-kube-controllers-86f7c466bd-bf4pr" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--86f7c466bd--bf4pr-eth0" Mar 17 17:43:39.653285 containerd[1511]: 2025-03-17 17:43:39.399 [INFO][4932] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7c33bda6b4a4167d8fed9d06828ce10cd2144e893db16b3359c90a1fd82241df" HandleID="k8s-pod-network.7c33bda6b4a4167d8fed9d06828ce10cd2144e893db16b3359c90a1fd82241df" Workload="localhost-k8s-calico--kube--controllers--86f7c466bd--bf4pr-eth0" Mar 17 17:43:39.653285 containerd[1511]: 2025-03-17 17:43:39.409 [INFO][4932] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="7c33bda6b4a4167d8fed9d06828ce10cd2144e893db16b3359c90a1fd82241df" HandleID="k8s-pod-network.7c33bda6b4a4167d8fed9d06828ce10cd2144e893db16b3359c90a1fd82241df" Workload="localhost-k8s-calico--kube--controllers--86f7c466bd--bf4pr-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002f5450), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-86f7c466bd-bf4pr", "timestamp":"2025-03-17 17:43:39.399880925 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), 
IntendedUse:"Workload"} Mar 17 17:43:39.653285 containerd[1511]: 2025-03-17 17:43:39.409 [INFO][4932] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Mar 17 17:43:39.653285 containerd[1511]: 2025-03-17 17:43:39.535 [INFO][4932] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Mar 17 17:43:39.653285 containerd[1511]: 2025-03-17 17:43:39.535 [INFO][4932] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 17 17:43:39.653285 containerd[1511]: 2025-03-17 17:43:39.611 [INFO][4932] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.7c33bda6b4a4167d8fed9d06828ce10cd2144e893db16b3359c90a1fd82241df" host="localhost" Mar 17 17:43:39.653285 containerd[1511]: 2025-03-17 17:43:39.616 [INFO][4932] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Mar 17 17:43:39.653285 containerd[1511]: 2025-03-17 17:43:39.620 [INFO][4932] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Mar 17 17:43:39.653285 containerd[1511]: 2025-03-17 17:43:39.621 [INFO][4932] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 17 17:43:39.653285 containerd[1511]: 2025-03-17 17:43:39.623 [INFO][4932] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 17 17:43:39.653285 containerd[1511]: 2025-03-17 17:43:39.623 [INFO][4932] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.7c33bda6b4a4167d8fed9d06828ce10cd2144e893db16b3359c90a1fd82241df" host="localhost" Mar 17 17:43:39.653285 containerd[1511]: 2025-03-17 17:43:39.624 [INFO][4932] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.7c33bda6b4a4167d8fed9d06828ce10cd2144e893db16b3359c90a1fd82241df Mar 17 17:43:39.653285 containerd[1511]: 2025-03-17 17:43:39.628 [INFO][4932] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 
handle="k8s-pod-network.7c33bda6b4a4167d8fed9d06828ce10cd2144e893db16b3359c90a1fd82241df" host="localhost" Mar 17 17:43:39.653285 containerd[1511]: 2025-03-17 17:43:39.632 [INFO][4932] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.7c33bda6b4a4167d8fed9d06828ce10cd2144e893db16b3359c90a1fd82241df" host="localhost" Mar 17 17:43:39.653285 containerd[1511]: 2025-03-17 17:43:39.632 [INFO][4932] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.7c33bda6b4a4167d8fed9d06828ce10cd2144e893db16b3359c90a1fd82241df" host="localhost" Mar 17 17:43:39.653285 containerd[1511]: 2025-03-17 17:43:39.632 [INFO][4932] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Mar 17 17:43:39.653285 containerd[1511]: 2025-03-17 17:43:39.632 [INFO][4932] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="7c33bda6b4a4167d8fed9d06828ce10cd2144e893db16b3359c90a1fd82241df" HandleID="k8s-pod-network.7c33bda6b4a4167d8fed9d06828ce10cd2144e893db16b3359c90a1fd82241df" Workload="localhost-k8s-calico--kube--controllers--86f7c466bd--bf4pr-eth0" Mar 17 17:43:39.655706 containerd[1511]: 2025-03-17 17:43:39.636 [INFO][4858] cni-plugin/k8s.go 386: Populated endpoint ContainerID="7c33bda6b4a4167d8fed9d06828ce10cd2144e893db16b3359c90a1fd82241df" Namespace="calico-system" Pod="calico-kube-controllers-86f7c466bd-bf4pr" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--86f7c466bd--bf4pr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--86f7c466bd--bf4pr-eth0", GenerateName:"calico-kube-controllers-86f7c466bd-", Namespace:"calico-system", SelfLink:"", UID:"bb999740-3eb7-4d5d-b75d-a6ed26b4fcf7", ResourceVersion:"745", Generation:0, CreationTimestamp:time.Date(2025, time.March, 17, 17, 43, 19, 0, 
time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"86f7c466bd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-86f7c466bd-bf4pr", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali92c0b90e5f0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 17 17:43:39.655706 containerd[1511]: 2025-03-17 17:43:39.636 [INFO][4858] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.131/32] ContainerID="7c33bda6b4a4167d8fed9d06828ce10cd2144e893db16b3359c90a1fd82241df" Namespace="calico-system" Pod="calico-kube-controllers-86f7c466bd-bf4pr" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--86f7c466bd--bf4pr-eth0" Mar 17 17:43:39.655706 containerd[1511]: 2025-03-17 17:43:39.636 [INFO][4858] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali92c0b90e5f0 ContainerID="7c33bda6b4a4167d8fed9d06828ce10cd2144e893db16b3359c90a1fd82241df" Namespace="calico-system" Pod="calico-kube-controllers-86f7c466bd-bf4pr" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--86f7c466bd--bf4pr-eth0" Mar 17 17:43:39.655706 containerd[1511]: 2025-03-17 17:43:39.638 [INFO][4858] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="7c33bda6b4a4167d8fed9d06828ce10cd2144e893db16b3359c90a1fd82241df" Namespace="calico-system" Pod="calico-kube-controllers-86f7c466bd-bf4pr" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--86f7c466bd--bf4pr-eth0" Mar 17 17:43:39.655706 containerd[1511]: 2025-03-17 17:43:39.638 [INFO][4858] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="7c33bda6b4a4167d8fed9d06828ce10cd2144e893db16b3359c90a1fd82241df" Namespace="calico-system" Pod="calico-kube-controllers-86f7c466bd-bf4pr" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--86f7c466bd--bf4pr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--86f7c466bd--bf4pr-eth0", GenerateName:"calico-kube-controllers-86f7c466bd-", Namespace:"calico-system", SelfLink:"", UID:"bb999740-3eb7-4d5d-b75d-a6ed26b4fcf7", ResourceVersion:"745", Generation:0, CreationTimestamp:time.Date(2025, time.March, 17, 17, 43, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"86f7c466bd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"7c33bda6b4a4167d8fed9d06828ce10cd2144e893db16b3359c90a1fd82241df", Pod:"calico-kube-controllers-86f7c466bd-bf4pr", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali92c0b90e5f0", MAC:"4e:76:7c:52:62:2b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 17 17:43:39.655706 containerd[1511]: 2025-03-17 17:43:39.649 [INFO][4858] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="7c33bda6b4a4167d8fed9d06828ce10cd2144e893db16b3359c90a1fd82241df" Namespace="calico-system" Pod="calico-kube-controllers-86f7c466bd-bf4pr" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--86f7c466bd--bf4pr-eth0" Mar 17 17:43:39.667202 containerd[1511]: time="2025-03-17T17:43:39.667145029Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6cbddc9666-8jnnv,Uid:c27ab8c8-4886-4cfa-ac38-ef82b827b394,Namespace:calico-apiserver,Attempt:6,} returns sandbox id \"9c04aba957ad78ee15946206524797e571ffe1d57450fe2ddb40591fe5cf6a4e\"" Mar 17 17:43:39.700990 containerd[1511]: time="2025-03-17T17:43:39.700764311Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 17:43:39.701428 containerd[1511]: time="2025-03-17T17:43:39.700869448Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 17:43:39.701428 containerd[1511]: time="2025-03-17T17:43:39.700902650Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:43:39.701428 containerd[1511]: time="2025-03-17T17:43:39.701111182Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:43:39.734996 systemd[1]: Started cri-containerd-7c33bda6b4a4167d8fed9d06828ce10cd2144e893db16b3359c90a1fd82241df.scope - libcontainer container 7c33bda6b4a4167d8fed9d06828ce10cd2144e893db16b3359c90a1fd82241df. 
Mar 17 17:43:39.776321 systemd-resolved[1339]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 17 17:43:39.777011 systemd-networkd[1424]: cali95d79fd5d56: Link UP Mar 17 17:43:39.783675 systemd-networkd[1424]: cali95d79fd5d56: Gained carrier Mar 17 17:43:39.811203 containerd[1511]: 2025-03-17 17:43:39.169 [INFO][4848] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Mar 17 17:43:39.811203 containerd[1511]: 2025-03-17 17:43:39.308 [INFO][4848] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--6f6b679f8f--wp522-eth0 coredns-6f6b679f8f- kube-system 92c87a7e-fef7-4c26-ab3b-4e94dca0e582 747 0 2025-03-17 17:43:08 +0000 UTC map[k8s-app:kube-dns pod-template-hash:6f6b679f8f projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-6f6b679f8f-wp522 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali95d79fd5d56 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="a8e9fc96c314004366b54c8cc19235c26fc854af8f7e1431b932e82f573e854a" Namespace="kube-system" Pod="coredns-6f6b679f8f-wp522" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--wp522-" Mar 17 17:43:39.811203 containerd[1511]: 2025-03-17 17:43:39.309 [INFO][4848] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="a8e9fc96c314004366b54c8cc19235c26fc854af8f7e1431b932e82f573e854a" Namespace="kube-system" Pod="coredns-6f6b679f8f-wp522" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--wp522-eth0" Mar 17 17:43:39.811203 containerd[1511]: 2025-03-17 17:43:39.401 [INFO][4936] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a8e9fc96c314004366b54c8cc19235c26fc854af8f7e1431b932e82f573e854a" HandleID="k8s-pod-network.a8e9fc96c314004366b54c8cc19235c26fc854af8f7e1431b932e82f573e854a" 
Workload="localhost-k8s-coredns--6f6b679f8f--wp522-eth0" Mar 17 17:43:39.811203 containerd[1511]: 2025-03-17 17:43:39.410 [INFO][4936] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="a8e9fc96c314004366b54c8cc19235c26fc854af8f7e1431b932e82f573e854a" HandleID="k8s-pod-network.a8e9fc96c314004366b54c8cc19235c26fc854af8f7e1431b932e82f573e854a" Workload="localhost-k8s-coredns--6f6b679f8f--wp522-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000399440), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-6f6b679f8f-wp522", "timestamp":"2025-03-17 17:43:39.400600177 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Mar 17 17:43:39.811203 containerd[1511]: 2025-03-17 17:43:39.410 [INFO][4936] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Mar 17 17:43:39.811203 containerd[1511]: 2025-03-17 17:43:39.632 [INFO][4936] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Mar 17 17:43:39.811203 containerd[1511]: 2025-03-17 17:43:39.633 [INFO][4936] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 17 17:43:39.811203 containerd[1511]: 2025-03-17 17:43:39.714 [INFO][4936] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.a8e9fc96c314004366b54c8cc19235c26fc854af8f7e1431b932e82f573e854a" host="localhost" Mar 17 17:43:39.811203 containerd[1511]: 2025-03-17 17:43:39.732 [INFO][4936] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Mar 17 17:43:39.811203 containerd[1511]: 2025-03-17 17:43:39.739 [INFO][4936] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Mar 17 17:43:39.811203 containerd[1511]: 2025-03-17 17:43:39.741 [INFO][4936] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 17 17:43:39.811203 containerd[1511]: 2025-03-17 17:43:39.748 [INFO][4936] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 17 17:43:39.811203 containerd[1511]: 2025-03-17 17:43:39.748 [INFO][4936] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.a8e9fc96c314004366b54c8cc19235c26fc854af8f7e1431b932e82f573e854a" host="localhost" Mar 17 17:43:39.811203 containerd[1511]: 2025-03-17 17:43:39.750 [INFO][4936] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.a8e9fc96c314004366b54c8cc19235c26fc854af8f7e1431b932e82f573e854a Mar 17 17:43:39.811203 containerd[1511]: 2025-03-17 17:43:39.756 [INFO][4936] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.a8e9fc96c314004366b54c8cc19235c26fc854af8f7e1431b932e82f573e854a" host="localhost" Mar 17 17:43:39.811203 containerd[1511]: 2025-03-17 17:43:39.761 [INFO][4936] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 
handle="k8s-pod-network.a8e9fc96c314004366b54c8cc19235c26fc854af8f7e1431b932e82f573e854a" host="localhost" Mar 17 17:43:39.811203 containerd[1511]: 2025-03-17 17:43:39.761 [INFO][4936] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.a8e9fc96c314004366b54c8cc19235c26fc854af8f7e1431b932e82f573e854a" host="localhost" Mar 17 17:43:39.811203 containerd[1511]: 2025-03-17 17:43:39.762 [INFO][4936] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Mar 17 17:43:39.811203 containerd[1511]: 2025-03-17 17:43:39.762 [INFO][4936] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="a8e9fc96c314004366b54c8cc19235c26fc854af8f7e1431b932e82f573e854a" HandleID="k8s-pod-network.a8e9fc96c314004366b54c8cc19235c26fc854af8f7e1431b932e82f573e854a" Workload="localhost-k8s-coredns--6f6b679f8f--wp522-eth0" Mar 17 17:43:39.811950 containerd[1511]: 2025-03-17 17:43:39.768 [INFO][4848] cni-plugin/k8s.go 386: Populated endpoint ContainerID="a8e9fc96c314004366b54c8cc19235c26fc854af8f7e1431b932e82f573e854a" Namespace="kube-system" Pod="coredns-6f6b679f8f-wp522" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--wp522-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--6f6b679f8f--wp522-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"92c87a7e-fef7-4c26-ab3b-4e94dca0e582", ResourceVersion:"747", Generation:0, CreationTimestamp:time.Date(2025, time.March, 17, 17, 43, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-6f6b679f8f-wp522", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali95d79fd5d56", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 17 17:43:39.811950 containerd[1511]: 2025-03-17 17:43:39.769 [INFO][4848] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.132/32] ContainerID="a8e9fc96c314004366b54c8cc19235c26fc854af8f7e1431b932e82f573e854a" Namespace="kube-system" Pod="coredns-6f6b679f8f-wp522" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--wp522-eth0" Mar 17 17:43:39.811950 containerd[1511]: 2025-03-17 17:43:39.769 [INFO][4848] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali95d79fd5d56 ContainerID="a8e9fc96c314004366b54c8cc19235c26fc854af8f7e1431b932e82f573e854a" Namespace="kube-system" Pod="coredns-6f6b679f8f-wp522" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--wp522-eth0" Mar 17 17:43:39.811950 containerd[1511]: 2025-03-17 17:43:39.781 [INFO][4848] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a8e9fc96c314004366b54c8cc19235c26fc854af8f7e1431b932e82f573e854a" Namespace="kube-system" Pod="coredns-6f6b679f8f-wp522" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--wp522-eth0" Mar 17 
17:43:39.811950 containerd[1511]: 2025-03-17 17:43:39.788 [INFO][4848] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="a8e9fc96c314004366b54c8cc19235c26fc854af8f7e1431b932e82f573e854a" Namespace="kube-system" Pod="coredns-6f6b679f8f-wp522" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--wp522-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--6f6b679f8f--wp522-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"92c87a7e-fef7-4c26-ab3b-4e94dca0e582", ResourceVersion:"747", Generation:0, CreationTimestamp:time.Date(2025, time.March, 17, 17, 43, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"a8e9fc96c314004366b54c8cc19235c26fc854af8f7e1431b932e82f573e854a", Pod:"coredns-6f6b679f8f-wp522", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali95d79fd5d56", MAC:"56:df:ee:fb:8f:f5", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, 
v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 17 17:43:39.811950 containerd[1511]: 2025-03-17 17:43:39.807 [INFO][4848] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="a8e9fc96c314004366b54c8cc19235c26fc854af8f7e1431b932e82f573e854a" Namespace="kube-system" Pod="coredns-6f6b679f8f-wp522" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--wp522-eth0" Mar 17 17:43:39.835594 containerd[1511]: time="2025-03-17T17:43:39.835461629Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-86f7c466bd-bf4pr,Uid:bb999740-3eb7-4d5d-b75d-a6ed26b4fcf7,Namespace:calico-system,Attempt:6,} returns sandbox id \"7c33bda6b4a4167d8fed9d06828ce10cd2144e893db16b3359c90a1fd82241df\"" Mar 17 17:43:39.847724 containerd[1511]: time="2025-03-17T17:43:39.847636263Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 17:43:39.848274 containerd[1511]: time="2025-03-17T17:43:39.848139479Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 17:43:39.848274 containerd[1511]: time="2025-03-17T17:43:39.848215653Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:43:39.849953 containerd[1511]: time="2025-03-17T17:43:39.848874871Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:43:39.871464 systemd[1]: Started cri-containerd-a8e9fc96c314004366b54c8cc19235c26fc854af8f7e1431b932e82f573e854a.scope - libcontainer container a8e9fc96c314004366b54c8cc19235c26fc854af8f7e1431b932e82f573e854a. 
Mar 17 17:43:39.872229 systemd-networkd[1424]: cali73455d44653: Link UP Mar 17 17:43:39.872444 systemd-networkd[1424]: cali73455d44653: Gained carrier Mar 17 17:43:39.888215 containerd[1511]: 2025-03-17 17:43:39.197 [INFO][4869] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Mar 17 17:43:39.888215 containerd[1511]: 2025-03-17 17:43:39.309 [INFO][4869] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--6f6b679f8f--qmtlq-eth0 coredns-6f6b679f8f- kube-system 9de23c76-70f0-4fa7-aa26-f471719ff480 743 0 2025-03-17 17:43:08 +0000 UTC map[k8s-app:kube-dns pod-template-hash:6f6b679f8f projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-6f6b679f8f-qmtlq eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali73455d44653 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="5714c9b2c4bd663b9da04d6cb44701f88102da7ef5b64639093a7da9740a2c1c" Namespace="kube-system" Pod="coredns-6f6b679f8f-qmtlq" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--qmtlq-" Mar 17 17:43:39.888215 containerd[1511]: 2025-03-17 17:43:39.309 [INFO][4869] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="5714c9b2c4bd663b9da04d6cb44701f88102da7ef5b64639093a7da9740a2c1c" Namespace="kube-system" Pod="coredns-6f6b679f8f-qmtlq" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--qmtlq-eth0" Mar 17 17:43:39.888215 containerd[1511]: 2025-03-17 17:43:39.396 [INFO][4941] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5714c9b2c4bd663b9da04d6cb44701f88102da7ef5b64639093a7da9740a2c1c" HandleID="k8s-pod-network.5714c9b2c4bd663b9da04d6cb44701f88102da7ef5b64639093a7da9740a2c1c" Workload="localhost-k8s-coredns--6f6b679f8f--qmtlq-eth0" Mar 17 17:43:39.888215 containerd[1511]: 2025-03-17 17:43:39.411 [INFO][4941] ipam/ipam_plugin.go 
265: Auto assigning IP ContainerID="5714c9b2c4bd663b9da04d6cb44701f88102da7ef5b64639093a7da9740a2c1c" HandleID="k8s-pod-network.5714c9b2c4bd663b9da04d6cb44701f88102da7ef5b64639093a7da9740a2c1c" Workload="localhost-k8s-coredns--6f6b679f8f--qmtlq-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002c0750), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-6f6b679f8f-qmtlq", "timestamp":"2025-03-17 17:43:39.396845871 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Mar 17 17:43:39.888215 containerd[1511]: 2025-03-17 17:43:39.411 [INFO][4941] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Mar 17 17:43:39.888215 containerd[1511]: 2025-03-17 17:43:39.762 [INFO][4941] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Mar 17 17:43:39.888215 containerd[1511]: 2025-03-17 17:43:39.762 [INFO][4941] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 17 17:43:39.888215 containerd[1511]: 2025-03-17 17:43:39.812 [INFO][4941] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.5714c9b2c4bd663b9da04d6cb44701f88102da7ef5b64639093a7da9740a2c1c" host="localhost" Mar 17 17:43:39.888215 containerd[1511]: 2025-03-17 17:43:39.833 [INFO][4941] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Mar 17 17:43:39.888215 containerd[1511]: 2025-03-17 17:43:39.840 [INFO][4941] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Mar 17 17:43:39.888215 containerd[1511]: 2025-03-17 17:43:39.843 [INFO][4941] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 17 17:43:39.888215 containerd[1511]: 2025-03-17 17:43:39.846 [INFO][4941] ipam/ipam.go 232: Affinity is confirmed and block has been loaded 
cidr=192.168.88.128/26 host="localhost" Mar 17 17:43:39.888215 containerd[1511]: 2025-03-17 17:43:39.846 [INFO][4941] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.5714c9b2c4bd663b9da04d6cb44701f88102da7ef5b64639093a7da9740a2c1c" host="localhost" Mar 17 17:43:39.888215 containerd[1511]: 2025-03-17 17:43:39.847 [INFO][4941] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.5714c9b2c4bd663b9da04d6cb44701f88102da7ef5b64639093a7da9740a2c1c Mar 17 17:43:39.888215 containerd[1511]: 2025-03-17 17:43:39.853 [INFO][4941] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.5714c9b2c4bd663b9da04d6cb44701f88102da7ef5b64639093a7da9740a2c1c" host="localhost" Mar 17 17:43:39.888215 containerd[1511]: 2025-03-17 17:43:39.861 [INFO][4941] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.5714c9b2c4bd663b9da04d6cb44701f88102da7ef5b64639093a7da9740a2c1c" host="localhost" Mar 17 17:43:39.888215 containerd[1511]: 2025-03-17 17:43:39.862 [INFO][4941] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.5714c9b2c4bd663b9da04d6cb44701f88102da7ef5b64639093a7da9740a2c1c" host="localhost" Mar 17 17:43:39.888215 containerd[1511]: 2025-03-17 17:43:39.862 [INFO][4941] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Mar 17 17:43:39.888215 containerd[1511]: 2025-03-17 17:43:39.862 [INFO][4941] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="5714c9b2c4bd663b9da04d6cb44701f88102da7ef5b64639093a7da9740a2c1c" HandleID="k8s-pod-network.5714c9b2c4bd663b9da04d6cb44701f88102da7ef5b64639093a7da9740a2c1c" Workload="localhost-k8s-coredns--6f6b679f8f--qmtlq-eth0" Mar 17 17:43:39.888791 containerd[1511]: 2025-03-17 17:43:39.869 [INFO][4869] cni-plugin/k8s.go 386: Populated endpoint ContainerID="5714c9b2c4bd663b9da04d6cb44701f88102da7ef5b64639093a7da9740a2c1c" Namespace="kube-system" Pod="coredns-6f6b679f8f-qmtlq" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--qmtlq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--6f6b679f8f--qmtlq-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"9de23c76-70f0-4fa7-aa26-f471719ff480", ResourceVersion:"743", Generation:0, CreationTimestamp:time.Date(2025, time.March, 17, 17, 43, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-6f6b679f8f-qmtlq", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali73455d44653", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 17 17:43:39.888791 containerd[1511]: 2025-03-17 17:43:39.869 [INFO][4869] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.133/32] ContainerID="5714c9b2c4bd663b9da04d6cb44701f88102da7ef5b64639093a7da9740a2c1c" Namespace="kube-system" Pod="coredns-6f6b679f8f-qmtlq" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--qmtlq-eth0" Mar 17 17:43:39.888791 containerd[1511]: 2025-03-17 17:43:39.869 [INFO][4869] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali73455d44653 ContainerID="5714c9b2c4bd663b9da04d6cb44701f88102da7ef5b64639093a7da9740a2c1c" Namespace="kube-system" Pod="coredns-6f6b679f8f-qmtlq" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--qmtlq-eth0" Mar 17 17:43:39.888791 containerd[1511]: 2025-03-17 17:43:39.872 [INFO][4869] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="5714c9b2c4bd663b9da04d6cb44701f88102da7ef5b64639093a7da9740a2c1c" Namespace="kube-system" Pod="coredns-6f6b679f8f-qmtlq" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--qmtlq-eth0" Mar 17 17:43:39.888791 containerd[1511]: 2025-03-17 17:43:39.872 [INFO][4869] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="5714c9b2c4bd663b9da04d6cb44701f88102da7ef5b64639093a7da9740a2c1c" Namespace="kube-system" Pod="coredns-6f6b679f8f-qmtlq" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--qmtlq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", 
APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--6f6b679f8f--qmtlq-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"9de23c76-70f0-4fa7-aa26-f471719ff480", ResourceVersion:"743", Generation:0, CreationTimestamp:time.Date(2025, time.March, 17, 17, 43, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"5714c9b2c4bd663b9da04d6cb44701f88102da7ef5b64639093a7da9740a2c1c", Pod:"coredns-6f6b679f8f-qmtlq", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali73455d44653", MAC:"d2:9f:0a:fb:9c:dd", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 17 17:43:39.888791 containerd[1511]: 2025-03-17 17:43:39.884 [INFO][4869] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="5714c9b2c4bd663b9da04d6cb44701f88102da7ef5b64639093a7da9740a2c1c" Namespace="kube-system" 
Pod="coredns-6f6b679f8f-qmtlq" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--qmtlq-eth0" Mar 17 17:43:39.892942 systemd-resolved[1339]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 17 17:43:39.922207 containerd[1511]: time="2025-03-17T17:43:39.922092953Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 17:43:39.922379 containerd[1511]: time="2025-03-17T17:43:39.922293931Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 17:43:39.922379 containerd[1511]: time="2025-03-17T17:43:39.922349215Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:43:39.922532 containerd[1511]: time="2025-03-17T17:43:39.922500639Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:43:39.926767 containerd[1511]: time="2025-03-17T17:43:39.926720470Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-wp522,Uid:92c87a7e-fef7-4c26-ab3b-4e94dca0e582,Namespace:kube-system,Attempt:6,} returns sandbox id \"a8e9fc96c314004366b54c8cc19235c26fc854af8f7e1431b932e82f573e854a\"" Mar 17 17:43:39.928012 kubelet[2608]: E0317 17:43:39.927978 2608 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:43:39.930775 containerd[1511]: time="2025-03-17T17:43:39.930732723Z" level=info msg="CreateContainer within sandbox \"a8e9fc96c314004366b54c8cc19235c26fc854af8f7e1431b932e82f573e854a\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 17 17:43:39.946558 systemd[1]: Started cri-containerd-5714c9b2c4bd663b9da04d6cb44701f88102da7ef5b64639093a7da9740a2c1c.scope - libcontainer container 5714c9b2c4bd663b9da04d6cb44701f88102da7ef5b64639093a7da9740a2c1c. 
Mar 17 17:43:39.956503 containerd[1511]: time="2025-03-17T17:43:39.956455832Z" level=info msg="CreateContainer within sandbox \"a8e9fc96c314004366b54c8cc19235c26fc854af8f7e1431b932e82f573e854a\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"cc22e05f67d20323292f0f8863d5de81190cdcb1d06aab87a6a43960cda8f272\"" Mar 17 17:43:39.958978 containerd[1511]: time="2025-03-17T17:43:39.958125830Z" level=info msg="StartContainer for \"cc22e05f67d20323292f0f8863d5de81190cdcb1d06aab87a6a43960cda8f272\"" Mar 17 17:43:39.963832 systemd-resolved[1339]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 17 17:43:39.966946 systemd-networkd[1424]: cali2e844faf153: Link UP Mar 17 17:43:39.968858 systemd-networkd[1424]: cali2e844faf153: Gained carrier Mar 17 17:43:39.988417 containerd[1511]: 2025-03-17 17:43:39.214 [INFO][4905] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Mar 17 17:43:39.988417 containerd[1511]: 2025-03-17 17:43:39.308 [INFO][4905] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--6cbddc9666--fgfrs-eth0 calico-apiserver-6cbddc9666- calico-apiserver b74841bb-3d21-4734-85da-f48ab60f9d98 746 0 2025-03-17 17:43:19 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6cbddc9666 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-6cbddc9666-fgfrs eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali2e844faf153 [] []}} ContainerID="bacc17a6464fdf3d0f53e13c402bb8b2a73a693885106cfeaec19d4a0d05df89" Namespace="calico-apiserver" Pod="calico-apiserver-6cbddc9666-fgfrs" WorkloadEndpoint="localhost-k8s-calico--apiserver--6cbddc9666--fgfrs-" Mar 17 17:43:39.988417 containerd[1511]: 
2025-03-17 17:43:39.309 [INFO][4905] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="bacc17a6464fdf3d0f53e13c402bb8b2a73a693885106cfeaec19d4a0d05df89" Namespace="calico-apiserver" Pod="calico-apiserver-6cbddc9666-fgfrs" WorkloadEndpoint="localhost-k8s-calico--apiserver--6cbddc9666--fgfrs-eth0" Mar 17 17:43:39.988417 containerd[1511]: 2025-03-17 17:43:39.406 [INFO][4934] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="bacc17a6464fdf3d0f53e13c402bb8b2a73a693885106cfeaec19d4a0d05df89" HandleID="k8s-pod-network.bacc17a6464fdf3d0f53e13c402bb8b2a73a693885106cfeaec19d4a0d05df89" Workload="localhost-k8s-calico--apiserver--6cbddc9666--fgfrs-eth0" Mar 17 17:43:39.988417 containerd[1511]: 2025-03-17 17:43:39.413 [INFO][4934] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="bacc17a6464fdf3d0f53e13c402bb8b2a73a693885106cfeaec19d4a0d05df89" HandleID="k8s-pod-network.bacc17a6464fdf3d0f53e13c402bb8b2a73a693885106cfeaec19d4a0d05df89" Workload="localhost-k8s-calico--apiserver--6cbddc9666--fgfrs-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002f95e0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-6cbddc9666-fgfrs", "timestamp":"2025-03-17 17:43:39.404644228 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Mar 17 17:43:39.988417 containerd[1511]: 2025-03-17 17:43:39.413 [INFO][4934] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Mar 17 17:43:39.988417 containerd[1511]: 2025-03-17 17:43:39.862 [INFO][4934] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Mar 17 17:43:39.988417 containerd[1511]: 2025-03-17 17:43:39.862 [INFO][4934] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 17 17:43:39.988417 containerd[1511]: 2025-03-17 17:43:39.914 [INFO][4934] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.bacc17a6464fdf3d0f53e13c402bb8b2a73a693885106cfeaec19d4a0d05df89" host="localhost" Mar 17 17:43:39.988417 containerd[1511]: 2025-03-17 17:43:39.935 [INFO][4934] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Mar 17 17:43:39.988417 containerd[1511]: 2025-03-17 17:43:39.946 [INFO][4934] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Mar 17 17:43:39.988417 containerd[1511]: 2025-03-17 17:43:39.947 [INFO][4934] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 17 17:43:39.988417 containerd[1511]: 2025-03-17 17:43:39.949 [INFO][4934] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 17 17:43:39.988417 containerd[1511]: 2025-03-17 17:43:39.949 [INFO][4934] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.bacc17a6464fdf3d0f53e13c402bb8b2a73a693885106cfeaec19d4a0d05df89" host="localhost" Mar 17 17:43:39.988417 containerd[1511]: 2025-03-17 17:43:39.950 [INFO][4934] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.bacc17a6464fdf3d0f53e13c402bb8b2a73a693885106cfeaec19d4a0d05df89 Mar 17 17:43:39.988417 containerd[1511]: 2025-03-17 17:43:39.954 [INFO][4934] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.bacc17a6464fdf3d0f53e13c402bb8b2a73a693885106cfeaec19d4a0d05df89" host="localhost" Mar 17 17:43:39.988417 containerd[1511]: 2025-03-17 17:43:39.960 [INFO][4934] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 
handle="k8s-pod-network.bacc17a6464fdf3d0f53e13c402bb8b2a73a693885106cfeaec19d4a0d05df89" host="localhost" Mar 17 17:43:39.988417 containerd[1511]: 2025-03-17 17:43:39.960 [INFO][4934] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.bacc17a6464fdf3d0f53e13c402bb8b2a73a693885106cfeaec19d4a0d05df89" host="localhost" Mar 17 17:43:39.988417 containerd[1511]: 2025-03-17 17:43:39.960 [INFO][4934] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Mar 17 17:43:39.988417 containerd[1511]: 2025-03-17 17:43:39.960 [INFO][4934] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="bacc17a6464fdf3d0f53e13c402bb8b2a73a693885106cfeaec19d4a0d05df89" HandleID="k8s-pod-network.bacc17a6464fdf3d0f53e13c402bb8b2a73a693885106cfeaec19d4a0d05df89" Workload="localhost-k8s-calico--apiserver--6cbddc9666--fgfrs-eth0" Mar 17 17:43:39.989278 containerd[1511]: 2025-03-17 17:43:39.964 [INFO][4905] cni-plugin/k8s.go 386: Populated endpoint ContainerID="bacc17a6464fdf3d0f53e13c402bb8b2a73a693885106cfeaec19d4a0d05df89" Namespace="calico-apiserver" Pod="calico-apiserver-6cbddc9666-fgfrs" WorkloadEndpoint="localhost-k8s-calico--apiserver--6cbddc9666--fgfrs-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6cbddc9666--fgfrs-eth0", GenerateName:"calico-apiserver-6cbddc9666-", Namespace:"calico-apiserver", SelfLink:"", UID:"b74841bb-3d21-4734-85da-f48ab60f9d98", ResourceVersion:"746", Generation:0, CreationTimestamp:time.Date(2025, time.March, 17, 17, 43, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6cbddc9666", "projectcalico.org/namespace":"calico-apiserver", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-6cbddc9666-fgfrs", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali2e844faf153", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 17 17:43:39.989278 containerd[1511]: 2025-03-17 17:43:39.965 [INFO][4905] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.134/32] ContainerID="bacc17a6464fdf3d0f53e13c402bb8b2a73a693885106cfeaec19d4a0d05df89" Namespace="calico-apiserver" Pod="calico-apiserver-6cbddc9666-fgfrs" WorkloadEndpoint="localhost-k8s-calico--apiserver--6cbddc9666--fgfrs-eth0" Mar 17 17:43:39.989278 containerd[1511]: 2025-03-17 17:43:39.965 [INFO][4905] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2e844faf153 ContainerID="bacc17a6464fdf3d0f53e13c402bb8b2a73a693885106cfeaec19d4a0d05df89" Namespace="calico-apiserver" Pod="calico-apiserver-6cbddc9666-fgfrs" WorkloadEndpoint="localhost-k8s-calico--apiserver--6cbddc9666--fgfrs-eth0" Mar 17 17:43:39.989278 containerd[1511]: 2025-03-17 17:43:39.967 [INFO][4905] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="bacc17a6464fdf3d0f53e13c402bb8b2a73a693885106cfeaec19d4a0d05df89" Namespace="calico-apiserver" Pod="calico-apiserver-6cbddc9666-fgfrs" WorkloadEndpoint="localhost-k8s-calico--apiserver--6cbddc9666--fgfrs-eth0" Mar 17 17:43:39.989278 containerd[1511]: 2025-03-17 17:43:39.968 [INFO][4905] cni-plugin/k8s.go 414: Added Mac, interface name, and active 
container ID to endpoint ContainerID="bacc17a6464fdf3d0f53e13c402bb8b2a73a693885106cfeaec19d4a0d05df89" Namespace="calico-apiserver" Pod="calico-apiserver-6cbddc9666-fgfrs" WorkloadEndpoint="localhost-k8s-calico--apiserver--6cbddc9666--fgfrs-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6cbddc9666--fgfrs-eth0", GenerateName:"calico-apiserver-6cbddc9666-", Namespace:"calico-apiserver", SelfLink:"", UID:"b74841bb-3d21-4734-85da-f48ab60f9d98", ResourceVersion:"746", Generation:0, CreationTimestamp:time.Date(2025, time.March, 17, 17, 43, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6cbddc9666", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"bacc17a6464fdf3d0f53e13c402bb8b2a73a693885106cfeaec19d4a0d05df89", Pod:"calico-apiserver-6cbddc9666-fgfrs", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali2e844faf153", MAC:"2e:fe:85:fa:db:9a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 17 17:43:39.989278 containerd[1511]: 2025-03-17 17:43:39.984 [INFO][4905] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore 
ContainerID="bacc17a6464fdf3d0f53e13c402bb8b2a73a693885106cfeaec19d4a0d05df89" Namespace="calico-apiserver" Pod="calico-apiserver-6cbddc9666-fgfrs" WorkloadEndpoint="localhost-k8s-calico--apiserver--6cbddc9666--fgfrs-eth0" Mar 17 17:43:39.998413 systemd[1]: Started cri-containerd-cc22e05f67d20323292f0f8863d5de81190cdcb1d06aab87a6a43960cda8f272.scope - libcontainer container cc22e05f67d20323292f0f8863d5de81190cdcb1d06aab87a6a43960cda8f272. Mar 17 17:43:39.999710 containerd[1511]: time="2025-03-17T17:43:39.999664558Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-qmtlq,Uid:9de23c76-70f0-4fa7-aa26-f471719ff480,Namespace:kube-system,Attempt:6,} returns sandbox id \"5714c9b2c4bd663b9da04d6cb44701f88102da7ef5b64639093a7da9740a2c1c\"" Mar 17 17:43:40.000500 kubelet[2608]: E0317 17:43:40.000472 2608 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:43:40.003401 containerd[1511]: time="2025-03-17T17:43:40.003356276Z" level=info msg="CreateContainer within sandbox \"5714c9b2c4bd663b9da04d6cb44701f88102da7ef5b64639093a7da9740a2c1c\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 17 17:43:40.020581 containerd[1511]: time="2025-03-17T17:43:40.020333382Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 17:43:40.020581 containerd[1511]: time="2025-03-17T17:43:40.020407812Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 17:43:40.020581 containerd[1511]: time="2025-03-17T17:43:40.020421738Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:43:40.020927 containerd[1511]: time="2025-03-17T17:43:40.020612497Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:43:40.027793 kubelet[2608]: E0317 17:43:40.027546 2608 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:43:40.030335 containerd[1511]: time="2025-03-17T17:43:40.030117099Z" level=info msg="CreateContainer within sandbox \"5714c9b2c4bd663b9da04d6cb44701f88102da7ef5b64639093a7da9740a2c1c\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"c597113bb5c692b9b45e8a5f7890e47db59a20ce758bea6b761adb41a8cf9550\"" Mar 17 17:43:40.031050 containerd[1511]: time="2025-03-17T17:43:40.031008304Z" level=info msg="StartContainer for \"c597113bb5c692b9b45e8a5f7890e47db59a20ce758bea6b761adb41a8cf9550\"" Mar 17 17:43:40.044827 systemd[1]: Started cri-containerd-bacc17a6464fdf3d0f53e13c402bb8b2a73a693885106cfeaec19d4a0d05df89.scope - libcontainer container bacc17a6464fdf3d0f53e13c402bb8b2a73a693885106cfeaec19d4a0d05df89. Mar 17 17:43:40.060317 containerd[1511]: time="2025-03-17T17:43:40.060173976Z" level=info msg="StartContainer for \"cc22e05f67d20323292f0f8863d5de81190cdcb1d06aab87a6a43960cda8f272\" returns successfully" Mar 17 17:43:40.062850 systemd-resolved[1339]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 17 17:43:40.072550 systemd[1]: Started cri-containerd-c597113bb5c692b9b45e8a5f7890e47db59a20ce758bea6b761adb41a8cf9550.scope - libcontainer container c597113bb5c692b9b45e8a5f7890e47db59a20ce758bea6b761adb41a8cf9550. 
Mar 17 17:43:40.103350 containerd[1511]: time="2025-03-17T17:43:40.103312789Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6cbddc9666-fgfrs,Uid:b74841bb-3d21-4734-85da-f48ab60f9d98,Namespace:calico-apiserver,Attempt:6,} returns sandbox id \"bacc17a6464fdf3d0f53e13c402bb8b2a73a693885106cfeaec19d4a0d05df89\"" Mar 17 17:43:40.114369 containerd[1511]: time="2025-03-17T17:43:40.114319695Z" level=info msg="StartContainer for \"c597113bb5c692b9b45e8a5f7890e47db59a20ce758bea6b761adb41a8cf9550\" returns successfully" Mar 17 17:43:40.761556 systemd-networkd[1424]: cali92c0b90e5f0: Gained IPv6LL Mar 17 17:43:40.889395 systemd-networkd[1424]: cali1f184a4eeaa: Gained IPv6LL Mar 17 17:43:41.033769 kubelet[2608]: E0317 17:43:41.033621 2608 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:43:41.037819 kubelet[2608]: E0317 17:43:41.037780 2608 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:43:41.046593 kubelet[2608]: I0317 17:43:41.045831 2608 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-qmtlq" podStartSLOduration=33.045809388 podStartE2EDuration="33.045809388s" podCreationTimestamp="2025-03-17 17:43:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 17:43:41.04508113 +0000 UTC m=+37.454440944" watchObservedRunningTime="2025-03-17 17:43:41.045809388 +0000 UTC m=+37.455169201" Mar 17 17:43:41.070670 kubelet[2608]: I0317 17:43:41.070475 2608 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-wp522" podStartSLOduration=33.070180144 podStartE2EDuration="33.070180144s" 
podCreationTimestamp="2025-03-17 17:43:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 17:43:41.056615745 +0000 UTC m=+37.465975648" watchObservedRunningTime="2025-03-17 17:43:41.070180144 +0000 UTC m=+37.479539957" Mar 17 17:43:41.081720 systemd-networkd[1424]: cali73455d44653: Gained IPv6LL Mar 17 17:43:41.256808 containerd[1511]: time="2025-03-17T17:43:41.256743525Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:43:41.257555 containerd[1511]: time="2025-03-17T17:43:41.257518932Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.2: active requests=0, bytes read=7909887" Mar 17 17:43:41.258751 containerd[1511]: time="2025-03-17T17:43:41.258719668Z" level=info msg="ImageCreate event name:\"sha256:0fae09f861e350c042fe0db9ce9f8cc5ac4df975a5c4e4a9ddc3c6fac1552a9a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:43:41.260771 containerd[1511]: time="2025-03-17T17:43:41.260740445Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:214b4eef7008808bda55ad3cc1d4a3cd8df9e0e8094dff213fa3241104eb892c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:43:41.261455 containerd[1511]: time="2025-03-17T17:43:41.261412278Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.2\" with image id \"sha256:0fae09f861e350c042fe0db9ce9f8cc5ac4df975a5c4e4a9ddc3c6fac1552a9a\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.2\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:214b4eef7008808bda55ad3cc1d4a3cd8df9e0e8094dff213fa3241104eb892c\", size \"9402991\" in 1.652556441s" Mar 17 17:43:41.261490 containerd[1511]: time="2025-03-17T17:43:41.261453155Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.2\" returns image reference 
\"sha256:0fae09f861e350c042fe0db9ce9f8cc5ac4df975a5c4e4a9ddc3c6fac1552a9a\"" Mar 17 17:43:41.262711 containerd[1511]: time="2025-03-17T17:43:41.262398931Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.2\"" Mar 17 17:43:41.263434 containerd[1511]: time="2025-03-17T17:43:41.263410522Z" level=info msg="CreateContainer within sandbox \"a09ea0fe61c0ec448dd343589a360e3f4483cec524922b4aa012048549484fe9\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Mar 17 17:43:41.275424 systemd-networkd[1424]: cali95d79fd5d56: Gained IPv6LL Mar 17 17:43:41.289096 containerd[1511]: time="2025-03-17T17:43:41.288980241Z" level=info msg="CreateContainer within sandbox \"a09ea0fe61c0ec448dd343589a360e3f4483cec524922b4aa012048549484fe9\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"0d6d0adb4f3fb89d3aa2833e7dc05855ef20b592de5f1dd44f78bc880422258a\"" Mar 17 17:43:41.289858 containerd[1511]: time="2025-03-17T17:43:41.289636905Z" level=info msg="StartContainer for \"0d6d0adb4f3fb89d3aa2833e7dc05855ef20b592de5f1dd44f78bc880422258a\"" Mar 17 17:43:41.325449 systemd[1]: Started cri-containerd-0d6d0adb4f3fb89d3aa2833e7dc05855ef20b592de5f1dd44f78bc880422258a.scope - libcontainer container 0d6d0adb4f3fb89d3aa2833e7dc05855ef20b592de5f1dd44f78bc880422258a. 
Mar 17 17:43:41.361272 containerd[1511]: time="2025-03-17T17:43:41.361193316Z" level=info msg="StartContainer for \"0d6d0adb4f3fb89d3aa2833e7dc05855ef20b592de5f1dd44f78bc880422258a\" returns successfully" Mar 17 17:43:41.401484 systemd-networkd[1424]: calic065378b178: Gained IPv6LL Mar 17 17:43:41.785504 systemd-networkd[1424]: cali2e844faf153: Gained IPv6LL Mar 17 17:43:42.043416 kubelet[2608]: E0317 17:43:42.043229 2608 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:43:42.044023 kubelet[2608]: E0317 17:43:42.043435 2608 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:43:43.045567 kubelet[2608]: E0317 17:43:43.045518 2608 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:43:43.046032 kubelet[2608]: E0317 17:43:43.045582 2608 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:43:43.556639 systemd[1]: Started sshd@9-10.0.0.14:22-10.0.0.1:57844.service - OpenSSH per-connection server daemon (10.0.0.1:57844). 
Mar 17 17:43:43.587834 kubelet[2608]: I0317 17:43:43.587794 2608 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 17 17:43:43.588352 kubelet[2608]: E0317 17:43:43.588203 2608 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:43:43.615953 sshd[5624]: Accepted publickey for core from 10.0.0.1 port 57844 ssh2: RSA SHA256:sryA1eJfWIez1lK1HnLUziwsxLkQurNjq13plyk0DV0 Mar 17 17:43:43.618355 sshd-session[5624]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:43:43.629599 systemd-logind[1494]: New session 10 of user core. Mar 17 17:43:43.637859 systemd[1]: Started session-10.scope - Session 10 of User core. Mar 17 17:43:43.786132 sshd[5628]: Connection closed by 10.0.0.1 port 57844 Mar 17 17:43:43.786569 sshd-session[5624]: pam_unix(sshd:session): session closed for user core Mar 17 17:43:43.790409 systemd[1]: sshd@9-10.0.0.14:22-10.0.0.1:57844.service: Deactivated successfully. Mar 17 17:43:43.794168 systemd[1]: session-10.scope: Deactivated successfully. Mar 17 17:43:43.796740 systemd-logind[1494]: Session 10 logged out. Waiting for processes to exit. Mar 17 17:43:43.798517 systemd-logind[1494]: Removed session 10. 
Mar 17 17:43:44.053223 kubelet[2608]: E0317 17:43:44.053180 2608 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:43:44.146293 kernel: bpftool[5707]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Mar 17 17:43:44.296354 containerd[1511]: time="2025-03-17T17:43:44.296303047Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:43:44.297235 containerd[1511]: time="2025-03-17T17:43:44.297205442Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.2: active requests=0, bytes read=42993204" Mar 17 17:43:44.298597 containerd[1511]: time="2025-03-17T17:43:44.298531403Z" level=info msg="ImageCreate event name:\"sha256:d27fc480d1ad33921c40abef2ab6828fadf6524674fdcc622f571a5abc34ad55\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:43:44.301454 containerd[1511]: time="2025-03-17T17:43:44.301400903Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:3623f5b60fad0da3387a8649371b53171a4b1226f4d989d2acad9145dc0ef56f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:43:44.301993 containerd[1511]: time="2025-03-17T17:43:44.301961797Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.2\" with image id \"sha256:d27fc480d1ad33921c40abef2ab6828fadf6524674fdcc622f571a5abc34ad55\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:3623f5b60fad0da3387a8649371b53171a4b1226f4d989d2acad9145dc0ef56f\", size \"44486324\" in 3.039537668s" Mar 17 17:43:44.302054 containerd[1511]: time="2025-03-17T17:43:44.301993026Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.2\" returns image reference 
\"sha256:d27fc480d1ad33921c40abef2ab6828fadf6524674fdcc622f571a5abc34ad55\"" Mar 17 17:43:44.303814 containerd[1511]: time="2025-03-17T17:43:44.303611937Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.2\"" Mar 17 17:43:44.304873 containerd[1511]: time="2025-03-17T17:43:44.304836246Z" level=info msg="CreateContainer within sandbox \"9c04aba957ad78ee15946206524797e571ffe1d57450fe2ddb40591fe5cf6a4e\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Mar 17 17:43:44.323222 containerd[1511]: time="2025-03-17T17:43:44.323160968Z" level=info msg="CreateContainer within sandbox \"9c04aba957ad78ee15946206524797e571ffe1d57450fe2ddb40591fe5cf6a4e\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"e999efbe18c1b672ea6cd16ee1a864391ea4d518d69e77b819420c2813a14918\"" Mar 17 17:43:44.323716 containerd[1511]: time="2025-03-17T17:43:44.323661859Z" level=info msg="StartContainer for \"e999efbe18c1b672ea6cd16ee1a864391ea4d518d69e77b819420c2813a14918\"" Mar 17 17:43:44.357460 systemd[1]: Started cri-containerd-e999efbe18c1b672ea6cd16ee1a864391ea4d518d69e77b819420c2813a14918.scope - libcontainer container e999efbe18c1b672ea6cd16ee1a864391ea4d518d69e77b819420c2813a14918. 
Mar 17 17:43:44.397044 systemd-networkd[1424]: vxlan.calico: Link UP Mar 17 17:43:44.397056 systemd-networkd[1424]: vxlan.calico: Gained carrier Mar 17 17:43:44.419695 containerd[1511]: time="2025-03-17T17:43:44.419452837Z" level=info msg="StartContainer for \"e999efbe18c1b672ea6cd16ee1a864391ea4d518d69e77b819420c2813a14918\" returns successfully" Mar 17 17:43:45.068793 kubelet[2608]: I0317 17:43:45.068292 2608 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-6cbddc9666-8jnnv" podStartSLOduration=21.434048381 podStartE2EDuration="26.068271481s" podCreationTimestamp="2025-03-17 17:43:19 +0000 UTC" firstStartedPulling="2025-03-17 17:43:39.668650459 +0000 UTC m=+36.078010272" lastFinishedPulling="2025-03-17 17:43:44.302873559 +0000 UTC m=+40.712233372" observedRunningTime="2025-03-17 17:43:45.067846022 +0000 UTC m=+41.477205855" watchObservedRunningTime="2025-03-17 17:43:45.068271481 +0000 UTC m=+41.477631294" Mar 17 17:43:46.060384 kubelet[2608]: I0317 17:43:46.060340 2608 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 17 17:43:46.073416 systemd-networkd[1424]: vxlan.calico: Gained IPv6LL Mar 17 17:43:46.704661 containerd[1511]: time="2025-03-17T17:43:46.704588102Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:43:46.705431 containerd[1511]: time="2025-03-17T17:43:46.705376442Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.2: active requests=0, bytes read=34792912" Mar 17 17:43:46.707296 containerd[1511]: time="2025-03-17T17:43:46.707266732Z" level=info msg="ImageCreate event name:\"sha256:f6a228558381bc7de7c5296ac6c4e903cfda929899c85806367a726ef6d7ff5f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:43:46.709523 containerd[1511]: time="2025-03-17T17:43:46.709490629Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:6d1f392b747f912366ec5c60ee1130952c2c07e8ce24c53480187daa0e3364aa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 17 17:43:46.710072 containerd[1511]: time="2025-03-17T17:43:46.710038177Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.2\" with image id \"sha256:f6a228558381bc7de7c5296ac6c4e903cfda929899c85806367a726ef6d7ff5f\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.2\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:6d1f392b747f912366ec5c60ee1130952c2c07e8ce24c53480187daa0e3364aa\", size \"36285984\" in 2.406398087s"
Mar 17 17:43:46.710072 containerd[1511]: time="2025-03-17T17:43:46.710064547Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.2\" returns image reference \"sha256:f6a228558381bc7de7c5296ac6c4e903cfda929899c85806367a726ef6d7ff5f\""
Mar 17 17:43:46.710931 containerd[1511]: time="2025-03-17T17:43:46.710891510Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.2\""
Mar 17 17:43:46.718649 containerd[1511]: time="2025-03-17T17:43:46.718484550Z" level=info msg="CreateContainer within sandbox \"7c33bda6b4a4167d8fed9d06828ce10cd2144e893db16b3359c90a1fd82241df\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}"
Mar 17 17:43:46.733860 containerd[1511]: time="2025-03-17T17:43:46.733806476Z" level=info msg="CreateContainer within sandbox \"7c33bda6b4a4167d8fed9d06828ce10cd2144e893db16b3359c90a1fd82241df\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"188737e0520e0a3b44c565a1df59e3b9061ca333d133be2888a2fefdd49b510f\""
Mar 17 17:43:46.734270 containerd[1511]: time="2025-03-17T17:43:46.734230121Z" level=info msg="StartContainer for \"188737e0520e0a3b44c565a1df59e3b9061ca333d133be2888a2fefdd49b510f\""
Mar 17 17:43:46.768467 systemd[1]: Started cri-containerd-188737e0520e0a3b44c565a1df59e3b9061ca333d133be2888a2fefdd49b510f.scope - libcontainer container 188737e0520e0a3b44c565a1df59e3b9061ca333d133be2888a2fefdd49b510f.
Mar 17 17:43:46.810497 containerd[1511]: time="2025-03-17T17:43:46.810442794Z" level=info msg="StartContainer for \"188737e0520e0a3b44c565a1df59e3b9061ca333d133be2888a2fefdd49b510f\" returns successfully"
Mar 17 17:43:47.077058 kubelet[2608]: I0317 17:43:47.076903 2608 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-86f7c466bd-bf4pr" podStartSLOduration=21.204862611 podStartE2EDuration="28.07688167s" podCreationTimestamp="2025-03-17 17:43:19 +0000 UTC" firstStartedPulling="2025-03-17 17:43:39.838765269 +0000 UTC m=+36.248125072" lastFinishedPulling="2025-03-17 17:43:46.710784318 +0000 UTC m=+43.120144131" observedRunningTime="2025-03-17 17:43:47.076798364 +0000 UTC m=+43.486158177" watchObservedRunningTime="2025-03-17 17:43:47.07688167 +0000 UTC m=+43.486241483"
Mar 17 17:43:47.202152 containerd[1511]: time="2025-03-17T17:43:47.202079169Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.2: active requests=0, bytes read=77"
Mar 17 17:43:47.202575 containerd[1511]: time="2025-03-17T17:43:47.202527691Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 17 17:43:47.204877 containerd[1511]: time="2025-03-17T17:43:47.204827800Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.2\" with image id \"sha256:d27fc480d1ad33921c40abef2ab6828fadf6524674fdcc622f571a5abc34ad55\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:3623f5b60fad0da3387a8649371b53171a4b1226f4d989d2acad9145dc0ef56f\", size \"44486324\" in 493.907817ms"
Mar 17 17:43:47.204877 containerd[1511]: time="2025-03-17T17:43:47.204864760Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.2\" returns image reference \"sha256:d27fc480d1ad33921c40abef2ab6828fadf6524674fdcc622f571a5abc34ad55\""
Mar 17 17:43:47.206355 containerd[1511]: time="2025-03-17T17:43:47.206145746Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.2\""
Mar 17 17:43:47.207316 containerd[1511]: time="2025-03-17T17:43:47.207282119Z" level=info msg="CreateContainer within sandbox \"bacc17a6464fdf3d0f53e13c402bb8b2a73a693885106cfeaec19d4a0d05df89\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}"
Mar 17 17:43:47.222773 containerd[1511]: time="2025-03-17T17:43:47.222729228Z" level=info msg="CreateContainer within sandbox \"bacc17a6464fdf3d0f53e13c402bb8b2a73a693885106cfeaec19d4a0d05df89\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"388cd0abeb1a39a8c4b5d8ca90cad2efa8818f22f00cfa47e3234413dc44da5f\""
Mar 17 17:43:47.223293 containerd[1511]: time="2025-03-17T17:43:47.223250687Z" level=info msg="StartContainer for \"388cd0abeb1a39a8c4b5d8ca90cad2efa8818f22f00cfa47e3234413dc44da5f\""
Mar 17 17:43:47.250395 systemd[1]: Started cri-containerd-388cd0abeb1a39a8c4b5d8ca90cad2efa8818f22f00cfa47e3234413dc44da5f.scope - libcontainer container 388cd0abeb1a39a8c4b5d8ca90cad2efa8818f22f00cfa47e3234413dc44da5f.
Mar 17 17:43:47.293923 containerd[1511]: time="2025-03-17T17:43:47.293873349Z" level=info msg="StartContainer for \"388cd0abeb1a39a8c4b5d8ca90cad2efa8818f22f00cfa47e3234413dc44da5f\" returns successfully"
Mar 17 17:43:48.085905 kubelet[2608]: I0317 17:43:48.085831 2608 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-6cbddc9666-fgfrs" podStartSLOduration=21.985573223 podStartE2EDuration="29.085809911s" podCreationTimestamp="2025-03-17 17:43:19 +0000 UTC" firstStartedPulling="2025-03-17 17:43:40.105786708 +0000 UTC m=+36.515146521" lastFinishedPulling="2025-03-17 17:43:47.206023396 +0000 UTC m=+43.615383209" observedRunningTime="2025-03-17 17:43:48.085034335 +0000 UTC m=+44.494394168" watchObservedRunningTime="2025-03-17 17:43:48.085809911 +0000 UTC m=+44.495169724"
Mar 17 17:43:48.814919 systemd[1]: Started sshd@10-10.0.0.14:22-10.0.0.1:52446.service - OpenSSH per-connection server daemon (10.0.0.1:52446).
Mar 17 17:43:48.869129 sshd[5939]: Accepted publickey for core from 10.0.0.1 port 52446 ssh2: RSA SHA256:sryA1eJfWIez1lK1HnLUziwsxLkQurNjq13plyk0DV0
Mar 17 17:43:48.873143 sshd-session[5939]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 17 17:43:48.880917 systemd-logind[1494]: New session 11 of user core.
Mar 17 17:43:48.885450 systemd[1]: Started session-11.scope - Session 11 of User core.
Mar 17 17:43:49.073063 kubelet[2608]: I0317 17:43:49.072904 2608 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Mar 17 17:43:49.280460 sshd[5941]: Connection closed by 10.0.0.1 port 52446
Mar 17 17:43:49.281017 sshd-session[5939]: pam_unix(sshd:session): session closed for user core
Mar 17 17:43:49.291570 systemd[1]: sshd@10-10.0.0.14:22-10.0.0.1:52446.service: Deactivated successfully.
Mar 17 17:43:49.294565 systemd[1]: session-11.scope: Deactivated successfully.
Mar 17 17:43:49.295501 systemd-logind[1494]: Session 11 logged out. Waiting for processes to exit.
Mar 17 17:43:49.305972 systemd[1]: Started sshd@11-10.0.0.14:22-10.0.0.1:52450.service - OpenSSH per-connection server daemon (10.0.0.1:52450).
Mar 17 17:43:49.307075 systemd-logind[1494]: Removed session 11.
Mar 17 17:43:49.340830 containerd[1511]: time="2025-03-17T17:43:49.340672356Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 17 17:43:49.341838 sshd[5955]: Accepted publickey for core from 10.0.0.1 port 52450 ssh2: RSA SHA256:sryA1eJfWIez1lK1HnLUziwsxLkQurNjq13plyk0DV0
Mar 17 17:43:49.342220 containerd[1511]: time="2025-03-17T17:43:49.342160670Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.2: active requests=0, bytes read=13986843"
Mar 17 17:43:49.343215 containerd[1511]: time="2025-03-17T17:43:49.343154205Z" level=info msg="ImageCreate event name:\"sha256:09a5a6ea58a48ac826468e05538c78d1378e103737124f1744efea8699fc29a8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 17 17:43:49.343748 sshd-session[5955]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 17 17:43:49.345652 containerd[1511]: time="2025-03-17T17:43:49.345615447Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:54ef0afa50feb3f691782e8d6df9a7f27d127a3af9bbcbd0bcdadac98e8be8e3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 17 17:43:49.346804 containerd[1511]: time="2025-03-17T17:43:49.346657373Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.2\" with image id \"sha256:09a5a6ea58a48ac826468e05538c78d1378e103737124f1744efea8699fc29a8\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.2\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:54ef0afa50feb3f691782e8d6df9a7f27d127a3af9bbcbd0bcdadac98e8be8e3\", size \"15479899\" in 2.140479126s"
Mar 17 17:43:49.346804 containerd[1511]: time="2025-03-17T17:43:49.346695424Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.2\" returns image reference \"sha256:09a5a6ea58a48ac826468e05538c78d1378e103737124f1744efea8699fc29a8\""
Mar 17 17:43:49.349023 containerd[1511]: time="2025-03-17T17:43:49.348992206Z" level=info msg="CreateContainer within sandbox \"a09ea0fe61c0ec448dd343589a360e3f4483cec524922b4aa012048549484fe9\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}"
Mar 17 17:43:49.349500 systemd-logind[1494]: New session 12 of user core.
Mar 17 17:43:49.356398 systemd[1]: Started session-12.scope - Session 12 of User core.
Mar 17 17:43:49.416685 containerd[1511]: time="2025-03-17T17:43:49.416523953Z" level=info msg="CreateContainer within sandbox \"a09ea0fe61c0ec448dd343589a360e3f4483cec524922b4aa012048549484fe9\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"4e52d934ab2fc4a701dd6b413bfe5a6b93b5443f64bb641ad912cb42c946dcc0\""
Mar 17 17:43:49.417413 containerd[1511]: time="2025-03-17T17:43:49.417380872Z" level=info msg="StartContainer for \"4e52d934ab2fc4a701dd6b413bfe5a6b93b5443f64bb641ad912cb42c946dcc0\""
Mar 17 17:43:49.448763 systemd[1]: run-containerd-runc-k8s.io-4e52d934ab2fc4a701dd6b413bfe5a6b93b5443f64bb641ad912cb42c946dcc0-runc.j4BuW8.mount: Deactivated successfully.
Mar 17 17:43:49.460506 systemd[1]: Started cri-containerd-4e52d934ab2fc4a701dd6b413bfe5a6b93b5443f64bb641ad912cb42c946dcc0.scope - libcontainer container 4e52d934ab2fc4a701dd6b413bfe5a6b93b5443f64bb641ad912cb42c946dcc0.
Mar 17 17:43:49.601886 containerd[1511]: time="2025-03-17T17:43:49.601734563Z" level=info msg="StartContainer for \"4e52d934ab2fc4a701dd6b413bfe5a6b93b5443f64bb641ad912cb42c946dcc0\" returns successfully"
Mar 17 17:43:49.634531 sshd[5959]: Connection closed by 10.0.0.1 port 52450
Mar 17 17:43:49.634978 sshd-session[5955]: pam_unix(sshd:session): session closed for user core
Mar 17 17:43:49.644253 systemd[1]: sshd@11-10.0.0.14:22-10.0.0.1:52450.service: Deactivated successfully.
Mar 17 17:43:49.646532 systemd[1]: session-12.scope: Deactivated successfully.
Mar 17 17:43:49.648372 systemd-logind[1494]: Session 12 logged out. Waiting for processes to exit.
Mar 17 17:43:49.654703 systemd[1]: Started sshd@12-10.0.0.14:22-10.0.0.1:52460.service - OpenSSH per-connection server daemon (10.0.0.1:52460).
Mar 17 17:43:49.656447 systemd-logind[1494]: Removed session 12.
Mar 17 17:43:49.697267 sshd[6006]: Accepted publickey for core from 10.0.0.1 port 52460 ssh2: RSA SHA256:sryA1eJfWIez1lK1HnLUziwsxLkQurNjq13plyk0DV0
Mar 17 17:43:49.698966 sshd-session[6006]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 17 17:43:49.703975 systemd-logind[1494]: New session 13 of user core.
Mar 17 17:43:49.713377 systemd[1]: Started session-13.scope - Session 13 of User core.
Mar 17 17:43:49.800269 kubelet[2608]: I0317 17:43:49.800207 2608 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0
Mar 17 17:43:49.800269 kubelet[2608]: I0317 17:43:49.800284 2608 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock
Mar 17 17:43:49.856591 sshd[6009]: Connection closed by 10.0.0.1 port 52460
Mar 17 17:43:49.856754 sshd-session[6006]: pam_unix(sshd:session): session closed for user core
Mar 17 17:43:49.861831 systemd-logind[1494]: Session 13 logged out. Waiting for processes to exit.
Mar 17 17:43:49.865183 systemd[1]: sshd@12-10.0.0.14:22-10.0.0.1:52460.service: Deactivated successfully.
Mar 17 17:43:49.867697 systemd[1]: session-13.scope: Deactivated successfully.
Mar 17 17:43:49.869073 systemd-logind[1494]: Removed session 13.
Mar 17 17:43:50.090202 kubelet[2608]: I0317 17:43:50.090012 2608 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-j8ss7" podStartSLOduration=21.351036425 podStartE2EDuration="31.089989232s" podCreationTimestamp="2025-03-17 17:43:19 +0000 UTC" firstStartedPulling="2025-03-17 17:43:39.608583486 +0000 UTC m=+36.017943299" lastFinishedPulling="2025-03-17 17:43:49.347536293 +0000 UTC m=+45.756896106" observedRunningTime="2025-03-17 17:43:50.088714389 +0000 UTC m=+46.498074212" watchObservedRunningTime="2025-03-17 17:43:50.089989232 +0000 UTC m=+46.499349045"
Mar 17 17:43:54.868934 systemd[1]: Started sshd@13-10.0.0.14:22-10.0.0.1:39424.service - OpenSSH per-connection server daemon (10.0.0.1:39424).
Mar 17 17:43:54.921262 sshd[6045]: Accepted publickey for core from 10.0.0.1 port 39424 ssh2: RSA SHA256:sryA1eJfWIez1lK1HnLUziwsxLkQurNjq13plyk0DV0
Mar 17 17:43:54.923206 sshd-session[6045]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 17 17:43:54.927998 systemd-logind[1494]: New session 14 of user core.
Mar 17 17:43:54.937421 systemd[1]: Started session-14.scope - Session 14 of User core.
Mar 17 17:43:55.059673 sshd[6047]: Connection closed by 10.0.0.1 port 39424
Mar 17 17:43:55.060093 sshd-session[6045]: pam_unix(sshd:session): session closed for user core
Mar 17 17:43:55.065884 systemd[1]: sshd@13-10.0.0.14:22-10.0.0.1:39424.service: Deactivated successfully.
Mar 17 17:43:55.068340 systemd[1]: session-14.scope: Deactivated successfully.
Mar 17 17:43:55.069205 systemd-logind[1494]: Session 14 logged out. Waiting for processes to exit.
Mar 17 17:43:55.070267 systemd-logind[1494]: Removed session 14.
Mar 17 17:44:00.073658 systemd[1]: Started sshd@14-10.0.0.14:22-10.0.0.1:39436.service - OpenSSH per-connection server daemon (10.0.0.1:39436).
Mar 17 17:44:00.338752 sshd[6081]: Accepted publickey for core from 10.0.0.1 port 39436 ssh2: RSA SHA256:sryA1eJfWIez1lK1HnLUziwsxLkQurNjq13plyk0DV0
Mar 17 17:44:00.340573 sshd-session[6081]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 17 17:44:00.345412 systemd-logind[1494]: New session 15 of user core.
Mar 17 17:44:00.359447 systemd[1]: Started session-15.scope - Session 15 of User core.
Mar 17 17:44:00.491105 sshd[6083]: Connection closed by 10.0.0.1 port 39436
Mar 17 17:44:00.491563 sshd-session[6081]: pam_unix(sshd:session): session closed for user core
Mar 17 17:44:00.504331 systemd[1]: sshd@14-10.0.0.14:22-10.0.0.1:39436.service: Deactivated successfully.
Mar 17 17:44:00.506657 systemd[1]: session-15.scope: Deactivated successfully.
Mar 17 17:44:00.508596 systemd-logind[1494]: Session 15 logged out. Waiting for processes to exit.
Mar 17 17:44:00.516529 systemd[1]: Started sshd@15-10.0.0.14:22-10.0.0.1:39446.service - OpenSSH per-connection server daemon (10.0.0.1:39446).
Mar 17 17:44:00.517574 systemd-logind[1494]: Removed session 15.
Mar 17 17:44:00.554806 sshd[6096]: Accepted publickey for core from 10.0.0.1 port 39446 ssh2: RSA SHA256:sryA1eJfWIez1lK1HnLUziwsxLkQurNjq13plyk0DV0
Mar 17 17:44:00.556462 sshd-session[6096]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 17 17:44:00.561763 systemd-logind[1494]: New session 16 of user core.
Mar 17 17:44:00.571407 systemd[1]: Started session-16.scope - Session 16 of User core.
Mar 17 17:44:01.004396 sshd[6099]: Connection closed by 10.0.0.1 port 39446
Mar 17 17:44:01.004986 sshd-session[6096]: pam_unix(sshd:session): session closed for user core
Mar 17 17:44:01.020145 systemd[1]: sshd@15-10.0.0.14:22-10.0.0.1:39446.service: Deactivated successfully.
Mar 17 17:44:01.022155 systemd[1]: session-16.scope: Deactivated successfully.
Mar 17 17:44:01.022981 systemd-logind[1494]: Session 16 logged out. Waiting for processes to exit.
Mar 17 17:44:01.029582 systemd[1]: Started sshd@16-10.0.0.14:22-10.0.0.1:39458.service - OpenSSH per-connection server daemon (10.0.0.1:39458).
Mar 17 17:44:01.030418 systemd-logind[1494]: Removed session 16.
Mar 17 17:44:01.072348 sshd[6110]: Accepted publickey for core from 10.0.0.1 port 39458 ssh2: RSA SHA256:sryA1eJfWIez1lK1HnLUziwsxLkQurNjq13plyk0DV0
Mar 17 17:44:01.074333 sshd-session[6110]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 17 17:44:01.079646 systemd-logind[1494]: New session 17 of user core.
Mar 17 17:44:01.089403 systemd[1]: Started session-17.scope - Session 17 of User core.
Mar 17 17:44:02.681312 sshd[6113]: Connection closed by 10.0.0.1 port 39458
Mar 17 17:44:02.681917 sshd-session[6110]: pam_unix(sshd:session): session closed for user core
Mar 17 17:44:02.698173 systemd[1]: Started sshd@17-10.0.0.14:22-10.0.0.1:39472.service - OpenSSH per-connection server daemon (10.0.0.1:39472).
Mar 17 17:44:02.699153 systemd[1]: sshd@16-10.0.0.14:22-10.0.0.1:39458.service: Deactivated successfully.
Mar 17 17:44:02.701846 systemd[1]: session-17.scope: Deactivated successfully.
Mar 17 17:44:02.702171 systemd[1]: session-17.scope: Consumed 571ms CPU time, 65.6M memory peak.
Mar 17 17:44:02.705405 systemd-logind[1494]: Session 17 logged out. Waiting for processes to exit.
Mar 17 17:44:02.713949 systemd-logind[1494]: Removed session 17.
Mar 17 17:44:02.751195 sshd[6130]: Accepted publickey for core from 10.0.0.1 port 39472 ssh2: RSA SHA256:sryA1eJfWIez1lK1HnLUziwsxLkQurNjq13plyk0DV0
Mar 17 17:44:02.752846 sshd-session[6130]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 17 17:44:02.757924 systemd-logind[1494]: New session 18 of user core.
Mar 17 17:44:02.768446 systemd[1]: Started session-18.scope - Session 18 of User core.
Mar 17 17:44:03.047810 sshd[6136]: Connection closed by 10.0.0.1 port 39472
Mar 17 17:44:03.049653 sshd-session[6130]: pam_unix(sshd:session): session closed for user core
Mar 17 17:44:03.060472 systemd[1]: sshd@17-10.0.0.14:22-10.0.0.1:39472.service: Deactivated successfully.
Mar 17 17:44:03.062696 systemd[1]: session-18.scope: Deactivated successfully.
Mar 17 17:44:03.063727 systemd-logind[1494]: Session 18 logged out. Waiting for processes to exit.
Mar 17 17:44:03.076639 systemd[1]: Started sshd@18-10.0.0.14:22-10.0.0.1:39488.service - OpenSSH per-connection server daemon (10.0.0.1:39488).
Mar 17 17:44:03.077447 systemd-logind[1494]: Removed session 18.
Mar 17 17:44:03.115952 sshd[6147]: Accepted publickey for core from 10.0.0.1 port 39488 ssh2: RSA SHA256:sryA1eJfWIez1lK1HnLUziwsxLkQurNjq13plyk0DV0
Mar 17 17:44:03.117899 sshd-session[6147]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 17 17:44:03.122966 systemd-logind[1494]: New session 19 of user core.
Mar 17 17:44:03.129473 systemd[1]: Started session-19.scope - Session 19 of User core.
Mar 17 17:44:03.252076 sshd[6150]: Connection closed by 10.0.0.1 port 39488
Mar 17 17:44:03.252481 sshd-session[6147]: pam_unix(sshd:session): session closed for user core
Mar 17 17:44:03.255579 systemd[1]: sshd@18-10.0.0.14:22-10.0.0.1:39488.service: Deactivated successfully.
Mar 17 17:44:03.257951 systemd[1]: session-19.scope: Deactivated successfully.
Mar 17 17:44:03.259910 systemd-logind[1494]: Session 19 logged out. Waiting for processes to exit.
Mar 17 17:44:03.261150 systemd-logind[1494]: Removed session 19.
Mar 17 17:44:03.666735 containerd[1511]: time="2025-03-17T17:44:03.666541357Z" level=info msg="StopPodSandbox for \"a76d3257dd8be43a7cc3bd31473414beabbebb92e09df84988345f517c7acdb6\""
Mar 17 17:44:03.666735 containerd[1511]: time="2025-03-17T17:44:03.666666863Z" level=info msg="TearDown network for sandbox \"a76d3257dd8be43a7cc3bd31473414beabbebb92e09df84988345f517c7acdb6\" successfully"
Mar 17 17:44:03.666735 containerd[1511]: time="2025-03-17T17:44:03.666680339Z" level=info msg="StopPodSandbox for \"a76d3257dd8be43a7cc3bd31473414beabbebb92e09df84988345f517c7acdb6\" returns successfully"
Mar 17 17:44:03.672788 containerd[1511]: time="2025-03-17T17:44:03.672730682Z" level=info msg="RemovePodSandbox for \"a76d3257dd8be43a7cc3bd31473414beabbebb92e09df84988345f517c7acdb6\""
Mar 17 17:44:03.684910 containerd[1511]: time="2025-03-17T17:44:03.684859312Z" level=info msg="Forcibly stopping sandbox \"a76d3257dd8be43a7cc3bd31473414beabbebb92e09df84988345f517c7acdb6\""
Mar 17 17:44:03.685058 containerd[1511]: time="2025-03-17T17:44:03.684978967Z" level=info msg="TearDown network for sandbox \"a76d3257dd8be43a7cc3bd31473414beabbebb92e09df84988345f517c7acdb6\" successfully"
Mar 17 17:44:03.910709 containerd[1511]: time="2025-03-17T17:44:03.910634964Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a76d3257dd8be43a7cc3bd31473414beabbebb92e09df84988345f517c7acdb6\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Mar 17 17:44:03.910854 containerd[1511]: time="2025-03-17T17:44:03.910762645Z" level=info msg="RemovePodSandbox \"a76d3257dd8be43a7cc3bd31473414beabbebb92e09df84988345f517c7acdb6\" returns successfully"
Mar 17 17:44:03.911475 containerd[1511]: time="2025-03-17T17:44:03.911424833Z" level=info msg="StopPodSandbox for \"2cb67c065fae8b5c1bdba86659bed6455408568df13f798feb182fef9902a6fd\""
Mar 17 17:44:03.911657 containerd[1511]: time="2025-03-17T17:44:03.911547262Z" level=info msg="TearDown network for sandbox \"2cb67c065fae8b5c1bdba86659bed6455408568df13f798feb182fef9902a6fd\" successfully"
Mar 17 17:44:03.911657 containerd[1511]: time="2025-03-17T17:44:03.911557763Z" level=info msg="StopPodSandbox for \"2cb67c065fae8b5c1bdba86659bed6455408568df13f798feb182fef9902a6fd\" returns successfully"
Mar 17 17:44:03.912048 containerd[1511]: time="2025-03-17T17:44:03.912009233Z" level=info msg="RemovePodSandbox for \"2cb67c065fae8b5c1bdba86659bed6455408568df13f798feb182fef9902a6fd\""
Mar 17 17:44:03.912127 containerd[1511]: time="2025-03-17T17:44:03.912050360Z" level=info msg="Forcibly stopping sandbox \"2cb67c065fae8b5c1bdba86659bed6455408568df13f798feb182fef9902a6fd\""
Mar 17 17:44:03.912202 containerd[1511]: time="2025-03-17T17:44:03.912158594Z" level=info msg="TearDown network for sandbox \"2cb67c065fae8b5c1bdba86659bed6455408568df13f798feb182fef9902a6fd\" successfully"
Mar 17 17:44:03.974183 containerd[1511]: time="2025-03-17T17:44:03.974040445Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"2cb67c065fae8b5c1bdba86659bed6455408568df13f798feb182fef9902a6fd\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Mar 17 17:44:03.974183 containerd[1511]: time="2025-03-17T17:44:03.974119013Z" level=info msg="RemovePodSandbox \"2cb67c065fae8b5c1bdba86659bed6455408568df13f798feb182fef9902a6fd\" returns successfully"
Mar 17 17:44:03.974651 containerd[1511]: time="2025-03-17T17:44:03.974627560Z" level=info msg="StopPodSandbox for \"af73a63aa645f1d51633cc194edb5fce001c89b5bb20b391b65014c31e4f0c4f\""
Mar 17 17:44:03.975084 containerd[1511]: time="2025-03-17T17:44:03.974893130Z" level=info msg="TearDown network for sandbox \"af73a63aa645f1d51633cc194edb5fce001c89b5bb20b391b65014c31e4f0c4f\" successfully"
Mar 17 17:44:03.975084 containerd[1511]: time="2025-03-17T17:44:03.974912898Z" level=info msg="StopPodSandbox for \"af73a63aa645f1d51633cc194edb5fce001c89b5bb20b391b65014c31e4f0c4f\" returns successfully"
Mar 17 17:44:03.975497 containerd[1511]: time="2025-03-17T17:44:03.975442786Z" level=info msg="RemovePodSandbox for \"af73a63aa645f1d51633cc194edb5fce001c89b5bb20b391b65014c31e4f0c4f\""
Mar 17 17:44:03.975552 containerd[1511]: time="2025-03-17T17:44:03.975496236Z" level=info msg="Forcibly stopping sandbox \"af73a63aa645f1d51633cc194edb5fce001c89b5bb20b391b65014c31e4f0c4f\""
Mar 17 17:44:03.975666 containerd[1511]: time="2025-03-17T17:44:03.975618226Z" level=info msg="TearDown network for sandbox \"af73a63aa645f1d51633cc194edb5fce001c89b5bb20b391b65014c31e4f0c4f\" successfully"
Mar 17 17:44:04.064334 containerd[1511]: time="2025-03-17T17:44:04.064275362Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"af73a63aa645f1d51633cc194edb5fce001c89b5bb20b391b65014c31e4f0c4f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Mar 17 17:44:04.064478 containerd[1511]: time="2025-03-17T17:44:04.064360137Z" level=info msg="RemovePodSandbox \"af73a63aa645f1d51633cc194edb5fce001c89b5bb20b391b65014c31e4f0c4f\" returns successfully"
Mar 17 17:44:04.064897 containerd[1511]: time="2025-03-17T17:44:04.064864825Z" level=info msg="StopPodSandbox for \"d6067ee3ef3f1c8458c7806e1fd60848edb9e6130cf9c3a2b62cb333064bae52\""
Mar 17 17:44:04.065053 containerd[1511]: time="2025-03-17T17:44:04.064991280Z" level=info msg="TearDown network for sandbox \"d6067ee3ef3f1c8458c7806e1fd60848edb9e6130cf9c3a2b62cb333064bae52\" successfully"
Mar 17 17:44:04.065053 containerd[1511]: time="2025-03-17T17:44:04.065044973Z" level=info msg="StopPodSandbox for \"d6067ee3ef3f1c8458c7806e1fd60848edb9e6130cf9c3a2b62cb333064bae52\" returns successfully"
Mar 17 17:44:04.065365 containerd[1511]: time="2025-03-17T17:44:04.065343252Z" level=info msg="RemovePodSandbox for \"d6067ee3ef3f1c8458c7806e1fd60848edb9e6130cf9c3a2b62cb333064bae52\""
Mar 17 17:44:04.065414 containerd[1511]: time="2025-03-17T17:44:04.065372868Z" level=info msg="Forcibly stopping sandbox \"d6067ee3ef3f1c8458c7806e1fd60848edb9e6130cf9c3a2b62cb333064bae52\""
Mar 17 17:44:04.065509 containerd[1511]: time="2025-03-17T17:44:04.065470608Z" level=info msg="TearDown network for sandbox \"d6067ee3ef3f1c8458c7806e1fd60848edb9e6130cf9c3a2b62cb333064bae52\" successfully"
Mar 17 17:44:04.164343 containerd[1511]: time="2025-03-17T17:44:04.164262338Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d6067ee3ef3f1c8458c7806e1fd60848edb9e6130cf9c3a2b62cb333064bae52\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Mar 17 17:44:04.164497 containerd[1511]: time="2025-03-17T17:44:04.164361249Z" level=info msg="RemovePodSandbox \"d6067ee3ef3f1c8458c7806e1fd60848edb9e6130cf9c3a2b62cb333064bae52\" returns successfully"
Mar 17 17:44:04.164835 containerd[1511]: time="2025-03-17T17:44:04.164793737Z" level=info msg="StopPodSandbox for \"2b9c986ecac5a820435cbfa9e312ad5f5736fd10e27620edeccecc76eb005ab1\""
Mar 17 17:44:04.164983 containerd[1511]: time="2025-03-17T17:44:04.164952485Z" level=info msg="TearDown network for sandbox \"2b9c986ecac5a820435cbfa9e312ad5f5736fd10e27620edeccecc76eb005ab1\" successfully"
Mar 17 17:44:04.164983 containerd[1511]: time="2025-03-17T17:44:04.164977393Z" level=info msg="StopPodSandbox for \"2b9c986ecac5a820435cbfa9e312ad5f5736fd10e27620edeccecc76eb005ab1\" returns successfully"
Mar 17 17:44:04.166309 containerd[1511]: time="2025-03-17T17:44:04.165345165Z" level=info msg="RemovePodSandbox for \"2b9c986ecac5a820435cbfa9e312ad5f5736fd10e27620edeccecc76eb005ab1\""
Mar 17 17:44:04.166309 containerd[1511]: time="2025-03-17T17:44:04.165382358Z" level=info msg="Forcibly stopping sandbox \"2b9c986ecac5a820435cbfa9e312ad5f5736fd10e27620edeccecc76eb005ab1\""
Mar 17 17:44:04.166309 containerd[1511]: time="2025-03-17T17:44:04.165471080Z" level=info msg="TearDown network for sandbox \"2b9c986ecac5a820435cbfa9e312ad5f5736fd10e27620edeccecc76eb005ab1\" successfully"
Mar 17 17:44:04.268754 containerd[1511]: time="2025-03-17T17:44:04.268628496Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"2b9c986ecac5a820435cbfa9e312ad5f5736fd10e27620edeccecc76eb005ab1\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Mar 17 17:44:04.268754 containerd[1511]: time="2025-03-17T17:44:04.268710856Z" level=info msg="RemovePodSandbox \"2b9c986ecac5a820435cbfa9e312ad5f5736fd10e27620edeccecc76eb005ab1\" returns successfully"
Mar 17 17:44:04.269171 containerd[1511]: time="2025-03-17T17:44:04.269144615Z" level=info msg="StopPodSandbox for \"a625cb55d3f0798411969a54cfa3b06acd6337b71463192ab198ee24f2b40587\""
Mar 17 17:44:04.269376 containerd[1511]: time="2025-03-17T17:44:04.269322370Z" level=info msg="TearDown network for sandbox \"a625cb55d3f0798411969a54cfa3b06acd6337b71463192ab198ee24f2b40587\" successfully"
Mar 17 17:44:04.269376 containerd[1511]: time="2025-03-17T17:44:04.269340836Z" level=info msg="StopPodSandbox for \"a625cb55d3f0798411969a54cfa3b06acd6337b71463192ab198ee24f2b40587\" returns successfully"
Mar 17 17:44:04.269739 containerd[1511]: time="2025-03-17T17:44:04.269687848Z" level=info msg="RemovePodSandbox for \"a625cb55d3f0798411969a54cfa3b06acd6337b71463192ab198ee24f2b40587\""
Mar 17 17:44:04.269739 containerd[1511]: time="2025-03-17T17:44:04.269717185Z" level=info msg="Forcibly stopping sandbox \"a625cb55d3f0798411969a54cfa3b06acd6337b71463192ab198ee24f2b40587\""
Mar 17 17:44:04.269947 containerd[1511]: time="2025-03-17T17:44:04.269793784Z" level=info msg="TearDown network for sandbox \"a625cb55d3f0798411969a54cfa3b06acd6337b71463192ab198ee24f2b40587\" successfully"
Mar 17 17:44:04.368363 containerd[1511]: time="2025-03-17T17:44:04.368293017Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a625cb55d3f0798411969a54cfa3b06acd6337b71463192ab198ee24f2b40587\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Mar 17 17:44:04.368510 containerd[1511]: time="2025-03-17T17:44:04.368384905Z" level=info msg="RemovePodSandbox \"a625cb55d3f0798411969a54cfa3b06acd6337b71463192ab198ee24f2b40587\" returns successfully"
Mar 17 17:44:04.368999 containerd[1511]: time="2025-03-17T17:44:04.368829005Z" level=info msg="StopPodSandbox for \"b3416c553b8ee2ddd7115ce7b04c86d519dee7b4f6a9ee2e1a0c06779a132290\""
Mar 17 17:44:04.368999 containerd[1511]: time="2025-03-17T17:44:04.368943867Z" level=info msg="TearDown network for sandbox \"b3416c553b8ee2ddd7115ce7b04c86d519dee7b4f6a9ee2e1a0c06779a132290\" successfully"
Mar 17 17:44:04.368999 containerd[1511]: time="2025-03-17T17:44:04.368954187Z" level=info msg="StopPodSandbox for \"b3416c553b8ee2ddd7115ce7b04c86d519dee7b4f6a9ee2e1a0c06779a132290\" returns successfully"
Mar 17 17:44:04.369263 containerd[1511]: time="2025-03-17T17:44:04.369213720Z" level=info msg="RemovePodSandbox for \"b3416c553b8ee2ddd7115ce7b04c86d519dee7b4f6a9ee2e1a0c06779a132290\""
Mar 17 17:44:04.369308 containerd[1511]: time="2025-03-17T17:44:04.369233759Z" level=info msg="Forcibly stopping sandbox \"b3416c553b8ee2ddd7115ce7b04c86d519dee7b4f6a9ee2e1a0c06779a132290\""
Mar 17 17:44:04.369387 containerd[1511]: time="2025-03-17T17:44:04.369347990Z" level=info msg="TearDown network for sandbox \"b3416c553b8ee2ddd7115ce7b04c86d519dee7b4f6a9ee2e1a0c06779a132290\" successfully"
Mar 17 17:44:04.491639 containerd[1511]: time="2025-03-17T17:44:04.491571551Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b3416c553b8ee2ddd7115ce7b04c86d519dee7b4f6a9ee2e1a0c06779a132290\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Mar 17 17:44:04.491910 containerd[1511]: time="2025-03-17T17:44:04.491653098Z" level=info msg="RemovePodSandbox \"b3416c553b8ee2ddd7115ce7b04c86d519dee7b4f6a9ee2e1a0c06779a132290\" returns successfully"
Mar 17 17:44:04.492166 containerd[1511]: time="2025-03-17T17:44:04.492132386Z" level=info msg="StopPodSandbox for \"b9037660650fef2882c30e4a390d90ea5d95ff351ca1fe4fef4c05e1892a40bf\""
Mar 17 17:44:04.492339 containerd[1511]: time="2025-03-17T17:44:04.492303388Z" level=info msg="TearDown network for sandbox \"b9037660650fef2882c30e4a390d90ea5d95ff351ca1fe4fef4c05e1892a40bf\" successfully"
Mar 17 17:44:04.492339 containerd[1511]: time="2025-03-17T17:44:04.492322715Z" level=info msg="StopPodSandbox for \"b9037660650fef2882c30e4a390d90ea5d95ff351ca1fe4fef4c05e1892a40bf\" returns successfully"
Mar 17 17:44:04.492601 containerd[1511]: time="2025-03-17T17:44:04.492575966Z" level=info msg="RemovePodSandbox for \"b9037660650fef2882c30e4a390d90ea5d95ff351ca1fe4fef4c05e1892a40bf\""
Mar 17 17:44:04.492601 containerd[1511]: time="2025-03-17T17:44:04.492598079Z" level=info msg="Forcibly stopping sandbox \"b9037660650fef2882c30e4a390d90ea5d95ff351ca1fe4fef4c05e1892a40bf\""
Mar 17 17:44:04.492706 containerd[1511]: time="2025-03-17T17:44:04.492669848Z" level=info msg="TearDown network for sandbox \"b9037660650fef2882c30e4a390d90ea5d95ff351ca1fe4fef4c05e1892a40bf\" successfully"
Mar 17 17:44:04.546164 containerd[1511]: time="2025-03-17T17:44:04.546011506Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b9037660650fef2882c30e4a390d90ea5d95ff351ca1fe4fef4c05e1892a40bf\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Mar 17 17:44:04.546164 containerd[1511]: time="2025-03-17T17:44:04.546091000Z" level=info msg="RemovePodSandbox \"b9037660650fef2882c30e4a390d90ea5d95ff351ca1fe4fef4c05e1892a40bf\" returns successfully"
Mar 17 17:44:04.546729 containerd[1511]: time="2025-03-17T17:44:04.546659260Z" level=info msg="StopPodSandbox for \"26ba0aa2321805999e40ba2fc413b837bfc85df793e284ed1b86c369f534a215\""
Mar 17 17:44:04.546802 containerd[1511]: time="2025-03-17T17:44:04.546784443Z" level=info msg="TearDown network for sandbox \"26ba0aa2321805999e40ba2fc413b837bfc85df793e284ed1b86c369f534a215\" successfully"
Mar 17 17:44:04.546836 containerd[1511]: time="2025-03-17T17:44:04.546799793Z" level=info msg="StopPodSandbox for \"26ba0aa2321805999e40ba2fc413b837bfc85df793e284ed1b86c369f534a215\" returns successfully"
Mar 17 17:44:04.547102 containerd[1511]: time="2025-03-17T17:44:04.547077030Z" level=info msg="RemovePodSandbox for \"26ba0aa2321805999e40ba2fc413b837bfc85df793e284ed1b86c369f534a215\""
Mar 17 17:44:04.547140 containerd[1511]: time="2025-03-17T17:44:04.547107369Z" level=info msg="Forcibly stopping sandbox \"26ba0aa2321805999e40ba2fc413b837bfc85df793e284ed1b86c369f534a215\""
Mar 17 17:44:04.547275 containerd[1511]: time="2025-03-17T17:44:04.547195981Z" level=info msg="TearDown network for sandbox \"26ba0aa2321805999e40ba2fc413b837bfc85df793e284ed1b86c369f534a215\" successfully"
Mar 17 17:44:04.658489 containerd[1511]: time="2025-03-17T17:44:04.658399974Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"26ba0aa2321805999e40ba2fc413b837bfc85df793e284ed1b86c369f534a215\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Mar 17 17:44:04.658489 containerd[1511]: time="2025-03-17T17:44:04.658478586Z" level=info msg="RemovePodSandbox \"26ba0aa2321805999e40ba2fc413b837bfc85df793e284ed1b86c369f534a215\" returns successfully"
Mar 17 17:44:04.658999 containerd[1511]: time="2025-03-17T17:44:04.658976301Z" level=info msg="StopPodSandbox for \"4995d670e3f4ae5f83d2ae9fdfeb42a7c2819afc980a6f20207b35f48e319200\""
Mar 17 17:44:04.659171 containerd[1511]: time="2025-03-17T17:44:04.659128495Z" level=info msg="TearDown network for sandbox \"4995d670e3f4ae5f83d2ae9fdfeb42a7c2819afc980a6f20207b35f48e319200\" successfully"
Mar 17 17:44:04.659199 containerd[1511]: time="2025-03-17T17:44:04.659166850Z" level=info msg="StopPodSandbox for \"4995d670e3f4ae5f83d2ae9fdfeb42a7c2819afc980a6f20207b35f48e319200\" returns successfully"
Mar 17 17:44:04.659542 containerd[1511]: time="2025-03-17T17:44:04.659513301Z" level=info msg="RemovePodSandbox for \"4995d670e3f4ae5f83d2ae9fdfeb42a7c2819afc980a6f20207b35f48e319200\""
Mar 17 17:44:04.659613 containerd[1511]: time="2025-03-17T17:44:04.659536076Z" level=info msg="Forcibly stopping sandbox \"4995d670e3f4ae5f83d2ae9fdfeb42a7c2819afc980a6f20207b35f48e319200\""
Mar 17 17:44:04.659651 containerd[1511]: time="2025-03-17T17:44:04.659639926Z" level=info msg="TearDown network for sandbox \"4995d670e3f4ae5f83d2ae9fdfeb42a7c2819afc980a6f20207b35f48e319200\" successfully"
Mar 17 17:44:04.770490 containerd[1511]: time="2025-03-17T17:44:04.770411132Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"4995d670e3f4ae5f83d2ae9fdfeb42a7c2819afc980a6f20207b35f48e319200\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Mar 17 17:44:04.771025 containerd[1511]: time="2025-03-17T17:44:04.770503891Z" level=info msg="RemovePodSandbox \"4995d670e3f4ae5f83d2ae9fdfeb42a7c2819afc980a6f20207b35f48e319200\" returns successfully"
Mar 17 17:44:04.771025 containerd[1511]: time="2025-03-17T17:44:04.770990684Z" level=info msg="StopPodSandbox for \"3c32fc94639c6ccbeaf0ea399aeb844a6b759e7294e28e5ddf561deaf656ac0a\""
Mar 17 17:44:04.771141 containerd[1511]: time="2025-03-17T17:44:04.771117630Z" level=info msg="TearDown network for sandbox \"3c32fc94639c6ccbeaf0ea399aeb844a6b759e7294e28e5ddf561deaf656ac0a\" successfully"
Mar 17 17:44:04.771141 containerd[1511]: time="2025-03-17T17:44:04.771137249Z" level=info msg="StopPodSandbox for \"3c32fc94639c6ccbeaf0ea399aeb844a6b759e7294e28e5ddf561deaf656ac0a\" returns successfully"
Mar 17 17:44:04.771402 containerd[1511]: time="2025-03-17T17:44:04.771377343Z" level=info msg="RemovePodSandbox for \"3c32fc94639c6ccbeaf0ea399aeb844a6b759e7294e28e5ddf561deaf656ac0a\""
Mar 17 17:44:04.771470 containerd[1511]: time="2025-03-17T17:44:04.771403103Z" level=info msg="Forcibly stopping sandbox \"3c32fc94639c6ccbeaf0ea399aeb844a6b759e7294e28e5ddf561deaf656ac0a\""
Mar 17 17:44:04.771538 containerd[1511]: time="2025-03-17T17:44:04.771485122Z" level=info msg="TearDown network for sandbox \"3c32fc94639c6ccbeaf0ea399aeb844a6b759e7294e28e5ddf561deaf656ac0a\" successfully"
Mar 17 17:44:04.902185 containerd[1511]: time="2025-03-17T17:44:04.902008098Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"3c32fc94639c6ccbeaf0ea399aeb844a6b759e7294e28e5ddf561deaf656ac0a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Mar 17 17:44:04.902185 containerd[1511]: time="2025-03-17T17:44:04.902098914Z" level=info msg="RemovePodSandbox \"3c32fc94639c6ccbeaf0ea399aeb844a6b759e7294e28e5ddf561deaf656ac0a\" returns successfully"
Mar 17 17:44:04.902675 containerd[1511]: time="2025-03-17T17:44:04.902652486Z" level=info msg="StopPodSandbox for \"73bb621c1b0eab504f437f6a4408c5e08f9fdfca9359426c070502f43d2e111f\""
Mar 17 17:44:04.902908 containerd[1511]: time="2025-03-17T17:44:04.902780244Z" level=info msg="TearDown network for sandbox \"73bb621c1b0eab504f437f6a4408c5e08f9fdfca9359426c070502f43d2e111f\" successfully"
Mar 17 17:44:04.902908 containerd[1511]: time="2025-03-17T17:44:04.902901218Z" level=info msg="StopPodSandbox for \"73bb621c1b0eab504f437f6a4408c5e08f9fdfca9359426c070502f43d2e111f\" returns successfully"
Mar 17 17:44:04.903292 containerd[1511]: time="2025-03-17T17:44:04.903234744Z" level=info msg="RemovePodSandbox for \"73bb621c1b0eab504f437f6a4408c5e08f9fdfca9359426c070502f43d2e111f\""
Mar 17 17:44:04.903357 containerd[1511]: time="2025-03-17T17:44:04.903292707Z" level=info msg="Forcibly stopping sandbox \"73bb621c1b0eab504f437f6a4408c5e08f9fdfca9359426c070502f43d2e111f\""
Mar 17 17:44:04.903419 containerd[1511]: time="2025-03-17T17:44:04.903380366Z" level=info msg="TearDown network for sandbox \"73bb621c1b0eab504f437f6a4408c5e08f9fdfca9359426c070502f43d2e111f\" successfully"
Mar 17 17:44:05.034015 containerd[1511]: time="2025-03-17T17:44:05.033938774Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"73bb621c1b0eab504f437f6a4408c5e08f9fdfca9359426c070502f43d2e111f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Mar 17 17:44:05.034015 containerd[1511]: time="2025-03-17T17:44:05.034018078Z" level=info msg="RemovePodSandbox \"73bb621c1b0eab504f437f6a4408c5e08f9fdfca9359426c070502f43d2e111f\" returns successfully" Mar 17 17:44:05.034661 containerd[1511]: time="2025-03-17T17:44:05.034634420Z" level=info msg="StopPodSandbox for \"22d2b9a804aff601d7932c4460691afa93c66169fac5c4cfe51532162fb0b3e0\"" Mar 17 17:44:05.034899 containerd[1511]: time="2025-03-17T17:44:05.034870397Z" level=info msg="TearDown network for sandbox \"22d2b9a804aff601d7932c4460691afa93c66169fac5c4cfe51532162fb0b3e0\" successfully" Mar 17 17:44:05.034899 containerd[1511]: time="2025-03-17T17:44:05.034893582Z" level=info msg="StopPodSandbox for \"22d2b9a804aff601d7932c4460691afa93c66169fac5c4cfe51532162fb0b3e0\" returns successfully" Mar 17 17:44:05.035187 containerd[1511]: time="2025-03-17T17:44:05.035161511Z" level=info msg="RemovePodSandbox for \"22d2b9a804aff601d7932c4460691afa93c66169fac5c4cfe51532162fb0b3e0\"" Mar 17 17:44:05.035187 containerd[1511]: time="2025-03-17T17:44:05.035187711Z" level=info msg="Forcibly stopping sandbox \"22d2b9a804aff601d7932c4460691afa93c66169fac5c4cfe51532162fb0b3e0\"" Mar 17 17:44:05.035354 containerd[1511]: time="2025-03-17T17:44:05.035303505Z" level=info msg="TearDown network for sandbox \"22d2b9a804aff601d7932c4460691afa93c66169fac5c4cfe51532162fb0b3e0\" successfully" Mar 17 17:44:05.040380 containerd[1511]: time="2025-03-17T17:44:05.040333861Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"22d2b9a804aff601d7932c4460691afa93c66169fac5c4cfe51532162fb0b3e0\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 17 17:44:05.040473 containerd[1511]: time="2025-03-17T17:44:05.040394148Z" level=info msg="RemovePodSandbox \"22d2b9a804aff601d7932c4460691afa93c66169fac5c4cfe51532162fb0b3e0\" returns successfully" Mar 17 17:44:05.040951 containerd[1511]: time="2025-03-17T17:44:05.040774293Z" level=info msg="StopPodSandbox for \"c727faf86979f098f3854e7a292376485b8fdcffd8e4a361726d0eb4ed6c4541\"" Mar 17 17:44:05.040951 containerd[1511]: time="2025-03-17T17:44:05.040881872Z" level=info msg="TearDown network for sandbox \"c727faf86979f098f3854e7a292376485b8fdcffd8e4a361726d0eb4ed6c4541\" successfully" Mar 17 17:44:05.040951 containerd[1511]: time="2025-03-17T17:44:05.040894717Z" level=info msg="StopPodSandbox for \"c727faf86979f098f3854e7a292376485b8fdcffd8e4a361726d0eb4ed6c4541\" returns successfully" Mar 17 17:44:05.041192 containerd[1511]: time="2025-03-17T17:44:05.041167555Z" level=info msg="RemovePodSandbox for \"c727faf86979f098f3854e7a292376485b8fdcffd8e4a361726d0eb4ed6c4541\"" Mar 17 17:44:05.041273 containerd[1511]: time="2025-03-17T17:44:05.041193816Z" level=info msg="Forcibly stopping sandbox \"c727faf86979f098f3854e7a292376485b8fdcffd8e4a361726d0eb4ed6c4541\"" Mar 17 17:44:05.041350 containerd[1511]: time="2025-03-17T17:44:05.041307084Z" level=info msg="TearDown network for sandbox \"c727faf86979f098f3854e7a292376485b8fdcffd8e4a361726d0eb4ed6c4541\" successfully" Mar 17 17:44:05.045816 containerd[1511]: time="2025-03-17T17:44:05.045785773Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c727faf86979f098f3854e7a292376485b8fdcffd8e4a361726d0eb4ed6c4541\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 17 17:44:05.045924 containerd[1511]: time="2025-03-17T17:44:05.045832203Z" level=info msg="RemovePodSandbox \"c727faf86979f098f3854e7a292376485b8fdcffd8e4a361726d0eb4ed6c4541\" returns successfully" Mar 17 17:44:05.046168 containerd[1511]: time="2025-03-17T17:44:05.046143595Z" level=info msg="StopPodSandbox for \"9247361ac2ade69977479b8eda935f31e0c380b93fd00fee4265ac9e0a0ed408\"" Mar 17 17:44:05.046283 containerd[1511]: time="2025-03-17T17:44:05.046234311Z" level=info msg="TearDown network for sandbox \"9247361ac2ade69977479b8eda935f31e0c380b93fd00fee4265ac9e0a0ed408\" successfully" Mar 17 17:44:05.046283 containerd[1511]: time="2025-03-17T17:44:05.046273667Z" level=info msg="StopPodSandbox for \"9247361ac2ade69977479b8eda935f31e0c380b93fd00fee4265ac9e0a0ed408\" returns successfully" Mar 17 17:44:05.046573 containerd[1511]: time="2025-03-17T17:44:05.046546536Z" level=info msg="RemovePodSandbox for \"9247361ac2ade69977479b8eda935f31e0c380b93fd00fee4265ac9e0a0ed408\"" Mar 17 17:44:05.046626 containerd[1511]: time="2025-03-17T17:44:05.046570471Z" level=info msg="Forcibly stopping sandbox \"9247361ac2ade69977479b8eda935f31e0c380b93fd00fee4265ac9e0a0ed408\"" Mar 17 17:44:05.046704 containerd[1511]: time="2025-03-17T17:44:05.046678882Z" level=info msg="TearDown network for sandbox \"9247361ac2ade69977479b8eda935f31e0c380b93fd00fee4265ac9e0a0ed408\" successfully" Mar 17 17:44:05.050806 containerd[1511]: time="2025-03-17T17:44:05.050772695Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"9247361ac2ade69977479b8eda935f31e0c380b93fd00fee4265ac9e0a0ed408\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 17 17:44:05.050913 containerd[1511]: time="2025-03-17T17:44:05.050824765Z" level=info msg="RemovePodSandbox \"9247361ac2ade69977479b8eda935f31e0c380b93fd00fee4265ac9e0a0ed408\" returns successfully" Mar 17 17:44:05.051233 containerd[1511]: time="2025-03-17T17:44:05.051205913Z" level=info msg="StopPodSandbox for \"a0631bf50bf3a143732dff262d9b57d7825ff44db988989892670e8676df101b\"" Mar 17 17:44:05.051365 containerd[1511]: time="2025-03-17T17:44:05.051340944Z" level=info msg="TearDown network for sandbox \"a0631bf50bf3a143732dff262d9b57d7825ff44db988989892670e8676df101b\" successfully" Mar 17 17:44:05.051365 containerd[1511]: time="2025-03-17T17:44:05.051361323Z" level=info msg="StopPodSandbox for \"a0631bf50bf3a143732dff262d9b57d7825ff44db988989892670e8676df101b\" returns successfully" Mar 17 17:44:05.051668 containerd[1511]: time="2025-03-17T17:44:05.051642788Z" level=info msg="RemovePodSandbox for \"a0631bf50bf3a143732dff262d9b57d7825ff44db988989892670e8676df101b\"" Mar 17 17:44:05.051737 containerd[1511]: time="2025-03-17T17:44:05.051674500Z" level=info msg="Forcibly stopping sandbox \"a0631bf50bf3a143732dff262d9b57d7825ff44db988989892670e8676df101b\"" Mar 17 17:44:05.051799 containerd[1511]: time="2025-03-17T17:44:05.051755517Z" level=info msg="TearDown network for sandbox \"a0631bf50bf3a143732dff262d9b57d7825ff44db988989892670e8676df101b\" successfully" Mar 17 17:44:05.059336 containerd[1511]: time="2025-03-17T17:44:05.059230253Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a0631bf50bf3a143732dff262d9b57d7825ff44db988989892670e8676df101b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 17 17:44:05.059336 containerd[1511]: time="2025-03-17T17:44:05.059331980Z" level=info msg="RemovePodSandbox \"a0631bf50bf3a143732dff262d9b57d7825ff44db988989892670e8676df101b\" returns successfully" Mar 17 17:44:05.059861 containerd[1511]: time="2025-03-17T17:44:05.059801328Z" level=info msg="StopPodSandbox for \"46d27bee2aac9f8ea5afa69e476f1d7708c4be9375e81449399c6da9d57409f0\"" Mar 17 17:44:05.059948 containerd[1511]: time="2025-03-17T17:44:05.059919166Z" level=info msg="TearDown network for sandbox \"46d27bee2aac9f8ea5afa69e476f1d7708c4be9375e81449399c6da9d57409f0\" successfully" Mar 17 17:44:05.059948 containerd[1511]: time="2025-03-17T17:44:05.059934176Z" level=info msg="StopPodSandbox for \"46d27bee2aac9f8ea5afa69e476f1d7708c4be9375e81449399c6da9d57409f0\" returns successfully" Mar 17 17:44:05.060220 containerd[1511]: time="2025-03-17T17:44:05.060195080Z" level=info msg="RemovePodSandbox for \"46d27bee2aac9f8ea5afa69e476f1d7708c4be9375e81449399c6da9d57409f0\"" Mar 17 17:44:05.060316 containerd[1511]: time="2025-03-17T17:44:05.060219478Z" level=info msg="Forcibly stopping sandbox \"46d27bee2aac9f8ea5afa69e476f1d7708c4be9375e81449399c6da9d57409f0\"" Mar 17 17:44:05.060381 containerd[1511]: time="2025-03-17T17:44:05.060335172Z" level=info msg="TearDown network for sandbox \"46d27bee2aac9f8ea5afa69e476f1d7708c4be9375e81449399c6da9d57409f0\" successfully" Mar 17 17:44:05.064791 containerd[1511]: time="2025-03-17T17:44:05.064749585Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"46d27bee2aac9f8ea5afa69e476f1d7708c4be9375e81449399c6da9d57409f0\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 17 17:44:05.064914 containerd[1511]: time="2025-03-17T17:44:05.064810483Z" level=info msg="RemovePodSandbox \"46d27bee2aac9f8ea5afa69e476f1d7708c4be9375e81449399c6da9d57409f0\" returns successfully" Mar 17 17:44:05.065196 containerd[1511]: time="2025-03-17T17:44:05.065170069Z" level=info msg="StopPodSandbox for \"03f1b883f3ce8c49936e808421e7df92873d621d70bfc03a9ce94e5d0180a08c\"" Mar 17 17:44:05.065330 containerd[1511]: time="2025-03-17T17:44:05.065307785Z" level=info msg="TearDown network for sandbox \"03f1b883f3ce8c49936e808421e7df92873d621d70bfc03a9ce94e5d0180a08c\" successfully" Mar 17 17:44:05.065330 containerd[1511]: time="2025-03-17T17:44:05.065326421Z" level=info msg="StopPodSandbox for \"03f1b883f3ce8c49936e808421e7df92873d621d70bfc03a9ce94e5d0180a08c\" returns successfully" Mar 17 17:44:05.065604 containerd[1511]: time="2025-03-17T17:44:05.065571155Z" level=info msg="RemovePodSandbox for \"03f1b883f3ce8c49936e808421e7df92873d621d70bfc03a9ce94e5d0180a08c\"" Mar 17 17:44:05.065604 containerd[1511]: time="2025-03-17T17:44:05.065598417Z" level=info msg="Forcibly stopping sandbox \"03f1b883f3ce8c49936e808421e7df92873d621d70bfc03a9ce94e5d0180a08c\"" Mar 17 17:44:05.065726 containerd[1511]: time="2025-03-17T17:44:05.065685106Z" level=info msg="TearDown network for sandbox \"03f1b883f3ce8c49936e808421e7df92873d621d70bfc03a9ce94e5d0180a08c\" successfully" Mar 17 17:44:05.070143 containerd[1511]: time="2025-03-17T17:44:05.070100521Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"03f1b883f3ce8c49936e808421e7df92873d621d70bfc03a9ce94e5d0180a08c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 17 17:44:05.070143 containerd[1511]: time="2025-03-17T17:44:05.070138644Z" level=info msg="RemovePodSandbox \"03f1b883f3ce8c49936e808421e7df92873d621d70bfc03a9ce94e5d0180a08c\" returns successfully" Mar 17 17:44:05.070497 containerd[1511]: time="2025-03-17T17:44:05.070470727Z" level=info msg="StopPodSandbox for \"605a348218e0107d12ff4cea59eb6ff512155f4808ce20dbf8845e1c2ca94a38\"" Mar 17 17:44:05.070603 containerd[1511]: time="2025-03-17T17:44:05.070570180Z" level=info msg="TearDown network for sandbox \"605a348218e0107d12ff4cea59eb6ff512155f4808ce20dbf8845e1c2ca94a38\" successfully" Mar 17 17:44:05.070603 containerd[1511]: time="2025-03-17T17:44:05.070589127Z" level=info msg="StopPodSandbox for \"605a348218e0107d12ff4cea59eb6ff512155f4808ce20dbf8845e1c2ca94a38\" returns successfully" Mar 17 17:44:05.070858 containerd[1511]: time="2025-03-17T17:44:05.070827248Z" level=info msg="RemovePodSandbox for \"605a348218e0107d12ff4cea59eb6ff512155f4808ce20dbf8845e1c2ca94a38\"" Mar 17 17:44:05.070904 containerd[1511]: time="2025-03-17T17:44:05.070857336Z" level=info msg="Forcibly stopping sandbox \"605a348218e0107d12ff4cea59eb6ff512155f4808ce20dbf8845e1c2ca94a38\"" Mar 17 17:44:05.070981 containerd[1511]: time="2025-03-17T17:44:05.070938934Z" level=info msg="TearDown network for sandbox \"605a348218e0107d12ff4cea59eb6ff512155f4808ce20dbf8845e1c2ca94a38\" successfully" Mar 17 17:44:05.075336 containerd[1511]: time="2025-03-17T17:44:05.075290986Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"605a348218e0107d12ff4cea59eb6ff512155f4808ce20dbf8845e1c2ca94a38\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 17 17:44:05.075876 containerd[1511]: time="2025-03-17T17:44:05.075347146Z" level=info msg="RemovePodSandbox \"605a348218e0107d12ff4cea59eb6ff512155f4808ce20dbf8845e1c2ca94a38\" returns successfully" Mar 17 17:44:05.075876 containerd[1511]: time="2025-03-17T17:44:05.075639120Z" level=info msg="StopPodSandbox for \"5e63d78a3d354e79550c8cdff0a8e91e3e009993d305083208f0e5ce79b3b5d3\"" Mar 17 17:44:05.075876 containerd[1511]: time="2025-03-17T17:44:05.075743593Z" level=info msg="TearDown network for sandbox \"5e63d78a3d354e79550c8cdff0a8e91e3e009993d305083208f0e5ce79b3b5d3\" successfully" Mar 17 17:44:05.075876 containerd[1511]: time="2025-03-17T17:44:05.075757399Z" level=info msg="StopPodSandbox for \"5e63d78a3d354e79550c8cdff0a8e91e3e009993d305083208f0e5ce79b3b5d3\" returns successfully" Mar 17 17:44:05.080095 containerd[1511]: time="2025-03-17T17:44:05.076029176Z" level=info msg="RemovePodSandbox for \"5e63d78a3d354e79550c8cdff0a8e91e3e009993d305083208f0e5ce79b3b5d3\"" Mar 17 17:44:05.080095 containerd[1511]: time="2025-03-17T17:44:05.076048222Z" level=info msg="Forcibly stopping sandbox \"5e63d78a3d354e79550c8cdff0a8e91e3e009993d305083208f0e5ce79b3b5d3\"" Mar 17 17:44:05.080095 containerd[1511]: time="2025-03-17T17:44:05.076128157Z" level=info msg="TearDown network for sandbox \"5e63d78a3d354e79550c8cdff0a8e91e3e009993d305083208f0e5ce79b3b5d3\" successfully" Mar 17 17:44:05.083385 containerd[1511]: time="2025-03-17T17:44:05.083339113Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"5e63d78a3d354e79550c8cdff0a8e91e3e009993d305083208f0e5ce79b3b5d3\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 17 17:44:05.083448 containerd[1511]: time="2025-03-17T17:44:05.083405261Z" level=info msg="RemovePodSandbox \"5e63d78a3d354e79550c8cdff0a8e91e3e009993d305083208f0e5ce79b3b5d3\" returns successfully" Mar 17 17:44:05.083770 containerd[1511]: time="2025-03-17T17:44:05.083747373Z" level=info msg="StopPodSandbox for \"3bd9d52ffb21c34bbcd23a5be2a60801430f0a97bb4c73a345aebc960a93ff4a\"" Mar 17 17:44:05.083852 containerd[1511]: time="2025-03-17T17:44:05.083836736Z" level=info msg="TearDown network for sandbox \"3bd9d52ffb21c34bbcd23a5be2a60801430f0a97bb4c73a345aebc960a93ff4a\" successfully" Mar 17 17:44:05.083887 containerd[1511]: time="2025-03-17T17:44:05.083850663Z" level=info msg="StopPodSandbox for \"3bd9d52ffb21c34bbcd23a5be2a60801430f0a97bb4c73a345aebc960a93ff4a\" returns successfully" Mar 17 17:44:05.084114 containerd[1511]: time="2025-03-17T17:44:05.084094736Z" level=info msg="RemovePodSandbox for \"3bd9d52ffb21c34bbcd23a5be2a60801430f0a97bb4c73a345aebc960a93ff4a\"" Mar 17 17:44:05.084114 containerd[1511]: time="2025-03-17T17:44:05.084113542Z" level=info msg="Forcibly stopping sandbox \"3bd9d52ffb21c34bbcd23a5be2a60801430f0a97bb4c73a345aebc960a93ff4a\"" Mar 17 17:44:05.084231 containerd[1511]: time="2025-03-17T17:44:05.084177556Z" level=info msg="TearDown network for sandbox \"3bd9d52ffb21c34bbcd23a5be2a60801430f0a97bb4c73a345aebc960a93ff4a\" successfully" Mar 17 17:44:05.088693 containerd[1511]: time="2025-03-17T17:44:05.088643068Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"3bd9d52ffb21c34bbcd23a5be2a60801430f0a97bb4c73a345aebc960a93ff4a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 17 17:44:05.088693 containerd[1511]: time="2025-03-17T17:44:05.088687153Z" level=info msg="RemovePodSandbox \"3bd9d52ffb21c34bbcd23a5be2a60801430f0a97bb4c73a345aebc960a93ff4a\" returns successfully" Mar 17 17:44:05.089118 containerd[1511]: time="2025-03-17T17:44:05.088949501Z" level=info msg="StopPodSandbox for \"f76b26f678734a68282a2eaf69c606018f2f0eab395982a5f7f178d61349920e\"" Mar 17 17:44:05.089118 containerd[1511]: time="2025-03-17T17:44:05.089055286Z" level=info msg="TearDown network for sandbox \"f76b26f678734a68282a2eaf69c606018f2f0eab395982a5f7f178d61349920e\" successfully" Mar 17 17:44:05.089118 containerd[1511]: time="2025-03-17T17:44:05.089067580Z" level=info msg="StopPodSandbox for \"f76b26f678734a68282a2eaf69c606018f2f0eab395982a5f7f178d61349920e\" returns successfully" Mar 17 17:44:05.089351 containerd[1511]: time="2025-03-17T17:44:05.089327724Z" level=info msg="RemovePodSandbox for \"f76b26f678734a68282a2eaf69c606018f2f0eab395982a5f7f178d61349920e\"" Mar 17 17:44:05.089418 containerd[1511]: time="2025-03-17T17:44:05.089353413Z" level=info msg="Forcibly stopping sandbox \"f76b26f678734a68282a2eaf69c606018f2f0eab395982a5f7f178d61349920e\"" Mar 17 17:44:05.089476 containerd[1511]: time="2025-03-17T17:44:05.089437295Z" level=info msg="TearDown network for sandbox \"f76b26f678734a68282a2eaf69c606018f2f0eab395982a5f7f178d61349920e\" successfully" Mar 17 17:44:05.093379 containerd[1511]: time="2025-03-17T17:44:05.093353154Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"f76b26f678734a68282a2eaf69c606018f2f0eab395982a5f7f178d61349920e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 17 17:44:05.093462 containerd[1511]: time="2025-03-17T17:44:05.093392390Z" level=info msg="RemovePodSandbox \"f76b26f678734a68282a2eaf69c606018f2f0eab395982a5f7f178d61349920e\" returns successfully" Mar 17 17:44:05.093664 containerd[1511]: time="2025-03-17T17:44:05.093639108Z" level=info msg="StopPodSandbox for \"13120ec95c9d550030350e8d113fa1530bf320f5057a27110b89fb5064c968f0\"" Mar 17 17:44:05.093766 containerd[1511]: time="2025-03-17T17:44:05.093743419Z" level=info msg="TearDown network for sandbox \"13120ec95c9d550030350e8d113fa1530bf320f5057a27110b89fb5064c968f0\" successfully" Mar 17 17:44:05.093766 containerd[1511]: time="2025-03-17T17:44:05.093761875Z" level=info msg="StopPodSandbox for \"13120ec95c9d550030350e8d113fa1530bf320f5057a27110b89fb5064c968f0\" returns successfully" Mar 17 17:44:05.094016 containerd[1511]: time="2025-03-17T17:44:05.093993463Z" level=info msg="RemovePodSandbox for \"13120ec95c9d550030350e8d113fa1530bf320f5057a27110b89fb5064c968f0\"" Mar 17 17:44:05.094084 containerd[1511]: time="2025-03-17T17:44:05.094017430Z" level=info msg="Forcibly stopping sandbox \"13120ec95c9d550030350e8d113fa1530bf320f5057a27110b89fb5064c968f0\"" Mar 17 17:44:05.094131 containerd[1511]: time="2025-03-17T17:44:05.094098768Z" level=info msg="TearDown network for sandbox \"13120ec95c9d550030350e8d113fa1530bf320f5057a27110b89fb5064c968f0\" successfully" Mar 17 17:44:05.097858 containerd[1511]: time="2025-03-17T17:44:05.097825109Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"13120ec95c9d550030350e8d113fa1530bf320f5057a27110b89fb5064c968f0\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 17 17:44:05.097948 containerd[1511]: time="2025-03-17T17:44:05.097862812Z" level=info msg="RemovePodSandbox \"13120ec95c9d550030350e8d113fa1530bf320f5057a27110b89fb5064c968f0\" returns successfully" Mar 17 17:44:05.098156 containerd[1511]: time="2025-03-17T17:44:05.098124859Z" level=info msg="StopPodSandbox for \"b354ba4755a488c2027d000cfd3bd6e154695e02f9e7e99277c2a943c716e919\"" Mar 17 17:44:05.098271 containerd[1511]: time="2025-03-17T17:44:05.098227157Z" level=info msg="TearDown network for sandbox \"b354ba4755a488c2027d000cfd3bd6e154695e02f9e7e99277c2a943c716e919\" successfully" Mar 17 17:44:05.098328 containerd[1511]: time="2025-03-17T17:44:05.098284748Z" level=info msg="StopPodSandbox for \"b354ba4755a488c2027d000cfd3bd6e154695e02f9e7e99277c2a943c716e919\" returns successfully" Mar 17 17:44:05.098529 containerd[1511]: time="2025-03-17T17:44:05.098496148Z" level=info msg="RemovePodSandbox for \"b354ba4755a488c2027d000cfd3bd6e154695e02f9e7e99277c2a943c716e919\"" Mar 17 17:44:05.098529 containerd[1511]: time="2025-03-17T17:44:05.098524743Z" level=info msg="Forcibly stopping sandbox \"b354ba4755a488c2027d000cfd3bd6e154695e02f9e7e99277c2a943c716e919\"" Mar 17 17:44:05.098649 containerd[1511]: time="2025-03-17T17:44:05.098608145Z" level=info msg="TearDown network for sandbox \"b354ba4755a488c2027d000cfd3bd6e154695e02f9e7e99277c2a943c716e919\" successfully" Mar 17 17:44:05.103145 containerd[1511]: time="2025-03-17T17:44:05.103071272Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b354ba4755a488c2027d000cfd3bd6e154695e02f9e7e99277c2a943c716e919\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 17 17:44:05.103224 containerd[1511]: time="2025-03-17T17:44:05.103181125Z" level=info msg="RemovePodSandbox \"b354ba4755a488c2027d000cfd3bd6e154695e02f9e7e99277c2a943c716e919\" returns successfully" Mar 17 17:44:05.103651 containerd[1511]: time="2025-03-17T17:44:05.103625275Z" level=info msg="StopPodSandbox for \"c0357f0524099885f06d4a7418831e7579b998c419311c26aa34d8779c42dc52\"" Mar 17 17:44:05.103766 containerd[1511]: time="2025-03-17T17:44:05.103743554Z" level=info msg="TearDown network for sandbox \"c0357f0524099885f06d4a7418831e7579b998c419311c26aa34d8779c42dc52\" successfully" Mar 17 17:44:05.103822 containerd[1511]: time="2025-03-17T17:44:05.103765566Z" level=info msg="StopPodSandbox for \"c0357f0524099885f06d4a7418831e7579b998c419311c26aa34d8779c42dc52\" returns successfully" Mar 17 17:44:05.104175 containerd[1511]: time="2025-03-17T17:44:05.104154819Z" level=info msg="RemovePodSandbox for \"c0357f0524099885f06d4a7418831e7579b998c419311c26aa34d8779c42dc52\"" Mar 17 17:44:05.104214 containerd[1511]: time="2025-03-17T17:44:05.104177794Z" level=info msg="Forcibly stopping sandbox \"c0357f0524099885f06d4a7418831e7579b998c419311c26aa34d8779c42dc52\"" Mar 17 17:44:05.104306 containerd[1511]: time="2025-03-17T17:44:05.104277367Z" level=info msg="TearDown network for sandbox \"c0357f0524099885f06d4a7418831e7579b998c419311c26aa34d8779c42dc52\" successfully" Mar 17 17:44:05.108188 containerd[1511]: time="2025-03-17T17:44:05.108150333Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c0357f0524099885f06d4a7418831e7579b998c419311c26aa34d8779c42dc52\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 17 17:44:05.108257 containerd[1511]: time="2025-03-17T17:44:05.108208154Z" level=info msg="RemovePodSandbox \"c0357f0524099885f06d4a7418831e7579b998c419311c26aa34d8779c42dc52\" returns successfully" Mar 17 17:44:05.108553 containerd[1511]: time="2025-03-17T17:44:05.108517744Z" level=info msg="StopPodSandbox for \"a6fd5ae8d8c37d968bd04700971eb2886495554432a091f2befb520f8b34c501\"" Mar 17 17:44:05.108711 containerd[1511]: time="2025-03-17T17:44:05.108604331Z" level=info msg="TearDown network for sandbox \"a6fd5ae8d8c37d968bd04700971eb2886495554432a091f2befb520f8b34c501\" successfully" Mar 17 17:44:05.108711 containerd[1511]: time="2025-03-17T17:44:05.108614461Z" level=info msg="StopPodSandbox for \"a6fd5ae8d8c37d968bd04700971eb2886495554432a091f2befb520f8b34c501\" returns successfully" Mar 17 17:44:05.108834 containerd[1511]: time="2025-03-17T17:44:05.108811963Z" level=info msg="RemovePodSandbox for \"a6fd5ae8d8c37d968bd04700971eb2886495554432a091f2befb520f8b34c501\"" Mar 17 17:44:05.108834 containerd[1511]: time="2025-03-17T17:44:05.108831731Z" level=info msg="Forcibly stopping sandbox \"a6fd5ae8d8c37d968bd04700971eb2886495554432a091f2befb520f8b34c501\"" Mar 17 17:44:05.108916 containerd[1511]: time="2025-03-17T17:44:05.108897399Z" level=info msg="TearDown network for sandbox \"a6fd5ae8d8c37d968bd04700971eb2886495554432a091f2befb520f8b34c501\" successfully" Mar 17 17:44:05.112597 containerd[1511]: time="2025-03-17T17:44:05.112543826Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a6fd5ae8d8c37d968bd04700971eb2886495554432a091f2befb520f8b34c501\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 17 17:44:05.112656 containerd[1511]: time="2025-03-17T17:44:05.112599473Z" level=info msg="RemovePodSandbox \"a6fd5ae8d8c37d968bd04700971eb2886495554432a091f2befb520f8b34c501\" returns successfully" Mar 17 17:44:05.112995 containerd[1511]: time="2025-03-17T17:44:05.112963958Z" level=info msg="StopPodSandbox for \"f00b51c06514b707b4929dbc9613099efe7622f3f5df69b8af03440ae18de24e\"" Mar 17 17:44:05.113095 containerd[1511]: time="2025-03-17T17:44:05.113073371Z" level=info msg="TearDown network for sandbox \"f00b51c06514b707b4929dbc9613099efe7622f3f5df69b8af03440ae18de24e\" successfully" Mar 17 17:44:05.113095 containerd[1511]: time="2025-03-17T17:44:05.113090925Z" level=info msg="StopPodSandbox for \"f00b51c06514b707b4929dbc9613099efe7622f3f5df69b8af03440ae18de24e\" returns successfully" Mar 17 17:44:05.113344 containerd[1511]: time="2025-03-17T17:44:05.113319217Z" level=info msg="RemovePodSandbox for \"f00b51c06514b707b4929dbc9613099efe7622f3f5df69b8af03440ae18de24e\"" Mar 17 17:44:05.113344 containerd[1511]: time="2025-03-17T17:44:05.113342101Z" level=info msg="Forcibly stopping sandbox \"f00b51c06514b707b4929dbc9613099efe7622f3f5df69b8af03440ae18de24e\"" Mar 17 17:44:05.113463 containerd[1511]: time="2025-03-17T17:44:05.113424209Z" level=info msg="TearDown network for sandbox \"f00b51c06514b707b4929dbc9613099efe7622f3f5df69b8af03440ae18de24e\" successfully" Mar 17 17:44:05.118967 containerd[1511]: time="2025-03-17T17:44:05.118926519Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"f00b51c06514b707b4929dbc9613099efe7622f3f5df69b8af03440ae18de24e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 17 17:44:05.119073 containerd[1511]: time="2025-03-17T17:44:05.118971746Z" level=info msg="RemovePodSandbox \"f00b51c06514b707b4929dbc9613099efe7622f3f5df69b8af03440ae18de24e\" returns successfully" Mar 17 17:44:05.119376 containerd[1511]: time="2025-03-17T17:44:05.119340600Z" level=info msg="StopPodSandbox for \"ad136815a7bcf9fb1ccac4aab59793996e94f29292589abc3eed912ef7bb109d\"" Mar 17 17:44:05.119519 containerd[1511]: time="2025-03-17T17:44:05.119445082Z" level=info msg="TearDown network for sandbox \"ad136815a7bcf9fb1ccac4aab59793996e94f29292589abc3eed912ef7bb109d\" successfully" Mar 17 17:44:05.119519 containerd[1511]: time="2025-03-17T17:44:05.119499557Z" level=info msg="StopPodSandbox for \"ad136815a7bcf9fb1ccac4aab59793996e94f29292589abc3eed912ef7bb109d\" returns successfully" Mar 17 17:44:05.119755 containerd[1511]: time="2025-03-17T17:44:05.119732839Z" level=info msg="RemovePodSandbox for \"ad136815a7bcf9fb1ccac4aab59793996e94f29292589abc3eed912ef7bb109d\"" Mar 17 17:44:05.119788 containerd[1511]: time="2025-03-17T17:44:05.119755994Z" level=info msg="Forcibly stopping sandbox \"ad136815a7bcf9fb1ccac4aab59793996e94f29292589abc3eed912ef7bb109d\"" Mar 17 17:44:05.119867 containerd[1511]: time="2025-03-17T17:44:05.119833443Z" level=info msg="TearDown network for sandbox \"ad136815a7bcf9fb1ccac4aab59793996e94f29292589abc3eed912ef7bb109d\" successfully" Mar 17 17:44:05.124363 containerd[1511]: time="2025-03-17T17:44:05.124314416Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ad136815a7bcf9fb1ccac4aab59793996e94f29292589abc3eed912ef7bb109d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 17 17:44:05.124363 containerd[1511]: time="2025-03-17T17:44:05.124356026Z" level=info msg="RemovePodSandbox \"ad136815a7bcf9fb1ccac4aab59793996e94f29292589abc3eed912ef7bb109d\" returns successfully" Mar 17 17:44:05.124570 containerd[1511]: time="2025-03-17T17:44:05.124377018Z" level=error msg="PodSandboxStatus for \"ad136815a7bcf9fb1ccac4aab59793996e94f29292589abc3eed912ef7bb109d\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find sandbox: not found" Mar 17 17:44:05.124729 containerd[1511]: time="2025-03-17T17:44:05.124700103Z" level=info msg="StopPodSandbox for \"f5f9f9783e0b5b866f4eaa0562e32aef18c69ba4d0cce64b35940cd8c51f233a\"" Mar 17 17:44:05.124829 containerd[1511]: time="2025-03-17T17:44:05.124806348Z" level=info msg="TearDown network for sandbox \"f5f9f9783e0b5b866f4eaa0562e32aef18c69ba4d0cce64b35940cd8c51f233a\" successfully" Mar 17 17:44:05.124861 containerd[1511]: time="2025-03-17T17:44:05.124827970Z" level=info msg="StopPodSandbox for \"f5f9f9783e0b5b866f4eaa0562e32aef18c69ba4d0cce64b35940cd8c51f233a\" returns successfully" Mar 17 17:44:05.125101 containerd[1511]: time="2025-03-17T17:44:05.125072885Z" level=info msg="RemovePodSandbox for \"f5f9f9783e0b5b866f4eaa0562e32aef18c69ba4d0cce64b35940cd8c51f233a\"" Mar 17 17:44:05.125101 containerd[1511]: time="2025-03-17T17:44:05.125097231Z" level=info msg="Forcibly stopping sandbox \"f5f9f9783e0b5b866f4eaa0562e32aef18c69ba4d0cce64b35940cd8c51f233a\"" Mar 17 17:44:05.125226 containerd[1511]: time="2025-03-17T17:44:05.125188748Z" level=info msg="TearDown network for sandbox \"f5f9f9783e0b5b866f4eaa0562e32aef18c69ba4d0cce64b35940cd8c51f233a\" successfully" Mar 17 17:44:05.129329 containerd[1511]: time="2025-03-17T17:44:05.129291939Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"f5f9f9783e0b5b866f4eaa0562e32aef18c69ba4d0cce64b35940cd8c51f233a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 17 17:44:05.129377 containerd[1511]: time="2025-03-17T17:44:05.129334432Z" level=info msg="RemovePodSandbox \"f5f9f9783e0b5b866f4eaa0562e32aef18c69ba4d0cce64b35940cd8c51f233a\" returns successfully" Mar 17 17:44:05.129630 containerd[1511]: time="2025-03-17T17:44:05.129608231Z" level=info msg="StopPodSandbox for \"d02b3133b2cc39eef5164fd20986920d81038c967b1c062931d800ccbd0fd0e2\"" Mar 17 17:44:05.129727 containerd[1511]: time="2025-03-17T17:44:05.129709217Z" level=info msg="TearDown network for sandbox \"d02b3133b2cc39eef5164fd20986920d81038c967b1c062931d800ccbd0fd0e2\" successfully" Mar 17 17:44:05.129756 containerd[1511]: time="2025-03-17T17:44:05.129727071Z" level=info msg="StopPodSandbox for \"d02b3133b2cc39eef5164fd20986920d81038c967b1c062931d800ccbd0fd0e2\" returns successfully" Mar 17 17:44:05.131832 containerd[1511]: time="2025-03-17T17:44:05.129944463Z" level=info msg="RemovePodSandbox for \"d02b3133b2cc39eef5164fd20986920d81038c967b1c062931d800ccbd0fd0e2\"" Mar 17 17:44:05.131832 containerd[1511]: time="2025-03-17T17:44:05.129964311Z" level=info msg="Forcibly stopping sandbox \"d02b3133b2cc39eef5164fd20986920d81038c967b1c062931d800ccbd0fd0e2\"" Mar 17 17:44:05.131832 containerd[1511]: time="2025-03-17T17:44:05.130028846Z" level=info msg="TearDown network for sandbox \"d02b3133b2cc39eef5164fd20986920d81038c967b1c062931d800ccbd0fd0e2\" successfully" Mar 17 17:44:05.134444 containerd[1511]: time="2025-03-17T17:44:05.134410125Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d02b3133b2cc39eef5164fd20986920d81038c967b1c062931d800ccbd0fd0e2\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 17 17:44:05.134503 containerd[1511]: time="2025-03-17T17:44:05.134457056Z" level=info msg="RemovePodSandbox \"d02b3133b2cc39eef5164fd20986920d81038c967b1c062931d800ccbd0fd0e2\" returns successfully"
Mar 17 17:44:05.134782 containerd[1511]: time="2025-03-17T17:44:05.134746105Z" level=info msg="StopPodSandbox for \"4ce107ba3ec9f31d7b3420825039f0a23e3438346875ccd104dc7f519f537446\""
Mar 17 17:44:05.134863 containerd[1511]: time="2025-03-17T17:44:05.134845347Z" level=info msg="TearDown network for sandbox \"4ce107ba3ec9f31d7b3420825039f0a23e3438346875ccd104dc7f519f537446\" successfully"
Mar 17 17:44:05.134887 containerd[1511]: time="2025-03-17T17:44:05.134863432Z" level=info msg="StopPodSandbox for \"4ce107ba3ec9f31d7b3420825039f0a23e3438346875ccd104dc7f519f537446\" returns successfully"
Mar 17 17:44:05.135121 containerd[1511]: time="2025-03-17T17:44:05.135098136Z" level=info msg="RemovePodSandbox for \"4ce107ba3ec9f31d7b3420825039f0a23e3438346875ccd104dc7f519f537446\""
Mar 17 17:44:05.135158 containerd[1511]: time="2025-03-17T17:44:05.135120801Z" level=info msg="Forcibly stopping sandbox \"4ce107ba3ec9f31d7b3420825039f0a23e3438346875ccd104dc7f519f537446\""
Mar 17 17:44:05.135220 containerd[1511]: time="2025-03-17T17:44:05.135193702Z" level=info msg="TearDown network for sandbox \"4ce107ba3ec9f31d7b3420825039f0a23e3438346875ccd104dc7f519f537446\" successfully"
Mar 17 17:44:05.139073 containerd[1511]: time="2025-03-17T17:44:05.139041429Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"4ce107ba3ec9f31d7b3420825039f0a23e3438346875ccd104dc7f519f537446\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Mar 17 17:44:05.139196 containerd[1511]: time="2025-03-17T17:44:05.139087247Z" level=info msg="RemovePodSandbox \"4ce107ba3ec9f31d7b3420825039f0a23e3438346875ccd104dc7f519f537446\" returns successfully"
Mar 17 17:44:05.139423 containerd[1511]: time="2025-03-17T17:44:05.139395925Z" level=info msg="StopPodSandbox for \"e1822912a2888b680ea7f9bf3d6dfd91c2c6eda94d136c98ac1343d40214f70b\""
Mar 17 17:44:05.139529 containerd[1511]: time="2025-03-17T17:44:05.139504545Z" level=info msg="TearDown network for sandbox \"e1822912a2888b680ea7f9bf3d6dfd91c2c6eda94d136c98ac1343d40214f70b\" successfully"
Mar 17 17:44:05.139529 containerd[1511]: time="2025-03-17T17:44:05.139524253Z" level=info msg="StopPodSandbox for \"e1822912a2888b680ea7f9bf3d6dfd91c2c6eda94d136c98ac1343d40214f70b\" returns successfully"
Mar 17 17:44:05.139817 containerd[1511]: time="2025-03-17T17:44:05.139795458Z" level=info msg="RemovePodSandbox for \"e1822912a2888b680ea7f9bf3d6dfd91c2c6eda94d136c98ac1343d40214f70b\""
Mar 17 17:44:05.139862 containerd[1511]: time="2025-03-17T17:44:05.139819624Z" level=info msg="Forcibly stopping sandbox \"e1822912a2888b680ea7f9bf3d6dfd91c2c6eda94d136c98ac1343d40214f70b\""
Mar 17 17:44:05.139940 containerd[1511]: time="2025-03-17T17:44:05.139909218Z" level=info msg="TearDown network for sandbox \"e1822912a2888b680ea7f9bf3d6dfd91c2c6eda94d136c98ac1343d40214f70b\" successfully"
Mar 17 17:44:05.143686 containerd[1511]: time="2025-03-17T17:44:05.143646111Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e1822912a2888b680ea7f9bf3d6dfd91c2c6eda94d136c98ac1343d40214f70b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Mar 17 17:44:05.143738 containerd[1511]: time="2025-03-17T17:44:05.143694855Z" level=info msg="RemovePodSandbox \"e1822912a2888b680ea7f9bf3d6dfd91c2c6eda94d136c98ac1343d40214f70b\" returns successfully"
Mar 17 17:44:05.143964 containerd[1511]: time="2025-03-17T17:44:05.143941822Z" level=info msg="StopPodSandbox for \"505868a5c076596de5c7575902f86b043b6cdab84fc363153c09421557adfba8\""
Mar 17 17:44:05.144058 containerd[1511]: time="2025-03-17T17:44:05.144040974Z" level=info msg="TearDown network for sandbox \"505868a5c076596de5c7575902f86b043b6cdab84fc363153c09421557adfba8\" successfully"
Mar 17 17:44:05.144089 containerd[1511]: time="2025-03-17T17:44:05.144058849Z" level=info msg="StopPodSandbox for \"505868a5c076596de5c7575902f86b043b6cdab84fc363153c09421557adfba8\" returns successfully"
Mar 17 17:44:05.144350 containerd[1511]: time="2025-03-17T17:44:05.144325685Z" level=info msg="RemovePodSandbox for \"505868a5c076596de5c7575902f86b043b6cdab84fc363153c09421557adfba8\""
Mar 17 17:44:05.144393 containerd[1511]: time="2025-03-17T17:44:05.144353359Z" level=info msg="Forcibly stopping sandbox \"505868a5c076596de5c7575902f86b043b6cdab84fc363153c09421557adfba8\""
Mar 17 17:44:05.144461 containerd[1511]: time="2025-03-17T17:44:05.144432693Z" level=info msg="TearDown network for sandbox \"505868a5c076596de5c7575902f86b043b6cdab84fc363153c09421557adfba8\" successfully"
Mar 17 17:44:05.148921 containerd[1511]: time="2025-03-17T17:44:05.148865232Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"505868a5c076596de5c7575902f86b043b6cdab84fc363153c09421557adfba8\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Mar 17 17:44:05.148921 containerd[1511]: time="2025-03-17T17:44:05.148924055Z" level=info msg="RemovePodSandbox \"505868a5c076596de5c7575902f86b043b6cdab84fc363153c09421557adfba8\" returns successfully"
Mar 17 17:44:05.149443 containerd[1511]: time="2025-03-17T17:44:05.149418252Z" level=info msg="StopPodSandbox for \"de0bb5eed656cfb50f81396bff87ac24dd133f83c813548d59ca3102fe00ab61\""
Mar 17 17:44:05.149590 containerd[1511]: time="2025-03-17T17:44:05.149533215Z" level=info msg="TearDown network for sandbox \"de0bb5eed656cfb50f81396bff87ac24dd133f83c813548d59ca3102fe00ab61\" successfully"
Mar 17 17:44:05.149590 containerd[1511]: time="2025-03-17T17:44:05.149580657Z" level=info msg="StopPodSandbox for \"de0bb5eed656cfb50f81396bff87ac24dd133f83c813548d59ca3102fe00ab61\" returns successfully"
Mar 17 17:44:05.149867 containerd[1511]: time="2025-03-17T17:44:05.149839748Z" level=info msg="RemovePodSandbox for \"de0bb5eed656cfb50f81396bff87ac24dd133f83c813548d59ca3102fe00ab61\""
Mar 17 17:44:05.149912 containerd[1511]: time="2025-03-17T17:44:05.149866480Z" level=info msg="Forcibly stopping sandbox \"de0bb5eed656cfb50f81396bff87ac24dd133f83c813548d59ca3102fe00ab61\""
Mar 17 17:44:05.149982 containerd[1511]: time="2025-03-17T17:44:05.149945382Z" level=info msg="TearDown network for sandbox \"de0bb5eed656cfb50f81396bff87ac24dd133f83c813548d59ca3102fe00ab61\" successfully"
Mar 17 17:44:05.153975 containerd[1511]: time="2025-03-17T17:44:05.153869688Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"de0bb5eed656cfb50f81396bff87ac24dd133f83c813548d59ca3102fe00ab61\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Mar 17 17:44:05.153975 containerd[1511]: time="2025-03-17T17:44:05.153913061Z" level=info msg="RemovePodSandbox \"de0bb5eed656cfb50f81396bff87ac24dd133f83c813548d59ca3102fe00ab61\" returns successfully"
Mar 17 17:44:05.154278 containerd[1511]: time="2025-03-17T17:44:05.154149478Z" level=info msg="StopPodSandbox for \"dadd7feac938af28c5af276dc4bc8b7db7afc807fcad2a12367eab948774b513\""
Mar 17 17:44:05.154278 containerd[1511]: time="2025-03-17T17:44:05.154232279Z" level=info msg="TearDown network for sandbox \"dadd7feac938af28c5af276dc4bc8b7db7afc807fcad2a12367eab948774b513\" successfully"
Mar 17 17:44:05.154278 containerd[1511]: time="2025-03-17T17:44:05.154256316Z" level=info msg="StopPodSandbox for \"dadd7feac938af28c5af276dc4bc8b7db7afc807fcad2a12367eab948774b513\" returns successfully"
Mar 17 17:44:05.154614 containerd[1511]: time="2025-03-17T17:44:05.154571035Z" level=info msg="RemovePodSandbox for \"dadd7feac938af28c5af276dc4bc8b7db7afc807fcad2a12367eab948774b513\""
Mar 17 17:44:05.154614 containerd[1511]: time="2025-03-17T17:44:05.154606974Z" level=info msg="Forcibly stopping sandbox \"dadd7feac938af28c5af276dc4bc8b7db7afc807fcad2a12367eab948774b513\""
Mar 17 17:44:05.154748 containerd[1511]: time="2025-03-17T17:44:05.154709252Z" level=info msg="TearDown network for sandbox \"dadd7feac938af28c5af276dc4bc8b7db7afc807fcad2a12367eab948774b513\" successfully"
Mar 17 17:44:05.158870 containerd[1511]: time="2025-03-17T17:44:05.158839656Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"dadd7feac938af28c5af276dc4bc8b7db7afc807fcad2a12367eab948774b513\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Mar 17 17:44:05.158930 containerd[1511]: time="2025-03-17T17:44:05.158879022Z" level=info msg="RemovePodSandbox \"dadd7feac938af28c5af276dc4bc8b7db7afc807fcad2a12367eab948774b513\" returns successfully"
Mar 17 17:44:05.159172 containerd[1511]: time="2025-03-17T17:44:05.159142953Z" level=info msg="StopPodSandbox for \"b70888017c9929fd44585839e79bef8b8018f5bb950a2aa4f960f242ce269b91\""
Mar 17 17:44:05.159335 containerd[1511]: time="2025-03-17T17:44:05.159275991Z" level=info msg="TearDown network for sandbox \"b70888017c9929fd44585839e79bef8b8018f5bb950a2aa4f960f242ce269b91\" successfully"
Mar 17 17:44:05.159335 containerd[1511]: time="2025-03-17T17:44:05.159322029Z" level=info msg="StopPodSandbox for \"b70888017c9929fd44585839e79bef8b8018f5bb950a2aa4f960f242ce269b91\" returns successfully"
Mar 17 17:44:05.159979 containerd[1511]: time="2025-03-17T17:44:05.159593225Z" level=info msg="RemovePodSandbox for \"b70888017c9929fd44585839e79bef8b8018f5bb950a2aa4f960f242ce269b91\""
Mar 17 17:44:05.159979 containerd[1511]: time="2025-03-17T17:44:05.159618203Z" level=info msg="Forcibly stopping sandbox \"b70888017c9929fd44585839e79bef8b8018f5bb950a2aa4f960f242ce269b91\""
Mar 17 17:44:05.159979 containerd[1511]: time="2025-03-17T17:44:05.159710852Z" level=info msg="TearDown network for sandbox \"b70888017c9929fd44585839e79bef8b8018f5bb950a2aa4f960f242ce269b91\" successfully"
Mar 17 17:44:05.163963 containerd[1511]: time="2025-03-17T17:44:05.163931391Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b70888017c9929fd44585839e79bef8b8018f5bb950a2aa4f960f242ce269b91\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Mar 17 17:44:05.164034 containerd[1511]: time="2025-03-17T17:44:05.163966388Z" level=info msg="RemovePodSandbox \"b70888017c9929fd44585839e79bef8b8018f5bb950a2aa4f960f242ce269b91\" returns successfully"
Mar 17 17:44:05.833200 kubelet[2608]: I0317 17:44:05.833131 2608 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Mar 17 17:44:08.266634 systemd[1]: Started sshd@19-10.0.0.14:22-10.0.0.1:54084.service - OpenSSH per-connection server daemon (10.0.0.1:54084).
Mar 17 17:44:08.308662 sshd[6195]: Accepted publickey for core from 10.0.0.1 port 54084 ssh2: RSA SHA256:sryA1eJfWIez1lK1HnLUziwsxLkQurNjq13plyk0DV0
Mar 17 17:44:08.310370 sshd-session[6195]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 17 17:44:08.315416 systemd-logind[1494]: New session 20 of user core.
Mar 17 17:44:08.325410 systemd[1]: Started session-20.scope - Session 20 of User core.
Mar 17 17:44:08.440195 sshd[6197]: Connection closed by 10.0.0.1 port 54084
Mar 17 17:44:08.440708 sshd-session[6195]: pam_unix(sshd:session): session closed for user core
Mar 17 17:44:08.445046 systemd[1]: sshd@19-10.0.0.14:22-10.0.0.1:54084.service: Deactivated successfully.
Mar 17 17:44:08.447703 systemd[1]: session-20.scope: Deactivated successfully.
Mar 17 17:44:08.448648 systemd-logind[1494]: Session 20 logged out. Waiting for processes to exit.
Mar 17 17:44:08.449649 systemd-logind[1494]: Removed session 20.
Mar 17 17:44:11.676876 kubelet[2608]: E0317 17:44:11.676819 2608 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 17:44:13.453870 systemd[1]: Started sshd@20-10.0.0.14:22-10.0.0.1:54096.service - OpenSSH per-connection server daemon (10.0.0.1:54096).
Mar 17 17:44:13.500971 sshd[6218]: Accepted publickey for core from 10.0.0.1 port 54096 ssh2: RSA SHA256:sryA1eJfWIez1lK1HnLUziwsxLkQurNjq13plyk0DV0
Mar 17 17:44:13.502626 sshd-session[6218]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 17 17:44:13.507538 systemd-logind[1494]: New session 21 of user core.
Mar 17 17:44:13.517395 systemd[1]: Started session-21.scope - Session 21 of User core.
Mar 17 17:44:13.629874 sshd[6220]: Connection closed by 10.0.0.1 port 54096
Mar 17 17:44:13.630326 sshd-session[6218]: pam_unix(sshd:session): session closed for user core
Mar 17 17:44:13.634711 systemd[1]: sshd@20-10.0.0.14:22-10.0.0.1:54096.service: Deactivated successfully.
Mar 17 17:44:13.636966 systemd[1]: session-21.scope: Deactivated successfully.
Mar 17 17:44:13.637813 systemd-logind[1494]: Session 21 logged out. Waiting for processes to exit.
Mar 17 17:44:13.638734 systemd-logind[1494]: Removed session 21.
Mar 17 17:44:16.274889 kubelet[2608]: I0317 17:44:16.274831 2608 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Mar 17 17:44:18.645398 systemd[1]: Started sshd@21-10.0.0.14:22-10.0.0.1:33596.service - OpenSSH per-connection server daemon (10.0.0.1:33596).
Mar 17 17:44:18.695434 sshd[6235]: Accepted publickey for core from 10.0.0.1 port 33596 ssh2: RSA SHA256:sryA1eJfWIez1lK1HnLUziwsxLkQurNjq13plyk0DV0
Mar 17 17:44:18.697378 sshd-session[6235]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 17 17:44:18.702617 systemd-logind[1494]: New session 22 of user core.
Mar 17 17:44:18.711493 systemd[1]: Started session-22.scope - Session 22 of User core.
Mar 17 17:44:18.831107 sshd[6237]: Connection closed by 10.0.0.1 port 33596
Mar 17 17:44:18.831598 sshd-session[6235]: pam_unix(sshd:session): session closed for user core
Mar 17 17:44:18.836108 systemd[1]: sshd@21-10.0.0.14:22-10.0.0.1:33596.service: Deactivated successfully.
Mar 17 17:44:18.838931 systemd[1]: session-22.scope: Deactivated successfully.
Mar 17 17:44:18.840363 systemd-logind[1494]: Session 22 logged out. Waiting for processes to exit.
Mar 17 17:44:18.841760 systemd-logind[1494]: Removed session 22.
Mar 17 17:44:21.676796 kubelet[2608]: E0317 17:44:21.676734 2608 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 17:44:23.852478 systemd[1]: Started sshd@22-10.0.0.14:22-10.0.0.1:33610.service - OpenSSH per-connection server daemon (10.0.0.1:33610).
Mar 17 17:44:23.894263 sshd[6269]: Accepted publickey for core from 10.0.0.1 port 33610 ssh2: RSA SHA256:sryA1eJfWIez1lK1HnLUziwsxLkQurNjq13plyk0DV0
Mar 17 17:44:23.896369 sshd-session[6269]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 17 17:44:23.901217 systemd-logind[1494]: New session 23 of user core.
Mar 17 17:44:23.913404 systemd[1]: Started session-23.scope - Session 23 of User core.
Mar 17 17:44:24.040224 sshd[6271]: Connection closed by 10.0.0.1 port 33610
Mar 17 17:44:24.040739 sshd-session[6269]: pam_unix(sshd:session): session closed for user core
Mar 17 17:44:24.044783 systemd[1]: sshd@22-10.0.0.14:22-10.0.0.1:33610.service: Deactivated successfully.
Mar 17 17:44:24.046948 systemd[1]: session-23.scope: Deactivated successfully.
Mar 17 17:44:24.047831 systemd-logind[1494]: Session 23 logged out. Waiting for processes to exit.
Mar 17 17:44:24.048864 systemd-logind[1494]: Removed session 23.