Mar 17 17:41:07.941275 kernel: Linux version 6.6.83-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p1) 13.3.1 20240614, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT_DYNAMIC Mon Mar 17 16:07:40 -00 2025
Mar 17 17:41:07.941297 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=d4b838cd9a6f58e8c4a6b615c32b0b28ee0df1660e34033a8fbd0429c6de5fd0
Mar 17 17:41:07.941309 kernel: BIOS-provided physical RAM map:
Mar 17 17:41:07.941316 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Mar 17 17:41:07.941322 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Mar 17 17:41:07.941328 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Mar 17 17:41:07.941336 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable
Mar 17 17:41:07.941342 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved
Mar 17 17:41:07.941349 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Mar 17 17:41:07.941357 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Mar 17 17:41:07.941364 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Mar 17 17:41:07.941374 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Mar 17 17:41:07.941381 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Mar 17 17:41:07.941388 kernel: NX (Execute Disable) protection: active
Mar 17 17:41:07.941396 kernel: APIC: Static calls initialized
Mar 17 17:41:07.941408 kernel: SMBIOS 2.8 present.
Mar 17 17:41:07.941415 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
Mar 17 17:41:07.941422 kernel: Hypervisor detected: KVM
Mar 17 17:41:07.941429 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Mar 17 17:41:07.941436 kernel: kvm-clock: using sched offset of 2984363983 cycles
Mar 17 17:41:07.941443 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Mar 17 17:41:07.941450 kernel: tsc: Detected 2794.750 MHz processor
Mar 17 17:41:07.941458 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Mar 17 17:41:07.941465 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Mar 17 17:41:07.941472 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000
Mar 17 17:41:07.941482 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Mar 17 17:41:07.941489 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Mar 17 17:41:07.941496 kernel: Using GB pages for direct mapping
Mar 17 17:41:07.941504 kernel: ACPI: Early table checksum verification disabled
Mar 17 17:41:07.941511 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS )
Mar 17 17:41:07.941525 kernel: ACPI: RSDT 0x000000009CFE2408 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 17 17:41:07.941533 kernel: ACPI: FACP 0x000000009CFE21E8 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Mar 17 17:41:07.941540 kernel: ACPI: DSDT 0x000000009CFE0040 0021A8 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 17 17:41:07.941551 kernel: ACPI: FACS 0x000000009CFE0000 000040
Mar 17 17:41:07.941558 kernel: ACPI: APIC 0x000000009CFE22DC 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 17 17:41:07.941565 kernel: ACPI: HPET 0x000000009CFE236C 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 17 17:41:07.941572 kernel: ACPI: MCFG 0x000000009CFE23A4 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 17 17:41:07.941579 kernel: ACPI: WAET 0x000000009CFE23E0 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 17 17:41:07.941586 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21e8-0x9cfe22db]
Mar 17 17:41:07.941594 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21e7]
Mar 17 17:41:07.941604 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f]
Mar 17 17:41:07.941614 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22dc-0x9cfe236b]
Mar 17 17:41:07.941624 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe236c-0x9cfe23a3]
Mar 17 17:41:07.941632 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23a4-0x9cfe23df]
Mar 17 17:41:07.941639 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23e0-0x9cfe2407]
Mar 17 17:41:07.941647 kernel: No NUMA configuration found
Mar 17 17:41:07.941654 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff]
Mar 17 17:41:07.941661 kernel: NODE_DATA(0) allocated [mem 0x9cfd6000-0x9cfdbfff]
Mar 17 17:41:07.941671 kernel: Zone ranges:
Mar 17 17:41:07.941679 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Mar 17 17:41:07.941686 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff]
Mar 17 17:41:07.941693 kernel: Normal empty
Mar 17 17:41:07.941701 kernel: Movable zone start for each node
Mar 17 17:41:07.941708 kernel: Early memory node ranges
Mar 17 17:41:07.941716 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Mar 17 17:41:07.941723 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff]
Mar 17 17:41:07.941730 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff]
Mar 17 17:41:07.941740 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Mar 17 17:41:07.941750 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Mar 17 17:41:07.941758 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
Mar 17 17:41:07.941765 kernel: ACPI: PM-Timer IO Port: 0x608
Mar 17 17:41:07.941772 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Mar 17 17:41:07.941780 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Mar 17 17:41:07.941787 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Mar 17 17:41:07.941794 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Mar 17 17:41:07.941802 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Mar 17 17:41:07.941812 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Mar 17 17:41:07.941819 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Mar 17 17:41:07.941827 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Mar 17 17:41:07.941834 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Mar 17 17:41:07.941843 kernel: TSC deadline timer available
Mar 17 17:41:07.941852 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
Mar 17 17:41:07.941861 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Mar 17 17:41:07.941870 kernel: kvm-guest: KVM setup pv remote TLB flush
Mar 17 17:41:07.941883 kernel: kvm-guest: setup PV sched yield
Mar 17 17:41:07.941893 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Mar 17 17:41:07.941906 kernel: Booting paravirtualized kernel on KVM
Mar 17 17:41:07.941915 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Mar 17 17:41:07.941925 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Mar 17 17:41:07.941934 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u524288
Mar 17 17:41:07.941944 kernel: pcpu-alloc: s197032 r8192 d32344 u524288 alloc=1*2097152
Mar 17 17:41:07.941953 kernel: pcpu-alloc: [0] 0 1 2 3
Mar 17 17:41:07.941963 kernel: kvm-guest: PV spinlocks enabled
Mar 17 17:41:07.941973 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Mar 17 17:41:07.941983 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=d4b838cd9a6f58e8c4a6b615c32b0b28ee0df1660e34033a8fbd0429c6de5fd0
Mar 17 17:41:07.941994 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Mar 17 17:41:07.942002 kernel: random: crng init done
Mar 17 17:41:07.942009 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Mar 17 17:41:07.942017 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Mar 17 17:41:07.942024 kernel: Fallback order for Node 0: 0
Mar 17 17:41:07.942032 kernel: Built 1 zonelists, mobility grouping on. Total pages: 632732
Mar 17 17:41:07.942039 kernel: Policy zone: DMA32
Mar 17 17:41:07.942046 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Mar 17 17:41:07.942057 kernel: Memory: 2434588K/2571752K available (12288K kernel code, 2303K rwdata, 22744K rodata, 42992K init, 2196K bss, 136904K reserved, 0K cma-reserved)
Mar 17 17:41:07.942091 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Mar 17 17:41:07.942098 kernel: ftrace: allocating 37938 entries in 149 pages
Mar 17 17:41:07.942106 kernel: ftrace: allocated 149 pages with 4 groups
Mar 17 17:41:07.942113 kernel: Dynamic Preempt: voluntary
Mar 17 17:41:07.942121 kernel: rcu: Preemptible hierarchical RCU implementation.
Mar 17 17:41:07.942133 kernel: rcu: RCU event tracing is enabled.
Mar 17 17:41:07.942141 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Mar 17 17:41:07.942149 kernel: Trampoline variant of Tasks RCU enabled.
Mar 17 17:41:07.942159 kernel: Rude variant of Tasks RCU enabled.
Mar 17 17:41:07.942167 kernel: Tracing variant of Tasks RCU enabled.
Mar 17 17:41:07.942177 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Mar 17 17:41:07.942185 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Mar 17 17:41:07.942192 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Mar 17 17:41:07.942200 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Mar 17 17:41:07.942207 kernel: Console: colour VGA+ 80x25
Mar 17 17:41:07.942214 kernel: printk: console [ttyS0] enabled
Mar 17 17:41:07.942222 kernel: ACPI: Core revision 20230628
Mar 17 17:41:07.942232 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Mar 17 17:41:07.942239 kernel: APIC: Switch to symmetric I/O mode setup
Mar 17 17:41:07.942247 kernel: x2apic enabled
Mar 17 17:41:07.942254 kernel: APIC: Switched APIC routing to: physical x2apic
Mar 17 17:41:07.942261 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Mar 17 17:41:07.942269 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Mar 17 17:41:07.942276 kernel: kvm-guest: setup PV IPIs
Mar 17 17:41:07.942294 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Mar 17 17:41:07.942302 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Mar 17 17:41:07.942310 kernel: Calibrating delay loop (skipped) preset value.. 5589.50 BogoMIPS (lpj=2794750)
Mar 17 17:41:07.942318 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Mar 17 17:41:07.942325 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Mar 17 17:41:07.942335 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Mar 17 17:41:07.942343 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Mar 17 17:41:07.942351 kernel: Spectre V2 : Mitigation: Retpolines
Mar 17 17:41:07.942359 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Mar 17 17:41:07.942369 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Mar 17 17:41:07.942377 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Mar 17 17:41:07.942387 kernel: RETBleed: Mitigation: untrained return thunk
Mar 17 17:41:07.942395 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Mar 17 17:41:07.942403 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Mar 17 17:41:07.942411 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Mar 17 17:41:07.942419 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Mar 17 17:41:07.942427 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Mar 17 17:41:07.942435 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Mar 17 17:41:07.942445 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Mar 17 17:41:07.942453 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Mar 17 17:41:07.942461 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Mar 17 17:41:07.942469 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Mar 17 17:41:07.942476 kernel: Freeing SMP alternatives memory: 32K
Mar 17 17:41:07.942484 kernel: pid_max: default: 32768 minimum: 301
Mar 17 17:41:07.942492 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Mar 17 17:41:07.942499 kernel: landlock: Up and running.
Mar 17 17:41:07.942507 kernel: SELinux: Initializing.
Mar 17 17:41:07.942525 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Mar 17 17:41:07.942533 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Mar 17 17:41:07.942541 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Mar 17 17:41:07.942548 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Mar 17 17:41:07.942557 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Mar 17 17:41:07.942565 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Mar 17 17:41:07.942575 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Mar 17 17:41:07.942583 kernel: ... version: 0
Mar 17 17:41:07.942593 kernel: ... bit width: 48
Mar 17 17:41:07.942601 kernel: ... generic registers: 6
Mar 17 17:41:07.942609 kernel: ... value mask: 0000ffffffffffff
Mar 17 17:41:07.942617 kernel: ... max period: 00007fffffffffff
Mar 17 17:41:07.942624 kernel: ... fixed-purpose events: 0
Mar 17 17:41:07.942632 kernel: ... event mask: 000000000000003f
Mar 17 17:41:07.942640 kernel: signal: max sigframe size: 1776
Mar 17 17:41:07.942647 kernel: rcu: Hierarchical SRCU implementation.
Mar 17 17:41:07.942655 kernel: rcu: Max phase no-delay instances is 400.
Mar 17 17:41:07.942663 kernel: smp: Bringing up secondary CPUs ...
Mar 17 17:41:07.942673 kernel: smpboot: x86: Booting SMP configuration:
Mar 17 17:41:07.942681 kernel: .... node #0, CPUs: #1 #2 #3
Mar 17 17:41:07.942689 kernel: smp: Brought up 1 node, 4 CPUs
Mar 17 17:41:07.942696 kernel: smpboot: Max logical packages: 1
Mar 17 17:41:07.942704 kernel: smpboot: Total of 4 processors activated (22358.00 BogoMIPS)
Mar 17 17:41:07.942712 kernel: devtmpfs: initialized
Mar 17 17:41:07.942719 kernel: x86/mm: Memory block size: 128MB
Mar 17 17:41:07.942727 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Mar 17 17:41:07.942735 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Mar 17 17:41:07.942746 kernel: pinctrl core: initialized pinctrl subsystem
Mar 17 17:41:07.942753 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Mar 17 17:41:07.942761 kernel: audit: initializing netlink subsys (disabled)
Mar 17 17:41:07.942769 kernel: audit: type=2000 audit(1742233266.676:1): state=initialized audit_enabled=0 res=1
Mar 17 17:41:07.942776 kernel: thermal_sys: Registered thermal governor 'step_wise'
Mar 17 17:41:07.942784 kernel: thermal_sys: Registered thermal governor 'user_space'
Mar 17 17:41:07.942792 kernel: cpuidle: using governor menu
Mar 17 17:41:07.942800 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Mar 17 17:41:07.942807 kernel: dca service started, version 1.12.1
Mar 17 17:41:07.942818 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Mar 17 17:41:07.942826 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Mar 17 17:41:07.942833 kernel: PCI: Using configuration type 1 for base access
Mar 17 17:41:07.942841 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Mar 17 17:41:07.942849 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Mar 17 17:41:07.942857 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Mar 17 17:41:07.942864 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Mar 17 17:41:07.942872 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Mar 17 17:41:07.942880 kernel: ACPI: Added _OSI(Module Device)
Mar 17 17:41:07.942890 kernel: ACPI: Added _OSI(Processor Device)
Mar 17 17:41:07.942898 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Mar 17 17:41:07.942906 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Mar 17 17:41:07.942913 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Mar 17 17:41:07.942921 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Mar 17 17:41:07.942928 kernel: ACPI: Interpreter enabled
Mar 17 17:41:07.942936 kernel: ACPI: PM: (supports S0 S3 S5)
Mar 17 17:41:07.942944 kernel: ACPI: Using IOAPIC for interrupt routing
Mar 17 17:41:07.942952 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Mar 17 17:41:07.942962 kernel: PCI: Using E820 reservations for host bridge windows
Mar 17 17:41:07.942969 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Mar 17 17:41:07.942977 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Mar 17 17:41:07.943249 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Mar 17 17:41:07.943401 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Mar 17 17:41:07.943548 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Mar 17 17:41:07.943562 kernel: PCI host bridge to bus 0000:00
Mar 17 17:41:07.943723 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Mar 17 17:41:07.943844 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Mar 17 17:41:07.943968 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Mar 17 17:41:07.944102 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Mar 17 17:41:07.944320 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Mar 17 17:41:07.944438 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
Mar 17 17:41:07.944566 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Mar 17 17:41:07.944728 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Mar 17 17:41:07.944872 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
Mar 17 17:41:07.945001 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref]
Mar 17 17:41:07.945162 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff]
Mar 17 17:41:07.945291 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref]
Mar 17 17:41:07.945419 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Mar 17 17:41:07.945582 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00
Mar 17 17:41:07.945713 kernel: pci 0000:00:02.0: reg 0x10: [io 0xc0c0-0xc0df]
Mar 17 17:41:07.945872 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff]
Mar 17 17:41:07.946057 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref]
Mar 17 17:41:07.946275 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000
Mar 17 17:41:07.946457 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc000-0xc07f]
Mar 17 17:41:07.946603 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff]
Mar 17 17:41:07.946738 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref]
Mar 17 17:41:07.946911 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Mar 17 17:41:07.947045 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc0e0-0xc0ff]
Mar 17 17:41:07.947212 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebd3000-0xfebd3fff]
Mar 17 17:41:07.947343 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe008000-0xfe00bfff 64bit pref]
Mar 17 17:41:07.947471 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref]
Mar 17 17:41:07.947649 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Mar 17 17:41:07.947805 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Mar 17 17:41:07.947951 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Mar 17 17:41:07.948149 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc100-0xc11f]
Mar 17 17:41:07.948290 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfebd4000-0xfebd4fff]
Mar 17 17:41:07.948437 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Mar 17 17:41:07.948578 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f]
Mar 17 17:41:07.948589 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Mar 17 17:41:07.948603 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Mar 17 17:41:07.948611 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Mar 17 17:41:07.948619 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Mar 17 17:41:07.948627 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Mar 17 17:41:07.948635 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Mar 17 17:41:07.948643 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Mar 17 17:41:07.948650 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Mar 17 17:41:07.948658 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Mar 17 17:41:07.948666 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Mar 17 17:41:07.948677 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Mar 17 17:41:07.948685 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Mar 17 17:41:07.948693 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Mar 17 17:41:07.948700 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Mar 17 17:41:07.948708 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Mar 17 17:41:07.948716 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Mar 17 17:41:07.948724 kernel: iommu: Default domain type: Translated
Mar 17 17:41:07.948732 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Mar 17 17:41:07.948741 kernel: PCI: Using ACPI for IRQ routing
Mar 17 17:41:07.948756 kernel: PCI: pci_cache_line_size set to 64 bytes
Mar 17 17:41:07.948766 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Mar 17 17:41:07.948776 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff]
Mar 17 17:41:07.948924 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Mar 17 17:41:07.949090 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Mar 17 17:41:07.949276 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Mar 17 17:41:07.949290 kernel: vgaarb: loaded
Mar 17 17:41:07.949298 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Mar 17 17:41:07.949311 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Mar 17 17:41:07.949319 kernel: clocksource: Switched to clocksource kvm-clock
Mar 17 17:41:07.949327 kernel: VFS: Disk quotas dquot_6.6.0
Mar 17 17:41:07.949335 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Mar 17 17:41:07.949343 kernel: pnp: PnP ACPI init
Mar 17 17:41:07.949511 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Mar 17 17:41:07.949533 kernel: pnp: PnP ACPI: found 6 devices
Mar 17 17:41:07.949541 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Mar 17 17:41:07.949553 kernel: NET: Registered PF_INET protocol family
Mar 17 17:41:07.949561 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Mar 17 17:41:07.949569 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Mar 17 17:41:07.949577 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Mar 17 17:41:07.949585 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Mar 17 17:41:07.949593 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Mar 17 17:41:07.949601 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Mar 17 17:41:07.949609 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Mar 17 17:41:07.949617 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Mar 17 17:41:07.949627 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Mar 17 17:41:07.949635 kernel: NET: Registered PF_XDP protocol family
Mar 17 17:41:07.949759 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Mar 17 17:41:07.949876 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Mar 17 17:41:07.949992 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Mar 17 17:41:07.950126 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Mar 17 17:41:07.950243 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Mar 17 17:41:07.950359 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
Mar 17 17:41:07.950373 kernel: PCI: CLS 0 bytes, default 64
Mar 17 17:41:07.950381 kernel: Initialise system trusted keyrings
Mar 17 17:41:07.950389 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Mar 17 17:41:07.950397 kernel: Key type asymmetric registered
Mar 17 17:41:07.950404 kernel: Asymmetric key parser 'x509' registered
Mar 17 17:41:07.950412 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Mar 17 17:41:07.950420 kernel: io scheduler mq-deadline registered
Mar 17 17:41:07.950428 kernel: io scheduler kyber registered
Mar 17 17:41:07.950436 kernel: io scheduler bfq registered
Mar 17 17:41:07.950446 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Mar 17 17:41:07.950454 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Mar 17 17:41:07.950462 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Mar 17 17:41:07.950470 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Mar 17 17:41:07.950478 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Mar 17 17:41:07.950486 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Mar 17 17:41:07.950494 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Mar 17 17:41:07.950502 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Mar 17 17:41:07.950510 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Mar 17 17:41:07.950695 kernel: rtc_cmos 00:04: RTC can wake from S4
Mar 17 17:41:07.950820 kernel: rtc_cmos 00:04: registered as rtc0
Mar 17 17:41:07.950831 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Mar 17 17:41:07.950950 kernel: rtc_cmos 00:04: setting system clock to 2025-03-17T17:41:07 UTC (1742233267)
Mar 17 17:41:07.951088 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Mar 17 17:41:07.951099 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Mar 17 17:41:07.951107 kernel: NET: Registered PF_INET6 protocol family
Mar 17 17:41:07.951115 kernel: Segment Routing with IPv6
Mar 17 17:41:07.951127 kernel: In-situ OAM (IOAM) with IPv6
Mar 17 17:41:07.951135 kernel: NET: Registered PF_PACKET protocol family
Mar 17 17:41:07.951143 kernel: Key type dns_resolver registered
Mar 17 17:41:07.951151 kernel: IPI shorthand broadcast: enabled
Mar 17 17:41:07.951158 kernel: sched_clock: Marking stable (697002555, 195207238)->(1076320090, -184110297)
Mar 17 17:41:07.951166 kernel: registered taskstats version 1
Mar 17 17:41:07.951174 kernel: Loading compiled-in X.509 certificates
Mar 17 17:41:07.951182 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.83-flatcar: 608fb88224bc0ea76afefc598557abb0413f36c0'
Mar 17 17:41:07.951190 kernel: Key type .fscrypt registered
Mar 17 17:41:07.951201 kernel: Key type fscrypt-provisioning registered
Mar 17 17:41:07.951209 kernel: ima: No TPM chip found, activating TPM-bypass!
Mar 17 17:41:07.951216 kernel: ima: Allocated hash algorithm: sha1
Mar 17 17:41:07.951224 kernel: ima: No architecture policies found
Mar 17 17:41:07.951232 kernel: clk: Disabling unused clocks
Mar 17 17:41:07.951239 kernel: Freeing unused kernel image (initmem) memory: 42992K
Mar 17 17:41:07.951247 kernel: Write protecting the kernel read-only data: 36864k
Mar 17 17:41:07.951255 kernel: Freeing unused kernel image (rodata/data gap) memory: 1832K
Mar 17 17:41:07.951263 kernel: Run /init as init process
Mar 17 17:41:07.951273 kernel: with arguments:
Mar 17 17:41:07.951281 kernel: /init
Mar 17 17:41:07.951289 kernel: with environment:
Mar 17 17:41:07.951296 kernel: HOME=/
Mar 17 17:41:07.951304 kernel: TERM=linux
Mar 17 17:41:07.951311 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Mar 17 17:41:07.951321 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Mar 17 17:41:07.951331 systemd[1]: Detected virtualization kvm.
Mar 17 17:41:07.951343 systemd[1]: Detected architecture x86-64.
Mar 17 17:41:07.951351 systemd[1]: Running in initrd.
Mar 17 17:41:07.951359 systemd[1]: No hostname configured, using default hostname.
Mar 17 17:41:07.951368 systemd[1]: Hostname set to .
Mar 17 17:41:07.951376 systemd[1]: Initializing machine ID from VM UUID.
Mar 17 17:41:07.951384 systemd[1]: Queued start job for default target initrd.target.
Mar 17 17:41:07.951393 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 17 17:41:07.951401 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 17 17:41:07.951413 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Mar 17 17:41:07.951435 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Mar 17 17:41:07.951447 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Mar 17 17:41:07.951456 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Mar 17 17:41:07.951466 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Mar 17 17:41:07.951477 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Mar 17 17:41:07.951486 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 17 17:41:07.951494 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Mar 17 17:41:07.951503 systemd[1]: Reached target paths.target - Path Units.
Mar 17 17:41:07.951512 systemd[1]: Reached target slices.target - Slice Units.
Mar 17 17:41:07.951529 systemd[1]: Reached target swap.target - Swaps.
Mar 17 17:41:07.951538 systemd[1]: Reached target timers.target - Timer Units.
Mar 17 17:41:07.951546 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Mar 17 17:41:07.951558 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Mar 17 17:41:07.951569 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Mar 17 17:41:07.951577 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Mar 17 17:41:07.951586 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Mar 17 17:41:07.951595 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Mar 17 17:41:07.951603 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 17 17:41:07.951611 systemd[1]: Reached target sockets.target - Socket Units.
Mar 17 17:41:07.951620 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Mar 17 17:41:07.951632 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Mar 17 17:41:07.951640 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Mar 17 17:41:07.951649 systemd[1]: Starting systemd-fsck-usr.service...
Mar 17 17:41:07.951657 systemd[1]: Starting systemd-journald.service - Journal Service...
Mar 17 17:41:07.951666 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Mar 17 17:41:07.951674 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 17 17:41:07.951682 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Mar 17 17:41:07.951691 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 17 17:41:07.951699 systemd[1]: Finished systemd-fsck-usr.service.
Mar 17 17:41:07.951731 systemd-journald[193]: Collecting audit messages is disabled.
Mar 17 17:41:07.951755 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Mar 17 17:41:07.951767 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Mar 17 17:41:07.951780 systemd-journald[193]: Journal started
Mar 17 17:41:07.951801 systemd-journald[193]: Runtime Journal (/run/log/journal/d056b50e7a844ffab25f2d26e546dce1) is 6.0M, max 48.4M, 42.3M free.
Mar 17 17:41:07.946432 systemd-modules-load[195]: Inserted module 'overlay'
Mar 17 17:41:07.986124 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Mar 17 17:41:07.986145 kernel: Bridge firewalling registered
Mar 17 17:41:07.979106 systemd-modules-load[195]: Inserted module 'br_netfilter'
Mar 17 17:41:07.988238 systemd[1]: Started systemd-journald.service - Journal Service.
Mar 17 17:41:07.989566 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Mar 17 17:41:07.991966 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 17 17:41:08.017293 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 17 17:41:08.019923 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 17 17:41:08.021078 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Mar 17 17:41:08.022854 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Mar 17 17:41:08.034059 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 17 17:41:08.037319 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 17 17:41:08.041724 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Mar 17 17:41:08.044151 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 17 17:41:08.049496 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 17 17:41:08.057306 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Mar 17 17:41:08.059467 dracut-cmdline[224]: dracut-dracut-053
Mar 17 17:41:08.061789 dracut-cmdline[224]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=d4b838cd9a6f58e8c4a6b615c32b0b28ee0df1660e34033a8fbd0429c6de5fd0
Mar 17 17:41:08.100165 systemd-resolved[232]: Positive Trust Anchors:
Mar 17 17:41:08.100183 systemd-resolved[232]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Mar 17 17:41:08.100221 systemd-resolved[232]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Mar 17 17:41:08.111185 systemd-resolved[232]: Defaulting to hostname 'linux'.
Mar 17 17:41:08.117835 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Mar 17 17:41:08.118492 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Mar 17 17:41:08.149096 kernel: SCSI subsystem initialized
Mar 17 17:41:08.159102 kernel: Loading iSCSI transport class v2.0-870.
Mar 17 17:41:08.170101 kernel: iscsi: registered transport (tcp)
Mar 17 17:41:08.193103 kernel: iscsi: registered transport (qla4xxx)
Mar 17 17:41:08.193147 kernel: QLogic iSCSI HBA Driver
Mar 17 17:41:08.247324 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Mar 17 17:41:08.259215 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Mar 17 17:41:08.287111 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Mar 17 17:41:08.287176 kernel: device-mapper: uevent: version 1.0.3
Mar 17 17:41:08.297614 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Mar 17 17:41:08.341100 kernel: raid6: avx2x4 gen() 29170 MB/s
Mar 17 17:41:08.358086 kernel: raid6: avx2x2 gen() 29951 MB/s
Mar 17 17:41:08.375187 kernel: raid6: avx2x1 gen() 25256 MB/s
Mar 17 17:41:08.375220 kernel: raid6: using algorithm avx2x2 gen() 29951 MB/s
Mar 17 17:41:08.395089 kernel: raid6: .... xor() 19678 MB/s, rmw enabled
Mar 17 17:41:08.395119 kernel: raid6: using avx2x2 recovery algorithm
Mar 17 17:41:08.416119 kernel: xor: automatically using best checksumming function avx
Mar 17 17:41:08.583126 kernel: Btrfs loaded, zoned=no, fsverity=no
Mar 17 17:41:08.599824 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Mar 17 17:41:08.610328 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 17 17:41:08.625161 systemd-udevd[415]: Using default interface naming scheme 'v255'.
Mar 17 17:41:08.630347 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 17 17:41:08.640240 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Mar 17 17:41:08.655252 dracut-pre-trigger[424]: rd.md=0: removing MD RAID activation
Mar 17 17:41:08.689855 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Mar 17 17:41:08.702265 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Mar 17 17:41:08.771489 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 17 17:41:08.783464 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Mar 17 17:41:08.795949 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Mar 17 17:41:08.798893 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Mar 17 17:41:08.802990 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 17 17:41:08.805407 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Mar 17 17:41:08.815143 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Mar 17 17:41:08.833996 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Mar 17 17:41:08.834299 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Mar 17 17:41:08.834324 kernel: GPT:9289727 != 19775487
Mar 17 17:41:08.834352 kernel: GPT:Alternate GPT header not at the end of the disk.
Mar 17 17:41:08.834381 kernel: GPT:9289727 != 19775487
Mar 17 17:41:08.834401 kernel: GPT: Use GNU Parted to correct GPT errors.
Mar 17 17:41:08.834439 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Mar 17 17:41:08.817928 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Mar 17 17:41:08.835304 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Mar 17 17:41:08.841099 kernel: cryptd: max_cpu_qlen set to 1000
Mar 17 17:41:08.841709 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Mar 17 17:41:08.846134 kernel: libata version 3.00 loaded.
Mar 17 17:41:08.841845 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 17 17:41:08.848022 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 17 17:41:08.853772 kernel: AVX2 version of gcm_enc/dec engaged.
Mar 17 17:41:08.853795 kernel: AES CTR mode by8 optimization enabled
Mar 17 17:41:08.849341 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 17 17:41:08.849559 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 17 17:41:08.853786 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Mar 17 17:41:08.860418 kernel: ahci 0000:00:1f.2: version 3.0
Mar 17 17:41:08.888776 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Mar 17 17:41:08.888794 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
Mar 17 17:41:08.888956 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Mar 17 17:41:08.889127 kernel: scsi host0: ahci
Mar 17 17:41:08.889291 kernel: scsi host1: ahci
Mar 17 17:41:08.889454 kernel: scsi host2: ahci
Mar 17 17:41:08.889617 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by (udev-worker) (463)
Mar 17 17:41:08.889630 kernel: scsi host3: ahci
Mar 17 17:41:08.889782 kernel: BTRFS: device fsid 2b8ebefd-e897-48f6-96d5-0893fbb7c64a devid 1 transid 40 /dev/vda3 scanned by (udev-worker) (460)
Mar 17 17:41:08.889795 kernel: scsi host4: ahci
Mar 17 17:41:08.889954 kernel: scsi host5: ahci
Mar 17 17:41:08.890127 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34
Mar 17 17:41:08.890139 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34
Mar 17 17:41:08.890150 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34
Mar 17 17:41:08.890160 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34
Mar 17 17:41:08.890171 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34
Mar 17 17:41:08.890181 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34
Mar 17 17:41:08.868443 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 17 17:41:08.892587 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Mar 17 17:41:08.926486 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Mar 17 17:41:08.929365 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 17 17:41:08.939132 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Mar 17 17:41:08.943216 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Mar 17 17:41:08.943479 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Mar 17 17:41:08.958211 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Mar 17 17:41:08.959465 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 17 17:41:08.983667 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 17 17:41:09.151719 disk-uuid[557]: Primary Header is updated.
Mar 17 17:41:09.151719 disk-uuid[557]: Secondary Entries is updated.
Mar 17 17:41:09.151719 disk-uuid[557]: Secondary Header is updated.
Mar 17 17:41:09.157105 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Mar 17 17:41:09.163112 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Mar 17 17:41:09.198094 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Mar 17 17:41:09.198158 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Mar 17 17:41:09.199087 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Mar 17 17:41:09.201327 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Mar 17 17:41:09.201365 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Mar 17 17:41:09.202086 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Mar 17 17:41:09.204282 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Mar 17 17:41:09.204309 kernel: ata3.00: applying bridge limits
Mar 17 17:41:09.205225 kernel: ata3.00: configured for UDMA/100
Mar 17 17:41:09.205252 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Mar 17 17:41:09.259099 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Mar 17 17:41:09.284988 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Mar 17 17:41:09.285006 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Mar 17 17:41:10.164091 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Mar 17 17:41:10.164570 disk-uuid[566]: The operation has completed successfully.
Mar 17 17:41:10.195575 systemd[1]: disk-uuid.service: Deactivated successfully.
Mar 17 17:41:10.195721 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Mar 17 17:41:10.223311 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Mar 17 17:41:10.227299 sh[594]: Success
Mar 17 17:41:10.239080 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
Mar 17 17:41:10.270895 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Mar 17 17:41:10.292972 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Mar 17 17:41:10.296773 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Mar 17 17:41:10.307789 kernel: BTRFS info (device dm-0): first mount of filesystem 2b8ebefd-e897-48f6-96d5-0893fbb7c64a
Mar 17 17:41:10.307825 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Mar 17 17:41:10.307836 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Mar 17 17:41:10.308805 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Mar 17 17:41:10.326094 kernel: BTRFS info (device dm-0): using free space tree
Mar 17 17:41:10.330621 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Mar 17 17:41:10.332965 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Mar 17 17:41:10.350218 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Mar 17 17:41:10.352934 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Mar 17 17:41:10.362777 kernel: BTRFS info (device vda6): first mount of filesystem 7b241d32-136b-4fe3-b105-cecff2b2cf64
Mar 17 17:41:10.362827 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Mar 17 17:41:10.362839 kernel: BTRFS info (device vda6): using free space tree
Mar 17 17:41:10.366090 kernel: BTRFS info (device vda6): auto enabling async discard
Mar 17 17:41:10.374939 systemd[1]: mnt-oem.mount: Deactivated successfully.
Mar 17 17:41:10.376777 kernel: BTRFS info (device vda6): last unmount of filesystem 7b241d32-136b-4fe3-b105-cecff2b2cf64
Mar 17 17:41:10.458689 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Mar 17 17:41:10.475284 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Mar 17 17:41:10.499858 systemd-networkd[772]: lo: Link UP
Mar 17 17:41:10.499873 systemd-networkd[772]: lo: Gained carrier
Mar 17 17:41:10.501722 systemd-networkd[772]: Enumeration completed
Mar 17 17:41:10.502135 systemd-networkd[772]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 17 17:41:10.502139 systemd-networkd[772]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Mar 17 17:41:10.503325 systemd-networkd[772]: eth0: Link UP
Mar 17 17:41:10.503329 systemd-networkd[772]: eth0: Gained carrier
Mar 17 17:41:10.503337 systemd-networkd[772]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 17 17:41:10.503728 systemd[1]: Started systemd-networkd.service - Network Configuration.
Mar 17 17:41:10.515199 systemd[1]: Reached target network.target - Network.
Mar 17 17:41:10.531180 systemd-networkd[772]: eth0: DHCPv4 address 10.0.0.46/16, gateway 10.0.0.1 acquired from 10.0.0.1
Mar 17 17:41:10.615331 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Mar 17 17:41:10.628412 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Mar 17 17:41:10.690773 ignition[777]: Ignition 2.20.0
Mar 17 17:41:10.690788 ignition[777]: Stage: fetch-offline
Mar 17 17:41:10.690837 ignition[777]: no configs at "/usr/lib/ignition/base.d"
Mar 17 17:41:10.690849 ignition[777]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 17 17:41:10.690986 ignition[777]: parsed url from cmdline: ""
Mar 17 17:41:10.690992 ignition[777]: no config URL provided
Mar 17 17:41:10.690999 ignition[777]: reading system config file "/usr/lib/ignition/user.ign"
Mar 17 17:41:10.691011 ignition[777]: no config at "/usr/lib/ignition/user.ign"
Mar 17 17:41:10.691046 ignition[777]: op(1): [started] loading QEMU firmware config module
Mar 17 17:41:10.691052 ignition[777]: op(1): executing: "modprobe" "qemu_fw_cfg"
Mar 17 17:41:10.706288 ignition[777]: op(1): [finished] loading QEMU firmware config module
Mar 17 17:41:10.706331 ignition[777]: QEMU firmware config was not found. Ignoring...
Mar 17 17:41:10.755321 ignition[777]: parsing config with SHA512: 53590de5209c77e2d90dd4c8538709d141e1fd2d733521120d5f22a0e21dbcf4a5685a6fa6707cdcb4fbd7ca0e8369eb86a3a6dfc5a3c4c52d8274d57a8d60fe
Mar 17 17:41:10.761163 unknown[777]: fetched base config from "system"
Mar 17 17:41:10.761185 unknown[777]: fetched user config from "qemu"
Mar 17 17:41:10.762109 ignition[777]: fetch-offline: fetch-offline passed
Mar 17 17:41:10.762325 ignition[777]: Ignition finished successfully
Mar 17 17:41:10.765295 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Mar 17 17:41:10.767608 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Mar 17 17:41:10.778301 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Mar 17 17:41:10.795689 ignition[788]: Ignition 2.20.0
Mar 17 17:41:10.795704 ignition[788]: Stage: kargs
Mar 17 17:41:10.795931 ignition[788]: no configs at "/usr/lib/ignition/base.d"
Mar 17 17:41:10.795946 ignition[788]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 17 17:41:10.797187 ignition[788]: kargs: kargs passed
Mar 17 17:41:10.797257 ignition[788]: Ignition finished successfully
Mar 17 17:41:10.801454 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Mar 17 17:41:10.817552 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Mar 17 17:41:10.834598 ignition[797]: Ignition 2.20.0
Mar 17 17:41:10.834622 ignition[797]: Stage: disks
Mar 17 17:41:10.834850 ignition[797]: no configs at "/usr/lib/ignition/base.d"
Mar 17 17:41:10.834866 ignition[797]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 17 17:41:10.835888 ignition[797]: disks: disks passed
Mar 17 17:41:10.839692 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Mar 17 17:41:10.835950 ignition[797]: Ignition finished successfully
Mar 17 17:41:10.842086 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Mar 17 17:41:10.843664 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Mar 17 17:41:10.845869 systemd[1]: Reached target local-fs.target - Local File Systems.
Mar 17 17:41:10.847309 systemd[1]: Reached target sysinit.target - System Initialization.
Mar 17 17:41:10.848682 systemd[1]: Reached target basic.target - Basic System.
Mar 17 17:41:10.866161 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Mar 17 17:41:10.900676 systemd-fsck[809]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Mar 17 17:41:10.918899 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Mar 17 17:41:10.935346 systemd[1]: Mounting sysroot.mount - /sysroot...
Mar 17 17:41:11.107103 kernel: EXT4-fs (vda9): mounted filesystem 345fc709-8965-4219-b368-16e508c3d632 r/w with ordered data mode. Quota mode: none.
Mar 17 17:41:11.108148 systemd[1]: Mounted sysroot.mount - /sysroot.
Mar 17 17:41:11.110676 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Mar 17 17:41:11.126330 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Mar 17 17:41:11.130495 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Mar 17 17:41:11.133264 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Mar 17 17:41:11.133337 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Mar 17 17:41:11.133374 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Mar 17 17:41:11.142111 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by mount (817)
Mar 17 17:41:11.144332 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Mar 17 17:41:11.148793 kernel: BTRFS info (device vda6): first mount of filesystem 7b241d32-136b-4fe3-b105-cecff2b2cf64
Mar 17 17:41:11.148824 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Mar 17 17:41:11.148840 kernel: BTRFS info (device vda6): using free space tree
Mar 17 17:41:11.151109 kernel: BTRFS info (device vda6): auto enabling async discard
Mar 17 17:41:11.158537 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Mar 17 17:41:11.163578 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Mar 17 17:41:11.214858 initrd-setup-root[841]: cut: /sysroot/etc/passwd: No such file or directory
Mar 17 17:41:11.219804 initrd-setup-root[848]: cut: /sysroot/etc/group: No such file or directory
Mar 17 17:41:11.224437 initrd-setup-root[855]: cut: /sysroot/etc/shadow: No such file or directory
Mar 17 17:41:11.230133 initrd-setup-root[862]: cut: /sysroot/etc/gshadow: No such file or directory
Mar 17 17:41:11.351950 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Mar 17 17:41:11.394350 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Mar 17 17:41:11.398038 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Mar 17 17:41:11.409104 kernel: BTRFS info (device vda6): last unmount of filesystem 7b241d32-136b-4fe3-b105-cecff2b2cf64
Mar 17 17:41:11.410502 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Mar 17 17:41:11.430921 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Mar 17 17:41:11.565572 ignition[934]: INFO : Ignition 2.20.0
Mar 17 17:41:11.565572 ignition[934]: INFO : Stage: mount
Mar 17 17:41:11.567804 ignition[934]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 17 17:41:11.567804 ignition[934]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 17 17:41:11.571502 ignition[934]: INFO : mount: mount passed
Mar 17 17:41:11.572449 ignition[934]: INFO : Ignition finished successfully
Mar 17 17:41:11.575949 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Mar 17 17:41:11.587240 systemd[1]: Starting ignition-files.service - Ignition (files)...
Mar 17 17:41:11.599159 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Mar 17 17:41:11.614108 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/vda6 scanned by mount (944)
Mar 17 17:41:11.614181 kernel: BTRFS info (device vda6): first mount of filesystem 7b241d32-136b-4fe3-b105-cecff2b2cf64
Mar 17 17:41:11.615938 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Mar 17 17:41:11.615965 kernel: BTRFS info (device vda6): using free space tree
Mar 17 17:41:11.619093 kernel: BTRFS info (device vda6): auto enabling async discard
Mar 17 17:41:11.621176 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Mar 17 17:41:11.649244 ignition[961]: INFO : Ignition 2.20.0
Mar 17 17:41:11.649244 ignition[961]: INFO : Stage: files
Mar 17 17:41:11.684092 ignition[961]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 17 17:41:11.684092 ignition[961]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 17 17:41:11.684092 ignition[961]: DEBUG : files: compiled without relabeling support, skipping
Mar 17 17:41:11.688799 ignition[961]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Mar 17 17:41:11.688799 ignition[961]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Mar 17 17:41:11.686322 systemd-networkd[772]: eth0: Gained IPv6LL
Mar 17 17:41:11.692862 ignition[961]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Mar 17 17:41:11.694499 ignition[961]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Mar 17 17:41:11.695931 ignition[961]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Mar 17 17:41:11.695012 unknown[961]: wrote ssh authorized keys file for user: core
Mar 17 17:41:11.698663 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Mar 17 17:41:11.698663 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Mar 17 17:41:11.738809 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Mar 17 17:41:11.861664 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Mar 17 17:41:11.861664 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Mar 17 17:41:11.865908 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Mar 17 17:41:11.865908 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Mar 17 17:41:11.865908 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Mar 17 17:41:11.865908 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 17 17:41:11.865908 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 17 17:41:11.865908 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 17 17:41:11.865908 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 17 17:41:11.865908 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Mar 17 17:41:11.865908 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Mar 17 17:41:11.865908 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Mar 17 17:41:11.865908 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Mar 17 17:41:11.865908 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Mar 17 17:41:11.865908 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-x86-64.raw: attempt #1
Mar 17 17:41:12.382435 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Mar 17 17:41:12.737869 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Mar 17 17:41:12.737869 ignition[961]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Mar 17 17:41:12.742121 ignition[961]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 17 17:41:12.742121 ignition[961]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 17 17:41:12.742121 ignition[961]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Mar 17 17:41:12.742121 ignition[961]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Mar 17 17:41:12.742121 ignition[961]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Mar 17 17:41:12.742121 ignition[961]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Mar 17 17:41:12.742121 ignition[961]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Mar 17 17:41:12.742121 ignition[961]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service"
Mar 17 17:41:12.773177 ignition[961]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service"
Mar 17 17:41:12.779921 ignition[961]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Mar 17 17:41:12.781666 ignition[961]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service"
Mar 17 17:41:12.781666 ignition[961]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
Mar 17 17:41:12.781666 ignition[961]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
Mar 17 17:41:12.781666 ignition[961]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
Mar 17 17:41:12.781666 ignition[961]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
Mar 17 17:41:12.781666 ignition[961]: INFO : files: files passed
Mar 17 17:41:12.781666 ignition[961]: INFO : Ignition finished successfully
Mar 17 17:41:12.783684 systemd[1]: Finished ignition-files.service - Ignition (files).
Mar 17 17:41:12.795308 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Mar 17 17:41:12.798314 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Mar 17 17:41:12.799987 systemd[1]: ignition-quench.service: Deactivated successfully.
Mar 17 17:41:12.800136 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Mar 17 17:41:12.808874 initrd-setup-root-after-ignition[990]: grep: /sysroot/oem/oem-release: No such file or directory
Mar 17 17:41:12.811868 initrd-setup-root-after-ignition[992]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Mar 17 17:41:12.811868 initrd-setup-root-after-ignition[992]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Mar 17 17:41:12.814998 initrd-setup-root-after-ignition[996]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Mar 17 17:41:12.814832 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Mar 17 17:41:12.816502 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Mar 17 17:41:12.828286 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Mar 17 17:41:12.860018 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Mar 17 17:41:12.860185 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Mar 17 17:41:12.862587 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Mar 17 17:41:12.864766 systemd[1]: Reached target initrd.target - Initrd Default Target.
Mar 17 17:41:12.866782 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Mar 17 17:41:12.868008 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Mar 17 17:41:12.889836 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Mar 17 17:41:12.921295 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Mar 17 17:41:12.931208 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Mar 17 17:41:12.931809 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 17 17:41:12.932174 systemd[1]: Stopped target timers.target - Timer Units.
Mar 17 17:41:12.970018 ignition[1016]: INFO : Ignition 2.20.0
Mar 17 17:41:12.970018 ignition[1016]: INFO : Stage: umount
Mar 17 17:41:12.970018 ignition[1016]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 17 17:41:12.970018 ignition[1016]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 17 17:41:12.970018 ignition[1016]: INFO : umount: umount passed
Mar 17 17:41:12.970018 ignition[1016]: INFO : Ignition finished successfully
Mar 17 17:41:12.932511 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Mar 17 17:41:12.932665 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Mar 17 17:41:12.933332 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Mar 17 17:41:12.933710 systemd[1]: Stopped target basic.target - Basic System.
Mar 17 17:41:12.934029 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Mar 17 17:41:12.934402 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Mar 17 17:41:12.934717 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Mar 17 17:41:12.935034 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Mar 17 17:41:12.935354 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Mar 17 17:41:12.935701 systemd[1]: Stopped target sysinit.target - System Initialization.
Mar 17 17:41:12.936031 systemd[1]: Stopped target local-fs.target - Local File Systems.
Mar 17 17:41:12.936368 systemd[1]: Stopped target swap.target - Swaps.
Mar 17 17:41:12.936704 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Mar 17 17:41:12.936865 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Mar 17 17:41:12.937765 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Mar 17 17:41:12.938118 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 17 17:41:12.938396 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Mar 17 17:41:12.938534 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 17 17:41:12.938919 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Mar 17 17:41:12.939032 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Mar 17 17:41:12.939766 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Mar 17 17:41:12.939925 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Mar 17 17:41:12.940561 systemd[1]: Stopped target paths.target - Path Units.
Mar 17 17:41:12.940938 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Mar 17 17:41:12.941196 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 17 17:41:12.941685 systemd[1]: Stopped target slices.target - Slice Units.
Mar 17 17:41:12.941987 systemd[1]: Stopped target sockets.target - Socket Units.
Mar 17 17:41:12.942502 systemd[1]: iscsid.socket: Deactivated successfully.
Mar 17 17:41:12.942638 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Mar 17 17:41:12.943016 systemd[1]: iscsiuio.socket: Deactivated successfully.
Mar 17 17:41:12.943158 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Mar 17 17:41:12.943551 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Mar 17 17:41:12.943710 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Mar 17 17:41:12.944164 systemd[1]: ignition-files.service: Deactivated successfully.
Mar 17 17:41:12.944314 systemd[1]: Stopped ignition-files.service - Ignition (files).
Mar 17 17:41:12.945775 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Mar 17 17:41:12.946043 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Mar 17 17:41:12.946225 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 17 17:41:12.947513 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Mar 17 17:41:12.947887 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Mar 17 17:41:12.948029 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 17 17:41:12.948515 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Mar 17 17:41:12.948656 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Mar 17 17:41:12.953499 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Mar 17 17:41:12.953619 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Mar 17 17:41:12.968694 systemd[1]: ignition-mount.service: Deactivated successfully.
Mar 17 17:41:12.968846 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Mar 17 17:41:12.970201 systemd[1]: Stopped target network.target - Network.
Mar 17 17:41:12.971902 systemd[1]: ignition-disks.service: Deactivated successfully.
Mar 17 17:41:12.971985 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Mar 17 17:41:12.973780 systemd[1]: ignition-kargs.service: Deactivated successfully.
Mar 17 17:41:12.973837 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Mar 17 17:41:12.975830 systemd[1]: ignition-setup.service: Deactivated successfully.
Mar 17 17:41:12.975884 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Mar 17 17:41:12.978331 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Mar 17 17:41:12.978415 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Mar 17 17:41:12.980528 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Mar 17 17:41:12.982467 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Mar 17 17:41:12.984120 systemd-networkd[772]: eth0: DHCPv6 lease lost
Mar 17 17:41:12.985628 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Mar 17 17:41:12.986324 systemd[1]: systemd-networkd.service: Deactivated successfully.
Mar 17 17:41:12.986491 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Mar 17 17:41:12.989852 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Mar 17 17:41:12.989928 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Mar 17 17:41:13.000211 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Mar 17 17:41:13.002282 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Mar 17 17:41:13.002356 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Mar 17 17:41:13.004827 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 17 17:41:13.007752 systemd[1]: systemd-resolved.service: Deactivated successfully.
Mar 17 17:41:13.007936 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Mar 17 17:41:13.020596 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Mar 17 17:41:13.020679 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Mar 17 17:41:13.022683 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Mar 17 17:41:13.022753 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Mar 17 17:41:13.024867 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Mar 17 17:41:13.024934 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 17 17:41:13.027699 systemd[1]: systemd-udevd.service: Deactivated successfully.
Mar 17 17:41:13.027942 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 17 17:41:13.030163 systemd[1]: network-cleanup.service: Deactivated successfully.
Mar 17 17:41:13.030322 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Mar 17 17:41:13.033000 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Mar 17 17:41:13.033095 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Mar 17 17:41:13.034925 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Mar 17 17:41:13.034982 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 17 17:41:13.037087 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Mar 17 17:41:13.037144 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Mar 17 17:41:13.039583 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Mar 17 17:41:13.039639 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Mar 17 17:41:13.041354 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Mar 17 17:41:13.041427 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 17 17:41:13.057265 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Mar 17 17:41:13.058660 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Mar 17 17:41:13.058739 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 17 17:41:13.061000 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 17 17:41:13.061057 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 17 17:41:13.065530 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Mar 17 17:41:13.065659 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Mar 17 17:41:13.617374 systemd[1]: sysroot-boot.service: Deactivated successfully.
Mar 17 17:41:13.617539 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Mar 17 17:41:13.619969 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Mar 17 17:41:13.621036 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Mar 17 17:41:13.621110 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Mar 17 17:41:13.637201 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Mar 17 17:41:13.647322 systemd[1]: Switching root.
Mar 17 17:41:13.724480 systemd-journald[193]: Journal stopped
Mar 17 17:41:15.456047 systemd-journald[193]: Received SIGTERM from PID 1 (systemd).
Mar 17 17:41:15.456133 kernel: SELinux: policy capability network_peer_controls=1
Mar 17 17:41:15.456154 kernel: SELinux: policy capability open_perms=1
Mar 17 17:41:15.456166 kernel: SELinux: policy capability extended_socket_class=1
Mar 17 17:41:15.456177 kernel: SELinux: policy capability always_check_network=0
Mar 17 17:41:15.456189 kernel: SELinux: policy capability cgroup_seclabel=1
Mar 17 17:41:15.456200 kernel: SELinux: policy capability nnp_nosuid_transition=1
Mar 17 17:41:15.456216 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Mar 17 17:41:15.456227 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Mar 17 17:41:15.456239 kernel: audit: type=1403 audit(1742233274.551:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Mar 17 17:41:15.456252 systemd[1]: Successfully loaded SELinux policy in 42.687ms.
Mar 17 17:41:15.456278 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 13.182ms.
Mar 17 17:41:15.456291 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Mar 17 17:41:15.456314 systemd[1]: Detected virtualization kvm.
Mar 17 17:41:15.456331 systemd[1]: Detected architecture x86-64.
Mar 17 17:41:15.456349 systemd[1]: Detected first boot.
Mar 17 17:41:15.456365 systemd[1]: Initializing machine ID from VM UUID.
Mar 17 17:41:15.456378 zram_generator::config[1060]: No configuration found.
Mar 17 17:41:15.456397 systemd[1]: Populated /etc with preset unit settings.
Mar 17 17:41:15.456411 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Mar 17 17:41:15.456423 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Mar 17 17:41:15.456435 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Mar 17 17:41:15.456448 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Mar 17 17:41:15.456461 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Mar 17 17:41:15.456476 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Mar 17 17:41:15.456489 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Mar 17 17:41:15.456507 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Mar 17 17:41:15.456519 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Mar 17 17:41:15.456532 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Mar 17 17:41:15.456544 systemd[1]: Created slice user.slice - User and Session Slice.
Mar 17 17:41:15.456556 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 17 17:41:15.456569 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 17 17:41:15.456584 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Mar 17 17:41:15.456596 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Mar 17 17:41:15.456609 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Mar 17 17:41:15.456622 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Mar 17 17:41:15.456635 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Mar 17 17:41:15.456647 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 17 17:41:15.456660 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Mar 17 17:41:15.456672 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Mar 17 17:41:15.456687 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Mar 17 17:41:15.456702 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Mar 17 17:41:15.456714 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 17 17:41:15.456726 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Mar 17 17:41:15.456740 systemd[1]: Reached target slices.target - Slice Units.
Mar 17 17:41:15.456755 systemd[1]: Reached target swap.target - Swaps.
Mar 17 17:41:15.456771 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Mar 17 17:41:15.456786 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Mar 17 17:41:15.456799 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Mar 17 17:41:15.456814 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Mar 17 17:41:15.456827 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 17 17:41:15.456840 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Mar 17 17:41:15.456853 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Mar 17 17:41:15.456866 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Mar 17 17:41:15.456878 systemd[1]: Mounting media.mount - External Media Directory...
Mar 17 17:41:15.456892 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 17 17:41:15.456904 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Mar 17 17:41:15.456919 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Mar 17 17:41:15.456934 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Mar 17 17:41:15.456953 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Mar 17 17:41:15.456965 systemd[1]: Reached target machines.target - Containers.
Mar 17 17:41:15.456980 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Mar 17 17:41:15.456992 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 17 17:41:15.457004 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Mar 17 17:41:15.457016 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Mar 17 17:41:15.457029 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 17 17:41:15.457052 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Mar 17 17:41:15.457144 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 17 17:41:15.457158 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Mar 17 17:41:15.457171 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 17 17:41:15.457184 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Mar 17 17:41:15.457196 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Mar 17 17:41:15.457208 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Mar 17 17:41:15.457220 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Mar 17 17:41:15.457232 systemd[1]: Stopped systemd-fsck-usr.service.
Mar 17 17:41:15.457248 kernel: fuse: init (API version 7.39)
Mar 17 17:41:15.457260 systemd[1]: Starting systemd-journald.service - Journal Service...
Mar 17 17:41:15.457272 kernel: loop: module loaded
Mar 17 17:41:15.457284 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Mar 17 17:41:15.457296 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Mar 17 17:41:15.457321 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Mar 17 17:41:15.457338 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Mar 17 17:41:15.457356 systemd[1]: verity-setup.service: Deactivated successfully.
Mar 17 17:41:15.457370 systemd[1]: Stopped verity-setup.service.
Mar 17 17:41:15.457386 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 17 17:41:15.457398 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Mar 17 17:41:15.457430 systemd-journald[1127]: Collecting audit messages is disabled.
Mar 17 17:41:15.457453 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Mar 17 17:41:15.457465 systemd[1]: Mounted media.mount - External Media Directory.
Mar 17 17:41:15.457477 systemd-journald[1127]: Journal started
Mar 17 17:41:15.457502 systemd-journald[1127]: Runtime Journal (/run/log/journal/d056b50e7a844ffab25f2d26e546dce1) is 6.0M, max 48.4M, 42.3M free.
Mar 17 17:41:15.201206 systemd[1]: Queued start job for default target multi-user.target.
Mar 17 17:41:15.217776 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Mar 17 17:41:15.218264 systemd[1]: systemd-journald.service: Deactivated successfully.
Mar 17 17:41:15.460883 systemd[1]: Started systemd-journald.service - Journal Service.
Mar 17 17:41:15.462003 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Mar 17 17:41:15.464141 kernel: ACPI: bus type drm_connector registered
Mar 17 17:41:15.463763 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Mar 17 17:41:15.465126 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Mar 17 17:41:15.466533 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 17 17:41:15.468146 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Mar 17 17:41:15.469643 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Mar 17 17:41:15.469818 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Mar 17 17:41:15.471357 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 17 17:41:15.471536 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 17 17:41:15.472989 systemd[1]: modprobe@drm.service: Deactivated successfully.
Mar 17 17:41:15.473198 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Mar 17 17:41:15.474839 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 17 17:41:15.475013 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 17 17:41:15.476666 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Mar 17 17:41:15.476844 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Mar 17 17:41:15.478281 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 17 17:41:15.478465 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 17 17:41:15.479930 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Mar 17 17:41:15.481365 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Mar 17 17:41:15.483027 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Mar 17 17:41:15.496596 systemd[1]: Reached target network-pre.target - Preparation for Network.
Mar 17 17:41:15.507208 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Mar 17 17:41:15.510263 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Mar 17 17:41:15.511561 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Mar 17 17:41:15.511606 systemd[1]: Reached target local-fs.target - Local File Systems.
Mar 17 17:41:15.513685 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Mar 17 17:41:15.516215 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Mar 17 17:41:15.519060 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Mar 17 17:41:15.520589 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 17 17:41:15.524237 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Mar 17 17:41:15.526802 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Mar 17 17:41:15.528223 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Mar 17 17:41:15.529937 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Mar 17 17:41:15.531352 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Mar 17 17:41:15.533229 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 17 17:41:15.539607 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Mar 17 17:41:15.544527 systemd-journald[1127]: Time spent on flushing to /var/log/journal/d056b50e7a844ffab25f2d26e546dce1 is 46.483ms for 950 entries.
Mar 17 17:41:15.544527 systemd-journald[1127]: System Journal (/var/log/journal/d056b50e7a844ffab25f2d26e546dce1) is 8.0M, max 195.6M, 187.6M free.
Mar 17 17:41:15.610270 systemd-journald[1127]: Received client request to flush runtime journal.
Mar 17 17:41:15.610384 kernel: loop0: detected capacity change from 0 to 205544
Mar 17 17:41:15.552273 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Mar 17 17:41:15.556897 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Mar 17 17:41:15.559814 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Mar 17 17:41:15.561707 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Mar 17 17:41:15.565823 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Mar 17 17:41:15.568848 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Mar 17 17:41:15.581207 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Mar 17 17:41:15.584678 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 17 17:41:15.594723 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Mar 17 17:41:15.599274 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 17 17:41:15.608509 udevadm[1185]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Mar 17 17:41:15.614496 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Mar 17 17:41:15.617476 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Mar 17 17:41:15.626112 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Mar 17 17:41:15.631416 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Mar 17 17:41:15.634149 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Mar 17 17:41:15.636145 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Mar 17 17:41:15.656506 kernel: loop1: detected capacity change from 0 to 140992
Mar 17 17:41:15.659522 systemd-tmpfiles[1192]: ACLs are not supported, ignoring.
Mar 17 17:41:15.659546 systemd-tmpfiles[1192]: ACLs are not supported, ignoring.
Mar 17 17:41:15.666574 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 17 17:41:15.693328 kernel: loop2: detected capacity change from 0 to 138184
Mar 17 17:41:15.740129 kernel: loop3: detected capacity change from 0 to 205544
Mar 17 17:41:15.751103 kernel: loop4: detected capacity change from 0 to 140992
Mar 17 17:41:15.766099 kernel: loop5: detected capacity change from 0 to 138184
Mar 17 17:41:15.781574 (sd-merge)[1198]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Mar 17 17:41:15.782272 (sd-merge)[1198]: Merged extensions into '/usr'.
Mar 17 17:41:15.787337 systemd[1]: Reloading requested from client PID 1174 ('systemd-sysext') (unit systemd-sysext.service)...
Mar 17 17:41:15.787363 systemd[1]: Reloading...
Mar 17 17:41:15.853100 zram_generator::config[1227]: No configuration found.
Mar 17 17:41:15.929259 ldconfig[1169]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Mar 17 17:41:15.980818 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 17 17:41:16.033382 systemd[1]: Reloading finished in 245 ms.
Mar 17 17:41:16.070007 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Mar 17 17:41:16.071819 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Mar 17 17:41:16.089267 systemd[1]: Starting ensure-sysext.service...
Mar 17 17:41:16.092029 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Mar 17 17:41:16.099240 systemd[1]: Reloading requested from client PID 1261 ('systemctl') (unit ensure-sysext.service)...
Mar 17 17:41:16.099260 systemd[1]: Reloading...
Mar 17 17:41:16.118873 systemd-tmpfiles[1262]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Mar 17 17:41:16.119417 systemd-tmpfiles[1262]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Mar 17 17:41:16.120817 systemd-tmpfiles[1262]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Mar 17 17:41:16.121311 systemd-tmpfiles[1262]: ACLs are not supported, ignoring.
Mar 17 17:41:16.121416 systemd-tmpfiles[1262]: ACLs are not supported, ignoring.
Mar 17 17:41:16.128795 systemd-tmpfiles[1262]: Detected autofs mount point /boot during canonicalization of boot.
Mar 17 17:41:16.128811 systemd-tmpfiles[1262]: Skipping /boot
Mar 17 17:41:16.146944 systemd-tmpfiles[1262]: Detected autofs mount point /boot during canonicalization of boot.
Mar 17 17:41:16.147102 systemd-tmpfiles[1262]: Skipping /boot
Mar 17 17:41:16.148147 zram_generator::config[1289]: No configuration found.
Mar 17 17:41:16.284687 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 17 17:41:16.348021 systemd[1]: Reloading finished in 248 ms.
Mar 17 17:41:16.383691 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 17 17:41:16.410138 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Mar 17 17:41:16.413129 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Mar 17 17:41:16.415902 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Mar 17 17:41:16.422044 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Mar 17 17:41:16.425042 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Mar 17 17:41:16.431640 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 17 17:41:16.431812 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 17 17:41:16.434012 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 17 17:41:16.439306 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 17 17:41:16.443992 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 17 17:41:16.445486 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 17 17:41:16.445808 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 17 17:41:16.447046 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 17 17:41:16.447330 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 17 17:41:16.477216 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 17 17:41:16.477699 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 17 17:41:16.483473 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 17 17:41:16.483722 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 17 17:41:16.487645 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 17 17:41:16.487871 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 17 17:41:16.498419 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 17 17:41:16.545293 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 17 17:41:16.547641 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 17 17:41:16.548800 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 17 17:41:16.551205 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Mar 17 17:41:16.552264 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 17 17:41:16.553957 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Mar 17 17:41:16.555783 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Mar 17 17:41:16.557617 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 17 17:41:16.557847 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 17 17:41:16.559719 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 17 17:41:16.559949 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 17 17:41:16.561705 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 17 17:41:16.561912 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 17 17:41:16.576579 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 17 17:41:16.576823 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 17 17:41:16.587464 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 17 17:41:16.591413 augenrules[1367]: No rules
Mar 17 17:41:16.617047 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Mar 17 17:41:16.620366 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 17 17:41:16.625659 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 17 17:41:16.627309 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 17 17:41:16.627499 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 17 17:41:16.629406 systemd[1]: audit-rules.service: Deactivated successfully.
Mar 17 17:41:16.629770 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Mar 17 17:41:16.631562 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Mar 17 17:41:16.633462 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 17 17:41:16.633643 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 17 17:41:16.635306 systemd[1]: modprobe@drm.service: Deactivated successfully.
Mar 17 17:41:16.635486 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Mar 17 17:41:16.637157 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 17 17:41:16.637381 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 17 17:41:16.639434 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Mar 17 17:41:16.649411 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 17 17:41:16.649620 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 17 17:41:16.654678 systemd[1]: Finished ensure-sysext.service.
Mar 17 17:41:16.660534 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Mar 17 17:41:16.660643 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Mar 17 17:41:16.669418 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Mar 17 17:41:16.670774 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Mar 17 17:41:16.733487 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Mar 17 17:41:16.735149 systemd[1]: Reached target time-set.target - System Time Set.
Mar 17 17:41:16.746586 systemd-resolved[1330]: Positive Trust Anchors:
Mar 17 17:41:16.746612 systemd-resolved[1330]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Mar 17 17:41:16.746644 systemd-resolved[1330]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Mar 17 17:41:16.751576 systemd-resolved[1330]: Defaulting to hostname 'linux'.
Mar 17 17:41:16.753718 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Mar 17 17:41:16.755280 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Mar 17 17:41:16.815468 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Mar 17 17:41:16.829469 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 17 17:41:16.832256 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Mar 17 17:41:16.848758 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Mar 17 17:41:16.854659 systemd-udevd[1390]: Using default interface naming scheme 'v255'.
Mar 17 17:41:16.873788 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 17 17:41:16.885309 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Mar 17 17:41:16.914095 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (1397)
Mar 17 17:41:16.949580 systemd-networkd[1400]: lo: Link UP
Mar 17 17:41:16.949903 systemd-networkd[1400]: lo: Gained carrier
Mar 17 17:41:16.952325 systemd-networkd[1400]: Enumeration completed
Mar 17 17:41:16.952471 systemd[1]: Started systemd-networkd.service - Network Configuration.
Mar 17 17:41:16.953179 systemd-networkd[1400]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 17 17:41:16.953251 systemd-networkd[1400]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Mar 17 17:41:16.958772 systemd-networkd[1400]: eth0: Link UP
Mar 17 17:41:16.958830 systemd-networkd[1400]: eth0: Gained carrier
Mar 17 17:41:16.958880 systemd-networkd[1400]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 17 17:41:16.958993 systemd[1]: Reached target network.target - Network.
Mar 17 17:41:16.966324 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Mar 17 17:41:16.967840 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Mar 17 17:41:16.970513 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Mar 17 17:41:16.974218 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Mar 17 17:41:16.975182 systemd-networkd[1400]: eth0: DHCPv4 address 10.0.0.46/16, gateway 10.0.0.1 acquired from 10.0.0.1
Mar 17 17:41:16.976291 systemd-timesyncd[1387]: Network configuration changed, trying to establish connection.
Mar 17 17:41:16.978045 systemd-timesyncd[1387]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Mar 17 17:41:16.978163 systemd-timesyncd[1387]: Initial clock synchronization to Mon 2025-03-17 17:41:17.101206 UTC.
Mar 17 17:41:16.991810 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Mar 17 17:41:17.002192 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
Mar 17 17:41:17.011135 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4
Mar 17 17:41:17.014564 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Mar 17 17:41:17.014938 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Mar 17 17:41:17.015410 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Mar 17 17:41:17.019169 kernel: ACPI: button: Power Button [PWRF]
Mar 17 17:41:17.044105 kernel: mousedev: PS/2 mouse device common for all mice
Mar 17 17:41:17.052327 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 17 17:41:17.137024 kernel: kvm_amd: TSC scaling supported
Mar 17 17:41:17.137125 kernel: kvm_amd: Nested Virtualization enabled
Mar 17 17:41:17.137148 kernel: kvm_amd: Nested Paging enabled
Mar 17 17:41:17.137190 kernel: kvm_amd: LBR virtualization supported
Mar 17 17:41:17.137597 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported
Mar 17 17:41:17.138571 kernel: kvm_amd: Virtual GIF supported
Mar 17 17:41:17.162113 kernel: EDAC MC: Ver: 3.0.0
Mar 17 17:41:17.193407 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Mar 17 17:41:17.201450 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 17 17:41:17.217397 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Mar 17 17:41:17.227686 lvm[1436]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Mar 17 17:41:17.259676 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Mar 17 17:41:17.267454 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Mar 17 17:41:17.268853 systemd[1]: Reached target sysinit.target - System Initialization.
Mar 17 17:41:17.270299 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Mar 17 17:41:17.271924 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Mar 17 17:41:17.273851 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Mar 17 17:41:17.277517 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Mar 17 17:41:17.279106 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Mar 17 17:41:17.280676 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Mar 17 17:41:17.280718 systemd[1]: Reached target paths.target - Path Units.
Mar 17 17:41:17.281828 systemd[1]: Reached target timers.target - Timer Units.
Mar 17 17:41:17.283917 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Mar 17 17:41:17.287238 systemd[1]: Starting docker.socket - Docker Socket for the API...
Mar 17 17:41:17.300368 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Mar 17 17:41:17.303248 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Mar 17 17:41:17.305059 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Mar 17 17:41:17.306461 systemd[1]: Reached target sockets.target - Socket Units.
Mar 17 17:41:17.307639 systemd[1]: Reached target basic.target - Basic System.
Mar 17 17:41:17.308859 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Mar 17 17:41:17.308889 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Mar 17 17:41:17.310111 systemd[1]: Starting containerd.service - containerd container runtime...
Mar 17 17:41:17.312513 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Mar 17 17:41:17.317497 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Mar 17 17:41:17.324286 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Mar 17 17:41:17.324907 jq[1444]: false
Mar 17 17:41:17.325494 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Mar 17 17:41:17.326650 lvm[1441]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Mar 17 17:41:17.329051 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Mar 17 17:41:17.335187 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Mar 17 17:41:17.339294 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Mar 17 17:41:17.343791 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Mar 17 17:41:17.345170 dbus-daemon[1443]: [system] SELinux support is enabled
Mar 17 17:41:17.352279 systemd[1]: Starting systemd-logind.service - User Login Management...
Mar 17 17:41:17.354473 extend-filesystems[1445]: Found loop3
Mar 17 17:41:17.354473 extend-filesystems[1445]: Found loop4
Mar 17 17:41:17.354473 extend-filesystems[1445]: Found loop5
Mar 17 17:41:17.354473 extend-filesystems[1445]: Found sr0
Mar 17 17:41:17.354473 extend-filesystems[1445]: Found vda
Mar 17 17:41:17.354473 extend-filesystems[1445]: Found vda1
Mar 17 17:41:17.354473 extend-filesystems[1445]: Found vda2
Mar 17 17:41:17.354473 extend-filesystems[1445]: Found vda3
Mar 17 17:41:17.354473 extend-filesystems[1445]: Found usr
Mar 17 17:41:17.354473 extend-filesystems[1445]: Found vda4
Mar 17 17:41:17.354473 extend-filesystems[1445]: Found vda6
Mar 17 17:41:17.354473 extend-filesystems[1445]: Found vda7
Mar 17 17:41:17.354473 extend-filesystems[1445]: Found vda9
Mar 17 17:41:17.354473 extend-filesystems[1445]: Checking size of /dev/vda9
Mar 17 17:41:17.437056 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (1398)
Mar 17 17:41:17.437110 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Mar 17 17:41:17.354214 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Mar 17 17:41:17.437338 extend-filesystems[1445]: Resized partition /dev/vda9
Mar 17 17:41:17.355698 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Mar 17 17:41:17.443600 extend-filesystems[1466]: resize2fs 1.47.1 (20-May-2024)
Mar 17 17:41:17.357451 systemd[1]: Starting update-engine.service - Update Engine...
Mar 17 17:41:17.444530 update_engine[1459]: I20250317 17:41:17.415717 1459 main.cc:92] Flatcar Update Engine starting
Mar 17 17:41:17.444530 update_engine[1459]: I20250317 17:41:17.441597 1459 update_check_scheduler.cc:74] Next update check in 10m42s
Mar 17 17:41:17.366591 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Mar 17 17:41:17.404189 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Mar 17 17:41:17.444998 jq[1462]: true
Mar 17 17:41:17.408289 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Mar 17 17:41:17.419727 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Mar 17 17:41:17.419982 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Mar 17 17:41:17.420369 systemd[1]: motdgen.service: Deactivated successfully.
Mar 17 17:41:17.420586 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Mar 17 17:41:17.442268 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Mar 17 17:41:17.442503 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Mar 17 17:41:17.470797 (ntainerd)[1471]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Mar 17 17:41:17.474431 jq[1470]: true
Mar 17 17:41:17.483239 systemd[1]: Started update-engine.service - Update Engine.
Mar 17 17:41:17.511610 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Mar 17 17:41:17.512155 sshd_keygen[1463]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Mar 17 17:41:17.511681 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Mar 17 17:41:17.513152 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Mar 17 17:41:17.513184 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Mar 17 17:41:17.523337 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Mar 17 17:41:17.540022 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Mar 17 17:41:17.543480 systemd[1]: Starting issuegen.service - Generate /run/issue...
Mar 17 17:41:17.552895 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Mar 17 17:41:17.555275 systemd[1]: Started sshd@0-10.0.0.46:22-10.0.0.1:55526.service - OpenSSH per-connection server daemon (10.0.0.1:55526).
Mar 17 17:41:17.556949 systemd[1]: issuegen.service: Deactivated successfully.
Mar 17 17:41:17.557239 systemd[1]: Finished issuegen.service - Generate /run/issue.
Mar 17 17:41:17.568494 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Mar 17 17:41:17.629126 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Mar 17 17:41:17.644969 systemd[1]: Started getty@tty1.service - Getty on tty1.
Mar 17 17:41:17.649164 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Mar 17 17:41:17.652926 systemd[1]: Reached target getty.target - Login Prompts.
Mar 17 17:41:17.653971 systemd-logind[1456]: Watching system buttons on /dev/input/event1 (Power Button)
Mar 17 17:41:17.654018 systemd-logind[1456]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Mar 17 17:41:17.654395 systemd-logind[1456]: New seat seat0.
Mar 17 17:41:17.655688 tar[1468]: linux-amd64/helm
Mar 17 17:41:17.656040 systemd[1]: Started systemd-logind.service - User Login Management.
Mar 17 17:41:17.668250 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Mar 17 17:41:17.668595 locksmithd[1499]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Mar 17 17:41:17.729808 extend-filesystems[1466]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Mar 17 17:41:17.729808 extend-filesystems[1466]: old_desc_blocks = 1, new_desc_blocks = 1
Mar 17 17:41:17.729808 extend-filesystems[1466]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Mar 17 17:41:17.735333 extend-filesystems[1445]: Resized filesystem in /dev/vda9
Mar 17 17:41:17.736891 sshd[1509]: Connection closed by authenticating user core 10.0.0.1 port 55526 [preauth]
Mar 17 17:41:17.730717 systemd[1]: extend-filesystems.service: Deactivated successfully.
Mar 17 17:41:17.731023 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Mar 17 17:41:17.738230 systemd[1]: sshd@0-10.0.0.46:22-10.0.0.1:55526.service: Deactivated successfully.
Mar 17 17:41:17.754551 bash[1495]: Updated "/home/core/.ssh/authorized_keys"
Mar 17 17:41:17.755854 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Mar 17 17:41:17.759602 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Mar 17 17:41:17.844949 containerd[1471]: time="2025-03-17T17:41:17.844832094Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23
Mar 17 17:41:17.871721 containerd[1471]: time="2025-03-17T17:41:17.871635680Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Mar 17 17:41:17.874263 containerd[1471]: time="2025-03-17T17:41:17.874187160Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.83-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Mar 17 17:41:17.874263 containerd[1471]: time="2025-03-17T17:41:17.874243517Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Mar 17 17:41:17.874263 containerd[1471]: time="2025-03-17T17:41:17.874270160Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Mar 17 17:41:17.874560 containerd[1471]: time="2025-03-17T17:41:17.874522561Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Mar 17 17:41:17.874560 containerd[1471]: time="2025-03-17T17:41:17.874549599Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Mar 17 17:41:17.874660 containerd[1471]: time="2025-03-17T17:41:17.874633304Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Mar 17 17:41:17.874660 containerd[1471]: time="2025-03-17T17:41:17.874651165Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Mar 17 17:41:17.874885 containerd[1471]: time="2025-03-17T17:41:17.874857498Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Mar 17 17:41:17.874885 containerd[1471]: time="2025-03-17T17:41:17.874875094Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Mar 17 17:41:17.874968 containerd[1471]: time="2025-03-17T17:41:17.874900264Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Mar 17 17:41:17.874968 containerd[1471]: time="2025-03-17T17:41:17.874912187Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Mar 17 17:41:17.875053 containerd[1471]: time="2025-03-17T17:41:17.875027424Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Mar 17 17:41:17.875451 containerd[1471]: time="2025-03-17T17:41:17.875379927Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Mar 17 17:41:17.875576 containerd[1471]: time="2025-03-17T17:41:17.875547572Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Mar 17 17:41:17.875576 containerd[1471]: time="2025-03-17T17:41:17.875569722Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Mar 17 17:41:17.875715 containerd[1471]: time="2025-03-17T17:41:17.875688653Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Mar 17 17:41:17.875795 containerd[1471]: time="2025-03-17T17:41:17.875771037Z" level=info msg="metadata content store policy set" policy=shared
Mar 17 17:41:17.886112 containerd[1471]: time="2025-03-17T17:41:17.886042787Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Mar 17 17:41:17.886185 containerd[1471]: time="2025-03-17T17:41:17.886129743Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Mar 17 17:41:17.886185 containerd[1471]: time="2025-03-17T17:41:17.886147936Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Mar 17 17:41:17.886185 containerd[1471]: time="2025-03-17T17:41:17.886164888Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Mar 17 17:41:17.886185 containerd[1471]: time="2025-03-17T17:41:17.886179678Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Mar 17 17:41:17.886388 containerd[1471]: time="2025-03-17T17:41:17.886368343Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Mar 17 17:41:17.886640 containerd[1471]: time="2025-03-17T17:41:17.886611032Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Mar 17 17:41:17.886757 containerd[1471]: time="2025-03-17T17:41:17.886737131Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Mar 17 17:41:17.886801 containerd[1471]: time="2025-03-17T17:41:17.886758009Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Mar 17 17:41:17.886801 containerd[1471]: time="2025-03-17T17:41:17.886771660Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Mar 17 17:41:17.886801 containerd[1471]: time="2025-03-17T17:41:17.886784724Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Mar 17 17:41:17.886801 containerd[1471]: time="2025-03-17T17:41:17.886798041Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Mar 17 17:41:17.886872 containerd[1471]: time="2025-03-17T17:41:17.886810226Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Mar 17 17:41:17.886872 containerd[1471]: time="2025-03-17T17:41:17.886825219Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Mar 17 17:41:17.886872 containerd[1471]: time="2025-03-17T17:41:17.886838253Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Mar 17 17:41:17.886872 containerd[1471]: time="2025-03-17T17:41:17.886850176Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Mar 17 17:41:17.886872 containerd[1471]: time="2025-03-17T17:41:17.886865634Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Mar 17 17:41:17.886963 containerd[1471]: time="2025-03-17T17:41:17.886877748Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Mar 17 17:41:17.886963 containerd[1471]: time="2025-03-17T17:41:17.886904564Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Mar 17 17:41:17.886963 containerd[1471]: time="2025-03-17T17:41:17.886922908Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Mar 17 17:41:17.886963 containerd[1471]: time="2025-03-17T17:41:17.886939971Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Mar 17 17:41:17.886963 containerd[1471]: time="2025-03-17T17:41:17.886952661Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Mar 17 17:41:17.886963 containerd[1471]: time="2025-03-17T17:41:17.886964928Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Mar 17 17:41:17.887099 containerd[1471]: time="2025-03-17T17:41:17.886979194Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Mar 17 17:41:17.887099 containerd[1471]: time="2025-03-17T17:41:17.886990986Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Mar 17 17:41:17.887099 containerd[1471]: time="2025-03-17T17:41:17.887003515Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Mar 17 17:41:17.887099 containerd[1471]: time="2025-03-17T17:41:17.887015872Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Mar 17 17:41:17.887099 containerd[1471]: time="2025-03-17T17:41:17.887042577Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Mar 17 17:41:17.887099 containerd[1471]: time="2025-03-17T17:41:17.887054278Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Mar 17 17:41:17.887099 containerd[1471]: time="2025-03-17T17:41:17.887066888Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Mar 17 17:41:17.887099 containerd[1471]: time="2025-03-17T17:41:17.887097943Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Mar 17 17:41:17.887252 containerd[1471]: time="2025-03-17T17:41:17.887116268Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Mar 17 17:41:17.887252 containerd[1471]: time="2025-03-17T17:41:17.887141598Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Mar 17 17:41:17.887252 containerd[1471]: time="2025-03-17T17:41:17.887154410Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Mar 17 17:41:17.887252 containerd[1471]: time="2025-03-17T17:41:17.887165607Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Mar 17 17:41:17.887252 containerd[1471]: time="2025-03-17T17:41:17.887216410Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Mar 17 17:41:17.887252 containerd[1471]: time="2025-03-17T17:41:17.887234643Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Mar 17 17:41:17.887252 containerd[1471]: time="2025-03-17T17:41:17.887245820Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Mar 17 17:41:17.887383 containerd[1471]: time="2025-03-17T17:41:17.887256764Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Mar 17 17:41:17.887383 containerd[1471]: time="2025-03-17T17:41:17.887267375Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Mar 17 17:41:17.887383 containerd[1471]: time="2025-03-17T17:41:17.887279742Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Mar 17 17:41:17.887383 containerd[1471]: time="2025-03-17T17:41:17.887289566Z" level=info msg="NRI interface is disabled by configuration."
Mar 17 17:41:17.887383 containerd[1471]: time="2025-03-17T17:41:17.887299016Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Mar 17 17:41:17.887610 containerd[1471]: time="2025-03-17T17:41:17.887566248Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false
UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Mar 17 17:41:17.887767 containerd[1471]: time="2025-03-17T17:41:17.887611993Z" level=info msg="Connect containerd service" Mar 17 17:41:17.887767 containerd[1471]: time="2025-03-17T17:41:17.887641443Z" level=info msg="using legacy CRI server" Mar 17 17:41:17.887767 containerd[1471]: time="2025-03-17T17:41:17.887648924Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Mar 17 17:41:17.887767 containerd[1471]: time="2025-03-17T17:41:17.887754216Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Mar 17 17:41:17.889881 containerd[1471]: time="2025-03-17T17:41:17.889853829Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Mar 17 17:41:17.890100 containerd[1471]: time="2025-03-17T17:41:17.890019100Z" level=info msg="Start subscribing containerd event" Mar 17 17:41:17.890219 containerd[1471]: time="2025-03-17T17:41:17.890169147Z" level=info msg="Start recovering state" Mar 17 17:41:17.890313 containerd[1471]: time="2025-03-17T17:41:17.890263263Z" level=info msg=serving... 
address=/run/containerd/containerd.sock.ttrpc Mar 17 17:41:17.890351 containerd[1471]: time="2025-03-17T17:41:17.890331593Z" level=info msg=serving... address=/run/containerd/containerd.sock Mar 17 17:41:17.890421 containerd[1471]: time="2025-03-17T17:41:17.890399821Z" level=info msg="Start event monitor" Mar 17 17:41:17.890466 containerd[1471]: time="2025-03-17T17:41:17.890426304Z" level=info msg="Start snapshots syncer" Mar 17 17:41:17.890466 containerd[1471]: time="2025-03-17T17:41:17.890437622Z" level=info msg="Start cni network conf syncer for default" Mar 17 17:41:17.890466 containerd[1471]: time="2025-03-17T17:41:17.890445930Z" level=info msg="Start streaming server" Mar 17 17:41:17.890558 containerd[1471]: time="2025-03-17T17:41:17.890543831Z" level=info msg="containerd successfully booted in 0.047529s" Mar 17 17:41:17.892210 systemd[1]: Started containerd.service - containerd container runtime. Mar 17 17:41:18.083840 tar[1468]: linux-amd64/LICENSE Mar 17 17:41:18.083966 tar[1468]: linux-amd64/README.md Mar 17 17:41:18.105388 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Mar 17 17:41:18.343688 systemd-networkd[1400]: eth0: Gained IPv6LL Mar 17 17:41:18.347522 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Mar 17 17:41:18.349730 systemd[1]: Reached target network-online.target - Network is Online. Mar 17 17:41:18.360500 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Mar 17 17:41:18.363445 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 17 17:41:18.366525 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Mar 17 17:41:18.392410 systemd[1]: coreos-metadata.service: Deactivated successfully. Mar 17 17:41:18.392985 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. 
Mar 17 17:41:18.394830 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Mar 17 17:41:18.397169 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Mar 17 17:41:19.382995 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 17 17:41:19.385101 systemd[1]: Reached target multi-user.target - Multi-User System. Mar 17 17:41:19.387171 systemd[1]: Startup finished in 861ms (kernel) + 6.819s (initrd) + 4.876s (userspace) = 12.558s. Mar 17 17:41:19.410869 (kubelet)[1562]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 17 17:41:20.120333 kubelet[1562]: E0317 17:41:20.120258 1562 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 17 17:41:20.125679 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 17 17:41:20.125946 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 17 17:41:20.126484 systemd[1]: kubelet.service: Consumed 1.589s CPU time. Mar 17 17:41:27.807033 systemd[1]: Started sshd@1-10.0.0.46:22-10.0.0.1:41350.service - OpenSSH per-connection server daemon (10.0.0.1:41350). Mar 17 17:41:27.850827 sshd[1575]: Accepted publickey for core from 10.0.0.1 port 41350 ssh2: RSA SHA256:pvoNHoTmHcKIZ8E4rah4Xh4kuY0L81ONuagOsU2gN/o Mar 17 17:41:27.853154 sshd-session[1575]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:41:27.861551 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Mar 17 17:41:27.874511 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... 
Mar 17 17:41:27.876754 systemd-logind[1456]: New session 1 of user core. Mar 17 17:41:27.891974 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Mar 17 17:41:27.901349 systemd[1]: Starting user@500.service - User Manager for UID 500... Mar 17 17:41:27.904653 (systemd)[1579]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Mar 17 17:41:28.027407 systemd[1579]: Queued start job for default target default.target. Mar 17 17:41:28.037692 systemd[1579]: Created slice app.slice - User Application Slice. Mar 17 17:41:28.037721 systemd[1579]: Reached target paths.target - Paths. Mar 17 17:41:28.037736 systemd[1579]: Reached target timers.target - Timers. Mar 17 17:41:28.039658 systemd[1579]: Starting dbus.socket - D-Bus User Message Bus Socket... Mar 17 17:41:28.055224 systemd[1579]: Listening on dbus.socket - D-Bus User Message Bus Socket. Mar 17 17:41:28.055362 systemd[1579]: Reached target sockets.target - Sockets. Mar 17 17:41:28.055381 systemd[1579]: Reached target basic.target - Basic System. Mar 17 17:41:28.055420 systemd[1579]: Reached target default.target - Main User Target. Mar 17 17:41:28.055455 systemd[1579]: Startup finished in 143ms. Mar 17 17:41:28.056229 systemd[1]: Started user@500.service - User Manager for UID 500. Mar 17 17:41:28.058390 systemd[1]: Started session-1.scope - Session 1 of User core. Mar 17 17:41:28.120996 systemd[1]: Started sshd@2-10.0.0.46:22-10.0.0.1:41360.service - OpenSSH per-connection server daemon (10.0.0.1:41360). Mar 17 17:41:28.176454 sshd[1590]: Accepted publickey for core from 10.0.0.1 port 41360 ssh2: RSA SHA256:pvoNHoTmHcKIZ8E4rah4Xh4kuY0L81ONuagOsU2gN/o Mar 17 17:41:28.177944 sshd-session[1590]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:41:28.181995 systemd-logind[1456]: New session 2 of user core. Mar 17 17:41:28.194266 systemd[1]: Started session-2.scope - Session 2 of User core. 
Mar 17 17:41:28.249169 sshd[1592]: Connection closed by 10.0.0.1 port 41360 Mar 17 17:41:28.249589 sshd-session[1590]: pam_unix(sshd:session): session closed for user core Mar 17 17:41:28.266047 systemd[1]: sshd@2-10.0.0.46:22-10.0.0.1:41360.service: Deactivated successfully. Mar 17 17:41:28.267902 systemd[1]: session-2.scope: Deactivated successfully. Mar 17 17:41:28.269634 systemd-logind[1456]: Session 2 logged out. Waiting for processes to exit. Mar 17 17:41:28.270871 systemd[1]: Started sshd@3-10.0.0.46:22-10.0.0.1:41374.service - OpenSSH per-connection server daemon (10.0.0.1:41374). Mar 17 17:41:28.271791 systemd-logind[1456]: Removed session 2. Mar 17 17:41:28.312850 sshd[1597]: Accepted publickey for core from 10.0.0.1 port 41374 ssh2: RSA SHA256:pvoNHoTmHcKIZ8E4rah4Xh4kuY0L81ONuagOsU2gN/o Mar 17 17:41:28.314347 sshd-session[1597]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:41:28.318516 systemd-logind[1456]: New session 3 of user core. Mar 17 17:41:28.329234 systemd[1]: Started session-3.scope - Session 3 of User core. Mar 17 17:41:28.378886 sshd[1599]: Connection closed by 10.0.0.1 port 41374 Mar 17 17:41:28.379370 sshd-session[1597]: pam_unix(sshd:session): session closed for user core Mar 17 17:41:28.396932 systemd[1]: sshd@3-10.0.0.46:22-10.0.0.1:41374.service: Deactivated successfully. Mar 17 17:41:28.398866 systemd[1]: session-3.scope: Deactivated successfully. Mar 17 17:41:28.400553 systemd-logind[1456]: Session 3 logged out. Waiting for processes to exit. Mar 17 17:41:28.401931 systemd[1]: Started sshd@4-10.0.0.46:22-10.0.0.1:41380.service - OpenSSH per-connection server daemon (10.0.0.1:41380). Mar 17 17:41:28.402827 systemd-logind[1456]: Removed session 3. 
Mar 17 17:41:28.445613 sshd[1604]: Accepted publickey for core from 10.0.0.1 port 41380 ssh2: RSA SHA256:pvoNHoTmHcKIZ8E4rah4Xh4kuY0L81ONuagOsU2gN/o Mar 17 17:41:28.447163 sshd-session[1604]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:41:28.451232 systemd-logind[1456]: New session 4 of user core. Mar 17 17:41:28.461206 systemd[1]: Started session-4.scope - Session 4 of User core. Mar 17 17:41:28.515838 sshd[1606]: Connection closed by 10.0.0.1 port 41380 Mar 17 17:41:28.516318 sshd-session[1604]: pam_unix(sshd:session): session closed for user core Mar 17 17:41:28.528683 systemd[1]: sshd@4-10.0.0.46:22-10.0.0.1:41380.service: Deactivated successfully. Mar 17 17:41:28.530515 systemd[1]: session-4.scope: Deactivated successfully. Mar 17 17:41:28.532014 systemd-logind[1456]: Session 4 logged out. Waiting for processes to exit. Mar 17 17:41:28.546396 systemd[1]: Started sshd@5-10.0.0.46:22-10.0.0.1:41396.service - OpenSSH per-connection server daemon (10.0.0.1:41396). Mar 17 17:41:28.547518 systemd-logind[1456]: Removed session 4. Mar 17 17:41:28.582723 sshd[1611]: Accepted publickey for core from 10.0.0.1 port 41396 ssh2: RSA SHA256:pvoNHoTmHcKIZ8E4rah4Xh4kuY0L81ONuagOsU2gN/o Mar 17 17:41:28.584259 sshd-session[1611]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:41:28.588655 systemd-logind[1456]: New session 5 of user core. Mar 17 17:41:28.599233 systemd[1]: Started session-5.scope - Session 5 of User core. 
Mar 17 17:41:28.656588 sudo[1614]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Mar 17 17:41:28.656970 sudo[1614]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 17 17:41:28.674255 sudo[1614]: pam_unix(sudo:session): session closed for user root Mar 17 17:41:28.675655 sshd[1613]: Connection closed by 10.0.0.1 port 41396 Mar 17 17:41:28.676063 sshd-session[1611]: pam_unix(sshd:session): session closed for user core Mar 17 17:41:28.691423 systemd[1]: sshd@5-10.0.0.46:22-10.0.0.1:41396.service: Deactivated successfully. Mar 17 17:41:28.693617 systemd[1]: session-5.scope: Deactivated successfully. Mar 17 17:41:28.695451 systemd-logind[1456]: Session 5 logged out. Waiting for processes to exit. Mar 17 17:41:28.706386 systemd[1]: Started sshd@6-10.0.0.46:22-10.0.0.1:41404.service - OpenSSH per-connection server daemon (10.0.0.1:41404). Mar 17 17:41:28.707432 systemd-logind[1456]: Removed session 5. Mar 17 17:41:28.746803 sshd[1619]: Accepted publickey for core from 10.0.0.1 port 41404 ssh2: RSA SHA256:pvoNHoTmHcKIZ8E4rah4Xh4kuY0L81ONuagOsU2gN/o Mar 17 17:41:28.748336 sshd-session[1619]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:41:28.752206 systemd-logind[1456]: New session 6 of user core. Mar 17 17:41:28.763299 systemd[1]: Started session-6.scope - Session 6 of User core. 
Mar 17 17:41:28.817581 sudo[1623]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Mar 17 17:41:28.817933 sudo[1623]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 17 17:41:28.822170 sudo[1623]: pam_unix(sudo:session): session closed for user root Mar 17 17:41:28.828513 sudo[1622]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Mar 17 17:41:28.828857 sudo[1622]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 17 17:41:28.852571 systemd[1]: Starting audit-rules.service - Load Audit Rules... Mar 17 17:41:28.883183 augenrules[1645]: No rules Mar 17 17:41:28.884230 systemd[1]: audit-rules.service: Deactivated successfully. Mar 17 17:41:28.884543 systemd[1]: Finished audit-rules.service - Load Audit Rules. Mar 17 17:41:28.885761 sudo[1622]: pam_unix(sudo:session): session closed for user root Mar 17 17:41:28.887380 sshd[1621]: Connection closed by 10.0.0.1 port 41404 Mar 17 17:41:28.887699 sshd-session[1619]: pam_unix(sshd:session): session closed for user core Mar 17 17:41:28.899136 systemd[1]: sshd@6-10.0.0.46:22-10.0.0.1:41404.service: Deactivated successfully. Mar 17 17:41:28.900715 systemd[1]: session-6.scope: Deactivated successfully. Mar 17 17:41:28.902186 systemd-logind[1456]: Session 6 logged out. Waiting for processes to exit. Mar 17 17:41:28.913292 systemd[1]: Started sshd@7-10.0.0.46:22-10.0.0.1:41420.service - OpenSSH per-connection server daemon (10.0.0.1:41420). Mar 17 17:41:28.914118 systemd-logind[1456]: Removed session 6. Mar 17 17:41:28.954822 sshd[1653]: Accepted publickey for core from 10.0.0.1 port 41420 ssh2: RSA SHA256:pvoNHoTmHcKIZ8E4rah4Xh4kuY0L81ONuagOsU2gN/o Mar 17 17:41:28.957184 sshd-session[1653]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:41:28.961329 systemd-logind[1456]: New session 7 of user core. 
Mar 17 17:41:28.971189 systemd[1]: Started session-7.scope - Session 7 of User core. Mar 17 17:41:29.028786 sudo[1656]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Mar 17 17:41:29.029329 sudo[1656]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 17 17:41:29.304486 systemd[1]: Starting docker.service - Docker Application Container Engine... Mar 17 17:41:29.304668 (dockerd)[1676]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Mar 17 17:41:29.553211 dockerd[1676]: time="2025-03-17T17:41:29.553128170Z" level=info msg="Starting up" Mar 17 17:41:29.631802 systemd[1]: var-lib-docker-metacopy\x2dcheck642396024-merged.mount: Deactivated successfully. Mar 17 17:41:29.657670 dockerd[1676]: time="2025-03-17T17:41:29.657621072Z" level=info msg="Loading containers: start." Mar 17 17:41:29.844091 kernel: Initializing XFRM netlink socket Mar 17 17:41:29.934041 systemd-networkd[1400]: docker0: Link UP Mar 17 17:41:29.975873 dockerd[1676]: time="2025-03-17T17:41:29.975833366Z" level=info msg="Loading containers: done." Mar 17 17:41:29.991631 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck240623959-merged.mount: Deactivated successfully. 
Mar 17 17:41:29.994968 dockerd[1676]: time="2025-03-17T17:41:29.994909659Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Mar 17 17:41:29.995105 dockerd[1676]: time="2025-03-17T17:41:29.995015049Z" level=info msg="Docker daemon" commit=8b539b8df24032dabeaaa099cf1d0535ef0286a3 containerd-snapshotter=false storage-driver=overlay2 version=27.2.1 Mar 17 17:41:29.995168 dockerd[1676]: time="2025-03-17T17:41:29.995140970Z" level=info msg="Daemon has completed initialization" Mar 17 17:41:30.034082 dockerd[1676]: time="2025-03-17T17:41:30.034002784Z" level=info msg="API listen on /run/docker.sock" Mar 17 17:41:30.035438 systemd[1]: Started docker.service - Docker Application Container Engine. Mar 17 17:41:30.376182 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Mar 17 17:41:30.386246 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 17 17:41:30.533033 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 17 17:41:30.537881 (kubelet)[1881]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 17 17:41:30.579426 kubelet[1881]: E0317 17:41:30.579311 1881 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 17 17:41:30.586062 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 17 17:41:30.586295 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Mar 17 17:41:30.801802 containerd[1471]: time="2025-03-17T17:41:30.801680065Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.7\"" Mar 17 17:41:31.984304 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount649084873.mount: Deactivated successfully. Mar 17 17:41:33.336436 containerd[1471]: time="2025-03-17T17:41:33.336362318Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:41:33.337142 containerd[1471]: time="2025-03-17T17:41:33.337096742Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.7: active requests=0, bytes read=27959268" Mar 17 17:41:33.338369 containerd[1471]: time="2025-03-17T17:41:33.338332587Z" level=info msg="ImageCreate event name:\"sha256:f084bc047a8cf7c8484d47c51e70e646dde3977d916f282feb99207b7b9241af\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:41:33.343683 containerd[1471]: time="2025-03-17T17:41:33.343625536Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:22c19cc70fe5806d0a2cb28a6b6b33fd34e6f9e50616bdf6d53649bcfafbc277\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:41:33.344697 containerd[1471]: time="2025-03-17T17:41:33.344653554Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.7\" with image id \"sha256:f084bc047a8cf7c8484d47c51e70e646dde3977d916f282feb99207b7b9241af\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.7\", repo digest \"registry.k8s.io/kube-apiserver@sha256:22c19cc70fe5806d0a2cb28a6b6b33fd34e6f9e50616bdf6d53649bcfafbc277\", size \"27956068\" in 2.5429303s" Mar 17 17:41:33.344697 containerd[1471]: time="2025-03-17T17:41:33.344688807Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.7\" returns image reference \"sha256:f084bc047a8cf7c8484d47c51e70e646dde3977d916f282feb99207b7b9241af\"" Mar 17 17:41:33.346199 containerd[1471]: 
time="2025-03-17T17:41:33.346138275Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.7\"" Mar 17 17:41:35.759996 containerd[1471]: time="2025-03-17T17:41:35.759905853Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:41:35.761040 containerd[1471]: time="2025-03-17T17:41:35.760942980Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.7: active requests=0, bytes read=24713776" Mar 17 17:41:35.762338 containerd[1471]: time="2025-03-17T17:41:35.762269504Z" level=info msg="ImageCreate event name:\"sha256:652dcad615a9a0c252c253860d5b5b7bfebd3efe159dc033a8555bc15a6d1985\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:41:35.765432 containerd[1471]: time="2025-03-17T17:41:35.765389410Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:6abe7a0accecf29db6ebab18a10f844678ffed693d79e2e51a18a6f2b4530cbb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:41:35.766725 containerd[1471]: time="2025-03-17T17:41:35.766680925Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.7\" with image id \"sha256:652dcad615a9a0c252c253860d5b5b7bfebd3efe159dc033a8555bc15a6d1985\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.7\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:6abe7a0accecf29db6ebab18a10f844678ffed693d79e2e51a18a6f2b4530cbb\", size \"26201384\" in 2.420507352s" Mar 17 17:41:35.766792 containerd[1471]: time="2025-03-17T17:41:35.766723869Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.7\" returns image reference \"sha256:652dcad615a9a0c252c253860d5b5b7bfebd3efe159dc033a8555bc15a6d1985\"" Mar 17 17:41:35.767649 containerd[1471]: time="2025-03-17T17:41:35.767608122Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.7\"" Mar 17 
17:41:38.304684 containerd[1471]: time="2025-03-17T17:41:38.303829773Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:41:38.317162 containerd[1471]: time="2025-03-17T17:41:38.317029065Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.7: active requests=0, bytes read=18780368" Mar 17 17:41:38.349429 containerd[1471]: time="2025-03-17T17:41:38.349355145Z" level=info msg="ImageCreate event name:\"sha256:7f1f6a63d8aa14cf61d0045e912ad312b4ade24637cecccc933b163582eae68c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:41:38.363785 containerd[1471]: time="2025-03-17T17:41:38.363632383Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:fb80249bcb77ee72b1c9fa5b70bc28a83ed107c9ca71957841ad91db379963bf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:41:38.364836 containerd[1471]: time="2025-03-17T17:41:38.364772950Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.7\" with image id \"sha256:7f1f6a63d8aa14cf61d0045e912ad312b4ade24637cecccc933b163582eae68c\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.7\", repo digest \"registry.k8s.io/kube-scheduler@sha256:fb80249bcb77ee72b1c9fa5b70bc28a83ed107c9ca71957841ad91db379963bf\", size \"20267994\" in 2.597129304s" Mar 17 17:41:38.364836 containerd[1471]: time="2025-03-17T17:41:38.364822130Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.7\" returns image reference \"sha256:7f1f6a63d8aa14cf61d0045e912ad312b4ade24637cecccc933b163582eae68c\"" Mar 17 17:41:38.365490 containerd[1471]: time="2025-03-17T17:41:38.365453927Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.7\"" Mar 17 17:41:40.838137 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Mar 17 17:41:40.853587 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Mar 17 17:41:41.063784 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 17 17:41:41.071227 (kubelet)[1962]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 17 17:41:41.267725 kubelet[1962]: E0317 17:41:41.267505 1962 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 17 17:41:41.272524 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 17 17:41:41.272780 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 17 17:41:43.005377 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3791506577.mount: Deactivated successfully. Mar 17 17:41:43.416777 containerd[1471]: time="2025-03-17T17:41:43.416631491Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:41:43.417792 containerd[1471]: time="2025-03-17T17:41:43.417733128Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.7: active requests=0, bytes read=30354630" Mar 17 17:41:43.419334 containerd[1471]: time="2025-03-17T17:41:43.419290392Z" level=info msg="ImageCreate event name:\"sha256:dcfc039c372ea285997a302d60e58a75b80905b4c4dba969993b9b22e8ac66d1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:41:43.421545 containerd[1471]: time="2025-03-17T17:41:43.421505826Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:e5839270c96c3ad1bea1dce4935126d3281297527f3655408d2970aa4b5cf178\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:41:43.422195 containerd[1471]: time="2025-03-17T17:41:43.422152888Z" level=info 
msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.7\" with image id \"sha256:dcfc039c372ea285997a302d60e58a75b80905b4c4dba969993b9b22e8ac66d1\", repo tag \"registry.k8s.io/kube-proxy:v1.31.7\", repo digest \"registry.k8s.io/kube-proxy@sha256:e5839270c96c3ad1bea1dce4935126d3281297527f3655408d2970aa4b5cf178\", size \"30353649\" in 5.056665929s" Mar 17 17:41:43.422195 containerd[1471]: time="2025-03-17T17:41:43.422187232Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.7\" returns image reference \"sha256:dcfc039c372ea285997a302d60e58a75b80905b4c4dba969993b9b22e8ac66d1\"" Mar 17 17:41:43.422658 containerd[1471]: time="2025-03-17T17:41:43.422634933Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Mar 17 17:41:43.967826 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2477053852.mount: Deactivated successfully. Mar 17 17:41:45.362487 containerd[1471]: time="2025-03-17T17:41:45.362398292Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:41:45.367034 containerd[1471]: time="2025-03-17T17:41:45.366933377Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185761" Mar 17 17:41:45.370220 containerd[1471]: time="2025-03-17T17:41:45.370168013Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:41:45.375846 containerd[1471]: time="2025-03-17T17:41:45.375734643Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:41:45.377293 containerd[1471]: time="2025-03-17T17:41:45.377205337Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id 
\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 1.954530183s" Mar 17 17:41:45.377293 containerd[1471]: time="2025-03-17T17:41:45.377284595Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Mar 17 17:41:45.378000 containerd[1471]: time="2025-03-17T17:41:45.377963344Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Mar 17 17:41:46.000444 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount774120730.mount: Deactivated successfully. Mar 17 17:41:46.010629 containerd[1471]: time="2025-03-17T17:41:46.010555026Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:41:46.011711 containerd[1471]: time="2025-03-17T17:41:46.011649923Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Mar 17 17:41:46.013278 containerd[1471]: time="2025-03-17T17:41:46.013215441Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:41:46.016978 containerd[1471]: time="2025-03-17T17:41:46.016929748Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:41:46.017744 containerd[1471]: time="2025-03-17T17:41:46.017692400Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo 
digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 639.680976ms" Mar 17 17:41:46.017744 containerd[1471]: time="2025-03-17T17:41:46.017728751Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Mar 17 17:41:46.018312 containerd[1471]: time="2025-03-17T17:41:46.018282274Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Mar 17 17:41:46.962521 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3079064584.mount: Deactivated successfully. Mar 17 17:41:48.831299 containerd[1471]: time="2025-03-17T17:41:48.831213284Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:41:48.832190 containerd[1471]: time="2025-03-17T17:41:48.832116617Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=56779973" Mar 17 17:41:48.834281 containerd[1471]: time="2025-03-17T17:41:48.834225244Z" level=info msg="ImageCreate event name:\"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:41:48.838820 containerd[1471]: time="2025-03-17T17:41:48.838765508Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:41:48.840477 containerd[1471]: time="2025-03-17T17:41:48.840437582Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"56909194\" in 2.82212321s" Mar 17 
17:41:48.840528 containerd[1471]: time="2025-03-17T17:41:48.840475904Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\"" Mar 17 17:41:51.278036 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Mar 17 17:41:51.289362 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 17 17:41:51.309361 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Mar 17 17:41:51.309492 systemd[1]: kubelet.service: Failed with result 'signal'. Mar 17 17:41:51.309883 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 17 17:41:51.325381 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 17 17:41:51.350874 systemd[1]: Reloading requested from client PID 2115 ('systemctl') (unit session-7.scope)... Mar 17 17:41:51.350890 systemd[1]: Reloading... Mar 17 17:41:51.433110 zram_generator::config[2154]: No configuration found. Mar 17 17:41:52.065023 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 17 17:41:52.146372 systemd[1]: Reloading finished in 795 ms. Mar 17 17:41:52.213260 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Mar 17 17:41:52.219093 systemd[1]: kubelet.service: Deactivated successfully. Mar 17 17:41:52.219463 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 17 17:41:52.237466 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 17 17:41:52.396796 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Mar 17 17:41:52.403123 (kubelet)[2204]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Mar 17 17:41:52.451982 kubelet[2204]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 17 17:41:52.451982 kubelet[2204]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Mar 17 17:41:52.451982 kubelet[2204]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 17 17:41:52.452443 kubelet[2204]: I0317 17:41:52.452041 2204 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Mar 17 17:41:52.667761 kubelet[2204]: I0317 17:41:52.667587 2204 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" Mar 17 17:41:52.667761 kubelet[2204]: I0317 17:41:52.667621 2204 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 17 17:41:52.667937 kubelet[2204]: I0317 17:41:52.667870 2204 server.go:929] "Client rotation is on, will bootstrap in background" Mar 17 17:41:52.691878 kubelet[2204]: E0317 17:41:52.691818 2204 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.46:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.46:6443: connect: connection refused" logger="UnhandledError" Mar 17 17:41:52.692338 kubelet[2204]: I0317 
17:41:52.692231 2204 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 17 17:41:52.704149 kubelet[2204]: E0317 17:41:52.704093 2204 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Mar 17 17:41:52.704149 kubelet[2204]: I0317 17:41:52.704138 2204 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Mar 17 17:41:52.712671 kubelet[2204]: I0317 17:41:52.712606 2204 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Mar 17 17:41:52.713896 kubelet[2204]: I0317 17:41:52.713856 2204 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Mar 17 17:41:52.714199 kubelet[2204]: I0317 17:41:52.714145 2204 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 17 17:41:52.714406 kubelet[2204]: I0317 17:41:52.714189 2204 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" 
nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Mar 17 17:41:52.714526 kubelet[2204]: I0317 17:41:52.714415 2204 topology_manager.go:138] "Creating topology manager with none policy" Mar 17 17:41:52.714526 kubelet[2204]: I0317 17:41:52.714429 2204 container_manager_linux.go:300] "Creating device plugin manager" Mar 17 17:41:52.714601 kubelet[2204]: I0317 17:41:52.714581 2204 state_mem.go:36] "Initialized new in-memory state store" Mar 17 17:41:52.716607 kubelet[2204]: I0317 17:41:52.716559 2204 kubelet.go:408] "Attempting 
to sync node with API server" Mar 17 17:41:52.716607 kubelet[2204]: I0317 17:41:52.716588 2204 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 17 17:41:52.716705 kubelet[2204]: I0317 17:41:52.716635 2204 kubelet.go:314] "Adding apiserver pod source" Mar 17 17:41:52.716705 kubelet[2204]: I0317 17:41:52.716656 2204 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 17 17:41:52.721420 kubelet[2204]: W0317 17:41:52.721239 2204 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.46:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.46:6443: connect: connection refused Mar 17 17:41:52.721420 kubelet[2204]: E0317 17:41:52.721302 2204 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.46:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.46:6443: connect: connection refused" logger="UnhandledError" Mar 17 17:41:52.722499 kubelet[2204]: W0317 17:41:52.722452 2204 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.46:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.46:6443: connect: connection refused Mar 17 17:41:52.722550 kubelet[2204]: E0317 17:41:52.722496 2204 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.46:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.46:6443: connect: connection refused" logger="UnhandledError" Mar 17 17:41:52.725419 kubelet[2204]: I0317 17:41:52.725386 2204 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" 
version="v1.7.23" apiVersion="v1" Mar 17 17:41:52.727822 kubelet[2204]: I0317 17:41:52.727792 2204 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Mar 17 17:41:52.728435 kubelet[2204]: W0317 17:41:52.728401 2204 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Mar 17 17:41:52.729300 kubelet[2204]: I0317 17:41:52.729110 2204 server.go:1269] "Started kubelet" Mar 17 17:41:52.729722 kubelet[2204]: I0317 17:41:52.729536 2204 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 17 17:41:52.730819 kubelet[2204]: I0317 17:41:52.730000 2204 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Mar 17 17:41:52.730819 kubelet[2204]: I0317 17:41:52.730021 2204 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 17 17:41:52.730819 kubelet[2204]: I0317 17:41:52.730640 2204 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Mar 17 17:41:52.731399 kubelet[2204]: I0317 17:41:52.731378 2204 server.go:460] "Adding debug handlers to kubelet server" Mar 17 17:41:52.731711 kubelet[2204]: I0317 17:41:52.731685 2204 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Mar 17 17:41:52.734138 kubelet[2204]: I0317 17:41:52.734048 2204 volume_manager.go:289] "Starting Kubelet Volume Manager" Mar 17 17:41:52.735100 kubelet[2204]: I0317 17:41:52.734190 2204 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Mar 17 17:41:52.735100 kubelet[2204]: I0317 17:41:52.734248 2204 reconciler.go:26] "Reconciler: start to sync state" Mar 17 17:41:52.735100 kubelet[2204]: W0317 17:41:52.734627 2204 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get 
"https://10.0.0.46:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.46:6443: connect: connection refused Mar 17 17:41:52.735100 kubelet[2204]: E0317 17:41:52.734675 2204 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.46:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.46:6443: connect: connection refused" logger="UnhandledError" Mar 17 17:41:52.735100 kubelet[2204]: E0317 17:41:52.734900 2204 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 17 17:41:52.735100 kubelet[2204]: E0317 17:41:52.734983 2204 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.46:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.46:6443: connect: connection refused" interval="200ms" Mar 17 17:41:52.735348 kubelet[2204]: E0317 17:41:52.735172 2204 kubelet.go:1478] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Mar 17 17:41:52.735480 kubelet[2204]: I0317 17:41:52.735454 2204 factory.go:221] Registration of the containerd container factory successfully Mar 17 17:41:52.735480 kubelet[2204]: I0317 17:41:52.735473 2204 factory.go:221] Registration of the systemd container factory successfully Mar 17 17:41:52.735590 kubelet[2204]: I0317 17:41:52.735544 2204 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 17 17:41:52.737378 kubelet[2204]: E0317 17:41:52.735128 2204 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.46:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.46:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.182da7f79911b465 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-03-17 17:41:52.729085029 +0000 UTC m=+0.321648872,LastTimestamp:2025-03-17 17:41:52.729085029 +0000 UTC m=+0.321648872,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Mar 17 17:41:52.752901 kubelet[2204]: I0317 17:41:52.751802 2204 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv4" Mar 17 17:41:52.753089 kubelet[2204]: I0317 17:41:52.753054 2204 cpu_manager.go:214] "Starting CPU manager" policy="none" Mar 17 17:41:52.753089 kubelet[2204]: I0317 17:41:52.753086 2204 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Mar 17 17:41:52.753167 kubelet[2204]: I0317 17:41:52.753103 2204 state_mem.go:36] "Initialized new in-memory state store" Mar 17 17:41:52.753442 kubelet[2204]: I0317 17:41:52.753412 2204 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Mar 17 17:41:52.753494 kubelet[2204]: I0317 17:41:52.753456 2204 status_manager.go:217] "Starting to sync pod status with apiserver" Mar 17 17:41:52.753494 kubelet[2204]: I0317 17:41:52.753481 2204 kubelet.go:2321] "Starting kubelet main sync loop" Mar 17 17:41:52.753555 kubelet[2204]: E0317 17:41:52.753538 2204 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 17 17:41:52.754795 kubelet[2204]: W0317 17:41:52.754741 2204 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.46:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.46:6443: connect: connection refused Mar 17 17:41:52.754841 kubelet[2204]: E0317 17:41:52.754793 2204 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.46:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.46:6443: connect: connection refused" logger="UnhandledError" Mar 17 17:41:52.835098 kubelet[2204]: E0317 17:41:52.834958 2204 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 17 17:41:52.854266 kubelet[2204]: E0317 17:41:52.854193 2204 kubelet.go:2345] "Skipping pod synchronization" err="container 
runtime status check may not have completed yet" Mar 17 17:41:52.935961 kubelet[2204]: E0317 17:41:52.935779 2204 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 17 17:41:52.936219 kubelet[2204]: E0317 17:41:52.936183 2204 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.46:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.46:6443: connect: connection refused" interval="400ms" Mar 17 17:41:53.036674 kubelet[2204]: E0317 17:41:53.036614 2204 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 17 17:41:53.055113 kubelet[2204]: E0317 17:41:53.055009 2204 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Mar 17 17:41:53.137784 kubelet[2204]: E0317 17:41:53.137657 2204 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 17 17:41:53.238754 kubelet[2204]: E0317 17:41:53.238601 2204 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 17 17:41:53.337628 kubelet[2204]: E0317 17:41:53.337552 2204 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.46:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.46:6443: connect: connection refused" interval="800ms" Mar 17 17:41:53.339712 kubelet[2204]: E0317 17:41:53.339668 2204 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 17 17:41:53.440552 kubelet[2204]: E0317 17:41:53.440459 2204 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 17 17:41:53.455845 kubelet[2204]: E0317 17:41:53.455746 2204 kubelet.go:2345] "Skipping pod 
synchronization" err="container runtime status check may not have completed yet" Mar 17 17:41:53.541770 kubelet[2204]: E0317 17:41:53.541533 2204 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 17 17:41:53.642514 kubelet[2204]: E0317 17:41:53.642391 2204 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 17 17:41:53.697602 kubelet[2204]: W0317 17:41:53.697509 2204 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.46:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.46:6443: connect: connection refused Mar 17 17:41:53.697602 kubelet[2204]: E0317 17:41:53.697585 2204 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.46:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.46:6443: connect: connection refused" logger="UnhandledError" Mar 17 17:41:53.743455 kubelet[2204]: E0317 17:41:53.743388 2204 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 17 17:41:53.844436 kubelet[2204]: E0317 17:41:53.844267 2204 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 17 17:41:53.892033 kubelet[2204]: W0317 17:41:53.891982 2204 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.46:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.46:6443: connect: connection refused Mar 17 17:41:53.892219 kubelet[2204]: E0317 17:41:53.892044 2204 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: 
Get \"https://10.0.0.46:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.46:6443: connect: connection refused" logger="UnhandledError" Mar 17 17:41:53.945111 kubelet[2204]: E0317 17:41:53.945019 2204 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 17 17:41:53.956915 kubelet[2204]: W0317 17:41:53.956812 2204 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.46:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.46:6443: connect: connection refused Mar 17 17:41:53.956915 kubelet[2204]: E0317 17:41:53.956909 2204 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.46:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.46:6443: connect: connection refused" logger="UnhandledError" Mar 17 17:41:53.957176 kubelet[2204]: I0317 17:41:53.957107 2204 policy_none.go:49] "None policy: Start" Mar 17 17:41:53.958386 kubelet[2204]: I0317 17:41:53.958357 2204 memory_manager.go:170] "Starting memorymanager" policy="None" Mar 17 17:41:53.958462 kubelet[2204]: I0317 17:41:53.958401 2204 state_mem.go:35] "Initializing new in-memory state store" Mar 17 17:41:54.045292 kubelet[2204]: E0317 17:41:54.045227 2204 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 17 17:41:54.138344 kubelet[2204]: E0317 17:41:54.138203 2204 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.46:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.46:6443: connect: connection refused" interval="1.6s" Mar 17 17:41:54.145526 kubelet[2204]: E0317 17:41:54.145469 2204 
kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 17 17:41:54.246095 kubelet[2204]: E0317 17:41:54.246019 2204 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 17 17:41:54.256277 kubelet[2204]: E0317 17:41:54.256212 2204 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Mar 17 17:41:54.332206 kubelet[2204]: W0317 17:41:54.332143 2204 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.46:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.46:6443: connect: connection refused Mar 17 17:41:54.332206 kubelet[2204]: E0317 17:41:54.332203 2204 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.46:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.46:6443: connect: connection refused" logger="UnhandledError" Mar 17 17:41:54.346992 kubelet[2204]: E0317 17:41:54.346944 2204 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 17 17:41:54.351691 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Mar 17 17:41:54.367137 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Mar 17 17:41:54.371671 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Mar 17 17:41:54.382724 kubelet[2204]: I0317 17:41:54.382511 2204 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Mar 17 17:41:54.382883 kubelet[2204]: I0317 17:41:54.382818 2204 eviction_manager.go:189] "Eviction manager: starting control loop" Mar 17 17:41:54.382883 kubelet[2204]: I0317 17:41:54.382831 2204 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 17 17:41:54.383489 kubelet[2204]: I0317 17:41:54.383094 2204 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 17 17:41:54.384553 kubelet[2204]: E0317 17:41:54.384501 2204 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Mar 17 17:41:54.484775 kubelet[2204]: I0317 17:41:54.484712 2204 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Mar 17 17:41:54.485277 kubelet[2204]: E0317 17:41:54.485105 2204 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.46:6443/api/v1/nodes\": dial tcp 10.0.0.46:6443: connect: connection refused" node="localhost" Mar 17 17:41:54.686813 kubelet[2204]: I0317 17:41:54.686738 2204 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Mar 17 17:41:54.687150 kubelet[2204]: E0317 17:41:54.687119 2204 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.46:6443/api/v1/nodes\": dial tcp 10.0.0.46:6443: connect: connection refused" node="localhost" Mar 17 17:41:54.859726 kubelet[2204]: E0317 17:41:54.859579 2204 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.46:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.46:6443: connect: 
connection refused" logger="UnhandledError" Mar 17 17:41:55.088870 kubelet[2204]: I0317 17:41:55.088817 2204 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Mar 17 17:41:55.089185 kubelet[2204]: E0317 17:41:55.089156 2204 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.46:6443/api/v1/nodes\": dial tcp 10.0.0.46:6443: connect: connection refused" node="localhost" Mar 17 17:41:55.639599 kubelet[2204]: E0317 17:41:55.639375 2204 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.46:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.46:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.182da7f79911b465 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-03-17 17:41:52.729085029 +0000 UTC m=+0.321648872,LastTimestamp:2025-03-17 17:41:52.729085029 +0000 UTC m=+0.321648872,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Mar 17 17:41:55.671371 kubelet[2204]: W0317 17:41:55.671283 2204 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.46:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.46:6443: connect: connection refused Mar 17 17:41:55.671371 kubelet[2204]: E0317 17:41:55.671354 2204 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.46:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.46:6443: connect: connection refused" 
logger="UnhandledError" Mar 17 17:41:55.739020 kubelet[2204]: E0317 17:41:55.738932 2204 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.46:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.46:6443: connect: connection refused" interval="3.2s" Mar 17 17:41:55.868733 systemd[1]: Created slice kubepods-burstable-pod1038cd261a55920d8c090f4c59e6a888.slice - libcontainer container kubepods-burstable-pod1038cd261a55920d8c090f4c59e6a888.slice. Mar 17 17:41:55.881837 systemd[1]: Created slice kubepods-burstable-pod60762308083b5ef6c837b1be48ec53d6.slice - libcontainer container kubepods-burstable-pod60762308083b5ef6c837b1be48ec53d6.slice. Mar 17 17:41:55.891254 kubelet[2204]: I0317 17:41:55.891128 2204 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Mar 17 17:41:55.891506 kubelet[2204]: E0317 17:41:55.891475 2204 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.46:6443/api/v1/nodes\": dial tcp 10.0.0.46:6443: connect: connection refused" node="localhost" Mar 17 17:41:55.901772 systemd[1]: Created slice kubepods-burstable-pod6f32907a07e55aea05abdc5cd284a8d5.slice - libcontainer container kubepods-burstable-pod6f32907a07e55aea05abdc5cd284a8d5.slice. 
Mar 17 17:41:55.954359 kubelet[2204]: I0317 17:41:55.954271 2204 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6f32907a07e55aea05abdc5cd284a8d5-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"6f32907a07e55aea05abdc5cd284a8d5\") " pod="kube-system/kube-scheduler-localhost" Mar 17 17:41:55.954359 kubelet[2204]: I0317 17:41:55.954341 2204 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1038cd261a55920d8c090f4c59e6a888-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"1038cd261a55920d8c090f4c59e6a888\") " pod="kube-system/kube-apiserver-localhost" Mar 17 17:41:55.954359 kubelet[2204]: I0317 17:41:55.954364 2204 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1038cd261a55920d8c090f4c59e6a888-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"1038cd261a55920d8c090f4c59e6a888\") " pod="kube-system/kube-apiserver-localhost" Mar 17 17:41:55.954633 kubelet[2204]: I0317 17:41:55.954385 2204 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1038cd261a55920d8c090f4c59e6a888-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"1038cd261a55920d8c090f4c59e6a888\") " pod="kube-system/kube-apiserver-localhost" Mar 17 17:41:55.954633 kubelet[2204]: I0317 17:41:55.954411 2204 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/60762308083b5ef6c837b1be48ec53d6-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"60762308083b5ef6c837b1be48ec53d6\") " pod="kube-system/kube-controller-manager-localhost" Mar 17 17:41:55.954633 kubelet[2204]: I0317 
17:41:55.954430 2204 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/60762308083b5ef6c837b1be48ec53d6-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"60762308083b5ef6c837b1be48ec53d6\") " pod="kube-system/kube-controller-manager-localhost" Mar 17 17:41:55.954633 kubelet[2204]: I0317 17:41:55.954451 2204 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/60762308083b5ef6c837b1be48ec53d6-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"60762308083b5ef6c837b1be48ec53d6\") " pod="kube-system/kube-controller-manager-localhost" Mar 17 17:41:55.954633 kubelet[2204]: I0317 17:41:55.954477 2204 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/60762308083b5ef6c837b1be48ec53d6-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"60762308083b5ef6c837b1be48ec53d6\") " pod="kube-system/kube-controller-manager-localhost" Mar 17 17:41:55.954811 kubelet[2204]: I0317 17:41:55.954499 2204 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/60762308083b5ef6c837b1be48ec53d6-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"60762308083b5ef6c837b1be48ec53d6\") " pod="kube-system/kube-controller-manager-localhost" Mar 17 17:41:56.179623 kubelet[2204]: E0317 17:41:56.179474 2204 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:41:56.180211 containerd[1471]: time="2025-03-17T17:41:56.180164399Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:1038cd261a55920d8c090f4c59e6a888,Namespace:kube-system,Attempt:0,}" Mar 17 17:41:56.200260 kubelet[2204]: E0317 17:41:56.200197 2204 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:41:56.200819 containerd[1471]: time="2025-03-17T17:41:56.200780501Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:60762308083b5ef6c837b1be48ec53d6,Namespace:kube-system,Attempt:0,}" Mar 17 17:41:56.205186 kubelet[2204]: E0317 17:41:56.205154 2204 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:41:56.205586 containerd[1471]: time="2025-03-17T17:41:56.205552841Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:6f32907a07e55aea05abdc5cd284a8d5,Namespace:kube-system,Attempt:0,}" Mar 17 17:41:56.584492 kubelet[2204]: W0317 17:41:56.584393 2204 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.46:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.46:6443: connect: connection refused Mar 17 17:41:56.584492 kubelet[2204]: E0317 17:41:56.584469 2204 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.46:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.46:6443: connect: connection refused" logger="UnhandledError" Mar 17 17:41:56.711579 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4175486272.mount: Deactivated successfully. 
Mar 17 17:41:56.718670 containerd[1471]: time="2025-03-17T17:41:56.718607257Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 17 17:41:56.721795 containerd[1471]: time="2025-03-17T17:41:56.721743465Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Mar 17 17:41:56.722937 containerd[1471]: time="2025-03-17T17:41:56.722874297Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 17 17:41:56.725853 containerd[1471]: time="2025-03-17T17:41:56.725745323Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 17 17:41:56.726730 containerd[1471]: time="2025-03-17T17:41:56.726679193Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Mar 17 17:41:56.727863 containerd[1471]: time="2025-03-17T17:41:56.727815830Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 17 17:41:56.728970 containerd[1471]: time="2025-03-17T17:41:56.728863635Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Mar 17 17:41:56.729878 containerd[1471]: time="2025-03-17T17:41:56.729836754Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 17 17:41:56.730774 
containerd[1471]: time="2025-03-17T17:41:56.730745121Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 550.489224ms" Mar 17 17:41:56.734570 containerd[1471]: time="2025-03-17T17:41:56.734536403Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 533.657216ms" Mar 17 17:41:56.740383 containerd[1471]: time="2025-03-17T17:41:56.740290764Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 534.657823ms" Mar 17 17:41:56.854993 containerd[1471]: time="2025-03-17T17:41:56.851354878Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 17:41:56.854993 containerd[1471]: time="2025-03-17T17:41:56.854287116Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 17:41:56.854993 containerd[1471]: time="2025-03-17T17:41:56.854304680Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:41:56.854993 containerd[1471]: time="2025-03-17T17:41:56.854407366Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:41:56.854993 containerd[1471]: time="2025-03-17T17:41:56.854360459Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 17:41:56.854993 containerd[1471]: time="2025-03-17T17:41:56.854416318Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 17:41:56.854993 containerd[1471]: time="2025-03-17T17:41:56.854427416Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:41:56.854993 containerd[1471]: time="2025-03-17T17:41:56.854652137Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 17:41:56.854993 containerd[1471]: time="2025-03-17T17:41:56.854745710Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 17:41:56.854993 containerd[1471]: time="2025-03-17T17:41:56.854765390Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:41:56.854993 containerd[1471]: time="2025-03-17T17:41:56.854874873Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:41:56.855497 containerd[1471]: time="2025-03-17T17:41:56.855444184Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:41:56.876293 systemd[1]: Started cri-containerd-985d27de4437cae386ab7af41b6955692119b96131c932c2c7f303b32c4054c9.scope - libcontainer container 985d27de4437cae386ab7af41b6955692119b96131c932c2c7f303b32c4054c9. 
Mar 17 17:41:56.881606 systemd[1]: Started cri-containerd-2c51bb5740d8b7052bbe2f0faaeb89c55000e24be8125b33e3a64ecfa4f9b88c.scope - libcontainer container 2c51bb5740d8b7052bbe2f0faaeb89c55000e24be8125b33e3a64ecfa4f9b88c. Mar 17 17:41:56.884375 systemd[1]: Started cri-containerd-d2df25af081e0db1f3cf13fba30327a227b3a36bf744bffbbfbb2e9d7614b508.scope - libcontainer container d2df25af081e0db1f3cf13fba30327a227b3a36bf744bffbbfbb2e9d7614b508. Mar 17 17:41:56.917090 kubelet[2204]: W0317 17:41:56.913039 2204 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.46:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.46:6443: connect: connection refused Mar 17 17:41:56.917090 kubelet[2204]: E0317 17:41:56.913128 2204 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.46:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.46:6443: connect: connection refused" logger="UnhandledError" Mar 17 17:41:56.917903 kubelet[2204]: W0317 17:41:56.917842 2204 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.46:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.46:6443: connect: connection refused Mar 17 17:41:56.917967 kubelet[2204]: E0317 17:41:56.917914 2204 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.46:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.46:6443: connect: connection refused" logger="UnhandledError" Mar 17 17:41:56.919581 containerd[1471]: time="2025-03-17T17:41:56.919542725Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:1038cd261a55920d8c090f4c59e6a888,Namespace:kube-system,Attempt:0,} returns sandbox id \"2c51bb5740d8b7052bbe2f0faaeb89c55000e24be8125b33e3a64ecfa4f9b88c\"" Mar 17 17:41:56.921499 containerd[1471]: time="2025-03-17T17:41:56.921478597Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:60762308083b5ef6c837b1be48ec53d6,Namespace:kube-system,Attempt:0,} returns sandbox id \"985d27de4437cae386ab7af41b6955692119b96131c932c2c7f303b32c4054c9\"" Mar 17 17:41:56.923312 kubelet[2204]: E0317 17:41:56.923275 2204 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:41:56.923504 kubelet[2204]: E0317 17:41:56.923465 2204 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:41:56.926758 containerd[1471]: time="2025-03-17T17:41:56.926720649Z" level=info msg="CreateContainer within sandbox \"2c51bb5740d8b7052bbe2f0faaeb89c55000e24be8125b33e3a64ecfa4f9b88c\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Mar 17 17:41:56.927954 containerd[1471]: time="2025-03-17T17:41:56.927932714Z" level=info msg="CreateContainer within sandbox \"985d27de4437cae386ab7af41b6955692119b96131c932c2c7f303b32c4054c9\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Mar 17 17:41:56.936056 containerd[1471]: time="2025-03-17T17:41:56.936000629Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:6f32907a07e55aea05abdc5cd284a8d5,Namespace:kube-system,Attempt:0,} returns sandbox id \"d2df25af081e0db1f3cf13fba30327a227b3a36bf744bffbbfbb2e9d7614b508\"" Mar 17 17:41:56.936849 kubelet[2204]: E0317 17:41:56.936820 2204 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits 
were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:41:56.938865 containerd[1471]: time="2025-03-17T17:41:56.938820387Z" level=info msg="CreateContainer within sandbox \"d2df25af081e0db1f3cf13fba30327a227b3a36bf744bffbbfbb2e9d7614b508\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Mar 17 17:41:56.954777 containerd[1471]: time="2025-03-17T17:41:56.954704458Z" level=info msg="CreateContainer within sandbox \"2c51bb5740d8b7052bbe2f0faaeb89c55000e24be8125b33e3a64ecfa4f9b88c\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"dc8f7ce4fafb5ae44f1166341bf00b9272440abd2dfa5a5e844f68bb18c302c4\"" Mar 17 17:41:56.955495 containerd[1471]: time="2025-03-17T17:41:56.955448616Z" level=info msg="StartContainer for \"dc8f7ce4fafb5ae44f1166341bf00b9272440abd2dfa5a5e844f68bb18c302c4\"" Mar 17 17:41:56.959193 containerd[1471]: time="2025-03-17T17:41:56.959058525Z" level=info msg="CreateContainer within sandbox \"985d27de4437cae386ab7af41b6955692119b96131c932c2c7f303b32c4054c9\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"aa582ea5025adf8be0b0779ae17462315e52418b0486475223b39f208be3da12\"" Mar 17 17:41:56.959661 containerd[1471]: time="2025-03-17T17:41:56.959562564Z" level=info msg="StartContainer for \"aa582ea5025adf8be0b0779ae17462315e52418b0486475223b39f208be3da12\"" Mar 17 17:41:56.969528 containerd[1471]: time="2025-03-17T17:41:56.969470280Z" level=info msg="CreateContainer within sandbox \"d2df25af081e0db1f3cf13fba30327a227b3a36bf744bffbbfbb2e9d7614b508\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"a4215da08dcba739e65e0bf027a5c0d0b8cab79148ee2afc6c14213c6417ff30\"" Mar 17 17:41:56.970098 containerd[1471]: time="2025-03-17T17:41:56.970036263Z" level=info msg="StartContainer for \"a4215da08dcba739e65e0bf027a5c0d0b8cab79148ee2afc6c14213c6417ff30\"" Mar 17 17:41:56.991370 systemd[1]: Started 
cri-containerd-dc8f7ce4fafb5ae44f1166341bf00b9272440abd2dfa5a5e844f68bb18c302c4.scope - libcontainer container dc8f7ce4fafb5ae44f1166341bf00b9272440abd2dfa5a5e844f68bb18c302c4. Mar 17 17:41:56.996032 systemd[1]: Started cri-containerd-aa582ea5025adf8be0b0779ae17462315e52418b0486475223b39f208be3da12.scope - libcontainer container aa582ea5025adf8be0b0779ae17462315e52418b0486475223b39f208be3da12. Mar 17 17:41:57.002021 systemd[1]: Started cri-containerd-a4215da08dcba739e65e0bf027a5c0d0b8cab79148ee2afc6c14213c6417ff30.scope - libcontainer container a4215da08dcba739e65e0bf027a5c0d0b8cab79148ee2afc6c14213c6417ff30. Mar 17 17:41:57.047611 containerd[1471]: time="2025-03-17T17:41:57.047543135Z" level=info msg="StartContainer for \"dc8f7ce4fafb5ae44f1166341bf00b9272440abd2dfa5a5e844f68bb18c302c4\" returns successfully" Mar 17 17:41:57.055545 containerd[1471]: time="2025-03-17T17:41:57.055432679Z" level=info msg="StartContainer for \"a4215da08dcba739e65e0bf027a5c0d0b8cab79148ee2afc6c14213c6417ff30\" returns successfully" Mar 17 17:41:57.055545 containerd[1471]: time="2025-03-17T17:41:57.055493649Z" level=info msg="StartContainer for \"aa582ea5025adf8be0b0779ae17462315e52418b0486475223b39f208be3da12\" returns successfully" Mar 17 17:41:57.493533 kubelet[2204]: I0317 17:41:57.493490 2204 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Mar 17 17:41:57.773249 kubelet[2204]: E0317 17:41:57.771643 2204 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:41:57.777009 kubelet[2204]: E0317 17:41:57.776952 2204 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:41:57.779681 kubelet[2204]: E0317 17:41:57.779643 2204 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some 
nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:41:58.166602 kubelet[2204]: I0317 17:41:58.166439 2204 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Mar 17 17:41:58.166602 kubelet[2204]: E0317 17:41:58.166505 2204 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Mar 17 17:41:58.178808 kubelet[2204]: E0317 17:41:58.178617 2204 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 17 17:41:58.279216 kubelet[2204]: E0317 17:41:58.279126 2204 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 17 17:41:58.380307 kubelet[2204]: E0317 17:41:58.380171 2204 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 17 17:41:58.481402 kubelet[2204]: E0317 17:41:58.481223 2204 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 17 17:41:58.582619 kubelet[2204]: E0317 17:41:58.581753 2204 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 17 17:41:58.682933 kubelet[2204]: E0317 17:41:58.682654 2204 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 17 17:41:58.783766 kubelet[2204]: E0317 17:41:58.783472 2204 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 17 17:41:58.789045 kubelet[2204]: E0317 17:41:58.787955 2204 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:41:58.884380 kubelet[2204]: E0317 17:41:58.884180 2204 kubelet_node_status.go:453] "Error getting the current 
node from lister" err="node \"localhost\" not found" Mar 17 17:41:58.984434 kubelet[2204]: E0317 17:41:58.984341 2204 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 17 17:41:59.085706 kubelet[2204]: E0317 17:41:59.085368 2204 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 17 17:41:59.186395 kubelet[2204]: E0317 17:41:59.186306 2204 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 17 17:41:59.286959 kubelet[2204]: E0317 17:41:59.286869 2204 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 17 17:41:59.387188 kubelet[2204]: E0317 17:41:59.387011 2204 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 17 17:41:59.487900 kubelet[2204]: E0317 17:41:59.487763 2204 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 17 17:41:59.588853 kubelet[2204]: E0317 17:41:59.588724 2204 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 17 17:41:59.690243 kubelet[2204]: E0317 17:41:59.689875 2204 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 17 17:41:59.792792 kubelet[2204]: E0317 17:41:59.792619 2204 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 17 17:41:59.893717 kubelet[2204]: E0317 17:41:59.893615 2204 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 17 17:41:59.995738 kubelet[2204]: E0317 17:41:59.995614 2204 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 17 17:42:00.096943 kubelet[2204]: E0317 
17:42:00.096729 2204 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 17 17:42:00.198164 kubelet[2204]: E0317 17:42:00.198109 2204 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 17 17:42:00.315217 kubelet[2204]: E0317 17:42:00.314989 2204 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 17 17:42:00.418823 kubelet[2204]: E0317 17:42:00.418762 2204 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 17 17:42:00.519873 kubelet[2204]: E0317 17:42:00.519773 2204 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 17 17:42:00.730339 kubelet[2204]: I0317 17:42:00.730239 2204 apiserver.go:52] "Watching apiserver" Mar 17 17:42:00.735167 kubelet[2204]: I0317 17:42:00.735087 2204 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Mar 17 17:42:00.846473 kubelet[2204]: E0317 17:42:00.846384 2204 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:42:01.602499 kubelet[2204]: E0317 17:42:01.600203 2204 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:42:01.806783 kubelet[2204]: E0317 17:42:01.805837 2204 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:42:01.806783 kubelet[2204]: E0317 17:42:01.806117 2204 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver 
line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:42:02.039853 systemd[1]: Reloading requested from client PID 2490 ('systemctl') (unit session-7.scope)... Mar 17 17:42:02.039874 systemd[1]: Reloading... Mar 17 17:42:02.222718 zram_generator::config[2529]: No configuration found. Mar 17 17:42:02.425735 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 17 17:42:02.566905 systemd[1]: Reloading finished in 526 ms. Mar 17 17:42:02.621715 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Mar 17 17:42:02.641478 systemd[1]: kubelet.service: Deactivated successfully. Mar 17 17:42:02.641888 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 17 17:42:02.641975 systemd[1]: kubelet.service: Consumed 1.053s CPU time, 118.8M memory peak, 0B memory swap peak. Mar 17 17:42:02.652543 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 17 17:42:02.820847 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 17 17:42:02.830118 update_engine[1459]: I20250317 17:42:02.830029 1459 update_attempter.cc:509] Updating boot flags... Mar 17 17:42:02.834625 (kubelet)[2574]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Mar 17 17:42:02.877271 kubelet[2574]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 17 17:42:02.877271 kubelet[2574]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. 
Mar 17 17:42:02.877271 kubelet[2574]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 17 17:42:02.877720 kubelet[2574]: I0317 17:42:02.877330 2574 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Mar 17 17:42:02.885172 kubelet[2574]: I0317 17:42:02.885013 2574 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" Mar 17 17:42:02.885172 kubelet[2574]: I0317 17:42:02.885057 2574 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 17 17:42:02.885380 kubelet[2574]: I0317 17:42:02.885345 2574 server.go:929] "Client rotation is on, will bootstrap in background" Mar 17 17:42:02.886873 kubelet[2574]: I0317 17:42:02.886839 2574 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Mar 17 17:42:02.889462 kubelet[2574]: I0317 17:42:02.888944 2574 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 17 17:42:02.894214 kubelet[2574]: E0317 17:42:02.893493 2574 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Mar 17 17:42:02.894214 kubelet[2574]: I0317 17:42:02.893536 2574 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Mar 17 17:42:02.899273 kubelet[2574]: I0317 17:42:02.899202 2574 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Mar 17 17:42:02.899460 kubelet[2574]: I0317 17:42:02.899371 2574 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Mar 17 17:42:02.899651 kubelet[2574]: I0317 17:42:02.899580 2574 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 17 17:42:02.900040 kubelet[2574]: I0317 17:42:02.899642 2574 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyO
ptions":null,"CgroupVersion":2} Mar 17 17:42:02.900040 kubelet[2574]: I0317 17:42:02.900038 2574 topology_manager.go:138] "Creating topology manager with none policy" Mar 17 17:42:02.900214 kubelet[2574]: I0317 17:42:02.900051 2574 container_manager_linux.go:300] "Creating device plugin manager" Mar 17 17:42:02.900214 kubelet[2574]: I0317 17:42:02.900107 2574 state_mem.go:36] "Initialized new in-memory state store" Mar 17 17:42:02.901130 kubelet[2574]: I0317 17:42:02.900434 2574 kubelet.go:408] "Attempting to sync node with API server" Mar 17 17:42:02.901130 kubelet[2574]: I0317 17:42:02.900484 2574 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 17 17:42:02.901130 kubelet[2574]: I0317 17:42:02.900907 2574 kubelet.go:314] "Adding apiserver pod source" Mar 17 17:42:02.901940 kubelet[2574]: I0317 17:42:02.901352 2574 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 17 17:42:02.904551 kubelet[2574]: I0317 17:42:02.902672 2574 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Mar 17 17:42:02.904551 kubelet[2574]: I0317 17:42:02.903165 2574 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Mar 17 17:42:02.904551 kubelet[2574]: I0317 17:42:02.903755 2574 server.go:1269] "Started kubelet" Mar 17 17:42:02.904551 kubelet[2574]: I0317 17:42:02.903916 2574 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Mar 17 17:42:02.904854 kubelet[2574]: I0317 17:42:02.904825 2574 server.go:460] "Adding debug handlers to kubelet server" Mar 17 17:42:02.906765 kubelet[2574]: I0317 17:42:02.906253 2574 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 17 17:42:02.906765 kubelet[2574]: I0317 17:42:02.906559 2574 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 17 17:42:02.907026 
kubelet[2574]: I0317 17:42:02.906999 2574 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Mar 17 17:42:02.910774 kubelet[2574]: I0317 17:42:02.910756 2574 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Mar 17 17:42:02.914272 kubelet[2574]: E0317 17:42:02.914238 2574 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 17 17:42:02.914349 kubelet[2574]: I0317 17:42:02.914340 2574 volume_manager.go:289] "Starting Kubelet Volume Manager" Mar 17 17:42:02.914559 kubelet[2574]: I0317 17:42:02.914544 2574 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Mar 17 17:42:02.914756 kubelet[2574]: I0317 17:42:02.914744 2574 reconciler.go:26] "Reconciler: start to sync state" Mar 17 17:42:02.916181 kubelet[2574]: I0317 17:42:02.916164 2574 factory.go:221] Registration of the systemd container factory successfully Mar 17 17:42:02.916331 kubelet[2574]: I0317 17:42:02.916314 2574 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 17 17:42:02.917977 kubelet[2574]: I0317 17:42:02.917963 2574 factory.go:221] Registration of the containerd container factory successfully Mar 17 17:42:02.929617 kubelet[2574]: I0317 17:42:02.929568 2574 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Mar 17 17:42:02.932211 kubelet[2574]: E0317 17:42:02.932166 2574 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Mar 17 17:42:02.933184 kubelet[2574]: I0317 17:42:02.933164 2574 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Mar 17 17:42:02.933302 kubelet[2574]: I0317 17:42:02.933287 2574 status_manager.go:217] "Starting to sync pod status with apiserver" Mar 17 17:42:02.933710 kubelet[2574]: I0317 17:42:02.933683 2574 kubelet.go:2321] "Starting kubelet main sync loop" Mar 17 17:42:02.933776 kubelet[2574]: E0317 17:42:02.933761 2574 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 17 17:42:02.983467 kubelet[2574]: I0317 17:42:02.983429 2574 cpu_manager.go:214] "Starting CPU manager" policy="none" Mar 17 17:42:02.983467 kubelet[2574]: I0317 17:42:02.983445 2574 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Mar 17 17:42:02.983467 kubelet[2574]: I0317 17:42:02.983465 2574 state_mem.go:36] "Initialized new in-memory state store" Mar 17 17:42:02.983682 kubelet[2574]: I0317 17:42:02.983620 2574 state_mem.go:88] "Updated default CPUSet" cpuSet="" Mar 17 17:42:02.983682 kubelet[2574]: I0317 17:42:02.983646 2574 state_mem.go:96] "Updated CPUSet assignments" assignments={} Mar 17 17:42:02.983682 kubelet[2574]: I0317 17:42:02.983665 2574 policy_none.go:49] "None policy: Start" Mar 17 17:42:02.984269 kubelet[2574]: I0317 17:42:02.984247 2574 memory_manager.go:170] "Starting memorymanager" policy="None" Mar 17 17:42:02.984269 kubelet[2574]: I0317 17:42:02.984270 2574 state_mem.go:35] "Initializing new in-memory state store" Mar 17 17:42:02.984403 kubelet[2574]: I0317 17:42:02.984386 2574 state_mem.go:75] "Updated machine memory state" Mar 17 17:42:02.989178 kubelet[2574]: I0317 17:42:02.988932 2574 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Mar 17 17:42:02.989178 kubelet[2574]: I0317 17:42:02.989132 2574 eviction_manager.go:189] "Eviction manager: starting control loop" Mar 17 17:42:02.989178 kubelet[2574]: I0317 17:42:02.989143 2574 container_log_manager.go:189] 
"Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 17 17:42:02.990445 kubelet[2574]: I0317 17:42:02.989383 2574 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 17 17:42:03.080884 kubelet[2574]: E0317 17:42:03.080099 2574 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Mar 17 17:42:03.080884 kubelet[2574]: E0317 17:42:03.080136 2574 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Mar 17 17:42:03.097661 kubelet[2574]: I0317 17:42:03.097335 2574 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Mar 17 17:42:03.100288 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (2610) Mar 17 17:42:03.108703 kubelet[2574]: I0317 17:42:03.108503 2574 kubelet_node_status.go:111] "Node was previously registered" node="localhost" Mar 17 17:42:03.108703 kubelet[2574]: I0317 17:42:03.108598 2574 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Mar 17 17:42:03.170115 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (2611) Mar 17 17:42:03.216115 kubelet[2574]: I0317 17:42:03.216043 2574 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/60762308083b5ef6c837b1be48ec53d6-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"60762308083b5ef6c837b1be48ec53d6\") " pod="kube-system/kube-controller-manager-localhost" Mar 17 17:42:03.216115 kubelet[2574]: I0317 17:42:03.216117 2574 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: 
\"kubernetes.io/host-path/1038cd261a55920d8c090f4c59e6a888-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"1038cd261a55920d8c090f4c59e6a888\") " pod="kube-system/kube-apiserver-localhost" Mar 17 17:42:03.216340 kubelet[2574]: I0317 17:42:03.216145 2574 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/60762308083b5ef6c837b1be48ec53d6-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"60762308083b5ef6c837b1be48ec53d6\") " pod="kube-system/kube-controller-manager-localhost" Mar 17 17:42:03.216340 kubelet[2574]: I0317 17:42:03.216166 2574 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/60762308083b5ef6c837b1be48ec53d6-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"60762308083b5ef6c837b1be48ec53d6\") " pod="kube-system/kube-controller-manager-localhost" Mar 17 17:42:03.216340 kubelet[2574]: I0317 17:42:03.216187 2574 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6f32907a07e55aea05abdc5cd284a8d5-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"6f32907a07e55aea05abdc5cd284a8d5\") " pod="kube-system/kube-scheduler-localhost" Mar 17 17:42:03.216340 kubelet[2574]: I0317 17:42:03.216205 2574 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1038cd261a55920d8c090f4c59e6a888-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"1038cd261a55920d8c090f4c59e6a888\") " pod="kube-system/kube-apiserver-localhost" Mar 17 17:42:03.216340 kubelet[2574]: I0317 17:42:03.216222 2574 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: 
\"kubernetes.io/host-path/1038cd261a55920d8c090f4c59e6a888-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"1038cd261a55920d8c090f4c59e6a888\") " pod="kube-system/kube-apiserver-localhost" Mar 17 17:42:03.216498 kubelet[2574]: I0317 17:42:03.216241 2574 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/60762308083b5ef6c837b1be48ec53d6-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"60762308083b5ef6c837b1be48ec53d6\") " pod="kube-system/kube-controller-manager-localhost" Mar 17 17:42:03.216498 kubelet[2574]: I0317 17:42:03.216259 2574 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/60762308083b5ef6c837b1be48ec53d6-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"60762308083b5ef6c837b1be48ec53d6\") " pod="kube-system/kube-controller-manager-localhost" Mar 17 17:42:03.381033 kubelet[2574]: E0317 17:42:03.380867 2574 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:42:03.381033 kubelet[2574]: E0317 17:42:03.381021 2574 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:42:03.381645 kubelet[2574]: E0317 17:42:03.381224 2574 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:42:03.922143 kubelet[2574]: I0317 17:42:03.922085 2574 apiserver.go:52] "Watching apiserver" Mar 17 17:42:03.965439 kubelet[2574]: E0317 17:42:03.965002 2574 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the 
applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:42:03.965439 kubelet[2574]: E0317 17:42:03.965244 2574 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:42:03.965439 kubelet[2574]: E0317 17:42:03.965423 2574 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:42:03.995441 kubelet[2574]: I0317 17:42:03.995370 2574 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=3.995351499 podStartE2EDuration="3.995351499s" podCreationTimestamp="2025-03-17 17:42:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 17:42:03.995192558 +0000 UTC m=+1.155614046" watchObservedRunningTime="2025-03-17 17:42:03.995351499 +0000 UTC m=+1.155772987" Mar 17 17:42:03.995642 kubelet[2574]: I0317 17:42:03.995497 2574 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=0.995492248 podStartE2EDuration="995.492248ms" podCreationTimestamp="2025-03-17 17:42:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 17:42:03.987623495 +0000 UTC m=+1.148044983" watchObservedRunningTime="2025-03-17 17:42:03.995492248 +0000 UTC m=+1.155913736" Mar 17 17:42:04.007099 kubelet[2574]: I0317 17:42:04.007024 2574 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=3.006993171 podStartE2EDuration="3.006993171s" podCreationTimestamp="2025-03-17 17:42:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 
UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 17:42:04.0069347 +0000 UTC m=+1.167356188" watchObservedRunningTime="2025-03-17 17:42:04.006993171 +0000 UTC m=+1.167414659" Mar 17 17:42:04.015339 kubelet[2574]: I0317 17:42:04.015274 2574 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Mar 17 17:42:04.968928 kubelet[2574]: E0317 17:42:04.967551 2574 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:42:04.968928 kubelet[2574]: E0317 17:42:04.967787 2574 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:42:06.239325 kubelet[2574]: I0317 17:42:06.239269 2574 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Mar 17 17:42:06.243204 containerd[1471]: time="2025-03-17T17:42:06.243151533Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Mar 17 17:42:06.243706 kubelet[2574]: I0317 17:42:06.243493 2574 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Mar 17 17:42:07.262488 kubelet[2574]: I0317 17:42:07.258650 2574 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/cbb6f2da-9180-4e32-a386-8f3e626fec8e-xtables-lock\") pod \"kube-proxy-jgpct\" (UID: \"cbb6f2da-9180-4e32-a386-8f3e626fec8e\") " pod="kube-system/kube-proxy-jgpct" Mar 17 17:42:07.262488 kubelet[2574]: I0317 17:42:07.261156 2574 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mzppl\" (UniqueName: \"kubernetes.io/projected/cbb6f2da-9180-4e32-a386-8f3e626fec8e-kube-api-access-mzppl\") pod \"kube-proxy-jgpct\" (UID: \"cbb6f2da-9180-4e32-a386-8f3e626fec8e\") " pod="kube-system/kube-proxy-jgpct" Mar 17 17:42:07.263795 kubelet[2574]: I0317 17:42:07.263618 2574 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/cbb6f2da-9180-4e32-a386-8f3e626fec8e-kube-proxy\") pod \"kube-proxy-jgpct\" (UID: \"cbb6f2da-9180-4e32-a386-8f3e626fec8e\") " pod="kube-system/kube-proxy-jgpct" Mar 17 17:42:07.263795 kubelet[2574]: I0317 17:42:07.263671 2574 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/cbb6f2da-9180-4e32-a386-8f3e626fec8e-lib-modules\") pod \"kube-proxy-jgpct\" (UID: \"cbb6f2da-9180-4e32-a386-8f3e626fec8e\") " pod="kube-system/kube-proxy-jgpct" Mar 17 17:42:07.272413 systemd[1]: Created slice kubepods-besteffort-podcbb6f2da_9180_4e32_a386_8f3e626fec8e.slice - libcontainer container kubepods-besteffort-podcbb6f2da_9180_4e32_a386_8f3e626fec8e.slice. 
Mar 17 17:42:07.377782 systemd[1]: Created slice kubepods-besteffort-pod71744967_0251_4ca3_a165_ad8537685712.slice - libcontainer container kubepods-besteffort-pod71744967_0251_4ca3_a165_ad8537685712.slice. Mar 17 17:42:07.464978 kubelet[2574]: I0317 17:42:07.464785 2574 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/71744967-0251-4ca3-a165-ad8537685712-var-lib-calico\") pod \"tigera-operator-64ff5465b7-jr72j\" (UID: \"71744967-0251-4ca3-a165-ad8537685712\") " pod="tigera-operator/tigera-operator-64ff5465b7-jr72j" Mar 17 17:42:07.464978 kubelet[2574]: I0317 17:42:07.464869 2574 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lqp7d\" (UniqueName: \"kubernetes.io/projected/71744967-0251-4ca3-a165-ad8537685712-kube-api-access-lqp7d\") pod \"tigera-operator-64ff5465b7-jr72j\" (UID: \"71744967-0251-4ca3-a165-ad8537685712\") " pod="tigera-operator/tigera-operator-64ff5465b7-jr72j" Mar 17 17:42:07.596508 kubelet[2574]: E0317 17:42:07.595686 2574 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:42:07.597466 containerd[1471]: time="2025-03-17T17:42:07.597277080Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-jgpct,Uid:cbb6f2da-9180-4e32-a386-8f3e626fec8e,Namespace:kube-system,Attempt:0,}" Mar 17 17:42:07.683014 containerd[1471]: time="2025-03-17T17:42:07.682847238Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 17:42:07.683014 containerd[1471]: time="2025-03-17T17:42:07.682925348Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 17:42:07.683014 containerd[1471]: time="2025-03-17T17:42:07.682967420Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:42:07.692403 containerd[1471]: time="2025-03-17T17:42:07.689679794Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-64ff5465b7-jr72j,Uid:71744967-0251-4ca3-a165-ad8537685712,Namespace:tigera-operator,Attempt:0,}" Mar 17 17:42:07.692403 containerd[1471]: time="2025-03-17T17:42:07.688662125Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:42:07.738326 kubelet[2574]: E0317 17:42:07.738278 2574 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:42:07.769265 systemd[1]: Started cri-containerd-4e7baeddfb75088c81b2f3e90ab7b6faeea630e7d804e3f52dff590aa160268e.scope - libcontainer container 4e7baeddfb75088c81b2f3e90ab7b6faeea630e7d804e3f52dff590aa160268e. Mar 17 17:42:07.797897 containerd[1471]: time="2025-03-17T17:42:07.797615806Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 17:42:07.797897 containerd[1471]: time="2025-03-17T17:42:07.797872185Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 17:42:07.798204 containerd[1471]: time="2025-03-17T17:42:07.797935112Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:42:07.798326 containerd[1471]: time="2025-03-17T17:42:07.798219913Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:42:07.852478 systemd[1]: Started cri-containerd-14dbadb1e3c4ff70dfbb6b97edc08ad5c52819ff56a98605cad45dee2d3ce6ea.scope - libcontainer container 14dbadb1e3c4ff70dfbb6b97edc08ad5c52819ff56a98605cad45dee2d3ce6ea. Mar 17 17:42:07.933165 containerd[1471]: time="2025-03-17T17:42:07.930089287Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-jgpct,Uid:cbb6f2da-9180-4e32-a386-8f3e626fec8e,Namespace:kube-system,Attempt:0,} returns sandbox id \"4e7baeddfb75088c81b2f3e90ab7b6faeea630e7d804e3f52dff590aa160268e\"" Mar 17 17:42:07.937194 kubelet[2574]: E0317 17:42:07.936620 2574 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:42:07.969349 containerd[1471]: time="2025-03-17T17:42:07.966253705Z" level=info msg="CreateContainer within sandbox \"4e7baeddfb75088c81b2f3e90ab7b6faeea630e7d804e3f52dff590aa160268e\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Mar 17 17:42:08.030951 kubelet[2574]: E0317 17:42:08.016777 2574 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:42:08.042301 containerd[1471]: time="2025-03-17T17:42:08.027833998Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-64ff5465b7-jr72j,Uid:71744967-0251-4ca3-a165-ad8537685712,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"14dbadb1e3c4ff70dfbb6b97edc08ad5c52819ff56a98605cad45dee2d3ce6ea\"" Mar 17 17:42:08.042301 containerd[1471]: time="2025-03-17T17:42:08.040548767Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.5\"" Mar 17 17:42:08.133441 containerd[1471]: time="2025-03-17T17:42:08.130262408Z" level=info msg="CreateContainer within sandbox 
\"4e7baeddfb75088c81b2f3e90ab7b6faeea630e7d804e3f52dff590aa160268e\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"0b6efa2e4f41c9abfa0c18f6275515766f0bebce908e87b2c3dd04431ab53dbc\"" Mar 17 17:42:08.140890 containerd[1471]: time="2025-03-17T17:42:08.140127327Z" level=info msg="StartContainer for \"0b6efa2e4f41c9abfa0c18f6275515766f0bebce908e87b2c3dd04431ab53dbc\"" Mar 17 17:42:08.261666 systemd[1]: Started cri-containerd-0b6efa2e4f41c9abfa0c18f6275515766f0bebce908e87b2c3dd04431ab53dbc.scope - libcontainer container 0b6efa2e4f41c9abfa0c18f6275515766f0bebce908e87b2c3dd04431ab53dbc. Mar 17 17:42:08.265368 sudo[1656]: pam_unix(sudo:session): session closed for user root Mar 17 17:42:08.267542 sshd[1655]: Connection closed by 10.0.0.1 port 41420 Mar 17 17:42:08.268908 sshd-session[1653]: pam_unix(sshd:session): session closed for user core Mar 17 17:42:08.278601 systemd[1]: sshd@7-10.0.0.46:22-10.0.0.1:41420.service: Deactivated successfully. Mar 17 17:42:08.285599 systemd[1]: session-7.scope: Deactivated successfully. Mar 17 17:42:08.286137 systemd[1]: session-7.scope: Consumed 5.232s CPU time, 150.1M memory peak, 0B memory swap peak. Mar 17 17:42:08.287683 systemd-logind[1456]: Session 7 logged out. Waiting for processes to exit. Mar 17 17:42:08.292946 systemd-logind[1456]: Removed session 7. 
Mar 17 17:42:08.353167 containerd[1471]: time="2025-03-17T17:42:08.350095985Z" level=info msg="StartContainer for \"0b6efa2e4f41c9abfa0c18f6275515766f0bebce908e87b2c3dd04431ab53dbc\" returns successfully" Mar 17 17:42:09.024373 kubelet[2574]: E0317 17:42:09.023914 2574 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:42:09.024373 kubelet[2574]: E0317 17:42:09.023966 2574 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:42:09.281497 kubelet[2574]: I0317 17:42:09.280974 2574 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-jgpct" podStartSLOduration=2.280888701 podStartE2EDuration="2.280888701s" podCreationTimestamp="2025-03-17 17:42:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 17:42:09.279881201 +0000 UTC m=+6.440302689" watchObservedRunningTime="2025-03-17 17:42:09.280888701 +0000 UTC m=+6.441310189" Mar 17 17:42:09.919200 kubelet[2574]: E0317 17:42:09.919148 2574 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:42:10.025081 kubelet[2574]: E0317 17:42:10.025036 2574 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:42:10.025648 kubelet[2574]: E0317 17:42:10.025196 2574 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:42:11.279606 systemd[1]: 
var-lib-containerd-tmpmounts-containerd\x2dmount2285698683.mount: Deactivated successfully. Mar 17 17:42:11.709720 containerd[1471]: time="2025-03-17T17:42:11.709528823Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:42:11.711316 containerd[1471]: time="2025-03-17T17:42:11.711282204Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.5: active requests=0, bytes read=21945008" Mar 17 17:42:11.713144 containerd[1471]: time="2025-03-17T17:42:11.713097285Z" level=info msg="ImageCreate event name:\"sha256:dc4a8a56c133edb1bc4c3d6bc94bcd96f2bde82413370cb1783ac2d7f3a46d53\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:42:11.715957 containerd[1471]: time="2025-03-17T17:42:11.715877885Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:3341fa9475c0325b86228c8726389f9bae9fd6c430c66fe5cd5dc39d7bb6ad4b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:42:11.716619 containerd[1471]: time="2025-03-17T17:42:11.716558772Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.5\" with image id \"sha256:dc4a8a56c133edb1bc4c3d6bc94bcd96f2bde82413370cb1783ac2d7f3a46d53\", repo tag \"quay.io/tigera/operator:v1.36.5\", repo digest \"quay.io/tigera/operator@sha256:3341fa9475c0325b86228c8726389f9bae9fd6c430c66fe5cd5dc39d7bb6ad4b\", size \"21941003\" in 3.675967734s" Mar 17 17:42:11.716619 containerd[1471]: time="2025-03-17T17:42:11.716610771Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.5\" returns image reference \"sha256:dc4a8a56c133edb1bc4c3d6bc94bcd96f2bde82413370cb1783ac2d7f3a46d53\"" Mar 17 17:42:11.719231 containerd[1471]: time="2025-03-17T17:42:11.719193674Z" level=info msg="CreateContainer within sandbox \"14dbadb1e3c4ff70dfbb6b97edc08ad5c52819ff56a98605cad45dee2d3ce6ea\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Mar 17 17:42:11.744941 containerd[1471]: 
time="2025-03-17T17:42:11.744187883Z" level=info msg="CreateContainer within sandbox \"14dbadb1e3c4ff70dfbb6b97edc08ad5c52819ff56a98605cad45dee2d3ce6ea\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"8097da46c7a7c29dd3c1b0cd773075f0cdd440416cf666b6a087c603c17210b6\"" Mar 17 17:42:11.746103 containerd[1471]: time="2025-03-17T17:42:11.745373356Z" level=info msg="StartContainer for \"8097da46c7a7c29dd3c1b0cd773075f0cdd440416cf666b6a087c603c17210b6\"" Mar 17 17:42:11.795792 systemd[1]: Started cri-containerd-8097da46c7a7c29dd3c1b0cd773075f0cdd440416cf666b6a087c603c17210b6.scope - libcontainer container 8097da46c7a7c29dd3c1b0cd773075f0cdd440416cf666b6a087c603c17210b6. Mar 17 17:42:11.845922 containerd[1471]: time="2025-03-17T17:42:11.845860468Z" level=info msg="StartContainer for \"8097da46c7a7c29dd3c1b0cd773075f0cdd440416cf666b6a087c603c17210b6\" returns successfully" Mar 17 17:42:12.041113 kubelet[2574]: I0317 17:42:12.040770 2574 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-64ff5465b7-jr72j" podStartSLOduration=1.362891012 podStartE2EDuration="5.040747414s" podCreationTimestamp="2025-03-17 17:42:07 +0000 UTC" firstStartedPulling="2025-03-17 17:42:08.039966148 +0000 UTC m=+5.200387636" lastFinishedPulling="2025-03-17 17:42:11.71782254 +0000 UTC m=+8.878244038" observedRunningTime="2025-03-17 17:42:12.040619777 +0000 UTC m=+9.201041265" watchObservedRunningTime="2025-03-17 17:42:12.040747414 +0000 UTC m=+9.201168902" Mar 17 17:42:14.902492 kubelet[2574]: E0317 17:42:14.901980 2574 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:42:15.614345 systemd[1]: Created slice kubepods-besteffort-pod6d965ea0_91b3_4afe_bdab_68a6e4823d65.slice - libcontainer container kubepods-besteffort-pod6d965ea0_91b3_4afe_bdab_68a6e4823d65.slice. 
Mar 17 17:42:15.650121 kubelet[2574]: I0317 17:42:15.650041 2574 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6d965ea0-91b3-4afe-bdab-68a6e4823d65-tigera-ca-bundle\") pod \"calico-typha-6478f5dd9d-wxsjc\" (UID: \"6d965ea0-91b3-4afe-bdab-68a6e4823d65\") " pod="calico-system/calico-typha-6478f5dd9d-wxsjc" Mar 17 17:42:15.650121 kubelet[2574]: I0317 17:42:15.650114 2574 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/6d965ea0-91b3-4afe-bdab-68a6e4823d65-typha-certs\") pod \"calico-typha-6478f5dd9d-wxsjc\" (UID: \"6d965ea0-91b3-4afe-bdab-68a6e4823d65\") " pod="calico-system/calico-typha-6478f5dd9d-wxsjc" Mar 17 17:42:15.650121 kubelet[2574]: I0317 17:42:15.650134 2574 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7gpwv\" (UniqueName: \"kubernetes.io/projected/6d965ea0-91b3-4afe-bdab-68a6e4823d65-kube-api-access-7gpwv\") pod \"calico-typha-6478f5dd9d-wxsjc\" (UID: \"6d965ea0-91b3-4afe-bdab-68a6e4823d65\") " pod="calico-system/calico-typha-6478f5dd9d-wxsjc" Mar 17 17:42:15.685516 systemd[1]: Created slice kubepods-besteffort-podd6fdada1_a6e3_4d09_9a38_2678efd09fe5.slice - libcontainer container kubepods-besteffort-podd6fdada1_a6e3_4d09_9a38_2678efd09fe5.slice. 
Mar 17 17:42:15.751276 kubelet[2574]: I0317 17:42:15.751205 2574 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/d6fdada1-a6e3-4d09-9a38-2678efd09fe5-policysync\") pod \"calico-node-n5pnx\" (UID: \"d6fdada1-a6e3-4d09-9a38-2678efd09fe5\") " pod="calico-system/calico-node-n5pnx" Mar 17 17:42:15.751276 kubelet[2574]: I0317 17:42:15.751258 2574 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/d6fdada1-a6e3-4d09-9a38-2678efd09fe5-cni-net-dir\") pod \"calico-node-n5pnx\" (UID: \"d6fdada1-a6e3-4d09-9a38-2678efd09fe5\") " pod="calico-system/calico-node-n5pnx" Mar 17 17:42:15.751276 kubelet[2574]: I0317 17:42:15.751277 2574 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-57n4h\" (UniqueName: \"kubernetes.io/projected/d6fdada1-a6e3-4d09-9a38-2678efd09fe5-kube-api-access-57n4h\") pod \"calico-node-n5pnx\" (UID: \"d6fdada1-a6e3-4d09-9a38-2678efd09fe5\") " pod="calico-system/calico-node-n5pnx" Mar 17 17:42:15.751522 kubelet[2574]: I0317 17:42:15.751301 2574 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d6fdada1-a6e3-4d09-9a38-2678efd09fe5-xtables-lock\") pod \"calico-node-n5pnx\" (UID: \"d6fdada1-a6e3-4d09-9a38-2678efd09fe5\") " pod="calico-system/calico-node-n5pnx" Mar 17 17:42:15.751522 kubelet[2574]: I0317 17:42:15.751319 2574 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d6fdada1-a6e3-4d09-9a38-2678efd09fe5-tigera-ca-bundle\") pod \"calico-node-n5pnx\" (UID: \"d6fdada1-a6e3-4d09-9a38-2678efd09fe5\") " pod="calico-system/calico-node-n5pnx" Mar 17 17:42:15.751522 kubelet[2574]: I0317 17:42:15.751337 2574 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/d6fdada1-a6e3-4d09-9a38-2678efd09fe5-cni-bin-dir\") pod \"calico-node-n5pnx\" (UID: \"d6fdada1-a6e3-4d09-9a38-2678efd09fe5\") " pod="calico-system/calico-node-n5pnx" Mar 17 17:42:15.751522 kubelet[2574]: I0317 17:42:15.751381 2574 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/d6fdada1-a6e3-4d09-9a38-2678efd09fe5-node-certs\") pod \"calico-node-n5pnx\" (UID: \"d6fdada1-a6e3-4d09-9a38-2678efd09fe5\") " pod="calico-system/calico-node-n5pnx" Mar 17 17:42:15.751522 kubelet[2574]: I0317 17:42:15.751398 2574 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/d6fdada1-a6e3-4d09-9a38-2678efd09fe5-cni-log-dir\") pod \"calico-node-n5pnx\" (UID: \"d6fdada1-a6e3-4d09-9a38-2678efd09fe5\") " pod="calico-system/calico-node-n5pnx" Mar 17 17:42:15.751694 kubelet[2574]: I0317 17:42:15.751415 2574 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/d6fdada1-a6e3-4d09-9a38-2678efd09fe5-flexvol-driver-host\") pod \"calico-node-n5pnx\" (UID: \"d6fdada1-a6e3-4d09-9a38-2678efd09fe5\") " pod="calico-system/calico-node-n5pnx" Mar 17 17:42:15.751694 kubelet[2574]: I0317 17:42:15.751433 2574 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d6fdada1-a6e3-4d09-9a38-2678efd09fe5-lib-modules\") pod \"calico-node-n5pnx\" (UID: \"d6fdada1-a6e3-4d09-9a38-2678efd09fe5\") " pod="calico-system/calico-node-n5pnx" Mar 17 17:42:15.751694 kubelet[2574]: I0317 17:42:15.751449 2574 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/d6fdada1-a6e3-4d09-9a38-2678efd09fe5-var-run-calico\") pod \"calico-node-n5pnx\" (UID: \"d6fdada1-a6e3-4d09-9a38-2678efd09fe5\") " pod="calico-system/calico-node-n5pnx"
Mar 17 17:42:15.751694 kubelet[2574]: I0317 17:42:15.751464 2574 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/d6fdada1-a6e3-4d09-9a38-2678efd09fe5-var-lib-calico\") pod \"calico-node-n5pnx\" (UID: \"d6fdada1-a6e3-4d09-9a38-2678efd09fe5\") " pod="calico-system/calico-node-n5pnx"
Mar 17 17:42:15.795673 kubelet[2574]: E0317 17:42:15.795573 2574 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-lsjhx" podUID="c4ec38d1-6c2f-488c-8f17-eb6d6aa4990d"
Mar 17 17:42:15.852629 kubelet[2574]: I0317 17:42:15.852575 2574 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/c4ec38d1-6c2f-488c-8f17-eb6d6aa4990d-varrun\") pod \"csi-node-driver-lsjhx\" (UID: \"c4ec38d1-6c2f-488c-8f17-eb6d6aa4990d\") " pod="calico-system/csi-node-driver-lsjhx"
Mar 17 17:42:15.852629 kubelet[2574]: I0317 17:42:15.852629 2574 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c4ec38d1-6c2f-488c-8f17-eb6d6aa4990d-kubelet-dir\") pod \"csi-node-driver-lsjhx\" (UID: \"c4ec38d1-6c2f-488c-8f17-eb6d6aa4990d\") " pod="calico-system/csi-node-driver-lsjhx"
Mar 17 17:42:15.852806 kubelet[2574]: I0317 17:42:15.852720 2574 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/c4ec38d1-6c2f-488c-8f17-eb6d6aa4990d-socket-dir\") pod \"csi-node-driver-lsjhx\" (UID: \"c4ec38d1-6c2f-488c-8f17-eb6d6aa4990d\") " pod="calico-system/csi-node-driver-lsjhx"
Mar 17 17:42:15.852806 kubelet[2574]: I0317 17:42:15.852757 2574 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/c4ec38d1-6c2f-488c-8f17-eb6d6aa4990d-registration-dir\") pod \"csi-node-driver-lsjhx\" (UID: \"c4ec38d1-6c2f-488c-8f17-eb6d6aa4990d\") " pod="calico-system/csi-node-driver-lsjhx"
Mar 17 17:42:15.852806 kubelet[2574]: I0317 17:42:15.852794 2574 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tmzsf\" (UniqueName: \"kubernetes.io/projected/c4ec38d1-6c2f-488c-8f17-eb6d6aa4990d-kube-api-access-tmzsf\") pod \"csi-node-driver-lsjhx\" (UID: \"c4ec38d1-6c2f-488c-8f17-eb6d6aa4990d\") " pod="calico-system/csi-node-driver-lsjhx"
Mar 17 17:42:15.864334 kubelet[2574]: E0317 17:42:15.864155 2574 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 17 17:42:15.864334 kubelet[2574]: W0317 17:42:15.864178 2574 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 17 17:42:15.864334 kubelet[2574]: E0317 17:42:15.864208 2574 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 17 17:42:15.864588 kubelet[2574]: E0317 17:42:15.864509 2574 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 17 17:42:15.864588 kubelet[2574]: W0317 17:42:15.864518 2574 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 17 17:42:15.864588 kubelet[2574]: E0317 17:42:15.864527 2574 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 17 17:42:15.921406 kubelet[2574]: E0317 17:42:15.921235 2574 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 17:42:15.922560 containerd[1471]: time="2025-03-17T17:42:15.922513170Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6478f5dd9d-wxsjc,Uid:6d965ea0-91b3-4afe-bdab-68a6e4823d65,Namespace:calico-system,Attempt:0,}"
Mar 17 17:42:15.955477 kubelet[2574]: E0317 17:42:15.955285 2574 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 17 17:42:15.955477 kubelet[2574]: W0317 17:42:15.955314 2574 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 17 17:42:15.955477 kubelet[2574]: E0317 17:42:15.955339 2574 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Mar 17 17:42:15.968473 kubelet[2574]: E0317 17:42:15.968456 2574 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 17 17:42:15.968620 kubelet[2574]: W0317 17:42:15.968559 2574 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 17 17:42:15.968620 kubelet[2574]: E0317 17:42:15.968589 2574 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 17 17:42:15.973329 containerd[1471]: time="2025-03-17T17:42:15.973243883Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 17 17:42:15.974464 containerd[1471]: time="2025-03-17T17:42:15.974265194Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 17 17:42:15.974464 containerd[1471]: time="2025-03-17T17:42:15.974292180Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 17 17:42:15.974464 containerd[1471]: time="2025-03-17T17:42:15.974390903Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 17 17:42:15.975392 kubelet[2574]: E0317 17:42:15.975360 2574 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 17 17:42:15.975392 kubelet[2574]: W0317 17:42:15.975379 2574 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 17 17:42:15.975486 kubelet[2574]: E0317 17:42:15.975398 2574 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 17 17:42:15.988493 kubelet[2574]: E0317 17:42:15.988454 2574 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 17:42:15.989016 containerd[1471]: time="2025-03-17T17:42:15.988972369Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-n5pnx,Uid:d6fdada1-a6e3-4d09-9a38-2678efd09fe5,Namespace:calico-system,Attempt:0,}"
Mar 17 17:42:16.001425 systemd[1]: Started cri-containerd-8962b6595b9c1909a28f2d7e3fc19339ce0e9e3bf131fb47649a56d1d6371532.scope - libcontainer container 8962b6595b9c1909a28f2d7e3fc19339ce0e9e3bf131fb47649a56d1d6371532.
Mar 17 17:42:16.017977 containerd[1471]: time="2025-03-17T17:42:16.017799301Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 17 17:42:16.017977 containerd[1471]: time="2025-03-17T17:42:16.017901932Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 17 17:42:16.017977 containerd[1471]: time="2025-03-17T17:42:16.017948287Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 17 17:42:16.018165 containerd[1471]: time="2025-03-17T17:42:16.018046508Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 17 17:42:16.042223 systemd[1]: Started cri-containerd-85cf3996f5fc2b1c773d089f1e9003eacbde469cb850cd6ba0a2eb61c2d7d06b.scope - libcontainer container 85cf3996f5fc2b1c773d089f1e9003eacbde469cb850cd6ba0a2eb61c2d7d06b.
Mar 17 17:42:16.054946 containerd[1471]: time="2025-03-17T17:42:16.054831212Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6478f5dd9d-wxsjc,Uid:6d965ea0-91b3-4afe-bdab-68a6e4823d65,Namespace:calico-system,Attempt:0,} returns sandbox id \"8962b6595b9c1909a28f2d7e3fc19339ce0e9e3bf131fb47649a56d1d6371532\""
Mar 17 17:42:16.056396 kubelet[2574]: E0317 17:42:16.056349 2574 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 17:42:16.058223 containerd[1471]: time="2025-03-17T17:42:16.057981076Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.2\""
Mar 17 17:42:16.074126 containerd[1471]: time="2025-03-17T17:42:16.074081314Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-n5pnx,Uid:d6fdada1-a6e3-4d09-9a38-2678efd09fe5,Namespace:calico-system,Attempt:0,} returns sandbox id \"85cf3996f5fc2b1c773d089f1e9003eacbde469cb850cd6ba0a2eb61c2d7d06b\""
Mar 17 17:42:16.075182 kubelet[2574]: E0317 17:42:16.075149 2574 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 17:42:17.935023 kubelet[2574]: E0317 17:42:17.934947 2574 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-lsjhx" podUID="c4ec38d1-6c2f-488c-8f17-eb6d6aa4990d"
Mar 17 17:42:18.638262 containerd[1471]: time="2025-03-17T17:42:18.637577942Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 17 17:42:18.641614 containerd[1471]: time="2025-03-17T17:42:18.641532339Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.2: active requests=0, bytes read=30414075"
Mar 17 17:42:18.645341 containerd[1471]: time="2025-03-17T17:42:18.645203945Z" level=info msg="ImageCreate event name:\"sha256:1d6f9d005866d74e6f0a8b0b8b743d0eaf4efcb7c7032fd2215da9c6ca131cb5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 17 17:42:18.652595 containerd[1471]: time="2025-03-17T17:42:18.652408375Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:9839fd34b4c1bad50beed72aec59c64893487a46eea57dc2d7d66c3041d7bcce\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 17 17:42:18.653058 containerd[1471]: time="2025-03-17T17:42:18.653015485Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.2\" with image id \"sha256:1d6f9d005866d74e6f0a8b0b8b743d0eaf4efcb7c7032fd2215da9c6ca131cb5\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.2\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:9839fd34b4c1bad50beed72aec59c64893487a46eea57dc2d7d66c3041d7bcce\", size \"31907171\" in 2.595002173s"
Mar 17 17:42:18.653173 containerd[1471]: time="2025-03-17T17:42:18.653055967Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.2\" returns image reference \"sha256:1d6f9d005866d74e6f0a8b0b8b743d0eaf4efcb7c7032fd2215da9c6ca131cb5\""
Mar 17 17:42:18.656236 containerd[1471]: time="2025-03-17T17:42:18.655716645Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.2\""
Mar 17 17:42:18.672119 containerd[1471]: time="2025-03-17T17:42:18.670244826Z" level=info msg="CreateContainer within sandbox \"8962b6595b9c1909a28f2d7e3fc19339ce0e9e3bf131fb47649a56d1d6371532\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
Mar 17 17:42:18.744563 containerd[1471]: time="2025-03-17T17:42:18.744453919Z" level=info msg="CreateContainer within sandbox \"8962b6595b9c1909a28f2d7e3fc19339ce0e9e3bf131fb47649a56d1d6371532\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"1b3af490df1e4e310ce42252ed6885fc9066c8a0701bc70aeb61368171faacd6\""
Mar 17 17:42:18.746206 containerd[1471]: time="2025-03-17T17:42:18.746056825Z" level=info msg="StartContainer for \"1b3af490df1e4e310ce42252ed6885fc9066c8a0701bc70aeb61368171faacd6\""
Mar 17 17:42:18.811735 systemd[1]: Started cri-containerd-1b3af490df1e4e310ce42252ed6885fc9066c8a0701bc70aeb61368171faacd6.scope - libcontainer container 1b3af490df1e4e310ce42252ed6885fc9066c8a0701bc70aeb61368171faacd6.
Mar 17 17:42:18.933538 containerd[1471]: time="2025-03-17T17:42:18.933354416Z" level=info msg="StartContainer for \"1b3af490df1e4e310ce42252ed6885fc9066c8a0701bc70aeb61368171faacd6\" returns successfully"
Mar 17 17:42:19.057830 kubelet[2574]: E0317 17:42:19.057796 2574 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 17:42:19.152667 kubelet[2574]: E0317 17:42:19.152608 2574 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 17 17:42:19.152667 kubelet[2574]: W0317 17:42:19.152641 2574 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 17 17:42:19.152667 kubelet[2574]: E0317 17:42:19.152666 2574 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume 
plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Error: unexpected end of JSON input" Mar 17 17:42:19.156374 kubelet[2574]: E0317 17:42:19.156358 2574 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:42:19.156374 kubelet[2574]: W0317 17:42:19.156369 2574 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:42:19.156425 kubelet[2574]: E0317 17:42:19.156377 2574 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:42:19.156596 kubelet[2574]: E0317 17:42:19.156581 2574 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:42:19.156596 kubelet[2574]: W0317 17:42:19.156592 2574 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:42:19.156654 kubelet[2574]: E0317 17:42:19.156599 2574 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 17:42:19.189528 kubelet[2574]: E0317 17:42:19.189374 2574 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:42:19.189528 kubelet[2574]: W0317 17:42:19.189400 2574 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:42:19.189528 kubelet[2574]: E0317 17:42:19.189422 2574 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:42:19.189726 kubelet[2574]: E0317 17:42:19.189658 2574 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:42:19.189726 kubelet[2574]: W0317 17:42:19.189669 2574 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:42:19.189726 kubelet[2574]: E0317 17:42:19.189679 2574 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 17:42:19.190027 kubelet[2574]: E0317 17:42:19.189905 2574 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:42:19.190027 kubelet[2574]: W0317 17:42:19.189929 2574 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:42:19.190027 kubelet[2574]: E0317 17:42:19.189942 2574 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:42:19.190353 kubelet[2574]: E0317 17:42:19.190215 2574 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:42:19.190353 kubelet[2574]: W0317 17:42:19.190227 2574 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:42:19.190353 kubelet[2574]: E0317 17:42:19.190245 2574 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 17:42:19.190537 kubelet[2574]: E0317 17:42:19.190504 2574 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:42:19.190537 kubelet[2574]: W0317 17:42:19.190520 2574 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:42:19.190537 kubelet[2574]: E0317 17:42:19.190535 2574 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:42:19.190768 kubelet[2574]: E0317 17:42:19.190750 2574 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:42:19.190768 kubelet[2574]: W0317 17:42:19.190765 2574 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:42:19.190831 kubelet[2574]: E0317 17:42:19.190782 2574 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 17:42:19.191196 kubelet[2574]: E0317 17:42:19.191058 2574 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:42:19.191196 kubelet[2574]: W0317 17:42:19.191111 2574 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:42:19.191196 kubelet[2574]: E0317 17:42:19.191141 2574 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:42:19.191486 kubelet[2574]: E0317 17:42:19.191469 2574 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:42:19.191486 kubelet[2574]: W0317 17:42:19.191481 2574 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:42:19.191545 kubelet[2574]: E0317 17:42:19.191518 2574 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 17:42:19.191793 kubelet[2574]: E0317 17:42:19.191777 2574 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:42:19.191793 kubelet[2574]: W0317 17:42:19.191789 2574 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:42:19.191871 kubelet[2574]: E0317 17:42:19.191828 2574 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:42:19.192016 kubelet[2574]: E0317 17:42:19.192002 2574 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:42:19.192016 kubelet[2574]: W0317 17:42:19.192012 2574 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:42:19.192101 kubelet[2574]: E0317 17:42:19.192027 2574 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 17:42:19.192253 kubelet[2574]: E0317 17:42:19.192234 2574 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:42:19.192253 kubelet[2574]: W0317 17:42:19.192247 2574 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:42:19.192335 kubelet[2574]: E0317 17:42:19.192264 2574 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:42:19.192500 kubelet[2574]: E0317 17:42:19.192484 2574 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:42:19.192500 kubelet[2574]: W0317 17:42:19.192495 2574 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:42:19.192566 kubelet[2574]: E0317 17:42:19.192509 2574 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 17:42:19.192720 kubelet[2574]: E0317 17:42:19.192707 2574 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:42:19.192720 kubelet[2574]: W0317 17:42:19.192718 2574 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:42:19.192776 kubelet[2574]: E0317 17:42:19.192732 2574 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:42:19.192938 kubelet[2574]: E0317 17:42:19.192926 2574 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:42:19.192938 kubelet[2574]: W0317 17:42:19.192935 2574 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:42:19.192992 kubelet[2574]: E0317 17:42:19.192948 2574 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 17:42:19.193219 kubelet[2574]: E0317 17:42:19.193206 2574 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:42:19.193219 kubelet[2574]: W0317 17:42:19.193215 2574 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:42:19.193302 kubelet[2574]: E0317 17:42:19.193243 2574 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:42:19.193432 kubelet[2574]: E0317 17:42:19.193419 2574 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:42:19.193432 kubelet[2574]: W0317 17:42:19.193428 2574 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:42:19.193503 kubelet[2574]: E0317 17:42:19.193457 2574 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 17:42:19.193635 kubelet[2574]: E0317 17:42:19.193622 2574 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:42:19.193635 kubelet[2574]: W0317 17:42:19.193631 2574 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:42:19.193700 kubelet[2574]: E0317 17:42:19.193645 2574 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:42:19.193887 kubelet[2574]: E0317 17:42:19.193866 2574 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:42:19.193887 kubelet[2574]: W0317 17:42:19.193882 2574 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:42:19.193955 kubelet[2574]: E0317 17:42:19.193897 2574 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 17:42:19.934652 kubelet[2574]: E0317 17:42:19.934583 2574 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-lsjhx" podUID="c4ec38d1-6c2f-488c-8f17-eb6d6aa4990d" Mar 17 17:42:20.058777 kubelet[2574]: I0317 17:42:20.058744 2574 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 17 17:42:20.059279 kubelet[2574]: E0317 17:42:20.059119 2574 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:42:20.062503 kubelet[2574]: E0317 17:42:20.062470 2574 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:42:20.062503 kubelet[2574]: W0317 17:42:20.062487 2574 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:42:20.062503 kubelet[2574]: E0317 17:42:20.062505 2574 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 17:42:20.062737 kubelet[2574]: E0317 17:42:20.062726 2574 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:42:20.062775 kubelet[2574]: W0317 17:42:20.062737 2574 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:42:20.062775 kubelet[2574]: E0317 17:42:20.062748 2574 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:42:20.062993 kubelet[2574]: E0317 17:42:20.062966 2574 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:42:20.062993 kubelet[2574]: W0317 17:42:20.062980 2574 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:42:20.062993 kubelet[2574]: E0317 17:42:20.062991 2574 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 17:42:20.063255 kubelet[2574]: E0317 17:42:20.063229 2574 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:42:20.063255 kubelet[2574]: W0317 17:42:20.063243 2574 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:42:20.063255 kubelet[2574]: E0317 17:42:20.063254 2574 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:42:20.063524 kubelet[2574]: E0317 17:42:20.063498 2574 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:42:20.063524 kubelet[2574]: W0317 17:42:20.063511 2574 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:42:20.063524 kubelet[2574]: E0317 17:42:20.063522 2574 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 17:42:20.063765 kubelet[2574]: E0317 17:42:20.063739 2574 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:42:20.063765 kubelet[2574]: W0317 17:42:20.063752 2574 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:42:20.063765 kubelet[2574]: E0317 17:42:20.063763 2574 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:42:20.064036 kubelet[2574]: E0317 17:42:20.064010 2574 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:42:20.064036 kubelet[2574]: W0317 17:42:20.064023 2574 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:42:20.064036 kubelet[2574]: E0317 17:42:20.064034 2574 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 17:42:20.064386 kubelet[2574]: E0317 17:42:20.064343 2574 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:42:20.064386 kubelet[2574]: W0317 17:42:20.064375 2574 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:42:20.064454 kubelet[2574]: E0317 17:42:20.064398 2574 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:42:20.064696 kubelet[2574]: E0317 17:42:20.064681 2574 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:42:20.064696 kubelet[2574]: W0317 17:42:20.064691 2574 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:42:20.064747 kubelet[2574]: E0317 17:42:20.064700 2574 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 17:42:20.064975 kubelet[2574]: E0317 17:42:20.064957 2574 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:42:20.064975 kubelet[2574]: W0317 17:42:20.064971 2574 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:42:20.065087 kubelet[2574]: E0317 17:42:20.064983 2574 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:42:20.065277 kubelet[2574]: E0317 17:42:20.065259 2574 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:42:20.065277 kubelet[2574]: W0317 17:42:20.065272 2574 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:42:20.065362 kubelet[2574]: E0317 17:42:20.065284 2574 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 17:42:20.065533 kubelet[2574]: E0317 17:42:20.065516 2574 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:42:20.065533 kubelet[2574]: W0317 17:42:20.065528 2574 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:42:20.065624 kubelet[2574]: E0317 17:42:20.065539 2574 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:42:20.065800 kubelet[2574]: E0317 17:42:20.065784 2574 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:42:20.065800 kubelet[2574]: W0317 17:42:20.065797 2574 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:42:20.065888 kubelet[2574]: E0317 17:42:20.065808 2574 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 17:42:20.066034 kubelet[2574]: E0317 17:42:20.066016 2574 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:42:20.066034 kubelet[2574]: W0317 17:42:20.066028 2574 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:42:20.066118 kubelet[2574]: E0317 17:42:20.066040 2574 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:42:20.066311 kubelet[2574]: E0317 17:42:20.066294 2574 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:42:20.066311 kubelet[2574]: W0317 17:42:20.066307 2574 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:42:20.066412 kubelet[2574]: E0317 17:42:20.066318 2574 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 17:42:20.096784 kubelet[2574]: E0317 17:42:20.096733 2574 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:42:20.096784 kubelet[2574]: W0317 17:42:20.096763 2574 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:42:20.096784 kubelet[2574]: E0317 17:42:20.096789 2574 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:42:20.097156 kubelet[2574]: E0317 17:42:20.097128 2574 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:42:20.097156 kubelet[2574]: W0317 17:42:20.097145 2574 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:42:20.097235 kubelet[2574]: E0317 17:42:20.097167 2574 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 17:42:20.097530 kubelet[2574]: E0317 17:42:20.097499 2574 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:42:20.097565 kubelet[2574]: W0317 17:42:20.097529 2574 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:42:20.097565 kubelet[2574]: E0317 17:42:20.097559 2574 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:42:20.097802 kubelet[2574]: E0317 17:42:20.097785 2574 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:42:20.097802 kubelet[2574]: W0317 17:42:20.097798 2574 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:42:20.097864 kubelet[2574]: E0317 17:42:20.097812 2574 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 17:42:20.098045 kubelet[2574]: E0317 17:42:20.098029 2574 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:42:20.098045 kubelet[2574]: W0317 17:42:20.098041 2574 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:42:20.098119 kubelet[2574]: E0317 17:42:20.098057 2574 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:42:20.098344 kubelet[2574]: E0317 17:42:20.098321 2574 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:42:20.098344 kubelet[2574]: W0317 17:42:20.098334 2574 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:42:20.098422 kubelet[2574]: E0317 17:42:20.098353 2574 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 17:42:20.098780 kubelet[2574]: E0317 17:42:20.098750 2574 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:42:20.098780 kubelet[2574]: W0317 17:42:20.098770 2574 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:42:20.098844 kubelet[2574]: E0317 17:42:20.098791 2574 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:42:20.099040 kubelet[2574]: E0317 17:42:20.099018 2574 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:42:20.099040 kubelet[2574]: W0317 17:42:20.099032 2574 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:42:20.099126 kubelet[2574]: E0317 17:42:20.099049 2574 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 17:42:20.099316 kubelet[2574]: E0317 17:42:20.099301 2574 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:42:20.099316 kubelet[2574]: W0317 17:42:20.099314 2574 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:42:20.099370 kubelet[2574]: E0317 17:42:20.099329 2574 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:42:20.099577 kubelet[2574]: E0317 17:42:20.099554 2574 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:42:20.099577 kubelet[2574]: W0317 17:42:20.099569 2574 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:42:20.099656 kubelet[2574]: E0317 17:42:20.099588 2574 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 17:42:20.099861 kubelet[2574]: E0317 17:42:20.099842 2574 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:42:20.099861 kubelet[2574]: W0317 17:42:20.099856 2574 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:42:20.099982 kubelet[2574]: E0317 17:42:20.099961 2574 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:42:20.100130 kubelet[2574]: E0317 17:42:20.100096 2574 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:42:20.100130 kubelet[2574]: W0317 17:42:20.100111 2574 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:42:20.100367 kubelet[2574]: E0317 17:42:20.100148 2574 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 17:42:20.100367 kubelet[2574]: E0317 17:42:20.100345 2574 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:42:20.100367 kubelet[2574]: W0317 17:42:20.100355 2574 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:42:20.100367 kubelet[2574]: E0317 17:42:20.100379 2574 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:42:20.100640 kubelet[2574]: E0317 17:42:20.100607 2574 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:42:20.100640 kubelet[2574]: W0317 17:42:20.100637 2574 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:42:20.100731 kubelet[2574]: E0317 17:42:20.100652 2574 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 17:42:20.100963 kubelet[2574]: E0317 17:42:20.100938 2574 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:42:20.100963 kubelet[2574]: W0317 17:42:20.100959 2574 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:42:20.101052 kubelet[2574]: E0317 17:42:20.100985 2574 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:42:20.101424 kubelet[2574]: E0317 17:42:20.101397 2574 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:42:20.101424 kubelet[2574]: W0317 17:42:20.101413 2574 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:42:20.101499 kubelet[2574]: E0317 17:42:20.101432 2574 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 17:42:20.101696 kubelet[2574]: E0317 17:42:20.101677 2574 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:42:20.101696 kubelet[2574]: W0317 17:42:20.101694 2574 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:42:20.101779 kubelet[2574]: E0317 17:42:20.101710 2574 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:42:20.101960 kubelet[2574]: E0317 17:42:20.101945 2574 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:42:20.101960 kubelet[2574]: W0317 17:42:20.101958 2574 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:42:20.102045 kubelet[2574]: E0317 17:42:20.101969 2574 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 17:42:21.934869 kubelet[2574]: E0317 17:42:21.934807 2574 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-lsjhx" podUID="c4ec38d1-6c2f-488c-8f17-eb6d6aa4990d" Mar 17 17:42:22.307930 containerd[1471]: time="2025-03-17T17:42:22.307848039Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:42:22.308713 containerd[1471]: time="2025-03-17T17:42:22.308632432Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.2: active requests=0, bytes read=5364011" Mar 17 17:42:22.309955 containerd[1471]: time="2025-03-17T17:42:22.309904128Z" level=info msg="ImageCreate event name:\"sha256:441bf8ace5b7fa3742b7fafaf6cd60fea340dd307169a18c75a1d78cba3a8365\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:42:22.312362 containerd[1471]: time="2025-03-17T17:42:22.312302025Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:51d9341a4a37e278a906f40ecc73f5076e768612c21621f1b1d4f2b2f0735a1d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:42:22.313090 containerd[1471]: time="2025-03-17T17:42:22.313038781Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.2\" with image id \"sha256:441bf8ace5b7fa3742b7fafaf6cd60fea340dd307169a18c75a1d78cba3a8365\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.2\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:51d9341a4a37e278a906f40ecc73f5076e768612c21621f1b1d4f2b2f0735a1d\", size \"6857075\" in 3.657283499s" Mar 17 17:42:22.313151 containerd[1471]: time="2025-03-17T17:42:22.313098011Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.2\" returns image reference \"sha256:441bf8ace5b7fa3742b7fafaf6cd60fea340dd307169a18c75a1d78cba3a8365\"" Mar 17 17:42:22.315307 containerd[1471]: time="2025-03-17T17:42:22.315277068Z" level=info msg="CreateContainer within sandbox \"85cf3996f5fc2b1c773d089f1e9003eacbde469cb850cd6ba0a2eb61c2d7d06b\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Mar 17 17:42:22.417385 containerd[1471]: time="2025-03-17T17:42:22.417320243Z" level=info msg="CreateContainer within sandbox \"85cf3996f5fc2b1c773d089f1e9003eacbde469cb850cd6ba0a2eb61c2d7d06b\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"71882f0a0ad5b58d718b9e9d773e628a0bc342082225facea0335d5b67698500\"" Mar 17 17:42:22.419110 containerd[1471]: time="2025-03-17T17:42:22.417987580Z" level=info msg="StartContainer for \"71882f0a0ad5b58d718b9e9d773e628a0bc342082225facea0335d5b67698500\"" Mar 17 17:42:22.447851 systemd[1]: run-containerd-runc-k8s.io-71882f0a0ad5b58d718b9e9d773e628a0bc342082225facea0335d5b67698500-runc.eyti8o.mount: Deactivated successfully. Mar 17 17:42:22.457209 systemd[1]: Started cri-containerd-71882f0a0ad5b58d718b9e9d773e628a0bc342082225facea0335d5b67698500.scope - libcontainer container 71882f0a0ad5b58d718b9e9d773e628a0bc342082225facea0335d5b67698500. Mar 17 17:42:22.503317 systemd[1]: cri-containerd-71882f0a0ad5b58d718b9e9d773e628a0bc342082225facea0335d5b67698500.scope: Deactivated successfully. 
Mar 17 17:42:22.559209 containerd[1471]: time="2025-03-17T17:42:22.558959286Z" level=info msg="StartContainer for \"71882f0a0ad5b58d718b9e9d773e628a0bc342082225facea0335d5b67698500\" returns successfully" Mar 17 17:42:22.997281 containerd[1471]: time="2025-03-17T17:42:22.997206076Z" level=info msg="shim disconnected" id=71882f0a0ad5b58d718b9e9d773e628a0bc342082225facea0335d5b67698500 namespace=k8s.io Mar 17 17:42:22.997281 containerd[1471]: time="2025-03-17T17:42:22.997269443Z" level=warning msg="cleaning up after shim disconnected" id=71882f0a0ad5b58d718b9e9d773e628a0bc342082225facea0335d5b67698500 namespace=k8s.io Mar 17 17:42:22.997281 containerd[1471]: time="2025-03-17T17:42:22.997279124Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 17 17:42:23.065594 kubelet[2574]: E0317 17:42:23.065558 2574 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:42:23.066980 containerd[1471]: time="2025-03-17T17:42:23.066922130Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.2\"" Mar 17 17:42:23.082991 kubelet[2574]: I0317 17:42:23.082903 2574 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-6478f5dd9d-wxsjc" podStartSLOduration=5.485103726 podStartE2EDuration="8.082882848s" podCreationTimestamp="2025-03-17 17:42:15 +0000 UTC" firstStartedPulling="2025-03-17 17:42:16.057738049 +0000 UTC m=+13.218159537" lastFinishedPulling="2025-03-17 17:42:18.655517151 +0000 UTC m=+15.815938659" observedRunningTime="2025-03-17 17:42:19.27484329 +0000 UTC m=+16.435264778" watchObservedRunningTime="2025-03-17 17:42:23.082882848 +0000 UTC m=+20.243304336" Mar 17 17:42:23.412637 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-71882f0a0ad5b58d718b9e9d773e628a0bc342082225facea0335d5b67698500-rootfs.mount: Deactivated successfully. 
Mar 17 17:42:23.935456 kubelet[2574]: E0317 17:42:23.935381 2574 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-lsjhx" podUID="c4ec38d1-6c2f-488c-8f17-eb6d6aa4990d" Mar 17 17:42:25.934892 kubelet[2574]: E0317 17:42:25.934830 2574 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-lsjhx" podUID="c4ec38d1-6c2f-488c-8f17-eb6d6aa4990d" Mar 17 17:42:27.893079 containerd[1471]: time="2025-03-17T17:42:27.892984275Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:42:27.893957 containerd[1471]: time="2025-03-17T17:42:27.893887320Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.2: active requests=0, bytes read=97781477" Mar 17 17:42:27.895302 containerd[1471]: time="2025-03-17T17:42:27.895249332Z" level=info msg="ImageCreate event name:\"sha256:cda13293c895a8a3b06c1e190b70fb6fe61036db2e59764036fc6e65ec374693\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:42:27.897871 containerd[1471]: time="2025-03-17T17:42:27.897823408Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:890e1db6ae363695cfc23ffae4d612cc85cdd99d759bd539af6683969d0c3c25\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:42:27.898643 containerd[1471]: time="2025-03-17T17:42:27.898606603Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.2\" with image id \"sha256:cda13293c895a8a3b06c1e190b70fb6fe61036db2e59764036fc6e65ec374693\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.2\", repo 
digest \"ghcr.io/flatcar/calico/cni@sha256:890e1db6ae363695cfc23ffae4d612cc85cdd99d759bd539af6683969d0c3c25\", size \"99274581\" in 4.831631036s" Mar 17 17:42:27.898643 containerd[1471]: time="2025-03-17T17:42:27.898641493Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.2\" returns image reference \"sha256:cda13293c895a8a3b06c1e190b70fb6fe61036db2e59764036fc6e65ec374693\"" Mar 17 17:42:27.900755 containerd[1471]: time="2025-03-17T17:42:27.900719376Z" level=info msg="CreateContainer within sandbox \"85cf3996f5fc2b1c773d089f1e9003eacbde469cb850cd6ba0a2eb61c2d7d06b\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Mar 17 17:42:27.917137 containerd[1471]: time="2025-03-17T17:42:27.917097916Z" level=info msg="CreateContainer within sandbox \"85cf3996f5fc2b1c773d089f1e9003eacbde469cb850cd6ba0a2eb61c2d7d06b\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"81925cc6dd73ae778647d7789431f56361b44836499907225b027e9238b3de49\"" Mar 17 17:42:27.917622 containerd[1471]: time="2025-03-17T17:42:27.917594189Z" level=info msg="StartContainer for \"81925cc6dd73ae778647d7789431f56361b44836499907225b027e9238b3de49\"" Mar 17 17:42:27.934793 kubelet[2574]: E0317 17:42:27.934712 2574 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-lsjhx" podUID="c4ec38d1-6c2f-488c-8f17-eb6d6aa4990d" Mar 17 17:42:27.952222 systemd[1]: Started cri-containerd-81925cc6dd73ae778647d7789431f56361b44836499907225b027e9238b3de49.scope - libcontainer container 81925cc6dd73ae778647d7789431f56361b44836499907225b027e9238b3de49. 
Mar 17 17:42:27.986453 containerd[1471]: time="2025-03-17T17:42:27.986404576Z" level=info msg="StartContainer for \"81925cc6dd73ae778647d7789431f56361b44836499907225b027e9238b3de49\" returns successfully" Mar 17 17:42:28.076566 kubelet[2574]: E0317 17:42:28.076466 2574 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:42:29.002537 systemd[1]: cri-containerd-81925cc6dd73ae778647d7789431f56361b44836499907225b027e9238b3de49.scope: Deactivated successfully. Mar 17 17:42:29.029346 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-81925cc6dd73ae778647d7789431f56361b44836499907225b027e9238b3de49-rootfs.mount: Deactivated successfully. Mar 17 17:42:29.063758 kubelet[2574]: I0317 17:42:29.063709 2574 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Mar 17 17:42:29.178098 kubelet[2574]: E0317 17:42:29.177629 2574 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:42:29.865419 containerd[1471]: time="2025-03-17T17:42:29.865333544Z" level=info msg="shim disconnected" id=81925cc6dd73ae778647d7789431f56361b44836499907225b027e9238b3de49 namespace=k8s.io Mar 17 17:42:29.865419 containerd[1471]: time="2025-03-17T17:42:29.865409556Z" level=warning msg="cleaning up after shim disconnected" id=81925cc6dd73ae778647d7789431f56361b44836499907225b027e9238b3de49 namespace=k8s.io Mar 17 17:42:29.865419 containerd[1471]: time="2025-03-17T17:42:29.865422661Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 17 17:42:29.866946 systemd[1]: Created slice kubepods-burstable-pod5d480ab2_f501_4510_9ec1_d051e760e88d.slice - libcontainer container kubepods-burstable-pod5d480ab2_f501_4510_9ec1_d051e760e88d.slice. 
Mar 17 17:42:29.875358 systemd[1]: Created slice kubepods-besteffort-podd2c44e13_0cac_42de_9897_344a553902e4.slice - libcontainer container kubepods-besteffort-podd2c44e13_0cac_42de_9897_344a553902e4.slice. Mar 17 17:42:29.879804 kubelet[2574]: I0317 17:42:29.879407 2574 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gbqwm\" (UniqueName: \"kubernetes.io/projected/7b8c3717-fa53-41b7-bf24-e6ae52b8b921-kube-api-access-gbqwm\") pod \"coredns-6f6b679f8f-hg76m\" (UID: \"7b8c3717-fa53-41b7-bf24-e6ae52b8b921\") " pod="kube-system/coredns-6f6b679f8f-hg76m" Mar 17 17:42:29.879804 kubelet[2574]: I0317 17:42:29.879441 2574 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cmh6m\" (UniqueName: \"kubernetes.io/projected/e61bbec9-65b1-4228-aa40-669eba7841ea-kube-api-access-cmh6m\") pod \"calico-apiserver-b6b5f678f-lhw82\" (UID: \"e61bbec9-65b1-4228-aa40-669eba7841ea\") " pod="calico-apiserver/calico-apiserver-b6b5f678f-lhw82" Mar 17 17:42:29.879804 kubelet[2574]: I0317 17:42:29.879459 2574 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5d480ab2-f501-4510-9ec1-d051e760e88d-config-volume\") pod \"coredns-6f6b679f8f-b7tm9\" (UID: \"5d480ab2-f501-4510-9ec1-d051e760e88d\") " pod="kube-system/coredns-6f6b679f8f-b7tm9" Mar 17 17:42:29.879804 kubelet[2574]: I0317 17:42:29.879476 2574 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-58c8m\" (UniqueName: \"kubernetes.io/projected/5d480ab2-f501-4510-9ec1-d051e760e88d-kube-api-access-58c8m\") pod \"coredns-6f6b679f8f-b7tm9\" (UID: \"5d480ab2-f501-4510-9ec1-d051e760e88d\") " pod="kube-system/coredns-6f6b679f8f-b7tm9" Mar 17 17:42:29.879804 kubelet[2574]: I0317 17:42:29.879494 2574 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/e61bbec9-65b1-4228-aa40-669eba7841ea-calico-apiserver-certs\") pod \"calico-apiserver-b6b5f678f-lhw82\" (UID: \"e61bbec9-65b1-4228-aa40-669eba7841ea\") " pod="calico-apiserver/calico-apiserver-b6b5f678f-lhw82" Mar 17 17:42:29.880508 kubelet[2574]: I0317 17:42:29.879511 2574 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d2c44e13-0cac-42de-9897-344a553902e4-tigera-ca-bundle\") pod \"calico-kube-controllers-8b46cd865-7kxzr\" (UID: \"d2c44e13-0cac-42de-9897-344a553902e4\") " pod="calico-system/calico-kube-controllers-8b46cd865-7kxzr" Mar 17 17:42:29.880508 kubelet[2574]: I0317 17:42:29.879529 2574 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/e2974f51-cab2-451b-8e3d-274fab1b872e-calico-apiserver-certs\") pod \"calico-apiserver-b6b5f678f-v8plm\" (UID: \"e2974f51-cab2-451b-8e3d-274fab1b872e\") " pod="calico-apiserver/calico-apiserver-b6b5f678f-v8plm" Mar 17 17:42:29.880508 kubelet[2574]: I0317 17:42:29.879545 2574 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7b8c3717-fa53-41b7-bf24-e6ae52b8b921-config-volume\") pod \"coredns-6f6b679f8f-hg76m\" (UID: \"7b8c3717-fa53-41b7-bf24-e6ae52b8b921\") " pod="kube-system/coredns-6f6b679f8f-hg76m" Mar 17 17:42:29.880508 kubelet[2574]: I0317 17:42:29.879562 2574 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sbwn2\" (UniqueName: \"kubernetes.io/projected/e2974f51-cab2-451b-8e3d-274fab1b872e-kube-api-access-sbwn2\") pod \"calico-apiserver-b6b5f678f-v8plm\" (UID: \"e2974f51-cab2-451b-8e3d-274fab1b872e\") " 
pod="calico-apiserver/calico-apiserver-b6b5f678f-v8plm" Mar 17 17:42:29.880508 kubelet[2574]: I0317 17:42:29.879578 2574 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pq9cj\" (UniqueName: \"kubernetes.io/projected/d2c44e13-0cac-42de-9897-344a553902e4-kube-api-access-pq9cj\") pod \"calico-kube-controllers-8b46cd865-7kxzr\" (UID: \"d2c44e13-0cac-42de-9897-344a553902e4\") " pod="calico-system/calico-kube-controllers-8b46cd865-7kxzr" Mar 17 17:42:29.883214 systemd[1]: Created slice kubepods-burstable-pod7b8c3717_fa53_41b7_bf24_e6ae52b8b921.slice - libcontainer container kubepods-burstable-pod7b8c3717_fa53_41b7_bf24_e6ae52b8b921.slice. Mar 17 17:42:29.888390 systemd[1]: Created slice kubepods-besteffort-pode61bbec9_65b1_4228_aa40_669eba7841ea.slice - libcontainer container kubepods-besteffort-pode61bbec9_65b1_4228_aa40_669eba7841ea.slice. Mar 17 17:42:29.892827 systemd[1]: Created slice kubepods-besteffort-pode2974f51_cab2_451b_8e3d_274fab1b872e.slice - libcontainer container kubepods-besteffort-pode2974f51_cab2_451b_8e3d_274fab1b872e.slice. Mar 17 17:42:29.939968 systemd[1]: Created slice kubepods-besteffort-podc4ec38d1_6c2f_488c_8f17_eb6d6aa4990d.slice - libcontainer container kubepods-besteffort-podc4ec38d1_6c2f_488c_8f17_eb6d6aa4990d.slice. 
Mar 17 17:42:29.956306 containerd[1471]: time="2025-03-17T17:42:29.956256388Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-lsjhx,Uid:c4ec38d1-6c2f-488c-8f17-eb6d6aa4990d,Namespace:calico-system,Attempt:0,}" Mar 17 17:42:30.172720 kubelet[2574]: E0317 17:42:30.172574 2574 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:42:30.173796 containerd[1471]: time="2025-03-17T17:42:30.173650274Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-b7tm9,Uid:5d480ab2-f501-4510-9ec1-d051e760e88d,Namespace:kube-system,Attempt:0,}" Mar 17 17:42:30.180448 kubelet[2574]: E0317 17:42:30.180421 2574 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:42:30.181044 containerd[1471]: time="2025-03-17T17:42:30.181015354Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.2\"" Mar 17 17:42:30.181260 containerd[1471]: time="2025-03-17T17:42:30.181222176Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-8b46cd865-7kxzr,Uid:d2c44e13-0cac-42de-9897-344a553902e4,Namespace:calico-system,Attempt:0,}" Mar 17 17:42:30.185506 kubelet[2574]: E0317 17:42:30.185480 2574 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:42:30.185965 containerd[1471]: time="2025-03-17T17:42:30.185801604Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-hg76m,Uid:7b8c3717-fa53-41b7-bf24-e6ae52b8b921,Namespace:kube-system,Attempt:0,}" Mar 17 17:42:30.191635 containerd[1471]: time="2025-03-17T17:42:30.191601572Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-b6b5f678f-lhw82,Uid:e61bbec9-65b1-4228-aa40-669eba7841ea,Namespace:calico-apiserver,Attempt:0,}" Mar 17 17:42:30.195163 containerd[1471]: time="2025-03-17T17:42:30.195131963Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-b6b5f678f-v8plm,Uid:e2974f51-cab2-451b-8e3d-274fab1b872e,Namespace:calico-apiserver,Attempt:0,}" Mar 17 17:42:31.839090 containerd[1471]: time="2025-03-17T17:42:31.839009223Z" level=error msg="Failed to destroy network for sandbox \"9b304f912bb3f9e71af30d53d5c7f3e243f23d0daf308c489c9401f6c28de7dd\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:42:31.839572 containerd[1471]: time="2025-03-17T17:42:31.839406683Z" level=error msg="encountered an error cleaning up failed sandbox \"9b304f912bb3f9e71af30d53d5c7f3e243f23d0daf308c489c9401f6c28de7dd\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:42:31.839572 containerd[1471]: time="2025-03-17T17:42:31.839460701Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-lsjhx,Uid:c4ec38d1-6c2f-488c-8f17-eb6d6aa4990d,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"9b304f912bb3f9e71af30d53d5c7f3e243f23d0daf308c489c9401f6c28de7dd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:42:31.839716 kubelet[2574]: E0317 17:42:31.839666 2574 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"9b304f912bb3f9e71af30d53d5c7f3e243f23d0daf308c489c9401f6c28de7dd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:42:31.840057 kubelet[2574]: E0317 17:42:31.839741 2574 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9b304f912bb3f9e71af30d53d5c7f3e243f23d0daf308c489c9401f6c28de7dd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-lsjhx" Mar 17 17:42:31.840057 kubelet[2574]: E0317 17:42:31.839760 2574 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9b304f912bb3f9e71af30d53d5c7f3e243f23d0daf308c489c9401f6c28de7dd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-lsjhx" Mar 17 17:42:31.840057 kubelet[2574]: E0317 17:42:31.839798 2574 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-lsjhx_calico-system(c4ec38d1-6c2f-488c-8f17-eb6d6aa4990d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-lsjhx_calico-system(c4ec38d1-6c2f-488c-8f17-eb6d6aa4990d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9b304f912bb3f9e71af30d53d5c7f3e243f23d0daf308c489c9401f6c28de7dd\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-lsjhx" 
podUID="c4ec38d1-6c2f-488c-8f17-eb6d6aa4990d" Mar 17 17:42:31.841336 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-9b304f912bb3f9e71af30d53d5c7f3e243f23d0daf308c489c9401f6c28de7dd-shm.mount: Deactivated successfully. Mar 17 17:42:32.026656 containerd[1471]: time="2025-03-17T17:42:32.026593911Z" level=error msg="Failed to destroy network for sandbox \"edb2903b482639be597b82a24a152d41c24f51e7acac61283260a1d0100a5d4c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:42:32.028991 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-edb2903b482639be597b82a24a152d41c24f51e7acac61283260a1d0100a5d4c-shm.mount: Deactivated successfully. Mar 17 17:42:32.029385 containerd[1471]: time="2025-03-17T17:42:32.029354425Z" level=error msg="encountered an error cleaning up failed sandbox \"edb2903b482639be597b82a24a152d41c24f51e7acac61283260a1d0100a5d4c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:42:32.029483 containerd[1471]: time="2025-03-17T17:42:32.029457078Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-b7tm9,Uid:5d480ab2-f501-4510-9ec1-d051e760e88d,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"edb2903b482639be597b82a24a152d41c24f51e7acac61283260a1d0100a5d4c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:42:32.029742 kubelet[2574]: E0317 17:42:32.029699 2574 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"edb2903b482639be597b82a24a152d41c24f51e7acac61283260a1d0100a5d4c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:42:32.029873 kubelet[2574]: E0317 17:42:32.029759 2574 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"edb2903b482639be597b82a24a152d41c24f51e7acac61283260a1d0100a5d4c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-b7tm9" Mar 17 17:42:32.029873 kubelet[2574]: E0317 17:42:32.029780 2574 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"edb2903b482639be597b82a24a152d41c24f51e7acac61283260a1d0100a5d4c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-b7tm9" Mar 17 17:42:32.029873 kubelet[2574]: E0317 17:42:32.029821 2574 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-b7tm9_kube-system(5d480ab2-f501-4510-9ec1-d051e760e88d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-b7tm9_kube-system(5d480ab2-f501-4510-9ec1-d051e760e88d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"edb2903b482639be597b82a24a152d41c24f51e7acac61283260a1d0100a5d4c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-b7tm9" 
podUID="5d480ab2-f501-4510-9ec1-d051e760e88d" Mar 17 17:42:32.190221 kubelet[2574]: I0317 17:42:32.190100 2574 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="edb2903b482639be597b82a24a152d41c24f51e7acac61283260a1d0100a5d4c" Mar 17 17:42:32.190901 containerd[1471]: time="2025-03-17T17:42:32.190851997Z" level=info msg="StopPodSandbox for \"edb2903b482639be597b82a24a152d41c24f51e7acac61283260a1d0100a5d4c\"" Mar 17 17:42:32.191186 containerd[1471]: time="2025-03-17T17:42:32.191158094Z" level=info msg="Ensure that sandbox edb2903b482639be597b82a24a152d41c24f51e7acac61283260a1d0100a5d4c in task-service has been cleanup successfully" Mar 17 17:42:32.191532 containerd[1471]: time="2025-03-17T17:42:32.191509250Z" level=info msg="TearDown network for sandbox \"edb2903b482639be597b82a24a152d41c24f51e7acac61283260a1d0100a5d4c\" successfully" Mar 17 17:42:32.191584 containerd[1471]: time="2025-03-17T17:42:32.191529580Z" level=info msg="StopPodSandbox for \"edb2903b482639be597b82a24a152d41c24f51e7acac61283260a1d0100a5d4c\" returns successfully" Mar 17 17:42:32.193521 systemd[1]: run-netns-cni\x2d357645ac\x2dc064\x2d1176\x2d445a\x2d36a5e368259d.mount: Deactivated successfully. Mar 17 17:42:32.195391 kubelet[2574]: I0317 17:42:32.195338 2574 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9b304f912bb3f9e71af30d53d5c7f3e243f23d0daf308c489c9401f6c28de7dd" Mar 17 17:42:32.196103 containerd[1471]: time="2025-03-17T17:42:32.195866541Z" level=info msg="StopPodSandbox for \"9b304f912bb3f9e71af30d53d5c7f3e243f23d0daf308c489c9401f6c28de7dd\"" Mar 17 17:42:32.196182 containerd[1471]: time="2025-03-17T17:42:32.196058352Z" level=info msg="Ensure that sandbox 9b304f912bb3f9e71af30d53d5c7f3e243f23d0daf308c489c9401f6c28de7dd in task-service has been cleanup successfully" Mar 17 17:42:32.198225 systemd[1]: run-netns-cni\x2dc00a30f2\x2df137\x2d7942\x2d4363\x2dc8e838b3c5af.mount: Deactivated successfully. 
Mar 17 17:42:32.198999 containerd[1471]: time="2025-03-17T17:42:32.198845700Z" level=info msg="TearDown network for sandbox \"9b304f912bb3f9e71af30d53d5c7f3e243f23d0daf308c489c9401f6c28de7dd\" successfully" Mar 17 17:42:32.198999 containerd[1471]: time="2025-03-17T17:42:32.198874547Z" level=info msg="StopPodSandbox for \"9b304f912bb3f9e71af30d53d5c7f3e243f23d0daf308c489c9401f6c28de7dd\" returns successfully" Mar 17 17:42:32.204091 containerd[1471]: time="2025-03-17T17:42:32.204024750Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-lsjhx,Uid:c4ec38d1-6c2f-488c-8f17-eb6d6aa4990d,Namespace:calico-system,Attempt:1,}" Mar 17 17:42:32.204342 kubelet[2574]: E0317 17:42:32.204316 2574 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:42:32.204629 containerd[1471]: time="2025-03-17T17:42:32.204585612Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-b7tm9,Uid:5d480ab2-f501-4510-9ec1-d051e760e88d,Namespace:kube-system,Attempt:1,}" Mar 17 17:42:32.560694 containerd[1471]: time="2025-03-17T17:42:32.560631969Z" level=error msg="Failed to destroy network for sandbox \"c8fa0e68e50c704cec6afec4727d369644d559c68bcae3c557439825300e22db\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:42:32.561163 containerd[1471]: time="2025-03-17T17:42:32.561126319Z" level=error msg="encountered an error cleaning up failed sandbox \"c8fa0e68e50c704cec6afec4727d369644d559c68bcae3c557439825300e22db\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:42:32.561225 
containerd[1471]: time="2025-03-17T17:42:32.561193032Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-8b46cd865-7kxzr,Uid:d2c44e13-0cac-42de-9897-344a553902e4,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"c8fa0e68e50c704cec6afec4727d369644d559c68bcae3c557439825300e22db\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:42:32.561468 kubelet[2574]: E0317 17:42:32.561428 2574 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c8fa0e68e50c704cec6afec4727d369644d559c68bcae3c557439825300e22db\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:42:32.561522 kubelet[2574]: E0317 17:42:32.561496 2574 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c8fa0e68e50c704cec6afec4727d369644d559c68bcae3c557439825300e22db\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-8b46cd865-7kxzr" Mar 17 17:42:32.561550 kubelet[2574]: E0317 17:42:32.561518 2574 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c8fa0e68e50c704cec6afec4727d369644d559c68bcae3c557439825300e22db\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-8b46cd865-7kxzr" Mar 
17 17:42:32.561590 kubelet[2574]: E0317 17:42:32.561562 2574 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-8b46cd865-7kxzr_calico-system(d2c44e13-0cac-42de-9897-344a553902e4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-8b46cd865-7kxzr_calico-system(d2c44e13-0cac-42de-9897-344a553902e4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c8fa0e68e50c704cec6afec4727d369644d559c68bcae3c557439825300e22db\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-8b46cd865-7kxzr" podUID="d2c44e13-0cac-42de-9897-344a553902e4" Mar 17 17:42:32.686229 containerd[1471]: time="2025-03-17T17:42:32.686150532Z" level=error msg="Failed to destroy network for sandbox \"029c4006ecd1c4f4a1ac53934cfb4e4722c8c52eefcf8cff976bcd131fe5d151\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:42:32.686724 containerd[1471]: time="2025-03-17T17:42:32.686672378Z" level=error msg="encountered an error cleaning up failed sandbox \"029c4006ecd1c4f4a1ac53934cfb4e4722c8c52eefcf8cff976bcd131fe5d151\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:42:32.686866 containerd[1471]: time="2025-03-17T17:42:32.686762946Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-b6b5f678f-lhw82,Uid:e61bbec9-65b1-4228-aa40-669eba7841ea,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox 
\"029c4006ecd1c4f4a1ac53934cfb4e4722c8c52eefcf8cff976bcd131fe5d151\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:42:32.687109 kubelet[2574]: E0317 17:42:32.687026 2574 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"029c4006ecd1c4f4a1ac53934cfb4e4722c8c52eefcf8cff976bcd131fe5d151\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:42:32.687169 kubelet[2574]: E0317 17:42:32.687125 2574 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"029c4006ecd1c4f4a1ac53934cfb4e4722c8c52eefcf8cff976bcd131fe5d151\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-b6b5f678f-lhw82" Mar 17 17:42:32.687169 kubelet[2574]: E0317 17:42:32.687158 2574 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"029c4006ecd1c4f4a1ac53934cfb4e4722c8c52eefcf8cff976bcd131fe5d151\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-b6b5f678f-lhw82" Mar 17 17:42:32.687270 kubelet[2574]: E0317 17:42:32.687217 2574 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-b6b5f678f-lhw82_calico-apiserver(e61bbec9-65b1-4228-aa40-669eba7841ea)\" with CreatePodSandboxError: \"Failed to create sandbox 
for pod \\\"calico-apiserver-b6b5f678f-lhw82_calico-apiserver(e61bbec9-65b1-4228-aa40-669eba7841ea)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"029c4006ecd1c4f4a1ac53934cfb4e4722c8c52eefcf8cff976bcd131fe5d151\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-b6b5f678f-lhw82" podUID="e61bbec9-65b1-4228-aa40-669eba7841ea" Mar 17 17:42:32.717404 containerd[1471]: time="2025-03-17T17:42:32.717334132Z" level=error msg="Failed to destroy network for sandbox \"1a12983a8841ddeac31ec562c0ce3b12e75f4c40488315d12dd610812c63ac70\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:42:32.717960 containerd[1471]: time="2025-03-17T17:42:32.717909082Z" level=error msg="encountered an error cleaning up failed sandbox \"1a12983a8841ddeac31ec562c0ce3b12e75f4c40488315d12dd610812c63ac70\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:42:32.718026 containerd[1471]: time="2025-03-17T17:42:32.718003289Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-b6b5f678f-v8plm,Uid:e2974f51-cab2-451b-8e3d-274fab1b872e,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"1a12983a8841ddeac31ec562c0ce3b12e75f4c40488315d12dd610812c63ac70\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:42:32.718348 kubelet[2574]: E0317 17:42:32.718305 2574 log.go:32] 
"RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1a12983a8841ddeac31ec562c0ce3b12e75f4c40488315d12dd610812c63ac70\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:42:32.718408 kubelet[2574]: E0317 17:42:32.718376 2574 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1a12983a8841ddeac31ec562c0ce3b12e75f4c40488315d12dd610812c63ac70\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-b6b5f678f-v8plm" Mar 17 17:42:32.718408 kubelet[2574]: E0317 17:42:32.718398 2574 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1a12983a8841ddeac31ec562c0ce3b12e75f4c40488315d12dd610812c63ac70\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-b6b5f678f-v8plm" Mar 17 17:42:32.718477 kubelet[2574]: E0317 17:42:32.718447 2574 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-b6b5f678f-v8plm_calico-apiserver(e2974f51-cab2-451b-8e3d-274fab1b872e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-b6b5f678f-v8plm_calico-apiserver(e2974f51-cab2-451b-8e3d-274fab1b872e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1a12983a8841ddeac31ec562c0ce3b12e75f4c40488315d12dd610812c63ac70\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no 
such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-b6b5f678f-v8plm" podUID="e2974f51-cab2-451b-8e3d-274fab1b872e" Mar 17 17:42:33.059026 containerd[1471]: time="2025-03-17T17:42:33.058979053Z" level=error msg="Failed to destroy network for sandbox \"2e196fcd9f9243b223018639d31a93ff3f9775e305ad1408354381cea90249e5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:42:33.059478 containerd[1471]: time="2025-03-17T17:42:33.059395608Z" level=error msg="encountered an error cleaning up failed sandbox \"2e196fcd9f9243b223018639d31a93ff3f9775e305ad1408354381cea90249e5\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:42:33.059478 containerd[1471]: time="2025-03-17T17:42:33.059446709Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-hg76m,Uid:7b8c3717-fa53-41b7-bf24-e6ae52b8b921,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"2e196fcd9f9243b223018639d31a93ff3f9775e305ad1408354381cea90249e5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:42:33.059695 kubelet[2574]: E0317 17:42:33.059642 2574 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2e196fcd9f9243b223018639d31a93ff3f9775e305ad1408354381cea90249e5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and 
has mounted /var/lib/calico/" Mar 17 17:42:33.059953 kubelet[2574]: E0317 17:42:33.059719 2574 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2e196fcd9f9243b223018639d31a93ff3f9775e305ad1408354381cea90249e5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-hg76m" Mar 17 17:42:33.059953 kubelet[2574]: E0317 17:42:33.059749 2574 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2e196fcd9f9243b223018639d31a93ff3f9775e305ad1408354381cea90249e5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-hg76m" Mar 17 17:42:33.059953 kubelet[2574]: E0317 17:42:33.059840 2574 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-hg76m_kube-system(7b8c3717-fa53-41b7-bf24-e6ae52b8b921)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-hg76m_kube-system(7b8c3717-fa53-41b7-bf24-e6ae52b8b921)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2e196fcd9f9243b223018639d31a93ff3f9775e305ad1408354381cea90249e5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-hg76m" podUID="7b8c3717-fa53-41b7-bf24-e6ae52b8b921" Mar 17 17:42:33.195975 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-029c4006ecd1c4f4a1ac53934cfb4e4722c8c52eefcf8cff976bcd131fe5d151-shm.mount: Deactivated successfully. 
Mar 17 17:42:33.196105 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-c8fa0e68e50c704cec6afec4727d369644d559c68bcae3c557439825300e22db-shm.mount: Deactivated successfully. Mar 17 17:42:33.201836 kubelet[2574]: I0317 17:42:33.201735 2574 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1a12983a8841ddeac31ec562c0ce3b12e75f4c40488315d12dd610812c63ac70" Mar 17 17:42:33.207420 containerd[1471]: time="2025-03-17T17:42:33.206900947Z" level=info msg="StopPodSandbox for \"1a12983a8841ddeac31ec562c0ce3b12e75f4c40488315d12dd610812c63ac70\"" Mar 17 17:42:33.207420 containerd[1471]: time="2025-03-17T17:42:33.207181684Z" level=info msg="Ensure that sandbox 1a12983a8841ddeac31ec562c0ce3b12e75f4c40488315d12dd610812c63ac70 in task-service has been cleanup successfully" Mar 17 17:42:33.209765 kubelet[2574]: I0317 17:42:33.209123 2574 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="029c4006ecd1c4f4a1ac53934cfb4e4722c8c52eefcf8cff976bcd131fe5d151" Mar 17 17:42:33.209685 systemd[1]: run-netns-cni\x2d0a7d90ed\x2df855\x2dceb5\x2d6b3e\x2d979c81a8a81f.mount: Deactivated successfully. 
Mar 17 17:42:33.210190 containerd[1471]: time="2025-03-17T17:42:33.210157686Z" level=info msg="StopPodSandbox for \"029c4006ecd1c4f4a1ac53934cfb4e4722c8c52eefcf8cff976bcd131fe5d151\"" Mar 17 17:42:33.210553 containerd[1471]: time="2025-03-17T17:42:33.210527929Z" level=info msg="Ensure that sandbox 029c4006ecd1c4f4a1ac53934cfb4e4722c8c52eefcf8cff976bcd131fe5d151 in task-service has been cleanup successfully" Mar 17 17:42:33.211498 containerd[1471]: time="2025-03-17T17:42:33.211472971Z" level=info msg="TearDown network for sandbox \"1a12983a8841ddeac31ec562c0ce3b12e75f4c40488315d12dd610812c63ac70\" successfully" Mar 17 17:42:33.212217 containerd[1471]: time="2025-03-17T17:42:33.212192135Z" level=info msg="StopPodSandbox for \"1a12983a8841ddeac31ec562c0ce3b12e75f4c40488315d12dd610812c63ac70\" returns successfully" Mar 17 17:42:33.213006 containerd[1471]: time="2025-03-17T17:42:33.212967690Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-b6b5f678f-v8plm,Uid:e2974f51-cab2-451b-8e3d-274fab1b872e,Namespace:calico-apiserver,Attempt:1,}" Mar 17 17:42:33.213722 containerd[1471]: time="2025-03-17T17:42:33.213230952Z" level=info msg="TearDown network for sandbox \"029c4006ecd1c4f4a1ac53934cfb4e4722c8c52eefcf8cff976bcd131fe5d151\" successfully" Mar 17 17:42:33.213854 containerd[1471]: time="2025-03-17T17:42:33.213829637Z" level=info msg="StopPodSandbox for \"029c4006ecd1c4f4a1ac53934cfb4e4722c8c52eefcf8cff976bcd131fe5d151\" returns successfully" Mar 17 17:42:33.214468 containerd[1471]: time="2025-03-17T17:42:33.214442682Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-b6b5f678f-lhw82,Uid:e61bbec9-65b1-4228-aa40-669eba7841ea,Namespace:calico-apiserver,Attempt:1,}" Mar 17 17:42:33.215676 kubelet[2574]: I0317 17:42:33.215254 2574 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2e196fcd9f9243b223018639d31a93ff3f9775e305ad1408354381cea90249e5" Mar 17 17:42:33.215374 systemd[1]: 
run-netns-cni\x2dc48c166d\x2d41d5\x2d4cec\x2deff3\x2df396a60ee778.mount: Deactivated successfully. Mar 17 17:42:33.216497 containerd[1471]: time="2025-03-17T17:42:33.216469194Z" level=info msg="StopPodSandbox for \"2e196fcd9f9243b223018639d31a93ff3f9775e305ad1408354381cea90249e5\"" Mar 17 17:42:33.218574 kubelet[2574]: I0317 17:42:33.218545 2574 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c8fa0e68e50c704cec6afec4727d369644d559c68bcae3c557439825300e22db" Mar 17 17:42:33.220050 containerd[1471]: time="2025-03-17T17:42:33.219639101Z" level=info msg="Ensure that sandbox 2e196fcd9f9243b223018639d31a93ff3f9775e305ad1408354381cea90249e5 in task-service has been cleanup successfully" Mar 17 17:42:33.221350 containerd[1471]: time="2025-03-17T17:42:33.219746564Z" level=info msg="StopPodSandbox for \"c8fa0e68e50c704cec6afec4727d369644d559c68bcae3c557439825300e22db\"" Mar 17 17:42:33.221350 containerd[1471]: time="2025-03-17T17:42:33.221182087Z" level=info msg="Ensure that sandbox c8fa0e68e50c704cec6afec4727d369644d559c68bcae3c557439825300e22db in task-service has been cleanup successfully" Mar 17 17:42:33.221666 containerd[1471]: time="2025-03-17T17:42:33.221640765Z" level=info msg="TearDown network for sandbox \"2e196fcd9f9243b223018639d31a93ff3f9775e305ad1408354381cea90249e5\" successfully" Mar 17 17:42:33.221794 containerd[1471]: time="2025-03-17T17:42:33.221755021Z" level=info msg="StopPodSandbox for \"2e196fcd9f9243b223018639d31a93ff3f9775e305ad1408354381cea90249e5\" returns successfully" Mar 17 17:42:33.221953 containerd[1471]: time="2025-03-17T17:42:33.221736384Z" level=info msg="TearDown network for sandbox \"c8fa0e68e50c704cec6afec4727d369644d559c68bcae3c557439825300e22db\" successfully" Mar 17 17:42:33.222027 containerd[1471]: time="2025-03-17T17:42:33.222014866Z" level=info msg="StopPodSandbox for \"c8fa0e68e50c704cec6afec4727d369644d559c68bcae3c557439825300e22db\" returns successfully" Mar 17 17:42:33.224641 kubelet[2574]: 
E0317 17:42:33.224603 2574 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:42:33.224735 systemd[1]: run-netns-cni\x2d7d243f1e\x2dbf52\x2dd0a6\x2d3b9a\x2d659b743f9b7c.mount: Deactivated successfully. Mar 17 17:42:33.225029 systemd[1]: run-netns-cni\x2d10d66c35\x2dc319\x2dfb52\x2d7278\x2d90b4049a788b.mount: Deactivated successfully. Mar 17 17:42:33.226919 containerd[1471]: time="2025-03-17T17:42:33.225307466Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-hg76m,Uid:7b8c3717-fa53-41b7-bf24-e6ae52b8b921,Namespace:kube-system,Attempt:1,}" Mar 17 17:42:33.226919 containerd[1471]: time="2025-03-17T17:42:33.225601518Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-8b46cd865-7kxzr,Uid:d2c44e13-0cac-42de-9897-344a553902e4,Namespace:calico-system,Attempt:1,}" Mar 17 17:42:33.282272 containerd[1471]: time="2025-03-17T17:42:33.282125086Z" level=error msg="Failed to destroy network for sandbox \"3200190c87309f125fb6f4db01012c3f8c9172cd686e39de64273fa9716dbc2d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:42:33.282709 containerd[1471]: time="2025-03-17T17:42:33.282682330Z" level=error msg="encountered an error cleaning up failed sandbox \"3200190c87309f125fb6f4db01012c3f8c9172cd686e39de64273fa9716dbc2d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:42:33.282934 containerd[1471]: time="2025-03-17T17:42:33.282873518Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:csi-node-driver-lsjhx,Uid:c4ec38d1-6c2f-488c-8f17-eb6d6aa4990d,Namespace:calico-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"3200190c87309f125fb6f4db01012c3f8c9172cd686e39de64273fa9716dbc2d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:42:33.283532 kubelet[2574]: E0317 17:42:33.283336 2574 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3200190c87309f125fb6f4db01012c3f8c9172cd686e39de64273fa9716dbc2d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:42:33.283532 kubelet[2574]: E0317 17:42:33.283402 2574 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3200190c87309f125fb6f4db01012c3f8c9172cd686e39de64273fa9716dbc2d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-lsjhx" Mar 17 17:42:33.283532 kubelet[2574]: E0317 17:42:33.283428 2574 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3200190c87309f125fb6f4db01012c3f8c9172cd686e39de64273fa9716dbc2d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-lsjhx" Mar 17 17:42:33.283681 kubelet[2574]: E0317 17:42:33.283480 2574 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"CreatePodSandbox\" for \"csi-node-driver-lsjhx_calico-system(c4ec38d1-6c2f-488c-8f17-eb6d6aa4990d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-lsjhx_calico-system(c4ec38d1-6c2f-488c-8f17-eb6d6aa4990d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3200190c87309f125fb6f4db01012c3f8c9172cd686e39de64273fa9716dbc2d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-lsjhx" podUID="c4ec38d1-6c2f-488c-8f17-eb6d6aa4990d" Mar 17 17:42:33.313578 containerd[1471]: time="2025-03-17T17:42:33.313101145Z" level=error msg="Failed to destroy network for sandbox \"a86796bc073539410fecfeb3dec78c8c6ed265998c9643aa1e3a054f1b65a9d5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:42:33.315337 containerd[1471]: time="2025-03-17T17:42:33.315215972Z" level=error msg="encountered an error cleaning up failed sandbox \"a86796bc073539410fecfeb3dec78c8c6ed265998c9643aa1e3a054f1b65a9d5\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:42:33.315548 containerd[1471]: time="2025-03-17T17:42:33.315519773Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-b7tm9,Uid:5d480ab2-f501-4510-9ec1-d051e760e88d,Namespace:kube-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"a86796bc073539410fecfeb3dec78c8c6ed265998c9643aa1e3a054f1b65a9d5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is 
running and has mounted /var/lib/calico/" Mar 17 17:42:33.316298 kubelet[2574]: E0317 17:42:33.315941 2574 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a86796bc073539410fecfeb3dec78c8c6ed265998c9643aa1e3a054f1b65a9d5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:42:33.316298 kubelet[2574]: E0317 17:42:33.316006 2574 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a86796bc073539410fecfeb3dec78c8c6ed265998c9643aa1e3a054f1b65a9d5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-b7tm9" Mar 17 17:42:33.316298 kubelet[2574]: E0317 17:42:33.316026 2574 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a86796bc073539410fecfeb3dec78c8c6ed265998c9643aa1e3a054f1b65a9d5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-b7tm9" Mar 17 17:42:33.316591 kubelet[2574]: E0317 17:42:33.316082 2574 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-b7tm9_kube-system(5d480ab2-f501-4510-9ec1-d051e760e88d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-b7tm9_kube-system(5d480ab2-f501-4510-9ec1-d051e760e88d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a86796bc073539410fecfeb3dec78c8c6ed265998c9643aa1e3a054f1b65a9d5\\\": plugin 
type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-b7tm9" podUID="5d480ab2-f501-4510-9ec1-d051e760e88d" Mar 17 17:42:33.384515 containerd[1471]: time="2025-03-17T17:42:33.384436142Z" level=error msg="Failed to destroy network for sandbox \"752881363b6f73c704f3c8518dab695dfd31be7b9b79cdde65eb0c4d95df3791\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:42:33.384995 containerd[1471]: time="2025-03-17T17:42:33.384832297Z" level=error msg="encountered an error cleaning up failed sandbox \"752881363b6f73c704f3c8518dab695dfd31be7b9b79cdde65eb0c4d95df3791\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:42:33.384995 containerd[1471]: time="2025-03-17T17:42:33.384926854Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-b6b5f678f-lhw82,Uid:e61bbec9-65b1-4228-aa40-669eba7841ea,Namespace:calico-apiserver,Attempt:1,} failed, error" error="failed to setup network for sandbox \"752881363b6f73c704f3c8518dab695dfd31be7b9b79cdde65eb0c4d95df3791\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:42:33.385261 kubelet[2574]: E0317 17:42:33.385216 2574 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"752881363b6f73c704f3c8518dab695dfd31be7b9b79cdde65eb0c4d95df3791\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such 
file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:42:33.385459 kubelet[2574]: E0317 17:42:33.385442 2574 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"752881363b6f73c704f3c8518dab695dfd31be7b9b79cdde65eb0c4d95df3791\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-b6b5f678f-lhw82" Mar 17 17:42:33.385566 kubelet[2574]: E0317 17:42:33.385547 2574 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"752881363b6f73c704f3c8518dab695dfd31be7b9b79cdde65eb0c4d95df3791\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-b6b5f678f-lhw82" Mar 17 17:42:33.385801 kubelet[2574]: E0317 17:42:33.385776 2574 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-b6b5f678f-lhw82_calico-apiserver(e61bbec9-65b1-4228-aa40-669eba7841ea)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-b6b5f678f-lhw82_calico-apiserver(e61bbec9-65b1-4228-aa40-669eba7841ea)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"752881363b6f73c704f3c8518dab695dfd31be7b9b79cdde65eb0c4d95df3791\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-b6b5f678f-lhw82" podUID="e61bbec9-65b1-4228-aa40-669eba7841ea" Mar 17 17:42:33.388216 containerd[1471]: 
time="2025-03-17T17:42:33.388167140Z" level=error msg="Failed to destroy network for sandbox \"ea14d6d0447dbc5080d47a26d58a2bfba80bc5ebe62085775ca7fc4db9ecf3b3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:42:33.388774 containerd[1471]: time="2025-03-17T17:42:33.388722360Z" level=error msg="encountered an error cleaning up failed sandbox \"ea14d6d0447dbc5080d47a26d58a2bfba80bc5ebe62085775ca7fc4db9ecf3b3\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:42:33.388864 containerd[1471]: time="2025-03-17T17:42:33.388806034Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-b6b5f678f-v8plm,Uid:e2974f51-cab2-451b-8e3d-274fab1b872e,Namespace:calico-apiserver,Attempt:1,} failed, error" error="failed to setup network for sandbox \"ea14d6d0447dbc5080d47a26d58a2bfba80bc5ebe62085775ca7fc4db9ecf3b3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:42:33.389191 kubelet[2574]: E0317 17:42:33.389025 2574 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ea14d6d0447dbc5080d47a26d58a2bfba80bc5ebe62085775ca7fc4db9ecf3b3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:42:33.389191 kubelet[2574]: E0317 17:42:33.389104 2574 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"ea14d6d0447dbc5080d47a26d58a2bfba80bc5ebe62085775ca7fc4db9ecf3b3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-b6b5f678f-v8plm" Mar 17 17:42:33.389191 kubelet[2574]: E0317 17:42:33.389124 2574 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ea14d6d0447dbc5080d47a26d58a2bfba80bc5ebe62085775ca7fc4db9ecf3b3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-b6b5f678f-v8plm" Mar 17 17:42:33.389409 kubelet[2574]: E0317 17:42:33.389177 2574 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-b6b5f678f-v8plm_calico-apiserver(e2974f51-cab2-451b-8e3d-274fab1b872e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-b6b5f678f-v8plm_calico-apiserver(e2974f51-cab2-451b-8e3d-274fab1b872e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ea14d6d0447dbc5080d47a26d58a2bfba80bc5ebe62085775ca7fc4db9ecf3b3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-b6b5f678f-v8plm" podUID="e2974f51-cab2-451b-8e3d-274fab1b872e" Mar 17 17:42:33.390708 containerd[1471]: time="2025-03-17T17:42:33.390681649Z" level=error msg="Failed to destroy network for sandbox \"ee983380c36cc632ecdbc7f259407df2ede7bbdbeef83c02e8ebc5bf1487f4ae\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is 
running and has mounted /var/lib/calico/" Mar 17 17:42:33.392130 containerd[1471]: time="2025-03-17T17:42:33.392092633Z" level=error msg="encountered an error cleaning up failed sandbox \"ee983380c36cc632ecdbc7f259407df2ede7bbdbeef83c02e8ebc5bf1487f4ae\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:42:33.392208 containerd[1471]: time="2025-03-17T17:42:33.392154145Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-8b46cd865-7kxzr,Uid:d2c44e13-0cac-42de-9897-344a553902e4,Namespace:calico-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"ee983380c36cc632ecdbc7f259407df2ede7bbdbeef83c02e8ebc5bf1487f4ae\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:42:33.392398 kubelet[2574]: E0317 17:42:33.392371 2574 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ee983380c36cc632ecdbc7f259407df2ede7bbdbeef83c02e8ebc5bf1487f4ae\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:42:33.392434 kubelet[2574]: E0317 17:42:33.392407 2574 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ee983380c36cc632ecdbc7f259407df2ede7bbdbeef83c02e8ebc5bf1487f4ae\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-8b46cd865-7kxzr" Mar 
17 17:42:33.392434 kubelet[2574]: E0317 17:42:33.392425 2574 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ee983380c36cc632ecdbc7f259407df2ede7bbdbeef83c02e8ebc5bf1487f4ae\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-8b46cd865-7kxzr" Mar 17 17:42:33.392478 kubelet[2574]: E0317 17:42:33.392457 2574 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-8b46cd865-7kxzr_calico-system(d2c44e13-0cac-42de-9897-344a553902e4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-8b46cd865-7kxzr_calico-system(d2c44e13-0cac-42de-9897-344a553902e4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ee983380c36cc632ecdbc7f259407df2ede7bbdbeef83c02e8ebc5bf1487f4ae\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-8b46cd865-7kxzr" podUID="d2c44e13-0cac-42de-9897-344a553902e4" Mar 17 17:42:33.399001 containerd[1471]: time="2025-03-17T17:42:33.398957847Z" level=error msg="Failed to destroy network for sandbox \"537027ffa1bfbd16c68902c78119d95aa49538ff5449faa654586af4707e9042\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:42:33.399342 containerd[1471]: time="2025-03-17T17:42:33.399311548Z" level=error msg="encountered an error cleaning up failed sandbox \"537027ffa1bfbd16c68902c78119d95aa49538ff5449faa654586af4707e9042\", marking sandbox state as SANDBOX_UNKNOWN" 
error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:42:33.399383 containerd[1471]: time="2025-03-17T17:42:33.399363881Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-hg76m,Uid:7b8c3717-fa53-41b7-bf24-e6ae52b8b921,Namespace:kube-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"537027ffa1bfbd16c68902c78119d95aa49538ff5449faa654586af4707e9042\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:42:33.399576 kubelet[2574]: E0317 17:42:33.399535 2574 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"537027ffa1bfbd16c68902c78119d95aa49538ff5449faa654586af4707e9042\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:42:33.399712 kubelet[2574]: E0317 17:42:33.399589 2574 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"537027ffa1bfbd16c68902c78119d95aa49538ff5449faa654586af4707e9042\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-hg76m" Mar 17 17:42:33.399712 kubelet[2574]: E0317 17:42:33.399616 2574 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"537027ffa1bfbd16c68902c78119d95aa49538ff5449faa654586af4707e9042\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-hg76m" Mar 17 17:42:33.399712 kubelet[2574]: E0317 17:42:33.399687 2574 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-hg76m_kube-system(7b8c3717-fa53-41b7-bf24-e6ae52b8b921)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-hg76m_kube-system(7b8c3717-fa53-41b7-bf24-e6ae52b8b921)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"537027ffa1bfbd16c68902c78119d95aa49538ff5449faa654586af4707e9042\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-hg76m" podUID="7b8c3717-fa53-41b7-bf24-e6ae52b8b921" Mar 17 17:42:33.568749 systemd[1]: Started sshd@8-10.0.0.46:22-10.0.0.1:44268.service - OpenSSH per-connection server daemon (10.0.0.1:44268). Mar 17 17:42:33.629262 sshd[3812]: Accepted publickey for core from 10.0.0.1 port 44268 ssh2: RSA SHA256:pvoNHoTmHcKIZ8E4rah4Xh4kuY0L81ONuagOsU2gN/o Mar 17 17:42:33.631130 sshd-session[3812]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:42:33.636480 systemd-logind[1456]: New session 8 of user core. Mar 17 17:42:33.645200 systemd[1]: Started session-8.scope - Session 8 of User core. Mar 17 17:42:33.761792 sshd[3814]: Connection closed by 10.0.0.1 port 44268 Mar 17 17:42:33.762179 sshd-session[3812]: pam_unix(sshd:session): session closed for user core Mar 17 17:42:33.766336 systemd[1]: sshd@8-10.0.0.46:22-10.0.0.1:44268.service: Deactivated successfully. Mar 17 17:42:33.768344 systemd[1]: session-8.scope: Deactivated successfully. Mar 17 17:42:33.768926 systemd-logind[1456]: Session 8 logged out. Waiting for processes to exit. 
Mar 17 17:42:33.769818 systemd-logind[1456]: Removed session 8. Mar 17 17:42:34.195276 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a86796bc073539410fecfeb3dec78c8c6ed265998c9643aa1e3a054f1b65a9d5-shm.mount: Deactivated successfully. Mar 17 17:42:34.195391 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-3200190c87309f125fb6f4db01012c3f8c9172cd686e39de64273fa9716dbc2d-shm.mount: Deactivated successfully. Mar 17 17:42:34.226258 kubelet[2574]: I0317 17:42:34.226223 2574 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3200190c87309f125fb6f4db01012c3f8c9172cd686e39de64273fa9716dbc2d" Mar 17 17:42:34.228365 kubelet[2574]: I0317 17:42:34.228330 2574 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="752881363b6f73c704f3c8518dab695dfd31be7b9b79cdde65eb0c4d95df3791" Mar 17 17:42:34.231883 kubelet[2574]: I0317 17:42:34.231491 2574 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="537027ffa1bfbd16c68902c78119d95aa49538ff5449faa654586af4707e9042" Mar 17 17:42:34.232021 containerd[1471]: time="2025-03-17T17:42:34.231980480Z" level=info msg="StopPodSandbox for \"3200190c87309f125fb6f4db01012c3f8c9172cd686e39de64273fa9716dbc2d\"" Mar 17 17:42:34.232432 containerd[1471]: time="2025-03-17T17:42:34.232250503Z" level=info msg="Ensure that sandbox 3200190c87309f125fb6f4db01012c3f8c9172cd686e39de64273fa9716dbc2d in task-service has been cleanup successfully" Mar 17 17:42:34.236087 containerd[1471]: time="2025-03-17T17:42:34.232470669Z" level=info msg="TearDown network for sandbox \"3200190c87309f125fb6f4db01012c3f8c9172cd686e39de64273fa9716dbc2d\" successfully" Mar 17 17:42:34.236087 containerd[1471]: time="2025-03-17T17:42:34.232491410Z" level=info msg="StopPodSandbox for \"3200190c87309f125fb6f4db01012c3f8c9172cd686e39de64273fa9716dbc2d\" returns successfully" Mar 17 17:42:34.236087 containerd[1471]: time="2025-03-17T17:42:34.232510448Z" level=info 
msg="StopPodSandbox for \"537027ffa1bfbd16c68902c78119d95aa49538ff5449faa654586af4707e9042\"" Mar 17 17:42:34.236087 containerd[1471]: time="2025-03-17T17:42:34.232542681Z" level=info msg="StopPodSandbox for \"752881363b6f73c704f3c8518dab695dfd31be7b9b79cdde65eb0c4d95df3791\"" Mar 17 17:42:34.236087 containerd[1471]: time="2025-03-17T17:42:34.232689942Z" level=info msg="Ensure that sandbox 752881363b6f73c704f3c8518dab695dfd31be7b9b79cdde65eb0c4d95df3791 in task-service has been cleanup successfully" Mar 17 17:42:34.236087 containerd[1471]: time="2025-03-17T17:42:34.232800301Z" level=info msg="Ensure that sandbox 537027ffa1bfbd16c68902c78119d95aa49538ff5449faa654586af4707e9042 in task-service has been cleanup successfully" Mar 17 17:42:34.236087 containerd[1471]: time="2025-03-17T17:42:34.232963072Z" level=info msg="TearDown network for sandbox \"752881363b6f73c704f3c8518dab695dfd31be7b9b79cdde65eb0c4d95df3791\" successfully" Mar 17 17:42:34.236087 containerd[1471]: time="2025-03-17T17:42:34.232977932Z" level=info msg="StopPodSandbox for \"9b304f912bb3f9e71af30d53d5c7f3e243f23d0daf308c489c9401f6c28de7dd\"" Mar 17 17:42:34.236087 containerd[1471]: time="2025-03-17T17:42:34.233053462Z" level=info msg="TearDown network for sandbox \"9b304f912bb3f9e71af30d53d5c7f3e243f23d0daf308c489c9401f6c28de7dd\" successfully" Mar 17 17:42:34.236087 containerd[1471]: time="2025-03-17T17:42:34.233077218Z" level=info msg="StopPodSandbox for \"9b304f912bb3f9e71af30d53d5c7f3e243f23d0daf308c489c9401f6c28de7dd\" returns successfully" Mar 17 17:42:34.236087 containerd[1471]: time="2025-03-17T17:42:34.232978122Z" level=info msg="StopPodSandbox for \"752881363b6f73c704f3c8518dab695dfd31be7b9b79cdde65eb0c4d95df3791\" returns successfully" Mar 17 17:42:34.235806 systemd[1]: run-netns-cni\x2d91e22db2\x2d7cfe\x2db1e0\x2d77ae\x2d4b0647c5076b.mount: Deactivated successfully. 
Mar 17 17:42:34.236837 containerd[1471]: time="2025-03-17T17:42:34.236494736Z" level=info msg="StopPodSandbox for \"029c4006ecd1c4f4a1ac53934cfb4e4722c8c52eefcf8cff976bcd131fe5d151\"" Mar 17 17:42:34.236837 containerd[1471]: time="2025-03-17T17:42:34.236604182Z" level=info msg="TearDown network for sandbox \"029c4006ecd1c4f4a1ac53934cfb4e4722c8c52eefcf8cff976bcd131fe5d151\" successfully" Mar 17 17:42:34.236837 containerd[1471]: time="2025-03-17T17:42:34.236618160Z" level=info msg="StopPodSandbox for \"029c4006ecd1c4f4a1ac53934cfb4e4722c8c52eefcf8cff976bcd131fe5d151\" returns successfully" Mar 17 17:42:34.236837 containerd[1471]: time="2025-03-17T17:42:34.236798958Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-lsjhx,Uid:c4ec38d1-6c2f-488c-8f17-eb6d6aa4990d,Namespace:calico-system,Attempt:2,}" Mar 17 17:42:34.235931 systemd[1]: run-netns-cni\x2d5869e298\x2dcb41\x2d016e\x2d5358\x2db2f432879b1a.mount: Deactivated successfully. Mar 17 17:42:34.236029 systemd[1]: run-netns-cni\x2d6380d596\x2d772f\x2dfa5d\x2d3e6a\x2d43beda5ca710.mount: Deactivated successfully. 
Mar 17 17:42:34.243801 containerd[1471]: time="2025-03-17T17:42:34.238361477Z" level=info msg="TearDown network for sandbox \"537027ffa1bfbd16c68902c78119d95aa49538ff5449faa654586af4707e9042\" successfully" Mar 17 17:42:34.243801 containerd[1471]: time="2025-03-17T17:42:34.238386737Z" level=info msg="StopPodSandbox for \"537027ffa1bfbd16c68902c78119d95aa49538ff5449faa654586af4707e9042\" returns successfully" Mar 17 17:42:34.243801 containerd[1471]: time="2025-03-17T17:42:34.239152291Z" level=info msg="StopPodSandbox for \"a86796bc073539410fecfeb3dec78c8c6ed265998c9643aa1e3a054f1b65a9d5\"" Mar 17 17:42:34.243801 containerd[1471]: time="2025-03-17T17:42:34.239206939Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-b6b5f678f-lhw82,Uid:e61bbec9-65b1-4228-aa40-669eba7841ea,Namespace:calico-apiserver,Attempt:2,}" Mar 17 17:42:34.243801 containerd[1471]: time="2025-03-17T17:42:34.239356495Z" level=info msg="Ensure that sandbox a86796bc073539410fecfeb3dec78c8c6ed265998c9643aa1e3a054f1b65a9d5 in task-service has been cleanup successfully" Mar 17 17:42:34.244030 containerd[1471]: time="2025-03-17T17:42:34.243848157Z" level=info msg="TearDown network for sandbox \"a86796bc073539410fecfeb3dec78c8c6ed265998c9643aa1e3a054f1b65a9d5\" successfully" Mar 17 17:42:34.244030 containerd[1471]: time="2025-03-17T17:42:34.243879128Z" level=info msg="StopPodSandbox for \"a86796bc073539410fecfeb3dec78c8c6ed265998c9643aa1e3a054f1b65a9d5\" returns successfully" Mar 17 17:42:34.244119 kubelet[2574]: I0317 17:42:34.238594 2574 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a86796bc073539410fecfeb3dec78c8c6ed265998c9643aa1e3a054f1b65a9d5" Mar 17 17:42:34.244166 containerd[1471]: time="2025-03-17T17:42:34.244111147Z" level=info msg="StopPodSandbox for \"2e196fcd9f9243b223018639d31a93ff3f9775e305ad1408354381cea90249e5\"" Mar 17 17:42:34.244312 containerd[1471]: time="2025-03-17T17:42:34.244208890Z" level=info msg="TearDown network 
for sandbox \"2e196fcd9f9243b223018639d31a93ff3f9775e305ad1408354381cea90249e5\" successfully" Mar 17 17:42:34.244312 containerd[1471]: time="2025-03-17T17:42:34.244228549Z" level=info msg="StopPodSandbox for \"2e196fcd9f9243b223018639d31a93ff3f9775e305ad1408354381cea90249e5\" returns successfully" Mar 17 17:42:34.244639 containerd[1471]: time="2025-03-17T17:42:34.244622138Z" level=info msg="StopPodSandbox for \"edb2903b482639be597b82a24a152d41c24f51e7acac61283260a1d0100a5d4c\"" Mar 17 17:42:34.244801 containerd[1471]: time="2025-03-17T17:42:34.244766392Z" level=info msg="TearDown network for sandbox \"edb2903b482639be597b82a24a152d41c24f51e7acac61283260a1d0100a5d4c\" successfully" Mar 17 17:42:34.244801 containerd[1471]: time="2025-03-17T17:42:34.244784959Z" level=info msg="StopPodSandbox for \"edb2903b482639be597b82a24a152d41c24f51e7acac61283260a1d0100a5d4c\" returns successfully" Mar 17 17:42:34.245815 systemd[1]: run-netns-cni\x2dd7302b89\x2d6d81\x2d8545\x2d22a1\x2dbb6f1769812f.mount: Deactivated successfully. 
Mar 17 17:42:34.246841 kubelet[2574]: E0317 17:42:34.246801 2574 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:42:34.247040 kubelet[2574]: E0317 17:42:34.247020 2574 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:42:34.247348 containerd[1471]: time="2025-03-17T17:42:34.247320753Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-b7tm9,Uid:5d480ab2-f501-4510-9ec1-d051e760e88d,Namespace:kube-system,Attempt:2,}" Mar 17 17:42:34.253926 containerd[1471]: time="2025-03-17T17:42:34.253588768Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-hg76m,Uid:7b8c3717-fa53-41b7-bf24-e6ae52b8b921,Namespace:kube-system,Attempt:2,}" Mar 17 17:42:34.270787 kubelet[2574]: I0317 17:42:34.270743 2574 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ee983380c36cc632ecdbc7f259407df2ede7bbdbeef83c02e8ebc5bf1487f4ae" Mar 17 17:42:34.271754 containerd[1471]: time="2025-03-17T17:42:34.271317825Z" level=info msg="StopPodSandbox for \"ee983380c36cc632ecdbc7f259407df2ede7bbdbeef83c02e8ebc5bf1487f4ae\"" Mar 17 17:42:34.271754 containerd[1471]: time="2025-03-17T17:42:34.271554494Z" level=info msg="Ensure that sandbox ee983380c36cc632ecdbc7f259407df2ede7bbdbeef83c02e8ebc5bf1487f4ae in task-service has been cleanup successfully" Mar 17 17:42:34.272132 containerd[1471]: time="2025-03-17T17:42:34.272111094Z" level=info msg="TearDown network for sandbox \"ee983380c36cc632ecdbc7f259407df2ede7bbdbeef83c02e8ebc5bf1487f4ae\" successfully" Mar 17 17:42:34.272221 containerd[1471]: time="2025-03-17T17:42:34.272201453Z" level=info msg="StopPodSandbox for \"ee983380c36cc632ecdbc7f259407df2ede7bbdbeef83c02e8ebc5bf1487f4ae\" returns successfully" Mar 17 
17:42:34.272504 containerd[1471]: time="2025-03-17T17:42:34.272487858Z" level=info msg="StopPodSandbox for \"c8fa0e68e50c704cec6afec4727d369644d559c68bcae3c557439825300e22db\"" Mar 17 17:42:34.272657 containerd[1471]: time="2025-03-17T17:42:34.272641953Z" level=info msg="TearDown network for sandbox \"c8fa0e68e50c704cec6afec4727d369644d559c68bcae3c557439825300e22db\" successfully" Mar 17 17:42:34.272708 containerd[1471]: time="2025-03-17T17:42:34.272694608Z" level=info msg="StopPodSandbox for \"c8fa0e68e50c704cec6afec4727d369644d559c68bcae3c557439825300e22db\" returns successfully" Mar 17 17:42:34.274159 kubelet[2574]: I0317 17:42:34.274139 2574 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ea14d6d0447dbc5080d47a26d58a2bfba80bc5ebe62085775ca7fc4db9ecf3b3" Mar 17 17:42:34.274714 containerd[1471]: time="2025-03-17T17:42:34.274695835Z" level=info msg="StopPodSandbox for \"ea14d6d0447dbc5080d47a26d58a2bfba80bc5ebe62085775ca7fc4db9ecf3b3\"" Mar 17 17:42:34.275354 containerd[1471]: time="2025-03-17T17:42:34.275335901Z" level=info msg="Ensure that sandbox ea14d6d0447dbc5080d47a26d58a2bfba80bc5ebe62085775ca7fc4db9ecf3b3 in task-service has been cleanup successfully" Mar 17 17:42:34.275798 containerd[1471]: time="2025-03-17T17:42:34.275781241Z" level=info msg="TearDown network for sandbox \"ea14d6d0447dbc5080d47a26d58a2bfba80bc5ebe62085775ca7fc4db9ecf3b3\" successfully" Mar 17 17:42:34.275869 containerd[1471]: time="2025-03-17T17:42:34.275856049Z" level=info msg="StopPodSandbox for \"ea14d6d0447dbc5080d47a26d58a2bfba80bc5ebe62085775ca7fc4db9ecf3b3\" returns successfully" Mar 17 17:42:34.276685 systemd[1]: run-netns-cni\x2d0da6452f\x2d0905\x2db03d\x2dccf6\x2d766213126382.mount: Deactivated successfully. Mar 17 17:42:34.279648 systemd[1]: run-netns-cni\x2d77858d76\x2db716\x2d4f14\x2d53f9\x2d1ad4a9568913.mount: Deactivated successfully. 
Mar 17 17:42:34.283759 containerd[1471]: time="2025-03-17T17:42:34.283267434Z" level=info msg="StopPodSandbox for \"1a12983a8841ddeac31ec562c0ce3b12e75f4c40488315d12dd610812c63ac70\"" Mar 17 17:42:34.283759 containerd[1471]: time="2025-03-17T17:42:34.283416419Z" level=info msg="TearDown network for sandbox \"1a12983a8841ddeac31ec562c0ce3b12e75f4c40488315d12dd610812c63ac70\" successfully" Mar 17 17:42:34.283759 containerd[1471]: time="2025-03-17T17:42:34.283446910Z" level=info msg="StopPodSandbox for \"1a12983a8841ddeac31ec562c0ce3b12e75f4c40488315d12dd610812c63ac70\" returns successfully" Mar 17 17:42:34.286899 containerd[1471]: time="2025-03-17T17:42:34.286503703Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-b6b5f678f-v8plm,Uid:e2974f51-cab2-451b-8e3d-274fab1b872e,Namespace:calico-apiserver,Attempt:2,}" Mar 17 17:42:34.291368 containerd[1471]: time="2025-03-17T17:42:34.290207197Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-8b46cd865-7kxzr,Uid:d2c44e13-0cac-42de-9897-344a553902e4,Namespace:calico-system,Attempt:2,}" Mar 17 17:42:35.032902 containerd[1471]: time="2025-03-17T17:42:35.032728856Z" level=error msg="Failed to destroy network for sandbox \"743e6592f769f9d664d43beeff4bdd5bb0cb253eef3f6ca03ad374fbd8378ca7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:42:35.034632 containerd[1471]: time="2025-03-17T17:42:35.034562617Z" level=error msg="encountered an error cleaning up failed sandbox \"743e6592f769f9d664d43beeff4bdd5bb0cb253eef3f6ca03ad374fbd8378ca7\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:42:35.034809 containerd[1471]: 
time="2025-03-17T17:42:35.034645681Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-lsjhx,Uid:c4ec38d1-6c2f-488c-8f17-eb6d6aa4990d,Namespace:calico-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"743e6592f769f9d664d43beeff4bdd5bb0cb253eef3f6ca03ad374fbd8378ca7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:42:35.036809 kubelet[2574]: E0317 17:42:35.035206 2574 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"743e6592f769f9d664d43beeff4bdd5bb0cb253eef3f6ca03ad374fbd8378ca7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:42:35.036809 kubelet[2574]: E0317 17:42:35.035669 2574 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"743e6592f769f9d664d43beeff4bdd5bb0cb253eef3f6ca03ad374fbd8378ca7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-lsjhx" Mar 17 17:42:35.037445 kubelet[2574]: E0317 17:42:35.037385 2574 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"743e6592f769f9d664d43beeff4bdd5bb0cb253eef3f6ca03ad374fbd8378ca7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-lsjhx" Mar 17 17:42:35.045851 kubelet[2574]: E0317 17:42:35.045625 2574 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-lsjhx_calico-system(c4ec38d1-6c2f-488c-8f17-eb6d6aa4990d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-lsjhx_calico-system(c4ec38d1-6c2f-488c-8f17-eb6d6aa4990d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"743e6592f769f9d664d43beeff4bdd5bb0cb253eef3f6ca03ad374fbd8378ca7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-lsjhx" podUID="c4ec38d1-6c2f-488c-8f17-eb6d6aa4990d" Mar 17 17:42:35.057448 containerd[1471]: time="2025-03-17T17:42:35.057388281Z" level=error msg="Failed to destroy network for sandbox \"8288c0d6c6206ca6e3bb0341d82c9b6dfe98446714195c494e41c5765c36d080\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:42:35.058564 containerd[1471]: time="2025-03-17T17:42:35.058501930Z" level=error msg="encountered an error cleaning up failed sandbox \"8288c0d6c6206ca6e3bb0341d82c9b6dfe98446714195c494e41c5765c36d080\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:42:35.061798 containerd[1471]: time="2025-03-17T17:42:35.061754212Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-b7tm9,Uid:5d480ab2-f501-4510-9ec1-d051e760e88d,Namespace:kube-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"8288c0d6c6206ca6e3bb0341d82c9b6dfe98446714195c494e41c5765c36d080\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: 
no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:42:35.062157 containerd[1471]: time="2025-03-17T17:42:35.062013574Z" level=error msg="Failed to destroy network for sandbox \"f595336b40a5a0477c6557d8c88617834e5c5758a2435ab0f027d22a0d8aac32\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:42:35.062395 kubelet[2574]: E0317 17:42:35.062352 2574 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8288c0d6c6206ca6e3bb0341d82c9b6dfe98446714195c494e41c5765c36d080\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:42:35.062456 kubelet[2574]: E0317 17:42:35.062418 2574 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8288c0d6c6206ca6e3bb0341d82c9b6dfe98446714195c494e41c5765c36d080\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-b7tm9" Mar 17 17:42:35.062456 kubelet[2574]: E0317 17:42:35.062444 2574 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8288c0d6c6206ca6e3bb0341d82c9b6dfe98446714195c494e41c5765c36d080\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-b7tm9" Mar 17 17:42:35.062544 kubelet[2574]: E0317 17:42:35.062503 2574 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-b7tm9_kube-system(5d480ab2-f501-4510-9ec1-d051e760e88d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-b7tm9_kube-system(5d480ab2-f501-4510-9ec1-d051e760e88d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8288c0d6c6206ca6e3bb0341d82c9b6dfe98446714195c494e41c5765c36d080\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-b7tm9" podUID="5d480ab2-f501-4510-9ec1-d051e760e88d" Mar 17 17:42:35.062885 containerd[1471]: time="2025-03-17T17:42:35.062833293Z" level=error msg="encountered an error cleaning up failed sandbox \"f595336b40a5a0477c6557d8c88617834e5c5758a2435ab0f027d22a0d8aac32\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:42:35.062934 containerd[1471]: time="2025-03-17T17:42:35.062886067Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-hg76m,Uid:7b8c3717-fa53-41b7-bf24-e6ae52b8b921,Namespace:kube-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"f595336b40a5a0477c6557d8c88617834e5c5758a2435ab0f027d22a0d8aac32\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:42:35.064331 kubelet[2574]: E0317 17:42:35.064267 2574 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f595336b40a5a0477c6557d8c88617834e5c5758a2435ab0f027d22a0d8aac32\": plugin 
type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:42:35.064331 kubelet[2574]: E0317 17:42:35.064312 2574 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f595336b40a5a0477c6557d8c88617834e5c5758a2435ab0f027d22a0d8aac32\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-hg76m" Mar 17 17:42:35.064427 kubelet[2574]: E0317 17:42:35.064335 2574 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f595336b40a5a0477c6557d8c88617834e5c5758a2435ab0f027d22a0d8aac32\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-hg76m" Mar 17 17:42:35.064427 kubelet[2574]: E0317 17:42:35.064375 2574 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-hg76m_kube-system(7b8c3717-fa53-41b7-bf24-e6ae52b8b921)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-hg76m_kube-system(7b8c3717-fa53-41b7-bf24-e6ae52b8b921)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f595336b40a5a0477c6557d8c88617834e5c5758a2435ab0f027d22a0d8aac32\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-hg76m" podUID="7b8c3717-fa53-41b7-bf24-e6ae52b8b921" Mar 17 17:42:35.145871 containerd[1471]: 
time="2025-03-17T17:42:35.145809777Z" level=error msg="Failed to destroy network for sandbox \"d80cb82167a8ad779a3df81a8f4f06d6e87ee54cc9d974d390962e2bbdc2554c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:42:35.148233 containerd[1471]: time="2025-03-17T17:42:35.148203093Z" level=error msg="Failed to destroy network for sandbox \"affd646c561af90bafd7e4b900a961dd15025345077cff03d0e35877d76fd6df\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:42:35.153286 containerd[1471]: time="2025-03-17T17:42:35.153247945Z" level=error msg="encountered an error cleaning up failed sandbox \"d80cb82167a8ad779a3df81a8f4f06d6e87ee54cc9d974d390962e2bbdc2554c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:42:35.153454 containerd[1471]: time="2025-03-17T17:42:35.153428090Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-8b46cd865-7kxzr,Uid:d2c44e13-0cac-42de-9897-344a553902e4,Namespace:calico-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"d80cb82167a8ad779a3df81a8f4f06d6e87ee54cc9d974d390962e2bbdc2554c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:42:35.154782 kubelet[2574]: E0317 17:42:35.153900 2574 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"d80cb82167a8ad779a3df81a8f4f06d6e87ee54cc9d974d390962e2bbdc2554c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:42:35.154782 kubelet[2574]: E0317 17:42:35.153974 2574 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d80cb82167a8ad779a3df81a8f4f06d6e87ee54cc9d974d390962e2bbdc2554c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-8b46cd865-7kxzr" Mar 17 17:42:35.154782 kubelet[2574]: E0317 17:42:35.153998 2574 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d80cb82167a8ad779a3df81a8f4f06d6e87ee54cc9d974d390962e2bbdc2554c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-8b46cd865-7kxzr" Mar 17 17:42:35.155034 containerd[1471]: time="2025-03-17T17:42:35.153950402Z" level=error msg="Failed to destroy network for sandbox \"04898b8b1a06024d8a321645a3aedaf17f255e75b4e26b3816a2c1df841abfaf\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:42:35.155034 containerd[1471]: time="2025-03-17T17:42:35.154560106Z" level=error msg="encountered an error cleaning up failed sandbox \"04898b8b1a06024d8a321645a3aedaf17f255e75b4e26b3816a2c1df841abfaf\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such 
file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:42:35.155034 containerd[1471]: time="2025-03-17T17:42:35.154591949Z" level=error msg="encountered an error cleaning up failed sandbox \"affd646c561af90bafd7e4b900a961dd15025345077cff03d0e35877d76fd6df\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:42:35.155034 containerd[1471]: time="2025-03-17T17:42:35.154623982Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-b6b5f678f-v8plm,Uid:e2974f51-cab2-451b-8e3d-274fab1b872e,Namespace:calico-apiserver,Attempt:2,} failed, error" error="failed to setup network for sandbox \"04898b8b1a06024d8a321645a3aedaf17f255e75b4e26b3816a2c1df841abfaf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:42:35.155034 containerd[1471]: time="2025-03-17T17:42:35.154640114Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-b6b5f678f-lhw82,Uid:e61bbec9-65b1-4228-aa40-669eba7841ea,Namespace:calico-apiserver,Attempt:2,} failed, error" error="failed to setup network for sandbox \"affd646c561af90bafd7e4b900a961dd15025345077cff03d0e35877d76fd6df\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:42:35.155318 kubelet[2574]: E0317 17:42:35.154046 2574 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-8b46cd865-7kxzr_calico-system(d2c44e13-0cac-42de-9897-344a553902e4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"calico-kube-controllers-8b46cd865-7kxzr_calico-system(d2c44e13-0cac-42de-9897-344a553902e4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d80cb82167a8ad779a3df81a8f4f06d6e87ee54cc9d974d390962e2bbdc2554c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-8b46cd865-7kxzr" podUID="d2c44e13-0cac-42de-9897-344a553902e4" Mar 17 17:42:35.156405 kubelet[2574]: E0317 17:42:35.156151 2574 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"affd646c561af90bafd7e4b900a961dd15025345077cff03d0e35877d76fd6df\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:42:35.156405 kubelet[2574]: E0317 17:42:35.156192 2574 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"affd646c561af90bafd7e4b900a961dd15025345077cff03d0e35877d76fd6df\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-b6b5f678f-lhw82" Mar 17 17:42:35.156405 kubelet[2574]: E0317 17:42:35.156212 2574 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"affd646c561af90bafd7e4b900a961dd15025345077cff03d0e35877d76fd6df\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-b6b5f678f-lhw82" Mar 17 
17:42:35.156579 kubelet[2574]: E0317 17:42:35.156247 2574 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-b6b5f678f-lhw82_calico-apiserver(e61bbec9-65b1-4228-aa40-669eba7841ea)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-b6b5f678f-lhw82_calico-apiserver(e61bbec9-65b1-4228-aa40-669eba7841ea)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"affd646c561af90bafd7e4b900a961dd15025345077cff03d0e35877d76fd6df\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-b6b5f678f-lhw82" podUID="e61bbec9-65b1-4228-aa40-669eba7841ea" Mar 17 17:42:35.156579 kubelet[2574]: E0317 17:42:35.156285 2574 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"04898b8b1a06024d8a321645a3aedaf17f255e75b4e26b3816a2c1df841abfaf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:42:35.156579 kubelet[2574]: E0317 17:42:35.156308 2574 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"04898b8b1a06024d8a321645a3aedaf17f255e75b4e26b3816a2c1df841abfaf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-b6b5f678f-v8plm" Mar 17 17:42:35.156953 kubelet[2574]: E0317 17:42:35.156327 2574 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"04898b8b1a06024d8a321645a3aedaf17f255e75b4e26b3816a2c1df841abfaf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-b6b5f678f-v8plm" Mar 17 17:42:35.156953 kubelet[2574]: E0317 17:42:35.156368 2574 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-b6b5f678f-v8plm_calico-apiserver(e2974f51-cab2-451b-8e3d-274fab1b872e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-b6b5f678f-v8plm_calico-apiserver(e2974f51-cab2-451b-8e3d-274fab1b872e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"04898b8b1a06024d8a321645a3aedaf17f255e75b4e26b3816a2c1df841abfaf\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-b6b5f678f-v8plm" podUID="e2974f51-cab2-451b-8e3d-274fab1b872e" Mar 17 17:42:35.285653 kubelet[2574]: I0317 17:42:35.285386 2574 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="04898b8b1a06024d8a321645a3aedaf17f255e75b4e26b3816a2c1df841abfaf" Mar 17 17:42:35.291672 kubelet[2574]: I0317 17:42:35.291414 2574 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="affd646c561af90bafd7e4b900a961dd15025345077cff03d0e35877d76fd6df" Mar 17 17:42:35.292615 kubelet[2574]: I0317 17:42:35.292598 2574 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f595336b40a5a0477c6557d8c88617834e5c5758a2435ab0f027d22a0d8aac32" Mar 17 17:42:35.305204 containerd[1471]: time="2025-03-17T17:42:35.305148281Z" level=info msg="StopPodSandbox for \"04898b8b1a06024d8a321645a3aedaf17f255e75b4e26b3816a2c1df841abfaf\"" Mar 17 17:42:35.306886 
containerd[1471]: time="2025-03-17T17:42:35.306029021Z" level=info msg="Ensure that sandbox 04898b8b1a06024d8a321645a3aedaf17f255e75b4e26b3816a2c1df841abfaf in task-service has been cleanup successfully" Mar 17 17:42:35.306886 containerd[1471]: time="2025-03-17T17:42:35.305514625Z" level=info msg="StopPodSandbox for \"affd646c561af90bafd7e4b900a961dd15025345077cff03d0e35877d76fd6df\"" Mar 17 17:42:35.306886 containerd[1471]: time="2025-03-17T17:42:35.306450312Z" level=info msg="Ensure that sandbox affd646c561af90bafd7e4b900a961dd15025345077cff03d0e35877d76fd6df in task-service has been cleanup successfully" Mar 17 17:42:35.306886 containerd[1471]: time="2025-03-17T17:42:35.306594247Z" level=info msg="TearDown network for sandbox \"04898b8b1a06024d8a321645a3aedaf17f255e75b4e26b3816a2c1df841abfaf\" successfully" Mar 17 17:42:35.306886 containerd[1471]: time="2025-03-17T17:42:35.306608445Z" level=info msg="StopPodSandbox for \"04898b8b1a06024d8a321645a3aedaf17f255e75b4e26b3816a2c1df841abfaf\" returns successfully" Mar 17 17:42:35.306886 containerd[1471]: time="2025-03-17T17:42:35.306751477Z" level=info msg="StopPodSandbox for \"f595336b40a5a0477c6557d8c88617834e5c5758a2435ab0f027d22a0d8aac32\"" Mar 17 17:42:35.307497 containerd[1471]: time="2025-03-17T17:42:35.306977714Z" level=info msg="Ensure that sandbox f595336b40a5a0477c6557d8c88617834e5c5758a2435ab0f027d22a0d8aac32 in task-service has been cleanup successfully" Mar 17 17:42:35.309004 containerd[1471]: time="2025-03-17T17:42:35.308697811Z" level=info msg="StopPodSandbox for \"ea14d6d0447dbc5080d47a26d58a2bfba80bc5ebe62085775ca7fc4db9ecf3b3\"" Mar 17 17:42:35.309004 containerd[1471]: time="2025-03-17T17:42:35.308852737Z" level=info msg="TearDown network for sandbox \"ea14d6d0447dbc5080d47a26d58a2bfba80bc5ebe62085775ca7fc4db9ecf3b3\" successfully" Mar 17 17:42:35.309004 containerd[1471]: time="2025-03-17T17:42:35.308866363Z" level=info msg="StopPodSandbox for 
\"ea14d6d0447dbc5080d47a26d58a2bfba80bc5ebe62085775ca7fc4db9ecf3b3\" returns successfully" Mar 17 17:42:35.309605 containerd[1471]: time="2025-03-17T17:42:35.309582488Z" level=info msg="StopPodSandbox for \"1a12983a8841ddeac31ec562c0ce3b12e75f4c40488315d12dd610812c63ac70\"" Mar 17 17:42:35.309788 containerd[1471]: time="2025-03-17T17:42:35.309771601Z" level=info msg="TearDown network for sandbox \"1a12983a8841ddeac31ec562c0ce3b12e75f4c40488315d12dd610812c63ac70\" successfully" Mar 17 17:42:35.309915 containerd[1471]: time="2025-03-17T17:42:35.309868703Z" level=info msg="StopPodSandbox for \"1a12983a8841ddeac31ec562c0ce3b12e75f4c40488315d12dd610812c63ac70\" returns successfully" Mar 17 17:42:35.312505 containerd[1471]: time="2025-03-17T17:42:35.310952573Z" level=info msg="TearDown network for sandbox \"affd646c561af90bafd7e4b900a961dd15025345077cff03d0e35877d76fd6df\" successfully" Mar 17 17:42:35.312505 containerd[1471]: time="2025-03-17T17:42:35.310992162Z" level=info msg="StopPodSandbox for \"affd646c561af90bafd7e4b900a961dd15025345077cff03d0e35877d76fd6df\" returns successfully" Mar 17 17:42:35.313277 containerd[1471]: time="2025-03-17T17:42:35.312969235Z" level=info msg="StopPodSandbox for \"752881363b6f73c704f3c8518dab695dfd31be7b9b79cdde65eb0c4d95df3791\"" Mar 17 17:42:35.313277 containerd[1471]: time="2025-03-17T17:42:35.313080024Z" level=info msg="TearDown network for sandbox \"752881363b6f73c704f3c8518dab695dfd31be7b9b79cdde65eb0c4d95df3791\" successfully" Mar 17 17:42:35.313277 containerd[1471]: time="2025-03-17T17:42:35.313093050Z" level=info msg="StopPodSandbox for \"752881363b6f73c704f3c8518dab695dfd31be7b9b79cdde65eb0c4d95df3791\" returns successfully" Mar 17 17:42:35.314661 kubelet[2574]: I0317 17:42:35.314269 2574 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8288c0d6c6206ca6e3bb0341d82c9b6dfe98446714195c494e41c5765c36d080" Mar 17 17:42:35.315660 systemd[1]: 
run-netns-cni\x2d1f02c71e\x2dc1b2\x2d4708\x2d94b6\x2df96813ab9831.mount: Deactivated successfully. Mar 17 17:42:35.316174 systemd[1]: run-netns-cni\x2d3722c204\x2ddb91\x2dcb85\x2d7912\x2db500fd59eda9.mount: Deactivated successfully. Mar 17 17:42:35.320344 containerd[1471]: time="2025-03-17T17:42:35.317539430Z" level=info msg="StopPodSandbox for \"8288c0d6c6206ca6e3bb0341d82c9b6dfe98446714195c494e41c5765c36d080\"" Mar 17 17:42:35.320344 containerd[1471]: time="2025-03-17T17:42:35.317757982Z" level=info msg="Ensure that sandbox 8288c0d6c6206ca6e3bb0341d82c9b6dfe98446714195c494e41c5765c36d080 in task-service has been cleanup successfully" Mar 17 17:42:35.320344 containerd[1471]: time="2025-03-17T17:42:35.318323809Z" level=info msg="StopPodSandbox for \"029c4006ecd1c4f4a1ac53934cfb4e4722c8c52eefcf8cff976bcd131fe5d151\"" Mar 17 17:42:35.320344 containerd[1471]: time="2025-03-17T17:42:35.318419028Z" level=info msg="TearDown network for sandbox \"029c4006ecd1c4f4a1ac53934cfb4e4722c8c52eefcf8cff976bcd131fe5d151\" successfully" Mar 17 17:42:35.320344 containerd[1471]: time="2025-03-17T17:42:35.318431151Z" level=info msg="StopPodSandbox for \"029c4006ecd1c4f4a1ac53934cfb4e4722c8c52eefcf8cff976bcd131fe5d151\" returns successfully" Mar 17 17:42:35.316267 systemd[1]: run-netns-cni\x2d1d04e074\x2d3f13\x2d40e4\x2d3269\x2d52ff4d3e6b4e.mount: Deactivated successfully. 
Mar 17 17:42:35.326630 containerd[1471]: time="2025-03-17T17:42:35.320975065Z" level=info msg="TearDown network for sandbox \"f595336b40a5a0477c6557d8c88617834e5c5758a2435ab0f027d22a0d8aac32\" successfully" Mar 17 17:42:35.326630 containerd[1471]: time="2025-03-17T17:42:35.321020193Z" level=info msg="StopPodSandbox for \"f595336b40a5a0477c6557d8c88617834e5c5758a2435ab0f027d22a0d8aac32\" returns successfully" Mar 17 17:42:35.326630 containerd[1471]: time="2025-03-17T17:42:35.324733266Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-b6b5f678f-lhw82,Uid:e61bbec9-65b1-4228-aa40-669eba7841ea,Namespace:calico-apiserver,Attempt:3,}" Mar 17 17:42:35.326630 containerd[1471]: time="2025-03-17T17:42:35.326286363Z" level=info msg="StopPodSandbox for \"537027ffa1bfbd16c68902c78119d95aa49538ff5449faa654586af4707e9042\"" Mar 17 17:42:35.326630 containerd[1471]: time="2025-03-17T17:42:35.326383074Z" level=info msg="TearDown network for sandbox \"537027ffa1bfbd16c68902c78119d95aa49538ff5449faa654586af4707e9042\" successfully" Mar 17 17:42:35.326630 containerd[1471]: time="2025-03-17T17:42:35.326394606Z" level=info msg="StopPodSandbox for \"537027ffa1bfbd16c68902c78119d95aa49538ff5449faa654586af4707e9042\" returns successfully" Mar 17 17:42:35.329134 containerd[1471]: time="2025-03-17T17:42:35.328692254Z" level=info msg="StopPodSandbox for \"2e196fcd9f9243b223018639d31a93ff3f9775e305ad1408354381cea90249e5\"" Mar 17 17:42:35.329134 containerd[1471]: time="2025-03-17T17:42:35.328787111Z" level=info msg="TearDown network for sandbox \"2e196fcd9f9243b223018639d31a93ff3f9775e305ad1408354381cea90249e5\" successfully" Mar 17 17:42:35.329134 containerd[1471]: time="2025-03-17T17:42:35.328799165Z" level=info msg="StopPodSandbox for \"2e196fcd9f9243b223018639d31a93ff3f9775e305ad1408354381cea90249e5\" returns successfully" Mar 17 17:42:35.332470 containerd[1471]: time="2025-03-17T17:42:35.332276612Z" level=info msg="TearDown network for sandbox 
\"8288c0d6c6206ca6e3bb0341d82c9b6dfe98446714195c494e41c5765c36d080\" successfully" Mar 17 17:42:35.332470 containerd[1471]: time="2025-03-17T17:42:35.332324867Z" level=info msg="StopPodSandbox for \"8288c0d6c6206ca6e3bb0341d82c9b6dfe98446714195c494e41c5765c36d080\" returns successfully" Mar 17 17:42:35.334122 systemd[1]: run-netns-cni\x2d60024966\x2d779e\x2d6ca2\x2d96f6\x2d0cfc2b5f3c4e.mount: Deactivated successfully. Mar 17 17:42:35.336157 containerd[1471]: time="2025-03-17T17:42:35.335916239Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-b6b5f678f-v8plm,Uid:e2974f51-cab2-451b-8e3d-274fab1b872e,Namespace:calico-apiserver,Attempt:3,}" Mar 17 17:42:35.337136 kubelet[2574]: E0317 17:42:35.336235 2574 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:42:35.341730 containerd[1471]: time="2025-03-17T17:42:35.337343016Z" level=info msg="StopPodSandbox for \"a86796bc073539410fecfeb3dec78c8c6ed265998c9643aa1e3a054f1b65a9d5\"" Mar 17 17:42:35.341730 containerd[1471]: time="2025-03-17T17:42:35.340609096Z" level=info msg="TearDown network for sandbox \"a86796bc073539410fecfeb3dec78c8c6ed265998c9643aa1e3a054f1b65a9d5\" successfully" Mar 17 17:42:35.341730 containerd[1471]: time="2025-03-17T17:42:35.340624276Z" level=info msg="StopPodSandbox for \"a86796bc073539410fecfeb3dec78c8c6ed265998c9643aa1e3a054f1b65a9d5\" returns successfully" Mar 17 17:42:35.341730 containerd[1471]: time="2025-03-17T17:42:35.340481514Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-hg76m,Uid:7b8c3717-fa53-41b7-bf24-e6ae52b8b921,Namespace:kube-system,Attempt:3,}" Mar 17 17:42:35.341730 containerd[1471]: time="2025-03-17T17:42:35.341310591Z" level=info msg="StopPodSandbox for \"edb2903b482639be597b82a24a152d41c24f51e7acac61283260a1d0100a5d4c\"" Mar 17 17:42:35.341730 containerd[1471]: 
time="2025-03-17T17:42:35.341404768Z" level=info msg="TearDown network for sandbox \"edb2903b482639be597b82a24a152d41c24f51e7acac61283260a1d0100a5d4c\" successfully" Mar 17 17:42:35.341730 containerd[1471]: time="2025-03-17T17:42:35.341417122Z" level=info msg="StopPodSandbox for \"edb2903b482639be597b82a24a152d41c24f51e7acac61283260a1d0100a5d4c\" returns successfully" Mar 17 17:42:35.351831 kubelet[2574]: E0317 17:42:35.341574 2574 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:42:35.351831 kubelet[2574]: I0317 17:42:35.344574 2574 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="743e6592f769f9d664d43beeff4bdd5bb0cb253eef3f6ca03ad374fbd8378ca7" Mar 17 17:42:35.351940 containerd[1471]: time="2025-03-17T17:42:35.342403759Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-b7tm9,Uid:5d480ab2-f501-4510-9ec1-d051e760e88d,Namespace:kube-system,Attempt:3,}" Mar 17 17:42:35.378934 containerd[1471]: time="2025-03-17T17:42:35.378848177Z" level=info msg="StopPodSandbox for \"743e6592f769f9d664d43beeff4bdd5bb0cb253eef3f6ca03ad374fbd8378ca7\"" Mar 17 17:42:35.379146 containerd[1471]: time="2025-03-17T17:42:35.379102018Z" level=info msg="Ensure that sandbox 743e6592f769f9d664d43beeff4bdd5bb0cb253eef3f6ca03ad374fbd8378ca7 in task-service has been cleanup successfully" Mar 17 17:42:35.385824 containerd[1471]: time="2025-03-17T17:42:35.385721780Z" level=info msg="TearDown network for sandbox \"743e6592f769f9d664d43beeff4bdd5bb0cb253eef3f6ca03ad374fbd8378ca7\" successfully" Mar 17 17:42:35.385824 containerd[1471]: time="2025-03-17T17:42:35.385770456Z" level=info msg="StopPodSandbox for \"743e6592f769f9d664d43beeff4bdd5bb0cb253eef3f6ca03ad374fbd8378ca7\" returns successfully" Mar 17 17:42:35.396790 containerd[1471]: time="2025-03-17T17:42:35.393896603Z" level=info msg="StopPodSandbox for 
\"3200190c87309f125fb6f4db01012c3f8c9172cd686e39de64273fa9716dbc2d\"" Mar 17 17:42:35.396790 containerd[1471]: time="2025-03-17T17:42:35.394041549Z" level=info msg="TearDown network for sandbox \"3200190c87309f125fb6f4db01012c3f8c9172cd686e39de64273fa9716dbc2d\" successfully" Mar 17 17:42:35.396790 containerd[1471]: time="2025-03-17T17:42:35.394055958Z" level=info msg="StopPodSandbox for \"3200190c87309f125fb6f4db01012c3f8c9172cd686e39de64273fa9716dbc2d\" returns successfully" Mar 17 17:42:35.396790 containerd[1471]: time="2025-03-17T17:42:35.394821719Z" level=info msg="StopPodSandbox for \"9b304f912bb3f9e71af30d53d5c7f3e243f23d0daf308c489c9401f6c28de7dd\"" Mar 17 17:42:35.396790 containerd[1471]: time="2025-03-17T17:42:35.394917148Z" level=info msg="TearDown network for sandbox \"9b304f912bb3f9e71af30d53d5c7f3e243f23d0daf308c489c9401f6c28de7dd\" successfully" Mar 17 17:42:35.396790 containerd[1471]: time="2025-03-17T17:42:35.394929693Z" level=info msg="StopPodSandbox for \"9b304f912bb3f9e71af30d53d5c7f3e243f23d0daf308c489c9401f6c28de7dd\" returns successfully" Mar 17 17:42:35.396790 containerd[1471]: time="2025-03-17T17:42:35.395488356Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-lsjhx,Uid:c4ec38d1-6c2f-488c-8f17-eb6d6aa4990d,Namespace:calico-system,Attempt:3,}" Mar 17 17:42:35.395703 systemd[1]: run-netns-cni\x2d6fa7e4fd\x2deb61\x2dde9c\x2dbc17\x2d71365bb52e40.mount: Deactivated successfully. 
Mar 17 17:42:35.406001 kubelet[2574]: I0317 17:42:35.403643 2574 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d80cb82167a8ad779a3df81a8f4f06d6e87ee54cc9d974d390962e2bbdc2554c" Mar 17 17:42:35.422827 containerd[1471]: time="2025-03-17T17:42:35.422286865Z" level=info msg="StopPodSandbox for \"d80cb82167a8ad779a3df81a8f4f06d6e87ee54cc9d974d390962e2bbdc2554c\"" Mar 17 17:42:35.422827 containerd[1471]: time="2025-03-17T17:42:35.422519354Z" level=info msg="Ensure that sandbox d80cb82167a8ad779a3df81a8f4f06d6e87ee54cc9d974d390962e2bbdc2554c in task-service has been cleanup successfully" Mar 17 17:42:35.431143 systemd[1]: run-netns-cni\x2d473134a5\x2d7f14\x2dc7d9\x2d8633\x2d57356f11d700.mount: Deactivated successfully. Mar 17 17:42:35.439099 containerd[1471]: time="2025-03-17T17:42:35.439018685Z" level=info msg="TearDown network for sandbox \"d80cb82167a8ad779a3df81a8f4f06d6e87ee54cc9d974d390962e2bbdc2554c\" successfully" Mar 17 17:42:35.439099 containerd[1471]: time="2025-03-17T17:42:35.439075959Z" level=info msg="StopPodSandbox for \"d80cb82167a8ad779a3df81a8f4f06d6e87ee54cc9d974d390962e2bbdc2554c\" returns successfully" Mar 17 17:42:35.450881 containerd[1471]: time="2025-03-17T17:42:35.450805070Z" level=info msg="StopPodSandbox for \"ee983380c36cc632ecdbc7f259407df2ede7bbdbeef83c02e8ebc5bf1487f4ae\"" Mar 17 17:42:35.451110 containerd[1471]: time="2025-03-17T17:42:35.450955106Z" level=info msg="TearDown network for sandbox \"ee983380c36cc632ecdbc7f259407df2ede7bbdbeef83c02e8ebc5bf1487f4ae\" successfully" Mar 17 17:42:35.451110 containerd[1471]: time="2025-03-17T17:42:35.450968583Z" level=info msg="StopPodSandbox for \"ee983380c36cc632ecdbc7f259407df2ede7bbdbeef83c02e8ebc5bf1487f4ae\" returns successfully" Mar 17 17:42:35.454052 containerd[1471]: time="2025-03-17T17:42:35.451799233Z" level=info msg="StopPodSandbox for \"c8fa0e68e50c704cec6afec4727d369644d559c68bcae3c557439825300e22db\"" Mar 17 17:42:35.454052 containerd[1471]: 
time="2025-03-17T17:42:35.451899892Z" level=info msg="TearDown network for sandbox \"c8fa0e68e50c704cec6afec4727d369644d559c68bcae3c557439825300e22db\" successfully" Mar 17 17:42:35.454052 containerd[1471]: time="2025-03-17T17:42:35.451913509Z" level=info msg="StopPodSandbox for \"c8fa0e68e50c704cec6afec4727d369644d559c68bcae3c557439825300e22db\" returns successfully" Mar 17 17:42:35.455713 containerd[1471]: time="2025-03-17T17:42:35.455400514Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-8b46cd865-7kxzr,Uid:d2c44e13-0cac-42de-9897-344a553902e4,Namespace:calico-system,Attempt:3,}" Mar 17 17:42:35.601211 containerd[1471]: time="2025-03-17T17:42:35.601032313Z" level=error msg="Failed to destroy network for sandbox \"39172e784b07542f3f1eb6b9c87f2642270767685a89f397784b4fad77185966\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:42:35.603650 containerd[1471]: time="2025-03-17T17:42:35.603615244Z" level=error msg="encountered an error cleaning up failed sandbox \"39172e784b07542f3f1eb6b9c87f2642270767685a89f397784b4fad77185966\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:42:35.603703 containerd[1471]: time="2025-03-17T17:42:35.603685242Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-b6b5f678f-v8plm,Uid:e2974f51-cab2-451b-8e3d-274fab1b872e,Namespace:calico-apiserver,Attempt:3,} failed, error" error="failed to setup network for sandbox \"39172e784b07542f3f1eb6b9c87f2642270767685a89f397784b4fad77185966\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" Mar 17 17:42:35.604482 kubelet[2574]: E0317 17:42:35.603937 2574 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"39172e784b07542f3f1eb6b9c87f2642270767685a89f397784b4fad77185966\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:42:35.604482 kubelet[2574]: E0317 17:42:35.604019 2574 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"39172e784b07542f3f1eb6b9c87f2642270767685a89f397784b4fad77185966\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-b6b5f678f-v8plm" Mar 17 17:42:35.604482 kubelet[2574]: E0317 17:42:35.604047 2574 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"39172e784b07542f3f1eb6b9c87f2642270767685a89f397784b4fad77185966\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-b6b5f678f-v8plm" Mar 17 17:42:35.604648 kubelet[2574]: E0317 17:42:35.604218 2574 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-b6b5f678f-v8plm_calico-apiserver(e2974f51-cab2-451b-8e3d-274fab1b872e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-b6b5f678f-v8plm_calico-apiserver(e2974f51-cab2-451b-8e3d-274fab1b872e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"39172e784b07542f3f1eb6b9c87f2642270767685a89f397784b4fad77185966\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-b6b5f678f-v8plm" podUID="e2974f51-cab2-451b-8e3d-274fab1b872e" Mar 17 17:42:35.993033 containerd[1471]: time="2025-03-17T17:42:35.992802601Z" level=error msg="Failed to destroy network for sandbox \"0892a07a3e06be0e814b5c58351e353b0bb71c9a1f01243516ce3aef745decf8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:42:35.993574 containerd[1471]: time="2025-03-17T17:42:35.993449399Z" level=error msg="Failed to destroy network for sandbox \"40e32737773824eef901e2ee58674aeb2ee43a43656886504306789e3ed3a45b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:42:36.003692 containerd[1471]: time="2025-03-17T17:42:36.003500752Z" level=error msg="encountered an error cleaning up failed sandbox \"0892a07a3e06be0e814b5c58351e353b0bb71c9a1f01243516ce3aef745decf8\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:42:36.003692 containerd[1471]: time="2025-03-17T17:42:36.003587524Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-hg76m,Uid:7b8c3717-fa53-41b7-bf24-e6ae52b8b921,Namespace:kube-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"0892a07a3e06be0e814b5c58351e353b0bb71c9a1f01243516ce3aef745decf8\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:42:36.004538 kubelet[2574]: E0317 17:42:36.004095 2574 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0892a07a3e06be0e814b5c58351e353b0bb71c9a1f01243516ce3aef745decf8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:42:36.004538 kubelet[2574]: E0317 17:42:36.004188 2574 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0892a07a3e06be0e814b5c58351e353b0bb71c9a1f01243516ce3aef745decf8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-hg76m" Mar 17 17:42:36.004538 kubelet[2574]: E0317 17:42:36.004209 2574 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0892a07a3e06be0e814b5c58351e353b0bb71c9a1f01243516ce3aef745decf8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-hg76m" Mar 17 17:42:36.004738 kubelet[2574]: E0317 17:42:36.004282 2574 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-hg76m_kube-system(7b8c3717-fa53-41b7-bf24-e6ae52b8b921)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-hg76m_kube-system(7b8c3717-fa53-41b7-bf24-e6ae52b8b921)\\\": rpc error: code = Unknown desc = failed to setup network 
for sandbox \\\"0892a07a3e06be0e814b5c58351e353b0bb71c9a1f01243516ce3aef745decf8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-hg76m" podUID="7b8c3717-fa53-41b7-bf24-e6ae52b8b921" Mar 17 17:42:36.011938 containerd[1471]: time="2025-03-17T17:42:36.010436270Z" level=error msg="encountered an error cleaning up failed sandbox \"40e32737773824eef901e2ee58674aeb2ee43a43656886504306789e3ed3a45b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:42:36.011938 containerd[1471]: time="2025-03-17T17:42:36.010578861Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-b7tm9,Uid:5d480ab2-f501-4510-9ec1-d051e760e88d,Namespace:kube-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"40e32737773824eef901e2ee58674aeb2ee43a43656886504306789e3ed3a45b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:42:36.012256 kubelet[2574]: E0317 17:42:36.011023 2574 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"40e32737773824eef901e2ee58674aeb2ee43a43656886504306789e3ed3a45b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:42:36.012256 kubelet[2574]: E0317 17:42:36.011115 2574 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"40e32737773824eef901e2ee58674aeb2ee43a43656886504306789e3ed3a45b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-b7tm9" Mar 17 17:42:36.012256 kubelet[2574]: E0317 17:42:36.011139 2574 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"40e32737773824eef901e2ee58674aeb2ee43a43656886504306789e3ed3a45b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-b7tm9" Mar 17 17:42:36.012400 kubelet[2574]: E0317 17:42:36.011276 2574 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-b7tm9_kube-system(5d480ab2-f501-4510-9ec1-d051e760e88d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-b7tm9_kube-system(5d480ab2-f501-4510-9ec1-d051e760e88d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"40e32737773824eef901e2ee58674aeb2ee43a43656886504306789e3ed3a45b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-b7tm9" podUID="5d480ab2-f501-4510-9ec1-d051e760e88d" Mar 17 17:42:36.082440 containerd[1471]: time="2025-03-17T17:42:36.082150356Z" level=error msg="Failed to destroy network for sandbox \"93863efd1f859db1a3509f2150ea53b31c09d80e6e1edb4b6f21bd6b060a1c8d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:42:36.085349 
containerd[1471]: time="2025-03-17T17:42:36.085306099Z" level=error msg="encountered an error cleaning up failed sandbox \"93863efd1f859db1a3509f2150ea53b31c09d80e6e1edb4b6f21bd6b060a1c8d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:42:36.085478 containerd[1471]: time="2025-03-17T17:42:36.085389223Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-lsjhx,Uid:c4ec38d1-6c2f-488c-8f17-eb6d6aa4990d,Namespace:calico-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"93863efd1f859db1a3509f2150ea53b31c09d80e6e1edb4b6f21bd6b060a1c8d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:42:36.088994 kubelet[2574]: E0317 17:42:36.085672 2574 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"93863efd1f859db1a3509f2150ea53b31c09d80e6e1edb4b6f21bd6b060a1c8d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:42:36.088994 kubelet[2574]: E0317 17:42:36.085745 2574 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"93863efd1f859db1a3509f2150ea53b31c09d80e6e1edb4b6f21bd6b060a1c8d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-lsjhx" Mar 17 17:42:36.088994 kubelet[2574]: E0317 17:42:36.085769 2574 kuberuntime_manager.go:1168] 
"CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"93863efd1f859db1a3509f2150ea53b31c09d80e6e1edb4b6f21bd6b060a1c8d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-lsjhx" Mar 17 17:42:36.089175 kubelet[2574]: E0317 17:42:36.085812 2574 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-lsjhx_calico-system(c4ec38d1-6c2f-488c-8f17-eb6d6aa4990d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-lsjhx_calico-system(c4ec38d1-6c2f-488c-8f17-eb6d6aa4990d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"93863efd1f859db1a3509f2150ea53b31c09d80e6e1edb4b6f21bd6b060a1c8d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-lsjhx" podUID="c4ec38d1-6c2f-488c-8f17-eb6d6aa4990d" Mar 17 17:42:36.092641 containerd[1471]: time="2025-03-17T17:42:36.092566507Z" level=error msg="Failed to destroy network for sandbox \"79fb2b1bbc5e9621a87a0573fbaa0096c6eb6ca03a40900b2e0431acc0355602\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:42:36.098147 containerd[1471]: time="2025-03-17T17:42:36.098101763Z" level=error msg="encountered an error cleaning up failed sandbox \"79fb2b1bbc5e9621a87a0573fbaa0096c6eb6ca03a40900b2e0431acc0355602\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has 
mounted /var/lib/calico/" Mar 17 17:42:36.098347 containerd[1471]: time="2025-03-17T17:42:36.098315835Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-b6b5f678f-lhw82,Uid:e61bbec9-65b1-4228-aa40-669eba7841ea,Namespace:calico-apiserver,Attempt:3,} failed, error" error="failed to setup network for sandbox \"79fb2b1bbc5e9621a87a0573fbaa0096c6eb6ca03a40900b2e0431acc0355602\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:42:36.102986 kubelet[2574]: E0317 17:42:36.098675 2574 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"79fb2b1bbc5e9621a87a0573fbaa0096c6eb6ca03a40900b2e0431acc0355602\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:42:36.102986 kubelet[2574]: E0317 17:42:36.098737 2574 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"79fb2b1bbc5e9621a87a0573fbaa0096c6eb6ca03a40900b2e0431acc0355602\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-b6b5f678f-lhw82" Mar 17 17:42:36.102986 kubelet[2574]: E0317 17:42:36.098760 2574 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"79fb2b1bbc5e9621a87a0573fbaa0096c6eb6ca03a40900b2e0431acc0355602\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-apiserver/calico-apiserver-b6b5f678f-lhw82" Mar 17 17:42:36.103362 kubelet[2574]: E0317 17:42:36.098808 2574 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-b6b5f678f-lhw82_calico-apiserver(e61bbec9-65b1-4228-aa40-669eba7841ea)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-b6b5f678f-lhw82_calico-apiserver(e61bbec9-65b1-4228-aa40-669eba7841ea)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"79fb2b1bbc5e9621a87a0573fbaa0096c6eb6ca03a40900b2e0431acc0355602\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-b6b5f678f-lhw82" podUID="e61bbec9-65b1-4228-aa40-669eba7841ea" Mar 17 17:42:36.175483 containerd[1471]: time="2025-03-17T17:42:36.175417696Z" level=error msg="Failed to destroy network for sandbox \"223d3bb6fe2624ebed37cf69cc759ae6ebc66e042dd9c5d2a70ed1873cb21051\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:42:36.177538 containerd[1471]: time="2025-03-17T17:42:36.177305159Z" level=error msg="encountered an error cleaning up failed sandbox \"223d3bb6fe2624ebed37cf69cc759ae6ebc66e042dd9c5d2a70ed1873cb21051\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:42:36.177538 containerd[1471]: time="2025-03-17T17:42:36.177381409Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-8b46cd865-7kxzr,Uid:d2c44e13-0cac-42de-9897-344a553902e4,Namespace:calico-system,Attempt:3,} failed, error" 
error="failed to setup network for sandbox \"223d3bb6fe2624ebed37cf69cc759ae6ebc66e042dd9c5d2a70ed1873cb21051\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:42:36.178285 kubelet[2574]: E0317 17:42:36.177846 2574 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"223d3bb6fe2624ebed37cf69cc759ae6ebc66e042dd9c5d2a70ed1873cb21051\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:42:36.178285 kubelet[2574]: E0317 17:42:36.177931 2574 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"223d3bb6fe2624ebed37cf69cc759ae6ebc66e042dd9c5d2a70ed1873cb21051\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-8b46cd865-7kxzr" Mar 17 17:42:36.178285 kubelet[2574]: E0317 17:42:36.177957 2574 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"223d3bb6fe2624ebed37cf69cc759ae6ebc66e042dd9c5d2a70ed1873cb21051\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-8b46cd865-7kxzr" Mar 17 17:42:36.178511 kubelet[2574]: E0317 17:42:36.178006 2574 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-8b46cd865-7kxzr_calico-system(d2c44e13-0cac-42de-9897-344a553902e4)\" 
with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-8b46cd865-7kxzr_calico-system(d2c44e13-0cac-42de-9897-344a553902e4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"223d3bb6fe2624ebed37cf69cc759ae6ebc66e042dd9c5d2a70ed1873cb21051\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-8b46cd865-7kxzr" podUID="d2c44e13-0cac-42de-9897-344a553902e4" Mar 17 17:42:36.412509 kubelet[2574]: I0317 17:42:36.408900 2574 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 17 17:42:36.412509 kubelet[2574]: E0317 17:42:36.409284 2574 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:42:36.435489 kubelet[2574]: I0317 17:42:36.435407 2574 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="39172e784b07542f3f1eb6b9c87f2642270767685a89f397784b4fad77185966" Mar 17 17:42:36.441902 containerd[1471]: time="2025-03-17T17:42:36.436911142Z" level=info msg="StopPodSandbox for \"39172e784b07542f3f1eb6b9c87f2642270767685a89f397784b4fad77185966\"" Mar 17 17:42:36.441902 containerd[1471]: time="2025-03-17T17:42:36.437195893Z" level=info msg="Ensure that sandbox 39172e784b07542f3f1eb6b9c87f2642270767685a89f397784b4fad77185966 in task-service has been cleanup successfully" Mar 17 17:42:36.441902 containerd[1471]: time="2025-03-17T17:42:36.438424796Z" level=info msg="TearDown network for sandbox \"39172e784b07542f3f1eb6b9c87f2642270767685a89f397784b4fad77185966\" successfully" Mar 17 17:42:36.441902 containerd[1471]: time="2025-03-17T17:42:36.438443654Z" level=info msg="StopPodSandbox for \"39172e784b07542f3f1eb6b9c87f2642270767685a89f397784b4fad77185966\" returns 
successfully" Mar 17 17:42:36.449048 containerd[1471]: time="2025-03-17T17:42:36.442497598Z" level=info msg="StopPodSandbox for \"04898b8b1a06024d8a321645a3aedaf17f255e75b4e26b3816a2c1df841abfaf\"" Mar 17 17:42:36.449048 containerd[1471]: time="2025-03-17T17:42:36.442905072Z" level=info msg="TearDown network for sandbox \"04898b8b1a06024d8a321645a3aedaf17f255e75b4e26b3816a2c1df841abfaf\" successfully" Mar 17 17:42:36.449048 containerd[1471]: time="2025-03-17T17:42:36.442975702Z" level=info msg="StopPodSandbox for \"04898b8b1a06024d8a321645a3aedaf17f255e75b4e26b3816a2c1df841abfaf\" returns successfully" Mar 17 17:42:36.449048 containerd[1471]: time="2025-03-17T17:42:36.443450518Z" level=info msg="StopPodSandbox for \"ea14d6d0447dbc5080d47a26d58a2bfba80bc5ebe62085775ca7fc4db9ecf3b3\"" Mar 17 17:42:36.449048 containerd[1471]: time="2025-03-17T17:42:36.443541147Z" level=info msg="TearDown network for sandbox \"ea14d6d0447dbc5080d47a26d58a2bfba80bc5ebe62085775ca7fc4db9ecf3b3\" successfully" Mar 17 17:42:36.449048 containerd[1471]: time="2025-03-17T17:42:36.443553571Z" level=info msg="StopPodSandbox for \"ea14d6d0447dbc5080d47a26d58a2bfba80bc5ebe62085775ca7fc4db9ecf3b3\" returns successfully" Mar 17 17:42:36.453027 kubelet[2574]: I0317 17:42:36.452966 2574 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="40e32737773824eef901e2ee58674aeb2ee43a43656886504306789e3ed3a45b" Mar 17 17:42:36.453655 systemd[1]: run-netns-cni\x2de99b7677\x2da8c3\x2d56a2\x2d464f\x2dbbc543f7dc41.mount: Deactivated successfully. 
Mar 17 17:42:36.456145 containerd[1471]: time="2025-03-17T17:42:36.455805573Z" level=info msg="StopPodSandbox for \"40e32737773824eef901e2ee58674aeb2ee43a43656886504306789e3ed3a45b\"" Mar 17 17:42:36.456145 containerd[1471]: time="2025-03-17T17:42:36.456042831Z" level=info msg="Ensure that sandbox 40e32737773824eef901e2ee58674aeb2ee43a43656886504306789e3ed3a45b in task-service has been cleanup successfully" Mar 17 17:42:36.461945 systemd[1]: run-netns-cni\x2d2a9f5c6e\x2d5c01\x2d85d3\x2d5dc9\x2d2bd1366c60e3.mount: Deactivated successfully. Mar 17 17:42:36.465794 containerd[1471]: time="2025-03-17T17:42:36.465626921Z" level=info msg="TearDown network for sandbox \"40e32737773824eef901e2ee58674aeb2ee43a43656886504306789e3ed3a45b\" successfully" Mar 17 17:42:36.465794 containerd[1471]: time="2025-03-17T17:42:36.465670337Z" level=info msg="StopPodSandbox for \"40e32737773824eef901e2ee58674aeb2ee43a43656886504306789e3ed3a45b\" returns successfully" Mar 17 17:42:36.468252 containerd[1471]: time="2025-03-17T17:42:36.467103073Z" level=info msg="StopPodSandbox for \"8288c0d6c6206ca6e3bb0341d82c9b6dfe98446714195c494e41c5765c36d080\"" Mar 17 17:42:36.468252 containerd[1471]: time="2025-03-17T17:42:36.467258640Z" level=info msg="TearDown network for sandbox \"8288c0d6c6206ca6e3bb0341d82c9b6dfe98446714195c494e41c5765c36d080\" successfully" Mar 17 17:42:36.468252 containerd[1471]: time="2025-03-17T17:42:36.467271585Z" level=info msg="StopPodSandbox for \"8288c0d6c6206ca6e3bb0341d82c9b6dfe98446714195c494e41c5765c36d080\" returns successfully" Mar 17 17:42:36.474604 containerd[1471]: time="2025-03-17T17:42:36.474517263Z" level=info msg="StopPodSandbox for \"a86796bc073539410fecfeb3dec78c8c6ed265998c9643aa1e3a054f1b65a9d5\"" Mar 17 17:42:36.475982 containerd[1471]: time="2025-03-17T17:42:36.474681748Z" level=info msg="TearDown network for sandbox \"a86796bc073539410fecfeb3dec78c8c6ed265998c9643aa1e3a054f1b65a9d5\" successfully" Mar 17 17:42:36.475982 containerd[1471]: 
time="2025-03-17T17:42:36.474698180Z" level=info msg="StopPodSandbox for \"a86796bc073539410fecfeb3dec78c8c6ed265998c9643aa1e3a054f1b65a9d5\" returns successfully" Mar 17 17:42:36.475982 containerd[1471]: time="2025-03-17T17:42:36.475262634Z" level=info msg="StopPodSandbox for \"edb2903b482639be597b82a24a152d41c24f51e7acac61283260a1d0100a5d4c\"" Mar 17 17:42:36.475982 containerd[1471]: time="2025-03-17T17:42:36.475353824Z" level=info msg="TearDown network for sandbox \"edb2903b482639be597b82a24a152d41c24f51e7acac61283260a1d0100a5d4c\" successfully" Mar 17 17:42:36.475982 containerd[1471]: time="2025-03-17T17:42:36.475365336Z" level=info msg="StopPodSandbox for \"edb2903b482639be597b82a24a152d41c24f51e7acac61283260a1d0100a5d4c\" returns successfully" Mar 17 17:42:36.476184 kubelet[2574]: E0317 17:42:36.475790 2574 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:42:36.477525 containerd[1471]: time="2025-03-17T17:42:36.475824081Z" level=info msg="StopPodSandbox for \"1a12983a8841ddeac31ec562c0ce3b12e75f4c40488315d12dd610812c63ac70\"" Mar 17 17:42:36.478139 containerd[1471]: time="2025-03-17T17:42:36.477823985Z" level=info msg="TearDown network for sandbox \"1a12983a8841ddeac31ec562c0ce3b12e75f4c40488315d12dd610812c63ac70\" successfully" Mar 17 17:42:36.478139 containerd[1471]: time="2025-03-17T17:42:36.478016384Z" level=info msg="StopPodSandbox for \"1a12983a8841ddeac31ec562c0ce3b12e75f4c40488315d12dd610812c63ac70\" returns successfully" Mar 17 17:42:36.486023 containerd[1471]: time="2025-03-17T17:42:36.478410281Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-b7tm9,Uid:5d480ab2-f501-4510-9ec1-d051e760e88d,Namespace:kube-system,Attempt:4,}" Mar 17 17:42:36.486023 containerd[1471]: time="2025-03-17T17:42:36.478722596Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-b6b5f678f-v8plm,Uid:e2974f51-cab2-451b-8e3d-274fab1b872e,Namespace:calico-apiserver,Attempt:4,}" Mar 17 17:42:36.486023 containerd[1471]: time="2025-03-17T17:42:36.479978955Z" level=info msg="StopPodSandbox for \"93863efd1f859db1a3509f2150ea53b31c09d80e6e1edb4b6f21bd6b060a1c8d\"" Mar 17 17:42:36.486023 containerd[1471]: time="2025-03-17T17:42:36.480184320Z" level=info msg="Ensure that sandbox 93863efd1f859db1a3509f2150ea53b31c09d80e6e1edb4b6f21bd6b060a1c8d in task-service has been cleanup successfully" Mar 17 17:42:36.486269 kubelet[2574]: I0317 17:42:36.479367 2574 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="93863efd1f859db1a3509f2150ea53b31c09d80e6e1edb4b6f21bd6b060a1c8d" Mar 17 17:42:36.488275 systemd[1]: run-netns-cni\x2d508e8927\x2d13a6\x2dbaa8\x2ddab0\x2dd13ea67fa802.mount: Deactivated successfully. Mar 17 17:42:36.498297 containerd[1471]: time="2025-03-17T17:42:36.498181010Z" level=info msg="TearDown network for sandbox \"93863efd1f859db1a3509f2150ea53b31c09d80e6e1edb4b6f21bd6b060a1c8d\" successfully" Mar 17 17:42:36.498297 containerd[1471]: time="2025-03-17T17:42:36.498270918Z" level=info msg="StopPodSandbox for \"93863efd1f859db1a3509f2150ea53b31c09d80e6e1edb4b6f21bd6b060a1c8d\" returns successfully" Mar 17 17:42:36.499382 containerd[1471]: time="2025-03-17T17:42:36.499348113Z" level=info msg="StopPodSandbox for \"743e6592f769f9d664d43beeff4bdd5bb0cb253eef3f6ca03ad374fbd8378ca7\"" Mar 17 17:42:36.499506 containerd[1471]: time="2025-03-17T17:42:36.499474672Z" level=info msg="TearDown network for sandbox \"743e6592f769f9d664d43beeff4bdd5bb0cb253eef3f6ca03ad374fbd8378ca7\" successfully" Mar 17 17:42:36.499506 containerd[1471]: time="2025-03-17T17:42:36.499500142Z" level=info msg="StopPodSandbox for \"743e6592f769f9d664d43beeff4bdd5bb0cb253eef3f6ca03ad374fbd8378ca7\" returns successfully" Mar 17 17:42:36.505277 containerd[1471]: time="2025-03-17T17:42:36.504321500Z" level=info 
msg="StopPodSandbox for \"3200190c87309f125fb6f4db01012c3f8c9172cd686e39de64273fa9716dbc2d\"" Mar 17 17:42:36.505277 containerd[1471]: time="2025-03-17T17:42:36.504441086Z" level=info msg="TearDown network for sandbox \"3200190c87309f125fb6f4db01012c3f8c9172cd686e39de64273fa9716dbc2d\" successfully" Mar 17 17:42:36.505277 containerd[1471]: time="2025-03-17T17:42:36.504455644Z" level=info msg="StopPodSandbox for \"3200190c87309f125fb6f4db01012c3f8c9172cd686e39de64273fa9716dbc2d\" returns successfully" Mar 17 17:42:36.510571 containerd[1471]: time="2025-03-17T17:42:36.507968081Z" level=info msg="StopPodSandbox for \"9b304f912bb3f9e71af30d53d5c7f3e243f23d0daf308c489c9401f6c28de7dd\"" Mar 17 17:42:36.510571 containerd[1471]: time="2025-03-17T17:42:36.508124630Z" level=info msg="TearDown network for sandbox \"9b304f912bb3f9e71af30d53d5c7f3e243f23d0daf308c489c9401f6c28de7dd\" successfully" Mar 17 17:42:36.510571 containerd[1471]: time="2025-03-17T17:42:36.508175310Z" level=info msg="StopPodSandbox for \"9b304f912bb3f9e71af30d53d5c7f3e243f23d0daf308c489c9401f6c28de7dd\" returns successfully" Mar 17 17:42:36.510571 containerd[1471]: time="2025-03-17T17:42:36.509817008Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-lsjhx,Uid:c4ec38d1-6c2f-488c-8f17-eb6d6aa4990d,Namespace:calico-system,Attempt:4,}" Mar 17 17:42:36.510571 containerd[1471]: time="2025-03-17T17:42:36.510159313Z" level=info msg="StopPodSandbox for \"79fb2b1bbc5e9621a87a0573fbaa0096c6eb6ca03a40900b2e0431acc0355602\"" Mar 17 17:42:36.510571 containerd[1471]: time="2025-03-17T17:42:36.510352153Z" level=info msg="Ensure that sandbox 79fb2b1bbc5e9621a87a0573fbaa0096c6eb6ca03a40900b2e0431acc0355602 in task-service has been cleanup successfully" Mar 17 17:42:36.510973 kubelet[2574]: I0317 17:42:36.508758 2574 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="79fb2b1bbc5e9621a87a0573fbaa0096c6eb6ca03a40900b2e0431acc0355602" Mar 17 17:42:36.512766 
kubelet[2574]: I0317 17:42:36.512744 2574 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0892a07a3e06be0e814b5c58351e353b0bb71c9a1f01243516ce3aef745decf8" Mar 17 17:42:36.513991 containerd[1471]: time="2025-03-17T17:42:36.513677731Z" level=info msg="StopPodSandbox for \"0892a07a3e06be0e814b5c58351e353b0bb71c9a1f01243516ce3aef745decf8\"" Mar 17 17:42:36.513991 containerd[1471]: time="2025-03-17T17:42:36.513858428Z" level=info msg="Ensure that sandbox 0892a07a3e06be0e814b5c58351e353b0bb71c9a1f01243516ce3aef745decf8 in task-service has been cleanup successfully" Mar 17 17:42:36.515209 kubelet[2574]: I0317 17:42:36.515190 2574 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="223d3bb6fe2624ebed37cf69cc759ae6ebc66e042dd9c5d2a70ed1873cb21051" Mar 17 17:42:36.515651 containerd[1471]: time="2025-03-17T17:42:36.515626555Z" level=info msg="StopPodSandbox for \"223d3bb6fe2624ebed37cf69cc759ae6ebc66e042dd9c5d2a70ed1873cb21051\"" Mar 17 17:42:36.515907 containerd[1471]: time="2025-03-17T17:42:36.515870406Z" level=info msg="Ensure that sandbox 223d3bb6fe2624ebed37cf69cc759ae6ebc66e042dd9c5d2a70ed1873cb21051 in task-service has been cleanup successfully" Mar 17 17:42:36.521637 systemd[1]: run-netns-cni\x2d16df7875\x2dce77\x2d3dc9\x2d79f6\x2d0d6cbcd2e4f6.mount: Deactivated successfully. 
Mar 17 17:42:36.522294 containerd[1471]: time="2025-03-17T17:42:36.522162364Z" level=info msg="TearDown network for sandbox \"79fb2b1bbc5e9621a87a0573fbaa0096c6eb6ca03a40900b2e0431acc0355602\" successfully" Mar 17 17:42:36.522294 containerd[1471]: time="2025-03-17T17:42:36.522197032Z" level=info msg="StopPodSandbox for \"79fb2b1bbc5e9621a87a0573fbaa0096c6eb6ca03a40900b2e0431acc0355602\" returns successfully" Mar 17 17:42:36.523012 containerd[1471]: time="2025-03-17T17:42:36.522835661Z" level=info msg="StopPodSandbox for \"affd646c561af90bafd7e4b900a961dd15025345077cff03d0e35877d76fd6df\"" Mar 17 17:42:36.523012 containerd[1471]: time="2025-03-17T17:42:36.522952441Z" level=info msg="TearDown network for sandbox \"affd646c561af90bafd7e4b900a961dd15025345077cff03d0e35877d76fd6df\" successfully" Mar 17 17:42:36.523012 containerd[1471]: time="2025-03-17T17:42:36.522964235Z" level=info msg="StopPodSandbox for \"affd646c561af90bafd7e4b900a961dd15025345077cff03d0e35877d76fd6df\" returns successfully" Mar 17 17:42:36.523560 containerd[1471]: time="2025-03-17T17:42:36.523301139Z" level=info msg="StopPodSandbox for \"752881363b6f73c704f3c8518dab695dfd31be7b9b79cdde65eb0c4d95df3791\"" Mar 17 17:42:36.523560 containerd[1471]: time="2025-03-17T17:42:36.523406969Z" level=info msg="TearDown network for sandbox \"752881363b6f73c704f3c8518dab695dfd31be7b9b79cdde65eb0c4d95df3791\" successfully" Mar 17 17:42:36.523560 containerd[1471]: time="2025-03-17T17:42:36.523418671Z" level=info msg="StopPodSandbox for \"752881363b6f73c704f3c8518dab695dfd31be7b9b79cdde65eb0c4d95df3791\" returns successfully" Mar 17 17:42:36.523861 containerd[1471]: time="2025-03-17T17:42:36.523722240Z" level=info msg="StopPodSandbox for \"029c4006ecd1c4f4a1ac53934cfb4e4722c8c52eefcf8cff976bcd131fe5d151\"" Mar 17 17:42:36.523861 containerd[1471]: time="2025-03-17T17:42:36.523809382Z" level=info msg="TearDown network for sandbox \"029c4006ecd1c4f4a1ac53934cfb4e4722c8c52eefcf8cff976bcd131fe5d151\" successfully" Mar 
17 17:42:36.523861 containerd[1471]: time="2025-03-17T17:42:36.523820293Z" level=info msg="StopPodSandbox for \"029c4006ecd1c4f4a1ac53934cfb4e4722c8c52eefcf8cff976bcd131fe5d151\" returns successfully" Mar 17 17:42:36.524039 containerd[1471]: time="2025-03-17T17:42:36.524019486Z" level=info msg="TearDown network for sandbox \"223d3bb6fe2624ebed37cf69cc759ae6ebc66e042dd9c5d2a70ed1873cb21051\" successfully" Mar 17 17:42:36.524123 containerd[1471]: time="2025-03-17T17:42:36.524106939Z" level=info msg="StopPodSandbox for \"223d3bb6fe2624ebed37cf69cc759ae6ebc66e042dd9c5d2a70ed1873cb21051\" returns successfully" Mar 17 17:42:36.524291 containerd[1471]: time="2025-03-17T17:42:36.524237566Z" level=info msg="TearDown network for sandbox \"0892a07a3e06be0e814b5c58351e353b0bb71c9a1f01243516ce3aef745decf8\" successfully" Mar 17 17:42:36.524291 containerd[1471]: time="2025-03-17T17:42:36.524254800Z" level=info msg="StopPodSandbox for \"0892a07a3e06be0e814b5c58351e353b0bb71c9a1f01243516ce3aef745decf8\" returns successfully" Mar 17 17:42:36.526675 containerd[1471]: time="2025-03-17T17:42:36.526640455Z" level=info msg="StopPodSandbox for \"d80cb82167a8ad779a3df81a8f4f06d6e87ee54cc9d974d390962e2bbdc2554c\"" Mar 17 17:42:36.526893 containerd[1471]: time="2025-03-17T17:42:36.526765281Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-b6b5f678f-lhw82,Uid:e61bbec9-65b1-4228-aa40-669eba7841ea,Namespace:calico-apiserver,Attempt:4,}" Mar 17 17:42:36.527013 containerd[1471]: time="2025-03-17T17:42:36.526684672Z" level=info msg="StopPodSandbox for \"f595336b40a5a0477c6557d8c88617834e5c5758a2435ab0f027d22a0d8aac32\"" Mar 17 17:42:36.527176 containerd[1471]: time="2025-03-17T17:42:36.527095512Z" level=info msg="TearDown network for sandbox \"f595336b40a5a0477c6557d8c88617834e5c5758a2435ab0f027d22a0d8aac32\" successfully" Mar 17 17:42:36.527176 containerd[1471]: time="2025-03-17T17:42:36.527108037Z" level=info msg="TearDown network for sandbox 
\"d80cb82167a8ad779a3df81a8f4f06d6e87ee54cc9d974d390962e2bbdc2554c\" successfully" Mar 17 17:42:36.527176 containerd[1471]: time="2025-03-17T17:42:36.527129850Z" level=info msg="StopPodSandbox for \"d80cb82167a8ad779a3df81a8f4f06d6e87ee54cc9d974d390962e2bbdc2554c\" returns successfully" Mar 17 17:42:36.528232 containerd[1471]: time="2025-03-17T17:42:36.527110472Z" level=info msg="StopPodSandbox for \"f595336b40a5a0477c6557d8c88617834e5c5758a2435ab0f027d22a0d8aac32\" returns successfully" Mar 17 17:42:36.529709 containerd[1471]: time="2025-03-17T17:42:36.529683486Z" level=info msg="StopPodSandbox for \"537027ffa1bfbd16c68902c78119d95aa49538ff5449faa654586af4707e9042\"" Mar 17 17:42:36.529969 containerd[1471]: time="2025-03-17T17:42:36.529949010Z" level=info msg="TearDown network for sandbox \"537027ffa1bfbd16c68902c78119d95aa49538ff5449faa654586af4707e9042\" successfully" Mar 17 17:42:36.530038 containerd[1471]: time="2025-03-17T17:42:36.530022915Z" level=info msg="StopPodSandbox for \"537027ffa1bfbd16c68902c78119d95aa49538ff5449faa654586af4707e9042\" returns successfully" Mar 17 17:42:36.531498 containerd[1471]: time="2025-03-17T17:42:36.531013188Z" level=info msg="StopPodSandbox for \"2e196fcd9f9243b223018639d31a93ff3f9775e305ad1408354381cea90249e5\"" Mar 17 17:42:36.531498 containerd[1471]: time="2025-03-17T17:42:36.531013328Z" level=info msg="StopPodSandbox for \"ee983380c36cc632ecdbc7f259407df2ede7bbdbeef83c02e8ebc5bf1487f4ae\"" Mar 17 17:42:36.531498 containerd[1471]: time="2025-03-17T17:42:36.531261819Z" level=info msg="TearDown network for sandbox \"ee983380c36cc632ecdbc7f259407df2ede7bbdbeef83c02e8ebc5bf1487f4ae\" successfully" Mar 17 17:42:36.531498 containerd[1471]: time="2025-03-17T17:42:36.531275065Z" level=info msg="StopPodSandbox for \"ee983380c36cc632ecdbc7f259407df2ede7bbdbeef83c02e8ebc5bf1487f4ae\" returns successfully" Mar 17 17:42:36.531498 containerd[1471]: time="2025-03-17T17:42:36.531176981Z" level=info msg="TearDown network for sandbox 
\"2e196fcd9f9243b223018639d31a93ff3f9775e305ad1408354381cea90249e5\" successfully" Mar 17 17:42:36.531498 containerd[1471]: time="2025-03-17T17:42:36.531314733Z" level=info msg="StopPodSandbox for \"2e196fcd9f9243b223018639d31a93ff3f9775e305ad1408354381cea90249e5\" returns successfully" Mar 17 17:42:36.531992 kubelet[2574]: E0317 17:42:36.531818 2574 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:42:36.534150 containerd[1471]: time="2025-03-17T17:42:36.533694556Z" level=info msg="StopPodSandbox for \"c8fa0e68e50c704cec6afec4727d369644d559c68bcae3c557439825300e22db\"" Mar 17 17:42:36.534150 containerd[1471]: time="2025-03-17T17:42:36.533778151Z" level=info msg="TearDown network for sandbox \"c8fa0e68e50c704cec6afec4727d369644d559c68bcae3c557439825300e22db\" successfully" Mar 17 17:42:36.534150 containerd[1471]: time="2025-03-17T17:42:36.533789703Z" level=info msg="StopPodSandbox for \"c8fa0e68e50c704cec6afec4727d369644d559c68bcae3c557439825300e22db\" returns successfully" Mar 17 17:42:36.534150 containerd[1471]: time="2025-03-17T17:42:36.533871324Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-hg76m,Uid:7b8c3717-fa53-41b7-bf24-e6ae52b8b921,Namespace:kube-system,Attempt:4,}" Mar 17 17:42:36.536282 containerd[1471]: time="2025-03-17T17:42:36.535020821Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-8b46cd865-7kxzr,Uid:d2c44e13-0cac-42de-9897-344a553902e4,Namespace:calico-system,Attempt:4,}" Mar 17 17:42:36.975223 containerd[1471]: time="2025-03-17T17:42:36.975167615Z" level=error msg="Failed to destroy network for sandbox \"cdfcb6e581a338e6cb320c08b196e4660303b9ea6abc9f2af443689a421c1b13\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" Mar 17 17:42:36.976321 containerd[1471]: time="2025-03-17T17:42:36.976047921Z" level=error msg="encountered an error cleaning up failed sandbox \"cdfcb6e581a338e6cb320c08b196e4660303b9ea6abc9f2af443689a421c1b13\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:42:36.976321 containerd[1471]: time="2025-03-17T17:42:36.976165493Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-b7tm9,Uid:5d480ab2-f501-4510-9ec1-d051e760e88d,Namespace:kube-system,Attempt:4,} failed, error" error="failed to setup network for sandbox \"cdfcb6e581a338e6cb320c08b196e4660303b9ea6abc9f2af443689a421c1b13\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:42:36.976490 kubelet[2574]: E0317 17:42:36.976418 2574 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cdfcb6e581a338e6cb320c08b196e4660303b9ea6abc9f2af443689a421c1b13\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:42:36.976665 kubelet[2574]: E0317 17:42:36.976489 2574 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cdfcb6e581a338e6cb320c08b196e4660303b9ea6abc9f2af443689a421c1b13\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-b7tm9" Mar 17 17:42:36.976665 kubelet[2574]: E0317 17:42:36.976520 
2574 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cdfcb6e581a338e6cb320c08b196e4660303b9ea6abc9f2af443689a421c1b13\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-b7tm9" Mar 17 17:42:36.976665 kubelet[2574]: E0317 17:42:36.976586 2574 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-b7tm9_kube-system(5d480ab2-f501-4510-9ec1-d051e760e88d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-b7tm9_kube-system(5d480ab2-f501-4510-9ec1-d051e760e88d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"cdfcb6e581a338e6cb320c08b196e4660303b9ea6abc9f2af443689a421c1b13\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-b7tm9" podUID="5d480ab2-f501-4510-9ec1-d051e760e88d" Mar 17 17:42:37.018006 containerd[1471]: time="2025-03-17T17:42:37.013881560Z" level=error msg="Failed to destroy network for sandbox \"4d0bb67187b44524b49f0720b91d54ad59f0d422d33a604c2f27a5360d15d8b6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:42:37.018421 containerd[1471]: time="2025-03-17T17:42:37.018368547Z" level=error msg="encountered an error cleaning up failed sandbox \"4d0bb67187b44524b49f0720b91d54ad59f0d422d33a604c2f27a5360d15d8b6\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:42:37.018500 containerd[1471]: time="2025-03-17T17:42:37.018462292Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-b6b5f678f-v8plm,Uid:e2974f51-cab2-451b-8e3d-274fab1b872e,Namespace:calico-apiserver,Attempt:4,} failed, error" error="failed to setup network for sandbox \"4d0bb67187b44524b49f0720b91d54ad59f0d422d33a604c2f27a5360d15d8b6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:42:37.018776 kubelet[2574]: E0317 17:42:37.018730 2574 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4d0bb67187b44524b49f0720b91d54ad59f0d422d33a604c2f27a5360d15d8b6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:42:37.018834 kubelet[2574]: E0317 17:42:37.018807 2574 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4d0bb67187b44524b49f0720b91d54ad59f0d422d33a604c2f27a5360d15d8b6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-b6b5f678f-v8plm" Mar 17 17:42:37.018878 kubelet[2574]: E0317 17:42:37.018835 2574 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4d0bb67187b44524b49f0720b91d54ad59f0d422d33a604c2f27a5360d15d8b6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" pod="calico-apiserver/calico-apiserver-b6b5f678f-v8plm" Mar 17 17:42:37.018911 kubelet[2574]: E0317 17:42:37.018884 2574 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-b6b5f678f-v8plm_calico-apiserver(e2974f51-cab2-451b-8e3d-274fab1b872e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-b6b5f678f-v8plm_calico-apiserver(e2974f51-cab2-451b-8e3d-274fab1b872e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4d0bb67187b44524b49f0720b91d54ad59f0d422d33a604c2f27a5360d15d8b6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-b6b5f678f-v8plm" podUID="e2974f51-cab2-451b-8e3d-274fab1b872e" Mar 17 17:42:37.050632 containerd[1471]: time="2025-03-17T17:42:37.050434632Z" level=error msg="Failed to destroy network for sandbox \"fa980406b9a3398d166612d25afd48da2b1b358072669035c994ca219f3b09f8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:42:37.051202 containerd[1471]: time="2025-03-17T17:42:37.051173397Z" level=error msg="encountered an error cleaning up failed sandbox \"fa980406b9a3398d166612d25afd48da2b1b358072669035c994ca219f3b09f8\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:42:37.051354 containerd[1471]: time="2025-03-17T17:42:37.051328273Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-b6b5f678f-lhw82,Uid:e61bbec9-65b1-4228-aa40-669eba7841ea,Namespace:calico-apiserver,Attempt:4,} 
failed, error" error="failed to setup network for sandbox \"fa980406b9a3398d166612d25afd48da2b1b358072669035c994ca219f3b09f8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:42:37.056625 kubelet[2574]: E0317 17:42:37.052164 2574 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fa980406b9a3398d166612d25afd48da2b1b358072669035c994ca219f3b09f8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:42:37.056625 kubelet[2574]: E0317 17:42:37.052244 2574 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fa980406b9a3398d166612d25afd48da2b1b358072669035c994ca219f3b09f8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-b6b5f678f-lhw82" Mar 17 17:42:37.056625 kubelet[2574]: E0317 17:42:37.052268 2574 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fa980406b9a3398d166612d25afd48da2b1b358072669035c994ca219f3b09f8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-b6b5f678f-lhw82" Mar 17 17:42:37.056822 kubelet[2574]: E0317 17:42:37.052317 2574 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for 
\"calico-apiserver-b6b5f678f-lhw82_calico-apiserver(e61bbec9-65b1-4228-aa40-669eba7841ea)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-b6b5f678f-lhw82_calico-apiserver(e61bbec9-65b1-4228-aa40-669eba7841ea)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"fa980406b9a3398d166612d25afd48da2b1b358072669035c994ca219f3b09f8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-b6b5f678f-lhw82" podUID="e61bbec9-65b1-4228-aa40-669eba7841ea" Mar 17 17:42:37.069359 containerd[1471]: time="2025-03-17T17:42:37.069278286Z" level=error msg="Failed to destroy network for sandbox \"627d96227fcfb4bac341ed8b4c81f89ade5706af8209f346fc4831efe716bc42\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:42:37.071680 containerd[1471]: time="2025-03-17T17:42:37.071148700Z" level=error msg="encountered an error cleaning up failed sandbox \"627d96227fcfb4bac341ed8b4c81f89ade5706af8209f346fc4831efe716bc42\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:42:37.071680 containerd[1471]: time="2025-03-17T17:42:37.071211443Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-lsjhx,Uid:c4ec38d1-6c2f-488c-8f17-eb6d6aa4990d,Namespace:calico-system,Attempt:4,} failed, error" error="failed to setup network for sandbox \"627d96227fcfb4bac341ed8b4c81f89ade5706af8209f346fc4831efe716bc42\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:42:37.071791 kubelet[2574]: E0317 17:42:37.071419 2574 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"627d96227fcfb4bac341ed8b4c81f89ade5706af8209f346fc4831efe716bc42\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:42:37.071791 kubelet[2574]: E0317 17:42:37.071478 2574 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"627d96227fcfb4bac341ed8b4c81f89ade5706af8209f346fc4831efe716bc42\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-lsjhx" Mar 17 17:42:37.071791 kubelet[2574]: E0317 17:42:37.071499 2574 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"627d96227fcfb4bac341ed8b4c81f89ade5706af8209f346fc4831efe716bc42\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-lsjhx" Mar 17 17:42:37.071914 kubelet[2574]: E0317 17:42:37.071542 2574 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-lsjhx_calico-system(c4ec38d1-6c2f-488c-8f17-eb6d6aa4990d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-lsjhx_calico-system(c4ec38d1-6c2f-488c-8f17-eb6d6aa4990d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"627d96227fcfb4bac341ed8b4c81f89ade5706af8209f346fc4831efe716bc42\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-lsjhx" podUID="c4ec38d1-6c2f-488c-8f17-eb6d6aa4990d" Mar 17 17:42:37.079415 containerd[1471]: time="2025-03-17T17:42:37.079252240Z" level=error msg="Failed to destroy network for sandbox \"ec0e48b05e0a53a99114c24c83de494a8807ddffa32d2d9512e782c6d1476ec0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:42:37.080985 containerd[1471]: time="2025-03-17T17:42:37.080649071Z" level=error msg="encountered an error cleaning up failed sandbox \"ec0e48b05e0a53a99114c24c83de494a8807ddffa32d2d9512e782c6d1476ec0\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:42:37.081081 containerd[1471]: time="2025-03-17T17:42:37.080720051Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-hg76m,Uid:7b8c3717-fa53-41b7-bf24-e6ae52b8b921,Namespace:kube-system,Attempt:4,} failed, error" error="failed to setup network for sandbox \"ec0e48b05e0a53a99114c24c83de494a8807ddffa32d2d9512e782c6d1476ec0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:42:37.082229 kubelet[2574]: E0317 17:42:37.081386 2574 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ec0e48b05e0a53a99114c24c83de494a8807ddffa32d2d9512e782c6d1476ec0\": plugin 
type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:42:37.082229 kubelet[2574]: E0317 17:42:37.081464 2574 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ec0e48b05e0a53a99114c24c83de494a8807ddffa32d2d9512e782c6d1476ec0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-hg76m" Mar 17 17:42:37.082229 kubelet[2574]: E0317 17:42:37.081488 2574 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ec0e48b05e0a53a99114c24c83de494a8807ddffa32d2d9512e782c6d1476ec0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-hg76m" Mar 17 17:42:37.082377 kubelet[2574]: E0317 17:42:37.081570 2574 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-hg76m_kube-system(7b8c3717-fa53-41b7-bf24-e6ae52b8b921)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-hg76m_kube-system(7b8c3717-fa53-41b7-bf24-e6ae52b8b921)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ec0e48b05e0a53a99114c24c83de494a8807ddffa32d2d9512e782c6d1476ec0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-hg76m" podUID="7b8c3717-fa53-41b7-bf24-e6ae52b8b921" Mar 17 17:42:37.088206 containerd[1471]: 
time="2025-03-17T17:42:37.088026101Z" level=error msg="Failed to destroy network for sandbox \"baddfa9dc598678bda0530ca14b75d182f13cd5800496d8e3988bcf6bfeb0533\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:42:37.089482 containerd[1471]: time="2025-03-17T17:42:37.089454684Z" level=error msg="encountered an error cleaning up failed sandbox \"baddfa9dc598678bda0530ca14b75d182f13cd5800496d8e3988bcf6bfeb0533\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:42:37.089980 containerd[1471]: time="2025-03-17T17:42:37.089581524Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-8b46cd865-7kxzr,Uid:d2c44e13-0cac-42de-9897-344a553902e4,Namespace:calico-system,Attempt:4,} failed, error" error="failed to setup network for sandbox \"baddfa9dc598678bda0530ca14b75d182f13cd5800496d8e3988bcf6bfeb0533\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:42:37.090082 kubelet[2574]: E0317 17:42:37.089784 2574 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"baddfa9dc598678bda0530ca14b75d182f13cd5800496d8e3988bcf6bfeb0533\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:42:37.090082 kubelet[2574]: E0317 17:42:37.089843 2574 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"baddfa9dc598678bda0530ca14b75d182f13cd5800496d8e3988bcf6bfeb0533\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-8b46cd865-7kxzr" Mar 17 17:42:37.090082 kubelet[2574]: E0317 17:42:37.089875 2574 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"baddfa9dc598678bda0530ca14b75d182f13cd5800496d8e3988bcf6bfeb0533\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-8b46cd865-7kxzr" Mar 17 17:42:37.090200 kubelet[2574]: E0317 17:42:37.089931 2574 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-8b46cd865-7kxzr_calico-system(d2c44e13-0cac-42de-9897-344a553902e4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-8b46cd865-7kxzr_calico-system(d2c44e13-0cac-42de-9897-344a553902e4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"baddfa9dc598678bda0530ca14b75d182f13cd5800496d8e3988bcf6bfeb0533\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-8b46cd865-7kxzr" podUID="d2c44e13-0cac-42de-9897-344a553902e4" Mar 17 17:42:37.209054 systemd[1]: run-netns-cni\x2dee394ee1\x2d22a5\x2d84f2\x2d796b\x2d764dd803a021.mount: Deactivated successfully. Mar 17 17:42:37.209176 systemd[1]: run-netns-cni\x2df525bc7c\x2d5f33\x2d3f09\x2deac7\x2d4dcefbd38849.mount: Deactivated successfully. 
Mar 17 17:42:37.519265 kubelet[2574]: I0317 17:42:37.519219 2574 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4d0bb67187b44524b49f0720b91d54ad59f0d422d33a604c2f27a5360d15d8b6" Mar 17 17:42:37.519958 containerd[1471]: time="2025-03-17T17:42:37.519911745Z" level=info msg="StopPodSandbox for \"4d0bb67187b44524b49f0720b91d54ad59f0d422d33a604c2f27a5360d15d8b6\"" Mar 17 17:42:37.520531 containerd[1471]: time="2025-03-17T17:42:37.520131328Z" level=info msg="Ensure that sandbox 4d0bb67187b44524b49f0720b91d54ad59f0d422d33a604c2f27a5360d15d8b6 in task-service has been cleanup successfully" Mar 17 17:42:37.521260 containerd[1471]: time="2025-03-17T17:42:37.521165014Z" level=info msg="TearDown network for sandbox \"4d0bb67187b44524b49f0720b91d54ad59f0d422d33a604c2f27a5360d15d8b6\" successfully" Mar 17 17:42:37.521260 containerd[1471]: time="2025-03-17T17:42:37.521199261Z" level=info msg="StopPodSandbox for \"4d0bb67187b44524b49f0720b91d54ad59f0d422d33a604c2f27a5360d15d8b6\" returns successfully" Mar 17 17:42:37.523243 containerd[1471]: time="2025-03-17T17:42:37.523195493Z" level=info msg="StopPodSandbox for \"39172e784b07542f3f1eb6b9c87f2642270767685a89f397784b4fad77185966\"" Mar 17 17:42:37.524082 systemd[1]: run-netns-cni\x2d2b33e207\x2d968d\x2db89f\x2d134c\x2dca2b082e5cfd.mount: Deactivated successfully. 
Mar 17 17:42:37.525307 kubelet[2574]: I0317 17:42:37.524706 2574 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="627d96227fcfb4bac341ed8b4c81f89ade5706af8209f346fc4831efe716bc42"
Mar 17 17:42:37.525377 containerd[1471]: time="2025-03-17T17:42:37.525202106Z" level=info msg="StopPodSandbox for \"627d96227fcfb4bac341ed8b4c81f89ade5706af8209f346fc4831efe716bc42\""
Mar 17 17:42:37.525802 containerd[1471]: time="2025-03-17T17:42:37.525741969Z" level=info msg="Ensure that sandbox 627d96227fcfb4bac341ed8b4c81f89ade5706af8209f346fc4831efe716bc42 in task-service has been cleanup successfully"
Mar 17 17:42:37.526145 containerd[1471]: time="2025-03-17T17:42:37.526115064Z" level=info msg="TearDown network for sandbox \"627d96227fcfb4bac341ed8b4c81f89ade5706af8209f346fc4831efe716bc42\" successfully"
Mar 17 17:42:37.526145 containerd[1471]: time="2025-03-17T17:42:37.526133460Z" level=info msg="StopPodSandbox for \"627d96227fcfb4bac341ed8b4c81f89ade5706af8209f346fc4831efe716bc42\" returns successfully"
Mar 17 17:42:37.527265 containerd[1471]: time="2025-03-17T17:42:37.527230942Z" level=info msg="StopPodSandbox for \"93863efd1f859db1a3509f2150ea53b31c09d80e6e1edb4b6f21bd6b060a1c8d\""
Mar 17 17:42:37.527360 containerd[1471]: time="2025-03-17T17:42:37.527334957Z" level=info msg="TearDown network for sandbox \"93863efd1f859db1a3509f2150ea53b31c09d80e6e1edb4b6f21bd6b060a1c8d\" successfully"
Mar 17 17:42:37.527360 containerd[1471]: time="2025-03-17T17:42:37.527355678Z" level=info msg="StopPodSandbox for \"93863efd1f859db1a3509f2150ea53b31c09d80e6e1edb4b6f21bd6b060a1c8d\" returns successfully"
Mar 17 17:42:37.527878 containerd[1471]: time="2025-03-17T17:42:37.527854710Z" level=info msg="StopPodSandbox for \"743e6592f769f9d664d43beeff4bdd5bb0cb253eef3f6ca03ad374fbd8378ca7\""
Mar 17 17:42:37.527960 containerd[1471]: time="2025-03-17T17:42:37.527931221Z" level=info msg="TearDown network for sandbox \"743e6592f769f9d664d43beeff4bdd5bb0cb253eef3f6ca03ad374fbd8378ca7\" successfully"
Mar 17 17:42:37.527960 containerd[1471]: time="2025-03-17T17:42:37.527948044Z" level=info msg="StopPodSandbox for \"743e6592f769f9d664d43beeff4bdd5bb0cb253eef3f6ca03ad374fbd8378ca7\" returns successfully"
Mar 17 17:42:37.528538 containerd[1471]: time="2025-03-17T17:42:37.528226672Z" level=info msg="StopPodSandbox for \"3200190c87309f125fb6f4db01012c3f8c9172cd686e39de64273fa9716dbc2d\""
Mar 17 17:42:37.528538 containerd[1471]: time="2025-03-17T17:42:37.528319966Z" level=info msg="TearDown network for sandbox \"3200190c87309f125fb6f4db01012c3f8c9172cd686e39de64273fa9716dbc2d\" successfully"
Mar 17 17:42:37.528538 containerd[1471]: time="2025-03-17T17:42:37.528330988Z" level=info msg="StopPodSandbox for \"3200190c87309f125fb6f4db01012c3f8c9172cd686e39de64273fa9716dbc2d\" returns successfully"
Mar 17 17:42:37.528811 containerd[1471]: time="2025-03-17T17:42:37.528769762Z" level=info msg="StopPodSandbox for \"9b304f912bb3f9e71af30d53d5c7f3e243f23d0daf308c489c9401f6c28de7dd\""
Mar 17 17:42:37.528860 systemd[1]: run-netns-cni\x2dfddae307\x2dfa3f\x2dc27e\x2d50b0\x2d3666288c7333.mount: Deactivated successfully.
Mar 17 17:42:37.529595 kubelet[2574]: I0317 17:42:37.529108 2574 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fa980406b9a3398d166612d25afd48da2b1b358072669035c994ca219f3b09f8"
Mar 17 17:42:37.529627 containerd[1471]: time="2025-03-17T17:42:37.528855932Z" level=info msg="TearDown network for sandbox \"9b304f912bb3f9e71af30d53d5c7f3e243f23d0daf308c489c9401f6c28de7dd\" successfully"
Mar 17 17:42:37.529627 containerd[1471]: time="2025-03-17T17:42:37.528869378Z" level=info msg="StopPodSandbox for \"9b304f912bb3f9e71af30d53d5c7f3e243f23d0daf308c489c9401f6c28de7dd\" returns successfully"
Mar 17 17:42:37.529627 containerd[1471]: time="2025-03-17T17:42:37.529272552Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-lsjhx,Uid:c4ec38d1-6c2f-488c-8f17-eb6d6aa4990d,Namespace:calico-system,Attempt:5,}"
Mar 17 17:42:37.530162 containerd[1471]: time="2025-03-17T17:42:37.529837576Z" level=info msg="StopPodSandbox for \"fa980406b9a3398d166612d25afd48da2b1b358072669035c994ca219f3b09f8\""
Mar 17 17:42:37.530162 containerd[1471]: time="2025-03-17T17:42:37.530028291Z" level=info msg="Ensure that sandbox fa980406b9a3398d166612d25afd48da2b1b358072669035c994ca219f3b09f8 in task-service has been cleanup successfully"
Mar 17 17:42:37.530248 containerd[1471]: time="2025-03-17T17:42:37.530218706Z" level=info msg="TearDown network for sandbox \"fa980406b9a3398d166612d25afd48da2b1b358072669035c994ca219f3b09f8\" successfully"
Mar 17 17:42:37.530248 containerd[1471]: time="2025-03-17T17:42:37.530230149Z" level=info msg="StopPodSandbox for \"fa980406b9a3398d166612d25afd48da2b1b358072669035c994ca219f3b09f8\" returns successfully"
Mar 17 17:42:37.530529 containerd[1471]: time="2025-03-17T17:42:37.530499348Z" level=info msg="StopPodSandbox for \"79fb2b1bbc5e9621a87a0573fbaa0096c6eb6ca03a40900b2e0431acc0355602\""
Mar 17 17:42:37.532877 containerd[1471]: time="2025-03-17T17:42:37.530578635Z" level=info msg="TearDown network for sandbox \"79fb2b1bbc5e9621a87a0573fbaa0096c6eb6ca03a40900b2e0431acc0355602\" successfully"
Mar 17 17:42:37.532877 containerd[1471]: time="2025-03-17T17:42:37.530593644Z" level=info msg="StopPodSandbox for \"79fb2b1bbc5e9621a87a0573fbaa0096c6eb6ca03a40900b2e0431acc0355602\" returns successfully"
Mar 17 17:42:37.532877 containerd[1471]: time="2025-03-17T17:42:37.530929666Z" level=info msg="StopPodSandbox for \"affd646c561af90bafd7e4b900a961dd15025345077cff03d0e35877d76fd6df\""
Mar 17 17:42:37.532877 containerd[1471]: time="2025-03-17T17:42:37.531008883Z" level=info msg="TearDown network for sandbox \"affd646c561af90bafd7e4b900a961dd15025345077cff03d0e35877d76fd6df\" successfully"
Mar 17 17:42:37.532877 containerd[1471]: time="2025-03-17T17:42:37.531017769Z" level=info msg="StopPodSandbox for \"affd646c561af90bafd7e4b900a961dd15025345077cff03d0e35877d76fd6df\" returns successfully"
Mar 17 17:42:37.532877 containerd[1471]: time="2025-03-17T17:42:37.531318252Z" level=info msg="StopPodSandbox for \"752881363b6f73c704f3c8518dab695dfd31be7b9b79cdde65eb0c4d95df3791\""
Mar 17 17:42:37.532421 systemd[1]: run-netns-cni\x2d842ebd44\x2da4a6\x2d941f\x2d77d0\x2dad361dc15750.mount: Deactivated successfully.
Mar 17 17:42:37.538617 kubelet[2574]: I0317 17:42:37.538579 2574 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ec0e48b05e0a53a99114c24c83de494a8807ddffa32d2d9512e782c6d1476ec0"
Mar 17 17:42:37.539047 containerd[1471]: time="2025-03-17T17:42:37.539018026Z" level=info msg="StopPodSandbox for \"ec0e48b05e0a53a99114c24c83de494a8807ddffa32d2d9512e782c6d1476ec0\""
Mar 17 17:42:37.539249 containerd[1471]: time="2025-03-17T17:42:37.539219804Z" level=info msg="Ensure that sandbox ec0e48b05e0a53a99114c24c83de494a8807ddffa32d2d9512e782c6d1476ec0 in task-service has been cleanup successfully"
Mar 17 17:42:37.539496 containerd[1471]: time="2025-03-17T17:42:37.539435429Z" level=info msg="TearDown network for sandbox \"ec0e48b05e0a53a99114c24c83de494a8807ddffa32d2d9512e782c6d1476ec0\" successfully"
Mar 17 17:42:37.539496 containerd[1471]: time="2025-03-17T17:42:37.539451580Z" level=info msg="StopPodSandbox for \"ec0e48b05e0a53a99114c24c83de494a8807ddffa32d2d9512e782c6d1476ec0\" returns successfully"
Mar 17 17:42:37.541839 systemd[1]: run-netns-cni\x2d406c703d\x2df480\x2de203\x2d657a\x2d2f300603a951.mount: Deactivated successfully.
Mar 17 17:42:37.542379 containerd[1471]: time="2025-03-17T17:42:37.542350610Z" level=info msg="StopPodSandbox for \"0892a07a3e06be0e814b5c58351e353b0bb71c9a1f01243516ce3aef745decf8\""
Mar 17 17:42:37.542447 containerd[1471]: time="2025-03-17T17:42:37.542439164Z" level=info msg="TearDown network for sandbox \"0892a07a3e06be0e814b5c58351e353b0bb71c9a1f01243516ce3aef745decf8\" successfully"
Mar 17 17:42:37.542496 containerd[1471]: time="2025-03-17T17:42:37.542449966Z" level=info msg="StopPodSandbox for \"0892a07a3e06be0e814b5c58351e353b0bb71c9a1f01243516ce3aef745decf8\" returns successfully"
Mar 17 17:42:37.542709 containerd[1471]: time="2025-03-17T17:42:37.542691141Z" level=info msg="StopPodSandbox for \"f595336b40a5a0477c6557d8c88617834e5c5758a2435ab0f027d22a0d8aac32\""
Mar 17 17:42:37.542776 containerd[1471]: time="2025-03-17T17:42:37.542762231Z" level=info msg="TearDown network for sandbox \"f595336b40a5a0477c6557d8c88617834e5c5758a2435ab0f027d22a0d8aac32\" successfully"
Mar 17 17:42:37.542807 containerd[1471]: time="2025-03-17T17:42:37.542776308Z" level=info msg="StopPodSandbox for \"f595336b40a5a0477c6557d8c88617834e5c5758a2435ab0f027d22a0d8aac32\" returns successfully"
Mar 17 17:42:37.543030 containerd[1471]: time="2025-03-17T17:42:37.543011481Z" level=info msg="StopPodSandbox for \"537027ffa1bfbd16c68902c78119d95aa49538ff5449faa654586af4707e9042\""
Mar 17 17:42:37.543110 containerd[1471]: time="2025-03-17T17:42:37.543097321Z" level=info msg="TearDown network for sandbox \"537027ffa1bfbd16c68902c78119d95aa49538ff5449faa654586af4707e9042\" successfully"
Mar 17 17:42:37.543135 containerd[1471]: time="2025-03-17T17:42:37.543109605Z" level=info msg="StopPodSandbox for \"537027ffa1bfbd16c68902c78119d95aa49538ff5449faa654586af4707e9042\" returns successfully"
Mar 17 17:42:37.543347 containerd[1471]: time="2025-03-17T17:42:37.543329749Z" level=info msg="StopPodSandbox for \"2e196fcd9f9243b223018639d31a93ff3f9775e305ad1408354381cea90249e5\""
Mar 17 17:42:37.543416 containerd[1471]: time="2025-03-17T17:42:37.543403043Z" level=info msg="TearDown network for sandbox \"2e196fcd9f9243b223018639d31a93ff3f9775e305ad1408354381cea90249e5\" successfully"
Mar 17 17:42:37.543439 containerd[1471]: time="2025-03-17T17:42:37.543414766Z" level=info msg="StopPodSandbox for \"2e196fcd9f9243b223018639d31a93ff3f9775e305ad1408354381cea90249e5\" returns successfully"
Mar 17 17:42:37.543771 kubelet[2574]: E0317 17:42:37.543667 2574 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 17:42:37.543855 containerd[1471]: time="2025-03-17T17:42:37.543833430Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-hg76m,Uid:7b8c3717-fa53-41b7-bf24-e6ae52b8b921,Namespace:kube-system,Attempt:5,}"
Mar 17 17:42:37.544422 kubelet[2574]: I0317 17:42:37.544186 2574 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cdfcb6e581a338e6cb320c08b196e4660303b9ea6abc9f2af443689a421c1b13"
Mar 17 17:42:37.544492 containerd[1471]: time="2025-03-17T17:42:37.544469152Z" level=info msg="StopPodSandbox for \"cdfcb6e581a338e6cb320c08b196e4660303b9ea6abc9f2af443689a421c1b13\""
Mar 17 17:42:37.544638 containerd[1471]: time="2025-03-17T17:42:37.544613777Z" level=info msg="Ensure that sandbox cdfcb6e581a338e6cb320c08b196e4660303b9ea6abc9f2af443689a421c1b13 in task-service has been cleanup successfully"
Mar 17 17:42:37.544800 containerd[1471]: time="2025-03-17T17:42:37.544786036Z" level=info msg="TearDown network for sandbox \"cdfcb6e581a338e6cb320c08b196e4660303b9ea6abc9f2af443689a421c1b13\" successfully"
Mar 17 17:42:37.544833 containerd[1471]: time="2025-03-17T17:42:37.544799654Z" level=info msg="StopPodSandbox for \"cdfcb6e581a338e6cb320c08b196e4660303b9ea6abc9f2af443689a421c1b13\" returns successfully"
Mar 17 17:42:37.545139 containerd[1471]: time="2025-03-17T17:42:37.545117049Z" level=info msg="StopPodSandbox for \"40e32737773824eef901e2ee58674aeb2ee43a43656886504306789e3ed3a45b\""
Mar 17 17:42:37.545248 containerd[1471]: time="2025-03-17T17:42:37.545194211Z" level=info msg="TearDown network for sandbox \"40e32737773824eef901e2ee58674aeb2ee43a43656886504306789e3ed3a45b\" successfully"
Mar 17 17:42:37.545248 containerd[1471]: time="2025-03-17T17:42:37.545202607Z" level=info msg="StopPodSandbox for \"40e32737773824eef901e2ee58674aeb2ee43a43656886504306789e3ed3a45b\" returns successfully"
Mar 17 17:42:37.545550 containerd[1471]: time="2025-03-17T17:42:37.545533008Z" level=info msg="StopPodSandbox for \"8288c0d6c6206ca6e3bb0341d82c9b6dfe98446714195c494e41c5765c36d080\""
Mar 17 17:42:37.545614 containerd[1471]: time="2025-03-17T17:42:37.545601132Z" level=info msg="TearDown network for sandbox \"8288c0d6c6206ca6e3bb0341d82c9b6dfe98446714195c494e41c5765c36d080\" successfully"
Mar 17 17:42:37.545645 containerd[1471]: time="2025-03-17T17:42:37.545612585Z" level=info msg="StopPodSandbox for \"8288c0d6c6206ca6e3bb0341d82c9b6dfe98446714195c494e41c5765c36d080\" returns successfully"
Mar 17 17:42:37.545898 containerd[1471]: time="2025-03-17T17:42:37.545857227Z" level=info msg="StopPodSandbox for \"a86796bc073539410fecfeb3dec78c8c6ed265998c9643aa1e3a054f1b65a9d5\""
Mar 17 17:42:37.545944 containerd[1471]: time="2025-03-17T17:42:37.545926794Z" level=info msg="TearDown network for sandbox \"a86796bc073539410fecfeb3dec78c8c6ed265998c9643aa1e3a054f1b65a9d5\" successfully"
Mar 17 17:42:37.545944 containerd[1471]: time="2025-03-17T17:42:37.545936473Z" level=info msg="StopPodSandbox for \"a86796bc073539410fecfeb3dec78c8c6ed265998c9643aa1e3a054f1b65a9d5\" returns successfully"
Mar 17 17:42:37.546281 containerd[1471]: time="2025-03-17T17:42:37.546263416Z" level=info msg="StopPodSandbox for \"edb2903b482639be597b82a24a152d41c24f51e7acac61283260a1d0100a5d4c\""
Mar 17 17:42:37.546343 containerd[1471]: time="2025-03-17T17:42:37.546330469Z" level=info msg="TearDown network for sandbox \"edb2903b482639be597b82a24a152d41c24f51e7acac61283260a1d0100a5d4c\" successfully"
Mar 17 17:42:37.546370 containerd[1471]: time="2025-03-17T17:42:37.546341170Z" level=info msg="StopPodSandbox for \"edb2903b482639be597b82a24a152d41c24f51e7acac61283260a1d0100a5d4c\" returns successfully"
Mar 17 17:42:37.546745 kubelet[2574]: E0317 17:42:37.546652 2574 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 17:42:37.546848 containerd[1471]: time="2025-03-17T17:42:37.546829451Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-b7tm9,Uid:5d480ab2-f501-4510-9ec1-d051e760e88d,Namespace:kube-system,Attempt:5,}"
Mar 17 17:42:37.547333 kubelet[2574]: E0317 17:42:37.547317 2574 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 17:42:37.547384 kubelet[2574]: I0317 17:42:37.547372 2574 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="baddfa9dc598678bda0530ca14b75d182f13cd5800496d8e3988bcf6bfeb0533"
Mar 17 17:42:37.547656 containerd[1471]: time="2025-03-17T17:42:37.547632052Z" level=info msg="StopPodSandbox for \"baddfa9dc598678bda0530ca14b75d182f13cd5800496d8e3988bcf6bfeb0533\""
Mar 17 17:42:37.547791 containerd[1471]: time="2025-03-17T17:42:37.547770595Z" level=info msg="Ensure that sandbox baddfa9dc598678bda0530ca14b75d182f13cd5800496d8e3988bcf6bfeb0533 in task-service has been cleanup successfully"
Mar 17 17:42:37.547993 containerd[1471]: time="2025-03-17T17:42:37.547976892Z" level=info msg="TearDown network for sandbox \"baddfa9dc598678bda0530ca14b75d182f13cd5800496d8e3988bcf6bfeb0533\" successfully"
Mar 17 17:42:37.547993 containerd[1471]: time="2025-03-17T17:42:37.547991199Z" level=info msg="StopPodSandbox for \"baddfa9dc598678bda0530ca14b75d182f13cd5800496d8e3988bcf6bfeb0533\" returns successfully"
Mar 17 17:42:37.548236 containerd[1471]: time="2025-03-17T17:42:37.548212896Z" level=info msg="StopPodSandbox for \"223d3bb6fe2624ebed37cf69cc759ae6ebc66e042dd9c5d2a70ed1873cb21051\""
Mar 17 17:42:37.548299 containerd[1471]: time="2025-03-17T17:42:37.548285319Z" level=info msg="TearDown network for sandbox \"223d3bb6fe2624ebed37cf69cc759ae6ebc66e042dd9c5d2a70ed1873cb21051\" successfully"
Mar 17 17:42:37.548325 containerd[1471]: time="2025-03-17T17:42:37.548297322Z" level=info msg="StopPodSandbox for \"223d3bb6fe2624ebed37cf69cc759ae6ebc66e042dd9c5d2a70ed1873cb21051\" returns successfully"
Mar 17 17:42:37.548507 containerd[1471]: time="2025-03-17T17:42:37.548485373Z" level=info msg="StopPodSandbox for \"d80cb82167a8ad779a3df81a8f4f06d6e87ee54cc9d974d390962e2bbdc2554c\""
Mar 17 17:42:37.548565 containerd[1471]: time="2025-03-17T17:42:37.548552014Z" level=info msg="TearDown network for sandbox \"d80cb82167a8ad779a3df81a8f4f06d6e87ee54cc9d974d390962e2bbdc2554c\" successfully"
Mar 17 17:42:37.548646 containerd[1471]: time="2025-03-17T17:42:37.548563166Z" level=info msg="StopPodSandbox for \"d80cb82167a8ad779a3df81a8f4f06d6e87ee54cc9d974d390962e2bbdc2554c\" returns successfully"
Mar 17 17:42:37.548903 containerd[1471]: time="2025-03-17T17:42:37.548738892Z" level=info msg="StopPodSandbox for \"ee983380c36cc632ecdbc7f259407df2ede7bbdbeef83c02e8ebc5bf1487f4ae\""
Mar 17 17:42:37.548903 containerd[1471]: time="2025-03-17T17:42:37.548807647Z" level=info msg="TearDown network for sandbox \"ee983380c36cc632ecdbc7f259407df2ede7bbdbeef83c02e8ebc5bf1487f4ae\" successfully"
Mar 17 17:42:37.548903 containerd[1471]: time="2025-03-17T17:42:37.548816655Z" level=info msg="StopPodSandbox for \"ee983380c36cc632ecdbc7f259407df2ede7bbdbeef83c02e8ebc5bf1487f4ae\" returns successfully"
Mar 17 17:42:37.806308 containerd[1471]: time="2025-03-17T17:42:37.805289589Z" level=info msg="StopPodSandbox for \"c8fa0e68e50c704cec6afec4727d369644d559c68bcae3c557439825300e22db\""
Mar 17 17:42:37.806308 containerd[1471]: time="2025-03-17T17:42:37.805448011Z" level=info msg="TearDown network for sandbox \"c8fa0e68e50c704cec6afec4727d369644d559c68bcae3c557439825300e22db\" successfully"
Mar 17 17:42:37.806308 containerd[1471]: time="2025-03-17T17:42:37.805460035Z" level=info msg="StopPodSandbox for \"c8fa0e68e50c704cec6afec4727d369644d559c68bcae3c557439825300e22db\" returns successfully"
Mar 17 17:42:37.808168 containerd[1471]: time="2025-03-17T17:42:37.807161175Z" level=info msg="TearDown network for sandbox \"752881363b6f73c704f3c8518dab695dfd31be7b9b79cdde65eb0c4d95df3791\" successfully"
Mar 17 17:42:37.808168 containerd[1471]: time="2025-03-17T17:42:37.807189762Z" level=info msg="StopPodSandbox for \"752881363b6f73c704f3c8518dab695dfd31be7b9b79cdde65eb0c4d95df3791\" returns successfully"
Mar 17 17:42:37.808168 containerd[1471]: time="2025-03-17T17:42:37.807338695Z" level=info msg="TearDown network for sandbox \"39172e784b07542f3f1eb6b9c87f2642270767685a89f397784b4fad77185966\" successfully"
Mar 17 17:42:37.808168 containerd[1471]: time="2025-03-17T17:42:37.807351911Z" level=info msg="StopPodSandbox for \"39172e784b07542f3f1eb6b9c87f2642270767685a89f397784b4fad77185966\" returns successfully"
Mar 17 17:42:37.808168 containerd[1471]: time="2025-03-17T17:42:37.807948405Z" level=info msg="StopPodSandbox for \"029c4006ecd1c4f4a1ac53934cfb4e4722c8c52eefcf8cff976bcd131fe5d151\""
Mar 17 17:42:37.808168 containerd[1471]: time="2025-03-17T17:42:37.808080636Z" level=info msg="TearDown network for sandbox \"029c4006ecd1c4f4a1ac53934cfb4e4722c8c52eefcf8cff976bcd131fe5d151\" successfully"
Mar 17 17:42:37.808168 containerd[1471]: time="2025-03-17T17:42:37.808099343Z" level=info msg="StopPodSandbox for \"029c4006ecd1c4f4a1ac53934cfb4e4722c8c52eefcf8cff976bcd131fe5d151\" returns successfully"
Mar 17 17:42:37.808168 containerd[1471]: time="2025-03-17T17:42:37.808163900Z" level=info msg="StopPodSandbox for \"04898b8b1a06024d8a321645a3aedaf17f255e75b4e26b3816a2c1df841abfaf\""
Mar 17 17:42:37.808372 containerd[1471]: time="2025-03-17T17:42:37.808249759Z" level=info msg="TearDown network for sandbox \"04898b8b1a06024d8a321645a3aedaf17f255e75b4e26b3816a2c1df841abfaf\" successfully"
Mar 17 17:42:37.808372 containerd[1471]: time="2025-03-17T17:42:37.808266000Z" level=info msg="StopPodSandbox for \"04898b8b1a06024d8a321645a3aedaf17f255e75b4e26b3816a2c1df841abfaf\" returns successfully"
Mar 17 17:42:37.808372 containerd[1471]: time="2025-03-17T17:42:37.808351840Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-8b46cd865-7kxzr,Uid:d2c44e13-0cac-42de-9897-344a553902e4,Namespace:calico-system,Attempt:5,}"
Mar 17 17:42:37.810336 containerd[1471]: time="2025-03-17T17:42:37.810315398Z" level=info msg="StopPodSandbox for \"ea14d6d0447dbc5080d47a26d58a2bfba80bc5ebe62085775ca7fc4db9ecf3b3\""
Mar 17 17:42:37.810533 containerd[1471]: time="2025-03-17T17:42:37.810511864Z" level=info msg="TearDown network for sandbox \"ea14d6d0447dbc5080d47a26d58a2bfba80bc5ebe62085775ca7fc4db9ecf3b3\" successfully"
Mar 17 17:42:37.810592 containerd[1471]: time="2025-03-17T17:42:37.810579037Z" level=info msg="StopPodSandbox for \"ea14d6d0447dbc5080d47a26d58a2bfba80bc5ebe62085775ca7fc4db9ecf3b3\" returns successfully"
Mar 17 17:42:37.810661 containerd[1471]: time="2025-03-17T17:42:37.810631921Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-b6b5f678f-lhw82,Uid:e61bbec9-65b1-4228-aa40-669eba7841ea,Namespace:calico-apiserver,Attempt:5,}"
Mar 17 17:42:37.811149 containerd[1471]: time="2025-03-17T17:42:37.810941681Z" level=info msg="StopPodSandbox for \"1a12983a8841ddeac31ec562c0ce3b12e75f4c40488315d12dd610812c63ac70\""
Mar 17 17:42:37.811149 containerd[1471]: time="2025-03-17T17:42:37.811042058Z" level=info msg="TearDown network for sandbox \"1a12983a8841ddeac31ec562c0ce3b12e75f4c40488315d12dd610812c63ac70\" successfully"
Mar 17 17:42:37.811149 containerd[1471]: time="2025-03-17T17:42:37.811056778Z" level=info msg="StopPodSandbox for \"1a12983a8841ddeac31ec562c0ce3b12e75f4c40488315d12dd610812c63ac70\" returns successfully"
Mar 17 17:42:37.811396 containerd[1471]: time="2025-03-17T17:42:37.811374423Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-b6b5f678f-v8plm,Uid:e2974f51-cab2-451b-8e3d-274fab1b872e,Namespace:calico-apiserver,Attempt:5,}"
Mar 17 17:42:38.205906 systemd[1]: run-netns-cni\x2d45b3dad4\x2d9df2\x2d2cbd\x2d8ffc\x2d7743b80e57c3.mount: Deactivated successfully.
Mar 17 17:42:38.206012 systemd[1]: run-netns-cni\x2d66858613\x2df31f\x2dbb1c\x2db188\x2d90f3ea84a142.mount: Deactivated successfully.
Mar 17 17:42:38.774238 systemd[1]: Started sshd@9-10.0.0.46:22-10.0.0.1:40180.service - OpenSSH per-connection server daemon (10.0.0.1:40180).
Mar 17 17:42:38.828494 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount349975890.mount: Deactivated successfully.
Mar 17 17:42:38.837783 sshd[4499]: Accepted publickey for core from 10.0.0.1 port 40180 ssh2: RSA SHA256:pvoNHoTmHcKIZ8E4rah4Xh4kuY0L81ONuagOsU2gN/o
Mar 17 17:42:38.839849 sshd-session[4499]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 17 17:42:38.844369 systemd-logind[1456]: New session 9 of user core.
Mar 17 17:42:38.857241 systemd[1]: Started session-9.scope - Session 9 of User core.
Mar 17 17:42:39.069667 sshd[4501]: Connection closed by 10.0.0.1 port 40180
Mar 17 17:42:39.070088 sshd-session[4499]: pam_unix(sshd:session): session closed for user core
Mar 17 17:42:39.075261 systemd[1]: sshd@9-10.0.0.46:22-10.0.0.1:40180.service: Deactivated successfully.
Mar 17 17:42:39.077585 systemd[1]: session-9.scope: Deactivated successfully.
Mar 17 17:42:39.078285 systemd-logind[1456]: Session 9 logged out. Waiting for processes to exit.
Mar 17 17:42:39.079338 systemd-logind[1456]: Removed session 9.
Mar 17 17:42:40.723895 containerd[1471]: time="2025-03-17T17:42:40.723848362Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 17 17:42:40.757140 containerd[1471]: time="2025-03-17T17:42:40.756001900Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.2: active requests=0, bytes read=142241445"
Mar 17 17:42:40.806168 containerd[1471]: time="2025-03-17T17:42:40.805970548Z" level=error msg="Failed to destroy network for sandbox \"9b28577579cbd2951afd5be0da376c580cb7372b20e0489868dea6e0d6972956\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 17 17:42:40.809599 containerd[1471]: time="2025-03-17T17:42:40.806756090Z" level=error msg="encountered an error cleaning up failed sandbox \"9b28577579cbd2951afd5be0da376c580cb7372b20e0489868dea6e0d6972956\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 17 17:42:40.809599 containerd[1471]: time="2025-03-17T17:42:40.806847108Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-8b46cd865-7kxzr,Uid:d2c44e13-0cac-42de-9897-344a553902e4,Namespace:calico-system,Attempt:5,} failed, error" error="failed to setup network for sandbox \"9b28577579cbd2951afd5be0da376c580cb7372b20e0489868dea6e0d6972956\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 17 17:42:40.809599 containerd[1471]: time="2025-03-17T17:42:40.807983038Z" level=error msg="Failed to destroy network for sandbox \"5da8f5a8d97f5fab357ec1437d9519129cc8af2afc0299617c2e98c211099393\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 17 17:42:40.809806 kubelet[2574]: E0317 17:42:40.807142 2574 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9b28577579cbd2951afd5be0da376c580cb7372b20e0489868dea6e0d6972956\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 17 17:42:40.809806 kubelet[2574]: E0317 17:42:40.807216 2574 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9b28577579cbd2951afd5be0da376c580cb7372b20e0489868dea6e0d6972956\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-8b46cd865-7kxzr"
Mar 17 17:42:40.809806 kubelet[2574]: E0317 17:42:40.807237 2574 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9b28577579cbd2951afd5be0da376c580cb7372b20e0489868dea6e0d6972956\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-8b46cd865-7kxzr"
Mar 17 17:42:40.810308 kubelet[2574]: E0317 17:42:40.807279 2574 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-8b46cd865-7kxzr_calico-system(d2c44e13-0cac-42de-9897-344a553902e4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-8b46cd865-7kxzr_calico-system(d2c44e13-0cac-42de-9897-344a553902e4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9b28577579cbd2951afd5be0da376c580cb7372b20e0489868dea6e0d6972956\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-8b46cd865-7kxzr" podUID="d2c44e13-0cac-42de-9897-344a553902e4"
Mar 17 17:42:40.810787 containerd[1471]: time="2025-03-17T17:42:40.810725870Z" level=error msg="encountered an error cleaning up failed sandbox \"5da8f5a8d97f5fab357ec1437d9519129cc8af2afc0299617c2e98c211099393\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 17 17:42:40.813596 containerd[1471]: time="2025-03-17T17:42:40.810817390Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-lsjhx,Uid:c4ec38d1-6c2f-488c-8f17-eb6d6aa4990d,Namespace:calico-system,Attempt:5,} failed, error" error="failed to setup network for sandbox \"5da8f5a8d97f5fab357ec1437d9519129cc8af2afc0299617c2e98c211099393\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 17 17:42:40.813596 containerd[1471]: time="2025-03-17T17:42:40.811995602Z" level=error msg="Failed to destroy network for sandbox \"23c98b6772d0ae34e982558c97a3ab869ea3f90bd2d257af1efddf9ab2c09d53\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 17 17:42:40.813596 containerd[1471]: time="2025-03-17T17:42:40.812506114Z" level=error msg="encountered an error cleaning up failed sandbox \"23c98b6772d0ae34e982558c97a3ab869ea3f90bd2d257af1efddf9ab2c09d53\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 17 17:42:40.813596 containerd[1471]: time="2025-03-17T17:42:40.812565570Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-hg76m,Uid:7b8c3717-fa53-41b7-bf24-e6ae52b8b921,Namespace:kube-system,Attempt:5,} failed, error" error="failed to setup network for sandbox \"23c98b6772d0ae34e982558c97a3ab869ea3f90bd2d257af1efddf9ab2c09d53\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 17 17:42:40.813819 kubelet[2574]: E0317 17:42:40.811082 2574 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5da8f5a8d97f5fab357ec1437d9519129cc8af2afc0299617c2e98c211099393\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 17 17:42:40.813819 kubelet[2574]: E0317 17:42:40.811168 2574 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5da8f5a8d97f5fab357ec1437d9519129cc8af2afc0299617c2e98c211099393\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-lsjhx"
Mar 17 17:42:40.813819 kubelet[2574]: E0317 17:42:40.811193 2574 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5da8f5a8d97f5fab357ec1437d9519129cc8af2afc0299617c2e98c211099393\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-lsjhx"
Mar 17 17:42:40.813981 kubelet[2574]: E0317 17:42:40.811251 2574 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-lsjhx_calico-system(c4ec38d1-6c2f-488c-8f17-eb6d6aa4990d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-lsjhx_calico-system(c4ec38d1-6c2f-488c-8f17-eb6d6aa4990d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5da8f5a8d97f5fab357ec1437d9519129cc8af2afc0299617c2e98c211099393\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-lsjhx" podUID="c4ec38d1-6c2f-488c-8f17-eb6d6aa4990d"
Mar 17 17:42:40.813981 kubelet[2574]: E0317 17:42:40.812730 2574 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"23c98b6772d0ae34e982558c97a3ab869ea3f90bd2d257af1efddf9ab2c09d53\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 17 17:42:40.813981 kubelet[2574]: E0317 17:42:40.812768 2574 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"23c98b6772d0ae34e982558c97a3ab869ea3f90bd2d257af1efddf9ab2c09d53\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-hg76m"
Mar 17 17:42:40.814141 kubelet[2574]: E0317 17:42:40.812789 2574 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"23c98b6772d0ae34e982558c97a3ab869ea3f90bd2d257af1efddf9ab2c09d53\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-hg76m"
Mar 17 17:42:40.814141 kubelet[2574]: E0317 17:42:40.812827 2574 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-hg76m_kube-system(7b8c3717-fa53-41b7-bf24-e6ae52b8b921)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-hg76m_kube-system(7b8c3717-fa53-41b7-bf24-e6ae52b8b921)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"23c98b6772d0ae34e982558c97a3ab869ea3f90bd2d257af1efddf9ab2c09d53\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-hg76m" podUID="7b8c3717-fa53-41b7-bf24-e6ae52b8b921"
Mar 17 17:42:41.055831 containerd[1471]: time="2025-03-17T17:42:41.055705383Z" level=info msg="ImageCreate event name:\"sha256:048bf7af1f8c697d151dbecc478a18e89d89ed8627da98e17a56c11b3d45d351\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 17 17:42:41.057618 containerd[1471]: time="2025-03-17T17:42:41.056118243Z" level=error msg="Failed to destroy network for sandbox \"b5653076485c36813df8770ad807dddde10fdccc2e47514c36a08e9e25cf99e2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 17 17:42:41.057618 containerd[1471]: time="2025-03-17T17:42:41.057346059Z" level=error msg="encountered an error cleaning up failed sandbox \"b5653076485c36813df8770ad807dddde10fdccc2e47514c36a08e9e25cf99e2\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 17 17:42:41.057618 containerd[1471]: time="2025-03-17T17:42:41.057411147Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-b7tm9,Uid:5d480ab2-f501-4510-9ec1-d051e760e88d,Namespace:kube-system,Attempt:5,} failed, error" error="failed to setup network for sandbox \"b5653076485c36813df8770ad807dddde10fdccc2e47514c36a08e9e25cf99e2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 17 17:42:41.058018 kubelet[2574]: E0317 17:42:41.057894 2574 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b5653076485c36813df8770ad807dddde10fdccc2e47514c36a08e9e25cf99e2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 17 17:42:41.058018 kubelet[2574]: E0317 17:42:41.057963 2574 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b5653076485c36813df8770ad807dddde10fdccc2e47514c36a08e9e25cf99e2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-b7tm9"
Mar 17 17:42:41.058018 kubelet[2574]: E0317 17:42:41.057984 2574 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b5653076485c36813df8770ad807dddde10fdccc2e47514c36a08e9e25cf99e2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-b7tm9"
Mar 17 17:42:41.058287 kubelet[2574]: E0317 17:42:41.058031 2574 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-b7tm9_kube-system(5d480ab2-f501-4510-9ec1-d051e760e88d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-b7tm9_kube-system(5d480ab2-f501-4510-9ec1-d051e760e88d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b5653076485c36813df8770ad807dddde10fdccc2e47514c36a08e9e25cf99e2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-b7tm9" podUID="5d480ab2-f501-4510-9ec1-d051e760e88d"
Mar 17 17:42:41.087893 containerd[1471]: time="2025-03-17T17:42:41.087696230Z" level=error msg="Failed to destroy network for sandbox \"438162a23ae7fabe3d9876ae44fd5ed596565f51c7f2e40c1edb3c0697ff6eeb\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 17 17:42:41.091450 containerd[1471]: time="2025-03-17T17:42:41.091400602Z" level=error msg="encountered an error cleaning up failed sandbox \"438162a23ae7fabe3d9876ae44fd5ed596565f51c7f2e40c1edb3c0697ff6eeb\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory:
check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:42:41.091524 containerd[1471]: time="2025-03-17T17:42:41.091478303Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-b6b5f678f-lhw82,Uid:e61bbec9-65b1-4228-aa40-669eba7841ea,Namespace:calico-apiserver,Attempt:5,} failed, error" error="failed to setup network for sandbox \"438162a23ae7fabe3d9876ae44fd5ed596565f51c7f2e40c1edb3c0697ff6eeb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:42:41.092362 kubelet[2574]: E0317 17:42:41.092216 2574 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"438162a23ae7fabe3d9876ae44fd5ed596565f51c7f2e40c1edb3c0697ff6eeb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:42:41.092362 kubelet[2574]: E0317 17:42:41.092284 2574 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"438162a23ae7fabe3d9876ae44fd5ed596565f51c7f2e40c1edb3c0697ff6eeb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-b6b5f678f-lhw82" Mar 17 17:42:41.092362 kubelet[2574]: E0317 17:42:41.092301 2574 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"438162a23ae7fabe3d9876ae44fd5ed596565f51c7f2e40c1edb3c0697ff6eeb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running 
and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-b6b5f678f-lhw82" Mar 17 17:42:41.092554 kubelet[2574]: E0317 17:42:41.092345 2574 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-b6b5f678f-lhw82_calico-apiserver(e61bbec9-65b1-4228-aa40-669eba7841ea)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-b6b5f678f-lhw82_calico-apiserver(e61bbec9-65b1-4228-aa40-669eba7841ea)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"438162a23ae7fabe3d9876ae44fd5ed596565f51c7f2e40c1edb3c0697ff6eeb\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-b6b5f678f-lhw82" podUID="e61bbec9-65b1-4228-aa40-669eba7841ea" Mar 17 17:42:41.093622 containerd[1471]: time="2025-03-17T17:42:41.093591015Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:d9a21be37fe591ee5ab5a2e3dc26408ea165a44a55705102ffaa002de9908b32\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:42:41.096116 containerd[1471]: time="2025-03-17T17:42:41.094705910Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.2\" with image id \"sha256:048bf7af1f8c697d151dbecc478a18e89d89ed8627da98e17a56c11b3d45d351\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.2\", repo digest \"ghcr.io/flatcar/calico/node@sha256:d9a21be37fe591ee5ab5a2e3dc26408ea165a44a55705102ffaa002de9908b32\", size \"142241307\" in 10.913663133s" Mar 17 17:42:41.096116 containerd[1471]: time="2025-03-17T17:42:41.094731991Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.2\" returns image reference \"sha256:048bf7af1f8c697d151dbecc478a18e89d89ed8627da98e17a56c11b3d45d351\"" Mar 17 17:42:41.108192 containerd[1471]: time="2025-03-17T17:42:41.108106638Z" level=info 
msg="CreateContainer within sandbox \"85cf3996f5fc2b1c773d089f1e9003eacbde469cb850cd6ba0a2eb61c2d7d06b\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Mar 17 17:42:41.131440 containerd[1471]: time="2025-03-17T17:42:41.131369036Z" level=error msg="Failed to destroy network for sandbox \"876c4ff72854a799f7991bdbf3266cb4c842a820844d83e42161d08c217d6e45\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:42:41.131876 containerd[1471]: time="2025-03-17T17:42:41.131834608Z" level=error msg="encountered an error cleaning up failed sandbox \"876c4ff72854a799f7991bdbf3266cb4c842a820844d83e42161d08c217d6e45\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:42:41.131940 containerd[1471]: time="2025-03-17T17:42:41.131903683Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-b6b5f678f-v8plm,Uid:e2974f51-cab2-451b-8e3d-274fab1b872e,Namespace:calico-apiserver,Attempt:5,} failed, error" error="failed to setup network for sandbox \"876c4ff72854a799f7991bdbf3266cb4c842a820844d83e42161d08c217d6e45\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:42:41.132228 kubelet[2574]: E0317 17:42:41.132184 2574 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"876c4ff72854a799f7991bdbf3266cb4c842a820844d83e42161d08c217d6e45\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
Mar 17 17:42:41.132317 kubelet[2574]: E0317 17:42:41.132254 2574 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"876c4ff72854a799f7991bdbf3266cb4c842a820844d83e42161d08c217d6e45\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-b6b5f678f-v8plm" Mar 17 17:42:41.132317 kubelet[2574]: E0317 17:42:41.132279 2574 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"876c4ff72854a799f7991bdbf3266cb4c842a820844d83e42161d08c217d6e45\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-b6b5f678f-v8plm" Mar 17 17:42:41.132458 kubelet[2574]: E0317 17:42:41.132340 2574 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-b6b5f678f-v8plm_calico-apiserver(e2974f51-cab2-451b-8e3d-274fab1b872e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-b6b5f678f-v8plm_calico-apiserver(e2974f51-cab2-451b-8e3d-274fab1b872e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"876c4ff72854a799f7991bdbf3266cb4c842a820844d83e42161d08c217d6e45\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-b6b5f678f-v8plm" podUID="e2974f51-cab2-451b-8e3d-274fab1b872e" Mar 17 17:42:41.202600 containerd[1471]: time="2025-03-17T17:42:41.202537977Z" level=info msg="CreateContainer within sandbox 
\"85cf3996f5fc2b1c773d089f1e9003eacbde469cb850cd6ba0a2eb61c2d7d06b\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"66ad02f88b09fe6244f085ac8aa08436e7555d52e7a6b7a182f6d919f0b64042\"" Mar 17 17:42:41.203138 containerd[1471]: time="2025-03-17T17:42:41.203093637Z" level=info msg="StartContainer for \"66ad02f88b09fe6244f085ac8aa08436e7555d52e7a6b7a182f6d919f0b64042\"" Mar 17 17:42:41.278288 systemd[1]: Started cri-containerd-66ad02f88b09fe6244f085ac8aa08436e7555d52e7a6b7a182f6d919f0b64042.scope - libcontainer container 66ad02f88b09fe6244f085ac8aa08436e7555d52e7a6b7a182f6d919f0b64042. Mar 17 17:42:41.321731 containerd[1471]: time="2025-03-17T17:42:41.321615067Z" level=info msg="StartContainer for \"66ad02f88b09fe6244f085ac8aa08436e7555d52e7a6b7a182f6d919f0b64042\" returns successfully" Mar 17 17:42:41.401633 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Mar 17 17:42:41.402582 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Mar 17 17:42:41.557351 kubelet[2574]: I0317 17:42:41.557272 2574 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b5653076485c36813df8770ad807dddde10fdccc2e47514c36a08e9e25cf99e2" Mar 17 17:42:41.558656 containerd[1471]: time="2025-03-17T17:42:41.558560416Z" level=info msg="StopPodSandbox for \"b5653076485c36813df8770ad807dddde10fdccc2e47514c36a08e9e25cf99e2\"" Mar 17 17:42:41.559075 containerd[1471]: time="2025-03-17T17:42:41.559003986Z" level=info msg="Ensure that sandbox b5653076485c36813df8770ad807dddde10fdccc2e47514c36a08e9e25cf99e2 in task-service has been cleanup successfully" Mar 17 17:42:41.559873 containerd[1471]: time="2025-03-17T17:42:41.559845334Z" level=info msg="TearDown network for sandbox \"b5653076485c36813df8770ad807dddde10fdccc2e47514c36a08e9e25cf99e2\" successfully" Mar 17 17:42:41.559873 containerd[1471]: time="2025-03-17T17:42:41.559870044Z" level=info msg="StopPodSandbox for \"b5653076485c36813df8770ad807dddde10fdccc2e47514c36a08e9e25cf99e2\" returns successfully" Mar 17 17:42:41.560343 containerd[1471]: time="2025-03-17T17:42:41.560291089Z" level=info msg="StopPodSandbox for \"cdfcb6e581a338e6cb320c08b196e4660303b9ea6abc9f2af443689a421c1b13\"" Mar 17 17:42:41.560461 containerd[1471]: time="2025-03-17T17:42:41.560396816Z" level=info msg="TearDown network for sandbox \"cdfcb6e581a338e6cb320c08b196e4660303b9ea6abc9f2af443689a421c1b13\" successfully" Mar 17 17:42:41.560461 containerd[1471]: time="2025-03-17T17:42:41.560407767Z" level=info msg="StopPodSandbox for \"cdfcb6e581a338e6cb320c08b196e4660303b9ea6abc9f2af443689a421c1b13\" returns successfully" Mar 17 17:42:41.560785 containerd[1471]: time="2025-03-17T17:42:41.560766691Z" level=info msg="StopPodSandbox for \"40e32737773824eef901e2ee58674aeb2ee43a43656886504306789e3ed3a45b\"" Mar 17 17:42:41.560863 containerd[1471]: time="2025-03-17T17:42:41.560848882Z" level=info msg="TearDown network for sandbox 
\"40e32737773824eef901e2ee58674aeb2ee43a43656886504306789e3ed3a45b\" successfully" Mar 17 17:42:41.560904 containerd[1471]: time="2025-03-17T17:42:41.560861837Z" level=info msg="StopPodSandbox for \"40e32737773824eef901e2ee58674aeb2ee43a43656886504306789e3ed3a45b\" returns successfully" Mar 17 17:42:41.561216 containerd[1471]: time="2025-03-17T17:42:41.561185352Z" level=info msg="StopPodSandbox for \"8288c0d6c6206ca6e3bb0341d82c9b6dfe98446714195c494e41c5765c36d080\"" Mar 17 17:42:41.561296 containerd[1471]: time="2025-03-17T17:42:41.561276971Z" level=info msg="TearDown network for sandbox \"8288c0d6c6206ca6e3bb0341d82c9b6dfe98446714195c494e41c5765c36d080\" successfully" Mar 17 17:42:41.561296 containerd[1471]: time="2025-03-17T17:42:41.561293574Z" level=info msg="StopPodSandbox for \"8288c0d6c6206ca6e3bb0341d82c9b6dfe98446714195c494e41c5765c36d080\" returns successfully" Mar 17 17:42:41.561651 containerd[1471]: time="2025-03-17T17:42:41.561632117Z" level=info msg="StopPodSandbox for \"a86796bc073539410fecfeb3dec78c8c6ed265998c9643aa1e3a054f1b65a9d5\"" Mar 17 17:42:41.561749 containerd[1471]: time="2025-03-17T17:42:41.561733096Z" level=info msg="TearDown network for sandbox \"a86796bc073539410fecfeb3dec78c8c6ed265998c9643aa1e3a054f1b65a9d5\" successfully" Mar 17 17:42:41.561801 containerd[1471]: time="2025-03-17T17:42:41.561748025Z" level=info msg="StopPodSandbox for \"a86796bc073539410fecfeb3dec78c8c6ed265998c9643aa1e3a054f1b65a9d5\" returns successfully" Mar 17 17:42:41.562393 containerd[1471]: time="2025-03-17T17:42:41.562369292Z" level=info msg="StopPodSandbox for \"edb2903b482639be597b82a24a152d41c24f51e7acac61283260a1d0100a5d4c\"" Mar 17 17:42:41.562472 containerd[1471]: time="2025-03-17T17:42:41.562458988Z" level=info msg="TearDown network for sandbox \"edb2903b482639be597b82a24a152d41c24f51e7acac61283260a1d0100a5d4c\" successfully" Mar 17 17:42:41.562520 containerd[1471]: time="2025-03-17T17:42:41.562470670Z" level=info msg="StopPodSandbox for 
\"edb2903b482639be597b82a24a152d41c24f51e7acac61283260a1d0100a5d4c\" returns successfully" Mar 17 17:42:41.563114 kubelet[2574]: E0317 17:42:41.562689 2574 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:42:41.563114 kubelet[2574]: I0317 17:42:41.563002 2574 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5da8f5a8d97f5fab357ec1437d9519129cc8af2afc0299617c2e98c211099393" Mar 17 17:42:41.563234 containerd[1471]: time="2025-03-17T17:42:41.562944690Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-b7tm9,Uid:5d480ab2-f501-4510-9ec1-d051e760e88d,Namespace:kube-system,Attempt:6,}" Mar 17 17:42:41.563749 containerd[1471]: time="2025-03-17T17:42:41.563711413Z" level=info msg="StopPodSandbox for \"5da8f5a8d97f5fab357ec1437d9519129cc8af2afc0299617c2e98c211099393\"" Mar 17 17:42:41.563933 containerd[1471]: time="2025-03-17T17:42:41.563870365Z" level=info msg="Ensure that sandbox 5da8f5a8d97f5fab357ec1437d9519129cc8af2afc0299617c2e98c211099393 in task-service has been cleanup successfully" Mar 17 17:42:41.564188 containerd[1471]: time="2025-03-17T17:42:41.564145944Z" level=info msg="TearDown network for sandbox \"5da8f5a8d97f5fab357ec1437d9519129cc8af2afc0299617c2e98c211099393\" successfully" Mar 17 17:42:41.564188 containerd[1471]: time="2025-03-17T17:42:41.564168829Z" level=info msg="StopPodSandbox for \"5da8f5a8d97f5fab357ec1437d9519129cc8af2afc0299617c2e98c211099393\" returns successfully" Mar 17 17:42:41.564516 containerd[1471]: time="2025-03-17T17:42:41.564492103Z" level=info msg="StopPodSandbox for \"627d96227fcfb4bac341ed8b4c81f89ade5706af8209f346fc4831efe716bc42\"" Mar 17 17:42:41.564608 containerd[1471]: time="2025-03-17T17:42:41.564586408Z" level=info msg="TearDown network for sandbox \"627d96227fcfb4bac341ed8b4c81f89ade5706af8209f346fc4831efe716bc42\" successfully" Mar 
17 17:42:41.564608 containerd[1471]: time="2025-03-17T17:42:41.564603682Z" level=info msg="StopPodSandbox for \"627d96227fcfb4bac341ed8b4c81f89ade5706af8209f346fc4831efe716bc42\" returns successfully" Mar 17 17:42:41.564966 containerd[1471]: time="2025-03-17T17:42:41.564930482Z" level=info msg="StopPodSandbox for \"93863efd1f859db1a3509f2150ea53b31c09d80e6e1edb4b6f21bd6b060a1c8d\"" Mar 17 17:42:41.565081 containerd[1471]: time="2025-03-17T17:42:41.565014367Z" level=info msg="TearDown network for sandbox \"93863efd1f859db1a3509f2150ea53b31c09d80e6e1edb4b6f21bd6b060a1c8d\" successfully" Mar 17 17:42:41.565081 containerd[1471]: time="2025-03-17T17:42:41.565078032Z" level=info msg="StopPodSandbox for \"93863efd1f859db1a3509f2150ea53b31c09d80e6e1edb4b6f21bd6b060a1c8d\" returns successfully" Mar 17 17:42:41.565333 containerd[1471]: time="2025-03-17T17:42:41.565300037Z" level=info msg="StopPodSandbox for \"743e6592f769f9d664d43beeff4bdd5bb0cb253eef3f6ca03ad374fbd8378ca7\"" Mar 17 17:42:41.565429 containerd[1471]: time="2025-03-17T17:42:41.565413740Z" level=info msg="TearDown network for sandbox \"743e6592f769f9d664d43beeff4bdd5bb0cb253eef3f6ca03ad374fbd8378ca7\" successfully" Mar 17 17:42:41.565465 containerd[1471]: time="2025-03-17T17:42:41.565426736Z" level=info msg="StopPodSandbox for \"743e6592f769f9d664d43beeff4bdd5bb0cb253eef3f6ca03ad374fbd8378ca7\" returns successfully" Mar 17 17:42:41.565879 containerd[1471]: time="2025-03-17T17:42:41.565726733Z" level=info msg="StopPodSandbox for \"3200190c87309f125fb6f4db01012c3f8c9172cd686e39de64273fa9716dbc2d\"" Mar 17 17:42:41.565879 containerd[1471]: time="2025-03-17T17:42:41.565815437Z" level=info msg="TearDown network for sandbox \"3200190c87309f125fb6f4db01012c3f8c9172cd686e39de64273fa9716dbc2d\" successfully" Mar 17 17:42:41.565879 containerd[1471]: time="2025-03-17T17:42:41.565828974Z" level=info msg="StopPodSandbox for \"3200190c87309f125fb6f4db01012c3f8c9172cd686e39de64273fa9716dbc2d\" returns successfully" Mar 17 
17:42:41.566134 containerd[1471]: time="2025-03-17T17:42:41.566113401Z" level=info msg="StopPodSandbox for \"9b304f912bb3f9e71af30d53d5c7f3e243f23d0daf308c489c9401f6c28de7dd\"" Mar 17 17:42:41.566288 containerd[1471]: time="2025-03-17T17:42:41.566268835Z" level=info msg="TearDown network for sandbox \"9b304f912bb3f9e71af30d53d5c7f3e243f23d0daf308c489c9401f6c28de7dd\" successfully" Mar 17 17:42:41.566444 containerd[1471]: time="2025-03-17T17:42:41.566348913Z" level=info msg="StopPodSandbox for \"9b304f912bb3f9e71af30d53d5c7f3e243f23d0daf308c489c9401f6c28de7dd\" returns successfully" Mar 17 17:42:41.566502 kubelet[2574]: I0317 17:42:41.566453 2574 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9b28577579cbd2951afd5be0da376c580cb7372b20e0489868dea6e0d6972956" Mar 17 17:42:41.566770 containerd[1471]: time="2025-03-17T17:42:41.566731373Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-lsjhx,Uid:c4ec38d1-6c2f-488c-8f17-eb6d6aa4990d,Namespace:calico-system,Attempt:6,}" Mar 17 17:42:41.567039 containerd[1471]: time="2025-03-17T17:42:41.567014247Z" level=info msg="StopPodSandbox for \"9b28577579cbd2951afd5be0da376c580cb7372b20e0489868dea6e0d6972956\"" Mar 17 17:42:41.567502 containerd[1471]: time="2025-03-17T17:42:41.567371848Z" level=info msg="Ensure that sandbox 9b28577579cbd2951afd5be0da376c580cb7372b20e0489868dea6e0d6972956 in task-service has been cleanup successfully" Mar 17 17:42:41.567793 containerd[1471]: time="2025-03-17T17:42:41.567726273Z" level=info msg="TearDown network for sandbox \"9b28577579cbd2951afd5be0da376c580cb7372b20e0489868dea6e0d6972956\" successfully" Mar 17 17:42:41.567793 containerd[1471]: time="2025-03-17T17:42:41.567742695Z" level=info msg="StopPodSandbox for \"9b28577579cbd2951afd5be0da376c580cb7372b20e0489868dea6e0d6972956\" returns successfully" Mar 17 17:42:41.568039 containerd[1471]: time="2025-03-17T17:42:41.567922608Z" level=info msg="StopPodSandbox for 
\"baddfa9dc598678bda0530ca14b75d182f13cd5800496d8e3988bcf6bfeb0533\"" Mar 17 17:42:41.568039 containerd[1471]: time="2025-03-17T17:42:41.568018606Z" level=info msg="TearDown network for sandbox \"baddfa9dc598678bda0530ca14b75d182f13cd5800496d8e3988bcf6bfeb0533\" successfully" Mar 17 17:42:41.568039 containerd[1471]: time="2025-03-17T17:42:41.568030449Z" level=info msg="StopPodSandbox for \"baddfa9dc598678bda0530ca14b75d182f13cd5800496d8e3988bcf6bfeb0533\" returns successfully" Mar 17 17:42:41.568403 containerd[1471]: time="2025-03-17T17:42:41.568381427Z" level=info msg="StopPodSandbox for \"223d3bb6fe2624ebed37cf69cc759ae6ebc66e042dd9c5d2a70ed1873cb21051\"" Mar 17 17:42:41.568598 containerd[1471]: time="2025-03-17T17:42:41.568541951Z" level=info msg="TearDown network for sandbox \"223d3bb6fe2624ebed37cf69cc759ae6ebc66e042dd9c5d2a70ed1873cb21051\" successfully" Mar 17 17:42:41.568598 containerd[1471]: time="2025-03-17T17:42:41.568556220Z" level=info msg="StopPodSandbox for \"223d3bb6fe2624ebed37cf69cc759ae6ebc66e042dd9c5d2a70ed1873cb21051\" returns successfully" Mar 17 17:42:41.568803 containerd[1471]: time="2025-03-17T17:42:41.568779047Z" level=info msg="StopPodSandbox for \"d80cb82167a8ad779a3df81a8f4f06d6e87ee54cc9d974d390962e2bbdc2554c\"" Mar 17 17:42:41.568882 containerd[1471]: time="2025-03-17T17:42:41.568851579Z" level=info msg="TearDown network for sandbox \"d80cb82167a8ad779a3df81a8f4f06d6e87ee54cc9d974d390962e2bbdc2554c\" successfully" Mar 17 17:42:41.568882 containerd[1471]: time="2025-03-17T17:42:41.568861548Z" level=info msg="StopPodSandbox for \"d80cb82167a8ad779a3df81a8f4f06d6e87ee54cc9d974d390962e2bbdc2554c\" returns successfully" Mar 17 17:42:41.569384 kubelet[2574]: E0317 17:42:41.569348 2574 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:42:41.569668 containerd[1471]: time="2025-03-17T17:42:41.569524026Z" 
level=info msg="StopPodSandbox for \"ee983380c36cc632ecdbc7f259407df2ede7bbdbeef83c02e8ebc5bf1487f4ae\"" Mar 17 17:42:41.569668 containerd[1471]: time="2025-03-17T17:42:41.569614925Z" level=info msg="TearDown network for sandbox \"ee983380c36cc632ecdbc7f259407df2ede7bbdbeef83c02e8ebc5bf1487f4ae\" successfully" Mar 17 17:42:41.569668 containerd[1471]: time="2025-03-17T17:42:41.569625085Z" level=info msg="StopPodSandbox for \"ee983380c36cc632ecdbc7f259407df2ede7bbdbeef83c02e8ebc5bf1487f4ae\" returns successfully" Mar 17 17:42:41.571399 containerd[1471]: time="2025-03-17T17:42:41.571240421Z" level=info msg="StopPodSandbox for \"c8fa0e68e50c704cec6afec4727d369644d559c68bcae3c557439825300e22db\"" Mar 17 17:42:41.571399 containerd[1471]: time="2025-03-17T17:42:41.571322542Z" level=info msg="TearDown network for sandbox \"c8fa0e68e50c704cec6afec4727d369644d559c68bcae3c557439825300e22db\" successfully" Mar 17 17:42:41.571399 containerd[1471]: time="2025-03-17T17:42:41.571332983Z" level=info msg="StopPodSandbox for \"c8fa0e68e50c704cec6afec4727d369644d559c68bcae3c557439825300e22db\" returns successfully" Mar 17 17:42:41.571864 containerd[1471]: time="2025-03-17T17:42:41.571795489Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-8b46cd865-7kxzr,Uid:d2c44e13-0cac-42de-9897-344a553902e4,Namespace:calico-system,Attempt:6,}" Mar 17 17:42:41.573858 kubelet[2574]: I0317 17:42:41.573511 2574 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="876c4ff72854a799f7991bdbf3266cb4c842a820844d83e42161d08c217d6e45" Mar 17 17:42:41.573982 containerd[1471]: time="2025-03-17T17:42:41.573950724Z" level=info msg="StopPodSandbox for \"876c4ff72854a799f7991bdbf3266cb4c842a820844d83e42161d08c217d6e45\"" Mar 17 17:42:41.575371 containerd[1471]: time="2025-03-17T17:42:41.575339125Z" level=info msg="Ensure that sandbox 876c4ff72854a799f7991bdbf3266cb4c842a820844d83e42161d08c217d6e45 in task-service has been cleanup successfully" Mar 
17 17:42:41.575736 containerd[1471]: time="2025-03-17T17:42:41.575700143Z" level=info msg="TearDown network for sandbox \"876c4ff72854a799f7991bdbf3266cb4c842a820844d83e42161d08c217d6e45\" successfully" Mar 17 17:42:41.575736 containerd[1471]: time="2025-03-17T17:42:41.575722717Z" level=info msg="StopPodSandbox for \"876c4ff72854a799f7991bdbf3266cb4c842a820844d83e42161d08c217d6e45\" returns successfully" Mar 17 17:42:41.576173 containerd[1471]: time="2025-03-17T17:42:41.576144283Z" level=info msg="StopPodSandbox for \"4d0bb67187b44524b49f0720b91d54ad59f0d422d33a604c2f27a5360d15d8b6\"" Mar 17 17:42:41.576173 containerd[1471]: time="2025-03-17T17:42:41.576244811Z" level=info msg="TearDown network for sandbox \"4d0bb67187b44524b49f0720b91d54ad59f0d422d33a604c2f27a5360d15d8b6\" successfully" Mar 17 17:42:41.576510 containerd[1471]: time="2025-03-17T17:42:41.576257405Z" level=info msg="StopPodSandbox for \"4d0bb67187b44524b49f0720b91d54ad59f0d422d33a604c2f27a5360d15d8b6\" returns successfully" Mar 17 17:42:41.576639 containerd[1471]: time="2025-03-17T17:42:41.576613373Z" level=info msg="StopPodSandbox for \"39172e784b07542f3f1eb6b9c87f2642270767685a89f397784b4fad77185966\"" Mar 17 17:42:41.576809 containerd[1471]: time="2025-03-17T17:42:41.576711785Z" level=info msg="TearDown network for sandbox \"39172e784b07542f3f1eb6b9c87f2642270767685a89f397784b4fad77185966\" successfully" Mar 17 17:42:41.576809 containerd[1471]: time="2025-03-17T17:42:41.576730843Z" level=info msg="StopPodSandbox for \"39172e784b07542f3f1eb6b9c87f2642270767685a89f397784b4fad77185966\" returns successfully" Mar 17 17:42:41.576935 kubelet[2574]: I0317 17:42:41.576912 2574 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="438162a23ae7fabe3d9876ae44fd5ed596565f51c7f2e40c1edb3c0697ff6eeb" Mar 17 17:42:41.577270 containerd[1471]: time="2025-03-17T17:42:41.577249199Z" level=info msg="StopPodSandbox for \"04898b8b1a06024d8a321645a3aedaf17f255e75b4e26b3816a2c1df841abfaf\"" 
Mar 17 17:42:41.577353 containerd[1471]: time="2025-03-17T17:42:41.577330939Z" level=info msg="TearDown network for sandbox \"04898b8b1a06024d8a321645a3aedaf17f255e75b4e26b3816a2c1df841abfaf\" successfully" Mar 17 17:42:41.577353 containerd[1471]: time="2025-03-17T17:42:41.577349175Z" level=info msg="StopPodSandbox for \"04898b8b1a06024d8a321645a3aedaf17f255e75b4e26b3816a2c1df841abfaf\" returns successfully" Mar 17 17:42:41.577943 containerd[1471]: time="2025-03-17T17:42:41.577829657Z" level=info msg="StopPodSandbox for \"ea14d6d0447dbc5080d47a26d58a2bfba80bc5ebe62085775ca7fc4db9ecf3b3\"" Mar 17 17:42:41.577943 containerd[1471]: time="2025-03-17T17:42:41.577911167Z" level=info msg="StopPodSandbox for \"438162a23ae7fabe3d9876ae44fd5ed596565f51c7f2e40c1edb3c0697ff6eeb\"" Mar 17 17:42:41.578210 containerd[1471]: time="2025-03-17T17:42:41.578140065Z" level=info msg="Ensure that sandbox 438162a23ae7fabe3d9876ae44fd5ed596565f51c7f2e40c1edb3c0697ff6eeb in task-service has been cleanup successfully" Mar 17 17:42:41.578253 containerd[1471]: time="2025-03-17T17:42:41.577914443Z" level=info msg="TearDown network for sandbox \"ea14d6d0447dbc5080d47a26d58a2bfba80bc5ebe62085775ca7fc4db9ecf3b3\" successfully" Mar 17 17:42:41.578253 containerd[1471]: time="2025-03-17T17:42:41.578235572Z" level=info msg="StopPodSandbox for \"ea14d6d0447dbc5080d47a26d58a2bfba80bc5ebe62085775ca7fc4db9ecf3b3\" returns successfully" Mar 17 17:42:41.578470 containerd[1471]: time="2025-03-17T17:42:41.578448029Z" level=info msg="StopPodSandbox for \"1a12983a8841ddeac31ec562c0ce3b12e75f4c40488315d12dd610812c63ac70\"" Mar 17 17:42:41.578558 containerd[1471]: time="2025-03-17T17:42:41.578541012Z" level=info msg="TearDown network for sandbox \"1a12983a8841ddeac31ec562c0ce3b12e75f4c40488315d12dd610812c63ac70\" successfully" Mar 17 17:42:41.578603 containerd[1471]: time="2025-03-17T17:42:41.578556382Z" level=info msg="StopPodSandbox for \"1a12983a8841ddeac31ec562c0ce3b12e75f4c40488315d12dd610812c63ac70\" 
returns successfully" Mar 17 17:42:41.579147 containerd[1471]: time="2025-03-17T17:42:41.579121299Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-b6b5f678f-v8plm,Uid:e2974f51-cab2-451b-8e3d-274fab1b872e,Namespace:calico-apiserver,Attempt:6,}" Mar 17 17:42:41.579348 containerd[1471]: time="2025-03-17T17:42:41.579263457Z" level=info msg="TearDown network for sandbox \"438162a23ae7fabe3d9876ae44fd5ed596565f51c7f2e40c1edb3c0697ff6eeb\" successfully" Mar 17 17:42:41.579348 containerd[1471]: time="2025-03-17T17:42:41.579286482Z" level=info msg="StopPodSandbox for \"438162a23ae7fabe3d9876ae44fd5ed596565f51c7f2e40c1edb3c0697ff6eeb\" returns successfully" Mar 17 17:42:41.579724 containerd[1471]: time="2025-03-17T17:42:41.579684563Z" level=info msg="StopPodSandbox for \"fa980406b9a3398d166612d25afd48da2b1b358072669035c994ca219f3b09f8\"" Mar 17 17:42:41.579789 containerd[1471]: time="2025-03-17T17:42:41.579773578Z" level=info msg="TearDown network for sandbox \"fa980406b9a3398d166612d25afd48da2b1b358072669035c994ca219f3b09f8\" successfully" Mar 17 17:42:41.579831 containerd[1471]: time="2025-03-17T17:42:41.579788026Z" level=info msg="StopPodSandbox for \"fa980406b9a3398d166612d25afd48da2b1b358072669035c994ca219f3b09f8\" returns successfully" Mar 17 17:42:41.580117 containerd[1471]: time="2025-03-17T17:42:41.580096911Z" level=info msg="StopPodSandbox for \"79fb2b1bbc5e9621a87a0573fbaa0096c6eb6ca03a40900b2e0431acc0355602\"" Mar 17 17:42:41.580194 containerd[1471]: time="2025-03-17T17:42:41.580177148Z" level=info msg="TearDown network for sandbox \"79fb2b1bbc5e9621a87a0573fbaa0096c6eb6ca03a40900b2e0431acc0355602\" successfully" Mar 17 17:42:41.580236 containerd[1471]: time="2025-03-17T17:42:41.580191987Z" level=info msg="StopPodSandbox for \"79fb2b1bbc5e9621a87a0573fbaa0096c6eb6ca03a40900b2e0431acc0355602\" returns successfully" Mar 17 17:42:41.580420 containerd[1471]: time="2025-03-17T17:42:41.580390206Z" level=info msg="StopPodSandbox for 
\"affd646c561af90bafd7e4b900a961dd15025345077cff03d0e35877d76fd6df\"" Mar 17 17:42:41.580457 kubelet[2574]: I0317 17:42:41.580361 2574 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="23c98b6772d0ae34e982558c97a3ab869ea3f90bd2d257af1efddf9ab2c09d53" Mar 17 17:42:41.580489 containerd[1471]: time="2025-03-17T17:42:41.580474963Z" level=info msg="TearDown network for sandbox \"affd646c561af90bafd7e4b900a961dd15025345077cff03d0e35877d76fd6df\" successfully" Mar 17 17:42:41.580573 containerd[1471]: time="2025-03-17T17:42:41.580485963Z" level=info msg="StopPodSandbox for \"affd646c561af90bafd7e4b900a961dd15025345077cff03d0e35877d76fd6df\" returns successfully" Mar 17 17:42:41.580708 containerd[1471]: time="2025-03-17T17:42:41.580688561Z" level=info msg="StopPodSandbox for \"23c98b6772d0ae34e982558c97a3ab869ea3f90bd2d257af1efddf9ab2c09d53\"" Mar 17 17:42:41.580852 containerd[1471]: time="2025-03-17T17:42:41.580835308Z" level=info msg="Ensure that sandbox 23c98b6772d0ae34e982558c97a3ab869ea3f90bd2d257af1efddf9ab2c09d53 in task-service has been cleanup successfully" Mar 17 17:42:41.581034 containerd[1471]: time="2025-03-17T17:42:41.581015131Z" level=info msg="TearDown network for sandbox \"23c98b6772d0ae34e982558c97a3ab869ea3f90bd2d257af1efddf9ab2c09d53\" successfully" Mar 17 17:42:41.581095 containerd[1471]: time="2025-03-17T17:42:41.581032074Z" level=info msg="StopPodSandbox for \"23c98b6772d0ae34e982558c97a3ab869ea3f90bd2d257af1efddf9ab2c09d53\" returns successfully" Mar 17 17:42:41.581296 containerd[1471]: time="2025-03-17T17:42:41.581275692Z" level=info msg="StopPodSandbox for \"ec0e48b05e0a53a99114c24c83de494a8807ddffa32d2d9512e782c6d1476ec0\"" Mar 17 17:42:41.581375 containerd[1471]: time="2025-03-17T17:42:41.581359917Z" level=info msg="TearDown network for sandbox \"ec0e48b05e0a53a99114c24c83de494a8807ddffa32d2d9512e782c6d1476ec0\" successfully" Mar 17 17:42:41.581414 containerd[1471]: time="2025-03-17T17:42:41.581372892Z" 
level=info msg="StopPodSandbox for \"ec0e48b05e0a53a99114c24c83de494a8807ddffa32d2d9512e782c6d1476ec0\" returns successfully" Mar 17 17:42:41.581449 containerd[1471]: time="2025-03-17T17:42:41.581433030Z" level=info msg="StopPodSandbox for \"752881363b6f73c704f3c8518dab695dfd31be7b9b79cdde65eb0c4d95df3791\"" Mar 17 17:42:41.581521 containerd[1471]: time="2025-03-17T17:42:41.581504790Z" level=info msg="TearDown network for sandbox \"752881363b6f73c704f3c8518dab695dfd31be7b9b79cdde65eb0c4d95df3791\" successfully" Mar 17 17:42:41.581551 containerd[1471]: time="2025-03-17T17:42:41.581519851Z" level=info msg="StopPodSandbox for \"752881363b6f73c704f3c8518dab695dfd31be7b9b79cdde65eb0c4d95df3791\" returns successfully" Mar 17 17:42:41.581767 containerd[1471]: time="2025-03-17T17:42:41.581747466Z" level=info msg="StopPodSandbox for \"0892a07a3e06be0e814b5c58351e353b0bb71c9a1f01243516ce3aef745decf8\"" Mar 17 17:42:41.581845 containerd[1471]: time="2025-03-17T17:42:41.581830629Z" level=info msg="TearDown network for sandbox \"0892a07a3e06be0e814b5c58351e353b0bb71c9a1f01243516ce3aef745decf8\" successfully" Mar 17 17:42:41.581882 containerd[1471]: time="2025-03-17T17:42:41.581843514Z" level=info msg="StopPodSandbox for \"0892a07a3e06be0e814b5c58351e353b0bb71c9a1f01243516ce3aef745decf8\" returns successfully" Mar 17 17:42:41.581915 containerd[1471]: time="2025-03-17T17:42:41.581898573Z" level=info msg="StopPodSandbox for \"029c4006ecd1c4f4a1ac53934cfb4e4722c8c52eefcf8cff976bcd131fe5d151\"" Mar 17 17:42:41.581990 containerd[1471]: time="2025-03-17T17:42:41.581974040Z" level=info msg="TearDown network for sandbox \"029c4006ecd1c4f4a1ac53934cfb4e4722c8c52eefcf8cff976bcd131fe5d151\" successfully" Mar 17 17:42:41.582019 containerd[1471]: time="2025-03-17T17:42:41.581988449Z" level=info msg="StopPodSandbox for \"029c4006ecd1c4f4a1ac53934cfb4e4722c8c52eefcf8cff976bcd131fe5d151\" returns successfully" Mar 17 17:42:41.583236 containerd[1471]: time="2025-03-17T17:42:41.583193310Z" 
level=info msg="StopPodSandbox for \"f595336b40a5a0477c6557d8c88617834e5c5758a2435ab0f027d22a0d8aac32\"" Mar 17 17:42:41.583347 containerd[1471]: time="2025-03-17T17:42:41.583233600Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-b6b5f678f-lhw82,Uid:e61bbec9-65b1-4228-aa40-669eba7841ea,Namespace:calico-apiserver,Attempt:6,}" Mar 17 17:42:41.583347 containerd[1471]: time="2025-03-17T17:42:41.583288658Z" level=info msg="TearDown network for sandbox \"f595336b40a5a0477c6557d8c88617834e5c5758a2435ab0f027d22a0d8aac32\" successfully" Mar 17 17:42:41.583347 containerd[1471]: time="2025-03-17T17:42:41.583300591Z" level=info msg="StopPodSandbox for \"f595336b40a5a0477c6557d8c88617834e5c5758a2435ab0f027d22a0d8aac32\" returns successfully" Mar 17 17:42:41.583515 containerd[1471]: time="2025-03-17T17:42:41.583492938Z" level=info msg="StopPodSandbox for \"537027ffa1bfbd16c68902c78119d95aa49538ff5449faa654586af4707e9042\"" Mar 17 17:42:41.583601 containerd[1471]: time="2025-03-17T17:42:41.583582975Z" level=info msg="TearDown network for sandbox \"537027ffa1bfbd16c68902c78119d95aa49538ff5449faa654586af4707e9042\" successfully" Mar 17 17:42:41.583640 containerd[1471]: time="2025-03-17T17:42:41.583599016Z" level=info msg="StopPodSandbox for \"537027ffa1bfbd16c68902c78119d95aa49538ff5449faa654586af4707e9042\" returns successfully" Mar 17 17:42:41.583877 containerd[1471]: time="2025-03-17T17:42:41.583850729Z" level=info msg="StopPodSandbox for \"2e196fcd9f9243b223018639d31a93ff3f9775e305ad1408354381cea90249e5\"" Mar 17 17:42:41.583960 containerd[1471]: time="2025-03-17T17:42:41.583938722Z" level=info msg="TearDown network for sandbox \"2e196fcd9f9243b223018639d31a93ff3f9775e305ad1408354381cea90249e5\" successfully" Mar 17 17:42:41.583960 containerd[1471]: time="2025-03-17T17:42:41.583951697Z" level=info msg="StopPodSandbox for \"2e196fcd9f9243b223018639d31a93ff3f9775e305ad1408354381cea90249e5\" returns successfully" Mar 17 17:42:41.584207 kubelet[2574]: 
E0317 17:42:41.584177 2574 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:42:41.584399 containerd[1471]: time="2025-03-17T17:42:41.584363294Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-hg76m,Uid:7b8c3717-fa53-41b7-bf24-e6ae52b8b921,Namespace:kube-system,Attempt:6,}" Mar 17 17:42:41.631745 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-23c98b6772d0ae34e982558c97a3ab869ea3f90bd2d257af1efddf9ab2c09d53-shm.mount: Deactivated successfully. Mar 17 17:42:41.631857 systemd[1]: run-netns-cni\x2db01b9c30\x2d69e5\x2d6807\x2d1af5\x2d9a0bae7f02c1.mount: Deactivated successfully. Mar 17 17:42:41.631936 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-9b28577579cbd2951afd5be0da376c580cb7372b20e0489868dea6e0d6972956-shm.mount: Deactivated successfully. Mar 17 17:42:41.632009 systemd[1]: run-netns-cni\x2dd7d62765\x2d72a5\x2d19bb\x2d622a\x2de468df93e145.mount: Deactivated successfully. Mar 17 17:42:41.632122 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-5da8f5a8d97f5fab357ec1437d9519129cc8af2afc0299617c2e98c211099393-shm.mount: Deactivated successfully. 
Mar 17 17:42:41.804999 kubelet[2574]: I0317 17:42:41.804903 2574 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-n5pnx" podStartSLOduration=1.783393018 podStartE2EDuration="26.804888036s" podCreationTimestamp="2025-03-17 17:42:15 +0000 UTC" firstStartedPulling="2025-03-17 17:42:16.075538124 +0000 UTC m=+13.235959612" lastFinishedPulling="2025-03-17 17:42:41.097033142 +0000 UTC m=+38.257454630" observedRunningTime="2025-03-17 17:42:41.804195128 +0000 UTC m=+38.964616626" watchObservedRunningTime="2025-03-17 17:42:41.804888036 +0000 UTC m=+38.965309524" Mar 17 17:42:42.587387 kubelet[2574]: E0317 17:42:42.587349 2574 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:42:42.758798 systemd-networkd[1400]: calibe755546ec4: Link UP Mar 17 17:42:42.759184 systemd-networkd[1400]: calibe755546ec4: Gained carrier Mar 17 17:42:42.772739 containerd[1471]: 2025-03-17 17:42:42.591 [INFO][4882] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Mar 17 17:42:42.772739 containerd[1471]: 2025-03-17 17:42:42.620 [INFO][4882] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--b6b5f678f--lhw82-eth0 calico-apiserver-b6b5f678f- calico-apiserver e61bbec9-65b1-4228-aa40-669eba7841ea 708 0 2025-03-17 17:42:15 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:b6b5f678f projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-b6b5f678f-lhw82 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calibe755546ec4 [] []}} ContainerID="d2de2f675da123eddaf33558bc56585b5f588cf685cfb0810e95762a63e53e32" 
Namespace="calico-apiserver" Pod="calico-apiserver-b6b5f678f-lhw82" WorkloadEndpoint="localhost-k8s-calico--apiserver--b6b5f678f--lhw82-" Mar 17 17:42:42.772739 containerd[1471]: 2025-03-17 17:42:42.620 [INFO][4882] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="d2de2f675da123eddaf33558bc56585b5f588cf685cfb0810e95762a63e53e32" Namespace="calico-apiserver" Pod="calico-apiserver-b6b5f678f-lhw82" WorkloadEndpoint="localhost-k8s-calico--apiserver--b6b5f678f--lhw82-eth0" Mar 17 17:42:42.772739 containerd[1471]: 2025-03-17 17:42:42.704 [INFO][4960] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d2de2f675da123eddaf33558bc56585b5f588cf685cfb0810e95762a63e53e32" HandleID="k8s-pod-network.d2de2f675da123eddaf33558bc56585b5f588cf685cfb0810e95762a63e53e32" Workload="localhost-k8s-calico--apiserver--b6b5f678f--lhw82-eth0" Mar 17 17:42:42.772739 containerd[1471]: 2025-03-17 17:42:42.718 [INFO][4960] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="d2de2f675da123eddaf33558bc56585b5f588cf685cfb0810e95762a63e53e32" HandleID="k8s-pod-network.d2de2f675da123eddaf33558bc56585b5f588cf685cfb0810e95762a63e53e32" Workload="localhost-k8s-calico--apiserver--b6b5f678f--lhw82-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002ddea0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-b6b5f678f-lhw82", "timestamp":"2025-03-17 17:42:42.70483597 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Mar 17 17:42:42.772739 containerd[1471]: 2025-03-17 17:42:42.718 [INFO][4960] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Mar 17 17:42:42.772739 containerd[1471]: 2025-03-17 17:42:42.718 [INFO][4960] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Mar 17 17:42:42.772739 containerd[1471]: 2025-03-17 17:42:42.718 [INFO][4960] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 17 17:42:42.772739 containerd[1471]: 2025-03-17 17:42:42.722 [INFO][4960] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.d2de2f675da123eddaf33558bc56585b5f588cf685cfb0810e95762a63e53e32" host="localhost" Mar 17 17:42:42.772739 containerd[1471]: 2025-03-17 17:42:42.728 [INFO][4960] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Mar 17 17:42:42.772739 containerd[1471]: 2025-03-17 17:42:42.732 [INFO][4960] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Mar 17 17:42:42.772739 containerd[1471]: 2025-03-17 17:42:42.734 [INFO][4960] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 17 17:42:42.772739 containerd[1471]: 2025-03-17 17:42:42.736 [INFO][4960] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 17 17:42:42.772739 containerd[1471]: 2025-03-17 17:42:42.736 [INFO][4960] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.d2de2f675da123eddaf33558bc56585b5f588cf685cfb0810e95762a63e53e32" host="localhost" Mar 17 17:42:42.772739 containerd[1471]: 2025-03-17 17:42:42.738 [INFO][4960] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.d2de2f675da123eddaf33558bc56585b5f588cf685cfb0810e95762a63e53e32 Mar 17 17:42:42.772739 containerd[1471]: 2025-03-17 17:42:42.742 [INFO][4960] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.d2de2f675da123eddaf33558bc56585b5f588cf685cfb0810e95762a63e53e32" host="localhost" Mar 17 17:42:42.772739 containerd[1471]: 2025-03-17 17:42:42.747 [INFO][4960] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 
handle="k8s-pod-network.d2de2f675da123eddaf33558bc56585b5f588cf685cfb0810e95762a63e53e32" host="localhost" Mar 17 17:42:42.772739 containerd[1471]: 2025-03-17 17:42:42.747 [INFO][4960] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.d2de2f675da123eddaf33558bc56585b5f588cf685cfb0810e95762a63e53e32" host="localhost" Mar 17 17:42:42.772739 containerd[1471]: 2025-03-17 17:42:42.747 [INFO][4960] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Mar 17 17:42:42.772739 containerd[1471]: 2025-03-17 17:42:42.747 [INFO][4960] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="d2de2f675da123eddaf33558bc56585b5f588cf685cfb0810e95762a63e53e32" HandleID="k8s-pod-network.d2de2f675da123eddaf33558bc56585b5f588cf685cfb0810e95762a63e53e32" Workload="localhost-k8s-calico--apiserver--b6b5f678f--lhw82-eth0" Mar 17 17:42:42.774128 containerd[1471]: 2025-03-17 17:42:42.751 [INFO][4882] cni-plugin/k8s.go 386: Populated endpoint ContainerID="d2de2f675da123eddaf33558bc56585b5f588cf685cfb0810e95762a63e53e32" Namespace="calico-apiserver" Pod="calico-apiserver-b6b5f678f-lhw82" WorkloadEndpoint="localhost-k8s-calico--apiserver--b6b5f678f--lhw82-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--b6b5f678f--lhw82-eth0", GenerateName:"calico-apiserver-b6b5f678f-", Namespace:"calico-apiserver", SelfLink:"", UID:"e61bbec9-65b1-4228-aa40-669eba7841ea", ResourceVersion:"708", Generation:0, CreationTimestamp:time.Date(2025, time.March, 17, 17, 42, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"b6b5f678f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-b6b5f678f-lhw82", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calibe755546ec4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 17 17:42:42.774128 containerd[1471]: 2025-03-17 17:42:42.751 [INFO][4882] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.129/32] ContainerID="d2de2f675da123eddaf33558bc56585b5f588cf685cfb0810e95762a63e53e32" Namespace="calico-apiserver" Pod="calico-apiserver-b6b5f678f-lhw82" WorkloadEndpoint="localhost-k8s-calico--apiserver--b6b5f678f--lhw82-eth0" Mar 17 17:42:42.774128 containerd[1471]: 2025-03-17 17:42:42.751 [INFO][4882] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calibe755546ec4 ContainerID="d2de2f675da123eddaf33558bc56585b5f588cf685cfb0810e95762a63e53e32" Namespace="calico-apiserver" Pod="calico-apiserver-b6b5f678f-lhw82" WorkloadEndpoint="localhost-k8s-calico--apiserver--b6b5f678f--lhw82-eth0" Mar 17 17:42:42.774128 containerd[1471]: 2025-03-17 17:42:42.759 [INFO][4882] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d2de2f675da123eddaf33558bc56585b5f588cf685cfb0810e95762a63e53e32" Namespace="calico-apiserver" Pod="calico-apiserver-b6b5f678f-lhw82" WorkloadEndpoint="localhost-k8s-calico--apiserver--b6b5f678f--lhw82-eth0" Mar 17 17:42:42.774128 containerd[1471]: 2025-03-17 17:42:42.760 [INFO][4882] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="d2de2f675da123eddaf33558bc56585b5f588cf685cfb0810e95762a63e53e32" Namespace="calico-apiserver" Pod="calico-apiserver-b6b5f678f-lhw82" WorkloadEndpoint="localhost-k8s-calico--apiserver--b6b5f678f--lhw82-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--b6b5f678f--lhw82-eth0", GenerateName:"calico-apiserver-b6b5f678f-", Namespace:"calico-apiserver", SelfLink:"", UID:"e61bbec9-65b1-4228-aa40-669eba7841ea", ResourceVersion:"708", Generation:0, CreationTimestamp:time.Date(2025, time.March, 17, 17, 42, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"b6b5f678f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d2de2f675da123eddaf33558bc56585b5f588cf685cfb0810e95762a63e53e32", Pod:"calico-apiserver-b6b5f678f-lhw82", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calibe755546ec4", MAC:"ce:29:ab:dd:55:ee", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 17 17:42:42.774128 containerd[1471]: 2025-03-17 17:42:42.770 [INFO][4882] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="d2de2f675da123eddaf33558bc56585b5f588cf685cfb0810e95762a63e53e32" Namespace="calico-apiserver" 
Pod="calico-apiserver-b6b5f678f-lhw82" WorkloadEndpoint="localhost-k8s-calico--apiserver--b6b5f678f--lhw82-eth0" Mar 17 17:42:42.826896 containerd[1471]: time="2025-03-17T17:42:42.826770096Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 17:42:42.826896 containerd[1471]: time="2025-03-17T17:42:42.826864770Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 17:42:42.826896 containerd[1471]: time="2025-03-17T17:42:42.826882716Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:42:42.827149 containerd[1471]: time="2025-03-17T17:42:42.826970908Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:42:42.862358 systemd[1]: Started cri-containerd-d2de2f675da123eddaf33558bc56585b5f588cf685cfb0810e95762a63e53e32.scope - libcontainer container d2de2f675da123eddaf33558bc56585b5f588cf685cfb0810e95762a63e53e32. 
Mar 17 17:42:42.863197 systemd-networkd[1400]: calia175a3a13cb: Link UP Mar 17 17:42:42.863417 systemd-networkd[1400]: calia175a3a13cb: Gained carrier Mar 17 17:42:42.874806 containerd[1471]: 2025-03-17 17:42:42.552 [INFO][4853] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Mar 17 17:42:42.874806 containerd[1471]: 2025-03-17 17:42:42.571 [INFO][4853] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--8b46cd865--7kxzr-eth0 calico-kube-controllers-8b46cd865- calico-system d2c44e13-0cac-42de-9897-344a553902e4 711 0 2025-03-17 17:42:15 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:8b46cd865 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-8b46cd865-7kxzr eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calia175a3a13cb [] []}} ContainerID="acf5278ca9b9fcba911d11f742e924442c6e5d64748bfc6391ca85bc664ff4f3" Namespace="calico-system" Pod="calico-kube-controllers-8b46cd865-7kxzr" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--8b46cd865--7kxzr-" Mar 17 17:42:42.874806 containerd[1471]: 2025-03-17 17:42:42.571 [INFO][4853] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="acf5278ca9b9fcba911d11f742e924442c6e5d64748bfc6391ca85bc664ff4f3" Namespace="calico-system" Pod="calico-kube-controllers-8b46cd865-7kxzr" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--8b46cd865--7kxzr-eth0" Mar 17 17:42:42.874806 containerd[1471]: 2025-03-17 17:42:42.704 [INFO][4914] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="acf5278ca9b9fcba911d11f742e924442c6e5d64748bfc6391ca85bc664ff4f3" 
HandleID="k8s-pod-network.acf5278ca9b9fcba911d11f742e924442c6e5d64748bfc6391ca85bc664ff4f3" Workload="localhost-k8s-calico--kube--controllers--8b46cd865--7kxzr-eth0" Mar 17 17:42:42.874806 containerd[1471]: 2025-03-17 17:42:42.719 [INFO][4914] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="acf5278ca9b9fcba911d11f742e924442c6e5d64748bfc6391ca85bc664ff4f3" HandleID="k8s-pod-network.acf5278ca9b9fcba911d11f742e924442c6e5d64748bfc6391ca85bc664ff4f3" Workload="localhost-k8s-calico--kube--controllers--8b46cd865--7kxzr-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000375ae0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-8b46cd865-7kxzr", "timestamp":"2025-03-17 17:42:42.703162723 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Mar 17 17:42:42.874806 containerd[1471]: 2025-03-17 17:42:42.720 [INFO][4914] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Mar 17 17:42:42.874806 containerd[1471]: 2025-03-17 17:42:42.747 [INFO][4914] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Mar 17 17:42:42.874806 containerd[1471]: 2025-03-17 17:42:42.747 [INFO][4914] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 17 17:42:42.874806 containerd[1471]: 2025-03-17 17:42:42.824 [INFO][4914] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.acf5278ca9b9fcba911d11f742e924442c6e5d64748bfc6391ca85bc664ff4f3" host="localhost" Mar 17 17:42:42.874806 containerd[1471]: 2025-03-17 17:42:42.830 [INFO][4914] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Mar 17 17:42:42.874806 containerd[1471]: 2025-03-17 17:42:42.836 [INFO][4914] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Mar 17 17:42:42.874806 containerd[1471]: 2025-03-17 17:42:42.838 [INFO][4914] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 17 17:42:42.874806 containerd[1471]: 2025-03-17 17:42:42.840 [INFO][4914] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 17 17:42:42.874806 containerd[1471]: 2025-03-17 17:42:42.840 [INFO][4914] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.acf5278ca9b9fcba911d11f742e924442c6e5d64748bfc6391ca85bc664ff4f3" host="localhost" Mar 17 17:42:42.874806 containerd[1471]: 2025-03-17 17:42:42.842 [INFO][4914] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.acf5278ca9b9fcba911d11f742e924442c6e5d64748bfc6391ca85bc664ff4f3 Mar 17 17:42:42.874806 containerd[1471]: 2025-03-17 17:42:42.849 [INFO][4914] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.acf5278ca9b9fcba911d11f742e924442c6e5d64748bfc6391ca85bc664ff4f3" host="localhost" Mar 17 17:42:42.874806 containerd[1471]: 2025-03-17 17:42:42.856 [INFO][4914] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 
handle="k8s-pod-network.acf5278ca9b9fcba911d11f742e924442c6e5d64748bfc6391ca85bc664ff4f3" host="localhost" Mar 17 17:42:42.874806 containerd[1471]: 2025-03-17 17:42:42.856 [INFO][4914] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.acf5278ca9b9fcba911d11f742e924442c6e5d64748bfc6391ca85bc664ff4f3" host="localhost" Mar 17 17:42:42.874806 containerd[1471]: 2025-03-17 17:42:42.856 [INFO][4914] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Mar 17 17:42:42.874806 containerd[1471]: 2025-03-17 17:42:42.856 [INFO][4914] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="acf5278ca9b9fcba911d11f742e924442c6e5d64748bfc6391ca85bc664ff4f3" HandleID="k8s-pod-network.acf5278ca9b9fcba911d11f742e924442c6e5d64748bfc6391ca85bc664ff4f3" Workload="localhost-k8s-calico--kube--controllers--8b46cd865--7kxzr-eth0" Mar 17 17:42:42.875411 containerd[1471]: 2025-03-17 17:42:42.860 [INFO][4853] cni-plugin/k8s.go 386: Populated endpoint ContainerID="acf5278ca9b9fcba911d11f742e924442c6e5d64748bfc6391ca85bc664ff4f3" Namespace="calico-system" Pod="calico-kube-controllers-8b46cd865-7kxzr" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--8b46cd865--7kxzr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--8b46cd865--7kxzr-eth0", GenerateName:"calico-kube-controllers-8b46cd865-", Namespace:"calico-system", SelfLink:"", UID:"d2c44e13-0cac-42de-9897-344a553902e4", ResourceVersion:"711", Generation:0, CreationTimestamp:time.Date(2025, time.March, 17, 17, 42, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"8b46cd865", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-8b46cd865-7kxzr", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calia175a3a13cb", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 17 17:42:42.875411 containerd[1471]: 2025-03-17 17:42:42.860 [INFO][4853] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.130/32] ContainerID="acf5278ca9b9fcba911d11f742e924442c6e5d64748bfc6391ca85bc664ff4f3" Namespace="calico-system" Pod="calico-kube-controllers-8b46cd865-7kxzr" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--8b46cd865--7kxzr-eth0" Mar 17 17:42:42.875411 containerd[1471]: 2025-03-17 17:42:42.860 [INFO][4853] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia175a3a13cb ContainerID="acf5278ca9b9fcba911d11f742e924442c6e5d64748bfc6391ca85bc664ff4f3" Namespace="calico-system" Pod="calico-kube-controllers-8b46cd865-7kxzr" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--8b46cd865--7kxzr-eth0" Mar 17 17:42:42.875411 containerd[1471]: 2025-03-17 17:42:42.862 [INFO][4853] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="acf5278ca9b9fcba911d11f742e924442c6e5d64748bfc6391ca85bc664ff4f3" Namespace="calico-system" Pod="calico-kube-controllers-8b46cd865-7kxzr" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--8b46cd865--7kxzr-eth0" Mar 17 17:42:42.875411 containerd[1471]: 2025-03-17 17:42:42.863 [INFO][4853] 
cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="acf5278ca9b9fcba911d11f742e924442c6e5d64748bfc6391ca85bc664ff4f3" Namespace="calico-system" Pod="calico-kube-controllers-8b46cd865-7kxzr" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--8b46cd865--7kxzr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--8b46cd865--7kxzr-eth0", GenerateName:"calico-kube-controllers-8b46cd865-", Namespace:"calico-system", SelfLink:"", UID:"d2c44e13-0cac-42de-9897-344a553902e4", ResourceVersion:"711", Generation:0, CreationTimestamp:time.Date(2025, time.March, 17, 17, 42, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"8b46cd865", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"acf5278ca9b9fcba911d11f742e924442c6e5d64748bfc6391ca85bc664ff4f3", Pod:"calico-kube-controllers-8b46cd865-7kxzr", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calia175a3a13cb", MAC:"82:34:f2:8b:da:f9", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 17 17:42:42.875411 containerd[1471]: 2025-03-17 17:42:42.872 [INFO][4853] cni-plugin/k8s.go 500: Wrote updated 
endpoint to datastore ContainerID="acf5278ca9b9fcba911d11f742e924442c6e5d64748bfc6391ca85bc664ff4f3" Namespace="calico-system" Pod="calico-kube-controllers-8b46cd865-7kxzr" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--8b46cd865--7kxzr-eth0" Mar 17 17:42:42.881821 systemd-resolved[1330]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 17 17:42:42.897326 containerd[1471]: time="2025-03-17T17:42:42.896885901Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 17:42:42.897326 containerd[1471]: time="2025-03-17T17:42:42.896946610Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 17:42:42.897326 containerd[1471]: time="2025-03-17T17:42:42.896960187Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:42:42.897326 containerd[1471]: time="2025-03-17T17:42:42.897039321Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:42:42.911753 containerd[1471]: time="2025-03-17T17:42:42.911699396Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-b6b5f678f-lhw82,Uid:e61bbec9-65b1-4228-aa40-669eba7841ea,Namespace:calico-apiserver,Attempt:6,} returns sandbox id \"d2de2f675da123eddaf33558bc56585b5f588cf685cfb0810e95762a63e53e32\"" Mar 17 17:42:42.913849 containerd[1471]: time="2025-03-17T17:42:42.913805069Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.2\"" Mar 17 17:42:42.927399 systemd[1]: Started cri-containerd-acf5278ca9b9fcba911d11f742e924442c6e5d64748bfc6391ca85bc664ff4f3.scope - libcontainer container acf5278ca9b9fcba911d11f742e924442c6e5d64748bfc6391ca85bc664ff4f3. 
Mar 17 17:42:42.944011 systemd-resolved[1330]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 17 17:42:42.957384 systemd-networkd[1400]: calid203adfdb20: Link UP Mar 17 17:42:42.958449 systemd-networkd[1400]: calid203adfdb20: Gained carrier Mar 17 17:42:42.973900 containerd[1471]: 2025-03-17 17:42:42.556 [INFO][4863] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Mar 17 17:42:42.973900 containerd[1471]: 2025-03-17 17:42:42.576 [INFO][4863] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--6f6b679f8f--hg76m-eth0 coredns-6f6b679f8f- kube-system 7b8c3717-fa53-41b7-bf24-e6ae52b8b921 713 0 2025-03-17 17:42:07 +0000 UTC map[k8s-app:kube-dns pod-template-hash:6f6b679f8f projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-6f6b679f8f-hg76m eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calid203adfdb20 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="cd0851c5815127c220eca80ce628b0a2d08f45e1ead0147ac0c07b97dfea34e6" Namespace="kube-system" Pod="coredns-6f6b679f8f-hg76m" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--hg76m-" Mar 17 17:42:42.973900 containerd[1471]: 2025-03-17 17:42:42.576 [INFO][4863] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="cd0851c5815127c220eca80ce628b0a2d08f45e1ead0147ac0c07b97dfea34e6" Namespace="kube-system" Pod="coredns-6f6b679f8f-hg76m" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--hg76m-eth0" Mar 17 17:42:42.973900 containerd[1471]: 2025-03-17 17:42:42.703 [INFO][4919] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="cd0851c5815127c220eca80ce628b0a2d08f45e1ead0147ac0c07b97dfea34e6" HandleID="k8s-pod-network.cd0851c5815127c220eca80ce628b0a2d08f45e1ead0147ac0c07b97dfea34e6" 
Workload="localhost-k8s-coredns--6f6b679f8f--hg76m-eth0" Mar 17 17:42:42.973900 containerd[1471]: 2025-03-17 17:42:42.725 [INFO][4919] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="cd0851c5815127c220eca80ce628b0a2d08f45e1ead0147ac0c07b97dfea34e6" HandleID="k8s-pod-network.cd0851c5815127c220eca80ce628b0a2d08f45e1ead0147ac0c07b97dfea34e6" Workload="localhost-k8s-coredns--6f6b679f8f--hg76m-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002fb590), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-6f6b679f8f-hg76m", "timestamp":"2025-03-17 17:42:42.703045354 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Mar 17 17:42:42.973900 containerd[1471]: 2025-03-17 17:42:42.725 [INFO][4919] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Mar 17 17:42:42.973900 containerd[1471]: 2025-03-17 17:42:42.856 [INFO][4919] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Mar 17 17:42:42.973900 containerd[1471]: 2025-03-17 17:42:42.856 [INFO][4919] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 17 17:42:42.973900 containerd[1471]: 2025-03-17 17:42:42.923 [INFO][4919] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.cd0851c5815127c220eca80ce628b0a2d08f45e1ead0147ac0c07b97dfea34e6" host="localhost" Mar 17 17:42:42.973900 containerd[1471]: 2025-03-17 17:42:42.930 [INFO][4919] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Mar 17 17:42:42.973900 containerd[1471]: 2025-03-17 17:42:42.936 [INFO][4919] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Mar 17 17:42:42.973900 containerd[1471]: 2025-03-17 17:42:42.938 [INFO][4919] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 17 17:42:42.973900 containerd[1471]: 2025-03-17 17:42:42.940 [INFO][4919] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 17 17:42:42.973900 containerd[1471]: 2025-03-17 17:42:42.940 [INFO][4919] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.cd0851c5815127c220eca80ce628b0a2d08f45e1ead0147ac0c07b97dfea34e6" host="localhost" Mar 17 17:42:42.973900 containerd[1471]: 2025-03-17 17:42:42.941 [INFO][4919] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.cd0851c5815127c220eca80ce628b0a2d08f45e1ead0147ac0c07b97dfea34e6 Mar 17 17:42:42.973900 containerd[1471]: 2025-03-17 17:42:42.947 [INFO][4919] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.cd0851c5815127c220eca80ce628b0a2d08f45e1ead0147ac0c07b97dfea34e6" host="localhost" Mar 17 17:42:42.973900 containerd[1471]: 2025-03-17 17:42:42.951 [INFO][4919] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 
handle="k8s-pod-network.cd0851c5815127c220eca80ce628b0a2d08f45e1ead0147ac0c07b97dfea34e6" host="localhost" Mar 17 17:42:42.973900 containerd[1471]: 2025-03-17 17:42:42.951 [INFO][4919] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.cd0851c5815127c220eca80ce628b0a2d08f45e1ead0147ac0c07b97dfea34e6" host="localhost" Mar 17 17:42:42.973900 containerd[1471]: 2025-03-17 17:42:42.951 [INFO][4919] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Mar 17 17:42:42.973900 containerd[1471]: 2025-03-17 17:42:42.951 [INFO][4919] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="cd0851c5815127c220eca80ce628b0a2d08f45e1ead0147ac0c07b97dfea34e6" HandleID="k8s-pod-network.cd0851c5815127c220eca80ce628b0a2d08f45e1ead0147ac0c07b97dfea34e6" Workload="localhost-k8s-coredns--6f6b679f8f--hg76m-eth0" Mar 17 17:42:42.974698 containerd[1471]: 2025-03-17 17:42:42.954 [INFO][4863] cni-plugin/k8s.go 386: Populated endpoint ContainerID="cd0851c5815127c220eca80ce628b0a2d08f45e1ead0147ac0c07b97dfea34e6" Namespace="kube-system" Pod="coredns-6f6b679f8f-hg76m" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--hg76m-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--6f6b679f8f--hg76m-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"7b8c3717-fa53-41b7-bf24-e6ae52b8b921", ResourceVersion:"713", Generation:0, CreationTimestamp:time.Date(2025, time.March, 17, 17, 42, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-6f6b679f8f-hg76m", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calid203adfdb20", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 17 17:42:42.974698 containerd[1471]: 2025-03-17 17:42:42.955 [INFO][4863] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.131/32] ContainerID="cd0851c5815127c220eca80ce628b0a2d08f45e1ead0147ac0c07b97dfea34e6" Namespace="kube-system" Pod="coredns-6f6b679f8f-hg76m" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--hg76m-eth0" Mar 17 17:42:42.974698 containerd[1471]: 2025-03-17 17:42:42.955 [INFO][4863] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid203adfdb20 ContainerID="cd0851c5815127c220eca80ce628b0a2d08f45e1ead0147ac0c07b97dfea34e6" Namespace="kube-system" Pod="coredns-6f6b679f8f-hg76m" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--hg76m-eth0" Mar 17 17:42:42.974698 containerd[1471]: 2025-03-17 17:42:42.958 [INFO][4863] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="cd0851c5815127c220eca80ce628b0a2d08f45e1ead0147ac0c07b97dfea34e6" Namespace="kube-system" Pod="coredns-6f6b679f8f-hg76m" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--hg76m-eth0" Mar 17 
17:42:42.974698 containerd[1471]: 2025-03-17 17:42:42.959 [INFO][4863] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="cd0851c5815127c220eca80ce628b0a2d08f45e1ead0147ac0c07b97dfea34e6" Namespace="kube-system" Pod="coredns-6f6b679f8f-hg76m" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--hg76m-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--6f6b679f8f--hg76m-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"7b8c3717-fa53-41b7-bf24-e6ae52b8b921", ResourceVersion:"713", Generation:0, CreationTimestamp:time.Date(2025, time.March, 17, 17, 42, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"cd0851c5815127c220eca80ce628b0a2d08f45e1ead0147ac0c07b97dfea34e6", Pod:"coredns-6f6b679f8f-hg76m", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calid203adfdb20", MAC:"7e:cb:d1:8e:4d:a4", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, 
v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 17 17:42:42.974698 containerd[1471]: 2025-03-17 17:42:42.971 [INFO][4863] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="cd0851c5815127c220eca80ce628b0a2d08f45e1ead0147ac0c07b97dfea34e6" Namespace="kube-system" Pod="coredns-6f6b679f8f-hg76m" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--hg76m-eth0" Mar 17 17:42:42.977849 containerd[1471]: time="2025-03-17T17:42:42.977460215Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-8b46cd865-7kxzr,Uid:d2c44e13-0cac-42de-9897-344a553902e4,Namespace:calico-system,Attempt:6,} returns sandbox id \"acf5278ca9b9fcba911d11f742e924442c6e5d64748bfc6391ca85bc664ff4f3\"" Mar 17 17:42:42.999228 containerd[1471]: time="2025-03-17T17:42:42.997945044Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 17:42:42.999228 containerd[1471]: time="2025-03-17T17:42:42.998038858Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 17:42:42.999228 containerd[1471]: time="2025-03-17T17:42:42.998077523Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:42:42.999228 containerd[1471]: time="2025-03-17T17:42:42.998295640Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:42:43.022240 systemd[1]: Started cri-containerd-cd0851c5815127c220eca80ce628b0a2d08f45e1ead0147ac0c07b97dfea34e6.scope - libcontainer container cd0851c5815127c220eca80ce628b0a2d08f45e1ead0147ac0c07b97dfea34e6. 
Mar 17 17:42:43.041415 systemd-resolved[1330]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 17 17:42:43.069823 containerd[1471]: time="2025-03-17T17:42:43.069766455Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-hg76m,Uid:7b8c3717-fa53-41b7-bf24-e6ae52b8b921,Namespace:kube-system,Attempt:6,} returns sandbox id \"cd0851c5815127c220eca80ce628b0a2d08f45e1ead0147ac0c07b97dfea34e6\"" Mar 17 17:42:43.070646 kubelet[2574]: E0317 17:42:43.070623 2574 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:42:43.072457 containerd[1471]: time="2025-03-17T17:42:43.072428883Z" level=info msg="CreateContainer within sandbox \"cd0851c5815127c220eca80ce628b0a2d08f45e1ead0147ac0c07b97dfea34e6\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 17 17:42:43.091426 systemd-networkd[1400]: calibe58a1540b1: Link UP Mar 17 17:42:43.091659 systemd-networkd[1400]: calibe58a1540b1: Gained carrier Mar 17 17:42:43.104045 containerd[1471]: 2025-03-17 17:42:42.512 [INFO][4811] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Mar 17 17:42:43.104045 containerd[1471]: 2025-03-17 17:42:42.592 [INFO][4811] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--b6b5f678f--v8plm-eth0 calico-apiserver-b6b5f678f- calico-apiserver e2974f51-cab2-451b-8e3d-274fab1b872e 704 0 2025-03-17 17:42:15 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:b6b5f678f projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-b6b5f678f-v8plm eth0 calico-apiserver [] [] [kns.calico-apiserver 
ksa.calico-apiserver.calico-apiserver] calibe58a1540b1 [] []}} ContainerID="16e9429e6a8b9b4210dbc389d4be0fe571ee3b55d35d0e2ae3a99a6640bb26e2" Namespace="calico-apiserver" Pod="calico-apiserver-b6b5f678f-v8plm" WorkloadEndpoint="localhost-k8s-calico--apiserver--b6b5f678f--v8plm-" Mar 17 17:42:43.104045 containerd[1471]: 2025-03-17 17:42:42.592 [INFO][4811] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="16e9429e6a8b9b4210dbc389d4be0fe571ee3b55d35d0e2ae3a99a6640bb26e2" Namespace="calico-apiserver" Pod="calico-apiserver-b6b5f678f-v8plm" WorkloadEndpoint="localhost-k8s-calico--apiserver--b6b5f678f--v8plm-eth0" Mar 17 17:42:43.104045 containerd[1471]: 2025-03-17 17:42:42.704 [INFO][4926] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="16e9429e6a8b9b4210dbc389d4be0fe571ee3b55d35d0e2ae3a99a6640bb26e2" HandleID="k8s-pod-network.16e9429e6a8b9b4210dbc389d4be0fe571ee3b55d35d0e2ae3a99a6640bb26e2" Workload="localhost-k8s-calico--apiserver--b6b5f678f--v8plm-eth0" Mar 17 17:42:43.104045 containerd[1471]: 2025-03-17 17:42:42.727 [INFO][4926] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="16e9429e6a8b9b4210dbc389d4be0fe571ee3b55d35d0e2ae3a99a6640bb26e2" HandleID="k8s-pod-network.16e9429e6a8b9b4210dbc389d4be0fe571ee3b55d35d0e2ae3a99a6640bb26e2" Workload="localhost-k8s-calico--apiserver--b6b5f678f--v8plm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00043dc70), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-b6b5f678f-v8plm", "timestamp":"2025-03-17 17:42:42.704307576 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Mar 17 17:42:43.104045 containerd[1471]: 2025-03-17 17:42:42.727 [INFO][4926] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Mar 17 17:42:43.104045 containerd[1471]: 2025-03-17 17:42:42.951 [INFO][4926] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Mar 17 17:42:43.104045 containerd[1471]: 2025-03-17 17:42:42.951 [INFO][4926] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 17 17:42:43.104045 containerd[1471]: 2025-03-17 17:42:43.025 [INFO][4926] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.16e9429e6a8b9b4210dbc389d4be0fe571ee3b55d35d0e2ae3a99a6640bb26e2" host="localhost" Mar 17 17:42:43.104045 containerd[1471]: 2025-03-17 17:42:43.032 [INFO][4926] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Mar 17 17:42:43.104045 containerd[1471]: 2025-03-17 17:42:43.036 [INFO][4926] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Mar 17 17:42:43.104045 containerd[1471]: 2025-03-17 17:42:43.038 [INFO][4926] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 17 17:42:43.104045 containerd[1471]: 2025-03-17 17:42:43.041 [INFO][4926] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 17 17:42:43.104045 containerd[1471]: 2025-03-17 17:42:43.041 [INFO][4926] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.16e9429e6a8b9b4210dbc389d4be0fe571ee3b55d35d0e2ae3a99a6640bb26e2" host="localhost" Mar 17 17:42:43.104045 containerd[1471]: 2025-03-17 17:42:43.043 [INFO][4926] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.16e9429e6a8b9b4210dbc389d4be0fe571ee3b55d35d0e2ae3a99a6640bb26e2 Mar 17 17:42:43.104045 containerd[1471]: 2025-03-17 17:42:43.065 [INFO][4926] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.16e9429e6a8b9b4210dbc389d4be0fe571ee3b55d35d0e2ae3a99a6640bb26e2" host="localhost" Mar 17 17:42:43.104045 containerd[1471]: 2025-03-17 17:42:43.085 [INFO][4926] 
ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.16e9429e6a8b9b4210dbc389d4be0fe571ee3b55d35d0e2ae3a99a6640bb26e2" host="localhost" Mar 17 17:42:43.104045 containerd[1471]: 2025-03-17 17:42:43.085 [INFO][4926] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.16e9429e6a8b9b4210dbc389d4be0fe571ee3b55d35d0e2ae3a99a6640bb26e2" host="localhost" Mar 17 17:42:43.104045 containerd[1471]: 2025-03-17 17:42:43.085 [INFO][4926] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Mar 17 17:42:43.104045 containerd[1471]: 2025-03-17 17:42:43.085 [INFO][4926] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="16e9429e6a8b9b4210dbc389d4be0fe571ee3b55d35d0e2ae3a99a6640bb26e2" HandleID="k8s-pod-network.16e9429e6a8b9b4210dbc389d4be0fe571ee3b55d35d0e2ae3a99a6640bb26e2" Workload="localhost-k8s-calico--apiserver--b6b5f678f--v8plm-eth0" Mar 17 17:42:43.104633 containerd[1471]: 2025-03-17 17:42:43.088 [INFO][4811] cni-plugin/k8s.go 386: Populated endpoint ContainerID="16e9429e6a8b9b4210dbc389d4be0fe571ee3b55d35d0e2ae3a99a6640bb26e2" Namespace="calico-apiserver" Pod="calico-apiserver-b6b5f678f-v8plm" WorkloadEndpoint="localhost-k8s-calico--apiserver--b6b5f678f--v8plm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--b6b5f678f--v8plm-eth0", GenerateName:"calico-apiserver-b6b5f678f-", Namespace:"calico-apiserver", SelfLink:"", UID:"e2974f51-cab2-451b-8e3d-274fab1b872e", ResourceVersion:"704", Generation:0, CreationTimestamp:time.Date(2025, time.March, 17, 17, 42, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"b6b5f678f", 
"projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-b6b5f678f-v8plm", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calibe58a1540b1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 17 17:42:43.104633 containerd[1471]: 2025-03-17 17:42:43.088 [INFO][4811] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.132/32] ContainerID="16e9429e6a8b9b4210dbc389d4be0fe571ee3b55d35d0e2ae3a99a6640bb26e2" Namespace="calico-apiserver" Pod="calico-apiserver-b6b5f678f-v8plm" WorkloadEndpoint="localhost-k8s-calico--apiserver--b6b5f678f--v8plm-eth0" Mar 17 17:42:43.104633 containerd[1471]: 2025-03-17 17:42:43.088 [INFO][4811] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calibe58a1540b1 ContainerID="16e9429e6a8b9b4210dbc389d4be0fe571ee3b55d35d0e2ae3a99a6640bb26e2" Namespace="calico-apiserver" Pod="calico-apiserver-b6b5f678f-v8plm" WorkloadEndpoint="localhost-k8s-calico--apiserver--b6b5f678f--v8plm-eth0" Mar 17 17:42:43.104633 containerd[1471]: 2025-03-17 17:42:43.090 [INFO][4811] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="16e9429e6a8b9b4210dbc389d4be0fe571ee3b55d35d0e2ae3a99a6640bb26e2" Namespace="calico-apiserver" Pod="calico-apiserver-b6b5f678f-v8plm" WorkloadEndpoint="localhost-k8s-calico--apiserver--b6b5f678f--v8plm-eth0" Mar 17 17:42:43.104633 containerd[1471]: 2025-03-17 17:42:43.091 [INFO][4811] cni-plugin/k8s.go 414: 
Added Mac, interface name, and active container ID to endpoint ContainerID="16e9429e6a8b9b4210dbc389d4be0fe571ee3b55d35d0e2ae3a99a6640bb26e2" Namespace="calico-apiserver" Pod="calico-apiserver-b6b5f678f-v8plm" WorkloadEndpoint="localhost-k8s-calico--apiserver--b6b5f678f--v8plm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--b6b5f678f--v8plm-eth0", GenerateName:"calico-apiserver-b6b5f678f-", Namespace:"calico-apiserver", SelfLink:"", UID:"e2974f51-cab2-451b-8e3d-274fab1b872e", ResourceVersion:"704", Generation:0, CreationTimestamp:time.Date(2025, time.March, 17, 17, 42, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"b6b5f678f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"16e9429e6a8b9b4210dbc389d4be0fe571ee3b55d35d0e2ae3a99a6640bb26e2", Pod:"calico-apiserver-b6b5f678f-v8plm", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calibe58a1540b1", MAC:"52:89:63:e9:96:69", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 17 17:42:43.104633 containerd[1471]: 2025-03-17 17:42:43.101 [INFO][4811] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore 
ContainerID="16e9429e6a8b9b4210dbc389d4be0fe571ee3b55d35d0e2ae3a99a6640bb26e2" Namespace="calico-apiserver" Pod="calico-apiserver-b6b5f678f-v8plm" WorkloadEndpoint="localhost-k8s-calico--apiserver--b6b5f678f--v8plm-eth0" Mar 17 17:42:43.182189 containerd[1471]: time="2025-03-17T17:42:43.181861550Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 17:42:43.182189 containerd[1471]: time="2025-03-17T17:42:43.181939744Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 17:42:43.182189 containerd[1471]: time="2025-03-17T17:42:43.181953531Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:42:43.182189 containerd[1471]: time="2025-03-17T17:42:43.182040551Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:42:43.203227 systemd[1]: Started cri-containerd-16e9429e6a8b9b4210dbc389d4be0fe571ee3b55d35d0e2ae3a99a6640bb26e2.scope - libcontainer container 16e9429e6a8b9b4210dbc389d4be0fe571ee3b55d35d0e2ae3a99a6640bb26e2. 
Mar 17 17:42:43.216357 systemd-resolved[1330]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 17 17:42:43.242479 containerd[1471]: time="2025-03-17T17:42:43.242428383Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-b6b5f678f-v8plm,Uid:e2974f51-cab2-451b-8e3d-274fab1b872e,Namespace:calico-apiserver,Attempt:6,} returns sandbox id \"16e9429e6a8b9b4210dbc389d4be0fe571ee3b55d35d0e2ae3a99a6640bb26e2\"" Mar 17 17:42:43.266196 containerd[1471]: time="2025-03-17T17:42:43.266138146Z" level=info msg="CreateContainer within sandbox \"cd0851c5815127c220eca80ce628b0a2d08f45e1ead0147ac0c07b97dfea34e6\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"914618548149361846bcf23ded739f755b0df38c0be656d6e9bd6137c9fd26a1\"" Mar 17 17:42:43.267029 containerd[1471]: time="2025-03-17T17:42:43.266995964Z" level=info msg="StartContainer for \"914618548149361846bcf23ded739f755b0df38c0be656d6e9bd6137c9fd26a1\"" Mar 17 17:42:43.299318 systemd[1]: Started cri-containerd-914618548149361846bcf23ded739f755b0df38c0be656d6e9bd6137c9fd26a1.scope - libcontainer container 914618548149361846bcf23ded739f755b0df38c0be656d6e9bd6137c9fd26a1. 
Mar 17 17:42:43.308207 systemd-networkd[1400]: calia54062f4af9: Link UP Mar 17 17:42:43.309403 systemd-networkd[1400]: calia54062f4af9: Gained carrier Mar 17 17:42:43.324953 containerd[1471]: 2025-03-17 17:42:42.543 [INFO][4842] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Mar 17 17:42:43.324953 containerd[1471]: 2025-03-17 17:42:42.571 [INFO][4842] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--lsjhx-eth0 csi-node-driver- calico-system c4ec38d1-6c2f-488c-8f17-eb6d6aa4990d 607 0 2025-03-17 17:42:15 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:568c96974f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-lsjhx eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calia54062f4af9 [] []}} ContainerID="6fb662021b0e043ca70f6cde7b77ae5c9b334b2905c515b9bd17040f98e78467" Namespace="calico-system" Pod="csi-node-driver-lsjhx" WorkloadEndpoint="localhost-k8s-csi--node--driver--lsjhx-" Mar 17 17:42:43.324953 containerd[1471]: 2025-03-17 17:42:42.571 [INFO][4842] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="6fb662021b0e043ca70f6cde7b77ae5c9b334b2905c515b9bd17040f98e78467" Namespace="calico-system" Pod="csi-node-driver-lsjhx" WorkloadEndpoint="localhost-k8s-csi--node--driver--lsjhx-eth0" Mar 17 17:42:43.324953 containerd[1471]: 2025-03-17 17:42:42.718 [INFO][4911] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6fb662021b0e043ca70f6cde7b77ae5c9b334b2905c515b9bd17040f98e78467" HandleID="k8s-pod-network.6fb662021b0e043ca70f6cde7b77ae5c9b334b2905c515b9bd17040f98e78467" Workload="localhost-k8s-csi--node--driver--lsjhx-eth0" Mar 17 17:42:43.324953 containerd[1471]: 
2025-03-17 17:42:42.728 [INFO][4911] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="6fb662021b0e043ca70f6cde7b77ae5c9b334b2905c515b9bd17040f98e78467" HandleID="k8s-pod-network.6fb662021b0e043ca70f6cde7b77ae5c9b334b2905c515b9bd17040f98e78467" Workload="localhost-k8s-csi--node--driver--lsjhx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00036d890), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-lsjhx", "timestamp":"2025-03-17 17:42:42.718377594 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Mar 17 17:42:43.324953 containerd[1471]: 2025-03-17 17:42:42.728 [INFO][4911] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Mar 17 17:42:43.324953 containerd[1471]: 2025-03-17 17:42:43.085 [INFO][4911] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Mar 17 17:42:43.324953 containerd[1471]: 2025-03-17 17:42:43.085 [INFO][4911] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 17 17:42:43.324953 containerd[1471]: 2025-03-17 17:42:43.257 [INFO][4911] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.6fb662021b0e043ca70f6cde7b77ae5c9b334b2905c515b9bd17040f98e78467" host="localhost" Mar 17 17:42:43.324953 containerd[1471]: 2025-03-17 17:42:43.266 [INFO][4911] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Mar 17 17:42:43.324953 containerd[1471]: 2025-03-17 17:42:43.275 [INFO][4911] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Mar 17 17:42:43.324953 containerd[1471]: 2025-03-17 17:42:43.277 [INFO][4911] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 17 17:42:43.324953 containerd[1471]: 2025-03-17 17:42:43.279 [INFO][4911] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 17 17:42:43.324953 containerd[1471]: 2025-03-17 17:42:43.279 [INFO][4911] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.6fb662021b0e043ca70f6cde7b77ae5c9b334b2905c515b9bd17040f98e78467" host="localhost" Mar 17 17:42:43.324953 containerd[1471]: 2025-03-17 17:42:43.280 [INFO][4911] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.6fb662021b0e043ca70f6cde7b77ae5c9b334b2905c515b9bd17040f98e78467 Mar 17 17:42:43.324953 containerd[1471]: 2025-03-17 17:42:43.286 [INFO][4911] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.6fb662021b0e043ca70f6cde7b77ae5c9b334b2905c515b9bd17040f98e78467" host="localhost" Mar 17 17:42:43.324953 containerd[1471]: 2025-03-17 17:42:43.296 [INFO][4911] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 
handle="k8s-pod-network.6fb662021b0e043ca70f6cde7b77ae5c9b334b2905c515b9bd17040f98e78467" host="localhost" Mar 17 17:42:43.324953 containerd[1471]: 2025-03-17 17:42:43.296 [INFO][4911] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.6fb662021b0e043ca70f6cde7b77ae5c9b334b2905c515b9bd17040f98e78467" host="localhost" Mar 17 17:42:43.324953 containerd[1471]: 2025-03-17 17:42:43.296 [INFO][4911] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Mar 17 17:42:43.324953 containerd[1471]: 2025-03-17 17:42:43.296 [INFO][4911] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="6fb662021b0e043ca70f6cde7b77ae5c9b334b2905c515b9bd17040f98e78467" HandleID="k8s-pod-network.6fb662021b0e043ca70f6cde7b77ae5c9b334b2905c515b9bd17040f98e78467" Workload="localhost-k8s-csi--node--driver--lsjhx-eth0" Mar 17 17:42:43.326178 containerd[1471]: 2025-03-17 17:42:43.301 [INFO][4842] cni-plugin/k8s.go 386: Populated endpoint ContainerID="6fb662021b0e043ca70f6cde7b77ae5c9b334b2905c515b9bd17040f98e78467" Namespace="calico-system" Pod="csi-node-driver-lsjhx" WorkloadEndpoint="localhost-k8s-csi--node--driver--lsjhx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--lsjhx-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"c4ec38d1-6c2f-488c-8f17-eb6d6aa4990d", ResourceVersion:"607", Generation:0, CreationTimestamp:time.Date(2025, time.March, 17, 17, 42, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"568c96974f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-lsjhx", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calia54062f4af9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 17 17:42:43.326178 containerd[1471]: 2025-03-17 17:42:43.301 [INFO][4842] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.133/32] ContainerID="6fb662021b0e043ca70f6cde7b77ae5c9b334b2905c515b9bd17040f98e78467" Namespace="calico-system" Pod="csi-node-driver-lsjhx" WorkloadEndpoint="localhost-k8s-csi--node--driver--lsjhx-eth0" Mar 17 17:42:43.326178 containerd[1471]: 2025-03-17 17:42:43.301 [INFO][4842] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia54062f4af9 ContainerID="6fb662021b0e043ca70f6cde7b77ae5c9b334b2905c515b9bd17040f98e78467" Namespace="calico-system" Pod="csi-node-driver-lsjhx" WorkloadEndpoint="localhost-k8s-csi--node--driver--lsjhx-eth0" Mar 17 17:42:43.326178 containerd[1471]: 2025-03-17 17:42:43.309 [INFO][4842] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6fb662021b0e043ca70f6cde7b77ae5c9b334b2905c515b9bd17040f98e78467" Namespace="calico-system" Pod="csi-node-driver-lsjhx" WorkloadEndpoint="localhost-k8s-csi--node--driver--lsjhx-eth0" Mar 17 17:42:43.326178 containerd[1471]: 2025-03-17 17:42:43.310 [INFO][4842] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="6fb662021b0e043ca70f6cde7b77ae5c9b334b2905c515b9bd17040f98e78467" Namespace="calico-system" 
Pod="csi-node-driver-lsjhx" WorkloadEndpoint="localhost-k8s-csi--node--driver--lsjhx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--lsjhx-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"c4ec38d1-6c2f-488c-8f17-eb6d6aa4990d", ResourceVersion:"607", Generation:0, CreationTimestamp:time.Date(2025, time.March, 17, 17, 42, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"568c96974f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"6fb662021b0e043ca70f6cde7b77ae5c9b334b2905c515b9bd17040f98e78467", Pod:"csi-node-driver-lsjhx", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calia54062f4af9", MAC:"6e:80:c1:d8:38:69", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 17 17:42:43.326178 containerd[1471]: 2025-03-17 17:42:43.322 [INFO][4842] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="6fb662021b0e043ca70f6cde7b77ae5c9b334b2905c515b9bd17040f98e78467" Namespace="calico-system" Pod="csi-node-driver-lsjhx" WorkloadEndpoint="localhost-k8s-csi--node--driver--lsjhx-eth0" Mar 17 17:42:43.344755 containerd[1471]: 
time="2025-03-17T17:42:43.344709594Z" level=info msg="StartContainer for \"914618548149361846bcf23ded739f755b0df38c0be656d6e9bd6137c9fd26a1\" returns successfully" Mar 17 17:42:43.353580 containerd[1471]: time="2025-03-17T17:42:43.353358590Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 17:42:43.353580 containerd[1471]: time="2025-03-17T17:42:43.353429969Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 17:42:43.353580 containerd[1471]: time="2025-03-17T17:42:43.353442925Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:42:43.353580 containerd[1471]: time="2025-03-17T17:42:43.353534264Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:42:43.377306 systemd[1]: Started cri-containerd-6fb662021b0e043ca70f6cde7b77ae5c9b334b2905c515b9bd17040f98e78467.scope - libcontainer container 6fb662021b0e043ca70f6cde7b77ae5c9b334b2905c515b9bd17040f98e78467. 
Mar 17 17:42:43.393547 systemd-resolved[1330]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 17 17:42:43.410758 systemd-networkd[1400]: cali696200b89cd: Link UP Mar 17 17:42:43.413545 systemd-networkd[1400]: cali696200b89cd: Gained carrier Mar 17 17:42:43.421460 containerd[1471]: time="2025-03-17T17:42:43.421418096Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-lsjhx,Uid:c4ec38d1-6c2f-488c-8f17-eb6d6aa4990d,Namespace:calico-system,Attempt:6,} returns sandbox id \"6fb662021b0e043ca70f6cde7b77ae5c9b334b2905c515b9bd17040f98e78467\"" Mar 17 17:42:43.430188 containerd[1471]: 2025-03-17 17:42:42.542 [INFO][4818] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Mar 17 17:42:43.430188 containerd[1471]: 2025-03-17 17:42:42.574 [INFO][4818] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--6f6b679f8f--b7tm9-eth0 coredns-6f6b679f8f- kube-system 5d480ab2-f501-4510-9ec1-d051e760e88d 701 0 2025-03-17 17:42:07 +0000 UTC map[k8s-app:kube-dns pod-template-hash:6f6b679f8f projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-6f6b679f8f-b7tm9 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali696200b89cd [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="20a30ebd27e149f69a58facef2dde82787709a5166732b646f66ac07f1860bce" Namespace="kube-system" Pod="coredns-6f6b679f8f-b7tm9" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--b7tm9-" Mar 17 17:42:43.430188 containerd[1471]: 2025-03-17 17:42:42.574 [INFO][4818] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="20a30ebd27e149f69a58facef2dde82787709a5166732b646f66ac07f1860bce" Namespace="kube-system" Pod="coredns-6f6b679f8f-b7tm9" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--b7tm9-eth0" Mar 17 
17:42:43.430188 containerd[1471]: 2025-03-17 17:42:42.700 [INFO][4920] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="20a30ebd27e149f69a58facef2dde82787709a5166732b646f66ac07f1860bce" HandleID="k8s-pod-network.20a30ebd27e149f69a58facef2dde82787709a5166732b646f66ac07f1860bce" Workload="localhost-k8s-coredns--6f6b679f8f--b7tm9-eth0" Mar 17 17:42:43.430188 containerd[1471]: 2025-03-17 17:42:42.729 [INFO][4920] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="20a30ebd27e149f69a58facef2dde82787709a5166732b646f66ac07f1860bce" HandleID="k8s-pod-network.20a30ebd27e149f69a58facef2dde82787709a5166732b646f66ac07f1860bce" Workload="localhost-k8s-coredns--6f6b679f8f--b7tm9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003f48a0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-6f6b679f8f-b7tm9", "timestamp":"2025-03-17 17:42:42.700246704 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Mar 17 17:42:43.430188 containerd[1471]: 2025-03-17 17:42:42.729 [INFO][4920] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Mar 17 17:42:43.430188 containerd[1471]: 2025-03-17 17:42:43.297 [INFO][4920] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Mar 17 17:42:43.430188 containerd[1471]: 2025-03-17 17:42:43.298 [INFO][4920] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 17 17:42:43.430188 containerd[1471]: 2025-03-17 17:42:43.325 [INFO][4920] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.20a30ebd27e149f69a58facef2dde82787709a5166732b646f66ac07f1860bce" host="localhost" Mar 17 17:42:43.430188 containerd[1471]: 2025-03-17 17:42:43.367 [INFO][4920] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Mar 17 17:42:43.430188 containerd[1471]: 2025-03-17 17:42:43.374 [INFO][4920] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Mar 17 17:42:43.430188 containerd[1471]: 2025-03-17 17:42:43.376 [INFO][4920] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 17 17:42:43.430188 containerd[1471]: 2025-03-17 17:42:43.378 [INFO][4920] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 17 17:42:43.430188 containerd[1471]: 2025-03-17 17:42:43.378 [INFO][4920] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.20a30ebd27e149f69a58facef2dde82787709a5166732b646f66ac07f1860bce" host="localhost" Mar 17 17:42:43.430188 containerd[1471]: 2025-03-17 17:42:43.380 [INFO][4920] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.20a30ebd27e149f69a58facef2dde82787709a5166732b646f66ac07f1860bce Mar 17 17:42:43.430188 containerd[1471]: 2025-03-17 17:42:43.384 [INFO][4920] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.20a30ebd27e149f69a58facef2dde82787709a5166732b646f66ac07f1860bce" host="localhost" Mar 17 17:42:43.430188 containerd[1471]: 2025-03-17 17:42:43.396 [INFO][4920] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 
handle="k8s-pod-network.20a30ebd27e149f69a58facef2dde82787709a5166732b646f66ac07f1860bce" host="localhost" Mar 17 17:42:43.430188 containerd[1471]: 2025-03-17 17:42:43.396 [INFO][4920] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.20a30ebd27e149f69a58facef2dde82787709a5166732b646f66ac07f1860bce" host="localhost" Mar 17 17:42:43.430188 containerd[1471]: 2025-03-17 17:42:43.396 [INFO][4920] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Mar 17 17:42:43.430188 containerd[1471]: 2025-03-17 17:42:43.396 [INFO][4920] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="20a30ebd27e149f69a58facef2dde82787709a5166732b646f66ac07f1860bce" HandleID="k8s-pod-network.20a30ebd27e149f69a58facef2dde82787709a5166732b646f66ac07f1860bce" Workload="localhost-k8s-coredns--6f6b679f8f--b7tm9-eth0" Mar 17 17:42:43.430993 containerd[1471]: 2025-03-17 17:42:43.401 [INFO][4818] cni-plugin/k8s.go 386: Populated endpoint ContainerID="20a30ebd27e149f69a58facef2dde82787709a5166732b646f66ac07f1860bce" Namespace="kube-system" Pod="coredns-6f6b679f8f-b7tm9" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--b7tm9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--6f6b679f8f--b7tm9-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"5d480ab2-f501-4510-9ec1-d051e760e88d", ResourceVersion:"701", Generation:0, CreationTimestamp:time.Date(2025, time.March, 17, 17, 42, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-6f6b679f8f-b7tm9", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali696200b89cd", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 17 17:42:43.430993 containerd[1471]: 2025-03-17 17:42:43.401 [INFO][4818] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.134/32] ContainerID="20a30ebd27e149f69a58facef2dde82787709a5166732b646f66ac07f1860bce" Namespace="kube-system" Pod="coredns-6f6b679f8f-b7tm9" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--b7tm9-eth0" Mar 17 17:42:43.430993 containerd[1471]: 2025-03-17 17:42:43.401 [INFO][4818] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali696200b89cd ContainerID="20a30ebd27e149f69a58facef2dde82787709a5166732b646f66ac07f1860bce" Namespace="kube-system" Pod="coredns-6f6b679f8f-b7tm9" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--b7tm9-eth0" Mar 17 17:42:43.430993 containerd[1471]: 2025-03-17 17:42:43.412 [INFO][4818] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="20a30ebd27e149f69a58facef2dde82787709a5166732b646f66ac07f1860bce" Namespace="kube-system" Pod="coredns-6f6b679f8f-b7tm9" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--b7tm9-eth0" Mar 17 
17:42:43.430993 containerd[1471]: 2025-03-17 17:42:43.412 [INFO][4818] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="20a30ebd27e149f69a58facef2dde82787709a5166732b646f66ac07f1860bce" Namespace="kube-system" Pod="coredns-6f6b679f8f-b7tm9" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--b7tm9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--6f6b679f8f--b7tm9-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"5d480ab2-f501-4510-9ec1-d051e760e88d", ResourceVersion:"701", Generation:0, CreationTimestamp:time.Date(2025, time.March, 17, 17, 42, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"20a30ebd27e149f69a58facef2dde82787709a5166732b646f66ac07f1860bce", Pod:"coredns-6f6b679f8f-b7tm9", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali696200b89cd", MAC:"4e:e6:1b:b8:0e:c2", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, 
v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 17 17:42:43.430993 containerd[1471]: 2025-03-17 17:42:43.426 [INFO][4818] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="20a30ebd27e149f69a58facef2dde82787709a5166732b646f66ac07f1860bce" Namespace="kube-system" Pod="coredns-6f6b679f8f-b7tm9" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--b7tm9-eth0" Mar 17 17:42:43.461893 containerd[1471]: time="2025-03-17T17:42:43.461782454Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 17:42:43.461893 containerd[1471]: time="2025-03-17T17:42:43.461843875Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 17:42:43.461893 containerd[1471]: time="2025-03-17T17:42:43.461857232Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:42:43.462144 containerd[1471]: time="2025-03-17T17:42:43.461933190Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:42:43.482341 systemd[1]: Started cri-containerd-20a30ebd27e149f69a58facef2dde82787709a5166732b646f66ac07f1860bce.scope - libcontainer container 20a30ebd27e149f69a58facef2dde82787709a5166732b646f66ac07f1860bce. 
Mar 17 17:42:43.497364 systemd-resolved[1330]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 17 17:42:43.523028 containerd[1471]: time="2025-03-17T17:42:43.522979560Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-b7tm9,Uid:5d480ab2-f501-4510-9ec1-d051e760e88d,Namespace:kube-system,Attempt:6,} returns sandbox id \"20a30ebd27e149f69a58facef2dde82787709a5166732b646f66ac07f1860bce\"" Mar 17 17:42:43.523726 kubelet[2574]: E0317 17:42:43.523703 2574 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:42:43.525652 containerd[1471]: time="2025-03-17T17:42:43.525624803Z" level=info msg="CreateContainer within sandbox \"20a30ebd27e149f69a58facef2dde82787709a5166732b646f66ac07f1860bce\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 17 17:42:43.556006 containerd[1471]: time="2025-03-17T17:42:43.555948203Z" level=info msg="CreateContainer within sandbox \"20a30ebd27e149f69a58facef2dde82787709a5166732b646f66ac07f1860bce\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"0f2778c0bfc87cc24859285c4033426fee08e0d6a3ccdf7b82a027816de42519\"" Mar 17 17:42:43.559778 containerd[1471]: time="2025-03-17T17:42:43.556886448Z" level=info msg="StartContainer for \"0f2778c0bfc87cc24859285c4033426fee08e0d6a3ccdf7b82a027816de42519\"" Mar 17 17:42:43.603350 systemd[1]: Started cri-containerd-0f2778c0bfc87cc24859285c4033426fee08e0d6a3ccdf7b82a027816de42519.scope - libcontainer container 0f2778c0bfc87cc24859285c4033426fee08e0d6a3ccdf7b82a027816de42519. 
Mar 17 17:42:43.653590 kubelet[2574]: E0317 17:42:43.653537 2574 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:42:43.941203 containerd[1471]: time="2025-03-17T17:42:43.940100572Z" level=info msg="StartContainer for \"0f2778c0bfc87cc24859285c4033426fee08e0d6a3ccdf7b82a027816de42519\" returns successfully" Mar 17 17:42:44.016996 kubelet[2574]: I0317 17:42:44.016940 2574 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-hg76m" podStartSLOduration=37.016922436 podStartE2EDuration="37.016922436s" podCreationTimestamp="2025-03-17 17:42:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 17:42:43.862447862 +0000 UTC m=+41.022869370" watchObservedRunningTime="2025-03-17 17:42:44.016922436 +0000 UTC m=+41.177343914" Mar 17 17:42:44.038226 systemd-networkd[1400]: calibe755546ec4: Gained IPv6LL Mar 17 17:42:44.098095 kernel: bpftool[5511]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Mar 17 17:42:44.108300 systemd[1]: Started sshd@10-10.0.0.46:22-10.0.0.1:40194.service - OpenSSH per-connection server daemon (10.0.0.1:40194). Mar 17 17:42:44.205939 sshd[5513]: Accepted publickey for core from 10.0.0.1 port 40194 ssh2: RSA SHA256:pvoNHoTmHcKIZ8E4rah4Xh4kuY0L81ONuagOsU2gN/o Mar 17 17:42:44.207684 sshd-session[5513]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:42:44.211873 systemd-logind[1456]: New session 10 of user core. Mar 17 17:42:44.219195 systemd[1]: Started session-10.scope - Session 10 of User core. 
Mar 17 17:42:44.358242 systemd-networkd[1400]: calia175a3a13cb: Gained IPv6LL Mar 17 17:42:44.375369 systemd-networkd[1400]: vxlan.calico: Link UP Mar 17 17:42:44.375378 systemd-networkd[1400]: vxlan.calico: Gained carrier Mar 17 17:42:44.393245 sshd[5516]: Connection closed by 10.0.0.1 port 40194 Mar 17 17:42:44.394833 sshd-session[5513]: pam_unix(sshd:session): session closed for user core Mar 17 17:42:44.398591 systemd[1]: sshd@10-10.0.0.46:22-10.0.0.1:40194.service: Deactivated successfully. Mar 17 17:42:44.401452 systemd[1]: session-10.scope: Deactivated successfully. Mar 17 17:42:44.402226 systemd-logind[1456]: Session 10 logged out. Waiting for processes to exit. Mar 17 17:42:44.403647 systemd-logind[1456]: Removed session 10. Mar 17 17:42:44.742226 systemd-networkd[1400]: calid203adfdb20: Gained IPv6LL Mar 17 17:42:44.806233 systemd-networkd[1400]: cali696200b89cd: Gained IPv6LL Mar 17 17:42:44.973904 kubelet[2574]: E0317 17:42:44.973624 2574 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:42:44.977133 kubelet[2574]: E0317 17:42:44.976931 2574 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:42:44.988548 kubelet[2574]: I0317 17:42:44.987952 2574 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-b7tm9" podStartSLOduration=37.987936949 podStartE2EDuration="37.987936949s" podCreationTimestamp="2025-03-17 17:42:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 17:42:44.987157737 +0000 UTC m=+42.147579245" watchObservedRunningTime="2025-03-17 17:42:44.987936949 +0000 UTC m=+42.148358437" Mar 17 17:42:45.062220 systemd-networkd[1400]: 
calibe58a1540b1: Gained IPv6LL Mar 17 17:42:45.190256 systemd-networkd[1400]: calia54062f4af9: Gained IPv6LL Mar 17 17:42:45.510281 systemd-networkd[1400]: vxlan.calico: Gained IPv6LL Mar 17 17:42:45.976807 kubelet[2574]: E0317 17:42:45.975788 2574 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:42:45.976807 kubelet[2574]: E0317 17:42:45.975847 2574 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:42:46.977334 kubelet[2574]: E0317 17:42:46.977291 2574 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:42:47.774360 containerd[1471]: time="2025-03-17T17:42:47.774291283Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:42:47.775219 containerd[1471]: time="2025-03-17T17:42:47.775147591Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.2: active requests=0, bytes read=42993204" Mar 17 17:42:47.776564 containerd[1471]: time="2025-03-17T17:42:47.776523240Z" level=info msg="ImageCreate event name:\"sha256:d27fc480d1ad33921c40abef2ab6828fadf6524674fdcc622f571a5abc34ad55\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:42:47.778959 containerd[1471]: time="2025-03-17T17:42:47.778909539Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:3623f5b60fad0da3387a8649371b53171a4b1226f4d989d2acad9145dc0ef56f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:42:47.779763 containerd[1471]: time="2025-03-17T17:42:47.779722654Z" level=info msg="Pulled image 
\"ghcr.io/flatcar/calico/apiserver:v3.29.2\" with image id \"sha256:d27fc480d1ad33921c40abef2ab6828fadf6524674fdcc622f571a5abc34ad55\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:3623f5b60fad0da3387a8649371b53171a4b1226f4d989d2acad9145dc0ef56f\", size \"44486324\" in 4.865886904s" Mar 17 17:42:47.779763 containerd[1471]: time="2025-03-17T17:42:47.779757692Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.2\" returns image reference \"sha256:d27fc480d1ad33921c40abef2ab6828fadf6524674fdcc622f571a5abc34ad55\"" Mar 17 17:42:47.781155 containerd[1471]: time="2025-03-17T17:42:47.781112061Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.2\"" Mar 17 17:42:47.782325 containerd[1471]: time="2025-03-17T17:42:47.782283302Z" level=info msg="CreateContainer within sandbox \"d2de2f675da123eddaf33558bc56585b5f588cf685cfb0810e95762a63e53e32\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Mar 17 17:42:47.806809 containerd[1471]: time="2025-03-17T17:42:47.806766308Z" level=info msg="CreateContainer within sandbox \"d2de2f675da123eddaf33558bc56585b5f588cf685cfb0810e95762a63e53e32\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"9fad16e4e4e4b726ce0adfdca7aced0a862bda82b97c32fdd98a95a71bc546d8\"" Mar 17 17:42:47.807576 containerd[1471]: time="2025-03-17T17:42:47.807536499Z" level=info msg="StartContainer for \"9fad16e4e4e4b726ce0adfdca7aced0a862bda82b97c32fdd98a95a71bc546d8\"" Mar 17 17:42:47.843302 systemd[1]: Started cri-containerd-9fad16e4e4e4b726ce0adfdca7aced0a862bda82b97c32fdd98a95a71bc546d8.scope - libcontainer container 9fad16e4e4e4b726ce0adfdca7aced0a862bda82b97c32fdd98a95a71bc546d8. 
Mar 17 17:42:47.895390 containerd[1471]: time="2025-03-17T17:42:47.895314351Z" level=info msg="StartContainer for \"9fad16e4e4e4b726ce0adfdca7aced0a862bda82b97c32fdd98a95a71bc546d8\" returns successfully" Mar 17 17:42:47.992720 kubelet[2574]: I0317 17:42:47.992458 2574 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-b6b5f678f-lhw82" podStartSLOduration=28.124820982 podStartE2EDuration="32.992440046s" podCreationTimestamp="2025-03-17 17:42:15 +0000 UTC" firstStartedPulling="2025-03-17 17:42:42.913298347 +0000 UTC m=+40.073719835" lastFinishedPulling="2025-03-17 17:42:47.780917401 +0000 UTC m=+44.941338899" observedRunningTime="2025-03-17 17:42:47.991601792 +0000 UTC m=+45.152023280" watchObservedRunningTime="2025-03-17 17:42:47.992440046 +0000 UTC m=+45.152861534" Mar 17 17:42:48.983032 kubelet[2574]: I0317 17:42:48.982973 2574 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 17 17:42:49.412332 systemd[1]: Started sshd@11-10.0.0.46:22-10.0.0.1:41436.service - OpenSSH per-connection server daemon (10.0.0.1:41436). Mar 17 17:42:49.519009 sshd[5668]: Accepted publickey for core from 10.0.0.1 port 41436 ssh2: RSA SHA256:pvoNHoTmHcKIZ8E4rah4Xh4kuY0L81ONuagOsU2gN/o Mar 17 17:42:49.520857 sshd-session[5668]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:42:49.525130 systemd-logind[1456]: New session 11 of user core. Mar 17 17:42:49.534222 systemd[1]: Started session-11.scope - Session 11 of User core. Mar 17 17:42:49.836042 sshd[5670]: Connection closed by 10.0.0.1 port 41436 Mar 17 17:42:49.837885 sshd-session[5668]: pam_unix(sshd:session): session closed for user core Mar 17 17:42:49.842672 systemd[1]: sshd@11-10.0.0.46:22-10.0.0.1:41436.service: Deactivated successfully. Mar 17 17:42:49.844702 systemd[1]: session-11.scope: Deactivated successfully. Mar 17 17:42:49.846971 systemd-logind[1456]: Session 11 logged out. Waiting for processes to exit.
Mar 17 17:42:49.848848 systemd-logind[1456]: Removed session 11. Mar 17 17:42:50.484190 containerd[1471]: time="2025-03-17T17:42:50.484131070Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:42:50.485107 containerd[1471]: time="2025-03-17T17:42:50.485048904Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.2: active requests=0, bytes read=34792912" Mar 17 17:42:50.486274 containerd[1471]: time="2025-03-17T17:42:50.486243905Z" level=info msg="ImageCreate event name:\"sha256:f6a228558381bc7de7c5296ac6c4e903cfda929899c85806367a726ef6d7ff5f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:42:50.488723 containerd[1471]: time="2025-03-17T17:42:50.488691401Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:6d1f392b747f912366ec5c60ee1130952c2c07e8ce24c53480187daa0e3364aa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:42:50.489496 containerd[1471]: time="2025-03-17T17:42:50.489464903Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.2\" with image id \"sha256:f6a228558381bc7de7c5296ac6c4e903cfda929899c85806367a726ef6d7ff5f\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.2\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:6d1f392b747f912366ec5c60ee1130952c2c07e8ce24c53480187daa0e3364aa\", size \"36285984\" in 2.708308327s" Mar 17 17:42:50.489543 containerd[1471]: time="2025-03-17T17:42:50.489496725Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.2\" returns image reference \"sha256:f6a228558381bc7de7c5296ac6c4e903cfda929899c85806367a726ef6d7ff5f\"" Mar 17 17:42:50.490988 containerd[1471]: time="2025-03-17T17:42:50.490445809Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.2\""
Mar 17 17:42:50.498554 containerd[1471]: time="2025-03-17T17:42:50.498518434Z" level=info msg="CreateContainer within sandbox \"acf5278ca9b9fcba911d11f742e924442c6e5d64748bfc6391ca85bc664ff4f3\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Mar 17 17:42:50.519639 containerd[1471]: time="2025-03-17T17:42:50.519593331Z" level=info msg="CreateContainer within sandbox \"acf5278ca9b9fcba911d11f742e924442c6e5d64748bfc6391ca85bc664ff4f3\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"ff4382f83d73a48f55547a59547bb44baf96dbdd7fbf633b78ca77021bd96383\"" Mar 17 17:42:50.520134 containerd[1471]: time="2025-03-17T17:42:50.520106679Z" level=info msg="StartContainer for \"ff4382f83d73a48f55547a59547bb44baf96dbdd7fbf633b78ca77021bd96383\"" Mar 17 17:42:50.552245 systemd[1]: Started cri-containerd-ff4382f83d73a48f55547a59547bb44baf96dbdd7fbf633b78ca77021bd96383.scope - libcontainer container ff4382f83d73a48f55547a59547bb44baf96dbdd7fbf633b78ca77021bd96383. Mar 17 17:42:50.599491 containerd[1471]: time="2025-03-17T17:42:50.599434156Z" level=info msg="StartContainer for \"ff4382f83d73a48f55547a59547bb44baf96dbdd7fbf633b78ca77021bd96383\" returns successfully" Mar 17 17:42:51.038568 kubelet[2574]: I0317 17:42:51.038491 2574 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-8b46cd865-7kxzr" podStartSLOduration=28.527238927 podStartE2EDuration="36.038473275s" podCreationTimestamp="2025-03-17 17:42:15 +0000 UTC" firstStartedPulling="2025-03-17 17:42:42.979040559 +0000 UTC m=+40.139462047" lastFinishedPulling="2025-03-17 17:42:50.490274907 +0000 UTC m=+47.650696395" observedRunningTime="2025-03-17 17:42:51.037974948 +0000 UTC m=+48.198396436" watchObservedRunningTime="2025-03-17 17:42:51.038473275 +0000 UTC m=+48.198894763" Mar 17 17:42:51.325894 containerd[1471]: time="2025-03-17T17:42:51.325747741Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 17 17:42:51.327746 containerd[1471]: time="2025-03-17T17:42:51.327674631Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.2: active requests=0, bytes read=77" Mar 17 17:42:51.337962 containerd[1471]: time="2025-03-17T17:42:51.337927355Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.2\" with image id \"sha256:d27fc480d1ad33921c40abef2ab6828fadf6524674fdcc622f571a5abc34ad55\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:3623f5b60fad0da3387a8649371b53171a4b1226f4d989d2acad9145dc0ef56f\", size \"44486324\" in 847.451517ms" Mar 17 17:42:51.337962 containerd[1471]: time="2025-03-17T17:42:51.337956210Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.2\" returns image reference \"sha256:d27fc480d1ad33921c40abef2ab6828fadf6524674fdcc622f571a5abc34ad55\"" Mar 17 17:42:51.338915 containerd[1471]: time="2025-03-17T17:42:51.338753628Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.2\"" Mar 17 17:42:51.339994 containerd[1471]: time="2025-03-17T17:42:51.339960322Z" level=info msg="CreateContainer within sandbox \"16e9429e6a8b9b4210dbc389d4be0fe571ee3b55d35d0e2ae3a99a6640bb26e2\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Mar 17 17:42:51.411302 containerd[1471]: time="2025-03-17T17:42:51.411237351Z" level=info msg="CreateContainer within sandbox \"16e9429e6a8b9b4210dbc389d4be0fe571ee3b55d35d0e2ae3a99a6640bb26e2\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"d43bb7d8d56545e2350d7372879b71a9eae9f832e2c9080a5a62f5b492bbdc60\"" Mar 17 17:42:51.411870 containerd[1471]: time="2025-03-17T17:42:51.411815264Z" level=info msg="StartContainer for \"d43bb7d8d56545e2350d7372879b71a9eae9f832e2c9080a5a62f5b492bbdc60\"" Mar 17 17:42:51.441197 systemd[1]: Started cri-containerd-d43bb7d8d56545e2350d7372879b71a9eae9f832e2c9080a5a62f5b492bbdc60.scope - libcontainer container d43bb7d8d56545e2350d7372879b71a9eae9f832e2c9080a5a62f5b492bbdc60.
Mar 17 17:42:51.522981 containerd[1471]: time="2025-03-17T17:42:51.522920302Z" level=info msg="StartContainer for \"d43bb7d8d56545e2350d7372879b71a9eae9f832e2c9080a5a62f5b492bbdc60\" returns successfully" Mar 17 17:42:51.996723 kubelet[2574]: I0317 17:42:51.996666 2574 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 17 17:42:52.010657 kubelet[2574]: I0317 17:42:52.010462 2574 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-b6b5f678f-v8plm" podStartSLOduration=28.918529475 podStartE2EDuration="37.010442523s" podCreationTimestamp="2025-03-17 17:42:15 +0000 UTC" firstStartedPulling="2025-03-17 17:42:43.246728423 +0000 UTC m=+40.407149911" lastFinishedPulling="2025-03-17 17:42:51.338641471 +0000 UTC m=+48.499062959" observedRunningTime="2025-03-17 17:42:52.010338392 +0000 UTC m=+49.170759880" watchObservedRunningTime="2025-03-17 17:42:52.010442523 +0000 UTC m=+49.170864011" Mar 17 17:42:53.266041 kubelet[2574]: E0317 17:42:53.266005 2574 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:42:53.503971 containerd[1471]: time="2025-03-17T17:42:53.503896832Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:42:53.505634 containerd[1471]: time="2025-03-17T17:42:53.505566759Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.2: active requests=0, bytes read=7909887" Mar 17 17:42:53.507093 containerd[1471]: time="2025-03-17T17:42:53.507041828Z" level=info msg="ImageCreate event name:\"sha256:0fae09f861e350c042fe0db9ce9f8cc5ac4df975a5c4e4a9ddc3c6fac1552a9a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 17 17:42:53.509795 containerd[1471]: time="2025-03-17T17:42:53.509764175Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:214b4eef7008808bda55ad3cc1d4a3cd8df9e0e8094dff213fa3241104eb892c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:42:53.510364 containerd[1471]: time="2025-03-17T17:42:53.510320453Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.2\" with image id \"sha256:0fae09f861e350c042fe0db9ce9f8cc5ac4df975a5c4e4a9ddc3c6fac1552a9a\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.2\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:214b4eef7008808bda55ad3cc1d4a3cd8df9e0e8094dff213fa3241104eb892c\", size \"9402991\" in 2.171539891s" Mar 17 17:42:53.510364 containerd[1471]: time="2025-03-17T17:42:53.510363216Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.2\" returns image reference \"sha256:0fae09f861e350c042fe0db9ce9f8cc5ac4df975a5c4e4a9ddc3c6fac1552a9a\"" Mar 17 17:42:53.512198 containerd[1471]: time="2025-03-17T17:42:53.512173735Z" level=info msg="CreateContainer within sandbox \"6fb662021b0e043ca70f6cde7b77ae5c9b334b2905c515b9bd17040f98e78467\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Mar 17 17:42:53.538295 containerd[1471]: time="2025-03-17T17:42:53.538172575Z" level=info msg="CreateContainer within sandbox \"6fb662021b0e043ca70f6cde7b77ae5c9b334b2905c515b9bd17040f98e78467\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"3954fc7facceda3ece97355adb894060a86810e8558bda52783c493774517767\"" Mar 17 17:42:53.539270 containerd[1471]: time="2025-03-17T17:42:53.539244122Z" level=info msg="StartContainer for \"3954fc7facceda3ece97355adb894060a86810e8558bda52783c493774517767\"" Mar 17 17:42:53.574219 systemd[1]: Started cri-containerd-3954fc7facceda3ece97355adb894060a86810e8558bda52783c493774517767.scope - libcontainer container 3954fc7facceda3ece97355adb894060a86810e8558bda52783c493774517767.
Mar 17 17:42:53.781261 containerd[1471]: time="2025-03-17T17:42:53.781211452Z" level=info msg="StartContainer for \"3954fc7facceda3ece97355adb894060a86810e8558bda52783c493774517767\" returns successfully" Mar 17 17:42:53.783135 containerd[1471]: time="2025-03-17T17:42:53.782338016Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.2\"" Mar 17 17:42:54.849343 systemd[1]: Started sshd@12-10.0.0.46:22-10.0.0.1:41450.service - OpenSSH per-connection server daemon (10.0.0.1:41450). Mar 17 17:42:54.929210 sshd[5887]: Accepted publickey for core from 10.0.0.1 port 41450 ssh2: RSA SHA256:pvoNHoTmHcKIZ8E4rah4Xh4kuY0L81ONuagOsU2gN/o Mar 17 17:42:54.934894 sshd-session[5887]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:42:54.940528 systemd-logind[1456]: New session 12 of user core. Mar 17 17:42:54.950275 systemd[1]: Started session-12.scope - Session 12 of User core. Mar 17 17:42:55.132228 sshd[5889]: Connection closed by 10.0.0.1 port 41450 Mar 17 17:42:55.132584 sshd-session[5887]: pam_unix(sshd:session): session closed for user core Mar 17 17:42:55.142134 systemd[1]: sshd@12-10.0.0.46:22-10.0.0.1:41450.service: Deactivated successfully. Mar 17 17:42:55.144125 systemd[1]: session-12.scope: Deactivated successfully. Mar 17 17:42:55.146049 systemd-logind[1456]: Session 12 logged out. Waiting for processes to exit. Mar 17 17:42:55.155346 systemd[1]: Started sshd@13-10.0.0.46:22-10.0.0.1:41458.service - OpenSSH per-connection server daemon (10.0.0.1:41458). Mar 17 17:42:55.156278 systemd-logind[1456]: Removed session 12.
Mar 17 17:42:55.194933 sshd[5902]: Accepted publickey for core from 10.0.0.1 port 41458 ssh2: RSA SHA256:pvoNHoTmHcKIZ8E4rah4Xh4kuY0L81ONuagOsU2gN/o Mar 17 17:42:55.196780 sshd-session[5902]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:42:55.201422 systemd-logind[1456]: New session 13 of user core. Mar 17 17:42:55.211186 systemd[1]: Started session-13.scope - Session 13 of User core. Mar 17 17:42:55.462816 sshd[5904]: Connection closed by 10.0.0.1 port 41458 Mar 17 17:42:55.463964 sshd-session[5902]: pam_unix(sshd:session): session closed for user core Mar 17 17:42:55.477714 systemd[1]: sshd@13-10.0.0.46:22-10.0.0.1:41458.service: Deactivated successfully. Mar 17 17:42:55.481827 systemd[1]: session-13.scope: Deactivated successfully. Mar 17 17:42:55.485989 systemd-logind[1456]: Session 13 logged out. Waiting for processes to exit. Mar 17 17:42:55.495522 systemd[1]: Started sshd@14-10.0.0.46:22-10.0.0.1:41466.service - OpenSSH per-connection server daemon (10.0.0.1:41466). Mar 17 17:42:55.497474 systemd-logind[1456]: Removed session 13. Mar 17 17:42:55.550628 sshd[5914]: Accepted publickey for core from 10.0.0.1 port 41466 ssh2: RSA SHA256:pvoNHoTmHcKIZ8E4rah4Xh4kuY0L81ONuagOsU2gN/o Mar 17 17:42:55.552686 sshd-session[5914]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:42:55.559242 systemd-logind[1456]: New session 14 of user core. Mar 17 17:42:55.569333 systemd[1]: Started session-14.scope - Session 14 of User core. Mar 17 17:42:55.711775 sshd[5916]: Connection closed by 10.0.0.1 port 41466 Mar 17 17:42:55.712208 sshd-session[5914]: pam_unix(sshd:session): session closed for user core Mar 17 17:42:55.717240 systemd[1]: sshd@14-10.0.0.46:22-10.0.0.1:41466.service: Deactivated successfully. Mar 17 17:42:55.719570 systemd[1]: session-14.scope: Deactivated successfully. Mar 17 17:42:55.720385 systemd-logind[1456]: Session 14 logged out. Waiting for processes to exit. 
Mar 17 17:42:55.721443 systemd-logind[1456]: Removed session 14. Mar 17 17:42:55.987333 containerd[1471]: time="2025-03-17T17:42:55.987181049Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:42:55.988397 containerd[1471]: time="2025-03-17T17:42:55.988341845Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.2: active requests=0, bytes read=13986843" Mar 17 17:42:55.989923 containerd[1471]: time="2025-03-17T17:42:55.989866746Z" level=info msg="ImageCreate event name:\"sha256:09a5a6ea58a48ac826468e05538c78d1378e103737124f1744efea8699fc29a8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:42:55.994086 containerd[1471]: time="2025-03-17T17:42:55.993030900Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:54ef0afa50feb3f691782e8d6df9a7f27d127a3af9bbcbd0bcdadac98e8be8e3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:42:55.994788 containerd[1471]: time="2025-03-17T17:42:55.994761109Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.2\" with image id \"sha256:09a5a6ea58a48ac826468e05538c78d1378e103737124f1744efea8699fc29a8\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.2\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:54ef0afa50feb3f691782e8d6df9a7f27d127a3af9bbcbd0bcdadac98e8be8e3\", size \"15479899\" in 2.212392873s" Mar 17 17:42:55.994875 containerd[1471]: time="2025-03-17T17:42:55.994860912Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.2\" returns image reference \"sha256:09a5a6ea58a48ac826468e05538c78d1378e103737124f1744efea8699fc29a8\"" Mar 17 17:42:55.997840 containerd[1471]: time="2025-03-17T17:42:55.997796873Z" level=info msg="CreateContainer within sandbox \"6fb662021b0e043ca70f6cde7b77ae5c9b334b2905c515b9bd17040f98e78467\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}"
Mar 17 17:42:56.015156 containerd[1471]: time="2025-03-17T17:42:56.015086174Z" level=info msg="CreateContainer within sandbox \"6fb662021b0e043ca70f6cde7b77ae5c9b334b2905c515b9bd17040f98e78467\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"d1d1f670eb79e62d10d581eeeae9b22070dc3b42c41bc5e1ab161967369940b2\"" Mar 17 17:42:56.017338 containerd[1471]: time="2025-03-17T17:42:56.015610768Z" level=info msg="StartContainer for \"d1d1f670eb79e62d10d581eeeae9b22070dc3b42c41bc5e1ab161967369940b2\"" Mar 17 17:42:56.057268 systemd[1]: Started cri-containerd-d1d1f670eb79e62d10d581eeeae9b22070dc3b42c41bc5e1ab161967369940b2.scope - libcontainer container d1d1f670eb79e62d10d581eeeae9b22070dc3b42c41bc5e1ab161967369940b2. Mar 17 17:42:56.097098 containerd[1471]: time="2025-03-17T17:42:56.097010587Z" level=info msg="StartContainer for \"d1d1f670eb79e62d10d581eeeae9b22070dc3b42c41bc5e1ab161967369940b2\" returns successfully" Mar 17 17:42:57.153009 kubelet[2574]: I0317 17:42:57.152529 2574 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-lsjhx" podStartSLOduration=29.579025582 podStartE2EDuration="42.152513016s" podCreationTimestamp="2025-03-17 17:42:15 +0000 UTC" firstStartedPulling="2025-03-17 17:42:43.422784028 +0000 UTC m=+40.583205516" lastFinishedPulling="2025-03-17 17:42:55.996271461 +0000 UTC m=+53.156692950" observedRunningTime="2025-03-17 17:42:57.152491344 +0000 UTC m=+54.312912832" watchObservedRunningTime="2025-03-17 17:42:57.152513016 +0000 UTC m=+54.312934504" Mar 17 17:42:57.430506 kubelet[2574]: I0317 17:42:57.430327 2574 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0
Mar 17 17:42:57.430506 kubelet[2574]: I0317 17:42:57.430399 2574 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Mar 17 17:43:00.730483 systemd[1]: Started sshd@15-10.0.0.46:22-10.0.0.1:34708.service - OpenSSH per-connection server daemon (10.0.0.1:34708). Mar 17 17:43:00.775930 sshd[5973]: Accepted publickey for core from 10.0.0.1 port 34708 ssh2: RSA SHA256:pvoNHoTmHcKIZ8E4rah4Xh4kuY0L81ONuagOsU2gN/o Mar 17 17:43:00.778283 sshd-session[5973]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:43:00.783055 systemd-logind[1456]: New session 15 of user core. Mar 17 17:43:00.792326 systemd[1]: Started session-15.scope - Session 15 of User core. Mar 17 17:43:01.060401 sshd[5975]: Connection closed by 10.0.0.1 port 34708 Mar 17 17:43:01.060785 sshd-session[5973]: pam_unix(sshd:session): session closed for user core Mar 17 17:43:01.065847 systemd[1]: sshd@15-10.0.0.46:22-10.0.0.1:34708.service: Deactivated successfully. Mar 17 17:43:01.068692 systemd[1]: session-15.scope: Deactivated successfully. Mar 17 17:43:01.069480 systemd-logind[1456]: Session 15 logged out. Waiting for processes to exit. Mar 17 17:43:01.070614 systemd-logind[1456]: Removed session 15.
Mar 17 17:43:02.920618 containerd[1471]: time="2025-03-17T17:43:02.920569982Z" level=info msg="StopPodSandbox for \"1a12983a8841ddeac31ec562c0ce3b12e75f4c40488315d12dd610812c63ac70\"" Mar 17 17:43:02.921166 containerd[1471]: time="2025-03-17T17:43:02.920719380Z" level=info msg="TearDown network for sandbox \"1a12983a8841ddeac31ec562c0ce3b12e75f4c40488315d12dd610812c63ac70\" successfully" Mar 17 17:43:02.921166 containerd[1471]: time="2025-03-17T17:43:02.920731704Z" level=info msg="StopPodSandbox for \"1a12983a8841ddeac31ec562c0ce3b12e75f4c40488315d12dd610812c63ac70\" returns successfully" Mar 17 17:43:02.926930 containerd[1471]: time="2025-03-17T17:43:02.926892001Z" level=info msg="RemovePodSandbox for \"1a12983a8841ddeac31ec562c0ce3b12e75f4c40488315d12dd610812c63ac70\"" Mar 17 17:43:02.938771 containerd[1471]: time="2025-03-17T17:43:02.938714816Z" level=info msg="Forcibly stopping sandbox \"1a12983a8841ddeac31ec562c0ce3b12e75f4c40488315d12dd610812c63ac70\"" Mar 17 17:43:02.938926 containerd[1471]: time="2025-03-17T17:43:02.938861359Z" level=info msg="TearDown network for sandbox \"1a12983a8841ddeac31ec562c0ce3b12e75f4c40488315d12dd610812c63ac70\" successfully" Mar 17 17:43:03.097841 containerd[1471]: time="2025-03-17T17:43:03.097771539Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"1a12983a8841ddeac31ec562c0ce3b12e75f4c40488315d12dd610812c63ac70\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 17 17:43:03.097997 containerd[1471]: time="2025-03-17T17:43:03.097858988Z" level=info msg="RemovePodSandbox \"1a12983a8841ddeac31ec562c0ce3b12e75f4c40488315d12dd610812c63ac70\" returns successfully" Mar 17 17:43:03.098452 containerd[1471]: time="2025-03-17T17:43:03.098416091Z" level=info msg="StopPodSandbox for \"ea14d6d0447dbc5080d47a26d58a2bfba80bc5ebe62085775ca7fc4db9ecf3b3\"" Mar 17 17:43:03.098576 containerd[1471]: time="2025-03-17T17:43:03.098554227Z" level=info msg="TearDown network for sandbox \"ea14d6d0447dbc5080d47a26d58a2bfba80bc5ebe62085775ca7fc4db9ecf3b3\" successfully" Mar 17 17:43:03.098576 containerd[1471]: time="2025-03-17T17:43:03.098572322Z" level=info msg="StopPodSandbox for \"ea14d6d0447dbc5080d47a26d58a2bfba80bc5ebe62085775ca7fc4db9ecf3b3\" returns successfully" Mar 17 17:43:03.098833 containerd[1471]: time="2025-03-17T17:43:03.098799278Z" level=info msg="RemovePodSandbox for \"ea14d6d0447dbc5080d47a26d58a2bfba80bc5ebe62085775ca7fc4db9ecf3b3\"" Mar 17 17:43:03.098833 containerd[1471]: time="2025-03-17T17:43:03.098823004Z" level=info msg="Forcibly stopping sandbox \"ea14d6d0447dbc5080d47a26d58a2bfba80bc5ebe62085775ca7fc4db9ecf3b3\"" Mar 17 17:43:03.098986 containerd[1471]: time="2025-03-17T17:43:03.098899622Z" level=info msg="TearDown network for sandbox \"ea14d6d0447dbc5080d47a26d58a2bfba80bc5ebe62085775ca7fc4db9ecf3b3\" successfully" Mar 17 17:43:03.281919 containerd[1471]: time="2025-03-17T17:43:03.281832424Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ea14d6d0447dbc5080d47a26d58a2bfba80bc5ebe62085775ca7fc4db9ecf3b3\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 17 17:43:03.281919 containerd[1471]: time="2025-03-17T17:43:03.281929672Z" level=info msg="RemovePodSandbox \"ea14d6d0447dbc5080d47a26d58a2bfba80bc5ebe62085775ca7fc4db9ecf3b3\" returns successfully" Mar 17 17:43:03.282542 containerd[1471]: time="2025-03-17T17:43:03.282500030Z" level=info msg="StopPodSandbox for \"04898b8b1a06024d8a321645a3aedaf17f255e75b4e26b3816a2c1df841abfaf\"" Mar 17 17:43:03.282723 containerd[1471]: time="2025-03-17T17:43:03.282688794Z" level=info msg="TearDown network for sandbox \"04898b8b1a06024d8a321645a3aedaf17f255e75b4e26b3816a2c1df841abfaf\" successfully" Mar 17 17:43:03.282723 containerd[1471]: time="2025-03-17T17:43:03.282708752Z" level=info msg="StopPodSandbox for \"04898b8b1a06024d8a321645a3aedaf17f255e75b4e26b3816a2c1df841abfaf\" returns successfully" Mar 17 17:43:03.283178 containerd[1471]: time="2025-03-17T17:43:03.283146285Z" level=info msg="RemovePodSandbox for \"04898b8b1a06024d8a321645a3aedaf17f255e75b4e26b3816a2c1df841abfaf\"" Mar 17 17:43:03.283178 containerd[1471]: time="2025-03-17T17:43:03.283172805Z" level=info msg="Forcibly stopping sandbox \"04898b8b1a06024d8a321645a3aedaf17f255e75b4e26b3816a2c1df841abfaf\"" Mar 17 17:43:03.283321 containerd[1471]: time="2025-03-17T17:43:03.283276795Z" level=info msg="TearDown network for sandbox \"04898b8b1a06024d8a321645a3aedaf17f255e75b4e26b3816a2c1df841abfaf\" successfully" Mar 17 17:43:03.363807 containerd[1471]: time="2025-03-17T17:43:03.363720755Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"04898b8b1a06024d8a321645a3aedaf17f255e75b4e26b3816a2c1df841abfaf\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 17 17:43:03.363807 containerd[1471]: time="2025-03-17T17:43:03.363797703Z" level=info msg="RemovePodSandbox \"04898b8b1a06024d8a321645a3aedaf17f255e75b4e26b3816a2c1df841abfaf\" returns successfully" Mar 17 17:43:03.364402 containerd[1471]: time="2025-03-17T17:43:03.364349336Z" level=info msg="StopPodSandbox for \"39172e784b07542f3f1eb6b9c87f2642270767685a89f397784b4fad77185966\"" Mar 17 17:43:03.364569 containerd[1471]: time="2025-03-17T17:43:03.364456051Z" level=info msg="TearDown network for sandbox \"39172e784b07542f3f1eb6b9c87f2642270767685a89f397784b4fad77185966\" successfully" Mar 17 17:43:03.364569 containerd[1471]: time="2025-03-17T17:43:03.364467513Z" level=info msg="StopPodSandbox for \"39172e784b07542f3f1eb6b9c87f2642270767685a89f397784b4fad77185966\" returns successfully" Mar 17 17:43:03.364744 containerd[1471]: time="2025-03-17T17:43:03.364713587Z" level=info msg="RemovePodSandbox for \"39172e784b07542f3f1eb6b9c87f2642270767685a89f397784b4fad77185966\"" Mar 17 17:43:03.364823 containerd[1471]: time="2025-03-17T17:43:03.364743615Z" level=info msg="Forcibly stopping sandbox \"39172e784b07542f3f1eb6b9c87f2642270767685a89f397784b4fad77185966\"" Mar 17 17:43:03.364888 containerd[1471]: time="2025-03-17T17:43:03.364840742Z" level=info msg="TearDown network for sandbox \"39172e784b07542f3f1eb6b9c87f2642270767685a89f397784b4fad77185966\" successfully" Mar 17 17:43:03.462620 containerd[1471]: time="2025-03-17T17:43:03.462557688Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"39172e784b07542f3f1eb6b9c87f2642270767685a89f397784b4fad77185966\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 17 17:43:03.462620 containerd[1471]: time="2025-03-17T17:43:03.462627732Z" level=info msg="RemovePodSandbox \"39172e784b07542f3f1eb6b9c87f2642270767685a89f397784b4fad77185966\" returns successfully" Mar 17 17:43:03.463252 containerd[1471]: time="2025-03-17T17:43:03.463204382Z" level=info msg="StopPodSandbox for \"4d0bb67187b44524b49f0720b91d54ad59f0d422d33a604c2f27a5360d15d8b6\"" Mar 17 17:43:03.463395 containerd[1471]: time="2025-03-17T17:43:03.463343620Z" level=info msg="TearDown network for sandbox \"4d0bb67187b44524b49f0720b91d54ad59f0d422d33a604c2f27a5360d15d8b6\" successfully" Mar 17 17:43:03.463395 containerd[1471]: time="2025-03-17T17:43:03.463392164Z" level=info msg="StopPodSandbox for \"4d0bb67187b44524b49f0720b91d54ad59f0d422d33a604c2f27a5360d15d8b6\" returns successfully" Mar 17 17:43:03.463705 containerd[1471]: time="2025-03-17T17:43:03.463673135Z" level=info msg="RemovePodSandbox for \"4d0bb67187b44524b49f0720b91d54ad59f0d422d33a604c2f27a5360d15d8b6\"" Mar 17 17:43:03.463705 containerd[1471]: time="2025-03-17T17:43:03.463697953Z" level=info msg="Forcibly stopping sandbox \"4d0bb67187b44524b49f0720b91d54ad59f0d422d33a604c2f27a5360d15d8b6\"" Mar 17 17:43:03.463823 containerd[1471]: time="2025-03-17T17:43:03.463767868Z" level=info msg="TearDown network for sandbox \"4d0bb67187b44524b49f0720b91d54ad59f0d422d33a604c2f27a5360d15d8b6\" successfully" Mar 17 17:43:03.614961 containerd[1471]: time="2025-03-17T17:43:03.614787888Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"4d0bb67187b44524b49f0720b91d54ad59f0d422d33a604c2f27a5360d15d8b6\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 17 17:43:03.614961 containerd[1471]: time="2025-03-17T17:43:03.614867020Z" level=info msg="RemovePodSandbox \"4d0bb67187b44524b49f0720b91d54ad59f0d422d33a604c2f27a5360d15d8b6\" returns successfully" Mar 17 17:43:03.615607 containerd[1471]: time="2025-03-17T17:43:03.615581376Z" level=info msg="StopPodSandbox for \"876c4ff72854a799f7991bdbf3266cb4c842a820844d83e42161d08c217d6e45\"" Mar 17 17:43:03.615736 containerd[1471]: time="2025-03-17T17:43:03.615707870Z" level=info msg="TearDown network for sandbox \"876c4ff72854a799f7991bdbf3266cb4c842a820844d83e42161d08c217d6e45\" successfully" Mar 17 17:43:03.615736 containerd[1471]: time="2025-03-17T17:43:03.615727297Z" level=info msg="StopPodSandbox for \"876c4ff72854a799f7991bdbf3266cb4c842a820844d83e42161d08c217d6e45\" returns successfully" Mar 17 17:43:03.616112 containerd[1471]: time="2025-03-17T17:43:03.616084385Z" level=info msg="RemovePodSandbox for \"876c4ff72854a799f7991bdbf3266cb4c842a820844d83e42161d08c217d6e45\"" Mar 17 17:43:03.616199 containerd[1471]: time="2025-03-17T17:43:03.616111968Z" level=info msg="Forcibly stopping sandbox \"876c4ff72854a799f7991bdbf3266cb4c842a820844d83e42161d08c217d6e45\"" Mar 17 17:43:03.616241 containerd[1471]: time="2025-03-17T17:43:03.616193735Z" level=info msg="TearDown network for sandbox \"876c4ff72854a799f7991bdbf3266cb4c842a820844d83e42161d08c217d6e45\" successfully" Mar 17 17:43:03.720744 containerd[1471]: time="2025-03-17T17:43:03.720677950Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"876c4ff72854a799f7991bdbf3266cb4c842a820844d83e42161d08c217d6e45\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 17 17:43:03.720744 containerd[1471]: time="2025-03-17T17:43:03.720758445Z" level=info msg="RemovePodSandbox \"876c4ff72854a799f7991bdbf3266cb4c842a820844d83e42161d08c217d6e45\" returns successfully" Mar 17 17:43:03.721241 containerd[1471]: time="2025-03-17T17:43:03.721211327Z" level=info msg="StopPodSandbox for \"2e196fcd9f9243b223018639d31a93ff3f9775e305ad1408354381cea90249e5\"" Mar 17 17:43:03.721400 containerd[1471]: time="2025-03-17T17:43:03.721328643Z" level=info msg="TearDown network for sandbox \"2e196fcd9f9243b223018639d31a93ff3f9775e305ad1408354381cea90249e5\" successfully" Mar 17 17:43:03.721400 containerd[1471]: time="2025-03-17T17:43:03.721338391Z" level=info msg="StopPodSandbox for \"2e196fcd9f9243b223018639d31a93ff3f9775e305ad1408354381cea90249e5\" returns successfully" Mar 17 17:43:03.721595 containerd[1471]: time="2025-03-17T17:43:03.721573394Z" level=info msg="RemovePodSandbox for \"2e196fcd9f9243b223018639d31a93ff3f9775e305ad1408354381cea90249e5\"" Mar 17 17:43:03.721640 containerd[1471]: time="2025-03-17T17:43:03.721595376Z" level=info msg="Forcibly stopping sandbox \"2e196fcd9f9243b223018639d31a93ff3f9775e305ad1408354381cea90249e5\"" Mar 17 17:43:03.721700 containerd[1471]: time="2025-03-17T17:43:03.721661855Z" level=info msg="TearDown network for sandbox \"2e196fcd9f9243b223018639d31a93ff3f9775e305ad1408354381cea90249e5\" successfully" Mar 17 17:43:03.791747 containerd[1471]: time="2025-03-17T17:43:03.791679921Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"2e196fcd9f9243b223018639d31a93ff3f9775e305ad1408354381cea90249e5\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 17 17:43:03.791747 containerd[1471]: time="2025-03-17T17:43:03.791744565Z" level=info msg="RemovePodSandbox \"2e196fcd9f9243b223018639d31a93ff3f9775e305ad1408354381cea90249e5\" returns successfully" Mar 17 17:43:03.792357 containerd[1471]: time="2025-03-17T17:43:03.792305426Z" level=info msg="StopPodSandbox for \"537027ffa1bfbd16c68902c78119d95aa49538ff5449faa654586af4707e9042\"" Mar 17 17:43:03.792492 containerd[1471]: time="2025-03-17T17:43:03.792467578Z" level=info msg="TearDown network for sandbox \"537027ffa1bfbd16c68902c78119d95aa49538ff5449faa654586af4707e9042\" successfully" Mar 17 17:43:03.792492 containerd[1471]: time="2025-03-17T17:43:03.792487025Z" level=info msg="StopPodSandbox for \"537027ffa1bfbd16c68902c78119d95aa49538ff5449faa654586af4707e9042\" returns successfully" Mar 17 17:43:03.792851 containerd[1471]: time="2025-03-17T17:43:03.792805067Z" level=info msg="RemovePodSandbox for \"537027ffa1bfbd16c68902c78119d95aa49538ff5449faa654586af4707e9042\"" Mar 17 17:43:03.792851 containerd[1471]: time="2025-03-17T17:43:03.792826418Z" level=info msg="Forcibly stopping sandbox \"537027ffa1bfbd16c68902c78119d95aa49538ff5449faa654586af4707e9042\"" Mar 17 17:43:03.792953 containerd[1471]: time="2025-03-17T17:43:03.792919017Z" level=info msg="TearDown network for sandbox \"537027ffa1bfbd16c68902c78119d95aa49538ff5449faa654586af4707e9042\" successfully" Mar 17 17:43:03.831172 containerd[1471]: time="2025-03-17T17:43:03.831116318Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"537027ffa1bfbd16c68902c78119d95aa49538ff5449faa654586af4707e9042\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 17 17:43:03.831360 containerd[1471]: time="2025-03-17T17:43:03.831190792Z" level=info msg="RemovePodSandbox \"537027ffa1bfbd16c68902c78119d95aa49538ff5449faa654586af4707e9042\" returns successfully" Mar 17 17:43:03.831701 containerd[1471]: time="2025-03-17T17:43:03.831674603Z" level=info msg="StopPodSandbox for \"f595336b40a5a0477c6557d8c88617834e5c5758a2435ab0f027d22a0d8aac32\"" Mar 17 17:43:03.831818 containerd[1471]: time="2025-03-17T17:43:03.831797149Z" level=info msg="TearDown network for sandbox \"f595336b40a5a0477c6557d8c88617834e5c5758a2435ab0f027d22a0d8aac32\" successfully" Mar 17 17:43:03.831818 containerd[1471]: time="2025-03-17T17:43:03.831811868Z" level=info msg="StopPodSandbox for \"f595336b40a5a0477c6557d8c88617834e5c5758a2435ab0f027d22a0d8aac32\" returns successfully" Mar 17 17:43:03.832095 containerd[1471]: time="2025-03-17T17:43:03.832051399Z" level=info msg="RemovePodSandbox for \"f595336b40a5a0477c6557d8c88617834e5c5758a2435ab0f027d22a0d8aac32\"" Mar 17 17:43:03.832095 containerd[1471]: time="2025-03-17T17:43:03.832087689Z" level=info msg="Forcibly stopping sandbox \"f595336b40a5a0477c6557d8c88617834e5c5758a2435ab0f027d22a0d8aac32\"" Mar 17 17:43:03.832190 containerd[1471]: time="2025-03-17T17:43:03.832155819Z" level=info msg="TearDown network for sandbox \"f595336b40a5a0477c6557d8c88617834e5c5758a2435ab0f027d22a0d8aac32\" successfully" Mar 17 17:43:03.901121 containerd[1471]: time="2025-03-17T17:43:03.900926094Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"f595336b40a5a0477c6557d8c88617834e5c5758a2435ab0f027d22a0d8aac32\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 17 17:43:03.901121 containerd[1471]: time="2025-03-17T17:43:03.901007040Z" level=info msg="RemovePodSandbox \"f595336b40a5a0477c6557d8c88617834e5c5758a2435ab0f027d22a0d8aac32\" returns successfully"
Mar 17 17:43:03.901874 containerd[1471]: time="2025-03-17T17:43:03.901635190Z" level=info msg="StopPodSandbox for \"0892a07a3e06be0e814b5c58351e353b0bb71c9a1f01243516ce3aef745decf8\""
Mar 17 17:43:03.901874 containerd[1471]: time="2025-03-17T17:43:03.901781441Z" level=info msg="TearDown network for sandbox \"0892a07a3e06be0e814b5c58351e353b0bb71c9a1f01243516ce3aef745decf8\" successfully"
Mar 17 17:43:03.901874 containerd[1471]: time="2025-03-17T17:43:03.901795939Z" level=info msg="StopPodSandbox for \"0892a07a3e06be0e814b5c58351e353b0bb71c9a1f01243516ce3aef745decf8\" returns successfully"
Mar 17 17:43:03.902120 containerd[1471]: time="2025-03-17T17:43:03.902091668Z" level=info msg="RemovePodSandbox for \"0892a07a3e06be0e814b5c58351e353b0bb71c9a1f01243516ce3aef745decf8\""
Mar 17 17:43:03.902160 containerd[1471]: time="2025-03-17T17:43:03.902125514Z" level=info msg="Forcibly stopping sandbox \"0892a07a3e06be0e814b5c58351e353b0bb71c9a1f01243516ce3aef745decf8\""
Mar 17 17:43:03.902254 containerd[1471]: time="2025-03-17T17:43:03.902207682Z" level=info msg="TearDown network for sandbox \"0892a07a3e06be0e814b5c58351e353b0bb71c9a1f01243516ce3aef745decf8\" successfully"
Mar 17 17:43:03.951373 containerd[1471]: time="2025-03-17T17:43:03.951305411Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"0892a07a3e06be0e814b5c58351e353b0bb71c9a1f01243516ce3aef745decf8\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Mar 17 17:43:03.951373 containerd[1471]: time="2025-03-17T17:43:03.951377389Z" level=info msg="RemovePodSandbox \"0892a07a3e06be0e814b5c58351e353b0bb71c9a1f01243516ce3aef745decf8\" returns successfully"
Mar 17 17:43:03.951921 containerd[1471]: time="2025-03-17T17:43:03.951801746Z" level=info msg="StopPodSandbox for \"ec0e48b05e0a53a99114c24c83de494a8807ddffa32d2d9512e782c6d1476ec0\""
Mar 17 17:43:03.951975 containerd[1471]: time="2025-03-17T17:43:03.951919733Z" level=info msg="TearDown network for sandbox \"ec0e48b05e0a53a99114c24c83de494a8807ddffa32d2d9512e782c6d1476ec0\" successfully"
Mar 17 17:43:03.951975 containerd[1471]: time="2025-03-17T17:43:03.951936505Z" level=info msg="StopPodSandbox for \"ec0e48b05e0a53a99114c24c83de494a8807ddffa32d2d9512e782c6d1476ec0\" returns successfully"
Mar 17 17:43:03.953119 containerd[1471]: time="2025-03-17T17:43:03.952227936Z" level=info msg="RemovePodSandbox for \"ec0e48b05e0a53a99114c24c83de494a8807ddffa32d2d9512e782c6d1476ec0\""
Mar 17 17:43:03.953119 containerd[1471]: time="2025-03-17T17:43:03.952255840Z" level=info msg="Forcibly stopping sandbox \"ec0e48b05e0a53a99114c24c83de494a8807ddffa32d2d9512e782c6d1476ec0\""
Mar 17 17:43:03.953119 containerd[1471]: time="2025-03-17T17:43:03.952325835Z" level=info msg="TearDown network for sandbox \"ec0e48b05e0a53a99114c24c83de494a8807ddffa32d2d9512e782c6d1476ec0\" successfully"
Mar 17 17:43:04.031643 containerd[1471]: time="2025-03-17T17:43:04.031582468Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ec0e48b05e0a53a99114c24c83de494a8807ddffa32d2d9512e782c6d1476ec0\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Mar 17 17:43:04.031803 containerd[1471]: time="2025-03-17T17:43:04.031657101Z" level=info msg="RemovePodSandbox \"ec0e48b05e0a53a99114c24c83de494a8807ddffa32d2d9512e782c6d1476ec0\" returns successfully"
Mar 17 17:43:04.032123 containerd[1471]: time="2025-03-17T17:43:04.032090435Z" level=info msg="StopPodSandbox for \"23c98b6772d0ae34e982558c97a3ab869ea3f90bd2d257af1efddf9ab2c09d53\""
Mar 17 17:43:04.032261 containerd[1471]: time="2025-03-17T17:43:04.032235764Z" level=info msg="TearDown network for sandbox \"23c98b6772d0ae34e982558c97a3ab869ea3f90bd2d257af1efddf9ab2c09d53\" successfully"
Mar 17 17:43:04.032261 containerd[1471]: time="2025-03-17T17:43:04.032253469Z" level=info msg="StopPodSandbox for \"23c98b6772d0ae34e982558c97a3ab869ea3f90bd2d257af1efddf9ab2c09d53\" returns successfully"
Mar 17 17:43:04.034484 containerd[1471]: time="2025-03-17T17:43:04.032513930Z" level=info msg="RemovePodSandbox for \"23c98b6772d0ae34e982558c97a3ab869ea3f90bd2d257af1efddf9ab2c09d53\""
Mar 17 17:43:04.034484 containerd[1471]: time="2025-03-17T17:43:04.032535211Z" level=info msg="Forcibly stopping sandbox \"23c98b6772d0ae34e982558c97a3ab869ea3f90bd2d257af1efddf9ab2c09d53\""
Mar 17 17:43:04.034484 containerd[1471]: time="2025-03-17T17:43:04.032604405Z" level=info msg="TearDown network for sandbox \"23c98b6772d0ae34e982558c97a3ab869ea3f90bd2d257af1efddf9ab2c09d53\" successfully"
Mar 17 17:43:04.162966 containerd[1471]: time="2025-03-17T17:43:04.162824689Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"23c98b6772d0ae34e982558c97a3ab869ea3f90bd2d257af1efddf9ab2c09d53\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Mar 17 17:43:04.162966 containerd[1471]: time="2025-03-17T17:43:04.162923088Z" level=info msg="RemovePodSandbox \"23c98b6772d0ae34e982558c97a3ab869ea3f90bd2d257af1efddf9ab2c09d53\" returns successfully"
Mar 17 17:43:04.163888 containerd[1471]: time="2025-03-17T17:43:04.163867446Z" level=info msg="StopPodSandbox for \"029c4006ecd1c4f4a1ac53934cfb4e4722c8c52eefcf8cff976bcd131fe5d151\""
Mar 17 17:43:04.163985 containerd[1471]: time="2025-03-17T17:43:04.163968139Z" level=info msg="TearDown network for sandbox \"029c4006ecd1c4f4a1ac53934cfb4e4722c8c52eefcf8cff976bcd131fe5d151\" successfully"
Mar 17 17:43:04.163985 containerd[1471]: time="2025-03-17T17:43:04.163982396Z" level=info msg="StopPodSandbox for \"029c4006ecd1c4f4a1ac53934cfb4e4722c8c52eefcf8cff976bcd131fe5d151\" returns successfully"
Mar 17 17:43:04.164267 containerd[1471]: time="2025-03-17T17:43:04.164246675Z" level=info msg="RemovePodSandbox for \"029c4006ecd1c4f4a1ac53934cfb4e4722c8c52eefcf8cff976bcd131fe5d151\""
Mar 17 17:43:04.164345 containerd[1471]: time="2025-03-17T17:43:04.164268617Z" level=info msg="Forcibly stopping sandbox \"029c4006ecd1c4f4a1ac53934cfb4e4722c8c52eefcf8cff976bcd131fe5d151\""
Mar 17 17:43:04.164421 containerd[1471]: time="2025-03-17T17:43:04.164402144Z" level=info msg="TearDown network for sandbox \"029c4006ecd1c4f4a1ac53934cfb4e4722c8c52eefcf8cff976bcd131fe5d151\" successfully"
Mar 17 17:43:04.210887 containerd[1471]: time="2025-03-17T17:43:04.210818651Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"029c4006ecd1c4f4a1ac53934cfb4e4722c8c52eefcf8cff976bcd131fe5d151\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Mar 17 17:43:04.211090 containerd[1471]: time="2025-03-17T17:43:04.210947929Z" level=info msg="RemovePodSandbox \"029c4006ecd1c4f4a1ac53934cfb4e4722c8c52eefcf8cff976bcd131fe5d151\" returns successfully"
Mar 17 17:43:04.211462 containerd[1471]: time="2025-03-17T17:43:04.211431329Z" level=info msg="StopPodSandbox for \"752881363b6f73c704f3c8518dab695dfd31be7b9b79cdde65eb0c4d95df3791\""
Mar 17 17:43:04.211605 containerd[1471]: time="2025-03-17T17:43:04.211575477Z" level=info msg="TearDown network for sandbox \"752881363b6f73c704f3c8518dab695dfd31be7b9b79cdde65eb0c4d95df3791\" successfully"
Mar 17 17:43:04.211605 containerd[1471]: time="2025-03-17T17:43:04.211593231Z" level=info msg="StopPodSandbox for \"752881363b6f73c704f3c8518dab695dfd31be7b9b79cdde65eb0c4d95df3791\" returns successfully"
Mar 17 17:43:04.211819 containerd[1471]: time="2025-03-17T17:43:04.211783958Z" level=info msg="RemovePodSandbox for \"752881363b6f73c704f3c8518dab695dfd31be7b9b79cdde65eb0c4d95df3791\""
Mar 17 17:43:04.211819 containerd[1471]: time="2025-03-17T17:43:04.211808244Z" level=info msg="Forcibly stopping sandbox \"752881363b6f73c704f3c8518dab695dfd31be7b9b79cdde65eb0c4d95df3791\""
Mar 17 17:43:04.211934 containerd[1471]: time="2025-03-17T17:43:04.211885353Z" level=info msg="TearDown network for sandbox \"752881363b6f73c704f3c8518dab695dfd31be7b9b79cdde65eb0c4d95df3791\" successfully"
Mar 17 17:43:04.252836 containerd[1471]: time="2025-03-17T17:43:04.252788300Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"752881363b6f73c704f3c8518dab695dfd31be7b9b79cdde65eb0c4d95df3791\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Mar 17 17:43:04.252938 containerd[1471]: time="2025-03-17T17:43:04.252878985Z" level=info msg="RemovePodSandbox \"752881363b6f73c704f3c8518dab695dfd31be7b9b79cdde65eb0c4d95df3791\" returns successfully"
Mar 17 17:43:04.253264 containerd[1471]: time="2025-03-17T17:43:04.253241301Z" level=info msg="StopPodSandbox for \"affd646c561af90bafd7e4b900a961dd15025345077cff03d0e35877d76fd6df\""
Mar 17 17:43:04.253364 containerd[1471]: time="2025-03-17T17:43:04.253345773Z" level=info msg="TearDown network for sandbox \"affd646c561af90bafd7e4b900a961dd15025345077cff03d0e35877d76fd6df\" successfully"
Mar 17 17:43:04.253364 containerd[1471]: time="2025-03-17T17:43:04.253358477Z" level=info msg="StopPodSandbox for \"affd646c561af90bafd7e4b900a961dd15025345077cff03d0e35877d76fd6df\" returns successfully"
Mar 17 17:43:04.253594 containerd[1471]: time="2025-03-17T17:43:04.253555617Z" level=info msg="RemovePodSandbox for \"affd646c561af90bafd7e4b900a961dd15025345077cff03d0e35877d76fd6df\""
Mar 17 17:43:04.253594 containerd[1471]: time="2025-03-17T17:43:04.253575495Z" level=info msg="Forcibly stopping sandbox \"affd646c561af90bafd7e4b900a961dd15025345077cff03d0e35877d76fd6df\""
Mar 17 17:43:04.253695 containerd[1471]: time="2025-03-17T17:43:04.253637344Z" level=info msg="TearDown network for sandbox \"affd646c561af90bafd7e4b900a961dd15025345077cff03d0e35877d76fd6df\" successfully"
Mar 17 17:43:04.337254 containerd[1471]: time="2025-03-17T17:43:04.337185991Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"affd646c561af90bafd7e4b900a961dd15025345077cff03d0e35877d76fd6df\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Mar 17 17:43:04.337254 containerd[1471]: time="2025-03-17T17:43:04.337261927Z" level=info msg="RemovePodSandbox \"affd646c561af90bafd7e4b900a961dd15025345077cff03d0e35877d76fd6df\" returns successfully"
Mar 17 17:43:04.337685 containerd[1471]: time="2025-03-17T17:43:04.337662879Z" level=info msg="StopPodSandbox for \"79fb2b1bbc5e9621a87a0573fbaa0096c6eb6ca03a40900b2e0431acc0355602\""
Mar 17 17:43:04.337822 containerd[1471]: time="2025-03-17T17:43:04.337787879Z" level=info msg="TearDown network for sandbox \"79fb2b1bbc5e9621a87a0573fbaa0096c6eb6ca03a40900b2e0431acc0355602\" successfully"
Mar 17 17:43:04.337822 containerd[1471]: time="2025-03-17T17:43:04.337807687Z" level=info msg="StopPodSandbox for \"79fb2b1bbc5e9621a87a0573fbaa0096c6eb6ca03a40900b2e0431acc0355602\" returns successfully"
Mar 17 17:43:04.338103 containerd[1471]: time="2025-03-17T17:43:04.338053110Z" level=info msg="RemovePodSandbox for \"79fb2b1bbc5e9621a87a0573fbaa0096c6eb6ca03a40900b2e0431acc0355602\""
Mar 17 17:43:04.338103 containerd[1471]: time="2025-03-17T17:43:04.338094249Z" level=info msg="Forcibly stopping sandbox \"79fb2b1bbc5e9621a87a0573fbaa0096c6eb6ca03a40900b2e0431acc0355602\""
Mar 17 17:43:04.338225 containerd[1471]: time="2025-03-17T17:43:04.338177389Z" level=info msg="TearDown network for sandbox \"79fb2b1bbc5e9621a87a0573fbaa0096c6eb6ca03a40900b2e0431acc0355602\" successfully"
Mar 17 17:43:04.386319 containerd[1471]: time="2025-03-17T17:43:04.386259240Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"79fb2b1bbc5e9621a87a0573fbaa0096c6eb6ca03a40900b2e0431acc0355602\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Mar 17 17:43:04.386420 containerd[1471]: time="2025-03-17T17:43:04.386326388Z" level=info msg="RemovePodSandbox \"79fb2b1bbc5e9621a87a0573fbaa0096c6eb6ca03a40900b2e0431acc0355602\" returns successfully"
Mar 17 17:43:04.386687 containerd[1471]: time="2025-03-17T17:43:04.386657486Z" level=info msg="StopPodSandbox for \"fa980406b9a3398d166612d25afd48da2b1b358072669035c994ca219f3b09f8\""
Mar 17 17:43:04.386794 containerd[1471]: time="2025-03-17T17:43:04.386768469Z" level=info msg="TearDown network for sandbox \"fa980406b9a3398d166612d25afd48da2b1b358072669035c994ca219f3b09f8\" successfully"
Mar 17 17:43:04.386794 containerd[1471]: time="2025-03-17T17:43:04.386787175Z" level=info msg="StopPodSandbox for \"fa980406b9a3398d166612d25afd48da2b1b358072669035c994ca219f3b09f8\" returns successfully"
Mar 17 17:43:04.387041 containerd[1471]: time="2025-03-17T17:43:04.387012829Z" level=info msg="RemovePodSandbox for \"fa980406b9a3398d166612d25afd48da2b1b358072669035c994ca219f3b09f8\""
Mar 17 17:43:04.387041 containerd[1471]: time="2025-03-17T17:43:04.387043509Z" level=info msg="Forcibly stopping sandbox \"fa980406b9a3398d166612d25afd48da2b1b358072669035c994ca219f3b09f8\""
Mar 17 17:43:04.387169 containerd[1471]: time="2025-03-17T17:43:04.387131478Z" level=info msg="TearDown network for sandbox \"fa980406b9a3398d166612d25afd48da2b1b358072669035c994ca219f3b09f8\" successfully"
Mar 17 17:43:04.411845 containerd[1471]: time="2025-03-17T17:43:04.411789056Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"fa980406b9a3398d166612d25afd48da2b1b358072669035c994ca219f3b09f8\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Mar 17 17:43:04.412024 containerd[1471]: time="2025-03-17T17:43:04.411869090Z" level=info msg="RemovePodSandbox \"fa980406b9a3398d166612d25afd48da2b1b358072669035c994ca219f3b09f8\" returns successfully"
Mar 17 17:43:04.412366 containerd[1471]: time="2025-03-17T17:43:04.412331198Z" level=info msg="StopPodSandbox for \"438162a23ae7fabe3d9876ae44fd5ed596565f51c7f2e40c1edb3c0697ff6eeb\""
Mar 17 17:43:04.412488 containerd[1471]: time="2025-03-17T17:43:04.412456881Z" level=info msg="TearDown network for sandbox \"438162a23ae7fabe3d9876ae44fd5ed596565f51c7f2e40c1edb3c0697ff6eeb\" successfully"
Mar 17 17:43:04.412488 containerd[1471]: time="2025-03-17T17:43:04.412477400Z" level=info msg="StopPodSandbox for \"438162a23ae7fabe3d9876ae44fd5ed596565f51c7f2e40c1edb3c0697ff6eeb\" returns successfully"
Mar 17 17:43:04.412710 containerd[1471]: time="2025-03-17T17:43:04.412687063Z" level=info msg="RemovePodSandbox for \"438162a23ae7fabe3d9876ae44fd5ed596565f51c7f2e40c1edb3c0697ff6eeb\""
Mar 17 17:43:04.412710 containerd[1471]: time="2025-03-17T17:43:04.412706591Z" level=info msg="Forcibly stopping sandbox \"438162a23ae7fabe3d9876ae44fd5ed596565f51c7f2e40c1edb3c0697ff6eeb\""
Mar 17 17:43:04.412811 containerd[1471]: time="2025-03-17T17:43:04.412774050Z" level=info msg="TearDown network for sandbox \"438162a23ae7fabe3d9876ae44fd5ed596565f51c7f2e40c1edb3c0697ff6eeb\" successfully"
Mar 17 17:43:04.488722 containerd[1471]: time="2025-03-17T17:43:04.488651016Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"438162a23ae7fabe3d9876ae44fd5ed596565f51c7f2e40c1edb3c0697ff6eeb\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Mar 17 17:43:04.488722 containerd[1471]: time="2025-03-17T17:43:04.488727764Z" level=info msg="RemovePodSandbox \"438162a23ae7fabe3d9876ae44fd5ed596565f51c7f2e40c1edb3c0697ff6eeb\" returns successfully"
Mar 17 17:43:04.489277 containerd[1471]: time="2025-03-17T17:43:04.489247033Z" level=info msg="StopPodSandbox for \"9b304f912bb3f9e71af30d53d5c7f3e243f23d0daf308c489c9401f6c28de7dd\""
Mar 17 17:43:04.489430 containerd[1471]: time="2025-03-17T17:43:04.489371603Z" level=info msg="TearDown network for sandbox \"9b304f912bb3f9e71af30d53d5c7f3e243f23d0daf308c489c9401f6c28de7dd\" successfully"
Mar 17 17:43:04.489430 containerd[1471]: time="2025-03-17T17:43:04.489420827Z" level=info msg="StopPodSandbox for \"9b304f912bb3f9e71af30d53d5c7f3e243f23d0daf308c489c9401f6c28de7dd\" returns successfully"
Mar 17 17:43:04.489642 containerd[1471]: time="2025-03-17T17:43:04.489617005Z" level=info msg="RemovePodSandbox for \"9b304f912bb3f9e71af30d53d5c7f3e243f23d0daf308c489c9401f6c28de7dd\""
Mar 17 17:43:04.489690 containerd[1471]: time="2025-03-17T17:43:04.489642253Z" level=info msg="Forcibly stopping sandbox \"9b304f912bb3f9e71af30d53d5c7f3e243f23d0daf308c489c9401f6c28de7dd\""
Mar 17 17:43:04.489761 containerd[1471]: time="2025-03-17T17:43:04.489722447Z" level=info msg="TearDown network for sandbox \"9b304f912bb3f9e71af30d53d5c7f3e243f23d0daf308c489c9401f6c28de7dd\" successfully"
Mar 17 17:43:04.786738 containerd[1471]: time="2025-03-17T17:43:04.786608126Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"9b304f912bb3f9e71af30d53d5c7f3e243f23d0daf308c489c9401f6c28de7dd\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Mar 17 17:43:04.786738 containerd[1471]: time="2025-03-17T17:43:04.786672520Z" level=info msg="RemovePodSandbox \"9b304f912bb3f9e71af30d53d5c7f3e243f23d0daf308c489c9401f6c28de7dd\" returns successfully"
Mar 17 17:43:04.787109 containerd[1471]: time="2025-03-17T17:43:04.787054144Z" level=info msg="StopPodSandbox for \"3200190c87309f125fb6f4db01012c3f8c9172cd686e39de64273fa9716dbc2d\""
Mar 17 17:43:04.787313 containerd[1471]: time="2025-03-17T17:43:04.787278185Z" level=info msg="TearDown network for sandbox \"3200190c87309f125fb6f4db01012c3f8c9172cd686e39de64273fa9716dbc2d\" successfully"
Mar 17 17:43:04.787313 containerd[1471]: time="2025-03-17T17:43:04.787296811Z" level=info msg="StopPodSandbox for \"3200190c87309f125fb6f4db01012c3f8c9172cd686e39de64273fa9716dbc2d\" returns successfully"
Mar 17 17:43:04.787561 containerd[1471]: time="2025-03-17T17:43:04.787529088Z" level=info msg="RemovePodSandbox for \"3200190c87309f125fb6f4db01012c3f8c9172cd686e39de64273fa9716dbc2d\""
Mar 17 17:43:04.787561 containerd[1471]: time="2025-03-17T17:43:04.787560759Z" level=info msg="Forcibly stopping sandbox \"3200190c87309f125fb6f4db01012c3f8c9172cd686e39de64273fa9716dbc2d\""
Mar 17 17:43:04.787680 containerd[1471]: time="2025-03-17T17:43:04.787638709Z" level=info msg="TearDown network for sandbox \"3200190c87309f125fb6f4db01012c3f8c9172cd686e39de64273fa9716dbc2d\" successfully"
Mar 17 17:43:04.829891 containerd[1471]: time="2025-03-17T17:43:04.829848901Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"3200190c87309f125fb6f4db01012c3f8c9172cd686e39de64273fa9716dbc2d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Mar 17 17:43:04.829944 containerd[1471]: time="2025-03-17T17:43:04.829897425Z" level=info msg="RemovePodSandbox \"3200190c87309f125fb6f4db01012c3f8c9172cd686e39de64273fa9716dbc2d\" returns successfully"
Mar 17 17:43:04.830233 containerd[1471]: time="2025-03-17T17:43:04.830191100Z" level=info msg="StopPodSandbox for \"743e6592f769f9d664d43beeff4bdd5bb0cb253eef3f6ca03ad374fbd8378ca7\""
Mar 17 17:43:04.830345 containerd[1471]: time="2025-03-17T17:43:04.830320870Z" level=info msg="TearDown network for sandbox \"743e6592f769f9d664d43beeff4bdd5bb0cb253eef3f6ca03ad374fbd8378ca7\" successfully"
Mar 17 17:43:04.830345 containerd[1471]: time="2025-03-17T17:43:04.830339205Z" level=info msg="StopPodSandbox for \"743e6592f769f9d664d43beeff4bdd5bb0cb253eef3f6ca03ad374fbd8378ca7\" returns successfully"
Mar 17 17:43:04.830600 containerd[1471]: time="2025-03-17T17:43:04.830553297Z" level=info msg="RemovePodSandbox for \"743e6592f769f9d664d43beeff4bdd5bb0cb253eef3f6ca03ad374fbd8378ca7\""
Mar 17 17:43:04.830600 containerd[1471]: time="2025-03-17T17:43:04.830577152Z" level=info msg="Forcibly stopping sandbox \"743e6592f769f9d664d43beeff4bdd5bb0cb253eef3f6ca03ad374fbd8378ca7\""
Mar 17 17:43:04.830714 containerd[1471]: time="2025-03-17T17:43:04.830675672Z" level=info msg="TearDown network for sandbox \"743e6592f769f9d664d43beeff4bdd5bb0cb253eef3f6ca03ad374fbd8378ca7\" successfully"
Mar 17 17:43:04.852377 containerd[1471]: time="2025-03-17T17:43:04.852331814Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"743e6592f769f9d664d43beeff4bdd5bb0cb253eef3f6ca03ad374fbd8378ca7\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Mar 17 17:43:04.852377 containerd[1471]: time="2025-03-17T17:43:04.852376190Z" level=info msg="RemovePodSandbox \"743e6592f769f9d664d43beeff4bdd5bb0cb253eef3f6ca03ad374fbd8378ca7\" returns successfully"
Mar 17 17:43:04.852613 containerd[1471]: time="2025-03-17T17:43:04.852580683Z" level=info msg="StopPodSandbox for \"93863efd1f859db1a3509f2150ea53b31c09d80e6e1edb4b6f21bd6b060a1c8d\""
Mar 17 17:43:04.852684 containerd[1471]: time="2025-03-17T17:43:04.852665055Z" level=info msg="TearDown network for sandbox \"93863efd1f859db1a3509f2150ea53b31c09d80e6e1edb4b6f21bd6b060a1c8d\" successfully"
Mar 17 17:43:04.852684 containerd[1471]: time="2025-03-17T17:43:04.852677940Z" level=info msg="StopPodSandbox for \"93863efd1f859db1a3509f2150ea53b31c09d80e6e1edb4b6f21bd6b060a1c8d\" returns successfully"
Mar 17 17:43:04.852890 containerd[1471]: time="2025-03-17T17:43:04.852866663Z" level=info msg="RemovePodSandbox for \"93863efd1f859db1a3509f2150ea53b31c09d80e6e1edb4b6f21bd6b060a1c8d\""
Mar 17 17:43:04.852890 containerd[1471]: time="2025-03-17T17:43:04.852884748Z" level=info msg="Forcibly stopping sandbox \"93863efd1f859db1a3509f2150ea53b31c09d80e6e1edb4b6f21bd6b060a1c8d\""
Mar 17 17:43:04.852983 containerd[1471]: time="2025-03-17T17:43:04.852947729Z" level=info msg="TearDown network for sandbox \"93863efd1f859db1a3509f2150ea53b31c09d80e6e1edb4b6f21bd6b060a1c8d\" successfully"
Mar 17 17:43:04.901833 containerd[1471]: time="2025-03-17T17:43:04.901781277Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"93863efd1f859db1a3509f2150ea53b31c09d80e6e1edb4b6f21bd6b060a1c8d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Mar 17 17:43:04.901833 containerd[1471]: time="2025-03-17T17:43:04.901835610Z" level=info msg="RemovePodSandbox \"93863efd1f859db1a3509f2150ea53b31c09d80e6e1edb4b6f21bd6b060a1c8d\" returns successfully"
Mar 17 17:43:04.902136 containerd[1471]: time="2025-03-17T17:43:04.902100059Z" level=info msg="StopPodSandbox for \"627d96227fcfb4bac341ed8b4c81f89ade5706af8209f346fc4831efe716bc42\""
Mar 17 17:43:04.902233 containerd[1471]: time="2025-03-17T17:43:04.902204732Z" level=info msg="TearDown network for sandbox \"627d96227fcfb4bac341ed8b4c81f89ade5706af8209f346fc4831efe716bc42\" successfully"
Mar 17 17:43:04.902233 containerd[1471]: time="2025-03-17T17:43:04.902223748Z" level=info msg="StopPodSandbox for \"627d96227fcfb4bac341ed8b4c81f89ade5706af8209f346fc4831efe716bc42\" returns successfully"
Mar 17 17:43:04.902451 containerd[1471]: time="2025-03-17T17:43:04.902423702Z" level=info msg="RemovePodSandbox for \"627d96227fcfb4bac341ed8b4c81f89ade5706af8209f346fc4831efe716bc42\""
Mar 17 17:43:04.902485 containerd[1471]: time="2025-03-17T17:43:04.902451356Z" level=info msg="Forcibly stopping sandbox \"627d96227fcfb4bac341ed8b4c81f89ade5706af8209f346fc4831efe716bc42\""
Mar 17 17:43:04.902571 containerd[1471]: time="2025-03-17T17:43:04.902529005Z" level=info msg="TearDown network for sandbox \"627d96227fcfb4bac341ed8b4c81f89ade5706af8209f346fc4831efe716bc42\" successfully"
Mar 17 17:43:04.918080 containerd[1471]: time="2025-03-17T17:43:04.918012760Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"627d96227fcfb4bac341ed8b4c81f89ade5706af8209f346fc4831efe716bc42\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Mar 17 17:43:04.918146 containerd[1471]: time="2025-03-17T17:43:04.918122691Z" level=info msg="RemovePodSandbox \"627d96227fcfb4bac341ed8b4c81f89ade5706af8209f346fc4831efe716bc42\" returns successfully"
Mar 17 17:43:04.918591 containerd[1471]: time="2025-03-17T17:43:04.918547258Z" level=info msg="StopPodSandbox for \"5da8f5a8d97f5fab357ec1437d9519129cc8af2afc0299617c2e98c211099393\""
Mar 17 17:43:04.918745 containerd[1471]: time="2025-03-17T17:43:04.918653132Z" level=info msg="TearDown network for sandbox \"5da8f5a8d97f5fab357ec1437d9519129cc8af2afc0299617c2e98c211099393\" successfully"
Mar 17 17:43:04.918745 containerd[1471]: time="2025-03-17T17:43:04.918662761Z" level=info msg="StopPodSandbox for \"5da8f5a8d97f5fab357ec1437d9519129cc8af2afc0299617c2e98c211099393\" returns successfully"
Mar 17 17:43:04.918962 containerd[1471]: time="2025-03-17T17:43:04.918942518Z" level=info msg="RemovePodSandbox for \"5da8f5a8d97f5fab357ec1437d9519129cc8af2afc0299617c2e98c211099393\""
Mar 17 17:43:04.918962 containerd[1471]: time="2025-03-17T17:43:04.918961365Z" level=info msg="Forcibly stopping sandbox \"5da8f5a8d97f5fab357ec1437d9519129cc8af2afc0299617c2e98c211099393\""
Mar 17 17:43:04.919214 containerd[1471]: time="2025-03-17T17:43:04.919173814Z" level=info msg="TearDown network for sandbox \"5da8f5a8d97f5fab357ec1437d9519129cc8af2afc0299617c2e98c211099393\" successfully"
Mar 17 17:43:04.931633 containerd[1471]: time="2025-03-17T17:43:04.931601888Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"5da8f5a8d97f5fab357ec1437d9519129cc8af2afc0299617c2e98c211099393\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Mar 17 17:43:04.931633 containerd[1471]: time="2025-03-17T17:43:04.931633479Z" level=info msg="RemovePodSandbox \"5da8f5a8d97f5fab357ec1437d9519129cc8af2afc0299617c2e98c211099393\" returns successfully"
Mar 17 17:43:04.931907 containerd[1471]: time="2025-03-17T17:43:04.931877850Z" level=info msg="StopPodSandbox for \"edb2903b482639be597b82a24a152d41c24f51e7acac61283260a1d0100a5d4c\""
Mar 17 17:43:04.931998 containerd[1471]: time="2025-03-17T17:43:04.931975727Z" level=info msg="TearDown network for sandbox \"edb2903b482639be597b82a24a152d41c24f51e7acac61283260a1d0100a5d4c\" successfully"
Mar 17 17:43:04.931998 containerd[1471]: time="2025-03-17T17:43:04.931992179Z" level=info msg="StopPodSandbox for \"edb2903b482639be597b82a24a152d41c24f51e7acac61283260a1d0100a5d4c\" returns successfully"
Mar 17 17:43:04.932244 containerd[1471]: time="2025-03-17T17:43:04.932216471Z" level=info msg="RemovePodSandbox for \"edb2903b482639be597b82a24a152d41c24f51e7acac61283260a1d0100a5d4c\""
Mar 17 17:43:04.932244 containerd[1471]: time="2025-03-17T17:43:04.932236920Z" level=info msg="Forcibly stopping sandbox \"edb2903b482639be597b82a24a152d41c24f51e7acac61283260a1d0100a5d4c\""
Mar 17 17:43:04.932331 containerd[1471]: time="2025-03-17T17:43:04.932300462Z" level=info msg="TearDown network for sandbox \"edb2903b482639be597b82a24a152d41c24f51e7acac61283260a1d0100a5d4c\" successfully"
Mar 17 17:43:04.941925 containerd[1471]: time="2025-03-17T17:43:04.941892159Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"edb2903b482639be597b82a24a152d41c24f51e7acac61283260a1d0100a5d4c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Mar 17 17:43:04.941925 containerd[1471]: time="2025-03-17T17:43:04.941925143Z" level=info msg="RemovePodSandbox \"edb2903b482639be597b82a24a152d41c24f51e7acac61283260a1d0100a5d4c\" returns successfully"
Mar 17 17:43:04.942286 containerd[1471]: time="2025-03-17T17:43:04.942245800Z" level=info msg="StopPodSandbox for \"a86796bc073539410fecfeb3dec78c8c6ed265998c9643aa1e3a054f1b65a9d5\""
Mar 17 17:43:04.942397 containerd[1471]: time="2025-03-17T17:43:04.942375719Z" level=info msg="TearDown network for sandbox \"a86796bc073539410fecfeb3dec78c8c6ed265998c9643aa1e3a054f1b65a9d5\" successfully"
Mar 17 17:43:04.942397 containerd[1471]: time="2025-03-17T17:43:04.942392051Z" level=info msg="StopPodSandbox for \"a86796bc073539410fecfeb3dec78c8c6ed265998c9643aa1e3a054f1b65a9d5\" returns successfully"
Mar 17 17:43:04.942676 containerd[1471]: time="2025-03-17T17:43:04.942650869Z" level=info msg="RemovePodSandbox for \"a86796bc073539410fecfeb3dec78c8c6ed265998c9643aa1e3a054f1b65a9d5\""
Mar 17 17:43:04.942676 containerd[1471]: time="2025-03-17T17:43:04.942673413Z" level=info msg="Forcibly stopping sandbox \"a86796bc073539410fecfeb3dec78c8c6ed265998c9643aa1e3a054f1b65a9d5\""
Mar 17 17:43:04.942783 containerd[1471]: time="2025-03-17T17:43:04.942741473Z" level=info msg="TearDown network for sandbox \"a86796bc073539410fecfeb3dec78c8c6ed265998c9643aa1e3a054f1b65a9d5\" successfully"
Mar 17 17:43:04.954207 containerd[1471]: time="2025-03-17T17:43:04.954173150Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a86796bc073539410fecfeb3dec78c8c6ed265998c9643aa1e3a054f1b65a9d5\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Mar 17 17:43:04.954207 containerd[1471]: time="2025-03-17T17:43:04.954211935Z" level=info msg="RemovePodSandbox \"a86796bc073539410fecfeb3dec78c8c6ed265998c9643aa1e3a054f1b65a9d5\" returns successfully"
Mar 17 17:43:04.954557 containerd[1471]: time="2025-03-17T17:43:04.954481714Z" level=info msg="StopPodSandbox for \"8288c0d6c6206ca6e3bb0341d82c9b6dfe98446714195c494e41c5765c36d080\""
Mar 17 17:43:04.954603 containerd[1471]: time="2025-03-17T17:43:04.954579753Z" level=info msg="TearDown network for sandbox \"8288c0d6c6206ca6e3bb0341d82c9b6dfe98446714195c494e41c5765c36d080\" successfully"
Mar 17 17:43:04.954603 containerd[1471]: time="2025-03-17T17:43:04.954591726Z" level=info msg="StopPodSandbox for \"8288c0d6c6206ca6e3bb0341d82c9b6dfe98446714195c494e41c5765c36d080\" returns successfully"
Mar 17 17:43:04.954824 containerd[1471]: time="2025-03-17T17:43:04.954803964Z" level=info msg="RemovePodSandbox for \"8288c0d6c6206ca6e3bb0341d82c9b6dfe98446714195c494e41c5765c36d080\""
Mar 17 17:43:04.954875 containerd[1471]: time="2025-03-17T17:43:04.954824293Z" level=info msg="Forcibly stopping sandbox \"8288c0d6c6206ca6e3bb0341d82c9b6dfe98446714195c494e41c5765c36d080\""
Mar 17 17:43:04.954914 containerd[1471]: time="2025-03-17T17:43:04.954886914Z" level=info msg="TearDown network for sandbox \"8288c0d6c6206ca6e3bb0341d82c9b6dfe98446714195c494e41c5765c36d080\" successfully"
Mar 17 17:43:04.965665 containerd[1471]: time="2025-03-17T17:43:04.965623193Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"8288c0d6c6206ca6e3bb0341d82c9b6dfe98446714195c494e41c5765c36d080\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Mar 17 17:43:04.965665 containerd[1471]: time="2025-03-17T17:43:04.965662870Z" level=info msg="RemovePodSandbox \"8288c0d6c6206ca6e3bb0341d82c9b6dfe98446714195c494e41c5765c36d080\" returns successfully"
Mar 17 17:43:04.965941 containerd[1471]: time="2025-03-17T17:43:04.965912660Z" level=info msg="StopPodSandbox for \"40e32737773824eef901e2ee58674aeb2ee43a43656886504306789e3ed3a45b\""
Mar 17 17:43:04.966055 containerd[1471]: time="2025-03-17T17:43:04.966016561Z" level=info msg="TearDown network for sandbox \"40e32737773824eef901e2ee58674aeb2ee43a43656886504306789e3ed3a45b\" successfully"
Mar 17 17:43:04.966055 containerd[1471]: time="2025-03-17T17:43:04.966045256Z" level=info msg="StopPodSandbox for \"40e32737773824eef901e2ee58674aeb2ee43a43656886504306789e3ed3a45b\" returns successfully"
Mar 17 17:43:04.966322 containerd[1471]: time="2025-03-17T17:43:04.966287562Z" level=info msg="RemovePodSandbox for \"40e32737773824eef901e2ee58674aeb2ee43a43656886504306789e3ed3a45b\""
Mar 17 17:43:04.966322 containerd[1471]: time="2025-03-17T17:43:04.966309444Z" level=info msg="Forcibly stopping sandbox \"40e32737773824eef901e2ee58674aeb2ee43a43656886504306789e3ed3a45b\""
Mar 17 17:43:04.966416 containerd[1471]: time="2025-03-17T17:43:04.966389368Z" level=info msg="TearDown network for sandbox \"40e32737773824eef901e2ee58674aeb2ee43a43656886504306789e3ed3a45b\" successfully"
Mar 17 17:43:04.978489 containerd[1471]: time="2025-03-17T17:43:04.978447721Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"40e32737773824eef901e2ee58674aeb2ee43a43656886504306789e3ed3a45b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Mar 17 17:43:04.978583 containerd[1471]: time="2025-03-17T17:43:04.978498298Z" level=info msg="RemovePodSandbox \"40e32737773824eef901e2ee58674aeb2ee43a43656886504306789e3ed3a45b\" returns successfully" Mar 17 17:43:04.980197 containerd[1471]: time="2025-03-17T17:43:04.978831670Z" level=info msg="StopPodSandbox for \"cdfcb6e581a338e6cb320c08b196e4660303b9ea6abc9f2af443689a421c1b13\"" Mar 17 17:43:04.980197 containerd[1471]: time="2025-03-17T17:43:04.978915892Z" level=info msg="TearDown network for sandbox \"cdfcb6e581a338e6cb320c08b196e4660303b9ea6abc9f2af443689a421c1b13\" successfully" Mar 17 17:43:04.980197 containerd[1471]: time="2025-03-17T17:43:04.978925269Z" level=info msg="StopPodSandbox for \"cdfcb6e581a338e6cb320c08b196e4660303b9ea6abc9f2af443689a421c1b13\" returns successfully" Mar 17 17:43:04.980197 containerd[1471]: time="2025-03-17T17:43:04.979210769Z" level=info msg="RemovePodSandbox for \"cdfcb6e581a338e6cb320c08b196e4660303b9ea6abc9f2af443689a421c1b13\"" Mar 17 17:43:04.980197 containerd[1471]: time="2025-03-17T17:43:04.979234474Z" level=info msg="Forcibly stopping sandbox \"cdfcb6e581a338e6cb320c08b196e4660303b9ea6abc9f2af443689a421c1b13\"" Mar 17 17:43:04.980197 containerd[1471]: time="2025-03-17T17:43:04.979315320Z" level=info msg="TearDown network for sandbox \"cdfcb6e581a338e6cb320c08b196e4660303b9ea6abc9f2af443689a421c1b13\" successfully" Mar 17 17:43:04.986125 containerd[1471]: time="2025-03-17T17:43:04.986047936Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"cdfcb6e581a338e6cb320c08b196e4660303b9ea6abc9f2af443689a421c1b13\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 17 17:43:04.986196 containerd[1471]: time="2025-03-17T17:43:04.986134853Z" level=info msg="RemovePodSandbox \"cdfcb6e581a338e6cb320c08b196e4660303b9ea6abc9f2af443689a421c1b13\" returns successfully" Mar 17 17:43:04.986530 containerd[1471]: time="2025-03-17T17:43:04.986491840Z" level=info msg="StopPodSandbox for \"b5653076485c36813df8770ad807dddde10fdccc2e47514c36a08e9e25cf99e2\"" Mar 17 17:43:04.986648 containerd[1471]: time="2025-03-17T17:43:04.986619926Z" level=info msg="TearDown network for sandbox \"b5653076485c36813df8770ad807dddde10fdccc2e47514c36a08e9e25cf99e2\" successfully" Mar 17 17:43:04.986648 containerd[1471]: time="2025-03-17T17:43:04.986635186Z" level=info msg="StopPodSandbox for \"b5653076485c36813df8770ad807dddde10fdccc2e47514c36a08e9e25cf99e2\" returns successfully" Mar 17 17:43:04.986922 containerd[1471]: time="2025-03-17T17:43:04.986897000Z" level=info msg="RemovePodSandbox for \"b5653076485c36813df8770ad807dddde10fdccc2e47514c36a08e9e25cf99e2\"" Mar 17 17:43:04.986980 containerd[1471]: time="2025-03-17T17:43:04.986924673Z" level=info msg="Forcibly stopping sandbox \"b5653076485c36813df8770ad807dddde10fdccc2e47514c36a08e9e25cf99e2\"" Mar 17 17:43:04.987076 containerd[1471]: time="2025-03-17T17:43:04.987016990Z" level=info msg="TearDown network for sandbox \"b5653076485c36813df8770ad807dddde10fdccc2e47514c36a08e9e25cf99e2\" successfully" Mar 17 17:43:04.994010 containerd[1471]: time="2025-03-17T17:43:04.993972735Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b5653076485c36813df8770ad807dddde10fdccc2e47514c36a08e9e25cf99e2\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 17 17:43:04.994120 containerd[1471]: time="2025-03-17T17:43:04.994025045Z" level=info msg="RemovePodSandbox \"b5653076485c36813df8770ad807dddde10fdccc2e47514c36a08e9e25cf99e2\" returns successfully" Mar 17 17:43:04.994407 containerd[1471]: time="2025-03-17T17:43:04.994368205Z" level=info msg="StopPodSandbox for \"c8fa0e68e50c704cec6afec4727d369644d559c68bcae3c557439825300e22db\"" Mar 17 17:43:04.994496 containerd[1471]: time="2025-03-17T17:43:04.994451786Z" level=info msg="TearDown network for sandbox \"c8fa0e68e50c704cec6afec4727d369644d559c68bcae3c557439825300e22db\" successfully" Mar 17 17:43:04.994496 containerd[1471]: time="2025-03-17T17:43:04.994462527Z" level=info msg="StopPodSandbox for \"c8fa0e68e50c704cec6afec4727d369644d559c68bcae3c557439825300e22db\" returns successfully" Mar 17 17:43:04.994785 containerd[1471]: time="2025-03-17T17:43:04.994748618Z" level=info msg="RemovePodSandbox for \"c8fa0e68e50c704cec6afec4727d369644d559c68bcae3c557439825300e22db\"" Mar 17 17:43:04.994785 containerd[1471]: time="2025-03-17T17:43:04.994782582Z" level=info msg="Forcibly stopping sandbox \"c8fa0e68e50c704cec6afec4727d369644d559c68bcae3c557439825300e22db\"" Mar 17 17:43:04.995401 containerd[1471]: time="2025-03-17T17:43:04.994878076Z" level=info msg="TearDown network for sandbox \"c8fa0e68e50c704cec6afec4727d369644d559c68bcae3c557439825300e22db\" successfully" Mar 17 17:43:05.002463 containerd[1471]: time="2025-03-17T17:43:05.002428464Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c8fa0e68e50c704cec6afec4727d369644d559c68bcae3c557439825300e22db\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 17 17:43:05.002521 containerd[1471]: time="2025-03-17T17:43:05.002508868Z" level=info msg="RemovePodSandbox \"c8fa0e68e50c704cec6afec4727d369644d559c68bcae3c557439825300e22db\" returns successfully" Mar 17 17:43:05.002802 containerd[1471]: time="2025-03-17T17:43:05.002774590Z" level=info msg="StopPodSandbox for \"ee983380c36cc632ecdbc7f259407df2ede7bbdbeef83c02e8ebc5bf1487f4ae\"" Mar 17 17:43:05.002891 containerd[1471]: time="2025-03-17T17:43:05.002860685Z" level=info msg="TearDown network for sandbox \"ee983380c36cc632ecdbc7f259407df2ede7bbdbeef83c02e8ebc5bf1487f4ae\" successfully" Mar 17 17:43:05.002891 containerd[1471]: time="2025-03-17T17:43:05.002877237Z" level=info msg="StopPodSandbox for \"ee983380c36cc632ecdbc7f259407df2ede7bbdbeef83c02e8ebc5bf1487f4ae\" returns successfully" Mar 17 17:43:05.003170 containerd[1471]: time="2025-03-17T17:43:05.003139672Z" level=info msg="RemovePodSandbox for \"ee983380c36cc632ecdbc7f259407df2ede7bbdbeef83c02e8ebc5bf1487f4ae\"" Mar 17 17:43:05.003170 containerd[1471]: time="2025-03-17T17:43:05.003166413Z" level=info msg="Forcibly stopping sandbox \"ee983380c36cc632ecdbc7f259407df2ede7bbdbeef83c02e8ebc5bf1487f4ae\"" Mar 17 17:43:05.003285 containerd[1471]: time="2025-03-17T17:43:05.003245275Z" level=info msg="TearDown network for sandbox \"ee983380c36cc632ecdbc7f259407df2ede7bbdbeef83c02e8ebc5bf1487f4ae\" successfully" Mar 17 17:43:05.008244 containerd[1471]: time="2025-03-17T17:43:05.008205524Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ee983380c36cc632ecdbc7f259407df2ede7bbdbeef83c02e8ebc5bf1487f4ae\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 17 17:43:05.008314 containerd[1471]: time="2025-03-17T17:43:05.008252796Z" level=info msg="RemovePodSandbox \"ee983380c36cc632ecdbc7f259407df2ede7bbdbeef83c02e8ebc5bf1487f4ae\" returns successfully" Mar 17 17:43:05.008569 containerd[1471]: time="2025-03-17T17:43:05.008515901Z" level=info msg="StopPodSandbox for \"d80cb82167a8ad779a3df81a8f4f06d6e87ee54cc9d974d390962e2bbdc2554c\"" Mar 17 17:43:05.008630 containerd[1471]: time="2025-03-17T17:43:05.008601916Z" level=info msg="TearDown network for sandbox \"d80cb82167a8ad779a3df81a8f4f06d6e87ee54cc9d974d390962e2bbdc2554c\" successfully" Mar 17 17:43:05.008630 containerd[1471]: time="2025-03-17T17:43:05.008611996Z" level=info msg="StopPodSandbox for \"d80cb82167a8ad779a3df81a8f4f06d6e87ee54cc9d974d390962e2bbdc2554c\" returns successfully" Mar 17 17:43:05.010574 containerd[1471]: time="2025-03-17T17:43:05.008815007Z" level=info msg="RemovePodSandbox for \"d80cb82167a8ad779a3df81a8f4f06d6e87ee54cc9d974d390962e2bbdc2554c\"" Mar 17 17:43:05.010574 containerd[1471]: time="2025-03-17T17:43:05.008835697Z" level=info msg="Forcibly stopping sandbox \"d80cb82167a8ad779a3df81a8f4f06d6e87ee54cc9d974d390962e2bbdc2554c\"" Mar 17 17:43:05.010574 containerd[1471]: time="2025-03-17T17:43:05.008899289Z" level=info msg="TearDown network for sandbox \"d80cb82167a8ad779a3df81a8f4f06d6e87ee54cc9d974d390962e2bbdc2554c\" successfully" Mar 17 17:43:05.014164 containerd[1471]: time="2025-03-17T17:43:05.014129497Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d80cb82167a8ad779a3df81a8f4f06d6e87ee54cc9d974d390962e2bbdc2554c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 17 17:43:05.014220 containerd[1471]: time="2025-03-17T17:43:05.014167250Z" level=info msg="RemovePodSandbox \"d80cb82167a8ad779a3df81a8f4f06d6e87ee54cc9d974d390962e2bbdc2554c\" returns successfully" Mar 17 17:43:05.014584 containerd[1471]: time="2025-03-17T17:43:05.014543924Z" level=info msg="StopPodSandbox for \"223d3bb6fe2624ebed37cf69cc759ae6ebc66e042dd9c5d2a70ed1873cb21051\"" Mar 17 17:43:05.014714 containerd[1471]: time="2025-03-17T17:43:05.014672753Z" level=info msg="TearDown network for sandbox \"223d3bb6fe2624ebed37cf69cc759ae6ebc66e042dd9c5d2a70ed1873cb21051\" successfully" Mar 17 17:43:05.014714 containerd[1471]: time="2025-03-17T17:43:05.014686959Z" level=info msg="StopPodSandbox for \"223d3bb6fe2624ebed37cf69cc759ae6ebc66e042dd9c5d2a70ed1873cb21051\" returns successfully" Mar 17 17:43:05.017255 containerd[1471]: time="2025-03-17T17:43:05.014984642Z" level=info msg="RemovePodSandbox for \"223d3bb6fe2624ebed37cf69cc759ae6ebc66e042dd9c5d2a70ed1873cb21051\"" Mar 17 17:43:05.017255 containerd[1471]: time="2025-03-17T17:43:05.015018016Z" level=info msg="Forcibly stopping sandbox \"223d3bb6fe2624ebed37cf69cc759ae6ebc66e042dd9c5d2a70ed1873cb21051\"" Mar 17 17:43:05.017255 containerd[1471]: time="2025-03-17T17:43:05.015149119Z" level=info msg="TearDown network for sandbox \"223d3bb6fe2624ebed37cf69cc759ae6ebc66e042dd9c5d2a70ed1873cb21051\" successfully" Mar 17 17:43:05.020963 containerd[1471]: time="2025-03-17T17:43:05.020915809Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"223d3bb6fe2624ebed37cf69cc759ae6ebc66e042dd9c5d2a70ed1873cb21051\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 17 17:43:05.021019 containerd[1471]: time="2025-03-17T17:43:05.020967908Z" level=info msg="RemovePodSandbox \"223d3bb6fe2624ebed37cf69cc759ae6ebc66e042dd9c5d2a70ed1873cb21051\" returns successfully" Mar 17 17:43:05.021244 containerd[1471]: time="2025-03-17T17:43:05.021219163Z" level=info msg="StopPodSandbox for \"baddfa9dc598678bda0530ca14b75d182f13cd5800496d8e3988bcf6bfeb0533\"" Mar 17 17:43:05.021323 containerd[1471]: time="2025-03-17T17:43:05.021305408Z" level=info msg="TearDown network for sandbox \"baddfa9dc598678bda0530ca14b75d182f13cd5800496d8e3988bcf6bfeb0533\" successfully" Mar 17 17:43:05.021323 containerd[1471]: time="2025-03-17T17:43:05.021319034Z" level=info msg="StopPodSandbox for \"baddfa9dc598678bda0530ca14b75d182f13cd5800496d8e3988bcf6bfeb0533\" returns successfully" Mar 17 17:43:05.021575 containerd[1471]: time="2025-03-17T17:43:05.021534829Z" level=info msg="RemovePodSandbox for \"baddfa9dc598678bda0530ca14b75d182f13cd5800496d8e3988bcf6bfeb0533\"" Mar 17 17:43:05.021575 containerd[1471]: time="2025-03-17T17:43:05.021561540Z" level=info msg="Forcibly stopping sandbox \"baddfa9dc598678bda0530ca14b75d182f13cd5800496d8e3988bcf6bfeb0533\"" Mar 17 17:43:05.021667 containerd[1471]: time="2025-03-17T17:43:05.021632427Z" level=info msg="TearDown network for sandbox \"baddfa9dc598678bda0530ca14b75d182f13cd5800496d8e3988bcf6bfeb0533\" successfully" Mar 17 17:43:05.027416 containerd[1471]: time="2025-03-17T17:43:05.027382365Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"baddfa9dc598678bda0530ca14b75d182f13cd5800496d8e3988bcf6bfeb0533\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 17 17:43:05.027495 containerd[1471]: time="2025-03-17T17:43:05.027427752Z" level=info msg="RemovePodSandbox \"baddfa9dc598678bda0530ca14b75d182f13cd5800496d8e3988bcf6bfeb0533\" returns successfully" Mar 17 17:43:05.027720 containerd[1471]: time="2025-03-17T17:43:05.027670018Z" level=info msg="StopPodSandbox for \"9b28577579cbd2951afd5be0da376c580cb7372b20e0489868dea6e0d6972956\"" Mar 17 17:43:05.027812 containerd[1471]: time="2025-03-17T17:43:05.027768969Z" level=info msg="TearDown network for sandbox \"9b28577579cbd2951afd5be0da376c580cb7372b20e0489868dea6e0d6972956\" successfully" Mar 17 17:43:05.027812 containerd[1471]: time="2025-03-17T17:43:05.027781944Z" level=info msg="StopPodSandbox for \"9b28577579cbd2951afd5be0da376c580cb7372b20e0489868dea6e0d6972956\" returns successfully" Mar 17 17:43:05.028127 containerd[1471]: time="2025-03-17T17:43:05.028097892Z" level=info msg="RemovePodSandbox for \"9b28577579cbd2951afd5be0da376c580cb7372b20e0489868dea6e0d6972956\"" Mar 17 17:43:05.028127 containerd[1471]: time="2025-03-17T17:43:05.028127548Z" level=info msg="Forcibly stopping sandbox \"9b28577579cbd2951afd5be0da376c580cb7372b20e0489868dea6e0d6972956\"" Mar 17 17:43:05.028247 containerd[1471]: time="2025-03-17T17:43:05.028212722Z" level=info msg="TearDown network for sandbox \"9b28577579cbd2951afd5be0da376c580cb7372b20e0489868dea6e0d6972956\" successfully" Mar 17 17:43:05.034763 containerd[1471]: time="2025-03-17T17:43:05.034732841Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"9b28577579cbd2951afd5be0da376c580cb7372b20e0489868dea6e0d6972956\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 17 17:43:05.034832 containerd[1471]: time="2025-03-17T17:43:05.034770204Z" level=info msg="RemovePodSandbox \"9b28577579cbd2951afd5be0da376c580cb7372b20e0489868dea6e0d6972956\" returns successfully" Mar 17 17:43:06.075117 systemd[1]: Started sshd@16-10.0.0.46:22-10.0.0.1:49040.service - OpenSSH per-connection server daemon (10.0.0.1:49040). Mar 17 17:43:06.137098 sshd[6001]: Accepted publickey for core from 10.0.0.1 port 49040 ssh2: RSA SHA256:pvoNHoTmHcKIZ8E4rah4Xh4kuY0L81ONuagOsU2gN/o Mar 17 17:43:06.139009 sshd-session[6001]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:43:06.143299 systemd-logind[1456]: New session 16 of user core. Mar 17 17:43:06.152457 systemd[1]: Started session-16.scope - Session 16 of User core. Mar 17 17:43:06.278671 sshd[6003]: Connection closed by 10.0.0.1 port 49040 Mar 17 17:43:06.279136 sshd-session[6001]: pam_unix(sshd:session): session closed for user core Mar 17 17:43:06.283945 systemd[1]: sshd@16-10.0.0.46:22-10.0.0.1:49040.service: Deactivated successfully. Mar 17 17:43:06.286206 systemd[1]: session-16.scope: Deactivated successfully. Mar 17 17:43:06.286960 systemd-logind[1456]: Session 16 logged out. Waiting for processes to exit. Mar 17 17:43:06.287933 systemd-logind[1456]: Removed session 16. Mar 17 17:43:11.292035 systemd[1]: Started sshd@17-10.0.0.46:22-10.0.0.1:49046.service - OpenSSH per-connection server daemon (10.0.0.1:49046). Mar 17 17:43:11.350713 sshd[6017]: Accepted publickey for core from 10.0.0.1 port 49046 ssh2: RSA SHA256:pvoNHoTmHcKIZ8E4rah4Xh4kuY0L81ONuagOsU2gN/o Mar 17 17:43:11.352296 sshd-session[6017]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:43:11.357108 systemd-logind[1456]: New session 17 of user core. Mar 17 17:43:11.369400 systemd[1]: Started session-17.scope - Session 17 of User core. 
Mar 17 17:43:11.494832 sshd[6019]: Connection closed by 10.0.0.1 port 49046 Mar 17 17:43:11.495295 sshd-session[6017]: pam_unix(sshd:session): session closed for user core Mar 17 17:43:11.499374 systemd[1]: sshd@17-10.0.0.46:22-10.0.0.1:49046.service: Deactivated successfully. Mar 17 17:43:11.501885 systemd[1]: session-17.scope: Deactivated successfully. Mar 17 17:43:11.502802 systemd-logind[1456]: Session 17 logged out. Waiting for processes to exit. Mar 17 17:43:11.503669 systemd-logind[1456]: Removed session 17. Mar 17 17:43:14.859239 kubelet[2574]: I0317 17:43:14.859174 2574 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 17 17:43:14.934972 kubelet[2574]: E0317 17:43:14.934933 2574 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:43:16.511634 systemd[1]: Started sshd@18-10.0.0.46:22-10.0.0.1:47084.service - OpenSSH per-connection server daemon (10.0.0.1:47084). Mar 17 17:43:16.575966 sshd[6036]: Accepted publickey for core from 10.0.0.1 port 47084 ssh2: RSA SHA256:pvoNHoTmHcKIZ8E4rah4Xh4kuY0L81ONuagOsU2gN/o Mar 17 17:43:16.579340 sshd-session[6036]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:43:16.583495 systemd-logind[1456]: New session 18 of user core. Mar 17 17:43:16.593298 systemd[1]: Started session-18.scope - Session 18 of User core. Mar 17 17:43:16.739968 sshd[6038]: Connection closed by 10.0.0.1 port 47084 Mar 17 17:43:16.740477 sshd-session[6036]: pam_unix(sshd:session): session closed for user core Mar 17 17:43:16.751312 systemd[1]: sshd@18-10.0.0.46:22-10.0.0.1:47084.service: Deactivated successfully. Mar 17 17:43:16.753998 systemd[1]: session-18.scope: Deactivated successfully. Mar 17 17:43:16.756036 systemd-logind[1456]: Session 18 logged out. Waiting for processes to exit. 
Mar 17 17:43:16.766817 systemd[1]: Started sshd@19-10.0.0.46:22-10.0.0.1:47100.service - OpenSSH per-connection server daemon (10.0.0.1:47100). Mar 17 17:43:16.768034 systemd-logind[1456]: Removed session 18. Mar 17 17:43:16.810893 sshd[6053]: Accepted publickey for core from 10.0.0.1 port 47100 ssh2: RSA SHA256:pvoNHoTmHcKIZ8E4rah4Xh4kuY0L81ONuagOsU2gN/o Mar 17 17:43:16.812845 sshd-session[6053]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:43:16.817589 systemd-logind[1456]: New session 19 of user core. Mar 17 17:43:16.826281 systemd[1]: Started session-19.scope - Session 19 of User core. Mar 17 17:43:17.354034 sshd[6055]: Connection closed by 10.0.0.1 port 47100 Mar 17 17:43:17.354909 sshd-session[6053]: pam_unix(sshd:session): session closed for user core Mar 17 17:43:17.364821 systemd[1]: sshd@19-10.0.0.46:22-10.0.0.1:47100.service: Deactivated successfully. Mar 17 17:43:17.367279 systemd[1]: session-19.scope: Deactivated successfully. Mar 17 17:43:17.369098 systemd-logind[1456]: Session 19 logged out. Waiting for processes to exit. Mar 17 17:43:17.377597 systemd[1]: Started sshd@20-10.0.0.46:22-10.0.0.1:47114.service - OpenSSH per-connection server daemon (10.0.0.1:47114). Mar 17 17:43:17.379002 systemd-logind[1456]: Removed session 19. Mar 17 17:43:17.423306 sshd[6065]: Accepted publickey for core from 10.0.0.1 port 47114 ssh2: RSA SHA256:pvoNHoTmHcKIZ8E4rah4Xh4kuY0L81ONuagOsU2gN/o Mar 17 17:43:17.425375 sshd-session[6065]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:43:17.430489 systemd-logind[1456]: New session 20 of user core. Mar 17 17:43:17.437279 systemd[1]: Started session-20.scope - Session 20 of User core. 
Mar 17 17:43:19.489218 sshd[6067]: Connection closed by 10.0.0.1 port 47114 Mar 17 17:43:19.491607 sshd-session[6065]: pam_unix(sshd:session): session closed for user core Mar 17 17:43:19.502337 systemd[1]: sshd@20-10.0.0.46:22-10.0.0.1:47114.service: Deactivated successfully. Mar 17 17:43:19.506586 systemd[1]: session-20.scope: Deactivated successfully. Mar 17 17:43:19.508606 systemd-logind[1456]: Session 20 logged out. Waiting for processes to exit. Mar 17 17:43:19.515415 systemd[1]: Started sshd@21-10.0.0.46:22-10.0.0.1:47124.service - OpenSSH per-connection server daemon (10.0.0.1:47124). Mar 17 17:43:19.516398 systemd-logind[1456]: Removed session 20. Mar 17 17:43:19.564178 sshd[6104]: Accepted publickey for core from 10.0.0.1 port 47124 ssh2: RSA SHA256:pvoNHoTmHcKIZ8E4rah4Xh4kuY0L81ONuagOsU2gN/o Mar 17 17:43:19.565965 sshd-session[6104]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:43:19.570621 systemd-logind[1456]: New session 21 of user core. Mar 17 17:43:19.578260 systemd[1]: Started session-21.scope - Session 21 of User core. Mar 17 17:43:19.928320 sshd[6106]: Connection closed by 10.0.0.1 port 47124 Mar 17 17:43:19.930193 sshd-session[6104]: pam_unix(sshd:session): session closed for user core Mar 17 17:43:19.939156 systemd[1]: sshd@21-10.0.0.46:22-10.0.0.1:47124.service: Deactivated successfully. Mar 17 17:43:19.941753 systemd[1]: session-21.scope: Deactivated successfully. Mar 17 17:43:19.944469 systemd-logind[1456]: Session 21 logged out. Waiting for processes to exit. Mar 17 17:43:19.954390 systemd[1]: Started sshd@22-10.0.0.46:22-10.0.0.1:47134.service - OpenSSH per-connection server daemon (10.0.0.1:47134). Mar 17 17:43:19.955170 systemd-logind[1456]: Removed session 21. 
Mar 17 17:43:19.993258 sshd[6117]: Accepted publickey for core from 10.0.0.1 port 47134 ssh2: RSA SHA256:pvoNHoTmHcKIZ8E4rah4Xh4kuY0L81ONuagOsU2gN/o Mar 17 17:43:19.995235 sshd-session[6117]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:43:20.002650 systemd-logind[1456]: New session 22 of user core. Mar 17 17:43:20.010258 systemd[1]: Started session-22.scope - Session 22 of User core. Mar 17 17:43:20.138632 sshd[6119]: Connection closed by 10.0.0.1 port 47134 Mar 17 17:43:20.139083 sshd-session[6117]: pam_unix(sshd:session): session closed for user core Mar 17 17:43:20.143087 systemd[1]: sshd@22-10.0.0.46:22-10.0.0.1:47134.service: Deactivated successfully. Mar 17 17:43:20.145514 systemd[1]: session-22.scope: Deactivated successfully. Mar 17 17:43:20.147363 systemd-logind[1456]: Session 22 logged out. Waiting for processes to exit. Mar 17 17:43:20.148515 systemd-logind[1456]: Removed session 22. Mar 17 17:43:21.935265 kubelet[2574]: E0317 17:43:21.935202 2574 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:43:23.194502 systemd[1]: run-containerd-runc-k8s.io-66ad02f88b09fe6244f085ac8aa08436e7555d52e7a6b7a182f6d919f0b64042-runc.IFPAPY.mount: Deactivated successfully. Mar 17 17:43:25.159434 systemd[1]: Started sshd@23-10.0.0.46:22-10.0.0.1:47142.service - OpenSSH per-connection server daemon (10.0.0.1:47142). Mar 17 17:43:25.202602 sshd[6179]: Accepted publickey for core from 10.0.0.1 port 47142 ssh2: RSA SHA256:pvoNHoTmHcKIZ8E4rah4Xh4kuY0L81ONuagOsU2gN/o Mar 17 17:43:25.204723 sshd-session[6179]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:43:25.209301 systemd-logind[1456]: New session 23 of user core. Mar 17 17:43:25.222304 systemd[1]: Started session-23.scope - Session 23 of User core. 
Mar 17 17:43:25.346257 sshd[6181]: Connection closed by 10.0.0.1 port 47142 Mar 17 17:43:25.346670 sshd-session[6179]: pam_unix(sshd:session): session closed for user core Mar 17 17:43:25.350901 systemd[1]: sshd@23-10.0.0.46:22-10.0.0.1:47142.service: Deactivated successfully. Mar 17 17:43:25.353007 systemd[1]: session-23.scope: Deactivated successfully. Mar 17 17:43:25.353753 systemd-logind[1456]: Session 23 logged out. Waiting for processes to exit. Mar 17 17:43:25.354711 systemd-logind[1456]: Removed session 23. Mar 17 17:43:30.358492 systemd[1]: Started sshd@24-10.0.0.46:22-10.0.0.1:50256.service - OpenSSH per-connection server daemon (10.0.0.1:50256). Mar 17 17:43:30.401313 sshd[6197]: Accepted publickey for core from 10.0.0.1 port 50256 ssh2: RSA SHA256:pvoNHoTmHcKIZ8E4rah4Xh4kuY0L81ONuagOsU2gN/o Mar 17 17:43:30.403243 sshd-session[6197]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:43:30.407539 systemd-logind[1456]: New session 24 of user core. Mar 17 17:43:30.419254 systemd[1]: Started session-24.scope - Session 24 of User core. Mar 17 17:43:30.527664 sshd[6199]: Connection closed by 10.0.0.1 port 50256 Mar 17 17:43:30.528046 sshd-session[6197]: pam_unix(sshd:session): session closed for user core Mar 17 17:43:30.531877 systemd[1]: sshd@24-10.0.0.46:22-10.0.0.1:50256.service: Deactivated successfully. Mar 17 17:43:30.534061 systemd[1]: session-24.scope: Deactivated successfully. Mar 17 17:43:30.534805 systemd-logind[1456]: Session 24 logged out. Waiting for processes to exit. Mar 17 17:43:30.535777 systemd-logind[1456]: Removed session 24. Mar 17 17:43:35.543103 systemd[1]: Started sshd@25-10.0.0.46:22-10.0.0.1:50258.service - OpenSSH per-connection server daemon (10.0.0.1:50258). 
Mar 17 17:43:35.584603 sshd[6211]: Accepted publickey for core from 10.0.0.1 port 50258 ssh2: RSA SHA256:pvoNHoTmHcKIZ8E4rah4Xh4kuY0L81ONuagOsU2gN/o Mar 17 17:43:35.586173 sshd-session[6211]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:43:35.590060 systemd-logind[1456]: New session 25 of user core. Mar 17 17:43:35.595347 systemd[1]: Started session-25.scope - Session 25 of User core. Mar 17 17:43:35.706211 sshd[6213]: Connection closed by 10.0.0.1 port 50258 Mar 17 17:43:35.706604 sshd-session[6211]: pam_unix(sshd:session): session closed for user core Mar 17 17:43:35.711110 systemd[1]: sshd@25-10.0.0.46:22-10.0.0.1:50258.service: Deactivated successfully. Mar 17 17:43:35.713407 systemd[1]: session-25.scope: Deactivated successfully. Mar 17 17:43:35.714051 systemd-logind[1456]: Session 25 logged out. Waiting for processes to exit. Mar 17 17:43:35.715013 systemd-logind[1456]: Removed session 25. Mar 17 17:43:36.934672 kubelet[2574]: E0317 17:43:36.934620 2574 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:43:39.935315 kubelet[2574]: E0317 17:43:39.935258 2574 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:43:40.718545 systemd[1]: Started sshd@26-10.0.0.46:22-10.0.0.1:34994.service - OpenSSH per-connection server daemon (10.0.0.1:34994). Mar 17 17:43:40.761337 sshd[6227]: Accepted publickey for core from 10.0.0.1 port 34994 ssh2: RSA SHA256:pvoNHoTmHcKIZ8E4rah4Xh4kuY0L81ONuagOsU2gN/o Mar 17 17:43:40.763129 sshd-session[6227]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:43:40.767653 systemd-logind[1456]: New session 26 of user core. Mar 17 17:43:40.774254 systemd[1]: Started session-26.scope - Session 26 of User core. 
Mar 17 17:43:40.883904 sshd[6229]: Connection closed by 10.0.0.1 port 34994 Mar 17 17:43:40.884333 sshd-session[6227]: pam_unix(sshd:session): session closed for user core Mar 17 17:43:40.888972 systemd[1]: sshd@26-10.0.0.46:22-10.0.0.1:34994.service: Deactivated successfully. Mar 17 17:43:40.892034 systemd[1]: session-26.scope: Deactivated successfully. Mar 17 17:43:40.892875 systemd-logind[1456]: Session 26 logged out. Waiting for processes to exit. Mar 17 17:43:40.893937 systemd-logind[1456]: Removed session 26.