Mar 17 17:51:57.215743 kernel: Linux version 6.6.83-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p1) 13.3.1 20240614, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT_DYNAMIC Mon Mar 17 16:07:40 -00 2025 Mar 17 17:51:57.215764 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=d4b838cd9a6f58e8c4a6b615c32b0b28ee0df1660e34033a8fbd0429c6de5fd0 Mar 17 17:51:57.215775 kernel: BIOS-provided physical RAM map: Mar 17 17:51:57.215781 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable Mar 17 17:51:57.215787 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved Mar 17 17:51:57.215793 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved Mar 17 17:51:57.215800 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable Mar 17 17:51:57.215806 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved Mar 17 17:51:57.215812 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved Mar 17 17:51:57.215820 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved Mar 17 17:51:57.215826 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Mar 17 17:51:57.215832 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved Mar 17 17:51:57.215838 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved Mar 17 17:51:57.215844 kernel: NX (Execute Disable) protection: active Mar 17 17:51:57.215851 kernel: APIC: Static calls initialized Mar 17 17:51:57.215865 kernel: SMBIOS 2.8 present. 
Mar 17 17:51:57.215872 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014 Mar 17 17:51:57.215878 kernel: Hypervisor detected: KVM Mar 17 17:51:57.215885 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Mar 17 17:51:57.215891 kernel: kvm-clock: using sched offset of 2958466696 cycles Mar 17 17:51:57.215898 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Mar 17 17:51:57.215905 kernel: tsc: Detected 2794.750 MHz processor Mar 17 17:51:57.215912 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Mar 17 17:51:57.215919 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Mar 17 17:51:57.215926 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000 Mar 17 17:51:57.215935 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs Mar 17 17:51:57.215942 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Mar 17 17:51:57.215949 kernel: Using GB pages for direct mapping Mar 17 17:51:57.215955 kernel: ACPI: Early table checksum verification disabled Mar 17 17:51:57.215962 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS ) Mar 17 17:51:57.215969 kernel: ACPI: RSDT 0x000000009CFE2408 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Mar 17 17:51:57.215975 kernel: ACPI: FACP 0x000000009CFE21E8 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) Mar 17 17:51:57.215982 kernel: ACPI: DSDT 0x000000009CFE0040 0021A8 (v01 BOCHS BXPC 00000001 BXPC 00000001) Mar 17 17:51:57.215991 kernel: ACPI: FACS 0x000000009CFE0000 000040 Mar 17 17:51:57.215997 kernel: ACPI: APIC 0x000000009CFE22DC 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001) Mar 17 17:51:57.216004 kernel: ACPI: HPET 0x000000009CFE236C 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Mar 17 17:51:57.216010 kernel: ACPI: MCFG 0x000000009CFE23A4 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Mar 17 17:51:57.216017 kernel: ACPI: WAET 0x000000009CFE23E0 000028 (v01 BOCHS 
BXPC 00000001 BXPC 00000001) Mar 17 17:51:57.216023 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21e8-0x9cfe22db] Mar 17 17:51:57.216030 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21e7] Mar 17 17:51:57.216040 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f] Mar 17 17:51:57.216059 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22dc-0x9cfe236b] Mar 17 17:51:57.216067 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe236c-0x9cfe23a3] Mar 17 17:51:57.216074 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23a4-0x9cfe23df] Mar 17 17:51:57.216081 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23e0-0x9cfe2407] Mar 17 17:51:57.216088 kernel: No NUMA configuration found Mar 17 17:51:57.216095 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff] Mar 17 17:51:57.216109 kernel: NODE_DATA(0) allocated [mem 0x9cfd6000-0x9cfdbfff] Mar 17 17:51:57.216127 kernel: Zone ranges: Mar 17 17:51:57.216136 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Mar 17 17:51:57.216152 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff] Mar 17 17:51:57.216168 kernel: Normal empty Mar 17 17:51:57.216183 kernel: Movable zone start for each node Mar 17 17:51:57.216196 kernel: Early memory node ranges Mar 17 17:51:57.216204 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] Mar 17 17:51:57.216218 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff] Mar 17 17:51:57.216239 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff] Mar 17 17:51:57.216257 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Mar 17 17:51:57.216264 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Mar 17 17:51:57.216271 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges Mar 17 17:51:57.216292 kernel: ACPI: PM-Timer IO Port: 0x608 Mar 17 17:51:57.216299 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Mar 17 17:51:57.216306 kernel: 
IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Mar 17 17:51:57.216313 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Mar 17 17:51:57.216320 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Mar 17 17:51:57.216327 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Mar 17 17:51:57.216336 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Mar 17 17:51:57.216343 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Mar 17 17:51:57.216350 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Mar 17 17:51:57.216356 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Mar 17 17:51:57.216363 kernel: TSC deadline timer available Mar 17 17:51:57.216370 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs Mar 17 17:51:57.216377 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Mar 17 17:51:57.216384 kernel: kvm-guest: KVM setup pv remote TLB flush Mar 17 17:51:57.216393 kernel: kvm-guest: setup PV sched yield Mar 17 17:51:57.216419 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices Mar 17 17:51:57.216427 kernel: Booting paravirtualized kernel on KVM Mar 17 17:51:57.216434 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Mar 17 17:51:57.216441 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1 Mar 17 17:51:57.216448 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u524288 Mar 17 17:51:57.216455 kernel: pcpu-alloc: s197032 r8192 d32344 u524288 alloc=1*2097152 Mar 17 17:51:57.216461 kernel: pcpu-alloc: [0] 0 1 2 3 Mar 17 17:51:57.216468 kernel: kvm-guest: PV spinlocks enabled Mar 17 17:51:57.216475 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Mar 17 17:51:57.216486 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr 
verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=d4b838cd9a6f58e8c4a6b615c32b0b28ee0df1660e34033a8fbd0429c6de5fd0 Mar 17 17:51:57.216493 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Mar 17 17:51:57.216500 kernel: random: crng init done Mar 17 17:51:57.216507 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Mar 17 17:51:57.216514 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Mar 17 17:51:57.216520 kernel: Fallback order for Node 0: 0 Mar 17 17:51:57.216527 kernel: Built 1 zonelists, mobility grouping on. Total pages: 632732 Mar 17 17:51:57.216534 kernel: Policy zone: DMA32 Mar 17 17:51:57.216541 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Mar 17 17:51:57.216550 kernel: Memory: 2434592K/2571752K available (12288K kernel code, 2303K rwdata, 22744K rodata, 42992K init, 2196K bss, 136900K reserved, 0K cma-reserved) Mar 17 17:51:57.216565 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Mar 17 17:51:57.216571 kernel: ftrace: allocating 37938 entries in 149 pages Mar 17 17:51:57.216578 kernel: ftrace: allocated 149 pages with 4 groups Mar 17 17:51:57.216585 kernel: Dynamic Preempt: voluntary Mar 17 17:51:57.216592 kernel: rcu: Preemptible hierarchical RCU implementation. Mar 17 17:51:57.216604 kernel: rcu: RCU event tracing is enabled. Mar 17 17:51:57.216611 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Mar 17 17:51:57.216618 kernel: Trampoline variant of Tasks RCU enabled. Mar 17 17:51:57.216628 kernel: Rude variant of Tasks RCU enabled. Mar 17 17:51:57.216634 kernel: Tracing variant of Tasks RCU enabled. Mar 17 17:51:57.216641 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
Mar 17 17:51:57.216650 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Mar 17 17:51:57.216665 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16 Mar 17 17:51:57.216677 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Mar 17 17:51:57.216686 kernel: Console: colour VGA+ 80x25 Mar 17 17:51:57.216696 kernel: printk: console [ttyS0] enabled Mar 17 17:51:57.216705 kernel: ACPI: Core revision 20230628 Mar 17 17:51:57.216716 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Mar 17 17:51:57.216723 kernel: APIC: Switch to symmetric I/O mode setup Mar 17 17:51:57.216730 kernel: x2apic enabled Mar 17 17:51:57.216737 kernel: APIC: Switched APIC routing to: physical x2apic Mar 17 17:51:57.216744 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask() Mar 17 17:51:57.216751 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself() Mar 17 17:51:57.216758 kernel: kvm-guest: setup PV IPIs Mar 17 17:51:57.216773 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Mar 17 17:51:57.216780 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized Mar 17 17:51:57.216788 kernel: Calibrating delay loop (skipped) preset value.. 
5589.50 BogoMIPS (lpj=2794750) Mar 17 17:51:57.216795 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Mar 17 17:51:57.216802 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 Mar 17 17:51:57.216811 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 Mar 17 17:51:57.216818 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Mar 17 17:51:57.216825 kernel: Spectre V2 : Mitigation: Retpolines Mar 17 17:51:57.216832 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Mar 17 17:51:57.216842 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Mar 17 17:51:57.216849 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls Mar 17 17:51:57.216860 kernel: RETBleed: Mitigation: untrained return thunk Mar 17 17:51:57.216868 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Mar 17 17:51:57.216875 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Mar 17 17:51:57.216882 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied! Mar 17 17:51:57.216890 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options. Mar 17 17:51:57.216897 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode Mar 17 17:51:57.216904 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Mar 17 17:51:57.216914 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Mar 17 17:51:57.216921 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Mar 17 17:51:57.216928 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Mar 17 17:51:57.216935 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format. 
Mar 17 17:51:57.216942 kernel: Freeing SMP alternatives memory: 32K Mar 17 17:51:57.216949 kernel: pid_max: default: 32768 minimum: 301 Mar 17 17:51:57.216956 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Mar 17 17:51:57.216963 kernel: landlock: Up and running. Mar 17 17:51:57.216970 kernel: SELinux: Initializing. Mar 17 17:51:57.216980 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Mar 17 17:51:57.216987 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Mar 17 17:51:57.216994 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0) Mar 17 17:51:57.217001 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Mar 17 17:51:57.217009 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Mar 17 17:51:57.217016 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Mar 17 17:51:57.217023 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver. Mar 17 17:51:57.217030 kernel: ... version: 0 Mar 17 17:51:57.217039 kernel: ... bit width: 48 Mar 17 17:51:57.217046 kernel: ... generic registers: 6 Mar 17 17:51:57.217054 kernel: ... value mask: 0000ffffffffffff Mar 17 17:51:57.217061 kernel: ... max period: 00007fffffffffff Mar 17 17:51:57.217068 kernel: ... fixed-purpose events: 0 Mar 17 17:51:57.217075 kernel: ... event mask: 000000000000003f Mar 17 17:51:57.217082 kernel: signal: max sigframe size: 1776 Mar 17 17:51:57.217089 kernel: rcu: Hierarchical SRCU implementation. Mar 17 17:51:57.217096 kernel: rcu: Max phase no-delay instances is 400. Mar 17 17:51:57.217103 kernel: smp: Bringing up secondary CPUs ... Mar 17 17:51:57.217113 kernel: smpboot: x86: Booting SMP configuration: Mar 17 17:51:57.217120 kernel: .... 
node #0, CPUs: #1 #2 #3 Mar 17 17:51:57.217127 kernel: smp: Brought up 1 node, 4 CPUs Mar 17 17:51:57.217134 kernel: smpboot: Max logical packages: 1 Mar 17 17:51:57.217141 kernel: smpboot: Total of 4 processors activated (22358.00 BogoMIPS) Mar 17 17:51:57.217148 kernel: devtmpfs: initialized Mar 17 17:51:57.217155 kernel: x86/mm: Memory block size: 128MB Mar 17 17:51:57.217162 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Mar 17 17:51:57.217169 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Mar 17 17:51:57.217179 kernel: pinctrl core: initialized pinctrl subsystem Mar 17 17:51:57.217186 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Mar 17 17:51:57.217193 kernel: audit: initializing netlink subsys (disabled) Mar 17 17:51:57.217200 kernel: audit: type=2000 audit(1742233915.585:1): state=initialized audit_enabled=0 res=1 Mar 17 17:51:57.217207 kernel: thermal_sys: Registered thermal governor 'step_wise' Mar 17 17:51:57.217214 kernel: thermal_sys: Registered thermal governor 'user_space' Mar 17 17:51:57.217221 kernel: cpuidle: using governor menu Mar 17 17:51:57.217228 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Mar 17 17:51:57.217235 kernel: dca service started, version 1.12.1 Mar 17 17:51:57.217245 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000) Mar 17 17:51:57.217252 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry Mar 17 17:51:57.217259 kernel: PCI: Using configuration type 1 for base access Mar 17 17:51:57.217266 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Mar 17 17:51:57.217273 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Mar 17 17:51:57.217280 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Mar 17 17:51:57.217288 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Mar 17 17:51:57.217295 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Mar 17 17:51:57.217302 kernel: ACPI: Added _OSI(Module Device) Mar 17 17:51:57.217311 kernel: ACPI: Added _OSI(Processor Device) Mar 17 17:51:57.217319 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Mar 17 17:51:57.217326 kernel: ACPI: Added _OSI(Processor Aggregator Device) Mar 17 17:51:57.217333 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Mar 17 17:51:57.217340 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Mar 17 17:51:57.217347 kernel: ACPI: Interpreter enabled Mar 17 17:51:57.217354 kernel: ACPI: PM: (supports S0 S3 S5) Mar 17 17:51:57.217361 kernel: ACPI: Using IOAPIC for interrupt routing Mar 17 17:51:57.217368 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Mar 17 17:51:57.217378 kernel: PCI: Using E820 reservations for host bridge windows Mar 17 17:51:57.217385 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Mar 17 17:51:57.217392 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Mar 17 17:51:57.217615 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Mar 17 17:51:57.217748 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR] Mar 17 17:51:57.217868 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability] Mar 17 17:51:57.217878 kernel: PCI host bridge to bus 0000:00 Mar 17 17:51:57.218013 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Mar 17 17:51:57.218123 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Mar 17 17:51:57.218231 kernel: pci_bus 0000:00: root bus resource [mem 
0x000a0000-0x000bffff window] Mar 17 17:51:57.218338 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window] Mar 17 17:51:57.218460 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Mar 17 17:51:57.218578 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window] Mar 17 17:51:57.218768 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Mar 17 17:51:57.219000 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 Mar 17 17:51:57.219214 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 Mar 17 17:51:57.219339 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref] Mar 17 17:51:57.219479 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff] Mar 17 17:51:57.219608 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref] Mar 17 17:51:57.219727 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Mar 17 17:51:57.219864 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 Mar 17 17:51:57.219997 kernel: pci 0000:00:02.0: reg 0x10: [io 0xc0c0-0xc0df] Mar 17 17:51:57.220117 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff] Mar 17 17:51:57.220234 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref] Mar 17 17:51:57.220372 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 Mar 17 17:51:57.220545 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc000-0xc07f] Mar 17 17:51:57.220718 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff] Mar 17 17:51:57.220842 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref] Mar 17 17:51:57.220977 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 Mar 17 17:51:57.221098 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc0e0-0xc0ff] Mar 17 17:51:57.221216 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebd3000-0xfebd3fff] Mar 17 17:51:57.221342 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe008000-0xfe00bfff 64bit 
pref] Mar 17 17:51:57.221478 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref] Mar 17 17:51:57.221648 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 Mar 17 17:51:57.221803 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Mar 17 17:51:57.221939 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 Mar 17 17:51:57.222060 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc100-0xc11f] Mar 17 17:51:57.222198 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfebd4000-0xfebd4fff] Mar 17 17:51:57.222343 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 Mar 17 17:51:57.222532 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f] Mar 17 17:51:57.222545 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Mar 17 17:51:57.222567 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Mar 17 17:51:57.222574 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Mar 17 17:51:57.222581 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Mar 17 17:51:57.222588 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Mar 17 17:51:57.222596 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Mar 17 17:51:57.222603 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Mar 17 17:51:57.222610 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 Mar 17 17:51:57.222618 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Mar 17 17:51:57.222625 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 Mar 17 17:51:57.222635 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 Mar 17 17:51:57.222642 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 Mar 17 17:51:57.222649 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Mar 17 17:51:57.222656 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Mar 17 17:51:57.222663 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Mar 17 17:51:57.222670 kernel: ACPI: PCI: 
Interrupt link GSIH configured for IRQ 23 Mar 17 17:51:57.222677 kernel: iommu: Default domain type: Translated Mar 17 17:51:57.222684 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Mar 17 17:51:57.222691 kernel: PCI: Using ACPI for IRQ routing Mar 17 17:51:57.222701 kernel: PCI: pci_cache_line_size set to 64 bytes Mar 17 17:51:57.222708 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] Mar 17 17:51:57.222715 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff] Mar 17 17:51:57.222835 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device Mar 17 17:51:57.223006 kernel: pci 0000:00:01.0: vgaarb: bridge control possible Mar 17 17:51:57.223152 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Mar 17 17:51:57.223167 kernel: vgaarb: loaded Mar 17 17:51:57.223178 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Mar 17 17:51:57.223195 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Mar 17 17:51:57.223207 kernel: clocksource: Switched to clocksource kvm-clock Mar 17 17:51:57.223216 kernel: VFS: Disk quotas dquot_6.6.0 Mar 17 17:51:57.223225 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Mar 17 17:51:57.223234 kernel: pnp: PnP ACPI init Mar 17 17:51:57.223460 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved Mar 17 17:51:57.223477 kernel: pnp: PnP ACPI: found 6 devices Mar 17 17:51:57.223487 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Mar 17 17:51:57.223501 kernel: NET: Registered PF_INET protocol family Mar 17 17:51:57.223511 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Mar 17 17:51:57.223522 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Mar 17 17:51:57.223532 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Mar 17 17:51:57.223543 kernel: TCP established hash table entries: 32768 
(order: 6, 262144 bytes, linear) Mar 17 17:51:57.223553 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Mar 17 17:51:57.223571 kernel: TCP: Hash tables configured (established 32768 bind 32768) Mar 17 17:51:57.223581 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Mar 17 17:51:57.223592 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Mar 17 17:51:57.223605 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Mar 17 17:51:57.223616 kernel: NET: Registered PF_XDP protocol family Mar 17 17:51:57.223754 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Mar 17 17:51:57.223870 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Mar 17 17:51:57.223978 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Mar 17 17:51:57.224087 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window] Mar 17 17:51:57.224196 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window] Mar 17 17:51:57.224315 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window] Mar 17 17:51:57.224330 kernel: PCI: CLS 0 bytes, default 64 Mar 17 17:51:57.224337 kernel: Initialise system trusted keyrings Mar 17 17:51:57.224345 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Mar 17 17:51:57.224352 kernel: Key type asymmetric registered Mar 17 17:51:57.224359 kernel: Asymmetric key parser 'x509' registered Mar 17 17:51:57.224366 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Mar 17 17:51:57.224373 kernel: io scheduler mq-deadline registered Mar 17 17:51:57.224381 kernel: io scheduler kyber registered Mar 17 17:51:57.224388 kernel: io scheduler bfq registered Mar 17 17:51:57.224397 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Mar 17 17:51:57.224465 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Mar 17 17:51:57.224472 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Mar 17 17:51:57.224480 kernel: ACPI: 
\_SB_.GSIE: Enabled at IRQ 20 Mar 17 17:51:57.224487 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Mar 17 17:51:57.224494 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Mar 17 17:51:57.224502 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Mar 17 17:51:57.224509 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Mar 17 17:51:57.224516 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Mar 17 17:51:57.224656 kernel: rtc_cmos 00:04: RTC can wake from S4 Mar 17 17:51:57.224667 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Mar 17 17:51:57.224776 kernel: rtc_cmos 00:04: registered as rtc0 Mar 17 17:51:57.224885 kernel: rtc_cmos 00:04: setting system clock to 2025-03-17T17:51:56 UTC (1742233916) Mar 17 17:51:57.224995 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs Mar 17 17:51:57.225004 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled Mar 17 17:51:57.225011 kernel: NET: Registered PF_INET6 protocol family Mar 17 17:51:57.225018 kernel: Segment Routing with IPv6 Mar 17 17:51:57.225029 kernel: In-situ OAM (IOAM) with IPv6 Mar 17 17:51:57.225036 kernel: NET: Registered PF_PACKET protocol family Mar 17 17:51:57.225043 kernel: Key type dns_resolver registered Mar 17 17:51:57.225050 kernel: IPI shorthand broadcast: enabled Mar 17 17:51:57.225058 kernel: sched_clock: Marking stable (667003652, 397854261)->(1626318657, -561460744) Mar 17 17:51:57.225065 kernel: registered taskstats version 1 Mar 17 17:51:57.225072 kernel: Loading compiled-in X.509 certificates Mar 17 17:51:57.225079 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.83-flatcar: 608fb88224bc0ea76afefc598557abb0413f36c0' Mar 17 17:51:57.225086 kernel: Key type .fscrypt registered Mar 17 17:51:57.225096 kernel: Key type fscrypt-provisioning registered Mar 17 17:51:57.225103 kernel: ima: No TPM chip found, activating 
TPM-bypass! Mar 17 17:51:57.225110 kernel: ima: Allocated hash algorithm: sha1 Mar 17 17:51:57.225117 kernel: ima: No architecture policies found Mar 17 17:51:57.225124 kernel: clk: Disabling unused clocks Mar 17 17:51:57.225131 kernel: Freeing unused kernel image (initmem) memory: 42992K Mar 17 17:51:57.225139 kernel: Write protecting the kernel read-only data: 36864k Mar 17 17:51:57.225146 kernel: Freeing unused kernel image (rodata/data gap) memory: 1832K Mar 17 17:51:57.225153 kernel: Run /init as init process Mar 17 17:51:57.225162 kernel: with arguments: Mar 17 17:51:57.225169 kernel: /init Mar 17 17:51:57.225176 kernel: with environment: Mar 17 17:51:57.225183 kernel: HOME=/ Mar 17 17:51:57.225191 kernel: TERM=linux Mar 17 17:51:57.225198 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Mar 17 17:51:57.225208 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Mar 17 17:51:57.225217 systemd[1]: Detected virtualization kvm. Mar 17 17:51:57.225227 systemd[1]: Detected architecture x86-64. Mar 17 17:51:57.225235 systemd[1]: Running in initrd. Mar 17 17:51:57.225242 systemd[1]: No hostname configured, using default hostname. Mar 17 17:51:57.225250 systemd[1]: Hostname set to . Mar 17 17:51:57.225258 systemd[1]: Initializing machine ID from VM UUID. Mar 17 17:51:57.225265 systemd[1]: Queued start job for default target initrd.target. Mar 17 17:51:57.225273 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Mar 17 17:51:57.225281 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. 
Mar 17 17:51:57.225292 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Mar 17 17:51:57.225311 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Mar 17 17:51:57.225321 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Mar 17 17:51:57.225329 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Mar 17 17:51:57.225339 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Mar 17 17:51:57.225349 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Mar 17 17:51:57.225357 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Mar 17 17:51:57.225365 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Mar 17 17:51:57.225373 systemd[1]: Reached target paths.target - Path Units. Mar 17 17:51:57.225381 systemd[1]: Reached target slices.target - Slice Units. Mar 17 17:51:57.225389 systemd[1]: Reached target swap.target - Swaps. Mar 17 17:51:57.225397 systemd[1]: Reached target timers.target - Timer Units. Mar 17 17:51:57.225417 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Mar 17 17:51:57.225428 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Mar 17 17:51:57.225436 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Mar 17 17:51:57.225444 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Mar 17 17:51:57.225451 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Mar 17 17:51:57.225463 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Mar 17 17:51:57.225478 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. 
Mar 17 17:51:57.225491 systemd[1]: Reached target sockets.target - Socket Units.
Mar 17 17:51:57.225500 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Mar 17 17:51:57.225508 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Mar 17 17:51:57.225520 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Mar 17 17:51:57.225528 systemd[1]: Starting systemd-fsck-usr.service...
Mar 17 17:51:57.225536 systemd[1]: Starting systemd-journald.service - Journal Service...
Mar 17 17:51:57.225544 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Mar 17 17:51:57.225552 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 17 17:51:57.225568 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Mar 17 17:51:57.225576 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 17 17:51:57.225584 systemd[1]: Finished systemd-fsck-usr.service.
Mar 17 17:51:57.225613 systemd-journald[194]: Collecting audit messages is disabled.
Mar 17 17:51:57.225634 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Mar 17 17:51:57.225645 systemd-journald[194]: Journal started
Mar 17 17:51:57.225665 systemd-journald[194]: Runtime Journal (/run/log/journal/60590e952a364951b421d19e24dda72f) is 6.0M, max 48.4M, 42.3M free.
Mar 17 17:51:57.219229 systemd-modules-load[195]: Inserted module 'overlay'
Mar 17 17:51:57.256152 systemd[1]: Started systemd-journald.service - Journal Service.
Mar 17 17:51:57.256184 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Mar 17 17:51:57.256201 kernel: Bridge firewalling registered
Mar 17 17:51:57.251651 systemd-modules-load[195]: Inserted module 'br_netfilter'
Mar 17 17:51:57.258389 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Mar 17 17:51:57.259430 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 17 17:51:57.274037 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 17 17:51:57.276078 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 17 17:51:57.277004 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Mar 17 17:51:57.288887 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Mar 17 17:51:57.290249 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Mar 17 17:51:57.293821 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 17 17:51:57.296526 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 17 17:51:57.299042 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Mar 17 17:51:57.304958 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 17 17:51:57.307783 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 17 17:51:57.322383 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Mar 17 17:51:57.339646 dracut-cmdline[231]: dracut-dracut-053
Mar 17 17:51:57.342906 dracut-cmdline[231]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=d4b838cd9a6f58e8c4a6b615c32b0b28ee0df1660e34033a8fbd0429c6de5fd0
Mar 17 17:51:57.349279 systemd-resolved[224]: Positive Trust Anchors:
Mar 17 17:51:57.349299 systemd-resolved[224]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Mar 17 17:51:57.349330 systemd-resolved[224]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Mar 17 17:51:57.352299 systemd-resolved[224]: Defaulting to hostname 'linux'.
Mar 17 17:51:57.353520 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Mar 17 17:51:57.359099 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Mar 17 17:51:57.437462 kernel: SCSI subsystem initialized
Mar 17 17:51:57.450451 kernel: Loading iSCSI transport class v2.0-870.
Mar 17 17:51:57.464459 kernel: iscsi: registered transport (tcp)
Mar 17 17:51:57.485519 kernel: iscsi: registered transport (qla4xxx)
Mar 17 17:51:57.485588 kernel: QLogic iSCSI HBA Driver
Mar 17 17:51:57.538079 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Mar 17 17:51:57.546590 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Mar 17 17:51:57.573533 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Mar 17 17:51:57.573606 kernel: device-mapper: uevent: version 1.0.3
Mar 17 17:51:57.574652 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Mar 17 17:51:57.618446 kernel: raid6: avx2x4 gen() 25315 MB/s
Mar 17 17:51:57.635429 kernel: raid6: avx2x2 gen() 28467 MB/s
Mar 17 17:51:57.652802 kernel: raid6: avx2x1 gen() 18476 MB/s
Mar 17 17:51:57.652829 kernel: raid6: using algorithm avx2x2 gen() 28467 MB/s
Mar 17 17:51:57.670675 kernel: raid6: .... xor() 17669 MB/s, rmw enabled
Mar 17 17:51:57.670703 kernel: raid6: using avx2x2 recovery algorithm
Mar 17 17:51:57.691429 kernel: xor: automatically using best checksumming function avx
Mar 17 17:51:57.853449 kernel: Btrfs loaded, zoned=no, fsverity=no
Mar 17 17:51:57.869193 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Mar 17 17:51:57.881742 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 17 17:51:57.894250 systemd-udevd[413]: Using default interface naming scheme 'v255'.
Mar 17 17:51:57.898857 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 17 17:51:57.912663 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Mar 17 17:51:57.928728 dracut-pre-trigger[422]: rd.md=0: removing MD RAID activation
Mar 17 17:51:57.963162 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Mar 17 17:51:57.976587 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Mar 17 17:51:58.045033 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 17 17:51:58.056657 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Mar 17 17:51:58.074523 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Mar 17 17:51:58.100719 kernel: cryptd: max_cpu_qlen set to 1000
Mar 17 17:51:58.100739 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Mar 17 17:51:58.100924 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Mar 17 17:51:58.100939 kernel: GPT:9289727 != 19775487
Mar 17 17:51:58.100952 kernel: GPT:Alternate GPT header not at the end of the disk.
Mar 17 17:51:58.100964 kernel: GPT:9289727 != 19775487
Mar 17 17:51:58.100988 kernel: GPT: Use GNU Parted to correct GPT errors.
Mar 17 17:51:58.101001 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Mar 17 17:51:58.078225 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Mar 17 17:51:58.080460 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Mar 17 17:51:58.081962 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 17 17:51:58.083436 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Mar 17 17:51:58.091615 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Mar 17 17:51:58.102755 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Mar 17 17:51:58.113496 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Mar 17 17:51:58.113636 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 17 17:51:58.118601 kernel: libata version 3.00 loaded.
Mar 17 17:51:58.119305 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 17 17:51:58.128226 kernel: ahci 0000:00:1f.2: version 3.0
Mar 17 17:51:58.144813 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Mar 17 17:51:58.144839 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
Mar 17 17:51:58.145032 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Mar 17 17:51:58.145184 kernel: scsi host0: ahci
Mar 17 17:51:58.145358 kernel: scsi host1: ahci
Mar 17 17:51:58.145633 kernel: scsi host2: ahci
Mar 17 17:51:58.145826 kernel: scsi host3: ahci
Mar 17 17:51:58.146026 kernel: AVX2 version of gcm_enc/dec engaged.
Mar 17 17:51:58.146038 kernel: scsi host4: ahci
Mar 17 17:51:58.146188 kernel: AES CTR mode by8 optimization enabled
Mar 17 17:51:58.146199 kernel: scsi host5: ahci
Mar 17 17:51:58.146376 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34
Mar 17 17:51:58.146388 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34
Mar 17 17:51:58.146398 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34
Mar 17 17:51:58.146455 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34
Mar 17 17:51:58.146466 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34
Mar 17 17:51:58.146476 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34
Mar 17 17:51:58.123551 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 17 17:51:58.123752 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 17 17:51:58.125027 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Mar 17 17:51:58.139624 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 17 17:51:58.161289 kernel: BTRFS: device fsid 2b8ebefd-e897-48f6-96d5-0893fbb7c64a devid 1 transid 40 /dev/vda3 scanned by (udev-worker) (476)
Mar 17 17:51:58.176173 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by (udev-worker) (468)
Mar 17 17:51:58.176057 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Mar 17 17:51:58.204841 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Mar 17 17:51:58.224999 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Mar 17 17:51:58.225625 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 17 17:51:58.232487 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Mar 17 17:51:58.242203 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Mar 17 17:51:58.253642 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Mar 17 17:51:58.256315 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 17 17:51:58.264932 disk-uuid[556]: Primary Header is updated.
Mar 17 17:51:58.264932 disk-uuid[556]: Secondary Entries is updated.
Mar 17 17:51:58.264932 disk-uuid[556]: Secondary Header is updated.
Mar 17 17:51:58.269426 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Mar 17 17:51:58.294434 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Mar 17 17:51:58.294709 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 17 17:51:58.452451 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Mar 17 17:51:58.452552 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Mar 17 17:51:58.460430 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Mar 17 17:51:58.460469 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Mar 17 17:51:58.461441 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Mar 17 17:51:58.461461 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Mar 17 17:51:58.462833 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Mar 17 17:51:58.462849 kernel: ata3.00: applying bridge limits
Mar 17 17:51:58.463900 kernel: ata3.00: configured for UDMA/100
Mar 17 17:51:58.464440 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Mar 17 17:51:58.515475 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Mar 17 17:51:58.528299 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Mar 17 17:51:58.528317 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Mar 17 17:51:59.316423 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Mar 17 17:51:59.316505 disk-uuid[557]: The operation has completed successfully.
Mar 17 17:51:59.337957 systemd[1]: disk-uuid.service: Deactivated successfully.
Mar 17 17:51:59.338088 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Mar 17 17:51:59.373645 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Mar 17 17:51:59.377168 sh[594]: Success
Mar 17 17:51:59.390468 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
Mar 17 17:51:59.429754 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Mar 17 17:51:59.452116 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Mar 17 17:51:59.455886 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Mar 17 17:51:59.469424 kernel: BTRFS info (device dm-0): first mount of filesystem 2b8ebefd-e897-48f6-96d5-0893fbb7c64a
Mar 17 17:51:59.469460 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Mar 17 17:51:59.469476 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Mar 17 17:51:59.469508 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Mar 17 17:51:59.470842 kernel: BTRFS info (device dm-0): using free space tree
Mar 17 17:51:59.475693 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Mar 17 17:51:59.478336 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Mar 17 17:51:59.491741 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Mar 17 17:51:59.510520 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Mar 17 17:51:59.516072 kernel: BTRFS info (device vda6): first mount of filesystem 7b241d32-136b-4fe3-b105-cecff2b2cf64
Mar 17 17:51:59.516097 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Mar 17 17:51:59.516110 kernel: BTRFS info (device vda6): using free space tree
Mar 17 17:51:59.518432 kernel: BTRFS info (device vda6): auto enabling async discard
Mar 17 17:51:59.527941 systemd[1]: mnt-oem.mount: Deactivated successfully.
Mar 17 17:51:59.529498 kernel: BTRFS info (device vda6): last unmount of filesystem 7b241d32-136b-4fe3-b105-cecff2b2cf64
Mar 17 17:51:59.694790 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Mar 17 17:51:59.714588 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Mar 17 17:51:59.737395 systemd-networkd[772]: lo: Link UP
Mar 17 17:51:59.737492 systemd-networkd[772]: lo: Gained carrier
Mar 17 17:51:59.739439 systemd-networkd[772]: Enumeration completed
Mar 17 17:51:59.739691 systemd[1]: Started systemd-networkd.service - Network Configuration.
Mar 17 17:51:59.739974 systemd-networkd[772]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 17 17:51:59.739979 systemd-networkd[772]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Mar 17 17:51:59.741513 systemd-networkd[772]: eth0: Link UP
Mar 17 17:51:59.741518 systemd-networkd[772]: eth0: Gained carrier
Mar 17 17:51:59.741526 systemd-networkd[772]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 17 17:51:59.742111 systemd[1]: Reached target network.target - Network.
Mar 17 17:51:59.764477 systemd-networkd[772]: eth0: DHCPv4 address 10.0.0.117/16, gateway 10.0.0.1 acquired from 10.0.0.1
Mar 17 17:51:59.792778 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Mar 17 17:51:59.806685 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Mar 17 17:51:59.982446 ignition[777]: Ignition 2.20.0
Mar 17 17:51:59.982468 ignition[777]: Stage: fetch-offline
Mar 17 17:51:59.982548 ignition[777]: no configs at "/usr/lib/ignition/base.d"
Mar 17 17:51:59.982561 ignition[777]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 17 17:51:59.982712 ignition[777]: parsed url from cmdline: ""
Mar 17 17:51:59.982717 ignition[777]: no config URL provided
Mar 17 17:51:59.982724 ignition[777]: reading system config file "/usr/lib/ignition/user.ign"
Mar 17 17:51:59.982736 ignition[777]: no config at "/usr/lib/ignition/user.ign"
Mar 17 17:51:59.982776 ignition[777]: op(1): [started] loading QEMU firmware config module
Mar 17 17:51:59.982782 ignition[777]: op(1): executing: "modprobe" "qemu_fw_cfg"
Mar 17 17:51:59.993827 ignition[777]: op(1): [finished] loading QEMU firmware config module
Mar 17 17:51:59.993862 ignition[777]: QEMU firmware config was not found. Ignoring...
Mar 17 17:52:00.033003 ignition[777]: parsing config with SHA512: e576245a1fa248f4bfbf4b090c335b5408771038bb10d2418a95213bb2f06e40357300023cb4c23d333a37027f614dd7f06452e635e9348d6dc0b1c68a830f53
Mar 17 17:52:00.038197 unknown[777]: fetched base config from "system"
Mar 17 17:52:00.038212 unknown[777]: fetched user config from "qemu"
Mar 17 17:52:00.040615 ignition[777]: fetch-offline: fetch-offline passed
Mar 17 17:52:00.040794 ignition[777]: Ignition finished successfully
Mar 17 17:52:00.042935 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Mar 17 17:52:00.044140 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Mar 17 17:52:00.057654 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Mar 17 17:52:00.077610 ignition[787]: Ignition 2.20.0
Mar 17 17:52:00.077628 ignition[787]: Stage: kargs
Mar 17 17:52:00.077843 ignition[787]: no configs at "/usr/lib/ignition/base.d"
Mar 17 17:52:00.077857 ignition[787]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 17 17:52:00.078893 ignition[787]: kargs: kargs passed
Mar 17 17:52:00.078947 ignition[787]: Ignition finished successfully
Mar 17 17:52:00.084442 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Mar 17 17:52:00.097898 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Mar 17 17:52:00.111520 ignition[794]: Ignition 2.20.0
Mar 17 17:52:00.111530 ignition[794]: Stage: disks
Mar 17 17:52:00.111687 ignition[794]: no configs at "/usr/lib/ignition/base.d"
Mar 17 17:52:00.111698 ignition[794]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 17 17:52:00.112488 ignition[794]: disks: disks passed
Mar 17 17:52:00.112532 ignition[794]: Ignition finished successfully
Mar 17 17:52:00.118697 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Mar 17 17:52:00.120048 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Mar 17 17:52:00.122040 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Mar 17 17:52:00.123399 systemd[1]: Reached target local-fs.target - Local File Systems.
Mar 17 17:52:00.124506 systemd[1]: Reached target sysinit.target - System Initialization.
Mar 17 17:52:00.126913 systemd[1]: Reached target basic.target - Basic System.
Mar 17 17:52:00.138681 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Mar 17 17:52:00.154546 systemd-fsck[805]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Mar 17 17:52:00.164128 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Mar 17 17:52:00.178549 systemd[1]: Mounting sysroot.mount - /sysroot...
Mar 17 17:52:00.270442 kernel: EXT4-fs (vda9): mounted filesystem 345fc709-8965-4219-b368-16e508c3d632 r/w with ordered data mode. Quota mode: none.
Mar 17 17:52:00.271228 systemd[1]: Mounted sysroot.mount - /sysroot.
Mar 17 17:52:00.273451 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Mar 17 17:52:00.286496 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Mar 17 17:52:00.288885 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Mar 17 17:52:00.291286 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Mar 17 17:52:00.291340 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Mar 17 17:52:00.293366 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Mar 17 17:52:00.295428 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by mount (813)
Mar 17 17:52:00.297681 kernel: BTRFS info (device vda6): first mount of filesystem 7b241d32-136b-4fe3-b105-cecff2b2cf64
Mar 17 17:52:00.297730 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Mar 17 17:52:00.297745 kernel: BTRFS info (device vda6): using free space tree
Mar 17 17:52:00.301437 kernel: BTRFS info (device vda6): auto enabling async discard
Mar 17 17:52:00.304070 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Mar 17 17:52:00.305878 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Mar 17 17:52:00.309376 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Mar 17 17:52:00.348070 initrd-setup-root[837]: cut: /sysroot/etc/passwd: No such file or directory
Mar 17 17:52:00.353217 initrd-setup-root[844]: cut: /sysroot/etc/group: No such file or directory
Mar 17 17:52:00.357160 initrd-setup-root[851]: cut: /sysroot/etc/shadow: No such file or directory
Mar 17 17:52:00.360840 initrd-setup-root[858]: cut: /sysroot/etc/gshadow: No such file or directory
Mar 17 17:52:00.446492 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Mar 17 17:52:00.459492 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Mar 17 17:52:00.462749 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Mar 17 17:52:00.467068 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Mar 17 17:52:00.468144 kernel: BTRFS info (device vda6): last unmount of filesystem 7b241d32-136b-4fe3-b105-cecff2b2cf64
Mar 17 17:52:00.492158 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Mar 17 17:52:00.500004 ignition[926]: INFO : Ignition 2.20.0
Mar 17 17:52:00.500004 ignition[926]: INFO : Stage: mount
Mar 17 17:52:00.501678 ignition[926]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 17 17:52:00.501678 ignition[926]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 17 17:52:00.501678 ignition[926]: INFO : mount: mount passed
Mar 17 17:52:00.501678 ignition[926]: INFO : Ignition finished successfully
Mar 17 17:52:00.502999 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Mar 17 17:52:00.509626 systemd[1]: Starting ignition-files.service - Ignition (files)...
Mar 17 17:52:00.517252 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Mar 17 17:52:00.527451 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/vda6 scanned by mount (939)
Mar 17 17:52:00.527504 kernel: BTRFS info (device vda6): first mount of filesystem 7b241d32-136b-4fe3-b105-cecff2b2cf64
Mar 17 17:52:00.529446 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Mar 17 17:52:00.529487 kernel: BTRFS info (device vda6): using free space tree
Mar 17 17:52:00.532428 kernel: BTRFS info (device vda6): auto enabling async discard
Mar 17 17:52:00.534831 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Mar 17 17:52:00.553264 ignition[956]: INFO : Ignition 2.20.0
Mar 17 17:52:00.553264 ignition[956]: INFO : Stage: files
Mar 17 17:52:00.555503 ignition[956]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 17 17:52:00.555503 ignition[956]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 17 17:52:00.555503 ignition[956]: DEBUG : files: compiled without relabeling support, skipping
Mar 17 17:52:00.555503 ignition[956]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Mar 17 17:52:00.555503 ignition[956]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Mar 17 17:52:00.562423 ignition[956]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Mar 17 17:52:00.562423 ignition[956]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Mar 17 17:52:00.562423 ignition[956]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Mar 17 17:52:00.562423 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Mar 17 17:52:00.562423 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Mar 17 17:52:00.558557 unknown[956]: wrote ssh authorized keys file for user: core
Mar 17 17:52:00.601372 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Mar 17 17:52:00.731471 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Mar 17 17:52:00.731471 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Mar 17 17:52:00.735551 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Mar 17 17:52:00.737189 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Mar 17 17:52:00.739298 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Mar 17 17:52:00.739298 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 17 17:52:00.742936 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 17 17:52:00.744767 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 17 17:52:00.744767 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 17 17:52:00.744767 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Mar 17 17:52:00.744767 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Mar 17 17:52:00.744767 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Mar 17 17:52:00.744767 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Mar 17 17:52:00.744767 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Mar 17 17:52:00.744767 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1
Mar 17 17:52:01.229608 systemd-networkd[772]: eth0: Gained IPv6LL
Mar 17 17:52:01.238670 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Mar 17 17:52:01.606100 ignition[956]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Mar 17 17:52:01.606100 ignition[956]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Mar 17 17:52:01.615525 ignition[956]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 17 17:52:01.615525 ignition[956]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 17 17:52:01.615525 ignition[956]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Mar 17 17:52:01.615525 ignition[956]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Mar 17 17:52:01.615525 ignition[956]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Mar 17 17:52:01.615525 ignition[956]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Mar 17 17:52:01.615525 ignition[956]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Mar 17 17:52:01.615525 ignition[956]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service"
Mar 17 17:52:01.634231 ignition[956]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service"
Mar 17 17:52:01.672516 ignition[956]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Mar 17 17:52:01.672516 ignition[956]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service"
Mar 17 17:52:01.672516 ignition[956]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
Mar 17 17:52:01.672516 ignition[956]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
Mar 17 17:52:01.672516 ignition[956]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
Mar 17 17:52:01.681476 ignition[956]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
Mar 17 17:52:01.681476 ignition[956]: INFO : files: files passed
Mar 17 17:52:01.681476 ignition[956]: INFO : Ignition finished successfully
Mar 17 17:52:01.674959 systemd[1]: Finished ignition-files.service - Ignition (files).
Mar 17 17:52:01.683699 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Mar 17 17:52:01.685916 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Mar 17 17:52:01.687830 systemd[1]: ignition-quench.service: Deactivated successfully.
Mar 17 17:52:01.687939 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Mar 17 17:52:01.696690 initrd-setup-root-after-ignition[984]: grep: /sysroot/oem/oem-release: No such file or directory
Mar 17 17:52:01.699462 initrd-setup-root-after-ignition[986]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Mar 17 17:52:01.699462 initrd-setup-root-after-ignition[986]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Mar 17 17:52:01.736464 initrd-setup-root-after-ignition[990]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Mar 17 17:52:01.740740 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Mar 17 17:52:01.741284 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Mar 17 17:52:01.750669 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Mar 17 17:52:01.777694 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Mar 17 17:52:01.777828 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Mar 17 17:52:01.808917 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Mar 17 17:52:01.811723 systemd[1]: Reached target initrd.target - Initrd Default Target.
Mar 17 17:52:01.812081 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Mar 17 17:52:01.823545 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Mar 17 17:52:01.837856 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Mar 17 17:52:01.869648 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Mar 17 17:52:01.882376 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Mar 17 17:52:01.911076 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 17 17:52:01.914518 systemd[1]: Stopped target timers.target - Timer Units.
Mar 17 17:52:01.915075 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Mar 17 17:52:01.915190 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Mar 17 17:52:01.919638 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Mar 17 17:52:01.920159 systemd[1]: Stopped target basic.target - Basic System.
Mar 17 17:52:01.920504 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Mar 17 17:52:01.924894 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Mar 17 17:52:01.927053 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Mar 17 17:52:01.929305 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Mar 17 17:52:01.931683 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Mar 17 17:52:01.965214 systemd[1]: Stopped target sysinit.target - System Initialization.
Mar 17 17:52:01.967710 systemd[1]: Stopped target local-fs.target - Local File Systems.
Mar 17 17:52:01.968052 systemd[1]: Stopped target swap.target - Swaps.
Mar 17 17:52:01.968336 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Mar 17 17:52:01.968467 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Mar 17 17:52:02.016680 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Mar 17 17:52:02.017229 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 17 17:52:02.017715 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Mar 17 17:52:02.017804 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 17 17:52:02.022198 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Mar 17 17:52:02.022324 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Mar 17 17:52:02.024765 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Mar 17 17:52:02.024897 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Mar 17 17:52:02.027889 systemd[1]: Stopped target paths.target - Path Units.
Mar 17 17:52:02.029860 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Mar 17 17:52:02.034483 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 17 17:52:02.034962 systemd[1]: Stopped target slices.target - Slice Units.
Mar 17 17:52:02.037775 systemd[1]: Stopped target sockets.target - Socket Units.
Mar 17 17:52:02.038085 systemd[1]: iscsid.socket: Deactivated successfully.
Mar 17 17:52:02.038175 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Mar 17 17:52:02.041033 systemd[1]: iscsiuio.socket: Deactivated successfully.
Mar 17 17:52:02.041116 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Mar 17 17:52:02.043015 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Mar 17 17:52:02.043121 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Mar 17 17:52:02.045017 systemd[1]: ignition-files.service: Deactivated successfully.
Mar 17 17:52:02.045121 systemd[1]: Stopped ignition-files.service - Ignition (files).
Mar 17 17:52:02.059569 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Mar 17 17:52:02.059829 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Mar 17 17:52:02.059938 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 17 17:52:02.062672 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Mar 17 17:52:02.064080 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Mar 17 17:52:02.064196 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 17 17:52:02.064549 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Mar 17 17:52:02.064646 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Mar 17 17:52:02.068807 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Mar 17 17:52:02.068919 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Mar 17 17:52:02.101911 ignition[1011]: INFO : Ignition 2.20.0
Mar 17 17:52:02.101911 ignition[1011]: INFO : Stage: umount
Mar 17 17:52:02.103758 ignition[1011]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 17 17:52:02.103758 ignition[1011]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 17 17:52:02.103758 ignition[1011]: INFO : umount: umount passed
Mar 17 17:52:02.103758 ignition[1011]: INFO : Ignition finished successfully
Mar 17 17:52:02.105256 systemd[1]: ignition-mount.service: Deactivated successfully.
Mar 17 17:52:02.105433 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Mar 17 17:52:02.106648 systemd[1]: Stopped target network.target - Network.
Mar 17 17:52:02.108136 systemd[1]: ignition-disks.service: Deactivated successfully.
Mar 17 17:52:02.108189 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Mar 17 17:52:02.149023 systemd[1]: ignition-kargs.service: Deactivated successfully.
Mar 17 17:52:02.149076 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Mar 17 17:52:02.149353 systemd[1]: ignition-setup.service: Deactivated successfully.
Mar 17 17:52:02.149402 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Mar 17 17:52:02.149735 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Mar 17 17:52:02.149789 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Mar 17 17:52:02.150217 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Mar 17 17:52:02.156606 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Mar 17 17:52:02.160448 systemd-networkd[772]: eth0: DHCPv6 lease lost
Mar 17 17:52:02.163396 systemd[1]: systemd-networkd.service: Deactivated successfully.
Mar 17 17:52:02.163602 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Mar 17 17:52:02.164078 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Mar 17 17:52:02.164115 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Mar 17 17:52:02.173524 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Mar 17 17:52:02.209611 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Mar 17 17:52:02.209671 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Mar 17 17:52:02.210062 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 17 17:52:02.210564 systemd[1]: systemd-resolved.service: Deactivated successfully.
Mar 17 17:52:02.210682 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Mar 17 17:52:02.218219 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Mar 17 17:52:02.218302 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Mar 17 17:52:02.221761 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Mar 17 17:52:02.221821 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Mar 17 17:52:02.222285 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Mar 17 17:52:02.222326 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 17 17:52:02.228238 systemd[1]: network-cleanup.service: Deactivated successfully.
Mar 17 17:52:02.228369 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Mar 17 17:52:02.229892 systemd[1]: systemd-udevd.service: Deactivated successfully.
Mar 17 17:52:02.230076 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 17 17:52:02.232391 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Mar 17 17:52:02.232478 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Mar 17 17:52:02.265990 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Mar 17 17:52:02.266034 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 17 17:52:02.266279 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Mar 17 17:52:02.266330 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Mar 17 17:52:02.266975 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Mar 17 17:52:02.267027 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Mar 17 17:52:02.267639 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Mar 17 17:52:02.267695 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 17 17:52:02.329568 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Mar 17 17:52:02.330398 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Mar 17 17:52:02.330472 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 17 17:52:02.332670 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 17 17:52:02.332717 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 17 17:52:02.336541 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Mar 17 17:52:02.336648 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Mar 17 17:52:02.351499 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Mar 17 17:52:03.185022 systemd[1]: sysroot-boot.service: Deactivated successfully.
Mar 17 17:52:03.185172 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Mar 17 17:52:03.238398 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Mar 17 17:52:03.240047 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Mar 17 17:52:03.240130 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Mar 17 17:52:03.250618 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Mar 17 17:52:03.259018 systemd[1]: Switching root.
Mar 17 17:52:03.373571 systemd-journald[194]: Journal stopped
Mar 17 17:52:05.388916 systemd-journald[194]: Received SIGTERM from PID 1 (systemd).
Mar 17 17:52:05.389005 kernel: SELinux: policy capability network_peer_controls=1
Mar 17 17:52:05.389024 kernel: SELinux: policy capability open_perms=1
Mar 17 17:52:05.389036 kernel: SELinux: policy capability extended_socket_class=1
Mar 17 17:52:05.389047 kernel: SELinux: policy capability always_check_network=0
Mar 17 17:52:05.389060 kernel: SELinux: policy capability cgroup_seclabel=1
Mar 17 17:52:05.389076 kernel: SELinux: policy capability nnp_nosuid_transition=1
Mar 17 17:52:05.389091 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Mar 17 17:52:05.389102 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Mar 17 17:52:05.389113 kernel: audit: type=1403 audit(1742233924.571:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Mar 17 17:52:05.389130 systemd[1]: Successfully loaded SELinux policy in 46.347ms.
Mar 17 17:52:05.389154 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 13.159ms.
Mar 17 17:52:05.389167 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Mar 17 17:52:05.389184 systemd[1]: Detected virtualization kvm.
Mar 17 17:52:05.389196 systemd[1]: Detected architecture x86-64.
Mar 17 17:52:05.389211 systemd[1]: Detected first boot.
Mar 17 17:52:05.389223 systemd[1]: Initializing machine ID from VM UUID.
Mar 17 17:52:05.389235 zram_generator::config[1055]: No configuration found.
Mar 17 17:52:05.389249 systemd[1]: Populated /etc with preset unit settings.
Mar 17 17:52:05.389261 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Mar 17 17:52:05.389273 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Mar 17 17:52:05.389285 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Mar 17 17:52:05.389298 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Mar 17 17:52:05.389311 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Mar 17 17:52:05.389333 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Mar 17 17:52:05.389346 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Mar 17 17:52:05.389360 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Mar 17 17:52:05.389373 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Mar 17 17:52:05.389386 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Mar 17 17:52:05.389398 systemd[1]: Created slice user.slice - User and Session Slice.
Mar 17 17:52:05.389422 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 17 17:52:05.389435 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 17 17:52:05.389450 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Mar 17 17:52:05.389462 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Mar 17 17:52:05.389474 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Mar 17 17:52:05.389487 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Mar 17 17:52:05.389503 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Mar 17 17:52:05.389518 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 17 17:52:05.389531 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Mar 17 17:52:05.389543 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Mar 17 17:52:05.389555 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Mar 17 17:52:05.389570 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Mar 17 17:52:05.389582 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 17 17:52:05.389595 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Mar 17 17:52:05.389607 systemd[1]: Reached target slices.target - Slice Units.
Mar 17 17:52:05.389619 systemd[1]: Reached target swap.target - Swaps.
Mar 17 17:52:05.390289 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Mar 17 17:52:05.390335 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Mar 17 17:52:05.390352 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Mar 17 17:52:05.390377 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Mar 17 17:52:05.390393 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 17 17:52:05.390427 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Mar 17 17:52:05.390441 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Mar 17 17:52:05.390454 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Mar 17 17:52:05.390471 systemd[1]: Mounting media.mount - External Media Directory...
Mar 17 17:52:05.390484 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 17 17:52:05.390496 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Mar 17 17:52:05.390508 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Mar 17 17:52:05.390523 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Mar 17 17:52:05.390537 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Mar 17 17:52:05.390549 systemd[1]: Reached target machines.target - Containers.
Mar 17 17:52:05.390561 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Mar 17 17:52:05.390575 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 17 17:52:05.390587 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Mar 17 17:52:05.390600 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Mar 17 17:52:05.390612 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 17 17:52:05.390627 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Mar 17 17:52:05.390639 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 17 17:52:05.390651 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Mar 17 17:52:05.390663 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 17 17:52:05.390676 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Mar 17 17:52:05.390688 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Mar 17 17:52:05.390701 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Mar 17 17:52:05.390712 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Mar 17 17:52:05.390724 systemd[1]: Stopped systemd-fsck-usr.service.
Mar 17 17:52:05.390739 systemd[1]: Starting systemd-journald.service - Journal Service...
Mar 17 17:52:05.390751 kernel: fuse: init (API version 7.39)
Mar 17 17:52:05.390803 systemd-journald[1118]: Collecting audit messages is disabled.
Mar 17 17:52:05.390828 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Mar 17 17:52:05.390840 kernel: loop: module loaded
Mar 17 17:52:05.390852 systemd-journald[1118]: Journal started
Mar 17 17:52:05.390877 systemd-journald[1118]: Runtime Journal (/run/log/journal/60590e952a364951b421d19e24dda72f) is 6.0M, max 48.4M, 42.3M free.
Mar 17 17:52:05.147884 systemd[1]: Queued start job for default target multi-user.target.
Mar 17 17:52:05.167362 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Mar 17 17:52:05.167906 systemd[1]: systemd-journald.service: Deactivated successfully.
Mar 17 17:52:05.395435 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Mar 17 17:52:05.402347 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Mar 17 17:52:05.410914 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Mar 17 17:52:05.410957 systemd[1]: verity-setup.service: Deactivated successfully.
Mar 17 17:52:05.410973 systemd[1]: Stopped verity-setup.service.
Mar 17 17:52:05.410988 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 17 17:52:05.441990 systemd[1]: Started systemd-journald.service - Journal Service.
Mar 17 17:52:05.443064 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Mar 17 17:52:05.443442 kernel: ACPI: bus type drm_connector registered
Mar 17 17:52:05.444708 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Mar 17 17:52:05.446011 systemd[1]: Mounted media.mount - External Media Directory.
Mar 17 17:52:05.447315 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Mar 17 17:52:05.448807 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Mar 17 17:52:05.450064 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Mar 17 17:52:05.451306 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 17 17:52:05.452978 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Mar 17 17:52:05.453167 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Mar 17 17:52:05.454704 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 17 17:52:05.454874 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 17 17:52:05.456385 systemd[1]: modprobe@drm.service: Deactivated successfully.
Mar 17 17:52:05.456572 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Mar 17 17:52:05.457924 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 17 17:52:05.458092 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 17 17:52:05.459736 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Mar 17 17:52:05.459903 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Mar 17 17:52:05.461653 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 17 17:52:05.461849 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 17 17:52:05.463236 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Mar 17 17:52:05.485550 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Mar 17 17:52:05.487363 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Mar 17 17:52:05.502146 systemd[1]: Reached target network-pre.target - Preparation for Network.
Mar 17 17:52:05.515590 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Mar 17 17:52:05.522495 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Mar 17 17:52:05.523896 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Mar 17 17:52:05.523998 systemd[1]: Reached target local-fs.target - Local File Systems.
Mar 17 17:52:05.526187 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Mar 17 17:52:05.529030 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Mar 17 17:52:05.534576 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Mar 17 17:52:05.535944 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 17 17:52:05.537707 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Mar 17 17:52:05.540767 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Mar 17 17:52:05.542292 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Mar 17 17:52:05.544634 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Mar 17 17:52:05.546225 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Mar 17 17:52:05.553542 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 17 17:52:05.576638 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Mar 17 17:52:05.576925 systemd-journald[1118]: Time spent on flushing to /var/log/journal/60590e952a364951b421d19e24dda72f is 13.700ms for 949 entries.
Mar 17 17:52:05.576925 systemd-journald[1118]: System Journal (/var/log/journal/60590e952a364951b421d19e24dda72f) is 8.0M, max 195.6M, 187.6M free.
Mar 17 17:52:05.816907 systemd-journald[1118]: Received client request to flush runtime journal.
Mar 17 17:52:05.816965 kernel: loop0: detected capacity change from 0 to 138184
Mar 17 17:52:05.816992 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Mar 17 17:52:05.817008 kernel: loop1: detected capacity change from 0 to 140992
Mar 17 17:52:05.580722 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 17 17:52:05.582366 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Mar 17 17:52:05.583827 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Mar 17 17:52:05.585467 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Mar 17 17:52:05.602688 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Mar 17 17:52:05.626355 udevadm[1167]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Mar 17 17:52:05.634168 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 17 17:52:05.745103 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Mar 17 17:52:05.759670 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Mar 17 17:52:05.809695 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Mar 17 17:52:05.811330 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Mar 17 17:52:05.826630 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Mar 17 17:52:05.828595 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Mar 17 17:52:05.830353 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Mar 17 17:52:05.836052 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Mar 17 17:52:05.846445 kernel: loop2: detected capacity change from 0 to 210664
Mar 17 17:52:05.905644 systemd-tmpfiles[1189]: ACLs are not supported, ignoring.
Mar 17 17:52:05.905665 systemd-tmpfiles[1189]: ACLs are not supported, ignoring.
Mar 17 17:52:05.913601 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 17 17:52:05.935456 kernel: loop3: detected capacity change from 0 to 138184
Mar 17 17:52:05.976451 kernel: loop4: detected capacity change from 0 to 140992
Mar 17 17:52:05.982039 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Mar 17 17:52:05.984566 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Mar 17 17:52:05.990997 kernel: loop5: detected capacity change from 0 to 210664
Mar 17 17:52:05.999165 (sd-merge)[1193]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Mar 17 17:52:05.999805 (sd-merge)[1193]: Merged extensions into '/usr'.
Mar 17 17:52:06.003967 systemd[1]: Reloading requested from client PID 1162 ('systemd-sysext') (unit systemd-sysext.service)...
Mar 17 17:52:06.003983 systemd[1]: Reloading...
Mar 17 17:52:06.072432 zram_generator::config[1223]: No configuration found.
Mar 17 17:52:06.172357 ldconfig[1156]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Mar 17 17:52:06.190551 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 17 17:52:06.240725 systemd[1]: Reloading finished in 236 ms.
Mar 17 17:52:06.271619 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Mar 17 17:52:06.273169 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Mar 17 17:52:06.291669 systemd[1]: Starting ensure-sysext.service...
Mar 17 17:52:06.294145 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Mar 17 17:52:06.301042 systemd[1]: Reloading requested from client PID 1257 ('systemctl') (unit ensure-sysext.service)...
Mar 17 17:52:06.301058 systemd[1]: Reloading...
Mar 17 17:52:06.317087 systemd-tmpfiles[1258]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Mar 17 17:52:06.317779 systemd-tmpfiles[1258]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Mar 17 17:52:06.318876 systemd-tmpfiles[1258]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Mar 17 17:52:06.319175 systemd-tmpfiles[1258]: ACLs are not supported, ignoring.
Mar 17 17:52:06.319254 systemd-tmpfiles[1258]: ACLs are not supported, ignoring.
Mar 17 17:52:06.322788 systemd-tmpfiles[1258]: Detected autofs mount point /boot during canonicalization of boot.
Mar 17 17:52:06.322802 systemd-tmpfiles[1258]: Skipping /boot
Mar 17 17:52:06.337180 systemd-tmpfiles[1258]: Detected autofs mount point /boot during canonicalization of boot.
Mar 17 17:52:06.339431 systemd-tmpfiles[1258]: Skipping /boot
Mar 17 17:52:06.364434 zram_generator::config[1286]: No configuration found.
Mar 17 17:52:06.475183 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 17 17:52:06.524917 systemd[1]: Reloading finished in 223 ms.
Mar 17 17:52:06.542176 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Mar 17 17:52:06.554880 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 17 17:52:06.563767 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Mar 17 17:52:06.566221 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Mar 17 17:52:06.568791 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Mar 17 17:52:06.573794 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Mar 17 17:52:06.580639 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 17 17:52:06.584755 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Mar 17 17:52:06.590026 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 17 17:52:06.590260 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 17 17:52:06.592656 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 17 17:52:06.599705 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 17 17:52:06.602254 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 17 17:52:06.603696 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 17 17:52:06.605679 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Mar 17 17:52:06.606704 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 17 17:52:06.607610 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 17 17:52:06.608098 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Mar 17 17:52:06.610529 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Mar 17 17:52:06.615800 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 17 17:52:06.618451 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Mar 17 17:52:06.620812 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 17 17:52:06.621023 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Mar 17 17:52:06.621314 systemd-udevd[1328]: Using default interface naming scheme 'v255'. Mar 17 17:52:06.633751 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Mar 17 17:52:06.637002 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 17 17:52:06.637632 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Mar 17 17:52:06.647267 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Mar 17 17:52:06.650953 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Mar 17 17:52:06.651080 augenrules[1359]: No rules Mar 17 17:52:06.655547 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Mar 17 17:52:06.657480 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Mar 17 17:52:06.659326 systemd[1]: Starting systemd-update-done.service - Update is Completed... 
Mar 17 17:52:06.660800 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 17 17:52:06.661889 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Mar 17 17:52:06.663911 systemd[1]: Started systemd-userdbd.service - User Database Manager. Mar 17 17:52:06.666296 systemd[1]: audit-rules.service: Deactivated successfully. Mar 17 17:52:06.666639 systemd[1]: Finished audit-rules.service - Load Audit Rules. Mar 17 17:52:06.669003 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Mar 17 17:52:06.671346 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 17 17:52:06.671599 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Mar 17 17:52:06.679773 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 17 17:52:06.679970 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Mar 17 17:52:06.683263 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 17 17:52:06.683481 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Mar 17 17:52:06.685579 systemd[1]: Finished systemd-update-done.service - Update is Completed. Mar 17 17:52:06.702921 systemd[1]: Finished ensure-sysext.service. Mar 17 17:52:06.707880 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 17 17:52:06.717724 systemd[1]: Starting audit-rules.service - Load Audit Rules... Mar 17 17:52:06.719161 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Mar 17 17:52:06.725215 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Mar 17 17:52:06.729181 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... 
Mar 17 17:52:06.733783 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Mar 17 17:52:06.738923 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Mar 17 17:52:06.740133 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Mar 17 17:52:06.742071 systemd[1]: Starting systemd-networkd.service - Network Configuration... Mar 17 17:52:06.746265 systemd-resolved[1327]: Positive Trust Anchors: Mar 17 17:52:06.747975 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (1377) Mar 17 17:52:06.746585 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Mar 17 17:52:06.747348 systemd-resolved[1327]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Mar 17 17:52:06.747380 systemd-resolved[1327]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Mar 17 17:52:06.747878 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Mar 17 17:52:06.747905 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 17 17:52:06.748505 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. 
Mar 17 17:52:06.750467 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Mar 17 17:52:06.752119 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 17 17:52:06.754534 augenrules[1398]: /sbin/augenrules: No change Mar 17 17:52:06.752330 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Mar 17 17:52:06.755049 systemd-resolved[1327]: Defaulting to hostname 'linux'. Mar 17 17:52:06.762848 augenrules[1424]: No rules Mar 17 17:52:06.763477 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Mar 17 17:52:06.766295 systemd[1]: audit-rules.service: Deactivated successfully. Mar 17 17:52:06.767217 systemd[1]: Finished audit-rules.service - Load Audit Rules. Mar 17 17:52:06.771333 systemd[1]: modprobe@drm.service: Deactivated successfully. Mar 17 17:52:06.772304 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Mar 17 17:52:06.774306 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 17 17:52:06.774739 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Mar 17 17:52:06.790166 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Mar 17 17:52:06.805808 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Mar 17 17:52:06.808929 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Mar 17 17:52:06.818608 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Mar 17 17:52:06.820010 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Mar 17 17:52:06.820078 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. 
Mar 17 17:52:06.829702 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Mar 17 17:52:06.836524 systemd-networkd[1412]: lo: Link UP Mar 17 17:52:06.836539 systemd-networkd[1412]: lo: Gained carrier Mar 17 17:52:06.838484 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4 Mar 17 17:52:06.838515 kernel: ACPI: button: Power Button [PWRF] Mar 17 17:52:06.839891 systemd-networkd[1412]: Enumeration completed Mar 17 17:52:06.840302 systemd-networkd[1412]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 17 17:52:06.840311 systemd-networkd[1412]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Mar 17 17:52:06.840469 systemd[1]: Started systemd-networkd.service - Network Configuration. Mar 17 17:52:06.842270 systemd-networkd[1412]: eth0: Link UP Mar 17 17:52:06.842293 systemd-networkd[1412]: eth0: Gained carrier Mar 17 17:52:06.842310 systemd-networkd[1412]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 17 17:52:06.843766 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Mar 17 17:52:06.846049 systemd[1]: Reached target network.target - Network. Mar 17 17:52:06.850964 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Mar 17 17:52:06.851474 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Mar 17 17:52:06.851863 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Mar 17 17:52:06.856600 systemd-networkd[1412]: eth0: DHCPv4 address 10.0.0.117/16, gateway 10.0.0.1 acquired from 10.0.0.1 Mar 17 17:52:06.856718 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Mar 17 17:52:06.870999 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. 
Mar 17 17:52:07.451208 systemd-resolved[1327]: Clock change detected. Flushing caches. Mar 17 17:52:07.451554 systemd[1]: Reached target time-set.target - System Time Set. Mar 17 17:52:07.453855 systemd-timesyncd[1413]: Contacted time server 10.0.0.1:123 (10.0.0.1). Mar 17 17:52:07.453914 systemd-timesyncd[1413]: Initial clock synchronization to Mon 2025-03-17 17:52:07.450801 UTC. Mar 17 17:52:07.469976 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 17 17:52:07.471825 kernel: mousedev: PS/2 mouse device common for all mice Mar 17 17:52:07.544814 kernel: kvm_amd: TSC scaling supported Mar 17 17:52:07.545008 kernel: kvm_amd: Nested Virtualization enabled Mar 17 17:52:07.545039 kernel: kvm_amd: Nested Paging enabled Mar 17 17:52:07.545064 kernel: kvm_amd: LBR virtualization supported Mar 17 17:52:07.545089 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Mar 17 17:52:07.545114 kernel: kvm_amd: Virtual GIF supported Mar 17 17:52:07.564824 kernel: EDAC MC: Ver: 3.0.0 Mar 17 17:52:07.574021 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Mar 17 17:52:07.609106 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Mar 17 17:52:07.630978 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Mar 17 17:52:07.640042 lvm[1455]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Mar 17 17:52:07.678165 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Mar 17 17:52:07.679770 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Mar 17 17:52:07.680924 systemd[1]: Reached target sysinit.target - System Initialization. Mar 17 17:52:07.682104 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. 
Mar 17 17:52:07.683394 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Mar 17 17:52:07.684904 systemd[1]: Started logrotate.timer - Daily rotation of log files. Mar 17 17:52:07.686156 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Mar 17 17:52:07.687414 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Mar 17 17:52:07.688659 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Mar 17 17:52:07.688685 systemd[1]: Reached target paths.target - Path Units. Mar 17 17:52:07.689776 systemd[1]: Reached target timers.target - Timer Units. Mar 17 17:52:07.691495 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Mar 17 17:52:07.694176 systemd[1]: Starting docker.socket - Docker Socket for the API... Mar 17 17:52:07.703901 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Mar 17 17:52:07.707150 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Mar 17 17:52:07.709065 systemd[1]: Listening on docker.socket - Docker Socket for the API. Mar 17 17:52:07.710470 systemd[1]: Reached target sockets.target - Socket Units. Mar 17 17:52:07.711655 systemd[1]: Reached target basic.target - Basic System. Mar 17 17:52:07.712690 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Mar 17 17:52:07.712725 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Mar 17 17:52:07.714029 systemd[1]: Starting containerd.service - containerd container runtime... Mar 17 17:52:07.716724 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Mar 17 17:52:07.719893 lvm[1459]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. 
Mar 17 17:52:07.721893 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Mar 17 17:52:07.725105 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Mar 17 17:52:07.726505 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Mar 17 17:52:07.728917 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Mar 17 17:52:07.730593 jq[1462]: false Mar 17 17:52:07.735987 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Mar 17 17:52:07.739006 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Mar 17 17:52:07.741651 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Mar 17 17:52:07.750910 systemd[1]: Starting systemd-logind.service - User Login Management... Mar 17 17:52:07.752437 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Mar 17 17:52:07.752961 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Mar 17 17:52:07.753691 systemd[1]: Starting update-engine.service - Update Engine... Mar 17 17:52:07.756179 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... 
Mar 17 17:52:07.759826 extend-filesystems[1463]: Found loop3 Mar 17 17:52:07.759826 extend-filesystems[1463]: Found loop4 Mar 17 17:52:07.759826 extend-filesystems[1463]: Found loop5 Mar 17 17:52:07.759826 extend-filesystems[1463]: Found sr0 Mar 17 17:52:07.759826 extend-filesystems[1463]: Found vda Mar 17 17:52:07.759826 extend-filesystems[1463]: Found vda1 Mar 17 17:52:07.759826 extend-filesystems[1463]: Found vda2 Mar 17 17:52:07.759826 extend-filesystems[1463]: Found vda3 Mar 17 17:52:07.759826 extend-filesystems[1463]: Found usr Mar 17 17:52:07.759826 extend-filesystems[1463]: Found vda4 Mar 17 17:52:07.759826 extend-filesystems[1463]: Found vda6 Mar 17 17:52:07.759833 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Mar 17 17:52:07.776972 extend-filesystems[1463]: Found vda7 Mar 17 17:52:07.776972 extend-filesystems[1463]: Found vda9 Mar 17 17:52:07.776972 extend-filesystems[1463]: Checking size of /dev/vda9 Mar 17 17:52:07.765149 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Mar 17 17:52:07.777765 dbus-daemon[1461]: [system] SELinux support is enabled Mar 17 17:52:07.765368 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Mar 17 17:52:07.765718 systemd[1]: motdgen.service: Deactivated successfully. Mar 17 17:52:07.783732 extend-filesystems[1463]: Resized partition /dev/vda9 Mar 17 17:52:07.771713 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Mar 17 17:52:07.785168 systemd[1]: Started dbus.service - D-Bus System Message Bus. Mar 17 17:52:07.787637 jq[1476]: true Mar 17 17:52:07.793363 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. 
Mar 17 17:52:07.797602 extend-filesystems[1486]: resize2fs 1.47.1 (20-May-2024) Mar 17 17:52:07.812124 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (1374) Mar 17 17:52:07.812150 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Mar 17 17:52:07.793577 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Mar 17 17:52:07.812233 update_engine[1475]: I20250317 17:52:07.810063 1475 main.cc:92] Flatcar Update Engine starting Mar 17 17:52:07.815765 jq[1487]: true Mar 17 17:52:07.815972 update_engine[1475]: I20250317 17:52:07.815259 1475 update_check_scheduler.cc:74] Next update check in 5m52s Mar 17 17:52:07.815984 (ntainerd)[1488]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Mar 17 17:52:07.837730 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Mar 17 17:52:07.859856 tar[1485]: linux-amd64/helm Mar 17 17:52:07.846231 systemd[1]: Started update-engine.service - Update Engine. Mar 17 17:52:07.851107 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Mar 17 17:52:07.851138 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Mar 17 17:52:07.853105 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Mar 17 17:52:07.853123 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. 
Mar 17 17:52:07.860568 systemd-logind[1472]: Watching system buttons on /dev/input/event1 (Power Button) Mar 17 17:52:07.860591 systemd-logind[1472]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Mar 17 17:52:07.860916 systemd[1]: Started locksmithd.service - Cluster reboot manager. Mar 17 17:52:07.863183 systemd-logind[1472]: New seat seat0. Mar 17 17:52:07.863455 extend-filesystems[1486]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Mar 17 17:52:07.863455 extend-filesystems[1486]: old_desc_blocks = 1, new_desc_blocks = 1 Mar 17 17:52:07.863455 extend-filesystems[1486]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Mar 17 17:52:07.866075 systemd[1]: extend-filesystems.service: Deactivated successfully. Mar 17 17:52:07.874078 extend-filesystems[1463]: Resized filesystem in /dev/vda9 Mar 17 17:52:07.866275 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Mar 17 17:52:07.878089 systemd[1]: Started systemd-logind.service - User Login Management. Mar 17 17:52:07.884965 bash[1514]: Updated "/home/core/.ssh/authorized_keys" Mar 17 17:52:07.885940 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Mar 17 17:52:07.888948 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Mar 17 17:52:07.894020 locksmithd[1515]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Mar 17 17:52:08.038203 containerd[1488]: time="2025-03-17T17:52:08.038068770Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 Mar 17 17:52:08.044894 sshd_keygen[1481]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Mar 17 17:52:08.063151 containerd[1488]: time="2025-03-17T17:52:08.063089985Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
type=io.containerd.snapshotter.v1 Mar 17 17:52:08.064706 containerd[1488]: time="2025-03-17T17:52:08.064661522Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.83-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Mar 17 17:52:08.064706 containerd[1488]: time="2025-03-17T17:52:08.064692660Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Mar 17 17:52:08.064706 containerd[1488]: time="2025-03-17T17:52:08.064707869Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Mar 17 17:52:08.064951 containerd[1488]: time="2025-03-17T17:52:08.064922120Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Mar 17 17:52:08.064951 containerd[1488]: time="2025-03-17T17:52:08.064943400Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Mar 17 17:52:08.065020 containerd[1488]: time="2025-03-17T17:52:08.065010626Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Mar 17 17:52:08.065055 containerd[1488]: time="2025-03-17T17:52:08.065022238Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Mar 17 17:52:08.065250 containerd[1488]: time="2025-03-17T17:52:08.065218045Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Mar 17 17:52:08.065250 containerd[1488]: time="2025-03-17T17:52:08.065238103Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Mar 17 17:52:08.065250 containerd[1488]: time="2025-03-17T17:52:08.065250897Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Mar 17 17:52:08.065315 containerd[1488]: time="2025-03-17T17:52:08.065261406Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Mar 17 17:52:08.065369 containerd[1488]: time="2025-03-17T17:52:08.065350413Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Mar 17 17:52:08.065617 containerd[1488]: time="2025-03-17T17:52:08.065579844Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Mar 17 17:52:08.065737 containerd[1488]: time="2025-03-17T17:52:08.065715598Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Mar 17 17:52:08.065737 containerd[1488]: time="2025-03-17T17:52:08.065732329Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Mar 17 17:52:08.065865 containerd[1488]: time="2025-03-17T17:52:08.065846253Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." 
type=io.containerd.metadata.v1 Mar 17 17:52:08.065920 containerd[1488]: time="2025-03-17T17:52:08.065903160Z" level=info msg="metadata content store policy set" policy=shared Mar 17 17:52:08.071236 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Mar 17 17:52:08.085198 systemd[1]: Starting issuegen.service - Generate /run/issue... Mar 17 17:52:08.091175 systemd[1]: issuegen.service: Deactivated successfully. Mar 17 17:52:08.091381 systemd[1]: Finished issuegen.service - Generate /run/issue. Mar 17 17:52:08.103175 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Mar 17 17:52:08.108816 containerd[1488]: time="2025-03-17T17:52:08.108773151Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Mar 17 17:52:08.108894 containerd[1488]: time="2025-03-17T17:52:08.108840157Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Mar 17 17:52:08.108894 containerd[1488]: time="2025-03-17T17:52:08.108863330Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Mar 17 17:52:08.108894 containerd[1488]: time="2025-03-17T17:52:08.108880833Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Mar 17 17:52:08.108979 containerd[1488]: time="2025-03-17T17:52:08.108894058Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Mar 17 17:52:08.109261 containerd[1488]: time="2025-03-17T17:52:08.109074336Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Mar 17 17:52:08.109328 containerd[1488]: time="2025-03-17T17:52:08.109310218Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." 
type=io.containerd.runtime.v2 Mar 17 17:52:08.109437 containerd[1488]: time="2025-03-17T17:52:08.109422248Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Mar 17 17:52:08.109462 containerd[1488]: time="2025-03-17T17:52:08.109442526Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Mar 17 17:52:08.109462 containerd[1488]: time="2025-03-17T17:52:08.109456262Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Mar 17 17:52:08.109531 containerd[1488]: time="2025-03-17T17:52:08.109468775Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Mar 17 17:52:08.109531 containerd[1488]: time="2025-03-17T17:52:08.109481469Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Mar 17 17:52:08.109531 containerd[1488]: time="2025-03-17T17:52:08.109501507Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Mar 17 17:52:08.109531 containerd[1488]: time="2025-03-17T17:52:08.109523318Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Mar 17 17:52:08.109641 containerd[1488]: time="2025-03-17T17:52:08.109538215Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Mar 17 17:52:08.109641 containerd[1488]: time="2025-03-17T17:52:08.109551020Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Mar 17 17:52:08.109641 containerd[1488]: time="2025-03-17T17:52:08.109562982Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." 
type=io.containerd.service.v1 Mar 17 17:52:08.109641 containerd[1488]: time="2025-03-17T17:52:08.109574003Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Mar 17 17:52:08.109641 containerd[1488]: time="2025-03-17T17:52:08.109600132Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Mar 17 17:52:08.109641 containerd[1488]: time="2025-03-17T17:52:08.109627954Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Mar 17 17:52:08.109641 containerd[1488]: time="2025-03-17T17:52:08.109641529Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Mar 17 17:52:08.109809 containerd[1488]: time="2025-03-17T17:52:08.109652770Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Mar 17 17:52:08.109809 containerd[1488]: time="2025-03-17T17:52:08.109664412Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Mar 17 17:52:08.109809 containerd[1488]: time="2025-03-17T17:52:08.109676695Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Mar 17 17:52:08.109809 containerd[1488]: time="2025-03-17T17:52:08.109687435Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Mar 17 17:52:08.109809 containerd[1488]: time="2025-03-17T17:52:08.109700530Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Mar 17 17:52:08.109809 containerd[1488]: time="2025-03-17T17:52:08.109712382Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Mar 17 17:52:08.109809 containerd[1488]: time="2025-03-17T17:52:08.109726038Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." 
type=io.containerd.grpc.v1 Mar 17 17:52:08.109809 containerd[1488]: time="2025-03-17T17:52:08.109738190Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Mar 17 17:52:08.109809 containerd[1488]: time="2025-03-17T17:52:08.109763247Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Mar 17 17:52:08.109809 containerd[1488]: time="2025-03-17T17:52:08.109774669Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Mar 17 17:52:08.109809 containerd[1488]: time="2025-03-17T17:52:08.109787613Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Mar 17 17:52:08.109809 containerd[1488]: time="2025-03-17T17:52:08.109813061Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Mar 17 17:52:08.110054 containerd[1488]: time="2025-03-17T17:52:08.109825053Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Mar 17 17:52:08.110054 containerd[1488]: time="2025-03-17T17:52:08.109835603Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Mar 17 17:52:08.110507 containerd[1488]: time="2025-03-17T17:52:08.110483668Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Mar 17 17:52:08.110533 containerd[1488]: time="2025-03-17T17:52:08.110513314Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Mar 17 17:52:08.110533 containerd[1488]: time="2025-03-17T17:52:08.110527280Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." 
type=io.containerd.internal.v1 Mar 17 17:52:08.110582 containerd[1488]: time="2025-03-17T17:52:08.110538772Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Mar 17 17:52:08.110582 containerd[1488]: time="2025-03-17T17:52:08.110547568Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Mar 17 17:52:08.110582 containerd[1488]: time="2025-03-17T17:52:08.110559020Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Mar 17 17:52:08.110582 containerd[1488]: time="2025-03-17T17:52:08.110570381Z" level=info msg="NRI interface is disabled by configuration." Mar 17 17:52:08.110582 containerd[1488]: time="2025-03-17T17:52:08.110579939Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Mar 17 17:52:08.110941 containerd[1488]: time="2025-03-17T17:52:08.110885051Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 
Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Mar 17 17:52:08.110941 containerd[1488]: time="2025-03-17T17:52:08.110930336Z" level=info msg="Connect containerd service" Mar 17 17:52:08.111141 containerd[1488]: time="2025-03-17T17:52:08.110976542Z" level=info msg="using legacy CRI server" Mar 17 17:52:08.111141 containerd[1488]: time="2025-03-17T17:52:08.110984658Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Mar 17 17:52:08.111141 containerd[1488]: 
time="2025-03-17T17:52:08.111099242Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Mar 17 17:52:08.111675 containerd[1488]: time="2025-03-17T17:52:08.111644936Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Mar 17 17:52:08.112089 containerd[1488]: time="2025-03-17T17:52:08.111829151Z" level=info msg="Start subscribing containerd event" Mar 17 17:52:08.112089 containerd[1488]: time="2025-03-17T17:52:08.112025950Z" level=info msg="Start recovering state" Mar 17 17:52:08.112089 containerd[1488]: time="2025-03-17T17:52:08.111978281Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Mar 17 17:52:08.112720 containerd[1488]: time="2025-03-17T17:52:08.112135596Z" level=info msg=serving... address=/run/containerd/containerd.sock Mar 17 17:52:08.114383 containerd[1488]: time="2025-03-17T17:52:08.114358363Z" level=info msg="Start event monitor" Mar 17 17:52:08.114383 containerd[1488]: time="2025-03-17T17:52:08.114382759Z" level=info msg="Start snapshots syncer" Mar 17 17:52:08.114450 containerd[1488]: time="2025-03-17T17:52:08.114395022Z" level=info msg="Start cni network conf syncer for default" Mar 17 17:52:08.114450 containerd[1488]: time="2025-03-17T17:52:08.114407946Z" level=info msg="Start streaming server" Mar 17 17:52:08.114568 systemd[1]: Started containerd.service - containerd container runtime. Mar 17 17:52:08.119810 containerd[1488]: time="2025-03-17T17:52:08.119786261Z" level=info msg="containerd successfully booted in 0.082748s" Mar 17 17:52:08.121088 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Mar 17 17:52:08.131163 systemd[1]: Started getty@tty1.service - Getty on tty1. 
Mar 17 17:52:08.133576 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Mar 17 17:52:08.134855 systemd[1]: Reached target getty.target - Login Prompts. Mar 17 17:52:08.258559 tar[1485]: linux-amd64/LICENSE Mar 17 17:52:08.258735 tar[1485]: linux-amd64/README.md Mar 17 17:52:08.279080 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Mar 17 17:52:08.847997 systemd-networkd[1412]: eth0: Gained IPv6LL Mar 17 17:52:08.851224 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Mar 17 17:52:08.864142 systemd[1]: Reached target network-online.target - Network is Online. Mar 17 17:52:08.879140 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Mar 17 17:52:08.941257 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 17 17:52:08.943830 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Mar 17 17:52:08.964304 systemd[1]: coreos-metadata.service: Deactivated successfully. Mar 17 17:52:08.964538 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Mar 17 17:52:08.966179 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Mar 17 17:52:08.968805 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Mar 17 17:52:09.592134 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 17 17:52:09.594543 systemd[1]: Reached target multi-user.target - Multi-User System. Mar 17 17:52:09.596344 systemd[1]: Startup finished in 917ms (kernel) + 7.750s (initrd) + 4.490s (userspace) = 13.159s. 
Mar 17 17:52:09.621277 (kubelet)[1577]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 17 17:52:10.127480 kubelet[1577]: E0317 17:52:10.127412 1577 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 17 17:52:10.131640 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 17 17:52:10.131858 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 17 17:52:10.132189 systemd[1]: kubelet.service: Consumed 1.027s CPU time. Mar 17 17:52:12.504229 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Mar 17 17:52:12.506079 systemd[1]: Started sshd@0-10.0.0.117:22-10.0.0.1:56584.service - OpenSSH per-connection server daemon (10.0.0.1:56584). Mar 17 17:52:12.562094 sshd[1591]: Accepted publickey for core from 10.0.0.1 port 56584 ssh2: RSA SHA256:pvoNHoTmHcKIZ8E4rah4Xh4kuY0L81ONuagOsU2gN/o Mar 17 17:52:12.564701 sshd-session[1591]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:52:12.575277 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Mar 17 17:52:12.595267 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Mar 17 17:52:12.598037 systemd-logind[1472]: New session 1 of user core. Mar 17 17:52:12.616742 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Mar 17 17:52:12.641462 systemd[1]: Starting user@500.service - User Manager for UID 500... 
Mar 17 17:52:12.646571 (systemd)[1595]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Mar 17 17:52:12.783018 systemd[1595]: Queued start job for default target default.target. Mar 17 17:52:12.796719 systemd[1595]: Created slice app.slice - User Application Slice. Mar 17 17:52:12.796774 systemd[1595]: Reached target paths.target - Paths. Mar 17 17:52:12.796793 systemd[1595]: Reached target timers.target - Timers. Mar 17 17:52:12.799444 systemd[1595]: Starting dbus.socket - D-Bus User Message Bus Socket... Mar 17 17:52:12.814109 systemd[1595]: Listening on dbus.socket - D-Bus User Message Bus Socket. Mar 17 17:52:12.814291 systemd[1595]: Reached target sockets.target - Sockets. Mar 17 17:52:12.814313 systemd[1595]: Reached target basic.target - Basic System. Mar 17 17:52:12.814374 systemd[1595]: Reached target default.target - Main User Target. Mar 17 17:52:12.814422 systemd[1595]: Startup finished in 156ms. Mar 17 17:52:12.815417 systemd[1]: Started user@500.service - User Manager for UID 500. Mar 17 17:52:12.817934 systemd[1]: Started session-1.scope - Session 1 of User core. Mar 17 17:52:12.897796 systemd[1]: Started sshd@1-10.0.0.117:22-10.0.0.1:56596.service - OpenSSH per-connection server daemon (10.0.0.1:56596). Mar 17 17:52:12.949793 sshd[1606]: Accepted publickey for core from 10.0.0.1 port 56596 ssh2: RSA SHA256:pvoNHoTmHcKIZ8E4rah4Xh4kuY0L81ONuagOsU2gN/o Mar 17 17:52:12.951929 sshd-session[1606]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:52:12.958588 systemd-logind[1472]: New session 2 of user core. Mar 17 17:52:12.969052 systemd[1]: Started session-2.scope - Session 2 of User core. Mar 17 17:52:13.026234 sshd[1608]: Connection closed by 10.0.0.1 port 56596 Mar 17 17:52:13.026700 sshd-session[1606]: pam_unix(sshd:session): session closed for user core Mar 17 17:52:13.038436 systemd[1]: sshd@1-10.0.0.117:22-10.0.0.1:56596.service: Deactivated successfully. 
Mar 17 17:52:13.040570 systemd[1]: session-2.scope: Deactivated successfully. Mar 17 17:52:13.042049 systemd-logind[1472]: Session 2 logged out. Waiting for processes to exit. Mar 17 17:52:13.053098 systemd[1]: Started sshd@2-10.0.0.117:22-10.0.0.1:56608.service - OpenSSH per-connection server daemon (10.0.0.1:56608). Mar 17 17:52:13.054358 systemd-logind[1472]: Removed session 2. Mar 17 17:52:13.089217 sshd[1613]: Accepted publickey for core from 10.0.0.1 port 56608 ssh2: RSA SHA256:pvoNHoTmHcKIZ8E4rah4Xh4kuY0L81ONuagOsU2gN/o Mar 17 17:52:13.091435 sshd-session[1613]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:52:13.096136 systemd-logind[1472]: New session 3 of user core. Mar 17 17:52:13.105988 systemd[1]: Started session-3.scope - Session 3 of User core. Mar 17 17:52:13.156846 sshd[1615]: Connection closed by 10.0.0.1 port 56608 Mar 17 17:52:13.157246 sshd-session[1613]: pam_unix(sshd:session): session closed for user core Mar 17 17:52:13.175372 systemd[1]: sshd@2-10.0.0.117:22-10.0.0.1:56608.service: Deactivated successfully. Mar 17 17:52:13.177555 systemd[1]: session-3.scope: Deactivated successfully. Mar 17 17:52:13.179256 systemd-logind[1472]: Session 3 logged out. Waiting for processes to exit. Mar 17 17:52:13.187156 systemd[1]: Started sshd@3-10.0.0.117:22-10.0.0.1:56620.service - OpenSSH per-connection server daemon (10.0.0.1:56620). Mar 17 17:52:13.188200 systemd-logind[1472]: Removed session 3. Mar 17 17:52:13.224527 sshd[1620]: Accepted publickey for core from 10.0.0.1 port 56620 ssh2: RSA SHA256:pvoNHoTmHcKIZ8E4rah4Xh4kuY0L81ONuagOsU2gN/o Mar 17 17:52:13.226121 sshd-session[1620]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:52:13.230017 systemd-logind[1472]: New session 4 of user core. Mar 17 17:52:13.245887 systemd[1]: Started session-4.scope - Session 4 of User core. 
Mar 17 17:52:13.300617 sshd[1622]: Connection closed by 10.0.0.1 port 56620 Mar 17 17:52:13.300969 sshd-session[1620]: pam_unix(sshd:session): session closed for user core Mar 17 17:52:13.312842 systemd[1]: sshd@3-10.0.0.117:22-10.0.0.1:56620.service: Deactivated successfully. Mar 17 17:52:13.314894 systemd[1]: session-4.scope: Deactivated successfully. Mar 17 17:52:13.316232 systemd-logind[1472]: Session 4 logged out. Waiting for processes to exit. Mar 17 17:52:13.317840 systemd[1]: Started sshd@4-10.0.0.117:22-10.0.0.1:56630.service - OpenSSH per-connection server daemon (10.0.0.1:56630). Mar 17 17:52:13.318769 systemd-logind[1472]: Removed session 4. Mar 17 17:52:13.356306 sshd[1627]: Accepted publickey for core from 10.0.0.1 port 56630 ssh2: RSA SHA256:pvoNHoTmHcKIZ8E4rah4Xh4kuY0L81ONuagOsU2gN/o Mar 17 17:52:13.357936 sshd-session[1627]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:52:13.361800 systemd-logind[1472]: New session 5 of user core. Mar 17 17:52:13.377980 systemd[1]: Started session-5.scope - Session 5 of User core. Mar 17 17:52:13.437112 sudo[1630]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Mar 17 17:52:13.437542 sudo[1630]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 17 17:52:13.460514 sudo[1630]: pam_unix(sudo:session): session closed for user root Mar 17 17:52:13.462159 sshd[1629]: Connection closed by 10.0.0.1 port 56630 Mar 17 17:52:13.462611 sshd-session[1627]: pam_unix(sshd:session): session closed for user core Mar 17 17:52:13.474411 systemd[1]: sshd@4-10.0.0.117:22-10.0.0.1:56630.service: Deactivated successfully. Mar 17 17:52:13.475953 systemd[1]: session-5.scope: Deactivated successfully. Mar 17 17:52:13.477511 systemd-logind[1472]: Session 5 logged out. Waiting for processes to exit. Mar 17 17:52:13.478997 systemd[1]: Started sshd@5-10.0.0.117:22-10.0.0.1:56634.service - OpenSSH per-connection server daemon (10.0.0.1:56634). 
Mar 17 17:52:13.479810 systemd-logind[1472]: Removed session 5. Mar 17 17:52:13.517115 sshd[1635]: Accepted publickey for core from 10.0.0.1 port 56634 ssh2: RSA SHA256:pvoNHoTmHcKIZ8E4rah4Xh4kuY0L81ONuagOsU2gN/o Mar 17 17:52:13.518631 sshd-session[1635]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:52:13.522616 systemd-logind[1472]: New session 6 of user core. Mar 17 17:52:13.529880 systemd[1]: Started session-6.scope - Session 6 of User core. Mar 17 17:52:13.584743 sudo[1639]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Mar 17 17:52:13.585173 sudo[1639]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 17 17:52:13.590012 sudo[1639]: pam_unix(sudo:session): session closed for user root Mar 17 17:52:13.597335 sudo[1638]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Mar 17 17:52:13.597687 sudo[1638]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 17 17:52:13.620122 systemd[1]: Starting audit-rules.service - Load Audit Rules... Mar 17 17:52:13.658192 augenrules[1661]: No rules Mar 17 17:52:13.660421 systemd[1]: audit-rules.service: Deactivated successfully. Mar 17 17:52:13.660720 systemd[1]: Finished audit-rules.service - Load Audit Rules. Mar 17 17:52:13.663724 sudo[1638]: pam_unix(sudo:session): session closed for user root Mar 17 17:52:13.665530 sshd[1637]: Connection closed by 10.0.0.1 port 56634 Mar 17 17:52:13.665954 sshd-session[1635]: pam_unix(sshd:session): session closed for user core Mar 17 17:52:13.676271 systemd[1]: sshd@5-10.0.0.117:22-10.0.0.1:56634.service: Deactivated successfully. Mar 17 17:52:13.678687 systemd[1]: session-6.scope: Deactivated successfully. Mar 17 17:52:13.680314 systemd-logind[1472]: Session 6 logged out. Waiting for processes to exit. 
Mar 17 17:52:13.692191 systemd[1]: Started sshd@6-10.0.0.117:22-10.0.0.1:56640.service - OpenSSH per-connection server daemon (10.0.0.1:56640). Mar 17 17:52:13.693373 systemd-logind[1472]: Removed session 6. Mar 17 17:52:13.730044 sshd[1669]: Accepted publickey for core from 10.0.0.1 port 56640 ssh2: RSA SHA256:pvoNHoTmHcKIZ8E4rah4Xh4kuY0L81ONuagOsU2gN/o Mar 17 17:52:13.731633 sshd-session[1669]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:52:13.736042 systemd-logind[1472]: New session 7 of user core. Mar 17 17:52:13.754029 systemd[1]: Started session-7.scope - Session 7 of User core. Mar 17 17:52:13.808885 sudo[1673]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Mar 17 17:52:13.809223 sudo[1673]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 17 17:52:14.260022 systemd[1]: Starting docker.service - Docker Application Container Engine... Mar 17 17:52:14.260148 (dockerd)[1693]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Mar 17 17:52:14.773491 dockerd[1693]: time="2025-03-17T17:52:14.773403391Z" level=info msg="Starting up" Mar 17 17:52:15.464143 dockerd[1693]: time="2025-03-17T17:52:15.464093194Z" level=info msg="Loading containers: start." Mar 17 17:52:15.776777 kernel: Initializing XFRM netlink socket Mar 17 17:52:15.866501 systemd-networkd[1412]: docker0: Link UP Mar 17 17:52:15.900337 dockerd[1693]: time="2025-03-17T17:52:15.900283070Z" level=info msg="Loading containers: done." Mar 17 17:52:15.919858 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3412954617-merged.mount: Deactivated successfully. 
Mar 17 17:52:15.922009 dockerd[1693]: time="2025-03-17T17:52:15.921962559Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Mar 17 17:52:15.922133 dockerd[1693]: time="2025-03-17T17:52:15.922089447Z" level=info msg="Docker daemon" commit=8b539b8df24032dabeaaa099cf1d0535ef0286a3 containerd-snapshotter=false storage-driver=overlay2 version=27.2.1 Mar 17 17:52:15.922331 dockerd[1693]: time="2025-03-17T17:52:15.922303047Z" level=info msg="Daemon has completed initialization" Mar 17 17:52:15.969712 dockerd[1693]: time="2025-03-17T17:52:15.969615749Z" level=info msg="API listen on /run/docker.sock" Mar 17 17:52:15.969861 systemd[1]: Started docker.service - Docker Application Container Engine. Mar 17 17:52:17.047933 containerd[1488]: time="2025-03-17T17:52:17.047886146Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.11\"" Mar 17 17:52:17.670993 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount192115622.mount: Deactivated successfully. 
Mar 17 17:52:19.056986 containerd[1488]: time="2025-03-17T17:52:19.056937276Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:52:19.057939 containerd[1488]: time="2025-03-17T17:52:19.057910781Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.11: active requests=0, bytes read=32674573" Mar 17 17:52:19.059422 containerd[1488]: time="2025-03-17T17:52:19.059398621Z" level=info msg="ImageCreate event name:\"sha256:4db5a05c271eac8f5da2f95895ea1ccb9a38f48db3135ba3bdfe35941a396ea8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:52:19.062863 containerd[1488]: time="2025-03-17T17:52:19.062813283Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:77c54346965036acc7ac95c3200597ede36db9246179248dde21c1a3ecc1caf0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:52:19.063774 containerd[1488]: time="2025-03-17T17:52:19.063723971Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.11\" with image id \"sha256:4db5a05c271eac8f5da2f95895ea1ccb9a38f48db3135ba3bdfe35941a396ea8\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.11\", repo digest \"registry.k8s.io/kube-apiserver@sha256:77c54346965036acc7ac95c3200597ede36db9246179248dde21c1a3ecc1caf0\", size \"32671373\" in 2.01579207s" Mar 17 17:52:19.063774 containerd[1488]: time="2025-03-17T17:52:19.063771219Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.11\" returns image reference \"sha256:4db5a05c271eac8f5da2f95895ea1ccb9a38f48db3135ba3bdfe35941a396ea8\"" Mar 17 17:52:19.092316 containerd[1488]: time="2025-03-17T17:52:19.092263723Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.11\"" Mar 17 17:52:20.382123 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. 
Mar 17 17:52:20.419231 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 17 17:52:20.590801 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 17 17:52:20.603224 (kubelet)[1969]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 17 17:52:20.727456 kubelet[1969]: E0317 17:52:20.727287 1969 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 17 17:52:20.734512 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 17 17:52:20.734772 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 17 17:52:21.756931 containerd[1488]: time="2025-03-17T17:52:21.756871782Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:52:21.758864 containerd[1488]: time="2025-03-17T17:52:21.758826297Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.11: active requests=0, bytes read=29619772" Mar 17 17:52:21.760430 containerd[1488]: time="2025-03-17T17:52:21.760397924Z" level=info msg="ImageCreate event name:\"sha256:de1025c2d496829d3250130380737609ffcdd10a4dce6f2dcd03f23a85a15e6a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:52:21.763614 containerd[1488]: time="2025-03-17T17:52:21.763565122Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:d8874f3fb45591ecdac67a3035c730808f18b3ab13147495c7d77eb1960d4f6f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:52:21.764632 containerd[1488]: 
time="2025-03-17T17:52:21.764592108Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.11\" with image id \"sha256:de1025c2d496829d3250130380737609ffcdd10a4dce6f2dcd03f23a85a15e6a\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.11\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:d8874f3fb45591ecdac67a3035c730808f18b3ab13147495c7d77eb1960d4f6f\", size \"31107380\" in 2.672280455s" Mar 17 17:52:21.764632 containerd[1488]: time="2025-03-17T17:52:21.764626322Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.11\" returns image reference \"sha256:de1025c2d496829d3250130380737609ffcdd10a4dce6f2dcd03f23a85a15e6a\"" Mar 17 17:52:21.788898 containerd[1488]: time="2025-03-17T17:52:21.788844722Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.11\"" Mar 17 17:52:22.881856 containerd[1488]: time="2025-03-17T17:52:22.881775436Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:52:22.882930 containerd[1488]: time="2025-03-17T17:52:22.882435364Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.11: active requests=0, bytes read=17903309" Mar 17 17:52:22.883606 containerd[1488]: time="2025-03-17T17:52:22.883554572Z" level=info msg="ImageCreate event name:\"sha256:11492f0faf138e933cadd6f533f03e401da9a35e53711e833f18afa6b185b2b7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:52:22.886249 containerd[1488]: time="2025-03-17T17:52:22.886215031Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:c699f8c97ae7ec819c8bd878d3db104ba72fc440d810d9030e09286b696017b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:52:22.887256 containerd[1488]: time="2025-03-17T17:52:22.887217741Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.11\" with image id 
\"sha256:11492f0faf138e933cadd6f533f03e401da9a35e53711e833f18afa6b185b2b7\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.11\", repo digest \"registry.k8s.io/kube-scheduler@sha256:c699f8c97ae7ec819c8bd878d3db104ba72fc440d810d9030e09286b696017b5\", size \"19390935\" in 1.098331972s" Mar 17 17:52:22.887256 containerd[1488]: time="2025-03-17T17:52:22.887251925Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.11\" returns image reference \"sha256:11492f0faf138e933cadd6f533f03e401da9a35e53711e833f18afa6b185b2b7\"" Mar 17 17:52:22.910674 containerd[1488]: time="2025-03-17T17:52:22.910631281Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.11\"" Mar 17 17:52:24.164245 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount750737085.mount: Deactivated successfully. Mar 17 17:52:24.981689 containerd[1488]: time="2025-03-17T17:52:24.981601199Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:52:24.984591 containerd[1488]: time="2025-03-17T17:52:24.984534970Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.11: active requests=0, bytes read=29185372" Mar 17 17:52:24.985926 containerd[1488]: time="2025-03-17T17:52:24.985885582Z" level=info msg="ImageCreate event name:\"sha256:01045f200a8856c3f5ccfa7be03d72274f1f16fc7a047659e709d603d5c019dc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:52:24.990239 containerd[1488]: time="2025-03-17T17:52:24.990182489Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:ea4da798040a18ed3f302e8d5f67307c7275a2a53bcf3d51bcec223acda84a55\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:52:24.990967 containerd[1488]: time="2025-03-17T17:52:24.990936743Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.11\" with image id \"sha256:01045f200a8856c3f5ccfa7be03d72274f1f16fc7a047659e709d603d5c019dc\", repo tag 
\"registry.k8s.io/kube-proxy:v1.30.11\", repo digest \"registry.k8s.io/kube-proxy@sha256:ea4da798040a18ed3f302e8d5f67307c7275a2a53bcf3d51bcec223acda84a55\", size \"29184391\" in 2.080265517s" Mar 17 17:52:24.991030 containerd[1488]: time="2025-03-17T17:52:24.990969635Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.11\" returns image reference \"sha256:01045f200a8856c3f5ccfa7be03d72274f1f16fc7a047659e709d603d5c019dc\"" Mar 17 17:52:25.018266 containerd[1488]: time="2025-03-17T17:52:25.018204410Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Mar 17 17:52:25.596235 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3746258542.mount: Deactivated successfully. Mar 17 17:52:26.950201 containerd[1488]: time="2025-03-17T17:52:26.950142854Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:52:26.950957 containerd[1488]: time="2025-03-17T17:52:26.950924710Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185761" Mar 17 17:52:26.953529 containerd[1488]: time="2025-03-17T17:52:26.953494519Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:52:26.956442 containerd[1488]: time="2025-03-17T17:52:26.956394516Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:52:26.957565 containerd[1488]: time="2025-03-17T17:52:26.957527510Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest 
\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 1.939276032s" Mar 17 17:52:26.957636 containerd[1488]: time="2025-03-17T17:52:26.957564500Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Mar 17 17:52:26.980633 containerd[1488]: time="2025-03-17T17:52:26.980571318Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Mar 17 17:52:27.463190 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount117437634.mount: Deactivated successfully. Mar 17 17:52:27.469835 containerd[1488]: time="2025-03-17T17:52:27.469747873Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:52:27.470625 containerd[1488]: time="2025-03-17T17:52:27.470543435Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322290" Mar 17 17:52:27.472058 containerd[1488]: time="2025-03-17T17:52:27.472004705Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:52:27.474671 containerd[1488]: time="2025-03-17T17:52:27.474620810Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:52:27.475516 containerd[1488]: time="2025-03-17T17:52:27.475446899Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 494.601217ms" Mar 17 
17:52:27.475516 containerd[1488]: time="2025-03-17T17:52:27.475501732Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Mar 17 17:52:27.497994 containerd[1488]: time="2025-03-17T17:52:27.497951776Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\"" Mar 17 17:52:28.027571 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3634056995.mount: Deactivated successfully. Mar 17 17:52:30.985060 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Mar 17 17:52:30.990968 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 17 17:52:31.141480 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 17 17:52:31.147791 (kubelet)[2123]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 17 17:52:31.202546 kubelet[2123]: E0317 17:52:31.202483 2123 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 17 17:52:31.207338 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 17 17:52:31.207605 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Mar 17 17:52:32.607326 containerd[1488]: time="2025-03-17T17:52:32.607238687Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:52:32.608371 containerd[1488]: time="2025-03-17T17:52:32.608288877Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=57238571" Mar 17 17:52:32.609709 containerd[1488]: time="2025-03-17T17:52:32.609658665Z" level=info msg="ImageCreate event name:\"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:52:32.613003 containerd[1488]: time="2025-03-17T17:52:32.612963842Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:52:32.614216 containerd[1488]: time="2025-03-17T17:52:32.614174382Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"57236178\" in 5.116179645s" Mar 17 17:52:32.614216 containerd[1488]: time="2025-03-17T17:52:32.614213004Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\"" Mar 17 17:52:35.348843 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 17 17:52:35.360002 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 17 17:52:35.377537 systemd[1]: Reloading requested from client PID 2216 ('systemctl') (unit session-7.scope)... Mar 17 17:52:35.377555 systemd[1]: Reloading... 
Mar 17 17:52:35.464180 zram_generator::config[2258]: No configuration found.
Mar 17 17:52:35.761830 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 17 17:52:35.837434 systemd[1]: Reloading finished in 459 ms.
Mar 17 17:52:35.884574 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 17 17:52:35.889304 systemd[1]: kubelet.service: Deactivated successfully.
Mar 17 17:52:35.889539 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 17 17:52:35.891120 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 17 17:52:36.039489 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 17 17:52:36.055231 (kubelet)[2305]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Mar 17 17:52:36.099708 kubelet[2305]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 17 17:52:36.099708 kubelet[2305]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Mar 17 17:52:36.099708 kubelet[2305]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
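The docker.socket warning above is systemd flagging a legacy path: line 6 of the unit points ListenStream= at /var/run/, which is a symlink to /run/, and systemd rewrites the value in memory on every reload. To make the warning go away permanently, the path should reference /run/ directly, e.g. via a drop-in like the following sketch (the drop-in filename is arbitrary, and on Flatcar the override would go under /etc/systemd since /usr is read-only):

```ini
# /etc/systemd/system/docker.socket.d/10-runpath.conf (illustrative drop-in)
[Socket]
# Clear the inherited list first, then set the non-legacy path.
ListenStream=
ListenStream=/run/docker.sock
```

After writing the drop-in, `systemctl daemon-reload` picks it up.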
Mar 17 17:52:36.100201 kubelet[2305]: I0317 17:52:36.099765 2305 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Mar 17 17:52:36.423194 kubelet[2305]: I0317 17:52:36.423087 2305 server.go:484] "Kubelet version" kubeletVersion="v1.30.1"
Mar 17 17:52:36.423194 kubelet[2305]: I0317 17:52:36.423116 2305 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Mar 17 17:52:36.423764 kubelet[2305]: I0317 17:52:36.423728 2305 server.go:927] "Client rotation is on, will bootstrap in background"
Mar 17 17:52:36.437915 kubelet[2305]: I0317 17:52:36.437818 2305 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Mar 17 17:52:36.438579 kubelet[2305]: E0317 17:52:36.438363 2305 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.117:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.117:6443: connect: connection refused
Mar 17 17:52:36.448280 kubelet[2305]: I0317 17:52:36.448253 2305 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Mar 17 17:52:36.448535 kubelet[2305]: I0317 17:52:36.448503 2305 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Mar 17 17:52:36.448691 kubelet[2305]: I0317 17:52:36.448534 2305 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Mar 17 17:52:36.449200 kubelet[2305]: I0317 17:52:36.449183 2305 topology_manager.go:138] "Creating topology manager with none policy"
Mar 17 17:52:36.449200 kubelet[2305]: I0317 17:52:36.449198 2305 container_manager_linux.go:301] "Creating device plugin manager"
Mar 17 17:52:36.449352 kubelet[2305]: I0317 17:52:36.449338 2305 state_mem.go:36] "Initialized new in-memory state store"
Mar 17 17:52:36.449992 kubelet[2305]: I0317 17:52:36.449969 2305 kubelet.go:400] "Attempting to sync node with API server"
Mar 17 17:52:36.449992 kubelet[2305]: I0317 17:52:36.449984 2305 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
Mar 17 17:52:36.450061 kubelet[2305]: I0317 17:52:36.450003 2305 kubelet.go:312] "Adding apiserver pod source"
Mar 17 17:52:36.450061 kubelet[2305]: I0317 17:52:36.450017 2305 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Mar 17 17:52:36.452138 kubelet[2305]: W0317 17:52:36.452089 2305 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.117:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.117:6443: connect: connection refused
Mar 17 17:52:36.452279 kubelet[2305]: E0317 17:52:36.452242 2305 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.117:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.117:6443: connect: connection refused
Mar 17 17:52:36.453320 kubelet[2305]: W0317 17:52:36.453275 2305 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.117:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.117:6443: connect: connection refused
Mar 17 17:52:36.453320 kubelet[2305]: E0317 17:52:36.453319 2305 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.117:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.117:6443: connect: connection refused
Mar 17 17:52:36.454576 kubelet[2305]: I0317 17:52:36.454548 2305 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1"
Mar 17 17:52:36.455832 kubelet[2305]: I0317 17:52:36.455810 2305 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Mar 17 17:52:36.455881 kubelet[2305]: W0317 17:52:36.455871 2305 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Mar 17 17:52:36.456880 kubelet[2305]: I0317 17:52:36.456561 2305 server.go:1264] "Started kubelet"
Mar 17 17:52:36.457576 kubelet[2305]: I0317 17:52:36.457280 2305 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Mar 17 17:52:36.458873 kubelet[2305]: I0317 17:52:36.458833 2305 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Mar 17 17:52:36.460396 kubelet[2305]: I0317 17:52:36.460280 2305 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Mar 17 17:52:36.461252 kubelet[2305]: I0317 17:52:36.461177 2305 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Mar 17 17:52:36.463739 kubelet[2305]: E0317 17:52:36.463414 2305 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.117:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.117:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.182da88d7a370a66 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-03-17 17:52:36.456532582 +0000 UTC m=+0.396432913,LastTimestamp:2025-03-17 17:52:36.456532582 +0000 UTC m=+0.396432913,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Mar 17 17:52:36.464304 kubelet[2305]: I0317 17:52:36.464288 2305 volume_manager.go:291] "Starting Kubelet Volume Manager"
Mar 17 17:52:36.464459 kubelet[2305]: I0317 17:52:36.464432 2305 desired_state_of_world_populator.go:149] "Desired state populator starts to run"
Mar 17 17:52:36.464510 kubelet[2305]: I0317 17:52:36.464497 2305 reconciler.go:26] "Reconciler: start to sync state"
Mar 17 17:52:36.465720 kubelet[2305]: E0317 17:52:36.465150 2305 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.117:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.117:6443: connect: connection refused" interval="200ms"
Mar 17 17:52:36.465720 kubelet[2305]: W0317 17:52:36.465276 2305 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.117:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.117:6443: connect: connection refused
Mar 17 17:52:36.465720 kubelet[2305]: E0317 17:52:36.465331 2305 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.117:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.117:6443: connect: connection refused
Mar 17 17:52:36.466191 kubelet[2305]: I0317 17:52:36.466115 2305 factory.go:221] Registration of the systemd container factory successfully
Mar 17 17:52:36.466248 kubelet[2305]: I0317 17:52:36.466201 2305 server.go:455] "Adding debug handlers to kubelet server"
Mar 17 17:52:36.466248 kubelet[2305]: I0317 17:52:36.466221 2305 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Mar 17 17:52:36.467488 kubelet[2305]: I0317 17:52:36.467402 2305 factory.go:221] Registration of the containerd container factory successfully
Mar 17 17:52:36.469163 kubelet[2305]: E0317 17:52:36.469120 2305 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Mar 17 17:52:36.484667 kubelet[2305]: I0317 17:52:36.484583 2305 cpu_manager.go:214] "Starting CPU manager" policy="none"
Mar 17 17:52:36.484667 kubelet[2305]: I0317 17:52:36.484606 2305 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Mar 17 17:52:36.484667 kubelet[2305]: I0317 17:52:36.484639 2305 state_mem.go:36] "Initialized new in-memory state store"
Mar 17 17:52:36.487237 kubelet[2305]: I0317 17:52:36.487206 2305 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Mar 17 17:52:36.488912 kubelet[2305]: I0317 17:52:36.488718 2305 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Mar 17 17:52:36.488912 kubelet[2305]: I0317 17:52:36.488840 2305 status_manager.go:217] "Starting to sync pod status with apiserver"
Mar 17 17:52:36.489014 kubelet[2305]: I0317 17:52:36.488916 2305 kubelet.go:2337] "Starting kubelet main sync loop"
Mar 17 17:52:36.489014 kubelet[2305]: E0317 17:52:36.488979 2305 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Mar 17 17:52:36.489561 kubelet[2305]: W0317 17:52:36.489528 2305 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.117:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.117:6443: connect: connection refused
Mar 17 17:52:36.489607 kubelet[2305]: E0317 17:52:36.489572 2305 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.117:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.117:6443: connect: connection refused
Mar 17 17:52:36.565981 kubelet[2305]: I0317 17:52:36.565952 2305 kubelet_node_status.go:73] "Attempting to register node" node="localhost"
Mar 17 17:52:36.566428 kubelet[2305]: E0317 17:52:36.566404 2305 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.117:6443/api/v1/nodes\": dial tcp 10.0.0.117:6443: connect: connection refused" node="localhost"
Mar 17 17:52:36.589704 kubelet[2305]: E0317 17:52:36.589636 2305 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Mar 17 17:52:36.666499 kubelet[2305]: E0317 17:52:36.666428 2305 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.117:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.117:6443: connect: connection refused" interval="400ms"
Mar 17 17:52:36.768218 kubelet[2305]: I0317 17:52:36.768101 2305 kubelet_node_status.go:73] "Attempting to register node" node="localhost"
Mar 17 17:52:36.768481 kubelet[2305]: E0317 17:52:36.768461 2305 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.117:6443/api/v1/nodes\": dial tcp 10.0.0.117:6443: connect: connection refused" node="localhost"
Mar 17 17:52:36.790597 kubelet[2305]: E0317 17:52:36.790569 2305 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Mar 17 17:52:37.067800 kubelet[2305]: E0317 17:52:37.067634 2305 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.117:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.117:6443: connect: connection refused" interval="800ms"
Mar 17 17:52:37.071826 kubelet[2305]: I0317 17:52:37.071805 2305 policy_none.go:49] "None policy: Start"
Mar 17 17:52:37.072633 kubelet[2305]: I0317 17:52:37.072609 2305 memory_manager.go:170] "Starting memorymanager" policy="None"
Mar 17 17:52:37.072692 kubelet[2305]: I0317 17:52:37.072641 2305 state_mem.go:35] "Initializing new in-memory state store"
Mar 17 17:52:37.129016 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Mar 17 17:52:37.146403 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Mar 17 17:52:37.149735 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
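The lease controller's retry interval in the entries above doubles on each consecutive failure (200ms, then 400ms, then 800ms, and later 1.6s): standard client-side exponential backoff while the API server at 10.0.0.117:6443 is still unreachable. A minimal sketch of that doubling schedule (the cap value here is illustrative; the log only shows intervals up to 1.6s):

```python
def backoff_intervals(base=0.2, factor=2.0, cap=7.0, steps=4):
    """Return the retry intervals a doubling backoff produces,
    clamped at `cap` (cap chosen for illustration)."""
    out, interval = [], base
    for _ in range(steps):
        out.append(min(interval, cap))
        interval *= factor
    return out


print(backoff_intervals())  # [0.2, 0.4, 0.8, 1.6]
```

The intervals reset to the base once a lease request finally succeeds.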
Mar 17 17:52:37.166985 kubelet[2305]: I0317 17:52:37.166930 2305 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Mar 17 17:52:37.167461 kubelet[2305]: I0317 17:52:37.167279 2305 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Mar 17 17:52:37.167461 kubelet[2305]: I0317 17:52:37.167442 2305 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Mar 17 17:52:37.168982 kubelet[2305]: E0317 17:52:37.168952 2305 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Mar 17 17:52:37.169734 kubelet[2305]: I0317 17:52:37.169651 2305 kubelet_node_status.go:73] "Attempting to register node" node="localhost"
Mar 17 17:52:37.170130 kubelet[2305]: E0317 17:52:37.170085 2305 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.117:6443/api/v1/nodes\": dial tcp 10.0.0.117:6443: connect: connection refused" node="localhost"
Mar 17 17:52:37.191557 kubelet[2305]: I0317 17:52:37.191495 2305 topology_manager.go:215] "Topology Admit Handler" podUID="4b165de2c7352244d77c97a8b00e0161" podNamespace="kube-system" podName="kube-apiserver-localhost"
Mar 17 17:52:37.192643 kubelet[2305]: I0317 17:52:37.192605 2305 topology_manager.go:215] "Topology Admit Handler" podUID="23a18e2dc14f395c5f1bea711a5a9344" podNamespace="kube-system" podName="kube-controller-manager-localhost"
Mar 17 17:52:37.193523 kubelet[2305]: I0317 17:52:37.193483 2305 topology_manager.go:215] "Topology Admit Handler" podUID="d79ab404294384d4bcc36fb5b5509bbb" podNamespace="kube-system" podName="kube-scheduler-localhost"
Mar 17 17:52:37.201093 systemd[1]: Created slice kubepods-burstable-pod4b165de2c7352244d77c97a8b00e0161.slice - libcontainer container kubepods-burstable-pod4b165de2c7352244d77c97a8b00e0161.slice.
Mar 17 17:52:37.222600 systemd[1]: Created slice kubepods-burstable-pod23a18e2dc14f395c5f1bea711a5a9344.slice - libcontainer container kubepods-burstable-pod23a18e2dc14f395c5f1bea711a5a9344.slice.
Mar 17 17:52:37.227601 systemd[1]: Created slice kubepods-burstable-podd79ab404294384d4bcc36fb5b5509bbb.slice - libcontainer container kubepods-burstable-podd79ab404294384d4bcc36fb5b5509bbb.slice.
Mar 17 17:52:37.267927 kubelet[2305]: I0317 17:52:37.267846 2305 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/23a18e2dc14f395c5f1bea711a5a9344-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"23a18e2dc14f395c5f1bea711a5a9344\") " pod="kube-system/kube-controller-manager-localhost"
Mar 17 17:52:37.267927 kubelet[2305]: I0317 17:52:37.267908 2305 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/23a18e2dc14f395c5f1bea711a5a9344-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"23a18e2dc14f395c5f1bea711a5a9344\") " pod="kube-system/kube-controller-manager-localhost"
Mar 17 17:52:37.267927 kubelet[2305]: I0317 17:52:37.267937 2305 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4b165de2c7352244d77c97a8b00e0161-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"4b165de2c7352244d77c97a8b00e0161\") " pod="kube-system/kube-apiserver-localhost"
Mar 17 17:52:37.268131 kubelet[2305]: I0317 17:52:37.267960 2305 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4b165de2c7352244d77c97a8b00e0161-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"4b165de2c7352244d77c97a8b00e0161\") " pod="kube-system/kube-apiserver-localhost"
Mar 17 17:52:37.268131 kubelet[2305]: I0317 17:52:37.267983 2305 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4b165de2c7352244d77c97a8b00e0161-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"4b165de2c7352244d77c97a8b00e0161\") " pod="kube-system/kube-apiserver-localhost"
Mar 17 17:52:37.268131 kubelet[2305]: I0317 17:52:37.268073 2305 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/23a18e2dc14f395c5f1bea711a5a9344-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"23a18e2dc14f395c5f1bea711a5a9344\") " pod="kube-system/kube-controller-manager-localhost"
Mar 17 17:52:37.268131 kubelet[2305]: I0317 17:52:37.268115 2305 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/23a18e2dc14f395c5f1bea711a5a9344-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"23a18e2dc14f395c5f1bea711a5a9344\") " pod="kube-system/kube-controller-manager-localhost"
Mar 17 17:52:37.268223 kubelet[2305]: I0317 17:52:37.268165 2305 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/23a18e2dc14f395c5f1bea711a5a9344-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"23a18e2dc14f395c5f1bea711a5a9344\") " pod="kube-system/kube-controller-manager-localhost"
Mar 17 17:52:37.268223 kubelet[2305]: I0317 17:52:37.268210 2305 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d79ab404294384d4bcc36fb5b5509bbb-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"d79ab404294384d4bcc36fb5b5509bbb\") " pod="kube-system/kube-scheduler-localhost"
Mar 17 17:52:37.298606 kubelet[2305]: W0317 17:52:37.298473 2305 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.117:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.117:6443: connect: connection refused
Mar 17 17:52:37.298606 kubelet[2305]: E0317 17:52:37.298601 2305 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.117:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.117:6443: connect: connection refused
Mar 17 17:52:37.377574 kubelet[2305]: W0317 17:52:37.377412 2305 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.117:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.117:6443: connect: connection refused
Mar 17 17:52:37.377574 kubelet[2305]: E0317 17:52:37.377477 2305 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.117:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.117:6443: connect: connection refused
Mar 17 17:52:37.522363 kubelet[2305]: E0317 17:52:37.522290 2305 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 17:52:37.523071 containerd[1488]: time="2025-03-17T17:52:37.523021779Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:4b165de2c7352244d77c97a8b00e0161,Namespace:kube-system,Attempt:0,}"
Mar 17 17:52:37.526484 kubelet[2305]: E0317 17:52:37.526458 2305 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 17:52:37.526962 containerd[1488]: time="2025-03-17T17:52:37.526926260Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:23a18e2dc14f395c5f1bea711a5a9344,Namespace:kube-system,Attempt:0,}"
Mar 17 17:52:37.530349 kubelet[2305]: E0317 17:52:37.530316 2305 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 17:52:37.530874 containerd[1488]: time="2025-03-17T17:52:37.530810794Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:d79ab404294384d4bcc36fb5b5509bbb,Namespace:kube-system,Attempt:0,}"
Mar 17 17:52:37.683881 kubelet[2305]: W0317 17:52:37.683684 2305 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.117:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.117:6443: connect: connection refused
Mar 17 17:52:37.683881 kubelet[2305]: E0317 17:52:37.683804 2305 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.117:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.117:6443: connect: connection refused
Mar 17 17:52:37.868713 kubelet[2305]: E0317 17:52:37.868653 2305 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.117:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.117:6443: connect: connection refused" interval="1.6s"
Mar 17 17:52:37.937634 kubelet[2305]: W0317 17:52:37.937494 2305 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.117:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.117:6443: connect: connection refused
Mar 17 17:52:37.937634 kubelet[2305]: E0317 17:52:37.937545 2305 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.117:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.117:6443: connect: connection refused
Mar 17 17:52:37.972150 kubelet[2305]: I0317 17:52:37.972100 2305 kubelet_node_status.go:73] "Attempting to register node" node="localhost"
Mar 17 17:52:37.972428 kubelet[2305]: E0317 17:52:37.972400 2305 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.117:6443/api/v1/nodes\": dial tcp 10.0.0.117:6443: connect: connection refused" node="localhost"
Mar 17 17:52:37.984366 kubelet[2305]: E0317 17:52:37.984220 2305 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.117:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.117:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.182da88d7a370a66 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-03-17 17:52:36.456532582 +0000 UTC m=+0.396432913,LastTimestamp:2025-03-17 17:52:36.456532582 +0000 UTC m=+0.396432913,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Mar 17 17:52:38.270692 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1015682183.mount: Deactivated successfully.
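The recurring "Nameserver limits exceeded" warnings from dns.go mean the node's resolv.conf lists more nameservers than the classic resolver limit of three, so the kubelet keeps only the first three when building a pod's nameserver line (1.1.1.1 1.0.0.1 8.8.8.8 in the applied line above). A sketch of that truncation; the fourth server in this example is hypothetical, since the log only shows the applied result:

```python
MAX_NAMESERVERS = 3  # classic glibc resolver limit, mirrored by the kubelet


def applied_nameservers(servers):
    """Keep only the first MAX_NAMESERVERS entries, dropping the rest
    (the kubelet logs the omission as 'Nameserver limits exceeded')."""
    return servers[:MAX_NAMESERVERS]


# "8.8.4.4" is a made-up fourth entry for illustration.
host = ["1.1.1.1", "1.0.0.1", "8.8.8.8", "8.8.4.4"]
print(" ".join(applied_nameservers(host)))  # 1.1.1.1 1.0.0.1 8.8.8.8
```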
Mar 17 17:52:38.277055 containerd[1488]: time="2025-03-17T17:52:38.277009186Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Mar 17 17:52:38.280477 containerd[1488]: time="2025-03-17T17:52:38.280439948Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056"
Mar 17 17:52:38.281562 containerd[1488]: time="2025-03-17T17:52:38.281535422Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Mar 17 17:52:38.283427 containerd[1488]: time="2025-03-17T17:52:38.283380722Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Mar 17 17:52:38.284079 containerd[1488]: time="2025-03-17T17:52:38.284031382Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Mar 17 17:52:38.285196 containerd[1488]: time="2025-03-17T17:52:38.285162132Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Mar 17 17:52:38.286031 containerd[1488]: time="2025-03-17T17:52:38.285998000Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Mar 17 17:52:38.287400 containerd[1488]: time="2025-03-17T17:52:38.287371024Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Mar 17 17:52:38.288776 containerd[1488]: time="2025-03-17T17:52:38.288551407Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 765.422968ms"
Mar 17 17:52:38.292273 containerd[1488]: time="2025-03-17T17:52:38.292230906Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 761.291271ms"
Mar 17 17:52:38.293553 containerd[1488]: time="2025-03-17T17:52:38.293530112Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 766.506009ms"
Mar 17 17:52:38.416121 containerd[1488]: time="2025-03-17T17:52:38.415824164Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 17 17:52:38.416121 containerd[1488]: time="2025-03-17T17:52:38.415898433Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 17 17:52:38.416121 containerd[1488]: time="2025-03-17T17:52:38.415917288Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 17 17:52:38.416121 containerd[1488]: time="2025-03-17T17:52:38.416009511Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 17 17:52:38.417305 containerd[1488]: time="2025-03-17T17:52:38.415658453Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 17 17:52:38.417453 containerd[1488]: time="2025-03-17T17:52:38.417304730Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 17 17:52:38.417453 containerd[1488]: time="2025-03-17T17:52:38.417327072Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 17 17:52:38.417547 containerd[1488]: time="2025-03-17T17:52:38.417424094Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 17 17:52:38.418581 containerd[1488]: time="2025-03-17T17:52:38.418469934Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 17 17:52:38.418581 containerd[1488]: time="2025-03-17T17:52:38.418542631Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 17 17:52:38.418581 containerd[1488]: time="2025-03-17T17:52:38.418561326Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 17 17:52:38.418805 containerd[1488]: time="2025-03-17T17:52:38.418668577Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 17 17:52:38.440924 systemd[1]: Started cri-containerd-6048ceab357bf0dea65141cc5fd2bd4752aafc267c8d8e9c0f1aba63ab21a82b.scope - libcontainer container 6048ceab357bf0dea65141cc5fd2bd4752aafc267c8d8e9c0f1aba63ab21a82b.
Mar 17 17:52:38.446913 systemd[1]: Started cri-containerd-0f586a0c790984b62afd5f723aac3f8c9cba653d3e12db4b31dadae75c0dcd98.scope - libcontainer container 0f586a0c790984b62afd5f723aac3f8c9cba653d3e12db4b31dadae75c0dcd98.
Mar 17 17:52:38.449256 systemd[1]: Started cri-containerd-ecd874e832d5092c2c1260dd7c874c900639d75eb80670dc90b10d95cb2f85f8.scope - libcontainer container ecd874e832d5092c2c1260dd7c874c900639d75eb80670dc90b10d95cb2f85f8.
Mar 17 17:52:38.486868 containerd[1488]: time="2025-03-17T17:52:38.485414686Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:d79ab404294384d4bcc36fb5b5509bbb,Namespace:kube-system,Attempt:0,} returns sandbox id \"6048ceab357bf0dea65141cc5fd2bd4752aafc267c8d8e9c0f1aba63ab21a82b\""
Mar 17 17:52:38.488500 kubelet[2305]: E0317 17:52:38.488425 2305 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 17:52:38.492694 containerd[1488]: time="2025-03-17T17:52:38.492372462Z" level=info msg="CreateContainer within sandbox \"6048ceab357bf0dea65141cc5fd2bd4752aafc267c8d8e9c0f1aba63ab21a82b\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Mar 17 17:52:38.495840 containerd[1488]: time="2025-03-17T17:52:38.495641932Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:4b165de2c7352244d77c97a8b00e0161,Namespace:kube-system,Attempt:0,} returns sandbox id \"0f586a0c790984b62afd5f723aac3f8c9cba653d3e12db4b31dadae75c0dcd98\""
Mar 17 17:52:38.496694 kubelet[2305]: E0317 17:52:38.496659 2305 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 17:52:38.498744 containerd[1488]: time="2025-03-17T17:52:38.498703913Z" level=info msg="CreateContainer within sandbox \"0f586a0c790984b62afd5f723aac3f8c9cba653d3e12db4b31dadae75c0dcd98\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Mar 17 17:52:38.503996 containerd[1488]: time="2025-03-17T17:52:38.503965288Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:23a18e2dc14f395c5f1bea711a5a9344,Namespace:kube-system,Attempt:0,} returns sandbox id \"ecd874e832d5092c2c1260dd7c874c900639d75eb80670dc90b10d95cb2f85f8\""
Mar 17 17:52:38.504509 kubelet[2305]: E0317 17:52:38.504487 2305 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 17:52:38.506909 containerd[1488]: time="2025-03-17T17:52:38.506870906Z" level=info msg="CreateContainer within sandbox \"ecd874e832d5092c2c1260dd7c874c900639d75eb80670dc90b10d95cb2f85f8\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Mar 17 17:52:38.514201 containerd[1488]: time="2025-03-17T17:52:38.514113496Z" level=info msg="CreateContainer within sandbox \"6048ceab357bf0dea65141cc5fd2bd4752aafc267c8d8e9c0f1aba63ab21a82b\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"17fc5e0dd448b5390016e85ae01be70ba798ab258bd7c109b3e3a5b98088df42\""
Mar 17 17:52:38.514704 containerd[1488]: time="2025-03-17T17:52:38.514668156Z" level=info msg="StartContainer for \"17fc5e0dd448b5390016e85ae01be70ba798ab258bd7c109b3e3a5b98088df42\""
Mar 17 17:52:38.527919 containerd[1488]: time="2025-03-17T17:52:38.526937221Z" level=info msg="CreateContainer within sandbox \"0f586a0c790984b62afd5f723aac3f8c9cba653d3e12db4b31dadae75c0dcd98\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"ca02c074da86bbcf38569d2e4580a06a9d3831c1009fc559f19de1cd00902564\""
Mar 17 17:52:38.527919 containerd[1488]: time="2025-03-17T17:52:38.527430606Z" level=info msg="StartContainer for \"ca02c074da86bbcf38569d2e4580a06a9d3831c1009fc559f19de1cd00902564\""
Mar 17 17:52:38.539243 containerd[1488]: time="2025-03-17T17:52:38.539019335Z" level=info msg="CreateContainer within sandbox \"ecd874e832d5092c2c1260dd7c874c900639d75eb80670dc90b10d95cb2f85f8\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"54903cafc0b9f0267d2290781e464206c57135169af21b1057867a5418edb86f\""
Mar 17 17:52:38.539497 containerd[1488]: time="2025-03-17T17:52:38.539467004Z" level=info msg="StartContainer for \"54903cafc0b9f0267d2290781e464206c57135169af21b1057867a5418edb86f\""
Mar 17 17:52:38.542949 systemd[1]: Started cri-containerd-17fc5e0dd448b5390016e85ae01be70ba798ab258bd7c109b3e3a5b98088df42.scope - libcontainer container 17fc5e0dd448b5390016e85ae01be70ba798ab258bd7c109b3e3a5b98088df42.
Mar 17 17:52:38.559413 systemd[1]: Started cri-containerd-ca02c074da86bbcf38569d2e4580a06a9d3831c1009fc559f19de1cd00902564.scope - libcontainer container ca02c074da86bbcf38569d2e4580a06a9d3831c1009fc559f19de1cd00902564.
Mar 17 17:52:38.576868 systemd[1]: Started cri-containerd-54903cafc0b9f0267d2290781e464206c57135169af21b1057867a5418edb86f.scope - libcontainer container 54903cafc0b9f0267d2290781e464206c57135169af21b1057867a5418edb86f.
Mar 17 17:52:38.598326 containerd[1488]: time="2025-03-17T17:52:38.598173108Z" level=info msg="StartContainer for \"17fc5e0dd448b5390016e85ae01be70ba798ab258bd7c109b3e3a5b98088df42\" returns successfully"
Mar 17 17:52:38.615795 containerd[1488]: time="2025-03-17T17:52:38.615712534Z" level=info msg="StartContainer for \"ca02c074da86bbcf38569d2e4580a06a9d3831c1009fc559f19de1cd00902564\" returns successfully"
Mar 17 17:52:38.623162 containerd[1488]: time="2025-03-17T17:52:38.623119122Z" level=info msg="StartContainer for \"54903cafc0b9f0267d2290781e464206c57135169af21b1057867a5418edb86f\" returns successfully"
Mar 17 17:52:38.623687 kubelet[2305]: E0317 17:52:38.623667 2305 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.117:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.117:6443: connect: connection refused
Mar 17 17:52:39.506321 kubelet[2305]: E0317 17:52:39.506281 2305 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 17:52:39.508764 kubelet[2305]: E0317 17:52:39.506822 2305 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 17:52:39.508764 kubelet[2305]: E0317 17:52:39.508204 2305 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 17:52:39.574410 kubelet[2305]: I0317 17:52:39.574371 2305 kubelet_node_status.go:73] "Attempting to register node" node="localhost"
Mar 17 17:52:39.691554 kubelet[2305]: I0317 17:52:39.691427 2305 kubelet_node_status.go:76] "Successfully registered node" node="localhost"
Mar 17 17:52:39.702523 kubelet[2305]: E0317 17:52:39.702475 2305 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found"
Mar 17 17:52:39.803284 kubelet[2305]: E0317 17:52:39.803138 2305 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found"
Mar 17 17:52:39.903962 kubelet[2305]: E0317 17:52:39.903918 2305 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found"
Mar 17 17:52:40.004957 kubelet[2305]: E0317 17:52:40.004874 2305 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found"
Mar 17 17:52:40.105675 kubelet[2305]: E0317 17:52:40.105547 2305 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found"
Mar 17 17:52:40.206450 kubelet[2305]: E0317 17:52:40.206392 2305 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found"
Mar 17 17:52:40.307039 kubelet[2305]: E0317 17:52:40.306979 2305 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found"
Mar 17 17:52:40.407855 kubelet[2305]: E0317 17:52:40.407706 2305 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found"
Mar 17 17:52:40.507966 kubelet[2305]: E0317 17:52:40.507906 2305 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found"
Mar 17 17:52:40.510771 kubelet[2305]: E0317 17:52:40.510717 2305 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 17:52:40.510984 kubelet[2305]: E0317 17:52:40.510958 2305 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 17:52:40.608382 kubelet[2305]: E0317 17:52:40.608319 2305 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found"
Mar 17 17:52:40.709054 kubelet[2305]: E0317 17:52:40.708885 2305 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found"
Mar 17 17:52:41.454022 kubelet[2305]: I0317 17:52:41.453976 2305 apiserver.go:52] "Watching apiserver"
Mar 17 17:52:41.465051 kubelet[2305]: I0317 17:52:41.465015 2305 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
Mar 17 17:52:41.560285 kubelet[2305]: E0317 17:52:41.560241 2305 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 17:52:42.512850 kubelet[2305]: E0317 17:52:42.512802 2305 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 17:52:42.977738 systemd[1]: Reloading requested from client PID 2587 ('systemctl') (unit session-7.scope)...
Mar 17 17:52:42.977781 systemd[1]: Reloading...
Mar 17 17:52:43.050801 zram_generator::config[2629]: No configuration found.
Mar 17 17:52:43.161855 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 17 17:52:43.252329 systemd[1]: Reloading finished in 274 ms.
Mar 17 17:52:43.304691 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 17 17:52:43.305211 kubelet[2305]: E0317 17:52:43.304686 2305 event.go:319] "Unable to write event (broadcaster is shut down)" event="&Event{ObjectMeta:{localhost.182da88d7a370a66 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-03-17 17:52:36.456532582 +0000 UTC m=+0.396432913,LastTimestamp:2025-03-17 17:52:36.456532582 +0000 UTC m=+0.396432913,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Mar 17 17:52:43.305211 kubelet[2305]: I0317 17:52:43.304945 2305 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Mar 17 17:52:43.318518 systemd[1]: kubelet.service: Deactivated successfully.
Mar 17 17:52:43.318918 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 17 17:52:43.327001 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 17 17:52:43.473899 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 17 17:52:43.479385 (kubelet)[2671]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Mar 17 17:52:43.529289 kubelet[2671]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 17 17:52:43.529289 kubelet[2671]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Mar 17 17:52:43.529289 kubelet[2671]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 17 17:52:43.529289 kubelet[2671]: I0317 17:52:43.529241 2671 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Mar 17 17:52:43.534143 kubelet[2671]: I0317 17:52:43.533649 2671 server.go:484] "Kubelet version" kubeletVersion="v1.30.1"
Mar 17 17:52:43.534143 kubelet[2671]: I0317 17:52:43.533666 2671 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Mar 17 17:52:43.534143 kubelet[2671]: I0317 17:52:43.533830 2671 server.go:927] "Client rotation is on, will bootstrap in background"
Mar 17 17:52:43.534891 kubelet[2671]: I0317 17:52:43.534863 2671 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Mar 17 17:52:43.537310 kubelet[2671]: I0317 17:52:43.537264 2671 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Mar 17 17:52:43.545472 kubelet[2671]: I0317 17:52:43.545443 2671 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Mar 17 17:52:43.545707 kubelet[2671]: I0317 17:52:43.545665 2671 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Mar 17 17:52:43.545903 kubelet[2671]: I0317 17:52:43.545708 2671 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Mar 17 17:52:43.545975 kubelet[2671]: I0317 17:52:43.545921 2671 topology_manager.go:138] "Creating topology manager with none policy"
Mar 17 17:52:43.545975 kubelet[2671]: I0317 17:52:43.545931 2671 container_manager_linux.go:301] "Creating device plugin manager"
Mar 17 17:52:43.546019 kubelet[2671]: I0317 17:52:43.545979 2671 state_mem.go:36] "Initialized new in-memory state store"
Mar 17 17:52:43.546109 kubelet[2671]: I0317 17:52:43.546095 2671 kubelet.go:400] "Attempting to sync node with API server"
Mar 17 17:52:43.546139 kubelet[2671]: I0317 17:52:43.546109 2671 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
Mar 17 17:52:43.546139 kubelet[2671]: I0317 17:52:43.546130 2671 kubelet.go:312] "Adding apiserver pod source"
Mar 17 17:52:43.546180 kubelet[2671]: I0317 17:52:43.546144 2671 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Mar 17 17:52:43.550899 kubelet[2671]: I0317 17:52:43.550874 2671 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1"
Mar 17 17:52:43.551079 kubelet[2671]: I0317 17:52:43.551063 2671 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Mar 17 17:52:43.551448 kubelet[2671]: I0317 17:52:43.551432 2671 server.go:1264] "Started kubelet"
Mar 17 17:52:43.552781 kubelet[2671]: I0317 17:52:43.552068 2671 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Mar 17 17:52:43.552781 kubelet[2671]: I0317 17:52:43.552185 2671 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Mar 17 17:52:43.552781 kubelet[2671]: I0317 17:52:43.552410 2671 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Mar 17 17:52:43.555654 kubelet[2671]: I0317 17:52:43.555236 2671 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Mar 17 17:52:43.555654 kubelet[2671]: I0317 17:52:43.555570 2671 volume_manager.go:291] "Starting Kubelet Volume Manager"
Mar 17 17:52:43.556096 kubelet[2671]: I0317 17:52:43.556071 2671 desired_state_of_world_populator.go:149] "Desired state populator starts to run"
Mar 17 17:52:43.556390 kubelet[2671]: I0317 17:52:43.556368 2671 reconciler.go:26] "Reconciler: start to sync state"
Mar 17 17:52:43.557823 kubelet[2671]: I0317 17:52:43.556521 2671 factory.go:221] Registration of the systemd container factory successfully
Mar 17 17:52:43.557823 kubelet[2671]: I0317 17:52:43.556664 2671 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Mar 17 17:52:43.557823 kubelet[2671]: I0317 17:52:43.557455 2671 server.go:455] "Adding debug handlers to kubelet server"
Mar 17 17:52:43.561882 kubelet[2671]: I0317 17:52:43.561839 2671 factory.go:221] Registration of the containerd container factory successfully
Mar 17 17:52:43.568521 kubelet[2671]: E0317 17:52:43.567851 2671 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Mar 17 17:52:43.573644 kubelet[2671]: I0317 17:52:43.573589 2671 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Mar 17 17:52:43.574894 kubelet[2671]: I0317 17:52:43.574873 2671 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Mar 17 17:52:43.574958 kubelet[2671]: I0317 17:52:43.574921 2671 status_manager.go:217] "Starting to sync pod status with apiserver"
Mar 17 17:52:43.574958 kubelet[2671]: I0317 17:52:43.574941 2671 kubelet.go:2337] "Starting kubelet main sync loop"
Mar 17 17:52:43.575008 kubelet[2671]: E0317 17:52:43.574995 2671 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Mar 17 17:52:43.593932 kubelet[2671]: I0317 17:52:43.593901 2671 cpu_manager.go:214] "Starting CPU manager" policy="none"
Mar 17 17:52:43.593932 kubelet[2671]: I0317 17:52:43.593919 2671 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Mar 17 17:52:43.593932 kubelet[2671]: I0317 17:52:43.593940 2671 state_mem.go:36] "Initialized new in-memory state store"
Mar 17 17:52:43.594135 kubelet[2671]: I0317 17:52:43.594090 2671 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Mar 17 17:52:43.594135 kubelet[2671]: I0317 17:52:43.594100 2671 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Mar 17 17:52:43.594135 kubelet[2671]: I0317 17:52:43.594118 2671 policy_none.go:49] "None policy: Start"
Mar 17 17:52:43.594678 kubelet[2671]: I0317 17:52:43.594658 2671 memory_manager.go:170] "Starting memorymanager" policy="None"
Mar 17 17:52:43.594720 kubelet[2671]: I0317 17:52:43.594681 2671 state_mem.go:35] "Initializing new in-memory state store"
Mar 17 17:52:43.594879 kubelet[2671]: I0317 17:52:43.594852 2671 state_mem.go:75] "Updated machine memory state"
Mar 17 17:52:43.599055 kubelet[2671]: I0317 17:52:43.599026 2671 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Mar 17 17:52:43.599303 kubelet[2671]: I0317 17:52:43.599243 2671 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Mar 17 17:52:43.599484 kubelet[2671]: I0317 17:52:43.599390 2671 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Mar 17 17:52:43.660492 kubelet[2671]: I0317 17:52:43.660448 2671 kubelet_node_status.go:73] "Attempting to register node" node="localhost"
Mar 17 17:52:43.667642 kubelet[2671]: I0317 17:52:43.667618 2671 kubelet_node_status.go:112] "Node was previously registered" node="localhost"
Mar 17 17:52:43.667772 kubelet[2671]: I0317 17:52:43.667710 2671 kubelet_node_status.go:76] "Successfully registered node" node="localhost"
Mar 17 17:52:43.675868 kubelet[2671]: I0317 17:52:43.675821 2671 topology_manager.go:215] "Topology Admit Handler" podUID="4b165de2c7352244d77c97a8b00e0161" podNamespace="kube-system" podName="kube-apiserver-localhost"
Mar 17 17:52:43.675962 kubelet[2671]: I0317 17:52:43.675909 2671 topology_manager.go:215] "Topology Admit Handler" podUID="23a18e2dc14f395c5f1bea711a5a9344" podNamespace="kube-system" podName="kube-controller-manager-localhost"
Mar 17 17:52:43.675986 kubelet[2671]: I0317 17:52:43.675963 2671 topology_manager.go:215] "Topology Admit Handler" podUID="d79ab404294384d4bcc36fb5b5509bbb" podNamespace="kube-system" podName="kube-scheduler-localhost"
Mar 17 17:52:43.858135 kubelet[2671]: I0317 17:52:43.857987 2671 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/23a18e2dc14f395c5f1bea711a5a9344-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"23a18e2dc14f395c5f1bea711a5a9344\") " pod="kube-system/kube-controller-manager-localhost"
Mar 17 17:52:43.858135 kubelet[2671]: I0317 17:52:43.858020 2671 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/23a18e2dc14f395c5f1bea711a5a9344-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"23a18e2dc14f395c5f1bea711a5a9344\") " pod="kube-system/kube-controller-manager-localhost"
Mar 17 17:52:43.858135 kubelet[2671]: I0317 17:52:43.858041 2671 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d79ab404294384d4bcc36fb5b5509bbb-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"d79ab404294384d4bcc36fb5b5509bbb\") " pod="kube-system/kube-scheduler-localhost"
Mar 17 17:52:43.858135 kubelet[2671]: I0317 17:52:43.858058 2671 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4b165de2c7352244d77c97a8b00e0161-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"4b165de2c7352244d77c97a8b00e0161\") " pod="kube-system/kube-apiserver-localhost"
Mar 17 17:52:43.858135 kubelet[2671]: I0317 17:52:43.858079 2671 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/23a18e2dc14f395c5f1bea711a5a9344-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"23a18e2dc14f395c5f1bea711a5a9344\") " pod="kube-system/kube-controller-manager-localhost"
Mar 17 17:52:43.858371 kubelet[2671]: I0317 17:52:43.858099 2671 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/23a18e2dc14f395c5f1bea711a5a9344-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"23a18e2dc14f395c5f1bea711a5a9344\") " pod="kube-system/kube-controller-manager-localhost"
Mar 17 17:52:43.858371 kubelet[2671]: I0317 17:52:43.858119 2671 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/23a18e2dc14f395c5f1bea711a5a9344-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"23a18e2dc14f395c5f1bea711a5a9344\") " pod="kube-system/kube-controller-manager-localhost"
Mar 17 17:52:43.858371 kubelet[2671]: I0317 17:52:43.858140 2671 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4b165de2c7352244d77c97a8b00e0161-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"4b165de2c7352244d77c97a8b00e0161\") " pod="kube-system/kube-apiserver-localhost"
Mar 17 17:52:43.858371 kubelet[2671]: I0317 17:52:43.858177 2671 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4b165de2c7352244d77c97a8b00e0161-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"4b165de2c7352244d77c97a8b00e0161\") " pod="kube-system/kube-apiserver-localhost"
Mar 17 17:52:43.901497 kubelet[2671]: E0317 17:52:43.901375 2671 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost"
Mar 17 17:52:44.174064 kubelet[2671]: E0317 17:52:44.173946 2671 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 17:52:44.174064 kubelet[2671]: E0317 17:52:44.173965 2671 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 17:52:44.202269 kubelet[2671]: E0317 17:52:44.202089 2671 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 17:52:44.547409 kubelet[2671]: I0317 17:52:44.547290 2671 apiserver.go:52] "Watching apiserver"
Mar 17 17:52:44.556883 kubelet[2671]: I0317 17:52:44.556850 2671 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
Mar 17 17:52:44.583311 kubelet[2671]: E0317 17:52:44.583264 2671 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 17:52:44.584767 kubelet[2671]: E0317 17:52:44.584099 2671 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 17:52:44.584767 kubelet[2671]: E0317 17:52:44.584572 2671 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 17:52:44.658296 kubelet[2671]: I0317 17:52:44.658222 2671 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.6581831230000001 podStartE2EDuration="1.658183123s" podCreationTimestamp="2025-03-17 17:52:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 17:52:44.643553186 +0000 UTC m=+1.159558592" watchObservedRunningTime="2025-03-17 17:52:44.658183123 +0000 UTC m=+1.174188530"
Mar 17 17:52:44.658468 kubelet[2671]: I0317 17:52:44.658365 2671 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.6583445430000001 podStartE2EDuration="1.658344543s" podCreationTimestamp="2025-03-17 17:52:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 17:52:44.658132877 +0000 UTC m=+1.174138283" watchObservedRunningTime="2025-03-17 17:52:44.658344543 +0000 UTC m=+1.174349949"
Mar 17 17:52:45.585666 kubelet[2671]: E0317 17:52:45.584832 2671 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 17:52:46.586397 kubelet[2671]: E0317 17:52:46.586356 2671 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 17:52:48.280402 sudo[1673]: pam_unix(sudo:session): session closed for user root
Mar 17 17:52:48.281925 sshd[1672]: Connection closed by 10.0.0.1 port 56640
Mar 17 17:52:48.282432 sshd-session[1669]: pam_unix(sshd:session): session closed for user core
Mar 17 17:52:48.287056 systemd[1]: sshd@6-10.0.0.117:22-10.0.0.1:56640.service: Deactivated successfully.
Mar 17 17:52:48.289112 systemd[1]: session-7.scope: Deactivated successfully.
Mar 17 17:52:48.289324 systemd[1]: session-7.scope: Consumed 5.251s CPU time, 191.0M memory peak, 0B memory swap peak.
Mar 17 17:52:48.289810 systemd-logind[1472]: Session 7 logged out. Waiting for processes to exit.
Mar 17 17:52:48.290741 systemd-logind[1472]: Removed session 7.
Mar 17 17:52:48.434867 kubelet[2671]: E0317 17:52:48.434823 2671 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 17:52:48.588514 kubelet[2671]: E0317 17:52:48.588391 2671 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 17:52:48.622330 kubelet[2671]: I0317 17:52:48.622269 2671 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=7.622232584 podStartE2EDuration="7.622232584s" podCreationTimestamp="2025-03-17 17:52:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 17:52:44.67038083 +0000 UTC m=+1.186386236" watchObservedRunningTime="2025-03-17 17:52:48.622232584 +0000 UTC m=+5.138237990"
Mar 17 17:52:49.176833 kubelet[2671]: E0317 17:52:49.176600 2671 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 17:52:49.590329 kubelet[2671]: E0317 17:52:49.590194 2671 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 17:52:53.437697 update_engine[1475]: I20250317 17:52:53.437546 1475 update_attempter.cc:509] Updating boot flags...
Mar 17 17:52:53.491805 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (2765)
Mar 17 17:52:53.534399 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (2768)
Mar 17 17:52:55.991902 kubelet[2671]: E0317 17:52:55.991787 2671 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 17:52:56.545387 kubelet[2671]: I0317 17:52:56.545349 2671 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Mar 17 17:52:56.545687 containerd[1488]: time="2025-03-17T17:52:56.545652475Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Mar 17 17:52:56.546077 kubelet[2671]: I0317 17:52:56.545870 2671 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Mar 17 17:52:56.601945 kubelet[2671]: E0317 17:52:56.601798 2671 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 17:52:57.257540 kubelet[2671]: I0317 17:52:57.257476 2671 topology_manager.go:215] "Topology Admit Handler" podUID="d79f9af9-c314-4544-8ac0-d74138f20176" podNamespace="kube-system" podName="kube-proxy-dxtbp"
Mar 17 17:52:57.270927 systemd[1]: Created slice kubepods-besteffort-podd79f9af9_c314_4544_8ac0_d74138f20176.slice - libcontainer container kubepods-besteffort-podd79f9af9_c314_4544_8ac0_d74138f20176.slice.
Mar 17 17:52:57.284719 kubelet[2671]: I0317 17:52:57.284675 2671 topology_manager.go:215] "Topology Admit Handler" podUID="5a8bf047-afab-46ef-844d-de775003851a" podNamespace="tigera-operator" podName="tigera-operator-6479d6dc54-86x6k"
Mar 17 17:52:57.294526 systemd[1]: Created slice kubepods-besteffort-pod5a8bf047_afab_46ef_844d_de775003851a.slice - libcontainer container kubepods-besteffort-pod5a8bf047_afab_46ef_844d_de775003851a.slice.
Mar 17 17:52:57.435952 kubelet[2671]: I0317 17:52:57.435892 2671 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/d79f9af9-c314-4544-8ac0-d74138f20176-kube-proxy\") pod \"kube-proxy-dxtbp\" (UID: \"d79f9af9-c314-4544-8ac0-d74138f20176\") " pod="kube-system/kube-proxy-dxtbp"
Mar 17 17:52:57.435952 kubelet[2671]: I0317 17:52:57.435943 2671 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d79f9af9-c314-4544-8ac0-d74138f20176-xtables-lock\") pod \"kube-proxy-dxtbp\" (UID: \"d79f9af9-c314-4544-8ac0-d74138f20176\") " pod="kube-system/kube-proxy-dxtbp"
Mar 17 17:52:57.435952 kubelet[2671]: I0317 17:52:57.435957 2671 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d79f9af9-c314-4544-8ac0-d74138f20176-lib-modules\") pod \"kube-proxy-dxtbp\" (UID: \"d79f9af9-c314-4544-8ac0-d74138f20176\") " pod="kube-system/kube-proxy-dxtbp"
Mar 17 17:52:57.436209 kubelet[2671]: I0317 17:52:57.435976 2671 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-92dvm\" (UniqueName: \"kubernetes.io/projected/5a8bf047-afab-46ef-844d-de775003851a-kube-api-access-92dvm\") pod \"tigera-operator-6479d6dc54-86x6k\" (UID: \"5a8bf047-afab-46ef-844d-de775003851a\") " pod="tigera-operator/tigera-operator-6479d6dc54-86x6k"
Mar 17 17:52:57.436209 kubelet[2671]: I0317 17:52:57.436002 2671 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kchxt\" (UniqueName: \"kubernetes.io/projected/d79f9af9-c314-4544-8ac0-d74138f20176-kube-api-access-kchxt\") pod \"kube-proxy-dxtbp\" (UID: \"d79f9af9-c314-4544-8ac0-d74138f20176\") " pod="kube-system/kube-proxy-dxtbp"
Mar 17 17:52:57.436209 kubelet[2671]: I0317 17:52:57.436017 2671 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/5a8bf047-afab-46ef-844d-de775003851a-var-lib-calico\") pod \"tigera-operator-6479d6dc54-86x6k\" (UID: \"5a8bf047-afab-46ef-844d-de775003851a\") " pod="tigera-operator/tigera-operator-6479d6dc54-86x6k"
Mar 17 17:52:57.583414 kubelet[2671]: E0317 17:52:57.583285 2671 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 17:52:57.583997 containerd[1488]: time="2025-03-17T17:52:57.583936792Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-dxtbp,Uid:d79f9af9-c314-4544-8ac0-d74138f20176,Namespace:kube-system,Attempt:0,}"
Mar 17 17:52:57.599081 containerd[1488]: time="2025-03-17T17:52:57.599042766Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6479d6dc54-86x6k,Uid:5a8bf047-afab-46ef-844d-de775003851a,Namespace:tigera-operator,Attempt:0,}"
Mar 17 17:52:57.616509 containerd[1488]: time="2025-03-17T17:52:57.616211956Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 17 17:52:57.616509 containerd[1488]: time="2025-03-17T17:52:57.616271519Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 17 17:52:57.616509 containerd[1488]: time="2025-03-17T17:52:57.616284113Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 17 17:52:57.616509 containerd[1488]: time="2025-03-17T17:52:57.616371088Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 17 17:52:57.628111 containerd[1488]: time="2025-03-17T17:52:57.627992045Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 17 17:52:57.629145 containerd[1488]: time="2025-03-17T17:52:57.628939107Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 17 17:52:57.629290 containerd[1488]: time="2025-03-17T17:52:57.629226863Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 17 17:52:57.629538 containerd[1488]: time="2025-03-17T17:52:57.629472898Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 17 17:52:57.640906 systemd[1]: Started cri-containerd-e1ee2d678861facd94105c129e9daca275dfeaae4a4dd32be9331588256cfc8b.scope - libcontainer container e1ee2d678861facd94105c129e9daca275dfeaae4a4dd32be9331588256cfc8b.
Mar 17 17:52:57.646865 systemd[1]: Started cri-containerd-d58f6c54e7ea1bad9ab490e699063c6e95d8e5c4439869ee3f4bd0dbc5655efc.scope - libcontainer container d58f6c54e7ea1bad9ab490e699063c6e95d8e5c4439869ee3f4bd0dbc5655efc.
Mar 17 17:52:57.666189 containerd[1488]: time="2025-03-17T17:52:57.666148232Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-dxtbp,Uid:d79f9af9-c314-4544-8ac0-d74138f20176,Namespace:kube-system,Attempt:0,} returns sandbox id \"e1ee2d678861facd94105c129e9daca275dfeaae4a4dd32be9331588256cfc8b\""
Mar 17 17:52:57.666973 kubelet[2671]: E0317 17:52:57.666945 2671 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 17:52:57.669238 containerd[1488]: time="2025-03-17T17:52:57.669190762Z" level=info msg="CreateContainer within sandbox \"e1ee2d678861facd94105c129e9daca275dfeaae4a4dd32be9331588256cfc8b\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Mar 17 17:52:57.686892 containerd[1488]: time="2025-03-17T17:52:57.686815363Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6479d6dc54-86x6k,Uid:5a8bf047-afab-46ef-844d-de775003851a,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"d58f6c54e7ea1bad9ab490e699063c6e95d8e5c4439869ee3f4bd0dbc5655efc\""
Mar 17 17:52:57.688560 containerd[1488]: time="2025-03-17T17:52:57.688526863Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.5\""
Mar 17 17:52:57.694913 containerd[1488]: time="2025-03-17T17:52:57.694857166Z" level=info msg="CreateContainer within sandbox \"e1ee2d678861facd94105c129e9daca275dfeaae4a4dd32be9331588256cfc8b\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"2f38b56ec9b6e525ecf7be97d12888c0f80995ebba7c2ee28dd17f689a18919e\""
Mar 17 17:52:57.695530 containerd[1488]: time="2025-03-17T17:52:57.695445640Z" level=info msg="StartContainer for \"2f38b56ec9b6e525ecf7be97d12888c0f80995ebba7c2ee28dd17f689a18919e\""
Mar 17 17:52:57.728940 systemd[1]: Started cri-containerd-2f38b56ec9b6e525ecf7be97d12888c0f80995ebba7c2ee28dd17f689a18919e.scope - libcontainer container 2f38b56ec9b6e525ecf7be97d12888c0f80995ebba7c2ee28dd17f689a18919e.
Mar 17 17:52:57.761856 containerd[1488]: time="2025-03-17T17:52:57.761808741Z" level=info msg="StartContainer for \"2f38b56ec9b6e525ecf7be97d12888c0f80995ebba7c2ee28dd17f689a18919e\" returns successfully"
Mar 17 17:52:58.612777 kubelet[2671]: E0317 17:52:58.612703 2671 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 17:52:59.361779 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2241911737.mount: Deactivated successfully.
Mar 17 17:52:59.765137 containerd[1488]: time="2025-03-17T17:52:59.765077805Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 17 17:52:59.765892 containerd[1488]: time="2025-03-17T17:52:59.765820981Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.5: active requests=0, bytes read=21945008"
Mar 17 17:52:59.766937 containerd[1488]: time="2025-03-17T17:52:59.766898989Z" level=info msg="ImageCreate event name:\"sha256:dc4a8a56c133edb1bc4c3d6bc94bcd96f2bde82413370cb1783ac2d7f3a46d53\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 17 17:52:59.769085 containerd[1488]: time="2025-03-17T17:52:59.769047852Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:3341fa9475c0325b86228c8726389f9bae9fd6c430c66fe5cd5dc39d7bb6ad4b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 17 17:52:59.769709 containerd[1488]: time="2025-03-17T17:52:59.769665329Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.5\" with image id \"sha256:dc4a8a56c133edb1bc4c3d6bc94bcd96f2bde82413370cb1783ac2d7f3a46d53\", repo tag \"quay.io/tigera/operator:v1.36.5\", repo digest \"quay.io/tigera/operator@sha256:3341fa9475c0325b86228c8726389f9bae9fd6c430c66fe5cd5dc39d7bb6ad4b\", size \"21941003\" in 2.081112757s"
Mar 17 17:52:59.769769 containerd[1488]: time="2025-03-17T17:52:59.769707869Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.5\" returns image reference \"sha256:dc4a8a56c133edb1bc4c3d6bc94bcd96f2bde82413370cb1783ac2d7f3a46d53\""
Mar 17 17:52:59.771704 containerd[1488]: time="2025-03-17T17:52:59.771638130Z" level=info msg="CreateContainer within sandbox \"d58f6c54e7ea1bad9ab490e699063c6e95d8e5c4439869ee3f4bd0dbc5655efc\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
Mar 17 17:52:59.783680 containerd[1488]: time="2025-03-17T17:52:59.783609324Z" level=info msg="CreateContainer within sandbox \"d58f6c54e7ea1bad9ab490e699063c6e95d8e5c4439869ee3f4bd0dbc5655efc\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"f98a012e2f37d99c4ded2ae61e568f50f247efa49afa3ee468abdc28c1806021\""
Mar 17 17:52:59.784217 containerd[1488]: time="2025-03-17T17:52:59.784170715Z" level=info msg="StartContainer for \"f98a012e2f37d99c4ded2ae61e568f50f247efa49afa3ee468abdc28c1806021\""
Mar 17 17:52:59.817986 systemd[1]: Started cri-containerd-f98a012e2f37d99c4ded2ae61e568f50f247efa49afa3ee468abdc28c1806021.scope - libcontainer container f98a012e2f37d99c4ded2ae61e568f50f247efa49afa3ee468abdc28c1806021.
Mar 17 17:52:59.849898 containerd[1488]: time="2025-03-17T17:52:59.849851893Z" level=info msg="StartContainer for \"f98a012e2f37d99c4ded2ae61e568f50f247efa49afa3ee468abdc28c1806021\" returns successfully"
Mar 17 17:53:00.626325 kubelet[2671]: I0317 17:53:00.626264 2671 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-dxtbp" podStartSLOduration=3.6262444609999998 podStartE2EDuration="3.626244461s" podCreationTimestamp="2025-03-17 17:52:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 17:52:58.623086594 +0000 UTC m=+15.139092021" watchObservedRunningTime="2025-03-17 17:53:00.626244461 +0000 UTC m=+17.142249867"
Mar 17 17:53:02.752125 kubelet[2671]: I0317 17:53:02.752050 2671 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-6479d6dc54-86x6k" podStartSLOduration=3.669576309 podStartE2EDuration="5.752028459s" podCreationTimestamp="2025-03-17 17:52:57 +0000 UTC" firstStartedPulling="2025-03-17 17:52:57.688026095 +0000 UTC m=+14.204031502" lastFinishedPulling="2025-03-17 17:52:59.770478246 +0000 UTC m=+16.286483652" observedRunningTime="2025-03-17 17:53:00.626442334 +0000 UTC m=+17.142447740" watchObservedRunningTime="2025-03-17 17:53:02.752028459 +0000 UTC m=+19.268033865"
Mar 17 17:53:02.752957 kubelet[2671]: I0317 17:53:02.752215 2671 topology_manager.go:215] "Topology Admit Handler" podUID="acba2208-af20-4360-a31e-1673513b8d61" podNamespace="calico-system" podName="calico-typha-8c96cd685-4g6w9"
Mar 17 17:53:02.762444 systemd[1]: Created slice kubepods-besteffort-podacba2208_af20_4360_a31e_1673513b8d61.slice - libcontainer container kubepods-besteffort-podacba2208_af20_4360_a31e_1673513b8d61.slice.
Mar 17 17:53:02.770021 kubelet[2671]: I0317 17:53:02.766986 2671 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/acba2208-af20-4360-a31e-1673513b8d61-tigera-ca-bundle\") pod \"calico-typha-8c96cd685-4g6w9\" (UID: \"acba2208-af20-4360-a31e-1673513b8d61\") " pod="calico-system/calico-typha-8c96cd685-4g6w9"
Mar 17 17:53:02.770021 kubelet[2671]: I0317 17:53:02.767027 2671 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/acba2208-af20-4360-a31e-1673513b8d61-typha-certs\") pod \"calico-typha-8c96cd685-4g6w9\" (UID: \"acba2208-af20-4360-a31e-1673513b8d61\") " pod="calico-system/calico-typha-8c96cd685-4g6w9"
Mar 17 17:53:02.770021 kubelet[2671]: I0317 17:53:02.767048 2671 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-85t6l\" (UniqueName: \"kubernetes.io/projected/acba2208-af20-4360-a31e-1673513b8d61-kube-api-access-85t6l\") pod \"calico-typha-8c96cd685-4g6w9\" (UID: \"acba2208-af20-4360-a31e-1673513b8d61\") " pod="calico-system/calico-typha-8c96cd685-4g6w9"
Mar 17 17:53:02.886084 kubelet[2671]: I0317 17:53:02.885901 2671 topology_manager.go:215] "Topology Admit Handler" podUID="9064a20d-68ba-4e3a-aba5-a7850f0062ee" podNamespace="calico-system" podName="calico-node-m8g62"
Mar 17 17:53:02.894915 systemd[1]: Created slice kubepods-besteffort-pod9064a20d_68ba_4e3a_aba5_a7850f0062ee.slice - libcontainer container kubepods-besteffort-pod9064a20d_68ba_4e3a_aba5_a7850f0062ee.slice.
Mar 17 17:53:03.002352 kubelet[2671]: I0317 17:53:03.002193 2671 topology_manager.go:215] "Topology Admit Handler" podUID="0a462e4e-6356-4301-958a-de536414d963" podNamespace="calico-system" podName="csi-node-driver-qsx6f"
Mar 17 17:53:03.002519 kubelet[2671]: E0317 17:53:03.002491 2671 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-qsx6f" podUID="0a462e4e-6356-4301-958a-de536414d963"
Mar 17 17:53:03.067287 kubelet[2671]: E0317 17:53:03.067254 2671 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 17:53:03.068630 containerd[1488]: time="2025-03-17T17:53:03.068153665Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-8c96cd685-4g6w9,Uid:acba2208-af20-4360-a31e-1673513b8d61,Namespace:calico-system,Attempt:0,}"
Mar 17 17:53:03.069590 kubelet[2671]: I0317 17:53:03.069548 2671 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/9064a20d-68ba-4e3a-aba5-a7850f0062ee-node-certs\") pod \"calico-node-m8g62\" (UID: \"9064a20d-68ba-4e3a-aba5-a7850f0062ee\") " pod="calico-system/calico-node-m8g62"
Mar 17 17:53:03.069641 kubelet[2671]: I0317 17:53:03.069601 2671 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/9064a20d-68ba-4e3a-aba5-a7850f0062ee-var-lib-calico\") pod \"calico-node-m8g62\" (UID: \"9064a20d-68ba-4e3a-aba5-a7850f0062ee\") " pod="calico-system/calico-node-m8g62"
Mar 17 17:53:03.069641 kubelet[2671]: I0317 17:53:03.069629 2671 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/9064a20d-68ba-4e3a-aba5-a7850f0062ee-flexvol-driver-host\") pod \"calico-node-m8g62\" (UID: \"9064a20d-68ba-4e3a-aba5-a7850f0062ee\") " pod="calico-system/calico-node-m8g62"
Mar 17 17:53:03.069703 kubelet[2671]: I0317 17:53:03.069654 2671 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9064a20d-68ba-4e3a-aba5-a7850f0062ee-tigera-ca-bundle\") pod \"calico-node-m8g62\" (UID: \"9064a20d-68ba-4e3a-aba5-a7850f0062ee\") " pod="calico-system/calico-node-m8g62"
Mar 17 17:53:03.069703 kubelet[2671]: I0317 17:53:03.069679 2671 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/9064a20d-68ba-4e3a-aba5-a7850f0062ee-var-run-calico\") pod \"calico-node-m8g62\" (UID: \"9064a20d-68ba-4e3a-aba5-a7850f0062ee\") " pod="calico-system/calico-node-m8g62"
Mar 17 17:53:03.069768 kubelet[2671]: I0317 17:53:03.069708 2671 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/9064a20d-68ba-4e3a-aba5-a7850f0062ee-cni-bin-dir\") pod \"calico-node-m8g62\" (UID: \"9064a20d-68ba-4e3a-aba5-a7850f0062ee\") " pod="calico-system/calico-node-m8g62"
Mar 17 17:53:03.069768 kubelet[2671]: I0317 17:53:03.069729 2671 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9064a20d-68ba-4e3a-aba5-a7850f0062ee-lib-modules\") pod \"calico-node-m8g62\" (UID: \"9064a20d-68ba-4e3a-aba5-a7850f0062ee\") " pod="calico-system/calico-node-m8g62"
Mar 17 17:53:03.069818 kubelet[2671]: I0317 17:53:03.069767 2671 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/9064a20d-68ba-4e3a-aba5-a7850f0062ee-cni-log-dir\") pod \"calico-node-m8g62\" (UID: \"9064a20d-68ba-4e3a-aba5-a7850f0062ee\") " pod="calico-system/calico-node-m8g62"
Mar 17 17:53:03.069818 kubelet[2671]: I0317 17:53:03.069794 2671 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v2jmc\" (UniqueName: \"kubernetes.io/projected/9064a20d-68ba-4e3a-aba5-a7850f0062ee-kube-api-access-v2jmc\") pod \"calico-node-m8g62\" (UID: \"9064a20d-68ba-4e3a-aba5-a7850f0062ee\") " pod="calico-system/calico-node-m8g62"
Mar 17 17:53:03.069818 kubelet[2671]: I0317 17:53:03.069813 2671 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/9064a20d-68ba-4e3a-aba5-a7850f0062ee-policysync\") pod \"calico-node-m8g62\" (UID: \"9064a20d-68ba-4e3a-aba5-a7850f0062ee\") " pod="calico-system/calico-node-m8g62"
Mar 17 17:53:03.069900 kubelet[2671]: I0317 17:53:03.069836 2671 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/9064a20d-68ba-4e3a-aba5-a7850f0062ee-cni-net-dir\") pod \"calico-node-m8g62\" (UID: \"9064a20d-68ba-4e3a-aba5-a7850f0062ee\") " pod="calico-system/calico-node-m8g62"
Mar 17 17:53:03.069900 kubelet[2671]: I0317 17:53:03.069856 2671 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9064a20d-68ba-4e3a-aba5-a7850f0062ee-xtables-lock\") pod \"calico-node-m8g62\" (UID: \"9064a20d-68ba-4e3a-aba5-a7850f0062ee\") " pod="calico-system/calico-node-m8g62"
Mar 17 17:53:03.171115 kubelet[2671]: I0317 17:53:03.170713 2671 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qkh26\" (UniqueName: \"kubernetes.io/projected/0a462e4e-6356-4301-958a-de536414d963-kube-api-access-qkh26\") pod \"csi-node-driver-qsx6f\" (UID: \"0a462e4e-6356-4301-958a-de536414d963\") " pod="calico-system/csi-node-driver-qsx6f"
Mar 17 17:53:03.171115 kubelet[2671]: I0317 17:53:03.170828 2671 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0a462e4e-6356-4301-958a-de536414d963-kubelet-dir\") pod \"csi-node-driver-qsx6f\" (UID: \"0a462e4e-6356-4301-958a-de536414d963\") " pod="calico-system/csi-node-driver-qsx6f"
Mar 17 17:53:03.171115 kubelet[2671]: I0317 17:53:03.170878 2671 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/0a462e4e-6356-4301-958a-de536414d963-socket-dir\") pod \"csi-node-driver-qsx6f\" (UID: \"0a462e4e-6356-4301-958a-de536414d963\") " pod="calico-system/csi-node-driver-qsx6f"
Mar 17 17:53:03.171115 kubelet[2671]: I0317 17:53:03.170918 2671 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/0a462e4e-6356-4301-958a-de536414d963-varrun\") pod \"csi-node-driver-qsx6f\" (UID: \"0a462e4e-6356-4301-958a-de536414d963\") " pod="calico-system/csi-node-driver-qsx6f"
Mar 17 17:53:03.171115 kubelet[2671]: I0317 17:53:03.170937 2671 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/0a462e4e-6356-4301-958a-de536414d963-registration-dir\") pod \"csi-node-driver-qsx6f\" (UID: \"0a462e4e-6356-4301-958a-de536414d963\") " pod="calico-system/csi-node-driver-qsx6f"
Mar 17 17:53:03.177589 kubelet[2671]: E0317 17:53:03.177503 2671 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 17 17:53:03.177589 kubelet[2671]: W0317 17:53:03.177557 2671 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 17 17:53:03.177589 kubelet[2671]: E0317 17:53:03.177586 2671 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 17 17:53:03.181653 kubelet[2671]: E0317 17:53:03.181576 2671 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 17 17:53:03.181653 kubelet[2671]: W0317 17:53:03.181596 2671 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 17 17:53:03.181653 kubelet[2671]: E0317 17:53:03.181616 2671 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 17 17:53:03.184343 containerd[1488]: time="2025-03-17T17:53:03.184142236Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 17 17:53:03.184343 containerd[1488]: time="2025-03-17T17:53:03.184215303Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 17 17:53:03.184343 containerd[1488]: time="2025-03-17T17:53:03.184226124Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 17 17:53:03.184550 containerd[1488]: time="2025-03-17T17:53:03.184448513Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 17 17:53:03.197811 kubelet[2671]: E0317 17:53:03.197778 2671 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 17:53:03.198973 containerd[1488]: time="2025-03-17T17:53:03.198928129Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-m8g62,Uid:9064a20d-68ba-4e3a-aba5-a7850f0062ee,Namespace:calico-system,Attempt:0,}"
Mar 17 17:53:03.204947 systemd[1]: Started cri-containerd-853c00ee6701cbe612e6cd3566375cf1d02d7f4a3fa095f3d213054acf4e082c.scope - libcontainer container 853c00ee6701cbe612e6cd3566375cf1d02d7f4a3fa095f3d213054acf4e082c.
Mar 17 17:53:03.225871 containerd[1488]: time="2025-03-17T17:53:03.225664325Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 17 17:53:03.225871 containerd[1488]: time="2025-03-17T17:53:03.225816422Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 17 17:53:03.225871 containerd[1488]: time="2025-03-17T17:53:03.225831119Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 17 17:53:03.226076 containerd[1488]: time="2025-03-17T17:53:03.226009045Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 17 17:53:03.246733 systemd[1]: Started cri-containerd-a4e23fbcea17df303750b5d951fef75615f384e898ca681e3f64feb3bb21cf82.scope - libcontainer container a4e23fbcea17df303750b5d951fef75615f384e898ca681e3f64feb3bb21cf82.
Mar 17 17:53:03.253619 containerd[1488]: time="2025-03-17T17:53:03.253440463Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-8c96cd685-4g6w9,Uid:acba2208-af20-4360-a31e-1673513b8d61,Namespace:calico-system,Attempt:0,} returns sandbox id \"853c00ee6701cbe612e6cd3566375cf1d02d7f4a3fa095f3d213054acf4e082c\""
Mar 17 17:53:03.254428 kubelet[2671]: E0317 17:53:03.254233 2671 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 17:53:03.255673 containerd[1488]: time="2025-03-17T17:53:03.255489069Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.2\""
Mar 17 17:53:03.271467 kubelet[2671]: E0317 17:53:03.271439 2671 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 17 17:53:03.271725 kubelet[2671]: W0317 17:53:03.271617 2671 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 17 17:53:03.271725 kubelet[2671]: E0317 17:53:03.271644 2671 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 17 17:53:03.271928 containerd[1488]: time="2025-03-17T17:53:03.271866627Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-m8g62,Uid:9064a20d-68ba-4e3a-aba5-a7850f0062ee,Namespace:calico-system,Attempt:0,} returns sandbox id \"a4e23fbcea17df303750b5d951fef75615f384e898ca681e3f64feb3bb21cf82\""
Mar 17 17:53:03.272332 kubelet[2671]: E0317 17:53:03.272188 2671 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 17 17:53:03.272332 kubelet[2671]: W0317 17:53:03.272201 2671 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 17 17:53:03.272332 kubelet[2671]: E0317 17:53:03.272211 2671 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 17 17:53:03.272652 kubelet[2671]: E0317 17:53:03.272593 2671 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 17 17:53:03.272829 kubelet[2671]: W0317 17:53:03.272705 2671 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 17 17:53:03.272829 kubelet[2671]: E0317 17:53:03.272716 2671 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 17 17:53:03.272829 kubelet[2671]: E0317 17:53:03.272656 2671 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 17:53:03.272967 kubelet[2671]: E0317 17:53:03.272955 2671 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 17 17:53:03.273126 kubelet[2671]: W0317 17:53:03.273034 2671 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 17 17:53:03.273126 kubelet[2671]: E0317 17:53:03.273047 2671 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 17 17:53:03.273558 kubelet[2671]: E0317 17:53:03.273470 2671 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 17 17:53:03.273558 kubelet[2671]: W0317 17:53:03.273481 2671 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 17 17:53:03.273558 kubelet[2671]: E0317 17:53:03.273502 2671 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 17 17:53:03.273975 kubelet[2671]: E0317 17:53:03.273934 2671 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 17 17:53:03.273975 kubelet[2671]: W0317 17:53:03.273945 2671 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 17 17:53:03.274175 kubelet[2671]: E0317 17:53:03.274163 2671 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 17 17:53:03.274518 kubelet[2671]: E0317 17:53:03.274480 2671 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 17 17:53:03.274678 kubelet[2671]: W0317 17:53:03.274494 2671 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 17 17:53:03.274892 kubelet[2671]: E0317 17:53:03.274770 2671 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 17 17:53:03.275035 kubelet[2671]: E0317 17:53:03.275022 2671 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 17 17:53:03.275231 kubelet[2671]: W0317 17:53:03.275069 2671 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 17 17:53:03.275231 kubelet[2671]: E0317 17:53:03.275086 2671 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 17 17:53:03.275436 kubelet[2671]: E0317 17:53:03.275363 2671 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 17 17:53:03.275436 kubelet[2671]: W0317 17:53:03.275421 2671 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 17 17:53:03.275694 kubelet[2671]: E0317 17:53:03.275548 2671 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 17 17:53:03.276104 kubelet[2671]: E0317 17:53:03.275922 2671 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 17 17:53:03.276104 kubelet[2671]: W0317 17:53:03.275934 2671 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 17 17:53:03.276104 kubelet[2671]: E0317 17:53:03.275962 2671 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 17 17:53:03.276359 kubelet[2671]: E0317 17:53:03.276345 2671 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 17 17:53:03.276444 kubelet[2671]: W0317 17:53:03.276430 2671 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 17 17:53:03.276915 kubelet[2671]: E0317 17:53:03.276879 2671 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input" Mar 17 17:53:03.277326 kubelet[2671]: E0317 17:53:03.277263 2671 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:53:03.277326 kubelet[2671]: W0317 17:53:03.277275 2671 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:53:03.277503 kubelet[2671]: E0317 17:53:03.277429 2671 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:53:03.277964 kubelet[2671]: E0317 17:53:03.277912 2671 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:53:03.277964 kubelet[2671]: W0317 17:53:03.277925 2671 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:53:03.278198 kubelet[2671]: E0317 17:53:03.277945 2671 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 17:53:03.278336 kubelet[2671]: E0317 17:53:03.278325 2671 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:53:03.278524 kubelet[2671]: W0317 17:53:03.278390 2671 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:53:03.278524 kubelet[2671]: E0317 17:53:03.278407 2671 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:53:03.278719 kubelet[2671]: E0317 17:53:03.278707 2671 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:53:03.278804 kubelet[2671]: W0317 17:53:03.278791 2671 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:53:03.278976 kubelet[2671]: E0317 17:53:03.278891 2671 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 17:53:03.279152 kubelet[2671]: E0317 17:53:03.279120 2671 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:53:03.279229 kubelet[2671]: W0317 17:53:03.279212 2671 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:53:03.279432 kubelet[2671]: E0317 17:53:03.279332 2671 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:53:03.279561 kubelet[2671]: E0317 17:53:03.279549 2671 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:53:03.279660 kubelet[2671]: W0317 17:53:03.279611 2671 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:53:03.279847 kubelet[2671]: E0317 17:53:03.279716 2671 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 17:53:03.280046 kubelet[2671]: E0317 17:53:03.280035 2671 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:53:03.280210 kubelet[2671]: W0317 17:53:03.280095 2671 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:53:03.280210 kubelet[2671]: E0317 17:53:03.280112 2671 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:53:03.280370 kubelet[2671]: E0317 17:53:03.280359 2671 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:53:03.280424 kubelet[2671]: W0317 17:53:03.280414 2671 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:53:03.280576 kubelet[2671]: E0317 17:53:03.280477 2671 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 17:53:03.280818 kubelet[2671]: E0317 17:53:03.280806 2671 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:53:03.280875 kubelet[2671]: W0317 17:53:03.280865 2671 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:53:03.280975 kubelet[2671]: E0317 17:53:03.280954 2671 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:53:03.281315 kubelet[2671]: E0317 17:53:03.281224 2671 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:53:03.281315 kubelet[2671]: W0317 17:53:03.281234 2671 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:53:03.281315 kubelet[2671]: E0317 17:53:03.281258 2671 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 17:53:03.281475 kubelet[2671]: E0317 17:53:03.281464 2671 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:53:03.281533 kubelet[2671]: W0317 17:53:03.281522 2671 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:53:03.281690 kubelet[2671]: E0317 17:53:03.281599 2671 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:53:03.282243 kubelet[2671]: E0317 17:53:03.282106 2671 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:53:03.282243 kubelet[2671]: W0317 17:53:03.282119 2671 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:53:03.282243 kubelet[2671]: E0317 17:53:03.282131 2671 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 17:53:03.282456 kubelet[2671]: E0317 17:53:03.282397 2671 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:53:03.282517 kubelet[2671]: W0317 17:53:03.282506 2671 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:53:03.282603 kubelet[2671]: E0317 17:53:03.282586 2671 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:53:03.282967 kubelet[2671]: E0317 17:53:03.282923 2671 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:53:03.282967 kubelet[2671]: W0317 17:53:03.282936 2671 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:53:03.282967 kubelet[2671]: E0317 17:53:03.282946 2671 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 17:53:03.283237 kubelet[2671]: E0317 17:53:03.283214 2671 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:53:03.283276 kubelet[2671]: W0317 17:53:03.283237 2671 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:53:03.283276 kubelet[2671]: E0317 17:53:03.283256 2671 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:53:04.576187 kubelet[2671]: E0317 17:53:04.576102 2671 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-qsx6f" podUID="0a462e4e-6356-4301-958a-de536414d963" Mar 17 17:53:06.576019 kubelet[2671]: E0317 17:53:06.575964 2671 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-qsx6f" podUID="0a462e4e-6356-4301-958a-de536414d963" Mar 17 17:53:06.804590 containerd[1488]: time="2025-03-17T17:53:06.804529864Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:53:06.805497 containerd[1488]: time="2025-03-17T17:53:06.805460999Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.2: active requests=0, bytes read=30414075" Mar 17 17:53:06.806657 containerd[1488]: time="2025-03-17T17:53:06.806626657Z" level=info msg="ImageCreate 
event name:\"sha256:1d6f9d005866d74e6f0a8b0b8b743d0eaf4efcb7c7032fd2215da9c6ca131cb5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:53:06.808691 containerd[1488]: time="2025-03-17T17:53:06.808660209Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:9839fd34b4c1bad50beed72aec59c64893487a46eea57dc2d7d66c3041d7bcce\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:53:06.809265 containerd[1488]: time="2025-03-17T17:53:06.809244291Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.2\" with image id \"sha256:1d6f9d005866d74e6f0a8b0b8b743d0eaf4efcb7c7032fd2215da9c6ca131cb5\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.2\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:9839fd34b4c1bad50beed72aec59c64893487a46eea57dc2d7d66c3041d7bcce\", size \"31907171\" in 3.553723662s" Mar 17 17:53:06.809326 containerd[1488]: time="2025-03-17T17:53:06.809268486Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.2\" returns image reference \"sha256:1d6f9d005866d74e6f0a8b0b8b743d0eaf4efcb7c7032fd2215da9c6ca131cb5\"" Mar 17 17:53:06.812783 containerd[1488]: time="2025-03-17T17:53:06.812723619Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.2\"" Mar 17 17:53:06.836917 containerd[1488]: time="2025-03-17T17:53:06.836806186Z" level=info msg="CreateContainer within sandbox \"853c00ee6701cbe612e6cd3566375cf1d02d7f4a3fa095f3d213054acf4e082c\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Mar 17 17:53:06.855361 containerd[1488]: time="2025-03-17T17:53:06.855320576Z" level=info msg="CreateContainer within sandbox \"853c00ee6701cbe612e6cd3566375cf1d02d7f4a3fa095f3d213054acf4e082c\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"39751a3ce084039c8eab34e9e52d50f37b5b46cc191fb59efde03212c7099300\"" Mar 17 17:53:06.858684 containerd[1488]: time="2025-03-17T17:53:06.858658859Z" level=info msg="StartContainer for 
\"39751a3ce084039c8eab34e9e52d50f37b5b46cc191fb59efde03212c7099300\"" Mar 17 17:53:06.890926 systemd[1]: Started cri-containerd-39751a3ce084039c8eab34e9e52d50f37b5b46cc191fb59efde03212c7099300.scope - libcontainer container 39751a3ce084039c8eab34e9e52d50f37b5b46cc191fb59efde03212c7099300. Mar 17 17:53:07.046883 containerd[1488]: time="2025-03-17T17:53:07.046835409Z" level=info msg="StartContainer for \"39751a3ce084039c8eab34e9e52d50f37b5b46cc191fb59efde03212c7099300\" returns successfully" Mar 17 17:53:07.631064 kubelet[2671]: E0317 17:53:07.631026 2671 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:53:07.656548 kubelet[2671]: I0317 17:53:07.656288 2671 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-8c96cd685-4g6w9" podStartSLOduration=2.098982248 podStartE2EDuration="5.656274298s" podCreationTimestamp="2025-03-17 17:53:02 +0000 UTC" firstStartedPulling="2025-03-17 17:53:03.255286456 +0000 UTC m=+19.771291862" lastFinishedPulling="2025-03-17 17:53:06.812578506 +0000 UTC m=+23.328583912" observedRunningTime="2025-03-17 17:53:07.656218784 +0000 UTC m=+24.172224200" watchObservedRunningTime="2025-03-17 17:53:07.656274298 +0000 UTC m=+24.172279704" Mar 17 17:53:07.705164 kubelet[2671]: E0317 17:53:07.705116 2671 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:53:07.705164 kubelet[2671]: W0317 17:53:07.705140 2671 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:53:07.705164 kubelet[2671]: E0317 17:53:07.705160 2671 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 17:53:07.705375 kubelet[2671]: E0317 17:53:07.705336 2671 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:53:07.705375 kubelet[2671]: W0317 17:53:07.705342 2671 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:53:07.705375 kubelet[2671]: E0317 17:53:07.705350 2671 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:53:07.705592 kubelet[2671]: E0317 17:53:07.705566 2671 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:53:07.705592 kubelet[2671]: W0317 17:53:07.705578 2671 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:53:07.705592 kubelet[2671]: E0317 17:53:07.705587 2671 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 17:53:07.705820 kubelet[2671]: E0317 17:53:07.705801 2671 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:53:07.705820 kubelet[2671]: W0317 17:53:07.705811 2671 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:53:07.705820 kubelet[2671]: E0317 17:53:07.705819 2671 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:53:07.706047 kubelet[2671]: E0317 17:53:07.706018 2671 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:53:07.706047 kubelet[2671]: W0317 17:53:07.706031 2671 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:53:07.706047 kubelet[2671]: E0317 17:53:07.706039 2671 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 17:53:07.706211 kubelet[2671]: E0317 17:53:07.706196 2671 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:53:07.706211 kubelet[2671]: W0317 17:53:07.706204 2671 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:53:07.706259 kubelet[2671]: E0317 17:53:07.706211 2671 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:53:07.706377 kubelet[2671]: E0317 17:53:07.706363 2671 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:53:07.706377 kubelet[2671]: W0317 17:53:07.706372 2671 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:53:07.706423 kubelet[2671]: E0317 17:53:07.706379 2671 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 17:53:07.706593 kubelet[2671]: E0317 17:53:07.706562 2671 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:53:07.706593 kubelet[2671]: W0317 17:53:07.706572 2671 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:53:07.706593 kubelet[2671]: E0317 17:53:07.706580 2671 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:53:07.706789 kubelet[2671]: E0317 17:53:07.706770 2671 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:53:07.706789 kubelet[2671]: W0317 17:53:07.706781 2671 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:53:07.706789 kubelet[2671]: E0317 17:53:07.706789 2671 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 17:53:07.706992 kubelet[2671]: E0317 17:53:07.706974 2671 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:53:07.706992 kubelet[2671]: W0317 17:53:07.706984 2671 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:53:07.707056 kubelet[2671]: E0317 17:53:07.706993 2671 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:53:07.707201 kubelet[2671]: E0317 17:53:07.707180 2671 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:53:07.707201 kubelet[2671]: W0317 17:53:07.707190 2671 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:53:07.707201 kubelet[2671]: E0317 17:53:07.707200 2671 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 17:53:07.707403 kubelet[2671]: E0317 17:53:07.707388 2671 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:53:07.707403 kubelet[2671]: W0317 17:53:07.707397 2671 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:53:07.707451 kubelet[2671]: E0317 17:53:07.707405 2671 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:53:07.707583 kubelet[2671]: E0317 17:53:07.707568 2671 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:53:07.707583 kubelet[2671]: W0317 17:53:07.707578 2671 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:53:07.707647 kubelet[2671]: E0317 17:53:07.707586 2671 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 17:53:07.707807 kubelet[2671]: E0317 17:53:07.707794 2671 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:53:07.707807 kubelet[2671]: W0317 17:53:07.707804 2671 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:53:07.707857 kubelet[2671]: E0317 17:53:07.707814 2671 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:53:07.707996 kubelet[2671]: E0317 17:53:07.707982 2671 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:53:07.708027 kubelet[2671]: W0317 17:53:07.707995 2671 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:53:07.708027 kubelet[2671]: E0317 17:53:07.708005 2671 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 17:53:07.805746 kubelet[2671]: E0317 17:53:07.805690 2671 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:53:07.805746 kubelet[2671]: W0317 17:53:07.805721 2671 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:53:07.805746 kubelet[2671]: E0317 17:53:07.805774 2671 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:53:07.806090 kubelet[2671]: E0317 17:53:07.806058 2671 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:53:07.806090 kubelet[2671]: W0317 17:53:07.806075 2671 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:53:07.806154 kubelet[2671]: E0317 17:53:07.806094 2671 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 17:53:07.806395 kubelet[2671]: E0317 17:53:07.806362 2671 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:53:07.806395 kubelet[2671]: W0317 17:53:07.806378 2671 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:53:07.806395 kubelet[2671]: E0317 17:53:07.806395 2671 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:53:07.806827 kubelet[2671]: E0317 17:53:07.806798 2671 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:53:07.807008 kubelet[2671]: W0317 17:53:07.806826 2671 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:53:07.807008 kubelet[2671]: E0317 17:53:07.806856 2671 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 17:53:07.807086 kubelet[2671]: E0317 17:53:07.807066 2671 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:53:07.807086 kubelet[2671]: W0317 17:53:07.807077 2671 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:53:07.807163 kubelet[2671]: E0317 17:53:07.807093 2671 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:53:07.807333 kubelet[2671]: E0317 17:53:07.807308 2671 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:53:07.807333 kubelet[2671]: W0317 17:53:07.807320 2671 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:53:07.807421 kubelet[2671]: E0317 17:53:07.807337 2671 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 17:53:08.282025 containerd[1488]: time="2025-03-17T17:53:08.281958455Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:53:08.282836 containerd[1488]: time="2025-03-17T17:53:08.282741681Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.2: active requests=0, bytes read=5364011" Mar 17 17:53:08.284026 containerd[1488]: time="2025-03-17T17:53:08.283983460Z" level=info msg="ImageCreate event name:\"sha256:441bf8ace5b7fa3742b7fafaf6cd60fea340dd307169a18c75a1d78cba3a8365\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:53:08.286148 containerd[1488]: time="2025-03-17T17:53:08.286064711Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:51d9341a4a37e278a906f40ecc73f5076e768612c21621f1b1d4f2b2f0735a1d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:53:08.286916 containerd[1488]: time="2025-03-17T17:53:08.286878033Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.2\" with image id \"sha256:441bf8ace5b7fa3742b7fafaf6cd60fea340dd307169a18c75a1d78cba3a8365\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.2\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:51d9341a4a37e278a906f40ecc73f5076e768612c21621f1b1d4f2b2f0735a1d\", size \"6857075\" in 1.474127664s" Mar 17 17:53:08.286916 containerd[1488]: time="2025-03-17T17:53:08.286907789Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.2\" returns image reference \"sha256:441bf8ace5b7fa3742b7fafaf6cd60fea340dd307169a18c75a1d78cba3a8365\"" Mar 17 17:53:08.289636 containerd[1488]: time="2025-03-17T17:53:08.289558522Z" level=info msg="CreateContainer within sandbox \"a4e23fbcea17df303750b5d951fef75615f384e898ca681e3f64feb3bb21cf82\" for container 
&ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Mar 17 17:53:08.307716 containerd[1488]: time="2025-03-17T17:53:08.307661996Z" level=info msg="CreateContainer within sandbox \"a4e23fbcea17df303750b5d951fef75615f384e898ca681e3f64feb3bb21cf82\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"e158bc23d559b11d8d27aabb8a60bff38a575e59184cedcffc3c775e6ad47b03\"" Mar 17 17:53:08.308272 containerd[1488]: time="2025-03-17T17:53:08.308242751Z" level=info msg="StartContainer for \"e158bc23d559b11d8d27aabb8a60bff38a575e59184cedcffc3c775e6ad47b03\"" Mar 17 17:53:08.341031 systemd[1]: Started cri-containerd-e158bc23d559b11d8d27aabb8a60bff38a575e59184cedcffc3c775e6ad47b03.scope - libcontainer container e158bc23d559b11d8d27aabb8a60bff38a575e59184cedcffc3c775e6ad47b03. Mar 17 17:53:08.378275 containerd[1488]: time="2025-03-17T17:53:08.378218061Z" level=info msg="StartContainer for \"e158bc23d559b11d8d27aabb8a60bff38a575e59184cedcffc3c775e6ad47b03\" returns successfully" Mar 17 17:53:08.392370 systemd[1]: cri-containerd-e158bc23d559b11d8d27aabb8a60bff38a575e59184cedcffc3c775e6ad47b03.scope: Deactivated successfully. 
Mar 17 17:53:08.576320 kubelet[2671]: E0317 17:53:08.576136 2671 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-qsx6f" podUID="0a462e4e-6356-4301-958a-de536414d963" Mar 17 17:53:08.632672 kubelet[2671]: I0317 17:53:08.632635 2671 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 17 17:53:08.633808 kubelet[2671]: E0317 17:53:08.632966 2671 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:53:08.633808 kubelet[2671]: E0317 17:53:08.633343 2671 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:53:08.733240 containerd[1488]: time="2025-03-17T17:53:08.733150940Z" level=info msg="shim disconnected" id=e158bc23d559b11d8d27aabb8a60bff38a575e59184cedcffc3c775e6ad47b03 namespace=k8s.io Mar 17 17:53:08.733240 containerd[1488]: time="2025-03-17T17:53:08.733215281Z" level=warning msg="cleaning up after shim disconnected" id=e158bc23d559b11d8d27aabb8a60bff38a575e59184cedcffc3c775e6ad47b03 namespace=k8s.io Mar 17 17:53:08.733240 containerd[1488]: time="2025-03-17T17:53:08.733227825Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 17 17:53:08.749579 containerd[1488]: time="2025-03-17T17:53:08.749503384Z" level=warning msg="cleanup warnings time=\"2025-03-17T17:53:08Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Mar 17 17:53:08.825488 systemd[1]: 
run-containerd-io.containerd.runtime.v2.task-k8s.io-e158bc23d559b11d8d27aabb8a60bff38a575e59184cedcffc3c775e6ad47b03-rootfs.mount: Deactivated successfully. Mar 17 17:53:09.635784 kubelet[2671]: E0317 17:53:09.635720 2671 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:53:09.636594 containerd[1488]: time="2025-03-17T17:53:09.636508390Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.2\"" Mar 17 17:53:10.575902 kubelet[2671]: E0317 17:53:10.575835 2671 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-qsx6f" podUID="0a462e4e-6356-4301-958a-de536414d963" Mar 17 17:53:12.575820 kubelet[2671]: E0317 17:53:12.575646 2671 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-qsx6f" podUID="0a462e4e-6356-4301-958a-de536414d963" Mar 17 17:53:13.571871 containerd[1488]: time="2025-03-17T17:53:13.571723011Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:53:13.573250 containerd[1488]: time="2025-03-17T17:53:13.573190913Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.2: active requests=0, bytes read=97781477" Mar 17 17:53:13.574714 containerd[1488]: time="2025-03-17T17:53:13.574676738Z" level=info msg="ImageCreate event name:\"sha256:cda13293c895a8a3b06c1e190b70fb6fe61036db2e59764036fc6e65ec374693\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:53:13.577591 
containerd[1488]: time="2025-03-17T17:53:13.577507574Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:890e1db6ae363695cfc23ffae4d612cc85cdd99d759bd539af6683969d0c3c25\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:53:13.580690 containerd[1488]: time="2025-03-17T17:53:13.579355241Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.2\" with image id \"sha256:cda13293c895a8a3b06c1e190b70fb6fe61036db2e59764036fc6e65ec374693\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.2\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:890e1db6ae363695cfc23ffae4d612cc85cdd99d759bd539af6683969d0c3c25\", size \"99274581\" in 3.942797236s" Mar 17 17:53:13.580690 containerd[1488]: time="2025-03-17T17:53:13.579387711Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.2\" returns image reference \"sha256:cda13293c895a8a3b06c1e190b70fb6fe61036db2e59764036fc6e65ec374693\"" Mar 17 17:53:13.582708 containerd[1488]: time="2025-03-17T17:53:13.582673143Z" level=info msg="CreateContainer within sandbox \"a4e23fbcea17df303750b5d951fef75615f384e898ca681e3f64feb3bb21cf82\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Mar 17 17:53:13.600900 containerd[1488]: time="2025-03-17T17:53:13.600850021Z" level=info msg="CreateContainer within sandbox \"a4e23fbcea17df303750b5d951fef75615f384e898ca681e3f64feb3bb21cf82\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"75eb7d5efd5b1c62b2a897e1f8dfa2d352134bd027dd6568484ec7230326e094\"" Mar 17 17:53:13.602311 containerd[1488]: time="2025-03-17T17:53:13.601457304Z" level=info msg="StartContainer for \"75eb7d5efd5b1c62b2a897e1f8dfa2d352134bd027dd6568484ec7230326e094\"" Mar 17 17:53:13.636931 systemd[1]: Started cri-containerd-75eb7d5efd5b1c62b2a897e1f8dfa2d352134bd027dd6568484ec7230326e094.scope - libcontainer container 75eb7d5efd5b1c62b2a897e1f8dfa2d352134bd027dd6568484ec7230326e094. 
Mar 17 17:53:13.676333 containerd[1488]: time="2025-03-17T17:53:13.676279823Z" level=info msg="StartContainer for \"75eb7d5efd5b1c62b2a897e1f8dfa2d352134bd027dd6568484ec7230326e094\" returns successfully" Mar 17 17:53:14.462931 kubelet[2671]: I0317 17:53:14.462484 2671 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 17 17:53:14.463362 kubelet[2671]: E0317 17:53:14.463085 2671 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:53:14.531109 systemd[1]: Started sshd@7-10.0.0.117:22-10.0.0.1:44270.service - OpenSSH per-connection server daemon (10.0.0.1:44270). Mar 17 17:53:14.575877 kubelet[2671]: E0317 17:53:14.575804 2671 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-qsx6f" podUID="0a462e4e-6356-4301-958a-de536414d963" Mar 17 17:53:14.646035 kubelet[2671]: E0317 17:53:14.645994 2671 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:53:14.646227 kubelet[2671]: E0317 17:53:14.646132 2671 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:53:15.062654 sshd[3393]: Accepted publickey for core from 10.0.0.1 port 44270 ssh2: RSA SHA256:pvoNHoTmHcKIZ8E4rah4Xh4kuY0L81ONuagOsU2gN/o Mar 17 17:53:15.065096 sshd-session[3393]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:53:15.071830 systemd-logind[1472]: New session 8 of user core. 
Mar 17 17:53:15.081082 systemd[1]: Started session-8.scope - Session 8 of User core. Mar 17 17:53:15.248801 sshd[3398]: Connection closed by 10.0.0.1 port 44270 Mar 17 17:53:15.249161 sshd-session[3393]: pam_unix(sshd:session): session closed for user core Mar 17 17:53:15.253025 systemd[1]: sshd@7-10.0.0.117:22-10.0.0.1:44270.service: Deactivated successfully. Mar 17 17:53:15.255193 systemd[1]: session-8.scope: Deactivated successfully. Mar 17 17:53:15.255928 systemd-logind[1472]: Session 8 logged out. Waiting for processes to exit. Mar 17 17:53:15.257171 systemd-logind[1472]: Removed session 8. Mar 17 17:53:15.633770 containerd[1488]: time="2025-03-17T17:53:15.633682613Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: failed to load CNI config list file /etc/cni/net.d/10-calico.conflist: error parsing configuration list: unexpected end of JSON input: invalid cni config: failed to load cni config" Mar 17 17:53:15.636927 systemd[1]: cri-containerd-75eb7d5efd5b1c62b2a897e1f8dfa2d352134bd027dd6568484ec7230326e094.scope: Deactivated successfully. Mar 17 17:53:15.647460 kubelet[2671]: E0317 17:53:15.647409 2671 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:53:15.661130 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-75eb7d5efd5b1c62b2a897e1f8dfa2d352134bd027dd6568484ec7230326e094-rootfs.mount: Deactivated successfully. 
Mar 17 17:53:15.714509 kubelet[2671]: I0317 17:53:15.714449 2671 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Mar 17 17:53:16.220694 kubelet[2671]: I0317 17:53:16.220615 2671 topology_manager.go:215] "Topology Admit Handler" podUID="a4788825-63ac-4824-9f76-78d12b4feedf" podNamespace="kube-system" podName="coredns-7db6d8ff4d-z8gdg" Mar 17 17:53:16.220898 kubelet[2671]: I0317 17:53:16.220860 2671 topology_manager.go:215] "Topology Admit Handler" podUID="ecb46531-5f4a-4235-a229-80c727efd6de" podNamespace="calico-system" podName="calico-kube-controllers-766ddbd77b-gscbd" Mar 17 17:53:16.221278 kubelet[2671]: I0317 17:53:16.221245 2671 topology_manager.go:215] "Topology Admit Handler" podUID="61c49c7a-fa99-4721-add6-729b5d3cd3c9" podNamespace="kube-system" podName="coredns-7db6d8ff4d-48wgq" Mar 17 17:53:16.221555 kubelet[2671]: I0317 17:53:16.221344 2671 topology_manager.go:215] "Topology Admit Handler" podUID="1c0a54b3-f431-4a56-aa22-15462ea1a4c4" podNamespace="calico-apiserver" podName="calico-apiserver-69489b6559-dszht" Mar 17 17:53:16.221555 kubelet[2671]: I0317 17:53:16.221431 2671 topology_manager.go:215] "Topology Admit Handler" podUID="bd182690-c4b5-49c8-a36b-6df7c7b0c5b6" podNamespace="calico-apiserver" podName="calico-apiserver-69489b6559-jrsd6" Mar 17 17:53:16.228667 systemd[1]: Created slice kubepods-burstable-poda4788825_63ac_4824_9f76_78d12b4feedf.slice - libcontainer container kubepods-burstable-poda4788825_63ac_4824_9f76_78d12b4feedf.slice. Mar 17 17:53:16.233641 systemd[1]: Created slice kubepods-besteffort-podecb46531_5f4a_4235_a229_80c727efd6de.slice - libcontainer container kubepods-besteffort-podecb46531_5f4a_4235_a229_80c727efd6de.slice. Mar 17 17:53:16.239202 systemd[1]: Created slice kubepods-besteffort-podbd182690_c4b5_49c8_a36b_6df7c7b0c5b6.slice - libcontainer container kubepods-besteffort-podbd182690_c4b5_49c8_a36b_6df7c7b0c5b6.slice. 
Mar 17 17:53:16.243022 systemd[1]: Created slice kubepods-besteffort-pod1c0a54b3_f431_4a56_aa22_15462ea1a4c4.slice - libcontainer container kubepods-besteffort-pod1c0a54b3_f431_4a56_aa22_15462ea1a4c4.slice. Mar 17 17:53:16.247840 systemd[1]: Created slice kubepods-burstable-pod61c49c7a_fa99_4721_add6_729b5d3cd3c9.slice - libcontainer container kubepods-burstable-pod61c49c7a_fa99_4721_add6_729b5d3cd3c9.slice. Mar 17 17:53:16.355080 containerd[1488]: time="2025-03-17T17:53:16.355014114Z" level=info msg="shim disconnected" id=75eb7d5efd5b1c62b2a897e1f8dfa2d352134bd027dd6568484ec7230326e094 namespace=k8s.io Mar 17 17:53:16.355080 containerd[1488]: time="2025-03-17T17:53:16.355069498Z" level=warning msg="cleaning up after shim disconnected" id=75eb7d5efd5b1c62b2a897e1f8dfa2d352134bd027dd6568484ec7230326e094 namespace=k8s.io Mar 17 17:53:16.355080 containerd[1488]: time="2025-03-17T17:53:16.355078064Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 17 17:53:16.362243 kubelet[2671]: I0317 17:53:16.362198 2671 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xbk8t\" (UniqueName: \"kubernetes.io/projected/a4788825-63ac-4824-9f76-78d12b4feedf-kube-api-access-xbk8t\") pod \"coredns-7db6d8ff4d-z8gdg\" (UID: \"a4788825-63ac-4824-9f76-78d12b4feedf\") " pod="kube-system/coredns-7db6d8ff4d-z8gdg" Mar 17 17:53:16.362243 kubelet[2671]: I0317 17:53:16.362247 2671 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a4788825-63ac-4824-9f76-78d12b4feedf-config-volume\") pod \"coredns-7db6d8ff4d-z8gdg\" (UID: \"a4788825-63ac-4824-9f76-78d12b4feedf\") " pod="kube-system/coredns-7db6d8ff4d-z8gdg" Mar 17 17:53:16.362441 kubelet[2671]: I0317 17:53:16.362271 2671 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: 
\"kubernetes.io/secret/1c0a54b3-f431-4a56-aa22-15462ea1a4c4-calico-apiserver-certs\") pod \"calico-apiserver-69489b6559-dszht\" (UID: \"1c0a54b3-f431-4a56-aa22-15462ea1a4c4\") " pod="calico-apiserver/calico-apiserver-69489b6559-dszht" Mar 17 17:53:16.362441 kubelet[2671]: I0317 17:53:16.362289 2671 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-95n55\" (UniqueName: \"kubernetes.io/projected/1c0a54b3-f431-4a56-aa22-15462ea1a4c4-kube-api-access-95n55\") pod \"calico-apiserver-69489b6559-dszht\" (UID: \"1c0a54b3-f431-4a56-aa22-15462ea1a4c4\") " pod="calico-apiserver/calico-apiserver-69489b6559-dszht" Mar 17 17:53:16.362441 kubelet[2671]: I0317 17:53:16.362303 2671 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/61c49c7a-fa99-4721-add6-729b5d3cd3c9-config-volume\") pod \"coredns-7db6d8ff4d-48wgq\" (UID: \"61c49c7a-fa99-4721-add6-729b5d3cd3c9\") " pod="kube-system/coredns-7db6d8ff4d-48wgq" Mar 17 17:53:16.362441 kubelet[2671]: I0317 17:53:16.362319 2671 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/bd182690-c4b5-49c8-a36b-6df7c7b0c5b6-calico-apiserver-certs\") pod \"calico-apiserver-69489b6559-jrsd6\" (UID: \"bd182690-c4b5-49c8-a36b-6df7c7b0c5b6\") " pod="calico-apiserver/calico-apiserver-69489b6559-jrsd6" Mar 17 17:53:16.362441 kubelet[2671]: I0317 17:53:16.362337 2671 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ecb46531-5f4a-4235-a229-80c727efd6de-tigera-ca-bundle\") pod \"calico-kube-controllers-766ddbd77b-gscbd\" (UID: \"ecb46531-5f4a-4235-a229-80c727efd6de\") " pod="calico-system/calico-kube-controllers-766ddbd77b-gscbd" Mar 17 17:53:16.362575 kubelet[2671]: I0317 17:53:16.362356 
2671 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lnd87\" (UniqueName: \"kubernetes.io/projected/ecb46531-5f4a-4235-a229-80c727efd6de-kube-api-access-lnd87\") pod \"calico-kube-controllers-766ddbd77b-gscbd\" (UID: \"ecb46531-5f4a-4235-a229-80c727efd6de\") " pod="calico-system/calico-kube-controllers-766ddbd77b-gscbd" Mar 17 17:53:16.362575 kubelet[2671]: I0317 17:53:16.362371 2671 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-97bqm\" (UniqueName: \"kubernetes.io/projected/bd182690-c4b5-49c8-a36b-6df7c7b0c5b6-kube-api-access-97bqm\") pod \"calico-apiserver-69489b6559-jrsd6\" (UID: \"bd182690-c4b5-49c8-a36b-6df7c7b0c5b6\") " pod="calico-apiserver/calico-apiserver-69489b6559-jrsd6" Mar 17 17:53:16.362575 kubelet[2671]: I0317 17:53:16.362389 2671 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4x6xd\" (UniqueName: \"kubernetes.io/projected/61c49c7a-fa99-4721-add6-729b5d3cd3c9-kube-api-access-4x6xd\") pod \"coredns-7db6d8ff4d-48wgq\" (UID: \"61c49c7a-fa99-4721-add6-729b5d3cd3c9\") " pod="kube-system/coredns-7db6d8ff4d-48wgq" Mar 17 17:53:16.531821 kubelet[2671]: E0317 17:53:16.531349 2671 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:53:16.532511 containerd[1488]: time="2025-03-17T17:53:16.532165254Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-z8gdg,Uid:a4788825-63ac-4824-9f76-78d12b4feedf,Namespace:kube-system,Attempt:0,}" Mar 17 17:53:16.537090 containerd[1488]: time="2025-03-17T17:53:16.537046221Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-766ddbd77b-gscbd,Uid:ecb46531-5f4a-4235-a229-80c727efd6de,Namespace:calico-system,Attempt:0,}" Mar 17 17:53:16.542076 
containerd[1488]: time="2025-03-17T17:53:16.542021325Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-69489b6559-jrsd6,Uid:bd182690-c4b5-49c8-a36b-6df7c7b0c5b6,Namespace:calico-apiserver,Attempt:0,}" Mar 17 17:53:16.546824 containerd[1488]: time="2025-03-17T17:53:16.546762149Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-69489b6559-dszht,Uid:1c0a54b3-f431-4a56-aa22-15462ea1a4c4,Namespace:calico-apiserver,Attempt:0,}" Mar 17 17:53:16.550121 kubelet[2671]: E0317 17:53:16.550078 2671 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:53:16.550626 containerd[1488]: time="2025-03-17T17:53:16.550582684Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-48wgq,Uid:61c49c7a-fa99-4721-add6-729b5d3cd3c9,Namespace:kube-system,Attempt:0,}" Mar 17 17:53:16.583527 systemd[1]: Created slice kubepods-besteffort-pod0a462e4e_6356_4301_958a_de536414d963.slice - libcontainer container kubepods-besteffort-pod0a462e4e_6356_4301_958a_de536414d963.slice. 
Mar 17 17:53:16.587064 containerd[1488]: time="2025-03-17T17:53:16.586996391Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-qsx6f,Uid:0a462e4e-6356-4301-958a-de536414d963,Namespace:calico-system,Attempt:0,}"
Mar 17 17:53:16.662781 kubelet[2671]: E0317 17:53:16.660524 2671 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 17:53:16.664986 containerd[1488]: time="2025-03-17T17:53:16.664735320Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.2\""
Mar 17 17:53:16.764776 containerd[1488]: time="2025-03-17T17:53:16.762247629Z" level=error msg="Failed to destroy network for sandbox \"5962b8587e49b332a9082bc85f0dfd8758737d6294c296fdd2975f2284c3de54\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 17 17:53:16.764776 containerd[1488]: time="2025-03-17T17:53:16.763374467Z" level=error msg="encountered an error cleaning up failed sandbox \"5962b8587e49b332a9082bc85f0dfd8758737d6294c296fdd2975f2284c3de54\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 17 17:53:16.764776 containerd[1488]: time="2025-03-17T17:53:16.763528267Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-766ddbd77b-gscbd,Uid:ecb46531-5f4a-4235-a229-80c727efd6de,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"5962b8587e49b332a9082bc85f0dfd8758737d6294c296fdd2975f2284c3de54\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 17 17:53:16.764981 kubelet[2671]: E0317 17:53:16.764007 2671 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5962b8587e49b332a9082bc85f0dfd8758737d6294c296fdd2975f2284c3de54\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 17 17:53:16.764981 kubelet[2671]: E0317 17:53:16.764095 2671 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5962b8587e49b332a9082bc85f0dfd8758737d6294c296fdd2975f2284c3de54\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-766ddbd77b-gscbd"
Mar 17 17:53:16.764981 kubelet[2671]: E0317 17:53:16.764124 2671 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5962b8587e49b332a9082bc85f0dfd8758737d6294c296fdd2975f2284c3de54\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-766ddbd77b-gscbd"
Mar 17 17:53:16.765160 kubelet[2671]: E0317 17:53:16.764194 2671 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-766ddbd77b-gscbd_calico-system(ecb46531-5f4a-4235-a229-80c727efd6de)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-766ddbd77b-gscbd_calico-system(ecb46531-5f4a-4235-a229-80c727efd6de)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5962b8587e49b332a9082bc85f0dfd8758737d6294c296fdd2975f2284c3de54\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-766ddbd77b-gscbd" podUID="ecb46531-5f4a-4235-a229-80c727efd6de"
Mar 17 17:53:16.767345 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-5962b8587e49b332a9082bc85f0dfd8758737d6294c296fdd2975f2284c3de54-shm.mount: Deactivated successfully.
Mar 17 17:53:16.768386 containerd[1488]: time="2025-03-17T17:53:16.768238453Z" level=error msg="Failed to destroy network for sandbox \"9cb7b7521a0edc0a9c7a9d2e5cd84a22265bcf5e96d3aec1ac00f3807c3638f5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 17 17:53:16.768860 containerd[1488]: time="2025-03-17T17:53:16.768836307Z" level=error msg="encountered an error cleaning up failed sandbox \"9cb7b7521a0edc0a9c7a9d2e5cd84a22265bcf5e96d3aec1ac00f3807c3638f5\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 17 17:53:16.769046 containerd[1488]: time="2025-03-17T17:53:16.768958527Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-z8gdg,Uid:a4788825-63ac-4824-9f76-78d12b4feedf,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"9cb7b7521a0edc0a9c7a9d2e5cd84a22265bcf5e96d3aec1ac00f3807c3638f5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 17 17:53:16.769392 kubelet[2671]: E0317 17:53:16.769334 2671 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9cb7b7521a0edc0a9c7a9d2e5cd84a22265bcf5e96d3aec1ac00f3807c3638f5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 17 17:53:16.769454 kubelet[2671]: E0317 17:53:16.769415 2671 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9cb7b7521a0edc0a9c7a9d2e5cd84a22265bcf5e96d3aec1ac00f3807c3638f5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-z8gdg"
Mar 17 17:53:16.769454 kubelet[2671]: E0317 17:53:16.769443 2671 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9cb7b7521a0edc0a9c7a9d2e5cd84a22265bcf5e96d3aec1ac00f3807c3638f5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-z8gdg"
Mar 17 17:53:16.769594 kubelet[2671]: E0317 17:53:16.769488 2671 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-z8gdg_kube-system(a4788825-63ac-4824-9f76-78d12b4feedf)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-z8gdg_kube-system(a4788825-63ac-4824-9f76-78d12b4feedf)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9cb7b7521a0edc0a9c7a9d2e5cd84a22265bcf5e96d3aec1ac00f3807c3638f5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-z8gdg" podUID="a4788825-63ac-4824-9f76-78d12b4feedf"
Mar 17 17:53:16.775034 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-9cb7b7521a0edc0a9c7a9d2e5cd84a22265bcf5e96d3aec1ac00f3807c3638f5-shm.mount: Deactivated successfully.
Mar 17 17:53:16.778685 containerd[1488]: time="2025-03-17T17:53:16.778629041Z" level=error msg="Failed to destroy network for sandbox \"d1b3fd7f85c932731e8c07457956860f6659034f3ef9af6a5a1aab299e19f071\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 17 17:53:16.781171 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-d1b3fd7f85c932731e8c07457956860f6659034f3ef9af6a5a1aab299e19f071-shm.mount: Deactivated successfully.
Mar 17 17:53:16.784651 containerd[1488]: time="2025-03-17T17:53:16.783989058Z" level=error msg="encountered an error cleaning up failed sandbox \"d1b3fd7f85c932731e8c07457956860f6659034f3ef9af6a5a1aab299e19f071\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 17 17:53:16.784651 containerd[1488]: time="2025-03-17T17:53:16.784091411Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-69489b6559-dszht,Uid:1c0a54b3-f431-4a56-aa22-15462ea1a4c4,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"d1b3fd7f85c932731e8c07457956860f6659034f3ef9af6a5a1aab299e19f071\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 17 17:53:16.784810 kubelet[2671]: E0317 17:53:16.784363 2671 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d1b3fd7f85c932731e8c07457956860f6659034f3ef9af6a5a1aab299e19f071\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 17 17:53:16.784810 kubelet[2671]: E0317 17:53:16.784447 2671 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d1b3fd7f85c932731e8c07457956860f6659034f3ef9af6a5a1aab299e19f071\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-69489b6559-dszht"
Mar 17 17:53:16.784810 kubelet[2671]: E0317 17:53:16.784472 2671 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d1b3fd7f85c932731e8c07457956860f6659034f3ef9af6a5a1aab299e19f071\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-69489b6559-dszht"
Mar 17 17:53:16.784923 kubelet[2671]: E0317 17:53:16.784521 2671 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-69489b6559-dszht_calico-apiserver(1c0a54b3-f431-4a56-aa22-15462ea1a4c4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-69489b6559-dszht_calico-apiserver(1c0a54b3-f431-4a56-aa22-15462ea1a4c4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d1b3fd7f85c932731e8c07457956860f6659034f3ef9af6a5a1aab299e19f071\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-69489b6559-dszht" podUID="1c0a54b3-f431-4a56-aa22-15462ea1a4c4"
Mar 17 17:53:16.787651 containerd[1488]: time="2025-03-17T17:53:16.787602783Z" level=error msg="Failed to destroy network for sandbox \"c1034c3eebf7eb83c1c802440d743441949ce78d89de03e10162996b5658f0f8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 17 17:53:16.788294 containerd[1488]: time="2025-03-17T17:53:16.788099698Z" level=error msg="encountered an error cleaning up failed sandbox \"c1034c3eebf7eb83c1c802440d743441949ce78d89de03e10162996b5658f0f8\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 17 17:53:16.788294 containerd[1488]: time="2025-03-17T17:53:16.788162005Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-69489b6559-jrsd6,Uid:bd182690-c4b5-49c8-a36b-6df7c7b0c5b6,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"c1034c3eebf7eb83c1c802440d743441949ce78d89de03e10162996b5658f0f8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 17 17:53:16.788605 kubelet[2671]: E0317 17:53:16.788363 2671 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c1034c3eebf7eb83c1c802440d743441949ce78d89de03e10162996b5658f0f8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 17 17:53:16.788605 kubelet[2671]: E0317 17:53:16.788463 2671 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c1034c3eebf7eb83c1c802440d743441949ce78d89de03e10162996b5658f0f8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-69489b6559-jrsd6"
Mar 17 17:53:16.788605 kubelet[2671]: E0317 17:53:16.788486 2671 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c1034c3eebf7eb83c1c802440d743441949ce78d89de03e10162996b5658f0f8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-69489b6559-jrsd6"
Mar 17 17:53:16.788923 kubelet[2671]: E0317 17:53:16.788852 2671 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-69489b6559-jrsd6_calico-apiserver(bd182690-c4b5-49c8-a36b-6df7c7b0c5b6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-69489b6559-jrsd6_calico-apiserver(bd182690-c4b5-49c8-a36b-6df7c7b0c5b6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c1034c3eebf7eb83c1c802440d743441949ce78d89de03e10162996b5658f0f8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-69489b6559-jrsd6" podUID="bd182690-c4b5-49c8-a36b-6df7c7b0c5b6"
Mar 17 17:53:16.799383 containerd[1488]: time="2025-03-17T17:53:16.799330164Z" level=error msg="Failed to destroy network for sandbox \"a84da3acd9cbe62971167bb8959e5f9495adbc898495c2ec84809002361e9cd9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 17 17:53:16.799873 containerd[1488]: time="2025-03-17T17:53:16.799846074Z" level=error msg="encountered an error cleaning up failed sandbox \"a84da3acd9cbe62971167bb8959e5f9495adbc898495c2ec84809002361e9cd9\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 17 17:53:16.799975 containerd[1488]: time="2025-03-17T17:53:16.799938708Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-48wgq,Uid:61c49c7a-fa99-4721-add6-729b5d3cd3c9,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"a84da3acd9cbe62971167bb8959e5f9495adbc898495c2ec84809002361e9cd9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 17 17:53:16.800421 kubelet[2671]: E0317 17:53:16.800239 2671 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a84da3acd9cbe62971167bb8959e5f9495adbc898495c2ec84809002361e9cd9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 17 17:53:16.800489 kubelet[2671]: E0317 17:53:16.800431 2671 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a84da3acd9cbe62971167bb8959e5f9495adbc898495c2ec84809002361e9cd9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-48wgq"
Mar 17 17:53:16.800489 kubelet[2671]: E0317 17:53:16.800456 2671 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a84da3acd9cbe62971167bb8959e5f9495adbc898495c2ec84809002361e9cd9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-48wgq"
Mar 17 17:53:16.800565 kubelet[2671]: E0317 17:53:16.800502 2671 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-48wgq_kube-system(61c49c7a-fa99-4721-add6-729b5d3cd3c9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-48wgq_kube-system(61c49c7a-fa99-4721-add6-729b5d3cd3c9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a84da3acd9cbe62971167bb8959e5f9495adbc898495c2ec84809002361e9cd9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-48wgq" podUID="61c49c7a-fa99-4721-add6-729b5d3cd3c9"
Mar 17 17:53:16.801906 containerd[1488]: time="2025-03-17T17:53:16.801857726Z" level=error msg="Failed to destroy network for sandbox \"9b1baf90cf663bc9dfb45347bae6405dc471fcfba01fa47176b720b95c1e4f16\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 17 17:53:16.802270 containerd[1488]: time="2025-03-17T17:53:16.802220368Z" level=error msg="encountered an error cleaning up failed sandbox \"9b1baf90cf663bc9dfb45347bae6405dc471fcfba01fa47176b720b95c1e4f16\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 17 17:53:16.802325 containerd[1488]: time="2025-03-17T17:53:16.802270171Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-qsx6f,Uid:0a462e4e-6356-4301-958a-de536414d963,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"9b1baf90cf663bc9dfb45347bae6405dc471fcfba01fa47176b720b95c1e4f16\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 17 17:53:16.802529 kubelet[2671]: E0317 17:53:16.802485 2671 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9b1baf90cf663bc9dfb45347bae6405dc471fcfba01fa47176b720b95c1e4f16\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 17 17:53:16.802581 kubelet[2671]: E0317 17:53:16.802554 2671 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9b1baf90cf663bc9dfb45347bae6405dc471fcfba01fa47176b720b95c1e4f16\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-qsx6f"
Mar 17 17:53:16.802581 kubelet[2671]: E0317 17:53:16.802575 2671 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9b1baf90cf663bc9dfb45347bae6405dc471fcfba01fa47176b720b95c1e4f16\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-qsx6f"
Mar 17 17:53:16.802645 kubelet[2671]: E0317 17:53:16.802619 2671 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-qsx6f_calico-system(0a462e4e-6356-4301-958a-de536414d963)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-qsx6f_calico-system(0a462e4e-6356-4301-958a-de536414d963)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9b1baf90cf663bc9dfb45347bae6405dc471fcfba01fa47176b720b95c1e4f16\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-qsx6f" podUID="0a462e4e-6356-4301-958a-de536414d963"
Mar 17 17:53:17.661690 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-9b1baf90cf663bc9dfb45347bae6405dc471fcfba01fa47176b720b95c1e4f16-shm.mount: Deactivated successfully.
Mar 17 17:53:17.661858 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a84da3acd9cbe62971167bb8959e5f9495adbc898495c2ec84809002361e9cd9-shm.mount: Deactivated successfully.
Mar 17 17:53:17.661967 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-c1034c3eebf7eb83c1c802440d743441949ce78d89de03e10162996b5658f0f8-shm.mount: Deactivated successfully.
Mar 17 17:53:17.662795 kubelet[2671]: I0317 17:53:17.662731 2671 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d1b3fd7f85c932731e8c07457956860f6659034f3ef9af6a5a1aab299e19f071"
Mar 17 17:53:17.664602 kubelet[2671]: I0317 17:53:17.664559 2671 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c1034c3eebf7eb83c1c802440d743441949ce78d89de03e10162996b5658f0f8"
Mar 17 17:53:17.664694 containerd[1488]: time="2025-03-17T17:53:17.664601342Z" level=info msg="StopPodSandbox for \"d1b3fd7f85c932731e8c07457956860f6659034f3ef9af6a5a1aab299e19f071\""
Mar 17 17:53:17.664924 containerd[1488]: time="2025-03-17T17:53:17.664884183Z" level=info msg="Ensure that sandbox d1b3fd7f85c932731e8c07457956860f6659034f3ef9af6a5a1aab299e19f071 in task-service has been cleanup successfully"
Mar 17 17:53:17.665194 containerd[1488]: time="2025-03-17T17:53:17.665152358Z" level=info msg="StopPodSandbox for \"c1034c3eebf7eb83c1c802440d743441949ce78d89de03e10162996b5658f0f8\""
Mar 17 17:53:17.665718 containerd[1488]: time="2025-03-17T17:53:17.665281090Z" level=info msg="TearDown network for sandbox \"d1b3fd7f85c932731e8c07457956860f6659034f3ef9af6a5a1aab299e19f071\" successfully"
Mar 17 17:53:17.665718 containerd[1488]: time="2025-03-17T17:53:17.665302730Z" level=info msg="StopPodSandbox for \"d1b3fd7f85c932731e8c07457956860f6659034f3ef9af6a5a1aab299e19f071\" returns successfully"
Mar 17 17:53:17.665718 containerd[1488]: time="2025-03-17T17:53:17.665410183Z" level=info msg="Ensure that sandbox c1034c3eebf7eb83c1c802440d743441949ce78d89de03e10162996b5658f0f8 in task-service has been cleanup successfully"
Mar 17 17:53:17.668771 containerd[1488]: time="2025-03-17T17:53:17.665874816Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-69489b6559-dszht,Uid:1c0a54b3-f431-4a56-aa22-15462ea1a4c4,Namespace:calico-apiserver,Attempt:1,}"
Mar 17 17:53:17.668112 systemd[1]: run-netns-cni\x2d72a43498\x2dbce8\x2d9f9f\x2da077\x2d17afdc0310c3.mount: Deactivated successfully.
Mar 17 17:53:17.668220 systemd[1]: run-netns-cni\x2d244ee84a\x2d8857\x2d38f5\x2d251c\x2d1d52c365911d.mount: Deactivated successfully.
Mar 17 17:53:17.670494 kubelet[2671]: I0317 17:53:17.669636 2671 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9cb7b7521a0edc0a9c7a9d2e5cd84a22265bcf5e96d3aec1ac00f3807c3638f5"
Mar 17 17:53:17.670537 containerd[1488]: time="2025-03-17T17:53:17.668911204Z" level=info msg="TearDown network for sandbox \"c1034c3eebf7eb83c1c802440d743441949ce78d89de03e10162996b5658f0f8\" successfully"
Mar 17 17:53:17.670537 containerd[1488]: time="2025-03-17T17:53:17.668931682Z" level=info msg="StopPodSandbox for \"c1034c3eebf7eb83c1c802440d743441949ce78d89de03e10162996b5658f0f8\" returns successfully"
Mar 17 17:53:17.670537 containerd[1488]: time="2025-03-17T17:53:17.669985984Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-69489b6559-jrsd6,Uid:bd182690-c4b5-49c8-a36b-6df7c7b0c5b6,Namespace:calico-apiserver,Attempt:1,}"
Mar 17 17:53:17.670693 containerd[1488]: time="2025-03-17T17:53:17.670664911Z" level=info msg="StopPodSandbox for \"9cb7b7521a0edc0a9c7a9d2e5cd84a22265bcf5e96d3aec1ac00f3807c3638f5\""
Mar 17 17:53:17.671524 containerd[1488]: time="2025-03-17T17:53:17.671040728Z" level=info msg="Ensure that sandbox 9cb7b7521a0edc0a9c7a9d2e5cd84a22265bcf5e96d3aec1ac00f3807c3638f5 in task-service has been cleanup successfully"
Mar 17 17:53:17.671587 kubelet[2671]: I0317 17:53:17.671142 2671 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a84da3acd9cbe62971167bb8959e5f9495adbc898495c2ec84809002361e9cd9"
Mar 17 17:53:17.674460 containerd[1488]: time="2025-03-17T17:53:17.673967910Z" level=info msg="StopPodSandbox for \"a84da3acd9cbe62971167bb8959e5f9495adbc898495c2ec84809002361e9cd9\""
Mar 17 17:53:17.674460 containerd[1488]: time="2025-03-17T17:53:17.674188906Z" level=info msg="TearDown network for sandbox \"9cb7b7521a0edc0a9c7a9d2e5cd84a22265bcf5e96d3aec1ac00f3807c3638f5\" successfully"
Mar 17 17:53:17.674460 containerd[1488]: time="2025-03-17T17:53:17.674209244Z" level=info msg="StopPodSandbox for \"9cb7b7521a0edc0a9c7a9d2e5cd84a22265bcf5e96d3aec1ac00f3807c3638f5\" returns successfully"
Mar 17 17:53:17.674460 containerd[1488]: time="2025-03-17T17:53:17.674250332Z" level=info msg="Ensure that sandbox a84da3acd9cbe62971167bb8959e5f9495adbc898495c2ec84809002361e9cd9 in task-service has been cleanup successfully"
Mar 17 17:53:17.674609 kubelet[2671]: E0317 17:53:17.674391 2671 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 17:53:17.674609 kubelet[2671]: I0317 17:53:17.674586 2671 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5962b8587e49b332a9082bc85f0dfd8758737d6294c296fdd2975f2284c3de54"
Mar 17 17:53:17.674606 systemd[1]: run-netns-cni\x2d5fca4407\x2df6d6\x2db6d4\x2d5045\x2d04ec6b8158da.mount: Deactivated successfully.
Mar 17 17:53:17.674721 containerd[1488]: time="2025-03-17T17:53:17.674633191Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-z8gdg,Uid:a4788825-63ac-4824-9f76-78d12b4feedf,Namespace:kube-system,Attempt:1,}"
Mar 17 17:53:17.674821 containerd[1488]: time="2025-03-17T17:53:17.674802559Z" level=info msg="TearDown network for sandbox \"a84da3acd9cbe62971167bb8959e5f9495adbc898495c2ec84809002361e9cd9\" successfully"
Mar 17 17:53:17.674881 containerd[1488]: time="2025-03-17T17:53:17.674865498Z" level=info msg="StopPodSandbox for \"a84da3acd9cbe62971167bb8959e5f9495adbc898495c2ec84809002361e9cd9\" returns successfully"
Mar 17 17:53:17.675295 kubelet[2671]: E0317 17:53:17.675269 2671 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 17:53:17.675691 containerd[1488]: time="2025-03-17T17:53:17.675488219Z" level=info msg="StopPodSandbox for \"5962b8587e49b332a9082bc85f0dfd8758737d6294c296fdd2975f2284c3de54\""
Mar 17 17:53:17.675763 containerd[1488]: time="2025-03-17T17:53:17.675624354Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-48wgq,Uid:61c49c7a-fa99-4721-add6-729b5d3cd3c9,Namespace:kube-system,Attempt:1,}"
Mar 17 17:53:17.676158 kubelet[2671]: I0317 17:53:17.676131 2671 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9b1baf90cf663bc9dfb45347bae6405dc471fcfba01fa47176b720b95c1e4f16"
Mar 17 17:53:17.676765 containerd[1488]: time="2025-03-17T17:53:17.676631447Z" level=info msg="Ensure that sandbox 5962b8587e49b332a9082bc85f0dfd8758737d6294c296fdd2975f2284c3de54 in task-service has been cleanup successfully"
Mar 17 17:53:17.677454 containerd[1488]: time="2025-03-17T17:53:17.676986715Z" level=info msg="StopPodSandbox for \"9b1baf90cf663bc9dfb45347bae6405dc471fcfba01fa47176b720b95c1e4f16\""
Mar 17 17:53:17.677454 containerd[1488]: time="2025-03-17T17:53:17.676934427Z" level=info msg="TearDown network for sandbox \"5962b8587e49b332a9082bc85f0dfd8758737d6294c296fdd2975f2284c3de54\" successfully"
Mar 17 17:53:17.677454 containerd[1488]: time="2025-03-17T17:53:17.677100630Z" level=info msg="StopPodSandbox for \"5962b8587e49b332a9082bc85f0dfd8758737d6294c296fdd2975f2284c3de54\" returns successfully"
Mar 17 17:53:17.677454 containerd[1488]: time="2025-03-17T17:53:17.677197401Z" level=info msg="Ensure that sandbox 9b1baf90cf663bc9dfb45347bae6405dc471fcfba01fa47176b720b95c1e4f16 in task-service has been cleanup successfully"
Mar 17 17:53:17.677651 containerd[1488]: time="2025-03-17T17:53:17.677596562Z" level=info msg="TearDown network for sandbox \"9b1baf90cf663bc9dfb45347bae6405dc471fcfba01fa47176b720b95c1e4f16\" successfully"
Mar 17 17:53:17.677779 containerd[1488]: time="2025-03-17T17:53:17.677712280Z" level=info msg="StopPodSandbox for \"9b1baf90cf663bc9dfb45347bae6405dc471fcfba01fa47176b720b95c1e4f16\" returns successfully"
Mar 17 17:53:17.678277 containerd[1488]: time="2025-03-17T17:53:17.678252835Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-766ddbd77b-gscbd,Uid:ecb46531-5f4a-4235-a229-80c727efd6de,Namespace:calico-system,Attempt:1,}"
Mar 17 17:53:17.678737 containerd[1488]: time="2025-03-17T17:53:17.678695217Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-qsx6f,Uid:0a462e4e-6356-4301-958a-de536414d963,Namespace:calico-system,Attempt:1,}"
Mar 17 17:53:17.679309 systemd[1]: run-netns-cni\x2d018710d0\x2ddae9\x2d9bbe\x2d8bc8\x2d4709732bea99.mount: Deactivated successfully.
Mar 17 17:53:17.679628 systemd[1]: run-netns-cni\x2d9318c83c\x2d4454\x2d2af3\x2d4e69\x2dc7cf87c14532.mount: Deactivated successfully.
Mar 17 17:53:17.679865 systemd[1]: run-netns-cni\x2d2b64729f\x2d1b4f\x2dd367\x2de9d1\x2dec81ad37eabe.mount: Deactivated successfully.
Mar 17 17:53:18.445716 containerd[1488]: time="2025-03-17T17:53:18.445651497Z" level=error msg="Failed to destroy network for sandbox \"961747fb411bb6f89447f707684a2226650cafaa31636216d5776b0f926bbb52\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 17 17:53:18.446123 containerd[1488]: time="2025-03-17T17:53:18.446088358Z" level=error msg="encountered an error cleaning up failed sandbox \"961747fb411bb6f89447f707684a2226650cafaa31636216d5776b0f926bbb52\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 17 17:53:18.446187 containerd[1488]: time="2025-03-17T17:53:18.446166755Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-69489b6559-dszht,Uid:1c0a54b3-f431-4a56-aa22-15462ea1a4c4,Namespace:calico-apiserver,Attempt:1,} failed, error" error="failed to setup network for sandbox \"961747fb411bb6f89447f707684a2226650cafaa31636216d5776b0f926bbb52\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 17 17:53:18.446480 kubelet[2671]: E0317 17:53:18.446441 2671 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"961747fb411bb6f89447f707684a2226650cafaa31636216d5776b0f926bbb52\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 17 17:53:18.446566 kubelet[2671]: E0317 17:53:18.446506 2671 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"961747fb411bb6f89447f707684a2226650cafaa31636216d5776b0f926bbb52\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-69489b6559-dszht"
Mar 17 17:53:18.446566 kubelet[2671]: E0317 17:53:18.446536 2671 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"961747fb411bb6f89447f707684a2226650cafaa31636216d5776b0f926bbb52\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-69489b6559-dszht"
Mar 17 17:53:18.446631 kubelet[2671]: E0317 17:53:18.446590 2671 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-69489b6559-dszht_calico-apiserver(1c0a54b3-f431-4a56-aa22-15462ea1a4c4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-69489b6559-dszht_calico-apiserver(1c0a54b3-f431-4a56-aa22-15462ea1a4c4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"961747fb411bb6f89447f707684a2226650cafaa31636216d5776b0f926bbb52\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-69489b6559-dszht" podUID="1c0a54b3-f431-4a56-aa22-15462ea1a4c4"
Mar 17 17:53:18.471978 containerd[1488]: time="2025-03-17T17:53:18.471915963Z" level=error msg="Failed to destroy network for sandbox \"b18aaf4dc1e6bac2d1f261a436d415aa7e4affae8e761cea7a281aa750d3614f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 17 17:53:18.472410 containerd[1488]: time="2025-03-17T17:53:18.472353376Z" level=error msg="encountered an error cleaning up failed sandbox \"b18aaf4dc1e6bac2d1f261a436d415aa7e4affae8e761cea7a281aa750d3614f\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 17 17:53:18.472457 containerd[1488]: time="2025-03-17T17:53:18.472429299Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-69489b6559-jrsd6,Uid:bd182690-c4b5-49c8-a36b-6df7c7b0c5b6,Namespace:calico-apiserver,Attempt:1,} failed, error" error="failed to setup network for sandbox \"b18aaf4dc1e6bac2d1f261a436d415aa7e4affae8e761cea7a281aa750d3614f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 17 17:53:18.472666 kubelet[2671]: E0317 17:53:18.472631 2671 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b18aaf4dc1e6bac2d1f261a436d415aa7e4affae8e761cea7a281aa750d3614f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 17 17:53:18.472717 kubelet[2671]: E0317 17:53:18.472691 2671 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b18aaf4dc1e6bac2d1f261a436d415aa7e4affae8e761cea7a281aa750d3614f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-69489b6559-jrsd6"
Mar 17 17:53:18.472763 kubelet[2671]: E0317 17:53:18.472719 2671 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b18aaf4dc1e6bac2d1f261a436d415aa7e4affae8e761cea7a281aa750d3614f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-69489b6559-jrsd6"
Mar 17 17:53:18.472801 kubelet[2671]: E0317 17:53:18.472774 2671 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-69489b6559-jrsd6_calico-apiserver(bd182690-c4b5-49c8-a36b-6df7c7b0c5b6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-69489b6559-jrsd6_calico-apiserver(bd182690-c4b5-49c8-a36b-6df7c7b0c5b6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b18aaf4dc1e6bac2d1f261a436d415aa7e4affae8e761cea7a281aa750d3614f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-69489b6559-jrsd6" podUID="bd182690-c4b5-49c8-a36b-6df7c7b0c5b6"
Mar 17 17:53:18.500033 containerd[1488]: time="2025-03-17T17:53:18.499966438Z" level=error msg="Failed to destroy network for sandbox \"13f8945e1ab3a958676fec549614b5e42e210cc72e620db9b796aef13f21942a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 17 17:53:18.500496 containerd[1488]: time="2025-03-17T17:53:18.500453123Z" level=error msg="encountered an error cleaning up failed sandbox
\"13f8945e1ab3a958676fec549614b5e42e210cc72e620db9b796aef13f21942a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:53:18.500539 containerd[1488]: time="2025-03-17T17:53:18.500523535Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-z8gdg,Uid:a4788825-63ac-4824-9f76-78d12b4feedf,Namespace:kube-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"13f8945e1ab3a958676fec549614b5e42e210cc72e620db9b796aef13f21942a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:53:18.500843 kubelet[2671]: E0317 17:53:18.500783 2671 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"13f8945e1ab3a958676fec549614b5e42e210cc72e620db9b796aef13f21942a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:53:18.500914 kubelet[2671]: E0317 17:53:18.500854 2671 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"13f8945e1ab3a958676fec549614b5e42e210cc72e620db9b796aef13f21942a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-z8gdg" Mar 17 17:53:18.500914 kubelet[2671]: E0317 17:53:18.500885 2671 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"13f8945e1ab3a958676fec549614b5e42e210cc72e620db9b796aef13f21942a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-z8gdg" Mar 17 17:53:18.500967 kubelet[2671]: E0317 17:53:18.500928 2671 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-z8gdg_kube-system(a4788825-63ac-4824-9f76-78d12b4feedf)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-z8gdg_kube-system(a4788825-63ac-4824-9f76-78d12b4feedf)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"13f8945e1ab3a958676fec549614b5e42e210cc72e620db9b796aef13f21942a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-z8gdg" podUID="a4788825-63ac-4824-9f76-78d12b4feedf" Mar 17 17:53:18.590507 containerd[1488]: time="2025-03-17T17:53:18.590428573Z" level=error msg="Failed to destroy network for sandbox \"5bf8d4c046883d5d0be09e30054996dffec7065148980daaf45a69a75939d8c6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:53:18.591379 containerd[1488]: time="2025-03-17T17:53:18.591326561Z" level=error msg="encountered an error cleaning up failed sandbox \"5bf8d4c046883d5d0be09e30054996dffec7065148980daaf45a69a75939d8c6\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:53:18.591432 containerd[1488]: time="2025-03-17T17:53:18.591404838Z" 
level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-48wgq,Uid:61c49c7a-fa99-4721-add6-729b5d3cd3c9,Namespace:kube-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"5bf8d4c046883d5d0be09e30054996dffec7065148980daaf45a69a75939d8c6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:53:18.591744 kubelet[2671]: E0317 17:53:18.591664 2671 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5bf8d4c046883d5d0be09e30054996dffec7065148980daaf45a69a75939d8c6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:53:18.591744 kubelet[2671]: E0317 17:53:18.591739 2671 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5bf8d4c046883d5d0be09e30054996dffec7065148980daaf45a69a75939d8c6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-48wgq" Mar 17 17:53:18.591955 kubelet[2671]: E0317 17:53:18.591781 2671 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5bf8d4c046883d5d0be09e30054996dffec7065148980daaf45a69a75939d8c6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-48wgq" Mar 17 17:53:18.591955 kubelet[2671]: E0317 17:53:18.591829 2671 pod_workers.go:1298] "Error 
syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-48wgq_kube-system(61c49c7a-fa99-4721-add6-729b5d3cd3c9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-48wgq_kube-system(61c49c7a-fa99-4721-add6-729b5d3cd3c9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5bf8d4c046883d5d0be09e30054996dffec7065148980daaf45a69a75939d8c6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-48wgq" podUID="61c49c7a-fa99-4721-add6-729b5d3cd3c9" Mar 17 17:53:18.635879 containerd[1488]: time="2025-03-17T17:53:18.635820884Z" level=error msg="Failed to destroy network for sandbox \"176cab8d16eb48ec4a1ad772e737215f15a45779cf10e66d9d17ae04d002535e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:53:18.636294 containerd[1488]: time="2025-03-17T17:53:18.636255761Z" level=error msg="encountered an error cleaning up failed sandbox \"176cab8d16eb48ec4a1ad772e737215f15a45779cf10e66d9d17ae04d002535e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:53:18.636357 containerd[1488]: time="2025-03-17T17:53:18.636325592Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-766ddbd77b-gscbd,Uid:ecb46531-5f4a-4235-a229-80c727efd6de,Namespace:calico-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"176cab8d16eb48ec4a1ad772e737215f15a45779cf10e66d9d17ae04d002535e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such 
file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:53:18.636679 kubelet[2671]: E0317 17:53:18.636600 2671 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"176cab8d16eb48ec4a1ad772e737215f15a45779cf10e66d9d17ae04d002535e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:53:18.636731 kubelet[2671]: E0317 17:53:18.636680 2671 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"176cab8d16eb48ec4a1ad772e737215f15a45779cf10e66d9d17ae04d002535e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-766ddbd77b-gscbd" Mar 17 17:53:18.636731 kubelet[2671]: E0317 17:53:18.636707 2671 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"176cab8d16eb48ec4a1ad772e737215f15a45779cf10e66d9d17ae04d002535e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-766ddbd77b-gscbd" Mar 17 17:53:18.637232 kubelet[2671]: E0317 17:53:18.636792 2671 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-766ddbd77b-gscbd_calico-system(ecb46531-5f4a-4235-a229-80c727efd6de)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-766ddbd77b-gscbd_calico-system(ecb46531-5f4a-4235-a229-80c727efd6de)\\\": rpc 
error: code = Unknown desc = failed to setup network for sandbox \\\"176cab8d16eb48ec4a1ad772e737215f15a45779cf10e66d9d17ae04d002535e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-766ddbd77b-gscbd" podUID="ecb46531-5f4a-4235-a229-80c727efd6de" Mar 17 17:53:18.656862 containerd[1488]: time="2025-03-17T17:53:18.656796951Z" level=error msg="Failed to destroy network for sandbox \"57f44e3d7d20aa335e973a8243ab91cb12b6ece161a16a83acbf9d841fd3ff52\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:53:18.657227 containerd[1488]: time="2025-03-17T17:53:18.657198015Z" level=error msg="encountered an error cleaning up failed sandbox \"57f44e3d7d20aa335e973a8243ab91cb12b6ece161a16a83acbf9d841fd3ff52\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:53:18.657273 containerd[1488]: time="2025-03-17T17:53:18.657254731Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-qsx6f,Uid:0a462e4e-6356-4301-958a-de536414d963,Namespace:calico-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"57f44e3d7d20aa335e973a8243ab91cb12b6ece161a16a83acbf9d841fd3ff52\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:53:18.657512 kubelet[2671]: E0317 17:53:18.657467 2671 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network 
for sandbox \"57f44e3d7d20aa335e973a8243ab91cb12b6ece161a16a83acbf9d841fd3ff52\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:53:18.657570 kubelet[2671]: E0317 17:53:18.657525 2671 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"57f44e3d7d20aa335e973a8243ab91cb12b6ece161a16a83acbf9d841fd3ff52\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-qsx6f" Mar 17 17:53:18.657570 kubelet[2671]: E0317 17:53:18.657546 2671 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"57f44e3d7d20aa335e973a8243ab91cb12b6ece161a16a83acbf9d841fd3ff52\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-qsx6f" Mar 17 17:53:18.657632 kubelet[2671]: E0317 17:53:18.657586 2671 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-qsx6f_calico-system(0a462e4e-6356-4301-958a-de536414d963)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-qsx6f_calico-system(0a462e4e-6356-4301-958a-de536414d963)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"57f44e3d7d20aa335e973a8243ab91cb12b6ece161a16a83acbf9d841fd3ff52\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-qsx6f" 
podUID="0a462e4e-6356-4301-958a-de536414d963" Mar 17 17:53:18.679169 kubelet[2671]: I0317 17:53:18.679133 2671 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="57f44e3d7d20aa335e973a8243ab91cb12b6ece161a16a83acbf9d841fd3ff52" Mar 17 17:53:18.679721 containerd[1488]: time="2025-03-17T17:53:18.679688538Z" level=info msg="StopPodSandbox for \"57f44e3d7d20aa335e973a8243ab91cb12b6ece161a16a83acbf9d841fd3ff52\"" Mar 17 17:53:18.680869 containerd[1488]: time="2025-03-17T17:53:18.679919332Z" level=info msg="Ensure that sandbox 57f44e3d7d20aa335e973a8243ab91cb12b6ece161a16a83acbf9d841fd3ff52 in task-service has been cleanup successfully" Mar 17 17:53:18.680869 containerd[1488]: time="2025-03-17T17:53:18.680108758Z" level=info msg="TearDown network for sandbox \"57f44e3d7d20aa335e973a8243ab91cb12b6ece161a16a83acbf9d841fd3ff52\" successfully" Mar 17 17:53:18.680869 containerd[1488]: time="2025-03-17T17:53:18.680120801Z" level=info msg="StopPodSandbox for \"57f44e3d7d20aa335e973a8243ab91cb12b6ece161a16a83acbf9d841fd3ff52\" returns successfully" Mar 17 17:53:18.681142 kubelet[2671]: I0317 17:53:18.681112 2671 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="961747fb411bb6f89447f707684a2226650cafaa31636216d5776b0f926bbb52" Mar 17 17:53:18.681537 containerd[1488]: time="2025-03-17T17:53:18.681496616Z" level=info msg="StopPodSandbox for \"961747fb411bb6f89447f707684a2226650cafaa31636216d5776b0f926bbb52\"" Mar 17 17:53:18.681711 containerd[1488]: time="2025-03-17T17:53:18.681692575Z" level=info msg="Ensure that sandbox 961747fb411bb6f89447f707684a2226650cafaa31636216d5776b0f926bbb52 in task-service has been cleanup successfully" Mar 17 17:53:18.682003 containerd[1488]: time="2025-03-17T17:53:18.681893683Z" level=info msg="StopPodSandbox for \"9b1baf90cf663bc9dfb45347bae6405dc471fcfba01fa47176b720b95c1e4f16\"" Mar 17 17:53:18.682003 containerd[1488]: time="2025-03-17T17:53:18.681964998Z" level=info msg="TearDown 
network for sandbox \"9b1baf90cf663bc9dfb45347bae6405dc471fcfba01fa47176b720b95c1e4f16\" successfully" Mar 17 17:53:18.682003 containerd[1488]: time="2025-03-17T17:53:18.681974205Z" level=info msg="StopPodSandbox for \"9b1baf90cf663bc9dfb45347bae6405dc471fcfba01fa47176b720b95c1e4f16\" returns successfully" Mar 17 17:53:18.682460 systemd[1]: run-netns-cni\x2de6c2eb25\x2d7f76\x2d7204\x2d8970\x2da16b45e2d626.mount: Deactivated successfully. Mar 17 17:53:18.682776 containerd[1488]: time="2025-03-17T17:53:18.682634426Z" level=info msg="TearDown network for sandbox \"961747fb411bb6f89447f707684a2226650cafaa31636216d5776b0f926bbb52\" successfully" Mar 17 17:53:18.682776 containerd[1488]: time="2025-03-17T17:53:18.682647801Z" level=info msg="StopPodSandbox for \"961747fb411bb6f89447f707684a2226650cafaa31636216d5776b0f926bbb52\" returns successfully" Mar 17 17:53:18.683981 containerd[1488]: time="2025-03-17T17:53:18.683560847Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-qsx6f,Uid:0a462e4e-6356-4301-958a-de536414d963,Namespace:calico-system,Attempt:2,}" Mar 17 17:53:18.684230 kubelet[2671]: I0317 17:53:18.684185 2671 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b18aaf4dc1e6bac2d1f261a436d415aa7e4affae8e761cea7a281aa750d3614f" Mar 17 17:53:18.684594 containerd[1488]: time="2025-03-17T17:53:18.684573270Z" level=info msg="StopPodSandbox for \"b18aaf4dc1e6bac2d1f261a436d415aa7e4affae8e761cea7a281aa750d3614f\"" Mar 17 17:53:18.684772 containerd[1488]: time="2025-03-17T17:53:18.684735063Z" level=info msg="Ensure that sandbox b18aaf4dc1e6bac2d1f261a436d415aa7e4affae8e761cea7a281aa750d3614f in task-service has been cleanup successfully" Mar 17 17:53:18.685392 containerd[1488]: time="2025-03-17T17:53:18.685354137Z" level=info msg="StopPodSandbox for \"d1b3fd7f85c932731e8c07457956860f6659034f3ef9af6a5a1aab299e19f071\"" Mar 17 17:53:18.685509 containerd[1488]: time="2025-03-17T17:53:18.685438186Z" level=info 
msg="TearDown network for sandbox \"d1b3fd7f85c932731e8c07457956860f6659034f3ef9af6a5a1aab299e19f071\" successfully" Mar 17 17:53:18.685509 containerd[1488]: time="2025-03-17T17:53:18.685449307Z" level=info msg="StopPodSandbox for \"d1b3fd7f85c932731e8c07457956860f6659034f3ef9af6a5a1aab299e19f071\" returns successfully" Mar 17 17:53:18.686529 containerd[1488]: time="2025-03-17T17:53:18.685839941Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-69489b6559-dszht,Uid:1c0a54b3-f431-4a56-aa22-15462ea1a4c4,Namespace:calico-apiserver,Attempt:2,}" Mar 17 17:53:18.686529 containerd[1488]: time="2025-03-17T17:53:18.685846453Z" level=info msg="TearDown network for sandbox \"b18aaf4dc1e6bac2d1f261a436d415aa7e4affae8e761cea7a281aa750d3614f\" successfully" Mar 17 17:53:18.686529 containerd[1488]: time="2025-03-17T17:53:18.686011613Z" level=info msg="StopPodSandbox for \"b18aaf4dc1e6bac2d1f261a436d415aa7e4affae8e761cea7a281aa750d3614f\" returns successfully" Mar 17 17:53:18.686238 systemd[1]: run-netns-cni\x2df96f02ee\x2db1b5\x2dbfac\x2dfb46\x2d9a397b437529.mount: Deactivated successfully. 
Mar 17 17:53:18.686742 kubelet[2671]: I0317 17:53:18.686218 2671 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="13f8945e1ab3a958676fec549614b5e42e210cc72e620db9b796aef13f21942a" Mar 17 17:53:18.686965 containerd[1488]: time="2025-03-17T17:53:18.686942213Z" level=info msg="StopPodSandbox for \"13f8945e1ab3a958676fec549614b5e42e210cc72e620db9b796aef13f21942a\"" Mar 17 17:53:18.687575 containerd[1488]: time="2025-03-17T17:53:18.686957361Z" level=info msg="StopPodSandbox for \"c1034c3eebf7eb83c1c802440d743441949ce78d89de03e10162996b5658f0f8\"" Mar 17 17:53:18.687575 containerd[1488]: time="2025-03-17T17:53:18.687260110Z" level=info msg="Ensure that sandbox 13f8945e1ab3a958676fec549614b5e42e210cc72e620db9b796aef13f21942a in task-service has been cleanup successfully" Mar 17 17:53:18.687575 containerd[1488]: time="2025-03-17T17:53:18.687325553Z" level=info msg="TearDown network for sandbox \"c1034c3eebf7eb83c1c802440d743441949ce78d89de03e10162996b5658f0f8\" successfully" Mar 17 17:53:18.687575 containerd[1488]: time="2025-03-17T17:53:18.687346713Z" level=info msg="StopPodSandbox for \"c1034c3eebf7eb83c1c802440d743441949ce78d89de03e10162996b5658f0f8\" returns successfully" Mar 17 17:53:18.687802 containerd[1488]: time="2025-03-17T17:53:18.687781661Z" level=info msg="TearDown network for sandbox \"13f8945e1ab3a958676fec549614b5e42e210cc72e620db9b796aef13f21942a\" successfully" Mar 17 17:53:18.687971 containerd[1488]: time="2025-03-17T17:53:18.687953363Z" level=info msg="StopPodSandbox for \"13f8945e1ab3a958676fec549614b5e42e210cc72e620db9b796aef13f21942a\" returns successfully" Mar 17 17:53:18.688505 containerd[1488]: time="2025-03-17T17:53:18.687877450Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-69489b6559-jrsd6,Uid:bd182690-c4b5-49c8-a36b-6df7c7b0c5b6,Namespace:calico-apiserver,Attempt:2,}" Mar 17 17:53:18.689145 kubelet[2671]: I0317 17:53:18.688895 2671 pod_container_deletor.go:80] "Container not 
found in pod's containers" containerID="5bf8d4c046883d5d0be09e30054996dffec7065148980daaf45a69a75939d8c6" Mar 17 17:53:18.689197 containerd[1488]: time="2025-03-17T17:53:18.689003326Z" level=info msg="StopPodSandbox for \"9cb7b7521a0edc0a9c7a9d2e5cd84a22265bcf5e96d3aec1ac00f3807c3638f5\"" Mar 17 17:53:18.689197 containerd[1488]: time="2025-03-17T17:53:18.689086503Z" level=info msg="TearDown network for sandbox \"9cb7b7521a0edc0a9c7a9d2e5cd84a22265bcf5e96d3aec1ac00f3807c3638f5\" successfully" Mar 17 17:53:18.689197 containerd[1488]: time="2025-03-17T17:53:18.689098656Z" level=info msg="StopPodSandbox for \"9cb7b7521a0edc0a9c7a9d2e5cd84a22265bcf5e96d3aec1ac00f3807c3638f5\" returns successfully" Mar 17 17:53:18.689173 systemd[1]: run-netns-cni\x2ddebadaf6\x2d3c14\x2d54ae\x2d5bd7\x2d3baf5f040f71.mount: Deactivated successfully. Mar 17 17:53:18.689906 kubelet[2671]: E0317 17:53:18.689696 2671 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:53:18.690382 containerd[1488]: time="2025-03-17T17:53:18.690059632Z" level=info msg="StopPodSandbox for \"5bf8d4c046883d5d0be09e30054996dffec7065148980daaf45a69a75939d8c6\"" Mar 17 17:53:18.690382 containerd[1488]: time="2025-03-17T17:53:18.690064401Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-z8gdg,Uid:a4788825-63ac-4824-9f76-78d12b4feedf,Namespace:kube-system,Attempt:2,}" Mar 17 17:53:18.690382 containerd[1488]: time="2025-03-17T17:53:18.690257384Z" level=info msg="Ensure that sandbox 5bf8d4c046883d5d0be09e30054996dffec7065148980daaf45a69a75939d8c6 in task-service has been cleanup successfully" Mar 17 17:53:18.690658 containerd[1488]: time="2025-03-17T17:53:18.690637348Z" level=info msg="TearDown network for sandbox \"5bf8d4c046883d5d0be09e30054996dffec7065148980daaf45a69a75939d8c6\" successfully" Mar 17 17:53:18.690843 containerd[1488]: 
time="2025-03-17T17:53:18.690825963Z" level=info msg="StopPodSandbox for \"5bf8d4c046883d5d0be09e30054996dffec7065148980daaf45a69a75939d8c6\" returns successfully" Mar 17 17:53:18.691096 containerd[1488]: time="2025-03-17T17:53:18.691067046Z" level=info msg="StopPodSandbox for \"a84da3acd9cbe62971167bb8959e5f9495adbc898495c2ec84809002361e9cd9\"" Mar 17 17:53:18.691232 containerd[1488]: time="2025-03-17T17:53:18.691204253Z" level=info msg="TearDown network for sandbox \"a84da3acd9cbe62971167bb8959e5f9495adbc898495c2ec84809002361e9cd9\" successfully" Mar 17 17:53:18.691232 containerd[1488]: time="2025-03-17T17:53:18.691224681Z" level=info msg="StopPodSandbox for \"a84da3acd9cbe62971167bb8959e5f9495adbc898495c2ec84809002361e9cd9\" returns successfully" Mar 17 17:53:18.691312 kubelet[2671]: I0317 17:53:18.691209 2671 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="176cab8d16eb48ec4a1ad772e737215f15a45779cf10e66d9d17ae04d002535e" Mar 17 17:53:18.691597 containerd[1488]: time="2025-03-17T17:53:18.691576383Z" level=info msg="StopPodSandbox for \"176cab8d16eb48ec4a1ad772e737215f15a45779cf10e66d9d17ae04d002535e\"" Mar 17 17:53:18.691826 kubelet[2671]: E0317 17:53:18.691804 2671 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:53:18.692014 containerd[1488]: time="2025-03-17T17:53:18.691994849Z" level=info msg="Ensure that sandbox 176cab8d16eb48ec4a1ad772e737215f15a45779cf10e66d9d17ae04d002535e in task-service has been cleanup successfully" Mar 17 17:53:18.692279 containerd[1488]: time="2025-03-17T17:53:18.692019766Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-48wgq,Uid:61c49c7a-fa99-4721-add6-729b5d3cd3c9,Namespace:kube-system,Attempt:2,}" Mar 17 17:53:18.692279 containerd[1488]: time="2025-03-17T17:53:18.692261651Z" level=info msg="TearDown network for sandbox 
\"176cab8d16eb48ec4a1ad772e737215f15a45779cf10e66d9d17ae04d002535e\" successfully" Mar 17 17:53:18.692279 containerd[1488]: time="2025-03-17T17:53:18.692276228Z" level=info msg="StopPodSandbox for \"176cab8d16eb48ec4a1ad772e737215f15a45779cf10e66d9d17ae04d002535e\" returns successfully" Mar 17 17:53:18.692384 systemd[1]: run-netns-cni\x2d83d1200c\x2dca9b\x2d0523\x2d04ca\x2de7f50d37b9e6.mount: Deactivated successfully. Mar 17 17:53:18.692705 containerd[1488]: time="2025-03-17T17:53:18.692641114Z" level=info msg="StopPodSandbox for \"5962b8587e49b332a9082bc85f0dfd8758737d6294c296fdd2975f2284c3de54\"" Mar 17 17:53:18.692786 containerd[1488]: time="2025-03-17T17:53:18.692745160Z" level=info msg="TearDown network for sandbox \"5962b8587e49b332a9082bc85f0dfd8758737d6294c296fdd2975f2284c3de54\" successfully" Mar 17 17:53:18.692786 containerd[1488]: time="2025-03-17T17:53:18.692780036Z" level=info msg="StopPodSandbox for \"5962b8587e49b332a9082bc85f0dfd8758737d6294c296fdd2975f2284c3de54\" returns successfully" Mar 17 17:53:18.693161 containerd[1488]: time="2025-03-17T17:53:18.693139791Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-766ddbd77b-gscbd,Uid:ecb46531-5f4a-4235-a229-80c727efd6de,Namespace:calico-system,Attempt:2,}" Mar 17 17:53:19.661982 systemd[1]: run-netns-cni\x2db7d3ef27\x2d81ea\x2dd668\x2d6e4f\x2d8a43661ead77.mount: Deactivated successfully. Mar 17 17:53:19.662127 systemd[1]: run-netns-cni\x2dd5994b02\x2df222\x2d715f\x2d10da\x2dbcdef04db89a.mount: Deactivated successfully. Mar 17 17:53:20.260912 systemd[1]: Started sshd@8-10.0.0.117:22-10.0.0.1:45368.service - OpenSSH per-connection server daemon (10.0.0.1:45368). 
Mar 17 17:53:20.312027 sshd[3889]: Accepted publickey for core from 10.0.0.1 port 45368 ssh2: RSA SHA256:pvoNHoTmHcKIZ8E4rah4Xh4kuY0L81ONuagOsU2gN/o
Mar 17 17:53:20.313797 sshd-session[3889]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 17 17:53:20.318375 systemd-logind[1472]: New session 9 of user core.
Mar 17 17:53:20.329972 systemd[1]: Started session-9.scope - Session 9 of User core.
Mar 17 17:53:20.463569 sshd[3891]: Connection closed by 10.0.0.1 port 45368
Mar 17 17:53:20.463999 sshd-session[3889]: pam_unix(sshd:session): session closed for user core
Mar 17 17:53:20.468322 systemd[1]: sshd@8-10.0.0.117:22-10.0.0.1:45368.service: Deactivated successfully.
Mar 17 17:53:20.470707 systemd[1]: session-9.scope: Deactivated successfully.
Mar 17 17:53:20.471525 systemd-logind[1472]: Session 9 logged out. Waiting for processes to exit.
Mar 17 17:53:20.472639 systemd-logind[1472]: Removed session 9.
Mar 17 17:53:21.860504 containerd[1488]: time="2025-03-17T17:53:21.860431782Z" level=error msg="Failed to destroy network for sandbox \"50fa988d10b27c15b48a1ea2bec4930d2f20f85a00d0e3ced81e01ac385b6687\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 17 17:53:21.861047 containerd[1488]: time="2025-03-17T17:53:21.860875927Z" level=error msg="encountered an error cleaning up failed sandbox \"50fa988d10b27c15b48a1ea2bec4930d2f20f85a00d0e3ced81e01ac385b6687\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 17 17:53:21.861047 containerd[1488]: time="2025-03-17T17:53:21.860936240Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-69489b6559-dszht,Uid:1c0a54b3-f431-4a56-aa22-15462ea1a4c4,Namespace:calico-apiserver,Attempt:2,} failed, error" error="failed to setup network for sandbox \"50fa988d10b27c15b48a1ea2bec4930d2f20f85a00d0e3ced81e01ac385b6687\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 17 17:53:21.862194 kubelet[2671]: E0317 17:53:21.861336 2671 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"50fa988d10b27c15b48a1ea2bec4930d2f20f85a00d0e3ced81e01ac385b6687\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 17 17:53:21.862194 kubelet[2671]: E0317 17:53:21.861393 2671 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"50fa988d10b27c15b48a1ea2bec4930d2f20f85a00d0e3ced81e01ac385b6687\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-69489b6559-dszht"
Mar 17 17:53:21.862194 kubelet[2671]: E0317 17:53:21.861415 2671 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"50fa988d10b27c15b48a1ea2bec4930d2f20f85a00d0e3ced81e01ac385b6687\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-69489b6559-dszht"
Mar 17 17:53:21.862552 kubelet[2671]: E0317 17:53:21.861452 2671 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-69489b6559-dszht_calico-apiserver(1c0a54b3-f431-4a56-aa22-15462ea1a4c4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-69489b6559-dszht_calico-apiserver(1c0a54b3-f431-4a56-aa22-15462ea1a4c4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"50fa988d10b27c15b48a1ea2bec4930d2f20f85a00d0e3ced81e01ac385b6687\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-69489b6559-dszht" podUID="1c0a54b3-f431-4a56-aa22-15462ea1a4c4"
Mar 17 17:53:21.880366 containerd[1488]: time="2025-03-17T17:53:21.878853673Z" level=error msg="Failed to destroy network for sandbox \"5b1fcdeab2ed9d45495104daee6d6b97bdd93dbb8b9f3495cf0599df51059be3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 17 17:53:21.880366 containerd[1488]: time="2025-03-17T17:53:21.879521458Z" level=error msg="encountered an error cleaning up failed sandbox \"5b1fcdeab2ed9d45495104daee6d6b97bdd93dbb8b9f3495cf0599df51059be3\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 17 17:53:21.880366 containerd[1488]: time="2025-03-17T17:53:21.879612549Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-69489b6559-jrsd6,Uid:bd182690-c4b5-49c8-a36b-6df7c7b0c5b6,Namespace:calico-apiserver,Attempt:2,} failed, error" error="failed to setup network for sandbox \"5b1fcdeab2ed9d45495104daee6d6b97bdd93dbb8b9f3495cf0599df51059be3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 17 17:53:21.880840 kubelet[2671]: E0317 17:53:21.879870 2671 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5b1fcdeab2ed9d45495104daee6d6b97bdd93dbb8b9f3495cf0599df51059be3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 17 17:53:21.880840 kubelet[2671]: E0317 17:53:21.879914 2671 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5b1fcdeab2ed9d45495104daee6d6b97bdd93dbb8b9f3495cf0599df51059be3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-69489b6559-jrsd6"
Mar 17 17:53:21.880840 kubelet[2671]: E0317 17:53:21.879934 2671 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5b1fcdeab2ed9d45495104daee6d6b97bdd93dbb8b9f3495cf0599df51059be3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-69489b6559-jrsd6"
Mar 17 17:53:21.881167 kubelet[2671]: E0317 17:53:21.879968 2671 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-69489b6559-jrsd6_calico-apiserver(bd182690-c4b5-49c8-a36b-6df7c7b0c5b6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-69489b6559-jrsd6_calico-apiserver(bd182690-c4b5-49c8-a36b-6df7c7b0c5b6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5b1fcdeab2ed9d45495104daee6d6b97bdd93dbb8b9f3495cf0599df51059be3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-69489b6559-jrsd6" podUID="bd182690-c4b5-49c8-a36b-6df7c7b0c5b6"
Mar 17 17:53:21.896392 containerd[1488]: time="2025-03-17T17:53:21.896349565Z" level=error msg="Failed to destroy network for sandbox \"63613426308ea68474b35acdb7e40f20b69558a0485f098231782a66c337bfc3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 17 17:53:21.897090 containerd[1488]: time="2025-03-17T17:53:21.896952928Z" level=error msg="encountered an error cleaning up failed sandbox \"63613426308ea68474b35acdb7e40f20b69558a0485f098231782a66c337bfc3\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 17 17:53:21.905176 containerd[1488]: time="2025-03-17T17:53:21.905084786Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-qsx6f,Uid:0a462e4e-6356-4301-958a-de536414d963,Namespace:calico-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"63613426308ea68474b35acdb7e40f20b69558a0485f098231782a66c337bfc3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 17 17:53:21.905572 kubelet[2671]: E0317 17:53:21.905535 2671 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"63613426308ea68474b35acdb7e40f20b69558a0485f098231782a66c337bfc3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 17 17:53:21.905660 kubelet[2671]: E0317 17:53:21.905594 2671 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"63613426308ea68474b35acdb7e40f20b69558a0485f098231782a66c337bfc3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-qsx6f"
Mar 17 17:53:21.905660 kubelet[2671]: E0317 17:53:21.905614 2671 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"63613426308ea68474b35acdb7e40f20b69558a0485f098231782a66c337bfc3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-qsx6f"
Mar 17 17:53:21.905741 kubelet[2671]: E0317 17:53:21.905658 2671 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-qsx6f_calico-system(0a462e4e-6356-4301-958a-de536414d963)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-qsx6f_calico-system(0a462e4e-6356-4301-958a-de536414d963)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"63613426308ea68474b35acdb7e40f20b69558a0485f098231782a66c337bfc3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-qsx6f" podUID="0a462e4e-6356-4301-958a-de536414d963"
Mar 17 17:53:21.921590 containerd[1488]: time="2025-03-17T17:53:21.921537558Z" level=error msg="Failed to destroy network for sandbox \"cbfcdfa185d47daba995028114994971b84306063bc1d18e03db01de2f340836\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 17 17:53:21.921944 containerd[1488]: time="2025-03-17T17:53:21.921922051Z" level=error msg="encountered an error cleaning up failed sandbox \"cbfcdfa185d47daba995028114994971b84306063bc1d18e03db01de2f340836\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 17 17:53:21.922341 containerd[1488]: time="2025-03-17T17:53:21.922315570Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-766ddbd77b-gscbd,Uid:ecb46531-5f4a-4235-a229-80c727efd6de,Namespace:calico-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"cbfcdfa185d47daba995028114994971b84306063bc1d18e03db01de2f340836\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 17 17:53:21.922715 kubelet[2671]: E0317 17:53:21.922531 2671 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cbfcdfa185d47daba995028114994971b84306063bc1d18e03db01de2f340836\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 17 17:53:21.922715 kubelet[2671]: E0317 17:53:21.922592 2671 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cbfcdfa185d47daba995028114994971b84306063bc1d18e03db01de2f340836\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-766ddbd77b-gscbd"
Mar 17 17:53:21.922715 kubelet[2671]: E0317 17:53:21.922615 2671 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cbfcdfa185d47daba995028114994971b84306063bc1d18e03db01de2f340836\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-766ddbd77b-gscbd"
Mar 17 17:53:21.922872 kubelet[2671]: E0317 17:53:21.922658 2671 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-766ddbd77b-gscbd_calico-system(ecb46531-5f4a-4235-a229-80c727efd6de)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-766ddbd77b-gscbd_calico-system(ecb46531-5f4a-4235-a229-80c727efd6de)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"cbfcdfa185d47daba995028114994971b84306063bc1d18e03db01de2f340836\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-766ddbd77b-gscbd" podUID="ecb46531-5f4a-4235-a229-80c727efd6de"
Mar 17 17:53:21.977775 containerd[1488]: time="2025-03-17T17:53:21.977709535Z" level=error msg="Failed to destroy network for sandbox \"f216292d57491cc6fc3659530ec469fedb94fdc844a69e71fc7d4ac5b63c4a04\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 17 17:53:21.979216 containerd[1488]: time="2025-03-17T17:53:21.978924738Z" level=error msg="encountered an error cleaning up failed sandbox \"f216292d57491cc6fc3659530ec469fedb94fdc844a69e71fc7d4ac5b63c4a04\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 17 17:53:21.979216 containerd[1488]: time="2025-03-17T17:53:21.978982346Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-z8gdg,Uid:a4788825-63ac-4824-9f76-78d12b4feedf,Namespace:kube-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"f216292d57491cc6fc3659530ec469fedb94fdc844a69e71fc7d4ac5b63c4a04\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 17 17:53:21.979362 kubelet[2671]: E0317 17:53:21.979312 2671 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f216292d57491cc6fc3659530ec469fedb94fdc844a69e71fc7d4ac5b63c4a04\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 17 17:53:21.979509 kubelet[2671]: E0317 17:53:21.979477 2671 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f216292d57491cc6fc3659530ec469fedb94fdc844a69e71fc7d4ac5b63c4a04\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-z8gdg"
Mar 17 17:53:21.979615 kubelet[2671]: E0317 17:53:21.979595 2671 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f216292d57491cc6fc3659530ec469fedb94fdc844a69e71fc7d4ac5b63c4a04\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-z8gdg"
Mar 17 17:53:21.980049 kubelet[2671]: E0317 17:53:21.979730 2671 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-z8gdg_kube-system(a4788825-63ac-4824-9f76-78d12b4feedf)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-z8gdg_kube-system(a4788825-63ac-4824-9f76-78d12b4feedf)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f216292d57491cc6fc3659530ec469fedb94fdc844a69e71fc7d4ac5b63c4a04\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-z8gdg" podUID="a4788825-63ac-4824-9f76-78d12b4feedf"
Mar 17 17:53:21.986978 containerd[1488]: time="2025-03-17T17:53:21.986915381Z" level=error msg="Failed to destroy network for sandbox \"1d42c9f9298c30ac58c0af1beb3148eb9d10158f696d4c3025adf550cdae0287\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 17 17:53:21.987497 containerd[1488]: time="2025-03-17T17:53:21.987458351Z" level=error msg="encountered an error cleaning up failed sandbox \"1d42c9f9298c30ac58c0af1beb3148eb9d10158f696d4c3025adf550cdae0287\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 17 17:53:21.990483 containerd[1488]: time="2025-03-17T17:53:21.990451214Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-48wgq,Uid:61c49c7a-fa99-4721-add6-729b5d3cd3c9,Namespace:kube-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"1d42c9f9298c30ac58c0af1beb3148eb9d10158f696d4c3025adf550cdae0287\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 17 17:53:21.990859 kubelet[2671]: E0317 17:53:21.990722 2671 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1d42c9f9298c30ac58c0af1beb3148eb9d10158f696d4c3025adf550cdae0287\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Mar 17 17:53:21.991097 kubelet[2671]: E0317 17:53:21.990946 2671 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1d42c9f9298c30ac58c0af1beb3148eb9d10158f696d4c3025adf550cdae0287\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-48wgq"
Mar 17 17:53:21.991097 kubelet[2671]: E0317 17:53:21.990978 2671 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1d42c9f9298c30ac58c0af1beb3148eb9d10158f696d4c3025adf550cdae0287\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-48wgq"
Mar 17 17:53:21.991097 kubelet[2671]: E0317 17:53:21.991044 2671 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-48wgq_kube-system(61c49c7a-fa99-4721-add6-729b5d3cd3c9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-48wgq_kube-system(61c49c7a-fa99-4721-add6-729b5d3cd3c9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1d42c9f9298c30ac58c0af1beb3148eb9d10158f696d4c3025adf550cdae0287\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-48wgq" podUID="61c49c7a-fa99-4721-add6-729b5d3cd3c9"
Mar 17 17:53:22.111320 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-63613426308ea68474b35acdb7e40f20b69558a0485f098231782a66c337bfc3-shm.mount: Deactivated successfully.
Mar 17 17:53:22.112866 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-50fa988d10b27c15b48a1ea2bec4930d2f20f85a00d0e3ced81e01ac385b6687-shm.mount: Deactivated successfully.
Mar 17 17:53:22.706572 kubelet[2671]: I0317 17:53:22.706523 2671 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="63613426308ea68474b35acdb7e40f20b69558a0485f098231782a66c337bfc3"
Mar 17 17:53:22.707704 containerd[1488]: time="2025-03-17T17:53:22.707415386Z" level=info msg="StopPodSandbox for \"63613426308ea68474b35acdb7e40f20b69558a0485f098231782a66c337bfc3\""
Mar 17 17:53:22.707704 containerd[1488]: time="2025-03-17T17:53:22.707655356Z" level=info msg="Ensure that sandbox 63613426308ea68474b35acdb7e40f20b69558a0485f098231782a66c337bfc3 in task-service has been cleanup successfully"
Mar 17 17:53:22.708044 containerd[1488]: time="2025-03-17T17:53:22.707883023Z" level=info msg="TearDown network for sandbox \"63613426308ea68474b35acdb7e40f20b69558a0485f098231782a66c337bfc3\" successfully"
Mar 17 17:53:22.708044 containerd[1488]: time="2025-03-17T17:53:22.707903282Z" level=info msg="StopPodSandbox for \"63613426308ea68474b35acdb7e40f20b69558a0485f098231782a66c337bfc3\" returns successfully"
Mar 17 17:53:22.708701 containerd[1488]: time="2025-03-17T17:53:22.708542683Z" level=info msg="StopPodSandbox for \"57f44e3d7d20aa335e973a8243ab91cb12b6ece161a16a83acbf9d841fd3ff52\""
Mar 17 17:53:22.708701 containerd[1488]: time="2025-03-17T17:53:22.708646828Z" level=info msg="TearDown network for sandbox \"57f44e3d7d20aa335e973a8243ab91cb12b6ece161a16a83acbf9d841fd3ff52\" successfully"
Mar 17 17:53:22.708701 containerd[1488]: time="2025-03-17T17:53:22.708659212Z" level=info msg="StopPodSandbox for \"57f44e3d7d20aa335e973a8243ab91cb12b6ece161a16a83acbf9d841fd3ff52\" returns successfully"
Mar 17 17:53:22.709464 containerd[1488]: time="2025-03-17T17:53:22.709447092Z" level=info msg="StopPodSandbox for \"9b1baf90cf663bc9dfb45347bae6405dc471fcfba01fa47176b720b95c1e4f16\""
Mar 17 17:53:22.709763 containerd[1488]: time="2025-03-17T17:53:22.709586915Z" level=info msg="TearDown network for sandbox \"9b1baf90cf663bc9dfb45347bae6405dc471fcfba01fa47176b720b95c1e4f16\" successfully"
Mar 17 17:53:22.709763 containerd[1488]: time="2025-03-17T17:53:22.709643361Z" level=info msg="StopPodSandbox for \"9b1baf90cf663bc9dfb45347bae6405dc471fcfba01fa47176b720b95c1e4f16\" returns successfully"
Mar 17 17:53:22.709856 kubelet[2671]: I0317 17:53:22.709692 2671 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="50fa988d10b27c15b48a1ea2bec4930d2f20f85a00d0e3ced81e01ac385b6687"
Mar 17 17:53:22.710326 containerd[1488]: time="2025-03-17T17:53:22.710219563Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-qsx6f,Uid:0a462e4e-6356-4301-958a-de536414d963,Namespace:calico-system,Attempt:3,}"
Mar 17 17:53:22.711022 containerd[1488]: time="2025-03-17T17:53:22.710579790Z" level=info msg="StopPodSandbox for \"50fa988d10b27c15b48a1ea2bec4930d2f20f85a00d0e3ced81e01ac385b6687\""
Mar 17 17:53:22.711022 containerd[1488]: time="2025-03-17T17:53:22.710775728Z" level=info msg="Ensure that sandbox 50fa988d10b27c15b48a1ea2bec4930d2f20f85a00d0e3ced81e01ac385b6687 in task-service has been cleanup successfully"
Mar 17 17:53:22.711439 containerd[1488]: time="2025-03-17T17:53:22.711419948Z" level=info msg="TearDown network for sandbox \"50fa988d10b27c15b48a1ea2bec4930d2f20f85a00d0e3ced81e01ac385b6687\" successfully"
Mar 17 17:53:22.711552 containerd[1488]: time="2025-03-17T17:53:22.711536738Z" level=info msg="StopPodSandbox for \"50fa988d10b27c15b48a1ea2bec4930d2f20f85a00d0e3ced81e01ac385b6687\" returns successfully"
Mar 17 17:53:22.712160 containerd[1488]: time="2025-03-17T17:53:22.712142927Z" level=info msg="StopPodSandbox for \"961747fb411bb6f89447f707684a2226650cafaa31636216d5776b0f926bbb52\""
Mar 17 17:53:22.712193 systemd[1]: run-netns-cni\x2d45e13d88\x2d000e\x2d3f87\x2da578\x2d59232abfd682.mount: Deactivated successfully.
Mar 17 17:53:22.712511 containerd[1488]: time="2025-03-17T17:53:22.712492784Z" level=info msg="TearDown network for sandbox \"961747fb411bb6f89447f707684a2226650cafaa31636216d5776b0f926bbb52\" successfully"
Mar 17 17:53:22.712572 containerd[1488]: time="2025-03-17T17:53:22.712560070Z" level=info msg="StopPodSandbox for \"961747fb411bb6f89447f707684a2226650cafaa31636216d5776b0f926bbb52\" returns successfully"
Mar 17 17:53:22.714453 containerd[1488]: time="2025-03-17T17:53:22.714416958Z" level=info msg="StopPodSandbox for \"d1b3fd7f85c932731e8c07457956860f6659034f3ef9af6a5a1aab299e19f071\""
Mar 17 17:53:22.714558 containerd[1488]: time="2025-03-17T17:53:22.714529459Z" level=info msg="TearDown network for sandbox \"d1b3fd7f85c932731e8c07457956860f6659034f3ef9af6a5a1aab299e19f071\" successfully"
Mar 17 17:53:22.714558 containerd[1488]: time="2025-03-17T17:53:22.714553785Z" level=info msg="StopPodSandbox for \"d1b3fd7f85c932731e8c07457956860f6659034f3ef9af6a5a1aab299e19f071\" returns successfully"
Mar 17 17:53:22.715229 kubelet[2671]: I0317 17:53:22.714826 2671 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5b1fcdeab2ed9d45495104daee6d6b97bdd93dbb8b9f3495cf0599df51059be3"
Mar 17 17:53:22.715377 containerd[1488]: time="2025-03-17T17:53:22.714942726Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-69489b6559-dszht,Uid:1c0a54b3-f431-4a56-aa22-15462ea1a4c4,Namespace:calico-apiserver,Attempt:3,}"
Mar 17 17:53:22.715555 systemd[1]: run-netns-cni\x2d13e7b74c\x2d157f\x2d06a2\x2d9226\x2de5c351e077d8.mount: Deactivated successfully.
Mar 17 17:53:22.716784 containerd[1488]: time="2025-03-17T17:53:22.715767335Z" level=info msg="StopPodSandbox for \"5b1fcdeab2ed9d45495104daee6d6b97bdd93dbb8b9f3495cf0599df51059be3\""
Mar 17 17:53:22.720554 kubelet[2671]: I0317 17:53:22.720530 2671 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f216292d57491cc6fc3659530ec469fedb94fdc844a69e71fc7d4ac5b63c4a04"
Mar 17 17:53:22.721041 containerd[1488]: time="2025-03-17T17:53:22.721017477Z" level=info msg="StopPodSandbox for \"f216292d57491cc6fc3659530ec469fedb94fdc844a69e71fc7d4ac5b63c4a04\""
Mar 17 17:53:22.721206 containerd[1488]: time="2025-03-17T17:53:22.721188639Z" level=info msg="Ensure that sandbox f216292d57491cc6fc3659530ec469fedb94fdc844a69e71fc7d4ac5b63c4a04 in task-service has been cleanup successfully"
Mar 17 17:53:22.723281 kubelet[2671]: I0317 17:53:22.723246 2671 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1d42c9f9298c30ac58c0af1beb3148eb9d10158f696d4c3025adf550cdae0287"
Mar 17 17:53:22.723476 systemd[1]: run-netns-cni\x2d11260445\x2daf3e\x2d66de\x2d0752\x2d9cbd6b0657fc.mount: Deactivated successfully.
Mar 17 17:53:22.724544 containerd[1488]: time="2025-03-17T17:53:22.723709204Z" level=info msg="TearDown network for sandbox \"f216292d57491cc6fc3659530ec469fedb94fdc844a69e71fc7d4ac5b63c4a04\" successfully"
Mar 17 17:53:22.724544 containerd[1488]: time="2025-03-17T17:53:22.723728460Z" level=info msg="StopPodSandbox for \"f216292d57491cc6fc3659530ec469fedb94fdc844a69e71fc7d4ac5b63c4a04\" returns successfully"
Mar 17 17:53:22.724544 containerd[1488]: time="2025-03-17T17:53:22.723876018Z" level=info msg="StopPodSandbox for \"1d42c9f9298c30ac58c0af1beb3148eb9d10158f696d4c3025adf550cdae0287\""
Mar 17 17:53:22.724544 containerd[1488]: time="2025-03-17T17:53:22.724007154Z" level=info msg="Ensure that sandbox 1d42c9f9298c30ac58c0af1beb3148eb9d10158f696d4c3025adf550cdae0287 in task-service has been cleanup successfully"
Mar 17 17:53:22.724544 containerd[1488]: time="2025-03-17T17:53:22.724143951Z" level=info msg="StopPodSandbox for \"13f8945e1ab3a958676fec549614b5e42e210cc72e620db9b796aef13f21942a\""
Mar 17 17:53:22.724544 containerd[1488]: time="2025-03-17T17:53:22.724252315Z" level=info msg="TearDown network for sandbox \"13f8945e1ab3a958676fec549614b5e42e210cc72e620db9b796aef13f21942a\" successfully"
Mar 17 17:53:22.724544 containerd[1488]: time="2025-03-17T17:53:22.724281008Z" level=info msg="StopPodSandbox for \"13f8945e1ab3a958676fec549614b5e42e210cc72e620db9b796aef13f21942a\" returns successfully"
Mar 17 17:53:22.724544 containerd[1488]: time="2025-03-17T17:53:22.724270749Z" level=info msg="TearDown network for sandbox \"1d42c9f9298c30ac58c0af1beb3148eb9d10158f696d4c3025adf550cdae0287\" successfully"
Mar 17 17:53:22.724544 containerd[1488]: time="2025-03-17T17:53:22.724324189Z" level=info msg="StopPodSandbox for \"1d42c9f9298c30ac58c0af1beb3148eb9d10158f696d4c3025adf550cdae0287\" returns successfully"
Mar 17 17:53:22.726327 containerd[1488]: time="2025-03-17T17:53:22.724850338Z" level=info msg="StopPodSandbox for \"9cb7b7521a0edc0a9c7a9d2e5cd84a22265bcf5e96d3aec1ac00f3807c3638f5\""
Mar 17 17:53:22.726327 containerd[1488]: time="2025-03-17T17:53:22.724943102Z" level=info msg="TearDown network for sandbox \"9cb7b7521a0edc0a9c7a9d2e5cd84a22265bcf5e96d3aec1ac00f3807c3638f5\" successfully"
Mar 17 17:53:22.726327 containerd[1488]: time="2025-03-17T17:53:22.724955947Z" level=info msg="StopPodSandbox for \"9cb7b7521a0edc0a9c7a9d2e5cd84a22265bcf5e96d3aec1ac00f3807c3638f5\" returns successfully"
Mar 17 17:53:22.726327 containerd[1488]: time="2025-03-17T17:53:22.725070442Z" level=info msg="StopPodSandbox for \"5bf8d4c046883d5d0be09e30054996dffec7065148980daaf45a69a75939d8c6\""
Mar 17 17:53:22.726327 containerd[1488]: time="2025-03-17T17:53:22.725146875Z" level=info msg="TearDown network for sandbox \"5bf8d4c046883d5d0be09e30054996dffec7065148980daaf45a69a75939d8c6\" successfully"
Mar 17 17:53:22.726327 containerd[1488]: time="2025-03-17T17:53:22.725159328Z" level=info msg="StopPodSandbox for \"5bf8d4c046883d5d0be09e30054996dffec7065148980daaf45a69a75939d8c6\" returns successfully"
Mar 17 17:53:22.726953 kubelet[2671]: E0317 17:53:22.726927 2671 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 17:53:22.727292 containerd[1488]: time="2025-03-17T17:53:22.727236792Z" level=info msg="StopPodSandbox for \"a84da3acd9cbe62971167bb8959e5f9495adbc898495c2ec84809002361e9cd9\""
Mar 17 17:53:22.727349 containerd[1488]: time="2025-03-17T17:53:22.727332581Z" level=info msg="TearDown network for sandbox \"a84da3acd9cbe62971167bb8959e5f9495adbc898495c2ec84809002361e9cd9\" successfully"
Mar 17 17:53:22.727349 containerd[1488]: time="2025-03-17T17:53:22.727346207Z" level=info msg="StopPodSandbox for \"a84da3acd9cbe62971167bb8959e5f9495adbc898495c2ec84809002361e9cd9\" returns successfully"
Mar 17 17:53:22.727418 containerd[1488]: time="2025-03-17T17:53:22.727404426Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-z8gdg,Uid:a4788825-63ac-4824-9f76-78d12b4feedf,Namespace:kube-system,Attempt:3,}"
Mar 17 17:53:22.727374 systemd[1]: run-netns-cni\x2d6fd44996\x2d6c11\x2da19f\x2dc0de\x2d29160a2af598.mount: Deactivated successfully.
Mar 17 17:53:22.728098 kubelet[2671]: E0317 17:53:22.727996 2671 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 17:53:22.728288 containerd[1488]: time="2025-03-17T17:53:22.728223274Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-48wgq,Uid:61c49c7a-fa99-4721-add6-729b5d3cd3c9,Namespace:kube-system,Attempt:3,}"
Mar 17 17:53:22.728823 kubelet[2671]: I0317 17:53:22.728786 2671 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cbfcdfa185d47daba995028114994971b84306063bc1d18e03db01de2f340836"
Mar 17 17:53:22.729438 containerd[1488]: time="2025-03-17T17:53:22.729147601Z" level=info msg="StopPodSandbox for \"cbfcdfa185d47daba995028114994971b84306063bc1d18e03db01de2f340836\""
Mar 17 17:53:23.757566 containerd[1488]: time="2025-03-17T17:53:23.755420771Z" level=info msg="Ensure that sandbox cbfcdfa185d47daba995028114994971b84306063bc1d18e03db01de2f340836 in task-service has been cleanup successfully"
Mar 17 17:53:23.757566 containerd[1488]: time="2025-03-17T17:53:23.757133939Z" level=info msg="TearDown network for sandbox \"cbfcdfa185d47daba995028114994971b84306063bc1d18e03db01de2f340836\" successfully"
Mar 17 17:53:23.757566 containerd[1488]: time="2025-03-17T17:53:23.757154858Z" level=info msg="StopPodSandbox for \"cbfcdfa185d47daba995028114994971b84306063bc1d18e03db01de2f340836\" returns successfully"
Mar 17 17:53:23.760243 systemd[1]: run-netns-cni\x2d484ccc45\x2d810f\x2d3fcf\x2dc126\x2da3e32bd14c59.mount: Deactivated successfully.
Mar 17 17:53:23.767506 containerd[1488]: time="2025-03-17T17:53:23.767426311Z" level=info msg="Ensure that sandbox 5b1fcdeab2ed9d45495104daee6d6b97bdd93dbb8b9f3495cf0599df51059be3 in task-service has been cleanup successfully" Mar 17 17:53:23.769305 containerd[1488]: time="2025-03-17T17:53:23.769177290Z" level=info msg="StopPodSandbox for \"176cab8d16eb48ec4a1ad772e737215f15a45779cf10e66d9d17ae04d002535e\"" Mar 17 17:53:23.770395 containerd[1488]: time="2025-03-17T17:53:23.769988694Z" level=info msg="TearDown network for sandbox \"5b1fcdeab2ed9d45495104daee6d6b97bdd93dbb8b9f3495cf0599df51059be3\" successfully" Mar 17 17:53:23.770395 containerd[1488]: time="2025-03-17T17:53:23.770026415Z" level=info msg="StopPodSandbox for \"5b1fcdeab2ed9d45495104daee6d6b97bdd93dbb8b9f3495cf0599df51059be3\" returns successfully" Mar 17 17:53:23.772244 containerd[1488]: time="2025-03-17T17:53:23.772148351Z" level=info msg="TearDown network for sandbox \"176cab8d16eb48ec4a1ad772e737215f15a45779cf10e66d9d17ae04d002535e\" successfully" Mar 17 17:53:23.772244 containerd[1488]: time="2025-03-17T17:53:23.772228341Z" level=info msg="StopPodSandbox for \"176cab8d16eb48ec4a1ad772e737215f15a45779cf10e66d9d17ae04d002535e\" returns successfully" Mar 17 17:53:23.773460 systemd[1]: run-netns-cni\x2d13d4c3d0\x2dcd20\x2de959\x2df5b2\x2d9f69fc38c5be.mount: Deactivated successfully. 
Mar 17 17:53:23.775667 containerd[1488]: time="2025-03-17T17:53:23.775219679Z" level=info msg="StopPodSandbox for \"b18aaf4dc1e6bac2d1f261a436d415aa7e4affae8e761cea7a281aa750d3614f\"" Mar 17 17:53:23.776480 containerd[1488]: time="2025-03-17T17:53:23.775384920Z" level=info msg="TearDown network for sandbox \"b18aaf4dc1e6bac2d1f261a436d415aa7e4affae8e761cea7a281aa750d3614f\" successfully" Mar 17 17:53:23.776480 containerd[1488]: time="2025-03-17T17:53:23.776419012Z" level=info msg="StopPodSandbox for \"b18aaf4dc1e6bac2d1f261a436d415aa7e4affae8e761cea7a281aa750d3614f\" returns successfully" Mar 17 17:53:23.777346 containerd[1488]: time="2025-03-17T17:53:23.777293725Z" level=info msg="StopPodSandbox for \"c1034c3eebf7eb83c1c802440d743441949ce78d89de03e10162996b5658f0f8\"" Mar 17 17:53:23.777941 containerd[1488]: time="2025-03-17T17:53:23.777494863Z" level=info msg="TearDown network for sandbox \"c1034c3eebf7eb83c1c802440d743441949ce78d89de03e10162996b5658f0f8\" successfully" Mar 17 17:53:23.777941 containerd[1488]: time="2025-03-17T17:53:23.777513318Z" level=info msg="StopPodSandbox for \"c1034c3eebf7eb83c1c802440d743441949ce78d89de03e10162996b5658f0f8\" returns successfully" Mar 17 17:53:23.778548 containerd[1488]: time="2025-03-17T17:53:23.778150565Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-69489b6559-jrsd6,Uid:bd182690-c4b5-49c8-a36b-6df7c7b0c5b6,Namespace:calico-apiserver,Attempt:3,}" Mar 17 17:53:23.784490 containerd[1488]: time="2025-03-17T17:53:23.784421694Z" level=info msg="StopPodSandbox for \"5962b8587e49b332a9082bc85f0dfd8758737d6294c296fdd2975f2284c3de54\"" Mar 17 17:53:23.784664 containerd[1488]: time="2025-03-17T17:53:23.784598246Z" level=info msg="TearDown network for sandbox \"5962b8587e49b332a9082bc85f0dfd8758737d6294c296fdd2975f2284c3de54\" successfully" Mar 17 17:53:23.784664 containerd[1488]: time="2025-03-17T17:53:23.784660603Z" level=info msg="StopPodSandbox for 
\"5962b8587e49b332a9082bc85f0dfd8758737d6294c296fdd2975f2284c3de54\" returns successfully" Mar 17 17:53:23.787075 containerd[1488]: time="2025-03-17T17:53:23.787035594Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-766ddbd77b-gscbd,Uid:ecb46531-5f4a-4235-a229-80c727efd6de,Namespace:calico-system,Attempt:3,}" Mar 17 17:53:24.063192 containerd[1488]: time="2025-03-17T17:53:24.063046346Z" level=error msg="Failed to destroy network for sandbox \"aaec777f840a59eec027949e0f6cad120ff347386dd452f4fbbe9e66ad082614\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:53:24.063525 containerd[1488]: time="2025-03-17T17:53:24.063453530Z" level=error msg="encountered an error cleaning up failed sandbox \"aaec777f840a59eec027949e0f6cad120ff347386dd452f4fbbe9e66ad082614\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:53:24.063525 containerd[1488]: time="2025-03-17T17:53:24.063512320Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-766ddbd77b-gscbd,Uid:ecb46531-5f4a-4235-a229-80c727efd6de,Namespace:calico-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"aaec777f840a59eec027949e0f6cad120ff347386dd452f4fbbe9e66ad082614\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:53:24.064125 kubelet[2671]: E0317 17:53:24.063792 2671 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"aaec777f840a59eec027949e0f6cad120ff347386dd452f4fbbe9e66ad082614\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:53:24.064125 kubelet[2671]: E0317 17:53:24.063855 2671 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"aaec777f840a59eec027949e0f6cad120ff347386dd452f4fbbe9e66ad082614\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-766ddbd77b-gscbd" Mar 17 17:53:24.064125 kubelet[2671]: E0317 17:53:24.063878 2671 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"aaec777f840a59eec027949e0f6cad120ff347386dd452f4fbbe9e66ad082614\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-766ddbd77b-gscbd" Mar 17 17:53:24.064510 kubelet[2671]: E0317 17:53:24.063914 2671 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-766ddbd77b-gscbd_calico-system(ecb46531-5f4a-4235-a229-80c727efd6de)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-766ddbd77b-gscbd_calico-system(ecb46531-5f4a-4235-a229-80c727efd6de)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"aaec777f840a59eec027949e0f6cad120ff347386dd452f4fbbe9e66ad082614\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/\"" pod="calico-system/calico-kube-controllers-766ddbd77b-gscbd" podUID="ecb46531-5f4a-4235-a229-80c727efd6de" Mar 17 17:53:24.071395 containerd[1488]: time="2025-03-17T17:53:24.071332368Z" level=error msg="Failed to destroy network for sandbox \"623e64fb9e5e6f41bea8c6f12eb691e03c1dd51f85d1f439c4e6c0d3dfcaaea6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:53:24.071797 containerd[1488]: time="2025-03-17T17:53:24.071770731Z" level=error msg="encountered an error cleaning up failed sandbox \"623e64fb9e5e6f41bea8c6f12eb691e03c1dd51f85d1f439c4e6c0d3dfcaaea6\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:53:24.071864 containerd[1488]: time="2025-03-17T17:53:24.071842055Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-qsx6f,Uid:0a462e4e-6356-4301-958a-de536414d963,Namespace:calico-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"623e64fb9e5e6f41bea8c6f12eb691e03c1dd51f85d1f439c4e6c0d3dfcaaea6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:53:24.072084 kubelet[2671]: E0317 17:53:24.072032 2671 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"623e64fb9e5e6f41bea8c6f12eb691e03c1dd51f85d1f439c4e6c0d3dfcaaea6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:53:24.072134 kubelet[2671]: 
E0317 17:53:24.072102 2671 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"623e64fb9e5e6f41bea8c6f12eb691e03c1dd51f85d1f439c4e6c0d3dfcaaea6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-qsx6f" Mar 17 17:53:24.072134 kubelet[2671]: E0317 17:53:24.072122 2671 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"623e64fb9e5e6f41bea8c6f12eb691e03c1dd51f85d1f439c4e6c0d3dfcaaea6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-qsx6f" Mar 17 17:53:24.072180 kubelet[2671]: E0317 17:53:24.072161 2671 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-qsx6f_calico-system(0a462e4e-6356-4301-958a-de536414d963)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-qsx6f_calico-system(0a462e4e-6356-4301-958a-de536414d963)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"623e64fb9e5e6f41bea8c6f12eb691e03c1dd51f85d1f439c4e6c0d3dfcaaea6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-qsx6f" podUID="0a462e4e-6356-4301-958a-de536414d963" Mar 17 17:53:24.094696 containerd[1488]: time="2025-03-17T17:53:24.094568358Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:53:24.095604 containerd[1488]: 
time="2025-03-17T17:53:24.095566162Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.2: active requests=0, bytes read=142241445" Mar 17 17:53:24.099270 containerd[1488]: time="2025-03-17T17:53:24.099214164Z" level=info msg="ImageCreate event name:\"sha256:048bf7af1f8c697d151dbecc478a18e89d89ed8627da98e17a56c11b3d45d351\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:53:24.105125 containerd[1488]: time="2025-03-17T17:53:24.103840613Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:d9a21be37fe591ee5ab5a2e3dc26408ea165a44a55705102ffaa002de9908b32\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:53:24.105125 containerd[1488]: time="2025-03-17T17:53:24.104670912Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.2\" with image id \"sha256:048bf7af1f8c697d151dbecc478a18e89d89ed8627da98e17a56c11b3d45d351\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.2\", repo digest \"ghcr.io/flatcar/calico/node@sha256:d9a21be37fe591ee5ab5a2e3dc26408ea165a44a55705102ffaa002de9908b32\", size \"142241307\" in 7.439868596s" Mar 17 17:53:24.105125 containerd[1488]: time="2025-03-17T17:53:24.104701299Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.2\" returns image reference \"sha256:048bf7af1f8c697d151dbecc478a18e89d89ed8627da98e17a56c11b3d45d351\"" Mar 17 17:53:24.119091 containerd[1488]: time="2025-03-17T17:53:24.119046752Z" level=info msg="CreateContainer within sandbox \"a4e23fbcea17df303750b5d951fef75615f384e898ca681e3f64feb3bb21cf82\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Mar 17 17:53:24.142026 containerd[1488]: time="2025-03-17T17:53:24.141869186Z" level=info msg="CreateContainer within sandbox \"a4e23fbcea17df303750b5d951fef75615f384e898ca681e3f64feb3bb21cf82\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"2de8c75da8ecb6aa3c6a0986d9107694470ec64723a2fdf5807ae94f89a266ab\"" Mar 17 17:53:24.143189 
containerd[1488]: time="2025-03-17T17:53:24.143164519Z" level=info msg="StartContainer for \"2de8c75da8ecb6aa3c6a0986d9107694470ec64723a2fdf5807ae94f89a266ab\"" Mar 17 17:53:24.177172 containerd[1488]: time="2025-03-17T17:53:24.177092280Z" level=error msg="Failed to destroy network for sandbox \"f6ade871d04fe30430528350a31e805e23bad6295918256ec7cc4923b4a642d7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:53:24.177715 containerd[1488]: time="2025-03-17T17:53:24.177670645Z" level=error msg="encountered an error cleaning up failed sandbox \"f6ade871d04fe30430528350a31e805e23bad6295918256ec7cc4923b4a642d7\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:53:24.177867 containerd[1488]: time="2025-03-17T17:53:24.177806722Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-69489b6559-jrsd6,Uid:bd182690-c4b5-49c8-a36b-6df7c7b0c5b6,Namespace:calico-apiserver,Attempt:3,} failed, error" error="failed to setup network for sandbox \"f6ade871d04fe30430528350a31e805e23bad6295918256ec7cc4923b4a642d7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:53:24.178188 kubelet[2671]: E0317 17:53:24.178147 2671 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f6ade871d04fe30430528350a31e805e23bad6295918256ec7cc4923b4a642d7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" Mar 17 17:53:24.178330 kubelet[2671]: E0317 17:53:24.178308 2671 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f6ade871d04fe30430528350a31e805e23bad6295918256ec7cc4923b4a642d7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-69489b6559-jrsd6" Mar 17 17:53:24.178423 kubelet[2671]: E0317 17:53:24.178404 2671 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f6ade871d04fe30430528350a31e805e23bad6295918256ec7cc4923b4a642d7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-69489b6559-jrsd6" Mar 17 17:53:24.178577 kubelet[2671]: E0317 17:53:24.178534 2671 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-69489b6559-jrsd6_calico-apiserver(bd182690-c4b5-49c8-a36b-6df7c7b0c5b6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-69489b6559-jrsd6_calico-apiserver(bd182690-c4b5-49c8-a36b-6df7c7b0c5b6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f6ade871d04fe30430528350a31e805e23bad6295918256ec7cc4923b4a642d7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-69489b6559-jrsd6" podUID="bd182690-c4b5-49c8-a36b-6df7c7b0c5b6" Mar 17 17:53:24.179350 containerd[1488]: time="2025-03-17T17:53:24.179278605Z" level=error msg="Failed to destroy network for 
sandbox \"2cc9b00535c46d082be3a47e288cfcb08bce63a018f869868b0b96e94a1250a8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:53:24.180419 containerd[1488]: time="2025-03-17T17:53:24.180369705Z" level=error msg="encountered an error cleaning up failed sandbox \"2cc9b00535c46d082be3a47e288cfcb08bce63a018f869868b0b96e94a1250a8\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:53:24.180595 containerd[1488]: time="2025-03-17T17:53:24.180556225Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-69489b6559-dszht,Uid:1c0a54b3-f431-4a56-aa22-15462ea1a4c4,Namespace:calico-apiserver,Attempt:3,} failed, error" error="failed to setup network for sandbox \"2cc9b00535c46d082be3a47e288cfcb08bce63a018f869868b0b96e94a1250a8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:53:24.181021 kubelet[2671]: E0317 17:53:24.180854 2671 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2cc9b00535c46d082be3a47e288cfcb08bce63a018f869868b0b96e94a1250a8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:53:24.181021 kubelet[2671]: E0317 17:53:24.180893 2671 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2cc9b00535c46d082be3a47e288cfcb08bce63a018f869868b0b96e94a1250a8\": plugin 
type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-69489b6559-dszht" Mar 17 17:53:24.181021 kubelet[2671]: E0317 17:53:24.180916 2671 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2cc9b00535c46d082be3a47e288cfcb08bce63a018f869868b0b96e94a1250a8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-69489b6559-dszht" Mar 17 17:53:24.181159 kubelet[2671]: E0317 17:53:24.180960 2671 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-69489b6559-dszht_calico-apiserver(1c0a54b3-f431-4a56-aa22-15462ea1a4c4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-69489b6559-dszht_calico-apiserver(1c0a54b3-f431-4a56-aa22-15462ea1a4c4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2cc9b00535c46d082be3a47e288cfcb08bce63a018f869868b0b96e94a1250a8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-69489b6559-dszht" podUID="1c0a54b3-f431-4a56-aa22-15462ea1a4c4" Mar 17 17:53:24.184902 containerd[1488]: time="2025-03-17T17:53:24.184726999Z" level=error msg="Failed to destroy network for sandbox \"ecc718cef2e091b545b9384d23c03df742d33491f0f30bae0c5806eb2f85d02a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:53:24.185168 
containerd[1488]: time="2025-03-17T17:53:24.185138582Z" level=error msg="encountered an error cleaning up failed sandbox \"ecc718cef2e091b545b9384d23c03df742d33491f0f30bae0c5806eb2f85d02a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:53:24.185252 containerd[1488]: time="2025-03-17T17:53:24.185208503Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-48wgq,Uid:61c49c7a-fa99-4721-add6-729b5d3cd3c9,Namespace:kube-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"ecc718cef2e091b545b9384d23c03df742d33491f0f30bae0c5806eb2f85d02a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:53:24.185421 kubelet[2671]: E0317 17:53:24.185395 2671 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ecc718cef2e091b545b9384d23c03df742d33491f0f30bae0c5806eb2f85d02a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:53:24.185455 kubelet[2671]: E0317 17:53:24.185439 2671 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ecc718cef2e091b545b9384d23c03df742d33491f0f30bae0c5806eb2f85d02a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-48wgq" Mar 17 17:53:24.185479 kubelet[2671]: E0317 17:53:24.185460 2671 
kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ecc718cef2e091b545b9384d23c03df742d33491f0f30bae0c5806eb2f85d02a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-48wgq" Mar 17 17:53:24.185528 kubelet[2671]: E0317 17:53:24.185500 2671 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-48wgq_kube-system(61c49c7a-fa99-4721-add6-729b5d3cd3c9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-48wgq_kube-system(61c49c7a-fa99-4721-add6-729b5d3cd3c9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ecc718cef2e091b545b9384d23c03df742d33491f0f30bae0c5806eb2f85d02a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-48wgq" podUID="61c49c7a-fa99-4721-add6-729b5d3cd3c9" Mar 17 17:53:24.194933 containerd[1488]: time="2025-03-17T17:53:24.194886099Z" level=error msg="Failed to destroy network for sandbox \"addc4388aaecc53e4027b4317abc812f9b90d7924ae049e5363a6f08b86b9e26\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:53:24.195332 containerd[1488]: time="2025-03-17T17:53:24.195290959Z" level=error msg="encountered an error cleaning up failed sandbox \"addc4388aaecc53e4027b4317abc812f9b90d7924ae049e5363a6f08b86b9e26\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:53:24.195368 containerd[1488]: time="2025-03-17T17:53:24.195352365Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-z8gdg,Uid:a4788825-63ac-4824-9f76-78d12b4feedf,Namespace:kube-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"addc4388aaecc53e4027b4317abc812f9b90d7924ae049e5363a6f08b86b9e26\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:53:24.195618 kubelet[2671]: E0317 17:53:24.195568 2671 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"addc4388aaecc53e4027b4317abc812f9b90d7924ae049e5363a6f08b86b9e26\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:53:24.195706 kubelet[2671]: E0317 17:53:24.195627 2671 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"addc4388aaecc53e4027b4317abc812f9b90d7924ae049e5363a6f08b86b9e26\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-z8gdg" Mar 17 17:53:24.195706 kubelet[2671]: E0317 17:53:24.195650 2671 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"addc4388aaecc53e4027b4317abc812f9b90d7924ae049e5363a6f08b86b9e26\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-z8gdg" Mar 17 17:53:24.195706 kubelet[2671]: E0317 17:53:24.195694 2671 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-z8gdg_kube-system(a4788825-63ac-4824-9f76-78d12b4feedf)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-z8gdg_kube-system(a4788825-63ac-4824-9f76-78d12b4feedf)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"addc4388aaecc53e4027b4317abc812f9b90d7924ae049e5363a6f08b86b9e26\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-z8gdg" podUID="a4788825-63ac-4824-9f76-78d12b4feedf" Mar 17 17:53:24.237072 systemd[1]: Started cri-containerd-2de8c75da8ecb6aa3c6a0986d9107694470ec64723a2fdf5807ae94f89a266ab.scope - libcontainer container 2de8c75da8ecb6aa3c6a0986d9107694470ec64723a2fdf5807ae94f89a266ab. Mar 17 17:53:24.323254 containerd[1488]: time="2025-03-17T17:53:24.323108368Z" level=info msg="StartContainer for \"2de8c75da8ecb6aa3c6a0986d9107694470ec64723a2fdf5807ae94f89a266ab\" returns successfully" Mar 17 17:53:24.357210 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Mar 17 17:53:24.357363 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Mar 17 17:53:24.761946 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-623e64fb9e5e6f41bea8c6f12eb691e03c1dd51f85d1f439c4e6c0d3dfcaaea6-shm.mount: Deactivated successfully. 
Mar 17 17:53:24.763736 kubelet[2671]: I0317 17:53:24.762083 2671 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="aaec777f840a59eec027949e0f6cad120ff347386dd452f4fbbe9e66ad082614" Mar 17 17:53:24.763843 containerd[1488]: time="2025-03-17T17:53:24.763460240Z" level=info msg="StopPodSandbox for \"aaec777f840a59eec027949e0f6cad120ff347386dd452f4fbbe9e66ad082614\"" Mar 17 17:53:24.763843 containerd[1488]: time="2025-03-17T17:53:24.763696404Z" level=info msg="Ensure that sandbox aaec777f840a59eec027949e0f6cad120ff347386dd452f4fbbe9e66ad082614 in task-service has been cleanup successfully" Mar 17 17:53:24.762058 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-aaec777f840a59eec027949e0f6cad120ff347386dd452f4fbbe9e66ad082614-shm.mount: Deactivated successfully. Mar 17 17:53:24.764165 containerd[1488]: time="2025-03-17T17:53:24.764014081Z" level=info msg="TearDown network for sandbox \"aaec777f840a59eec027949e0f6cad120ff347386dd452f4fbbe9e66ad082614\" successfully" Mar 17 17:53:24.764165 containerd[1488]: time="2025-03-17T17:53:24.764030341Z" level=info msg="StopPodSandbox for \"aaec777f840a59eec027949e0f6cad120ff347386dd452f4fbbe9e66ad082614\" returns successfully" Mar 17 17:53:24.762131 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2708157942.mount: Deactivated successfully. 
Mar 17 17:53:24.764398 containerd[1488]: time="2025-03-17T17:53:24.764370300Z" level=info msg="StopPodSandbox for \"cbfcdfa185d47daba995028114994971b84306063bc1d18e03db01de2f340836\"" Mar 17 17:53:24.764482 containerd[1488]: time="2025-03-17T17:53:24.764453145Z" level=info msg="TearDown network for sandbox \"cbfcdfa185d47daba995028114994971b84306063bc1d18e03db01de2f340836\" successfully" Mar 17 17:53:24.764482 containerd[1488]: time="2025-03-17T17:53:24.764469246Z" level=info msg="StopPodSandbox for \"cbfcdfa185d47daba995028114994971b84306063bc1d18e03db01de2f340836\" returns successfully" Mar 17 17:53:24.764960 containerd[1488]: time="2025-03-17T17:53:24.764928037Z" level=info msg="StopPodSandbox for \"176cab8d16eb48ec4a1ad772e737215f15a45779cf10e66d9d17ae04d002535e\"" Mar 17 17:53:24.765055 containerd[1488]: time="2025-03-17T17:53:24.765026803Z" level=info msg="TearDown network for sandbox \"176cab8d16eb48ec4a1ad772e737215f15a45779cf10e66d9d17ae04d002535e\" successfully" Mar 17 17:53:24.765055 containerd[1488]: time="2025-03-17T17:53:24.765049806Z" level=info msg="StopPodSandbox for \"176cab8d16eb48ec4a1ad772e737215f15a45779cf10e66d9d17ae04d002535e\" returns successfully" Mar 17 17:53:24.766849 systemd[1]: run-netns-cni\x2dc9f8c44c\x2ddfed\x2d19ce\x2d56f5\x2d716619141e1a.mount: Deactivated successfully. 
Mar 17 17:53:24.767028 kubelet[2671]: E0317 17:53:24.767011 2671 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:53:24.767699 containerd[1488]: time="2025-03-17T17:53:24.767244889Z" level=info msg="StopPodSandbox for \"5962b8587e49b332a9082bc85f0dfd8758737d6294c296fdd2975f2284c3de54\"" Mar 17 17:53:24.767699 containerd[1488]: time="2025-03-17T17:53:24.767365225Z" level=info msg="TearDown network for sandbox \"5962b8587e49b332a9082bc85f0dfd8758737d6294c296fdd2975f2284c3de54\" successfully" Mar 17 17:53:24.767699 containerd[1488]: time="2025-03-17T17:53:24.767379281Z" level=info msg="StopPodSandbox for \"5962b8587e49b332a9082bc85f0dfd8758737d6294c296fdd2975f2284c3de54\" returns successfully" Mar 17 17:53:24.769302 containerd[1488]: time="2025-03-17T17:53:24.768963176Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-766ddbd77b-gscbd,Uid:ecb46531-5f4a-4235-a229-80c727efd6de,Namespace:calico-system,Attempt:4,}" Mar 17 17:53:24.770342 kubelet[2671]: I0317 17:53:24.770318 2671 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="623e64fb9e5e6f41bea8c6f12eb691e03c1dd51f85d1f439c4e6c0d3dfcaaea6" Mar 17 17:53:24.771006 containerd[1488]: time="2025-03-17T17:53:24.770833449Z" level=info msg="StopPodSandbox for \"623e64fb9e5e6f41bea8c6f12eb691e03c1dd51f85d1f439c4e6c0d3dfcaaea6\"" Mar 17 17:53:24.771062 containerd[1488]: time="2025-03-17T17:53:24.771050546Z" level=info msg="Ensure that sandbox 623e64fb9e5e6f41bea8c6f12eb691e03c1dd51f85d1f439c4e6c0d3dfcaaea6 in task-service has been cleanup successfully" Mar 17 17:53:24.771606 containerd[1488]: time="2025-03-17T17:53:24.771459154Z" level=info msg="TearDown network for sandbox \"623e64fb9e5e6f41bea8c6f12eb691e03c1dd51f85d1f439c4e6c0d3dfcaaea6\" successfully" Mar 17 17:53:24.771606 containerd[1488]: 
time="2025-03-17T17:53:24.771482388Z" level=info msg="StopPodSandbox for \"623e64fb9e5e6f41bea8c6f12eb691e03c1dd51f85d1f439c4e6c0d3dfcaaea6\" returns successfully" Mar 17 17:53:24.772352 containerd[1488]: time="2025-03-17T17:53:24.772219812Z" level=info msg="StopPodSandbox for \"63613426308ea68474b35acdb7e40f20b69558a0485f098231782a66c337bfc3\"" Mar 17 17:53:24.773350 containerd[1488]: time="2025-03-17T17:53:24.772313137Z" level=info msg="TearDown network for sandbox \"63613426308ea68474b35acdb7e40f20b69558a0485f098231782a66c337bfc3\" successfully" Mar 17 17:53:24.773402 containerd[1488]: time="2025-03-17T17:53:24.773347219Z" level=info msg="StopPodSandbox for \"63613426308ea68474b35acdb7e40f20b69558a0485f098231782a66c337bfc3\" returns successfully" Mar 17 17:53:24.773780 containerd[1488]: time="2025-03-17T17:53:24.773727735Z" level=info msg="StopPodSandbox for \"57f44e3d7d20aa335e973a8243ab91cb12b6ece161a16a83acbf9d841fd3ff52\"" Mar 17 17:53:24.773965 containerd[1488]: time="2025-03-17T17:53:24.773854583Z" level=info msg="TearDown network for sandbox \"57f44e3d7d20aa335e973a8243ab91cb12b6ece161a16a83acbf9d841fd3ff52\" successfully" Mar 17 17:53:24.773965 containerd[1488]: time="2025-03-17T17:53:24.773871635Z" level=info msg="StopPodSandbox for \"57f44e3d7d20aa335e973a8243ab91cb12b6ece161a16a83acbf9d841fd3ff52\" returns successfully" Mar 17 17:53:24.773905 systemd[1]: run-netns-cni\x2dd37bc2e7\x2d4467\x2dfd39\x2db80f\x2d4cfdda67b54a.mount: Deactivated successfully. 
Mar 17 17:53:24.774274 containerd[1488]: time="2025-03-17T17:53:24.774161259Z" level=info msg="StopPodSandbox for \"9b1baf90cf663bc9dfb45347bae6405dc471fcfba01fa47176b720b95c1e4f16\"" Mar 17 17:53:24.774274 containerd[1488]: time="2025-03-17T17:53:24.774256517Z" level=info msg="TearDown network for sandbox \"9b1baf90cf663bc9dfb45347bae6405dc471fcfba01fa47176b720b95c1e4f16\" successfully" Mar 17 17:53:24.774274 containerd[1488]: time="2025-03-17T17:53:24.774270143Z" level=info msg="StopPodSandbox for \"9b1baf90cf663bc9dfb45347bae6405dc471fcfba01fa47176b720b95c1e4f16\" returns successfully" Mar 17 17:53:24.775131 containerd[1488]: time="2025-03-17T17:53:24.774830465Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-qsx6f,Uid:0a462e4e-6356-4301-958a-de536414d963,Namespace:calico-system,Attempt:4,}" Mar 17 17:53:24.775413 kubelet[2671]: I0317 17:53:24.775314 2671 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2cc9b00535c46d082be3a47e288cfcb08bce63a018f869868b0b96e94a1250a8" Mar 17 17:53:24.775942 containerd[1488]: time="2025-03-17T17:53:24.775898741Z" level=info msg="StopPodSandbox for \"2cc9b00535c46d082be3a47e288cfcb08bce63a018f869868b0b96e94a1250a8\"" Mar 17 17:53:24.776122 containerd[1488]: time="2025-03-17T17:53:24.776094600Z" level=info msg="Ensure that sandbox 2cc9b00535c46d082be3a47e288cfcb08bce63a018f869868b0b96e94a1250a8 in task-service has been cleanup successfully" Mar 17 17:53:24.776778 containerd[1488]: time="2025-03-17T17:53:24.776638191Z" level=info msg="TearDown network for sandbox \"2cc9b00535c46d082be3a47e288cfcb08bce63a018f869868b0b96e94a1250a8\" successfully" Mar 17 17:53:24.776778 containerd[1488]: time="2025-03-17T17:53:24.776660302Z" level=info msg="StopPodSandbox for \"2cc9b00535c46d082be3a47e288cfcb08bce63a018f869868b0b96e94a1250a8\" returns successfully" Mar 17 17:53:24.777431 containerd[1488]: time="2025-03-17T17:53:24.777229952Z" level=info msg="StopPodSandbox for 
\"50fa988d10b27c15b48a1ea2bec4930d2f20f85a00d0e3ced81e01ac385b6687\"" Mar 17 17:53:24.777431 containerd[1488]: time="2025-03-17T17:53:24.777365837Z" level=info msg="TearDown network for sandbox \"50fa988d10b27c15b48a1ea2bec4930d2f20f85a00d0e3ced81e01ac385b6687\" successfully" Mar 17 17:53:24.777431 containerd[1488]: time="2025-03-17T17:53:24.777376046Z" level=info msg="StopPodSandbox for \"50fa988d10b27c15b48a1ea2bec4930d2f20f85a00d0e3ced81e01ac385b6687\" returns successfully" Mar 17 17:53:24.777806 containerd[1488]: time="2025-03-17T17:53:24.777765658Z" level=info msg="StopPodSandbox for \"961747fb411bb6f89447f707684a2226650cafaa31636216d5776b0f926bbb52\"" Mar 17 17:53:24.777869 containerd[1488]: time="2025-03-17T17:53:24.777851390Z" level=info msg="TearDown network for sandbox \"961747fb411bb6f89447f707684a2226650cafaa31636216d5776b0f926bbb52\" successfully" Mar 17 17:53:24.777869 containerd[1488]: time="2025-03-17T17:53:24.777865596Z" level=info msg="StopPodSandbox for \"961747fb411bb6f89447f707684a2226650cafaa31636216d5776b0f926bbb52\" returns successfully" Mar 17 17:53:24.778264 containerd[1488]: time="2025-03-17T17:53:24.778241332Z" level=info msg="StopPodSandbox for \"d1b3fd7f85c932731e8c07457956860f6659034f3ef9af6a5a1aab299e19f071\"" Mar 17 17:53:24.778348 containerd[1488]: time="2025-03-17T17:53:24.778331110Z" level=info msg="TearDown network for sandbox \"d1b3fd7f85c932731e8c07457956860f6659034f3ef9af6a5a1aab299e19f071\" successfully" Mar 17 17:53:24.778399 containerd[1488]: time="2025-03-17T17:53:24.778347491Z" level=info msg="StopPodSandbox for \"d1b3fd7f85c932731e8c07457956860f6659034f3ef9af6a5a1aab299e19f071\" returns successfully" Mar 17 17:53:24.778627 kubelet[2671]: I0317 17:53:24.778610 2671 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f6ade871d04fe30430528350a31e805e23bad6295918256ec7cc4923b4a642d7" Mar 17 17:53:24.778853 systemd[1]: run-netns-cni\x2de68894c8\x2db577\x2dfef8\x2d403b\x2d7142fc01a489.mount: 
Deactivated successfully. Mar 17 17:53:24.779043 containerd[1488]: time="2025-03-17T17:53:24.779010396Z" level=info msg="StopPodSandbox for \"f6ade871d04fe30430528350a31e805e23bad6295918256ec7cc4923b4a642d7\"" Mar 17 17:53:24.779237 containerd[1488]: time="2025-03-17T17:53:24.779198760Z" level=info msg="Ensure that sandbox f6ade871d04fe30430528350a31e805e23bad6295918256ec7cc4923b4a642d7 in task-service has been cleanup successfully" Mar 17 17:53:24.779472 containerd[1488]: time="2025-03-17T17:53:24.779441196Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-69489b6559-dszht,Uid:1c0a54b3-f431-4a56-aa22-15462ea1a4c4,Namespace:calico-apiserver,Attempt:4,}" Mar 17 17:53:24.780524 containerd[1488]: time="2025-03-17T17:53:24.780494173Z" level=info msg="TearDown network for sandbox \"f6ade871d04fe30430528350a31e805e23bad6295918256ec7cc4923b4a642d7\" successfully" Mar 17 17:53:24.780524 containerd[1488]: time="2025-03-17T17:53:24.780518368Z" level=info msg="StopPodSandbox for \"f6ade871d04fe30430528350a31e805e23bad6295918256ec7cc4923b4a642d7\" returns successfully" Mar 17 17:53:24.780838 containerd[1488]: time="2025-03-17T17:53:24.780809705Z" level=info msg="StopPodSandbox for \"5b1fcdeab2ed9d45495104daee6d6b97bdd93dbb8b9f3495cf0599df51059be3\"" Mar 17 17:53:24.780927 containerd[1488]: time="2025-03-17T17:53:24.780902880Z" level=info msg="TearDown network for sandbox \"5b1fcdeab2ed9d45495104daee6d6b97bdd93dbb8b9f3495cf0599df51059be3\" successfully" Mar 17 17:53:24.780927 containerd[1488]: time="2025-03-17T17:53:24.780922777Z" level=info msg="StopPodSandbox for \"5b1fcdeab2ed9d45495104daee6d6b97bdd93dbb8b9f3495cf0599df51059be3\" returns successfully" Mar 17 17:53:24.781160 containerd[1488]: time="2025-03-17T17:53:24.781117914Z" level=info msg="StopPodSandbox for \"b18aaf4dc1e6bac2d1f261a436d415aa7e4affae8e761cea7a281aa750d3614f\"" Mar 17 17:53:24.781266 containerd[1488]: time="2025-03-17T17:53:24.781230726Z" level=info msg="TearDown network for 
sandbox \"b18aaf4dc1e6bac2d1f261a436d415aa7e4affae8e761cea7a281aa750d3614f\" successfully" Mar 17 17:53:24.781266 containerd[1488]: time="2025-03-17T17:53:24.781243229Z" level=info msg="StopPodSandbox for \"b18aaf4dc1e6bac2d1f261a436d415aa7e4affae8e761cea7a281aa750d3614f\" returns successfully" Mar 17 17:53:24.781534 containerd[1488]: time="2025-03-17T17:53:24.781513958Z" level=info msg="StopPodSandbox for \"c1034c3eebf7eb83c1c802440d743441949ce78d89de03e10162996b5658f0f8\"" Mar 17 17:53:24.781657 containerd[1488]: time="2025-03-17T17:53:24.781641268Z" level=info msg="TearDown network for sandbox \"c1034c3eebf7eb83c1c802440d743441949ce78d89de03e10162996b5658f0f8\" successfully" Mar 17 17:53:24.781694 containerd[1488]: time="2025-03-17T17:53:24.781656707Z" level=info msg="StopPodSandbox for \"c1034c3eebf7eb83c1c802440d743441949ce78d89de03e10162996b5658f0f8\" returns successfully" Mar 17 17:53:24.782201 systemd[1]: run-netns-cni\x2d072af5ae\x2dfc0b\x2da200\x2d2431\x2d3741871978aa.mount: Deactivated successfully. 
Mar 17 17:53:24.782779 containerd[1488]: time="2025-03-17T17:53:24.782732787Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-69489b6559-jrsd6,Uid:bd182690-c4b5-49c8-a36b-6df7c7b0c5b6,Namespace:calico-apiserver,Attempt:4,}" Mar 17 17:53:24.783644 kubelet[2671]: I0317 17:53:24.783229 2671 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="addc4388aaecc53e4027b4317abc812f9b90d7924ae049e5363a6f08b86b9e26" Mar 17 17:53:24.783727 containerd[1488]: time="2025-03-17T17:53:24.783697669Z" level=info msg="StopPodSandbox for \"addc4388aaecc53e4027b4317abc812f9b90d7924ae049e5363a6f08b86b9e26\"" Mar 17 17:53:24.783945 containerd[1488]: time="2025-03-17T17:53:24.783904729Z" level=info msg="Ensure that sandbox addc4388aaecc53e4027b4317abc812f9b90d7924ae049e5363a6f08b86b9e26 in task-service has been cleanup successfully" Mar 17 17:53:24.784478 containerd[1488]: time="2025-03-17T17:53:24.784366906Z" level=info msg="TearDown network for sandbox \"addc4388aaecc53e4027b4317abc812f9b90d7924ae049e5363a6f08b86b9e26\" successfully" Mar 17 17:53:24.784478 containerd[1488]: time="2025-03-17T17:53:24.784407873Z" level=info msg="StopPodSandbox for \"addc4388aaecc53e4027b4317abc812f9b90d7924ae049e5363a6f08b86b9e26\" returns successfully" Mar 17 17:53:24.785962 containerd[1488]: time="2025-03-17T17:53:24.785897140Z" level=info msg="StopPodSandbox for \"f216292d57491cc6fc3659530ec469fedb94fdc844a69e71fc7d4ac5b63c4a04\"" Mar 17 17:53:24.786084 containerd[1488]: time="2025-03-17T17:53:24.786059004Z" level=info msg="TearDown network for sandbox \"f216292d57491cc6fc3659530ec469fedb94fdc844a69e71fc7d4ac5b63c4a04\" successfully" Mar 17 17:53:24.786084 containerd[1488]: time="2025-03-17T17:53:24.786080264Z" level=info msg="StopPodSandbox for \"f216292d57491cc6fc3659530ec469fedb94fdc844a69e71fc7d4ac5b63c4a04\" returns successfully" Mar 17 17:53:24.786910 containerd[1488]: time="2025-03-17T17:53:24.786885276Z" level=info msg="StopPodSandbox 
for \"13f8945e1ab3a958676fec549614b5e42e210cc72e620db9b796aef13f21942a\"" Mar 17 17:53:24.787005 containerd[1488]: time="2025-03-17T17:53:24.786980795Z" level=info msg="TearDown network for sandbox \"13f8945e1ab3a958676fec549614b5e42e210cc72e620db9b796aef13f21942a\" successfully" Mar 17 17:53:24.787005 containerd[1488]: time="2025-03-17T17:53:24.786997817Z" level=info msg="StopPodSandbox for \"13f8945e1ab3a958676fec549614b5e42e210cc72e620db9b796aef13f21942a\" returns successfully" Mar 17 17:53:24.787707 containerd[1488]: time="2025-03-17T17:53:24.787674118Z" level=info msg="StopPodSandbox for \"9cb7b7521a0edc0a9c7a9d2e5cd84a22265bcf5e96d3aec1ac00f3807c3638f5\"" Mar 17 17:53:24.788243 containerd[1488]: time="2025-03-17T17:53:24.788184336Z" level=info msg="TearDown network for sandbox \"9cb7b7521a0edc0a9c7a9d2e5cd84a22265bcf5e96d3aec1ac00f3807c3638f5\" successfully" Mar 17 17:53:24.788243 containerd[1488]: time="2025-03-17T17:53:24.788220333Z" level=info msg="StopPodSandbox for \"9cb7b7521a0edc0a9c7a9d2e5cd84a22265bcf5e96d3aec1ac00f3807c3638f5\" returns successfully" Mar 17 17:53:24.788530 kubelet[2671]: E0317 17:53:24.788499 2671 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:53:24.790310 containerd[1488]: time="2025-03-17T17:53:24.789930285Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-z8gdg,Uid:a4788825-63ac-4824-9f76-78d12b4feedf,Namespace:kube-system,Attempt:4,}" Mar 17 17:53:24.791258 kubelet[2671]: I0317 17:53:24.791117 2671 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-m8g62" podStartSLOduration=1.95890849 podStartE2EDuration="22.791098769s" podCreationTimestamp="2025-03-17 17:53:02 +0000 UTC" firstStartedPulling="2025-03-17 17:53:03.273414918 +0000 UTC m=+19.789420324" lastFinishedPulling="2025-03-17 17:53:24.105605197 +0000 UTC 
m=+40.621610603" observedRunningTime="2025-03-17 17:53:24.790000177 +0000 UTC m=+41.306005673" watchObservedRunningTime="2025-03-17 17:53:24.791098769 +0000 UTC m=+41.307104175" Mar 17 17:53:24.791397 kubelet[2671]: I0317 17:53:24.791299 2671 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ecc718cef2e091b545b9384d23c03df742d33491f0f30bae0c5806eb2f85d02a" Mar 17 17:53:24.791926 containerd[1488]: time="2025-03-17T17:53:24.791897099Z" level=info msg="StopPodSandbox for \"ecc718cef2e091b545b9384d23c03df742d33491f0f30bae0c5806eb2f85d02a\"" Mar 17 17:53:24.793642 containerd[1488]: time="2025-03-17T17:53:24.793605277Z" level=info msg="Ensure that sandbox ecc718cef2e091b545b9384d23c03df742d33491f0f30bae0c5806eb2f85d02a in task-service has been cleanup successfully" Mar 17 17:53:24.793880 containerd[1488]: time="2025-03-17T17:53:24.793837554Z" level=info msg="TearDown network for sandbox \"ecc718cef2e091b545b9384d23c03df742d33491f0f30bae0c5806eb2f85d02a\" successfully" Mar 17 17:53:24.793880 containerd[1488]: time="2025-03-17T17:53:24.793854846Z" level=info msg="StopPodSandbox for \"ecc718cef2e091b545b9384d23c03df742d33491f0f30bae0c5806eb2f85d02a\" returns successfully" Mar 17 17:53:24.794841 containerd[1488]: time="2025-03-17T17:53:24.794816071Z" level=info msg="StopPodSandbox for \"1d42c9f9298c30ac58c0af1beb3148eb9d10158f696d4c3025adf550cdae0287\"" Mar 17 17:53:24.795040 containerd[1488]: time="2025-03-17T17:53:24.795020315Z" level=info msg="TearDown network for sandbox \"1d42c9f9298c30ac58c0af1beb3148eb9d10158f696d4c3025adf550cdae0287\" successfully" Mar 17 17:53:24.795116 containerd[1488]: time="2025-03-17T17:53:24.795037848Z" level=info msg="StopPodSandbox for \"1d42c9f9298c30ac58c0af1beb3148eb9d10158f696d4c3025adf550cdae0287\" returns successfully" Mar 17 17:53:24.795430 containerd[1488]: time="2025-03-17T17:53:24.795405969Z" level=info msg="StopPodSandbox for \"5bf8d4c046883d5d0be09e30054996dffec7065148980daaf45a69a75939d8c6\"" Mar 
17 17:53:24.795517 containerd[1488]: time="2025-03-17T17:53:24.795499986Z" level=info msg="TearDown network for sandbox \"5bf8d4c046883d5d0be09e30054996dffec7065148980daaf45a69a75939d8c6\" successfully" Mar 17 17:53:24.795557 containerd[1488]: time="2025-03-17T17:53:24.795515795Z" level=info msg="StopPodSandbox for \"5bf8d4c046883d5d0be09e30054996dffec7065148980daaf45a69a75939d8c6\" returns successfully" Mar 17 17:53:24.795962 containerd[1488]: time="2025-03-17T17:53:24.795921407Z" level=info msg="StopPodSandbox for \"a84da3acd9cbe62971167bb8959e5f9495adbc898495c2ec84809002361e9cd9\"" Mar 17 17:53:24.796281 containerd[1488]: time="2025-03-17T17:53:24.796076078Z" level=info msg="TearDown network for sandbox \"a84da3acd9cbe62971167bb8959e5f9495adbc898495c2ec84809002361e9cd9\" successfully" Mar 17 17:53:24.796281 containerd[1488]: time="2025-03-17T17:53:24.796098520Z" level=info msg="StopPodSandbox for \"a84da3acd9cbe62971167bb8959e5f9495adbc898495c2ec84809002361e9cd9\" returns successfully" Mar 17 17:53:24.796504 kubelet[2671]: E0317 17:53:24.796480 2671 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:53:24.796888 containerd[1488]: time="2025-03-17T17:53:24.796863466Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-48wgq,Uid:61c49c7a-fa99-4721-add6-729b5d3cd3c9,Namespace:kube-system,Attempt:4,}" Mar 17 17:53:25.180470 systemd-networkd[1412]: cali17bdb97db6f: Link UP Mar 17 17:53:25.182247 systemd-networkd[1412]: cali17bdb97db6f: Gained carrier Mar 17 17:53:25.197354 containerd[1488]: 2025-03-17 17:53:24.987 [INFO][4486] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Mar 17 17:53:25.197354 containerd[1488]: 2025-03-17 17:53:25.007 [INFO][4486] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} 
{localhost-k8s-coredns--7db6d8ff4d--z8gdg-eth0 coredns-7db6d8ff4d- kube-system a4788825-63ac-4824-9f76-78d12b4feedf 759 0 2025-03-17 17:52:57 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7db6d8ff4d-z8gdg eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali17bdb97db6f [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="b9dc259b0ea138256ffc554b96e1884e672aa42407b18f87fcd3bb12421ef5de" Namespace="kube-system" Pod="coredns-7db6d8ff4d-z8gdg" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--z8gdg-" Mar 17 17:53:25.197354 containerd[1488]: 2025-03-17 17:53:25.007 [INFO][4486] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="b9dc259b0ea138256ffc554b96e1884e672aa42407b18f87fcd3bb12421ef5de" Namespace="kube-system" Pod="coredns-7db6d8ff4d-z8gdg" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--z8gdg-eth0" Mar 17 17:53:25.197354 containerd[1488]: 2025-03-17 17:53:25.068 [INFO][4546] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b9dc259b0ea138256ffc554b96e1884e672aa42407b18f87fcd3bb12421ef5de" HandleID="k8s-pod-network.b9dc259b0ea138256ffc554b96e1884e672aa42407b18f87fcd3bb12421ef5de" Workload="localhost-k8s-coredns--7db6d8ff4d--z8gdg-eth0" Mar 17 17:53:25.197354 containerd[1488]: 2025-03-17 17:53:25.093 [INFO][4546] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="b9dc259b0ea138256ffc554b96e1884e672aa42407b18f87fcd3bb12421ef5de" HandleID="k8s-pod-network.b9dc259b0ea138256ffc554b96e1884e672aa42407b18f87fcd3bb12421ef5de" Workload="localhost-k8s-coredns--7db6d8ff4d--z8gdg-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0004379b0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7db6d8ff4d-z8gdg", "timestamp":"2025-03-17 17:53:25.068040972 
+0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Mar 17 17:53:25.197354 containerd[1488]: 2025-03-17 17:53:25.093 [INFO][4546] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Mar 17 17:53:25.197354 containerd[1488]: 2025-03-17 17:53:25.093 [INFO][4546] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Mar 17 17:53:25.197354 containerd[1488]: 2025-03-17 17:53:25.093 [INFO][4546] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 17 17:53:25.197354 containerd[1488]: 2025-03-17 17:53:25.094 [INFO][4546] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.b9dc259b0ea138256ffc554b96e1884e672aa42407b18f87fcd3bb12421ef5de" host="localhost" Mar 17 17:53:25.197354 containerd[1488]: 2025-03-17 17:53:25.118 [INFO][4546] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Mar 17 17:53:25.197354 containerd[1488]: 2025-03-17 17:53:25.124 [INFO][4546] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Mar 17 17:53:25.197354 containerd[1488]: 2025-03-17 17:53:25.126 [INFO][4546] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 17 17:53:25.197354 containerd[1488]: 2025-03-17 17:53:25.128 [INFO][4546] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 17 17:53:25.197354 containerd[1488]: 2025-03-17 17:53:25.128 [INFO][4546] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.b9dc259b0ea138256ffc554b96e1884e672aa42407b18f87fcd3bb12421ef5de" host="localhost" Mar 17 17:53:25.197354 containerd[1488]: 2025-03-17 17:53:25.129 [INFO][4546] ipam/ipam.go 1685: Creating new handle: 
k8s-pod-network.b9dc259b0ea138256ffc554b96e1884e672aa42407b18f87fcd3bb12421ef5de Mar 17 17:53:25.197354 containerd[1488]: 2025-03-17 17:53:25.157 [INFO][4546] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.b9dc259b0ea138256ffc554b96e1884e672aa42407b18f87fcd3bb12421ef5de" host="localhost" Mar 17 17:53:25.197354 containerd[1488]: 2025-03-17 17:53:25.165 [INFO][4546] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.b9dc259b0ea138256ffc554b96e1884e672aa42407b18f87fcd3bb12421ef5de" host="localhost" Mar 17 17:53:25.197354 containerd[1488]: 2025-03-17 17:53:25.165 [INFO][4546] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.b9dc259b0ea138256ffc554b96e1884e672aa42407b18f87fcd3bb12421ef5de" host="localhost" Mar 17 17:53:25.197354 containerd[1488]: 2025-03-17 17:53:25.165 [INFO][4546] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Mar 17 17:53:25.197354 containerd[1488]: 2025-03-17 17:53:25.165 [INFO][4546] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="b9dc259b0ea138256ffc554b96e1884e672aa42407b18f87fcd3bb12421ef5de" HandleID="k8s-pod-network.b9dc259b0ea138256ffc554b96e1884e672aa42407b18f87fcd3bb12421ef5de" Workload="localhost-k8s-coredns--7db6d8ff4d--z8gdg-eth0" Mar 17 17:53:25.199192 containerd[1488]: 2025-03-17 17:53:25.168 [INFO][4486] cni-plugin/k8s.go 386: Populated endpoint ContainerID="b9dc259b0ea138256ffc554b96e1884e672aa42407b18f87fcd3bb12421ef5de" Namespace="kube-system" Pod="coredns-7db6d8ff4d-z8gdg" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--z8gdg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--z8gdg-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"a4788825-63ac-4824-9f76-78d12b4feedf", ResourceVersion:"759", Generation:0, CreationTimestamp:time.Date(2025, time.March, 17, 17, 52, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7db6d8ff4d-z8gdg", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali17bdb97db6f", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 17 17:53:25.199192 containerd[1488]: 2025-03-17 17:53:25.169 [INFO][4486] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.129/32] ContainerID="b9dc259b0ea138256ffc554b96e1884e672aa42407b18f87fcd3bb12421ef5de" Namespace="kube-system" Pod="coredns-7db6d8ff4d-z8gdg" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--z8gdg-eth0" Mar 17 17:53:25.199192 containerd[1488]: 2025-03-17 17:53:25.169 [INFO][4486] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali17bdb97db6f ContainerID="b9dc259b0ea138256ffc554b96e1884e672aa42407b18f87fcd3bb12421ef5de" Namespace="kube-system" Pod="coredns-7db6d8ff4d-z8gdg" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--z8gdg-eth0" Mar 17 17:53:25.199192 containerd[1488]: 2025-03-17 17:53:25.182 [INFO][4486] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b9dc259b0ea138256ffc554b96e1884e672aa42407b18f87fcd3bb12421ef5de" Namespace="kube-system" Pod="coredns-7db6d8ff4d-z8gdg" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--z8gdg-eth0" Mar 17 17:53:25.199192 containerd[1488]: 2025-03-17 17:53:25.182 [INFO][4486] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="b9dc259b0ea138256ffc554b96e1884e672aa42407b18f87fcd3bb12421ef5de" Namespace="kube-system" Pod="coredns-7db6d8ff4d-z8gdg" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--z8gdg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", 
APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--z8gdg-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"a4788825-63ac-4824-9f76-78d12b4feedf", ResourceVersion:"759", Generation:0, CreationTimestamp:time.Date(2025, time.March, 17, 17, 52, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"b9dc259b0ea138256ffc554b96e1884e672aa42407b18f87fcd3bb12421ef5de", Pod:"coredns-7db6d8ff4d-z8gdg", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali17bdb97db6f", MAC:"2a:83:98:bd:91:6e", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 17 17:53:25.199192 containerd[1488]: 2025-03-17 17:53:25.194 [INFO][4486] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="b9dc259b0ea138256ffc554b96e1884e672aa42407b18f87fcd3bb12421ef5de" Namespace="kube-system" 
Pod="coredns-7db6d8ff4d-z8gdg" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--z8gdg-eth0" Mar 17 17:53:25.205485 systemd-networkd[1412]: califffd09dbee2: Link UP Mar 17 17:53:25.206317 systemd-networkd[1412]: califffd09dbee2: Gained carrier Mar 17 17:53:25.225281 containerd[1488]: 2025-03-17 17:53:24.870 [INFO][4441] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Mar 17 17:53:25.225281 containerd[1488]: 2025-03-17 17:53:24.929 [INFO][4441] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--qsx6f-eth0 csi-node-driver- calico-system 0a462e4e-6356-4301-958a-de536414d963 616 0 2025-03-17 17:53:02 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:69ddf5d45d k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-qsx6f eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] califffd09dbee2 [] []}} ContainerID="5897afb16daeed0010d870af0ad8b2a6607cc41ba51539ba0ba848a5ccccc4c5" Namespace="calico-system" Pod="csi-node-driver-qsx6f" WorkloadEndpoint="localhost-k8s-csi--node--driver--qsx6f-" Mar 17 17:53:25.225281 containerd[1488]: 2025-03-17 17:53:24.929 [INFO][4441] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="5897afb16daeed0010d870af0ad8b2a6607cc41ba51539ba0ba848a5ccccc4c5" Namespace="calico-system" Pod="csi-node-driver-qsx6f" WorkloadEndpoint="localhost-k8s-csi--node--driver--qsx6f-eth0" Mar 17 17:53:25.225281 containerd[1488]: 2025-03-17 17:53:25.055 [INFO][4458] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5897afb16daeed0010d870af0ad8b2a6607cc41ba51539ba0ba848a5ccccc4c5" HandleID="k8s-pod-network.5897afb16daeed0010d870af0ad8b2a6607cc41ba51539ba0ba848a5ccccc4c5" 
Workload="localhost-k8s-csi--node--driver--qsx6f-eth0" Mar 17 17:53:25.225281 containerd[1488]: 2025-03-17 17:53:25.094 [INFO][4458] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="5897afb16daeed0010d870af0ad8b2a6607cc41ba51539ba0ba848a5ccccc4c5" HandleID="k8s-pod-network.5897afb16daeed0010d870af0ad8b2a6607cc41ba51539ba0ba848a5ccccc4c5" Workload="localhost-k8s-csi--node--driver--qsx6f-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00070a960), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-qsx6f", "timestamp":"2025-03-17 17:53:25.055744891 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Mar 17 17:53:25.225281 containerd[1488]: 2025-03-17 17:53:25.095 [INFO][4458] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Mar 17 17:53:25.225281 containerd[1488]: 2025-03-17 17:53:25.165 [INFO][4458] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Mar 17 17:53:25.225281 containerd[1488]: 2025-03-17 17:53:25.165 [INFO][4458] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 17 17:53:25.225281 containerd[1488]: 2025-03-17 17:53:25.168 [INFO][4458] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.5897afb16daeed0010d870af0ad8b2a6607cc41ba51539ba0ba848a5ccccc4c5" host="localhost" Mar 17 17:53:25.225281 containerd[1488]: 2025-03-17 17:53:25.172 [INFO][4458] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Mar 17 17:53:25.225281 containerd[1488]: 2025-03-17 17:53:25.177 [INFO][4458] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Mar 17 17:53:25.225281 containerd[1488]: 2025-03-17 17:53:25.179 [INFO][4458] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 17 17:53:25.225281 containerd[1488]: 2025-03-17 17:53:25.184 [INFO][4458] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 17 17:53:25.225281 containerd[1488]: 2025-03-17 17:53:25.184 [INFO][4458] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.5897afb16daeed0010d870af0ad8b2a6607cc41ba51539ba0ba848a5ccccc4c5" host="localhost" Mar 17 17:53:25.225281 containerd[1488]: 2025-03-17 17:53:25.185 [INFO][4458] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.5897afb16daeed0010d870af0ad8b2a6607cc41ba51539ba0ba848a5ccccc4c5 Mar 17 17:53:25.225281 containerd[1488]: 2025-03-17 17:53:25.190 [INFO][4458] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.5897afb16daeed0010d870af0ad8b2a6607cc41ba51539ba0ba848a5ccccc4c5" host="localhost" Mar 17 17:53:25.225281 containerd[1488]: 2025-03-17 17:53:25.196 [INFO][4458] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 
handle="k8s-pod-network.5897afb16daeed0010d870af0ad8b2a6607cc41ba51539ba0ba848a5ccccc4c5" host="localhost" Mar 17 17:53:25.225281 containerd[1488]: 2025-03-17 17:53:25.196 [INFO][4458] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.5897afb16daeed0010d870af0ad8b2a6607cc41ba51539ba0ba848a5ccccc4c5" host="localhost" Mar 17 17:53:25.225281 containerd[1488]: 2025-03-17 17:53:25.196 [INFO][4458] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Mar 17 17:53:25.225281 containerd[1488]: 2025-03-17 17:53:25.196 [INFO][4458] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="5897afb16daeed0010d870af0ad8b2a6607cc41ba51539ba0ba848a5ccccc4c5" HandleID="k8s-pod-network.5897afb16daeed0010d870af0ad8b2a6607cc41ba51539ba0ba848a5ccccc4c5" Workload="localhost-k8s-csi--node--driver--qsx6f-eth0" Mar 17 17:53:25.226078 containerd[1488]: 2025-03-17 17:53:25.200 [INFO][4441] cni-plugin/k8s.go 386: Populated endpoint ContainerID="5897afb16daeed0010d870af0ad8b2a6607cc41ba51539ba0ba848a5ccccc4c5" Namespace="calico-system" Pod="csi-node-driver-qsx6f" WorkloadEndpoint="localhost-k8s-csi--node--driver--qsx6f-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--qsx6f-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"0a462e4e-6356-4301-958a-de536414d963", ResourceVersion:"616", Generation:0, CreationTimestamp:time.Date(2025, time.March, 17, 17, 53, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"69ddf5d45d", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-qsx6f", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"califffd09dbee2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 17 17:53:25.226078 containerd[1488]: 2025-03-17 17:53:25.201 [INFO][4441] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.130/32] ContainerID="5897afb16daeed0010d870af0ad8b2a6607cc41ba51539ba0ba848a5ccccc4c5" Namespace="calico-system" Pod="csi-node-driver-qsx6f" WorkloadEndpoint="localhost-k8s-csi--node--driver--qsx6f-eth0" Mar 17 17:53:25.226078 containerd[1488]: 2025-03-17 17:53:25.201 [INFO][4441] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to califffd09dbee2 ContainerID="5897afb16daeed0010d870af0ad8b2a6607cc41ba51539ba0ba848a5ccccc4c5" Namespace="calico-system" Pod="csi-node-driver-qsx6f" WorkloadEndpoint="localhost-k8s-csi--node--driver--qsx6f-eth0" Mar 17 17:53:25.226078 containerd[1488]: 2025-03-17 17:53:25.206 [INFO][4441] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="5897afb16daeed0010d870af0ad8b2a6607cc41ba51539ba0ba848a5ccccc4c5" Namespace="calico-system" Pod="csi-node-driver-qsx6f" WorkloadEndpoint="localhost-k8s-csi--node--driver--qsx6f-eth0" Mar 17 17:53:25.226078 containerd[1488]: 2025-03-17 17:53:25.208 [INFO][4441] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="5897afb16daeed0010d870af0ad8b2a6607cc41ba51539ba0ba848a5ccccc4c5" Namespace="calico-system" 
Pod="csi-node-driver-qsx6f" WorkloadEndpoint="localhost-k8s-csi--node--driver--qsx6f-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--qsx6f-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"0a462e4e-6356-4301-958a-de536414d963", ResourceVersion:"616", Generation:0, CreationTimestamp:time.Date(2025, time.March, 17, 17, 53, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"69ddf5d45d", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"5897afb16daeed0010d870af0ad8b2a6607cc41ba51539ba0ba848a5ccccc4c5", Pod:"csi-node-driver-qsx6f", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"califffd09dbee2", MAC:"ae:d9:63:00:d4:10", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 17 17:53:25.226078 containerd[1488]: 2025-03-17 17:53:25.221 [INFO][4441] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="5897afb16daeed0010d870af0ad8b2a6607cc41ba51539ba0ba848a5ccccc4c5" Namespace="calico-system" Pod="csi-node-driver-qsx6f" WorkloadEndpoint="localhost-k8s-csi--node--driver--qsx6f-eth0" Mar 17 17:53:25.240080 systemd-networkd[1412]: 
calidf5d17134c3: Link UP Mar 17 17:53:25.240656 systemd-networkd[1412]: calidf5d17134c3: Gained carrier Mar 17 17:53:25.257522 containerd[1488]: 2025-03-17 17:53:24.982 [INFO][4494] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Mar 17 17:53:25.257522 containerd[1488]: 2025-03-17 17:53:24.997 [INFO][4494] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7db6d8ff4d--48wgq-eth0 coredns-7db6d8ff4d- kube-system 61c49c7a-fa99-4721-add6-729b5d3cd3c9 766 0 2025-03-17 17:52:57 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7db6d8ff4d-48wgq eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calidf5d17134c3 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="53984463a79a7150935b991c77d5148267adae7c9255aa9f42fcf3a52c3a8034" Namespace="kube-system" Pod="coredns-7db6d8ff4d-48wgq" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--48wgq-" Mar 17 17:53:25.257522 containerd[1488]: 2025-03-17 17:53:24.997 [INFO][4494] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="53984463a79a7150935b991c77d5148267adae7c9255aa9f42fcf3a52c3a8034" Namespace="kube-system" Pod="coredns-7db6d8ff4d-48wgq" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--48wgq-eth0" Mar 17 17:53:25.257522 containerd[1488]: 2025-03-17 17:53:25.083 [INFO][4532] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="53984463a79a7150935b991c77d5148267adae7c9255aa9f42fcf3a52c3a8034" HandleID="k8s-pod-network.53984463a79a7150935b991c77d5148267adae7c9255aa9f42fcf3a52c3a8034" Workload="localhost-k8s-coredns--7db6d8ff4d--48wgq-eth0" Mar 17 17:53:25.257522 containerd[1488]: 2025-03-17 17:53:25.095 [INFO][4532] ipam/ipam_plugin.go 265: Auto assigning IP 
ContainerID="53984463a79a7150935b991c77d5148267adae7c9255aa9f42fcf3a52c3a8034" HandleID="k8s-pod-network.53984463a79a7150935b991c77d5148267adae7c9255aa9f42fcf3a52c3a8034" Workload="localhost-k8s-coredns--7db6d8ff4d--48wgq-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003e3d60), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7db6d8ff4d-48wgq", "timestamp":"2025-03-17 17:53:25.083401329 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Mar 17 17:53:25.257522 containerd[1488]: 2025-03-17 17:53:25.095 [INFO][4532] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Mar 17 17:53:25.257522 containerd[1488]: 2025-03-17 17:53:25.196 [INFO][4532] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Mar 17 17:53:25.257522 containerd[1488]: 2025-03-17 17:53:25.196 [INFO][4532] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 17 17:53:25.257522 containerd[1488]: 2025-03-17 17:53:25.200 [INFO][4532] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.53984463a79a7150935b991c77d5148267adae7c9255aa9f42fcf3a52c3a8034" host="localhost" Mar 17 17:53:25.257522 containerd[1488]: 2025-03-17 17:53:25.206 [INFO][4532] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Mar 17 17:53:25.257522 containerd[1488]: 2025-03-17 17:53:25.211 [INFO][4532] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Mar 17 17:53:25.257522 containerd[1488]: 2025-03-17 17:53:25.212 [INFO][4532] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 17 17:53:25.257522 containerd[1488]: 2025-03-17 17:53:25.214 [INFO][4532] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 
host="localhost" Mar 17 17:53:25.257522 containerd[1488]: 2025-03-17 17:53:25.214 [INFO][4532] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.53984463a79a7150935b991c77d5148267adae7c9255aa9f42fcf3a52c3a8034" host="localhost" Mar 17 17:53:25.257522 containerd[1488]: 2025-03-17 17:53:25.217 [INFO][4532] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.53984463a79a7150935b991c77d5148267adae7c9255aa9f42fcf3a52c3a8034 Mar 17 17:53:25.257522 containerd[1488]: 2025-03-17 17:53:25.223 [INFO][4532] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.53984463a79a7150935b991c77d5148267adae7c9255aa9f42fcf3a52c3a8034" host="localhost" Mar 17 17:53:25.257522 containerd[1488]: 2025-03-17 17:53:25.229 [INFO][4532] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.53984463a79a7150935b991c77d5148267adae7c9255aa9f42fcf3a52c3a8034" host="localhost" Mar 17 17:53:25.257522 containerd[1488]: 2025-03-17 17:53:25.230 [INFO][4532] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.53984463a79a7150935b991c77d5148267adae7c9255aa9f42fcf3a52c3a8034" host="localhost" Mar 17 17:53:25.257522 containerd[1488]: 2025-03-17 17:53:25.230 [INFO][4532] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Mar 17 17:53:25.257522 containerd[1488]: 2025-03-17 17:53:25.230 [INFO][4532] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="53984463a79a7150935b991c77d5148267adae7c9255aa9f42fcf3a52c3a8034" HandleID="k8s-pod-network.53984463a79a7150935b991c77d5148267adae7c9255aa9f42fcf3a52c3a8034" Workload="localhost-k8s-coredns--7db6d8ff4d--48wgq-eth0" Mar 17 17:53:25.258306 containerd[1488]: 2025-03-17 17:53:25.234 [INFO][4494] cni-plugin/k8s.go 386: Populated endpoint ContainerID="53984463a79a7150935b991c77d5148267adae7c9255aa9f42fcf3a52c3a8034" Namespace="kube-system" Pod="coredns-7db6d8ff4d-48wgq" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--48wgq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--48wgq-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"61c49c7a-fa99-4721-add6-729b5d3cd3c9", ResourceVersion:"766", Generation:0, CreationTimestamp:time.Date(2025, time.March, 17, 17, 52, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7db6d8ff4d-48wgq", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calidf5d17134c3", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 17 17:53:25.258306 containerd[1488]: 2025-03-17 17:53:25.235 [INFO][4494] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.131/32] ContainerID="53984463a79a7150935b991c77d5148267adae7c9255aa9f42fcf3a52c3a8034" Namespace="kube-system" Pod="coredns-7db6d8ff4d-48wgq" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--48wgq-eth0" Mar 17 17:53:25.258306 containerd[1488]: 2025-03-17 17:53:25.235 [INFO][4494] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calidf5d17134c3 ContainerID="53984463a79a7150935b991c77d5148267adae7c9255aa9f42fcf3a52c3a8034" Namespace="kube-system" Pod="coredns-7db6d8ff4d-48wgq" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--48wgq-eth0" Mar 17 17:53:25.258306 containerd[1488]: 2025-03-17 17:53:25.239 [INFO][4494] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="53984463a79a7150935b991c77d5148267adae7c9255aa9f42fcf3a52c3a8034" Namespace="kube-system" Pod="coredns-7db6d8ff4d-48wgq" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--48wgq-eth0" Mar 17 17:53:25.258306 containerd[1488]: 2025-03-17 17:53:25.240 [INFO][4494] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="53984463a79a7150935b991c77d5148267adae7c9255aa9f42fcf3a52c3a8034" Namespace="kube-system" Pod="coredns-7db6d8ff4d-48wgq" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--48wgq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", 
APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--48wgq-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"61c49c7a-fa99-4721-add6-729b5d3cd3c9", ResourceVersion:"766", Generation:0, CreationTimestamp:time.Date(2025, time.March, 17, 17, 52, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"53984463a79a7150935b991c77d5148267adae7c9255aa9f42fcf3a52c3a8034", Pod:"coredns-7db6d8ff4d-48wgq", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calidf5d17134c3", MAC:"86:a4:e9:a7:7d:33", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 17 17:53:25.258306 containerd[1488]: 2025-03-17 17:53:25.255 [INFO][4494] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="53984463a79a7150935b991c77d5148267adae7c9255aa9f42fcf3a52c3a8034" Namespace="kube-system" 
Pod="coredns-7db6d8ff4d-48wgq" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--48wgq-eth0" Mar 17 17:53:25.321790 systemd-networkd[1412]: cali472108079d4: Link UP Mar 17 17:53:25.322038 systemd-networkd[1412]: cali472108079d4: Gained carrier Mar 17 17:53:25.479254 systemd[1]: Started sshd@9-10.0.0.117:22-10.0.0.1:45380.service - OpenSSH per-connection server daemon (10.0.0.1:45380). Mar 17 17:53:25.502221 systemd-networkd[1412]: calicc9fffc3806: Link UP Mar 17 17:53:25.502408 systemd-networkd[1412]: calicc9fffc3806: Gained carrier Mar 17 17:53:25.547823 containerd[1488]: 2025-03-17 17:53:24.995 [INFO][4462] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Mar 17 17:53:25.547823 containerd[1488]: 2025-03-17 17:53:25.014 [INFO][4462] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--69489b6559--jrsd6-eth0 calico-apiserver-69489b6559- calico-apiserver bd182690-c4b5-49c8-a36b-6df7c7b0c5b6 761 0 2025-03-17 17:53:02 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:69489b6559 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-69489b6559-jrsd6 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calicc9fffc3806 [] []}} ContainerID="1572f512d01a7ba228f6bc0de85f473d75b7bb6e35e889ab63650c3b7c417052" Namespace="calico-apiserver" Pod="calico-apiserver-69489b6559-jrsd6" WorkloadEndpoint="localhost-k8s-calico--apiserver--69489b6559--jrsd6-" Mar 17 17:53:25.547823 containerd[1488]: 2025-03-17 17:53:25.014 [INFO][4462] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="1572f512d01a7ba228f6bc0de85f473d75b7bb6e35e889ab63650c3b7c417052" Namespace="calico-apiserver" Pod="calico-apiserver-69489b6559-jrsd6" 
WorkloadEndpoint="localhost-k8s-calico--apiserver--69489b6559--jrsd6-eth0" Mar 17 17:53:25.547823 containerd[1488]: 2025-03-17 17:53:25.083 [INFO][4540] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1572f512d01a7ba228f6bc0de85f473d75b7bb6e35e889ab63650c3b7c417052" HandleID="k8s-pod-network.1572f512d01a7ba228f6bc0de85f473d75b7bb6e35e889ab63650c3b7c417052" Workload="localhost-k8s-calico--apiserver--69489b6559--jrsd6-eth0" Mar 17 17:53:25.547823 containerd[1488]: 2025-03-17 17:53:25.095 [INFO][4540] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="1572f512d01a7ba228f6bc0de85f473d75b7bb6e35e889ab63650c3b7c417052" HandleID="k8s-pod-network.1572f512d01a7ba228f6bc0de85f473d75b7bb6e35e889ab63650c3b7c417052" Workload="localhost-k8s-calico--apiserver--69489b6559--jrsd6-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003c2fc0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-69489b6559-jrsd6", "timestamp":"2025-03-17 17:53:25.083115091 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Mar 17 17:53:25.547823 containerd[1488]: 2025-03-17 17:53:25.095 [INFO][4540] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Mar 17 17:53:25.547823 containerd[1488]: 2025-03-17 17:53:25.317 [INFO][4540] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Mar 17 17:53:25.547823 containerd[1488]: 2025-03-17 17:53:25.317 [INFO][4540] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 17 17:53:25.547823 containerd[1488]: 2025-03-17 17:53:25.318 [INFO][4540] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.1572f512d01a7ba228f6bc0de85f473d75b7bb6e35e889ab63650c3b7c417052" host="localhost" Mar 17 17:53:25.547823 containerd[1488]: 2025-03-17 17:53:25.323 [INFO][4540] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Mar 17 17:53:25.547823 containerd[1488]: 2025-03-17 17:53:25.327 [INFO][4540] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Mar 17 17:53:25.547823 containerd[1488]: 2025-03-17 17:53:25.328 [INFO][4540] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 17 17:53:25.547823 containerd[1488]: 2025-03-17 17:53:25.332 [INFO][4540] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 17 17:53:25.547823 containerd[1488]: 2025-03-17 17:53:25.332 [INFO][4540] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.1572f512d01a7ba228f6bc0de85f473d75b7bb6e35e889ab63650c3b7c417052" host="localhost" Mar 17 17:53:25.547823 containerd[1488]: 2025-03-17 17:53:25.333 [INFO][4540] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.1572f512d01a7ba228f6bc0de85f473d75b7bb6e35e889ab63650c3b7c417052 Mar 17 17:53:25.547823 containerd[1488]: 2025-03-17 17:53:25.363 [INFO][4540] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.1572f512d01a7ba228f6bc0de85f473d75b7bb6e35e889ab63650c3b7c417052" host="localhost" Mar 17 17:53:25.547823 containerd[1488]: 2025-03-17 17:53:25.496 [INFO][4540] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 
handle="k8s-pod-network.1572f512d01a7ba228f6bc0de85f473d75b7bb6e35e889ab63650c3b7c417052" host="localhost" Mar 17 17:53:25.547823 containerd[1488]: 2025-03-17 17:53:25.496 [INFO][4540] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.1572f512d01a7ba228f6bc0de85f473d75b7bb6e35e889ab63650c3b7c417052" host="localhost" Mar 17 17:53:25.547823 containerd[1488]: 2025-03-17 17:53:25.496 [INFO][4540] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Mar 17 17:53:25.547823 containerd[1488]: 2025-03-17 17:53:25.496 [INFO][4540] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="1572f512d01a7ba228f6bc0de85f473d75b7bb6e35e889ab63650c3b7c417052" HandleID="k8s-pod-network.1572f512d01a7ba228f6bc0de85f473d75b7bb6e35e889ab63650c3b7c417052" Workload="localhost-k8s-calico--apiserver--69489b6559--jrsd6-eth0" Mar 17 17:53:25.548417 containerd[1488]: 2025-03-17 17:53:25.500 [INFO][4462] cni-plugin/k8s.go 386: Populated endpoint ContainerID="1572f512d01a7ba228f6bc0de85f473d75b7bb6e35e889ab63650c3b7c417052" Namespace="calico-apiserver" Pod="calico-apiserver-69489b6559-jrsd6" WorkloadEndpoint="localhost-k8s-calico--apiserver--69489b6559--jrsd6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--69489b6559--jrsd6-eth0", GenerateName:"calico-apiserver-69489b6559-", Namespace:"calico-apiserver", SelfLink:"", UID:"bd182690-c4b5-49c8-a36b-6df7c7b0c5b6", ResourceVersion:"761", Generation:0, CreationTimestamp:time.Date(2025, time.March, 17, 17, 53, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"69489b6559", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-69489b6559-jrsd6", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calicc9fffc3806", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 17 17:53:25.548417 containerd[1488]: 2025-03-17 17:53:25.500 [INFO][4462] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.133/32] ContainerID="1572f512d01a7ba228f6bc0de85f473d75b7bb6e35e889ab63650c3b7c417052" Namespace="calico-apiserver" Pod="calico-apiserver-69489b6559-jrsd6" WorkloadEndpoint="localhost-k8s-calico--apiserver--69489b6559--jrsd6-eth0" Mar 17 17:53:25.548417 containerd[1488]: 2025-03-17 17:53:25.500 [INFO][4462] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calicc9fffc3806 ContainerID="1572f512d01a7ba228f6bc0de85f473d75b7bb6e35e889ab63650c3b7c417052" Namespace="calico-apiserver" Pod="calico-apiserver-69489b6559-jrsd6" WorkloadEndpoint="localhost-k8s-calico--apiserver--69489b6559--jrsd6-eth0" Mar 17 17:53:25.548417 containerd[1488]: 2025-03-17 17:53:25.502 [INFO][4462] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="1572f512d01a7ba228f6bc0de85f473d75b7bb6e35e889ab63650c3b7c417052" Namespace="calico-apiserver" Pod="calico-apiserver-69489b6559-jrsd6" WorkloadEndpoint="localhost-k8s-calico--apiserver--69489b6559--jrsd6-eth0" Mar 17 17:53:25.548417 containerd[1488]: 2025-03-17 17:53:25.504 [INFO][4462] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="1572f512d01a7ba228f6bc0de85f473d75b7bb6e35e889ab63650c3b7c417052" Namespace="calico-apiserver" Pod="calico-apiserver-69489b6559-jrsd6" WorkloadEndpoint="localhost-k8s-calico--apiserver--69489b6559--jrsd6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--69489b6559--jrsd6-eth0", GenerateName:"calico-apiserver-69489b6559-", Namespace:"calico-apiserver", SelfLink:"", UID:"bd182690-c4b5-49c8-a36b-6df7c7b0c5b6", ResourceVersion:"761", Generation:0, CreationTimestamp:time.Date(2025, time.March, 17, 17, 53, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"69489b6559", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"1572f512d01a7ba228f6bc0de85f473d75b7bb6e35e889ab63650c3b7c417052", Pod:"calico-apiserver-69489b6559-jrsd6", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calicc9fffc3806", MAC:"1e:ff:ac:da:6c:9a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 17 17:53:25.548417 containerd[1488]: 2025-03-17 17:53:25.544 [INFO][4462] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="1572f512d01a7ba228f6bc0de85f473d75b7bb6e35e889ab63650c3b7c417052" 
Namespace="calico-apiserver" Pod="calico-apiserver-69489b6559-jrsd6" WorkloadEndpoint="localhost-k8s-calico--apiserver--69489b6559--jrsd6-eth0" Mar 17 17:53:25.624039 containerd[1488]: 2025-03-17 17:53:24.827 [INFO][4427] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Mar 17 17:53:25.624039 containerd[1488]: 2025-03-17 17:53:24.857 [INFO][4427] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--766ddbd77b--gscbd-eth0 calico-kube-controllers-766ddbd77b- calico-system ecb46531-5f4a-4235-a229-80c727efd6de 765 0 2025-03-17 17:53:03 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:766ddbd77b projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-766ddbd77b-gscbd eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali472108079d4 [] []}} ContainerID="c96058a82157a33a2b005cc46aa76a55c4bad3f1ac915b988d8c55ec372406a2" Namespace="calico-system" Pod="calico-kube-controllers-766ddbd77b-gscbd" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--766ddbd77b--gscbd-" Mar 17 17:53:25.624039 containerd[1488]: 2025-03-17 17:53:24.859 [INFO][4427] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="c96058a82157a33a2b005cc46aa76a55c4bad3f1ac915b988d8c55ec372406a2" Namespace="calico-system" Pod="calico-kube-controllers-766ddbd77b-gscbd" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--766ddbd77b--gscbd-eth0" Mar 17 17:53:25.624039 containerd[1488]: 2025-03-17 17:53:25.055 [INFO][4472] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c96058a82157a33a2b005cc46aa76a55c4bad3f1ac915b988d8c55ec372406a2" 
HandleID="k8s-pod-network.c96058a82157a33a2b005cc46aa76a55c4bad3f1ac915b988d8c55ec372406a2" Workload="localhost-k8s-calico--kube--controllers--766ddbd77b--gscbd-eth0" Mar 17 17:53:25.624039 containerd[1488]: 2025-03-17 17:53:25.094 [INFO][4472] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="c96058a82157a33a2b005cc46aa76a55c4bad3f1ac915b988d8c55ec372406a2" HandleID="k8s-pod-network.c96058a82157a33a2b005cc46aa76a55c4bad3f1ac915b988d8c55ec372406a2" Workload="localhost-k8s-calico--kube--controllers--766ddbd77b--gscbd-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003808b0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-766ddbd77b-gscbd", "timestamp":"2025-03-17 17:53:25.055837916 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Mar 17 17:53:25.624039 containerd[1488]: 2025-03-17 17:53:25.095 [INFO][4472] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Mar 17 17:53:25.624039 containerd[1488]: 2025-03-17 17:53:25.231 [INFO][4472] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Mar 17 17:53:25.624039 containerd[1488]: 2025-03-17 17:53:25.231 [INFO][4472] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 17 17:53:25.624039 containerd[1488]: 2025-03-17 17:53:25.234 [INFO][4472] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.c96058a82157a33a2b005cc46aa76a55c4bad3f1ac915b988d8c55ec372406a2" host="localhost" Mar 17 17:53:25.624039 containerd[1488]: 2025-03-17 17:53:25.245 [INFO][4472] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Mar 17 17:53:25.624039 containerd[1488]: 2025-03-17 17:53:25.255 [INFO][4472] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Mar 17 17:53:25.624039 containerd[1488]: 2025-03-17 17:53:25.284 [INFO][4472] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 17 17:53:25.624039 containerd[1488]: 2025-03-17 17:53:25.286 [INFO][4472] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 17 17:53:25.624039 containerd[1488]: 2025-03-17 17:53:25.286 [INFO][4472] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.c96058a82157a33a2b005cc46aa76a55c4bad3f1ac915b988d8c55ec372406a2" host="localhost" Mar 17 17:53:25.624039 containerd[1488]: 2025-03-17 17:53:25.287 [INFO][4472] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.c96058a82157a33a2b005cc46aa76a55c4bad3f1ac915b988d8c55ec372406a2 Mar 17 17:53:25.624039 containerd[1488]: 2025-03-17 17:53:25.294 [INFO][4472] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.c96058a82157a33a2b005cc46aa76a55c4bad3f1ac915b988d8c55ec372406a2" host="localhost" Mar 17 17:53:25.624039 containerd[1488]: 2025-03-17 17:53:25.317 [INFO][4472] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 
handle="k8s-pod-network.c96058a82157a33a2b005cc46aa76a55c4bad3f1ac915b988d8c55ec372406a2" host="localhost" Mar 17 17:53:25.624039 containerd[1488]: 2025-03-17 17:53:25.317 [INFO][4472] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.c96058a82157a33a2b005cc46aa76a55c4bad3f1ac915b988d8c55ec372406a2" host="localhost" Mar 17 17:53:25.624039 containerd[1488]: 2025-03-17 17:53:25.317 [INFO][4472] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Mar 17 17:53:25.624039 containerd[1488]: 2025-03-17 17:53:25.317 [INFO][4472] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="c96058a82157a33a2b005cc46aa76a55c4bad3f1ac915b988d8c55ec372406a2" HandleID="k8s-pod-network.c96058a82157a33a2b005cc46aa76a55c4bad3f1ac915b988d8c55ec372406a2" Workload="localhost-k8s-calico--kube--controllers--766ddbd77b--gscbd-eth0" Mar 17 17:53:25.625472 containerd[1488]: 2025-03-17 17:53:25.319 [INFO][4427] cni-plugin/k8s.go 386: Populated endpoint ContainerID="c96058a82157a33a2b005cc46aa76a55c4bad3f1ac915b988d8c55ec372406a2" Namespace="calico-system" Pod="calico-kube-controllers-766ddbd77b-gscbd" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--766ddbd77b--gscbd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--766ddbd77b--gscbd-eth0", GenerateName:"calico-kube-controllers-766ddbd77b-", Namespace:"calico-system", SelfLink:"", UID:"ecb46531-5f4a-4235-a229-80c727efd6de", ResourceVersion:"765", Generation:0, CreationTimestamp:time.Date(2025, time.March, 17, 17, 53, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"766ddbd77b", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-766ddbd77b-gscbd", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali472108079d4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 17 17:53:25.625472 containerd[1488]: 2025-03-17 17:53:25.320 [INFO][4427] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.132/32] ContainerID="c96058a82157a33a2b005cc46aa76a55c4bad3f1ac915b988d8c55ec372406a2" Namespace="calico-system" Pod="calico-kube-controllers-766ddbd77b-gscbd" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--766ddbd77b--gscbd-eth0" Mar 17 17:53:25.625472 containerd[1488]: 2025-03-17 17:53:25.320 [INFO][4427] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali472108079d4 ContainerID="c96058a82157a33a2b005cc46aa76a55c4bad3f1ac915b988d8c55ec372406a2" Namespace="calico-system" Pod="calico-kube-controllers-766ddbd77b-gscbd" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--766ddbd77b--gscbd-eth0" Mar 17 17:53:25.625472 containerd[1488]: 2025-03-17 17:53:25.322 [INFO][4427] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c96058a82157a33a2b005cc46aa76a55c4bad3f1ac915b988d8c55ec372406a2" Namespace="calico-system" Pod="calico-kube-controllers-766ddbd77b-gscbd" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--766ddbd77b--gscbd-eth0" Mar 17 17:53:25.625472 containerd[1488]: 2025-03-17 17:53:25.323 [INFO][4427] 
cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="c96058a82157a33a2b005cc46aa76a55c4bad3f1ac915b988d8c55ec372406a2" Namespace="calico-system" Pod="calico-kube-controllers-766ddbd77b-gscbd" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--766ddbd77b--gscbd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--766ddbd77b--gscbd-eth0", GenerateName:"calico-kube-controllers-766ddbd77b-", Namespace:"calico-system", SelfLink:"", UID:"ecb46531-5f4a-4235-a229-80c727efd6de", ResourceVersion:"765", Generation:0, CreationTimestamp:time.Date(2025, time.March, 17, 17, 53, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"766ddbd77b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c96058a82157a33a2b005cc46aa76a55c4bad3f1ac915b988d8c55ec372406a2", Pod:"calico-kube-controllers-766ddbd77b-gscbd", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali472108079d4", MAC:"36:06:74:1c:44:b6", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 17 17:53:25.625472 containerd[1488]: 2025-03-17 17:53:25.620 [INFO][4427] cni-plugin/k8s.go 500: Wrote updated 
endpoint to datastore ContainerID="c96058a82157a33a2b005cc46aa76a55c4bad3f1ac915b988d8c55ec372406a2" Namespace="calico-system" Pod="calico-kube-controllers-766ddbd77b-gscbd" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--766ddbd77b--gscbd-eth0" Mar 17 17:53:25.643371 containerd[1488]: time="2025-03-17T17:53:25.643053443Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 17:53:25.643371 containerd[1488]: time="2025-03-17T17:53:25.643129236Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 17:53:25.643371 containerd[1488]: time="2025-03-17T17:53:25.643147230Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:53:25.643371 containerd[1488]: time="2025-03-17T17:53:25.643247799Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:53:25.648747 containerd[1488]: time="2025-03-17T17:53:25.647525983Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 17:53:25.648747 containerd[1488]: time="2025-03-17T17:53:25.647681234Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 17:53:25.648747 containerd[1488]: time="2025-03-17T17:53:25.647737570Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:53:25.648747 containerd[1488]: time="2025-03-17T17:53:25.647939760Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:53:25.648525 systemd-networkd[1412]: cali198cd8366e2: Link UP Mar 17 17:53:25.650320 systemd-networkd[1412]: cali198cd8366e2: Gained carrier Mar 17 17:53:25.654210 containerd[1488]: time="2025-03-17T17:53:25.653301029Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 17:53:25.654210 containerd[1488]: time="2025-03-17T17:53:25.653362043Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 17:53:25.654210 containerd[1488]: time="2025-03-17T17:53:25.653993409Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:53:25.654210 containerd[1488]: time="2025-03-17T17:53:25.654097895Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:53:25.662825 sshd[4605]: Accepted publickey for core from 10.0.0.1 port 45380 ssh2: RSA SHA256:pvoNHoTmHcKIZ8E4rah4Xh4kuY0L81ONuagOsU2gN/o Mar 17 17:53:25.664886 sshd-session[4605]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:53:25.676033 systemd-logind[1472]: New session 10 of user core. Mar 17 17:53:25.681503 systemd[1]: Started session-10.scope - Session 10 of User core. 
Mar 17 17:53:25.682158 containerd[1488]: 2025-03-17 17:53:24.986 [INFO][4460] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Mar 17 17:53:25.682158 containerd[1488]: 2025-03-17 17:53:25.008 [INFO][4460] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--69489b6559--dszht-eth0 calico-apiserver-69489b6559- calico-apiserver 1c0a54b3-f431-4a56-aa22-15462ea1a4c4 763 0 2025-03-17 17:53:02 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:69489b6559 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-69489b6559-dszht eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali198cd8366e2 [] []}} ContainerID="2607a5fd2a9b2d3c22c00add038853f5b50f6d311fcf249732c5ada2b72f772b" Namespace="calico-apiserver" Pod="calico-apiserver-69489b6559-dszht" WorkloadEndpoint="localhost-k8s-calico--apiserver--69489b6559--dszht-" Mar 17 17:53:25.682158 containerd[1488]: 2025-03-17 17:53:25.008 [INFO][4460] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="2607a5fd2a9b2d3c22c00add038853f5b50f6d311fcf249732c5ada2b72f772b" Namespace="calico-apiserver" Pod="calico-apiserver-69489b6559-dszht" WorkloadEndpoint="localhost-k8s-calico--apiserver--69489b6559--dszht-eth0" Mar 17 17:53:25.682158 containerd[1488]: 2025-03-17 17:53:25.077 [INFO][4553] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2607a5fd2a9b2d3c22c00add038853f5b50f6d311fcf249732c5ada2b72f772b" HandleID="k8s-pod-network.2607a5fd2a9b2d3c22c00add038853f5b50f6d311fcf249732c5ada2b72f772b" Workload="localhost-k8s-calico--apiserver--69489b6559--dszht-eth0" Mar 17 17:53:25.682158 containerd[1488]: 2025-03-17 17:53:25.095 [INFO][4553] ipam/ipam_plugin.go 265: Auto 
assigning IP ContainerID="2607a5fd2a9b2d3c22c00add038853f5b50f6d311fcf249732c5ada2b72f772b" HandleID="k8s-pod-network.2607a5fd2a9b2d3c22c00add038853f5b50f6d311fcf249732c5ada2b72f772b" Workload="localhost-k8s-calico--apiserver--69489b6559--dszht-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002647f0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-69489b6559-dszht", "timestamp":"2025-03-17 17:53:25.077644348 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Mar 17 17:53:25.682158 containerd[1488]: 2025-03-17 17:53:25.095 [INFO][4553] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Mar 17 17:53:25.682158 containerd[1488]: 2025-03-17 17:53:25.496 [INFO][4553] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Mar 17 17:53:25.682158 containerd[1488]: 2025-03-17 17:53:25.496 [INFO][4553] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 17 17:53:25.682158 containerd[1488]: 2025-03-17 17:53:25.499 [INFO][4553] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.2607a5fd2a9b2d3c22c00add038853f5b50f6d311fcf249732c5ada2b72f772b" host="localhost" Mar 17 17:53:25.682158 containerd[1488]: 2025-03-17 17:53:25.503 [INFO][4553] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Mar 17 17:53:25.682158 containerd[1488]: 2025-03-17 17:53:25.507 [INFO][4553] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Mar 17 17:53:25.682158 containerd[1488]: 2025-03-17 17:53:25.509 [INFO][4553] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 17 17:53:25.682158 containerd[1488]: 2025-03-17 17:53:25.544 [INFO][4553] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 17 17:53:25.682158 containerd[1488]: 2025-03-17 17:53:25.544 [INFO][4553] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.2607a5fd2a9b2d3c22c00add038853f5b50f6d311fcf249732c5ada2b72f772b" host="localhost" Mar 17 17:53:25.682158 containerd[1488]: 2025-03-17 17:53:25.546 [INFO][4553] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.2607a5fd2a9b2d3c22c00add038853f5b50f6d311fcf249732c5ada2b72f772b Mar 17 17:53:25.682158 containerd[1488]: 2025-03-17 17:53:25.578 [INFO][4553] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.2607a5fd2a9b2d3c22c00add038853f5b50f6d311fcf249732c5ada2b72f772b" host="localhost" Mar 17 17:53:25.682158 containerd[1488]: 2025-03-17 17:53:25.620 [INFO][4553] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 
handle="k8s-pod-network.2607a5fd2a9b2d3c22c00add038853f5b50f6d311fcf249732c5ada2b72f772b" host="localhost" Mar 17 17:53:25.682158 containerd[1488]: 2025-03-17 17:53:25.620 [INFO][4553] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.2607a5fd2a9b2d3c22c00add038853f5b50f6d311fcf249732c5ada2b72f772b" host="localhost" Mar 17 17:53:25.682158 containerd[1488]: 2025-03-17 17:53:25.620 [INFO][4553] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Mar 17 17:53:25.682158 containerd[1488]: 2025-03-17 17:53:25.620 [INFO][4553] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="2607a5fd2a9b2d3c22c00add038853f5b50f6d311fcf249732c5ada2b72f772b" HandleID="k8s-pod-network.2607a5fd2a9b2d3c22c00add038853f5b50f6d311fcf249732c5ada2b72f772b" Workload="localhost-k8s-calico--apiserver--69489b6559--dszht-eth0" Mar 17 17:53:25.684224 containerd[1488]: 2025-03-17 17:53:25.631 [INFO][4460] cni-plugin/k8s.go 386: Populated endpoint ContainerID="2607a5fd2a9b2d3c22c00add038853f5b50f6d311fcf249732c5ada2b72f772b" Namespace="calico-apiserver" Pod="calico-apiserver-69489b6559-dszht" WorkloadEndpoint="localhost-k8s-calico--apiserver--69489b6559--dszht-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--69489b6559--dszht-eth0", GenerateName:"calico-apiserver-69489b6559-", Namespace:"calico-apiserver", SelfLink:"", UID:"1c0a54b3-f431-4a56-aa22-15462ea1a4c4", ResourceVersion:"763", Generation:0, CreationTimestamp:time.Date(2025, time.March, 17, 17, 53, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"69489b6559", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-69489b6559-dszht", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali198cd8366e2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 17 17:53:25.684224 containerd[1488]: 2025-03-17 17:53:25.637 [INFO][4460] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.134/32] ContainerID="2607a5fd2a9b2d3c22c00add038853f5b50f6d311fcf249732c5ada2b72f772b" Namespace="calico-apiserver" Pod="calico-apiserver-69489b6559-dszht" WorkloadEndpoint="localhost-k8s-calico--apiserver--69489b6559--dszht-eth0" Mar 17 17:53:25.684224 containerd[1488]: 2025-03-17 17:53:25.641 [INFO][4460] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali198cd8366e2 ContainerID="2607a5fd2a9b2d3c22c00add038853f5b50f6d311fcf249732c5ada2b72f772b" Namespace="calico-apiserver" Pod="calico-apiserver-69489b6559-dszht" WorkloadEndpoint="localhost-k8s-calico--apiserver--69489b6559--dszht-eth0" Mar 17 17:53:25.684224 containerd[1488]: 2025-03-17 17:53:25.650 [INFO][4460] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2607a5fd2a9b2d3c22c00add038853f5b50f6d311fcf249732c5ada2b72f772b" Namespace="calico-apiserver" Pod="calico-apiserver-69489b6559-dszht" WorkloadEndpoint="localhost-k8s-calico--apiserver--69489b6559--dszht-eth0" Mar 17 17:53:25.684224 containerd[1488]: 2025-03-17 17:53:25.654 [INFO][4460] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="2607a5fd2a9b2d3c22c00add038853f5b50f6d311fcf249732c5ada2b72f772b" Namespace="calico-apiserver" Pod="calico-apiserver-69489b6559-dszht" WorkloadEndpoint="localhost-k8s-calico--apiserver--69489b6559--dszht-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--69489b6559--dszht-eth0", GenerateName:"calico-apiserver-69489b6559-", Namespace:"calico-apiserver", SelfLink:"", UID:"1c0a54b3-f431-4a56-aa22-15462ea1a4c4", ResourceVersion:"763", Generation:0, CreationTimestamp:time.Date(2025, time.March, 17, 17, 53, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"69489b6559", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"2607a5fd2a9b2d3c22c00add038853f5b50f6d311fcf249732c5ada2b72f772b", Pod:"calico-apiserver-69489b6559-dszht", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali198cd8366e2", MAC:"fe:fa:5c:e8:27:49", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 17 17:53:25.684224 containerd[1488]: 2025-03-17 17:53:25.670 [INFO][4460] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="2607a5fd2a9b2d3c22c00add038853f5b50f6d311fcf249732c5ada2b72f772b" 
Namespace="calico-apiserver" Pod="calico-apiserver-69489b6559-dszht" WorkloadEndpoint="localhost-k8s-calico--apiserver--69489b6559--dszht-eth0" Mar 17 17:53:25.688078 containerd[1488]: time="2025-03-17T17:53:25.685810238Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 17:53:25.688078 containerd[1488]: time="2025-03-17T17:53:25.685889378Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 17:53:25.688078 containerd[1488]: time="2025-03-17T17:53:25.685909365Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:53:25.688078 containerd[1488]: time="2025-03-17T17:53:25.686017077Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:53:25.691038 containerd[1488]: time="2025-03-17T17:53:25.690647974Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 17:53:25.691038 containerd[1488]: time="2025-03-17T17:53:25.690727002Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 17:53:25.691038 containerd[1488]: time="2025-03-17T17:53:25.690921607Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:53:25.692802 containerd[1488]: time="2025-03-17T17:53:25.692703504Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:53:25.696021 systemd[1]: Started cri-containerd-5897afb16daeed0010d870af0ad8b2a6607cc41ba51539ba0ba848a5ccccc4c5.scope - libcontainer container 5897afb16daeed0010d870af0ad8b2a6607cc41ba51539ba0ba848a5ccccc4c5. Mar 17 17:53:25.697693 systemd[1]: Started cri-containerd-b9dc259b0ea138256ffc554b96e1884e672aa42407b18f87fcd3bb12421ef5de.scope - libcontainer container b9dc259b0ea138256ffc554b96e1884e672aa42407b18f87fcd3bb12421ef5de. Mar 17 17:53:25.704533 systemd[1]: Started cri-containerd-53984463a79a7150935b991c77d5148267adae7c9255aa9f42fcf3a52c3a8034.scope - libcontainer container 53984463a79a7150935b991c77d5148267adae7c9255aa9f42fcf3a52c3a8034. Mar 17 17:53:25.712573 systemd[1]: Started cri-containerd-1572f512d01a7ba228f6bc0de85f473d75b7bb6e35e889ab63650c3b7c417052.scope - libcontainer container 1572f512d01a7ba228f6bc0de85f473d75b7bb6e35e889ab63650c3b7c417052. Mar 17 17:53:25.725889 containerd[1488]: time="2025-03-17T17:53:25.725706912Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 17:53:25.726470 containerd[1488]: time="2025-03-17T17:53:25.725918460Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 17:53:25.726874 containerd[1488]: time="2025-03-17T17:53:25.726651576Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:53:25.726874 containerd[1488]: time="2025-03-17T17:53:25.726728941Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:53:25.735686 systemd-resolved[1327]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 17 17:53:25.740404 systemd[1]: Started cri-containerd-c96058a82157a33a2b005cc46aa76a55c4bad3f1ac915b988d8c55ec372406a2.scope - libcontainer container c96058a82157a33a2b005cc46aa76a55c4bad3f1ac915b988d8c55ec372406a2. Mar 17 17:53:25.755471 systemd-resolved[1327]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 17 17:53:25.765004 systemd[1]: Started cri-containerd-2607a5fd2a9b2d3c22c00add038853f5b50f6d311fcf249732c5ada2b72f772b.scope - libcontainer container 2607a5fd2a9b2d3c22c00add038853f5b50f6d311fcf249732c5ada2b72f772b. Mar 17 17:53:25.770637 systemd-resolved[1327]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 17 17:53:25.811520 systemd[1]: run-netns-cni\x2d5f1e32f9\x2d586a\x2dbb9b\x2d9c5b\x2d11d76fc86303.mount: Deactivated successfully. Mar 17 17:53:25.811674 systemd[1]: run-netns-cni\x2db1bc49ef\x2dd3fa\x2d688e\x2dd40d\x2deb66ce7478b7.mount: Deactivated successfully. 
Mar 17 17:53:25.825903 systemd-resolved[1327]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 17 17:53:25.843518 systemd-resolved[1327]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 17 17:53:25.863896 containerd[1488]: time="2025-03-17T17:53:25.863444766Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-48wgq,Uid:61c49c7a-fa99-4721-add6-729b5d3cd3c9,Namespace:kube-system,Attempt:4,} returns sandbox id \"53984463a79a7150935b991c77d5148267adae7c9255aa9f42fcf3a52c3a8034\"" Mar 17 17:53:25.868916 kubelet[2671]: E0317 17:53:25.867339 2671 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:53:25.885391 containerd[1488]: time="2025-03-17T17:53:25.885037957Z" level=info msg="CreateContainer within sandbox \"53984463a79a7150935b991c77d5148267adae7c9255aa9f42fcf3a52c3a8034\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 17 17:53:25.887516 systemd-resolved[1327]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 17 17:53:25.890222 containerd[1488]: time="2025-03-17T17:53:25.888997092Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-qsx6f,Uid:0a462e4e-6356-4301-958a-de536414d963,Namespace:calico-system,Attempt:4,} returns sandbox id \"5897afb16daeed0010d870af0ad8b2a6607cc41ba51539ba0ba848a5ccccc4c5\"" Mar 17 17:53:25.896175 containerd[1488]: time="2025-03-17T17:53:25.896129426Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-z8gdg,Uid:a4788825-63ac-4824-9f76-78d12b4feedf,Namespace:kube-system,Attempt:4,} returns sandbox id \"b9dc259b0ea138256ffc554b96e1884e672aa42407b18f87fcd3bb12421ef5de\"" Mar 17 17:53:25.908570 kubelet[2671]: E0317 17:53:25.908271 2671 dns.go:153] "Nameserver limits 
exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 17:53:25.910825 containerd[1488]: time="2025-03-17T17:53:25.910007188Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.2\"" Mar 17 17:53:25.923721 containerd[1488]: time="2025-03-17T17:53:25.923335207Z" level=info msg="CreateContainer within sandbox \"b9dc259b0ea138256ffc554b96e1884e672aa42407b18f87fcd3bb12421ef5de\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 17 17:53:25.931143 containerd[1488]: time="2025-03-17T17:53:25.931041139Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-69489b6559-jrsd6,Uid:bd182690-c4b5-49c8-a36b-6df7c7b0c5b6,Namespace:calico-apiserver,Attempt:4,} returns sandbox id \"1572f512d01a7ba228f6bc0de85f473d75b7bb6e35e889ab63650c3b7c417052\"" Mar 17 17:53:25.939771 sshd[4746]: Connection closed by 10.0.0.1 port 45380 Mar 17 17:53:25.940240 sshd-session[4605]: pam_unix(sshd:session): session closed for user core Mar 17 17:53:25.952056 systemd[1]: sshd@9-10.0.0.117:22-10.0.0.1:45380.service: Deactivated successfully. Mar 17 17:53:25.960444 systemd[1]: session-10.scope: Deactivated successfully. Mar 17 17:53:25.963201 systemd-logind[1472]: Session 10 logged out. Waiting for processes to exit. Mar 17 17:53:25.971623 containerd[1488]: time="2025-03-17T17:53:25.971583394Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-766ddbd77b-gscbd,Uid:ecb46531-5f4a-4235-a229-80c727efd6de,Namespace:calico-system,Attempt:4,} returns sandbox id \"c96058a82157a33a2b005cc46aa76a55c4bad3f1ac915b988d8c55ec372406a2\"" Mar 17 17:53:25.989505 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1420953952.mount: Deactivated successfully. 
Mar 17 17:53:25.990300 containerd[1488]: time="2025-03-17T17:53:25.987621483Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-69489b6559-dszht,Uid:1c0a54b3-f431-4a56-aa22-15462ea1a4c4,Namespace:calico-apiserver,Attempt:4,} returns sandbox id \"2607a5fd2a9b2d3c22c00add038853f5b50f6d311fcf249732c5ada2b72f772b\"" Mar 17 17:53:25.993143 systemd-logind[1472]: Removed session 10. Mar 17 17:53:26.001073 containerd[1488]: time="2025-03-17T17:53:26.000929745Z" level=info msg="CreateContainer within sandbox \"53984463a79a7150935b991c77d5148267adae7c9255aa9f42fcf3a52c3a8034\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"937a346f770def90297165bcec8b17c84c40d95bffc71950624d05f8ee8a0d3e\"" Mar 17 17:53:26.002896 containerd[1488]: time="2025-03-17T17:53:26.001876162Z" level=info msg="StartContainer for \"937a346f770def90297165bcec8b17c84c40d95bffc71950624d05f8ee8a0d3e\"" Mar 17 17:53:26.010058 containerd[1488]: time="2025-03-17T17:53:26.010014775Z" level=info msg="CreateContainer within sandbox \"b9dc259b0ea138256ffc554b96e1884e672aa42407b18f87fcd3bb12421ef5de\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"adf2d8c566bb39c6c238fbfaf03b5bc0c2f32c7470c31d9a09d93bf226999e93\"" Mar 17 17:53:26.010893 containerd[1488]: time="2025-03-17T17:53:26.010782586Z" level=info msg="StartContainer for \"adf2d8c566bb39c6c238fbfaf03b5bc0c2f32c7470c31d9a09d93bf226999e93\"" Mar 17 17:53:26.067990 systemd[1]: Started cri-containerd-937a346f770def90297165bcec8b17c84c40d95bffc71950624d05f8ee8a0d3e.scope - libcontainer container 937a346f770def90297165bcec8b17c84c40d95bffc71950624d05f8ee8a0d3e. Mar 17 17:53:26.070310 systemd[1]: Started cri-containerd-adf2d8c566bb39c6c238fbfaf03b5bc0c2f32c7470c31d9a09d93bf226999e93.scope - libcontainer container adf2d8c566bb39c6c238fbfaf03b5bc0c2f32c7470c31d9a09d93bf226999e93. 
Mar 17 17:53:26.126848 containerd[1488]: time="2025-03-17T17:53:26.126700953Z" level=info msg="StartContainer for \"937a346f770def90297165bcec8b17c84c40d95bffc71950624d05f8ee8a0d3e\" returns successfully"
Mar 17 17:53:26.127408 containerd[1488]: time="2025-03-17T17:53:26.127109861Z" level=info msg="StartContainer for \"adf2d8c566bb39c6c238fbfaf03b5bc0c2f32c7470c31d9a09d93bf226999e93\" returns successfully"
Mar 17 17:53:26.151857 kernel: bpftool[5082]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set
Mar 17 17:53:26.352003 systemd-networkd[1412]: cali17bdb97db6f: Gained IPv6LL
Mar 17 17:53:26.402844 systemd-networkd[1412]: vxlan.calico: Link UP
Mar 17 17:53:26.402856 systemd-networkd[1412]: vxlan.calico: Gained carrier
Mar 17 17:53:26.736922 systemd-networkd[1412]: cali472108079d4: Gained IPv6LL
Mar 17 17:53:26.768337 systemd[1]: run-containerd-runc-k8s.io-adf2d8c566bb39c6c238fbfaf03b5bc0c2f32c7470c31d9a09d93bf226999e93-runc.FgNfYn.mount: Deactivated successfully.
Mar 17 17:53:26.812641 kubelet[2671]: E0317 17:53:26.812446 2671 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 17:53:26.815087 kubelet[2671]: E0317 17:53:26.815054 2671 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 17:53:26.863939 systemd-networkd[1412]: calicc9fffc3806: Gained IPv6LL
Mar 17 17:53:26.928908 systemd-networkd[1412]: califffd09dbee2: Gained IPv6LL
Mar 17 17:53:27.083774 kubelet[2671]: I0317 17:53:27.083591 2671 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-z8gdg" podStartSLOduration=30.083571662 podStartE2EDuration="30.083571662s" podCreationTimestamp="2025-03-17 17:52:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 17:53:27.083117329 +0000 UTC m=+43.599122745" watchObservedRunningTime="2025-03-17 17:53:27.083571662 +0000 UTC m=+43.599577068"
Mar 17 17:53:27.248013 systemd-networkd[1412]: calidf5d17134c3: Gained IPv6LL
Mar 17 17:53:27.333564 kubelet[2671]: I0317 17:53:27.333437 2671 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-48wgq" podStartSLOduration=30.3334181 podStartE2EDuration="30.3334181s" podCreationTimestamp="2025-03-17 17:52:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 17:53:27.332969969 +0000 UTC m=+43.848975395" watchObservedRunningTime="2025-03-17 17:53:27.3334181 +0000 UTC m=+43.849423506"
Mar 17 17:53:27.568007 systemd-networkd[1412]: cali198cd8366e2: Gained IPv6LL
Mar 17 17:53:27.818586 kubelet[2671]: E0317 17:53:27.818435 2671 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 17:53:27.818586 kubelet[2671]: E0317 17:53:27.818547 2671 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 17:53:28.078476 containerd[1488]: time="2025-03-17T17:53:28.078302943Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 17 17:53:28.079321 containerd[1488]: time="2025-03-17T17:53:28.079287442Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.2: active requests=0, bytes read=7909887"
Mar 17 17:53:28.080585 containerd[1488]: time="2025-03-17T17:53:28.080530526Z" level=info msg="ImageCreate event name:\"sha256:0fae09f861e350c042fe0db9ce9f8cc5ac4df975a5c4e4a9ddc3c6fac1552a9a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 17 17:53:28.082798 containerd[1488]: time="2025-03-17T17:53:28.082765111Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:214b4eef7008808bda55ad3cc1d4a3cd8df9e0e8094dff213fa3241104eb892c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 17 17:53:28.083583 containerd[1488]: time="2025-03-17T17:53:28.083537631Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.2\" with image id \"sha256:0fae09f861e350c042fe0db9ce9f8cc5ac4df975a5c4e4a9ddc3c6fac1552a9a\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.2\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:214b4eef7008808bda55ad3cc1d4a3cd8df9e0e8094dff213fa3241104eb892c\", size \"9402991\" in 2.17349134s"
Mar 17 17:53:28.083583 containerd[1488]: time="2025-03-17T17:53:28.083568770Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.2\" returns image reference \"sha256:0fae09f861e350c042fe0db9ce9f8cc5ac4df975a5c4e4a9ddc3c6fac1552a9a\""
Mar 17 17:53:28.084928 containerd[1488]: time="2025-03-17T17:53:28.084873519Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.2\""
Mar 17 17:53:28.086184 containerd[1488]: time="2025-03-17T17:53:28.086148292Z" level=info msg="CreateContainer within sandbox \"5897afb16daeed0010d870af0ad8b2a6607cc41ba51539ba0ba848a5ccccc4c5\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}"
Mar 17 17:53:28.144096 systemd-networkd[1412]: vxlan.calico: Gained IPv6LL
Mar 17 17:53:28.173441 containerd[1488]: time="2025-03-17T17:53:28.173369433Z" level=info msg="CreateContainer within sandbox \"5897afb16daeed0010d870af0ad8b2a6607cc41ba51539ba0ba848a5ccccc4c5\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"c2d9a2cae71af3b63212159654cb1ef54b2ae54a9a58a82a53b233221ac73f7e\""
Mar 17 17:53:28.174073 containerd[1488]: time="2025-03-17T17:53:28.174026307Z" level=info msg="StartContainer for \"c2d9a2cae71af3b63212159654cb1ef54b2ae54a9a58a82a53b233221ac73f7e\""
Mar 17 17:53:28.208917 systemd[1]: Started cri-containerd-c2d9a2cae71af3b63212159654cb1ef54b2ae54a9a58a82a53b233221ac73f7e.scope - libcontainer container c2d9a2cae71af3b63212159654cb1ef54b2ae54a9a58a82a53b233221ac73f7e.
Mar 17 17:53:28.220263 kubelet[2671]: I0317 17:53:28.220215 2671 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Mar 17 17:53:28.221176 kubelet[2671]: E0317 17:53:28.220919 2671 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 17:53:28.258134 containerd[1488]: time="2025-03-17T17:53:28.258057750Z" level=info msg="StartContainer for \"c2d9a2cae71af3b63212159654cb1ef54b2ae54a9a58a82a53b233221ac73f7e\" returns successfully"
Mar 17 17:53:28.822397 kubelet[2671]: E0317 17:53:28.822355 2671 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 17:53:28.822574 kubelet[2671]: E0317 17:53:28.822535 2671 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 17:53:28.822636 kubelet[2671]: E0317 17:53:28.822618 2671 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 17:53:30.751316 containerd[1488]: time="2025-03-17T17:53:30.751252493Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 17 17:53:30.760311 containerd[1488]: time="2025-03-17T17:53:30.760263699Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.2: active requests=0, bytes read=42993204"
Mar 17 17:53:30.791438 containerd[1488]: time="2025-03-17T17:53:30.791360472Z" level=info msg="ImageCreate event name:\"sha256:d27fc480d1ad33921c40abef2ab6828fadf6524674fdcc622f571a5abc34ad55\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 17 17:53:30.868665 containerd[1488]: time="2025-03-17T17:53:30.868592436Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:3623f5b60fad0da3387a8649371b53171a4b1226f4d989d2acad9145dc0ef56f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 17 17:53:30.869322 containerd[1488]: time="2025-03-17T17:53:30.869288222Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.2\" with image id \"sha256:d27fc480d1ad33921c40abef2ab6828fadf6524674fdcc622f571a5abc34ad55\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:3623f5b60fad0da3387a8649371b53171a4b1226f4d989d2acad9145dc0ef56f\", size \"44486324\" in 2.78437599s"
Mar 17 17:53:30.869381 containerd[1488]: time="2025-03-17T17:53:30.869320643Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.2\" returns image reference \"sha256:d27fc480d1ad33921c40abef2ab6828fadf6524674fdcc622f571a5abc34ad55\""
Mar 17 17:53:30.870204 containerd[1488]: time="2025-03-17T17:53:30.870177992Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.2\""
Mar 17 17:53:30.871393 containerd[1488]: time="2025-03-17T17:53:30.871358599Z" level=info msg="CreateContainer within sandbox \"1572f512d01a7ba228f6bc0de85f473d75b7bb6e35e889ab63650c3b7c417052\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}"
Mar 17 17:53:30.954213 systemd[1]: Started sshd@10-10.0.0.117:22-10.0.0.1:59990.service - OpenSSH per-connection server daemon (10.0.0.1:59990).
Mar 17 17:53:31.007123 sshd[5281]: Accepted publickey for core from 10.0.0.1 port 59990 ssh2: RSA SHA256:pvoNHoTmHcKIZ8E4rah4Xh4kuY0L81ONuagOsU2gN/o
Mar 17 17:53:31.009639 sshd-session[5281]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 17 17:53:31.017193 containerd[1488]: time="2025-03-17T17:53:31.017135649Z" level=info msg="CreateContainer within sandbox \"1572f512d01a7ba228f6bc0de85f473d75b7bb6e35e889ab63650c3b7c417052\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"af16924c0ba4c6a0c793a56ac07c6325aa5b92b6e0828c8e7b9c25163dc7e820\""
Mar 17 17:53:31.017809 containerd[1488]: time="2025-03-17T17:53:31.017781742Z" level=info msg="StartContainer for \"af16924c0ba4c6a0c793a56ac07c6325aa5b92b6e0828c8e7b9c25163dc7e820\""
Mar 17 17:53:31.019420 systemd-logind[1472]: New session 11 of user core.
Mar 17 17:53:31.025013 systemd[1]: Started session-11.scope - Session 11 of User core.
Mar 17 17:53:31.050901 systemd[1]: Started cri-containerd-af16924c0ba4c6a0c793a56ac07c6325aa5b92b6e0828c8e7b9c25163dc7e820.scope - libcontainer container af16924c0ba4c6a0c793a56ac07c6325aa5b92b6e0828c8e7b9c25163dc7e820.
Mar 17 17:53:31.097774 containerd[1488]: time="2025-03-17T17:53:31.097698796Z" level=info msg="StartContainer for \"af16924c0ba4c6a0c793a56ac07c6325aa5b92b6e0828c8e7b9c25163dc7e820\" returns successfully"
Mar 17 17:53:31.164106 sshd[5291]: Connection closed by 10.0.0.1 port 59990
Mar 17 17:53:31.164974 sshd-session[5281]: pam_unix(sshd:session): session closed for user core
Mar 17 17:53:31.169285 systemd[1]: sshd@10-10.0.0.117:22-10.0.0.1:59990.service: Deactivated successfully.
Mar 17 17:53:31.171292 systemd[1]: session-11.scope: Deactivated successfully.
Mar 17 17:53:31.172375 systemd-logind[1472]: Session 11 logged out. Waiting for processes to exit.
Mar 17 17:53:31.173366 systemd-logind[1472]: Removed session 11.
Mar 17 17:53:32.831135 kubelet[2671]: I0317 17:53:32.831096 2671 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Mar 17 17:53:34.191152 containerd[1488]: time="2025-03-17T17:53:34.191024238Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 17 17:53:34.193175 containerd[1488]: time="2025-03-17T17:53:34.193102388Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.2: active requests=0, bytes read=34792912"
Mar 17 17:53:34.194883 containerd[1488]: time="2025-03-17T17:53:34.194806526Z" level=info msg="ImageCreate event name:\"sha256:f6a228558381bc7de7c5296ac6c4e903cfda929899c85806367a726ef6d7ff5f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 17 17:53:34.198183 containerd[1488]: time="2025-03-17T17:53:34.198147466Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:6d1f392b747f912366ec5c60ee1130952c2c07e8ce24c53480187daa0e3364aa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 17 17:53:34.198836 containerd[1488]: time="2025-03-17T17:53:34.198789702Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.2\" with image id \"sha256:f6a228558381bc7de7c5296ac6c4e903cfda929899c85806367a726ef6d7ff5f\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.2\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:6d1f392b747f912366ec5c60ee1130952c2c07e8ce24c53480187daa0e3364aa\", size \"36285984\" in 3.328578667s"
Mar 17 17:53:34.198887 containerd[1488]: time="2025-03-17T17:53:34.198835517Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.2\" returns image reference \"sha256:f6a228558381bc7de7c5296ac6c4e903cfda929899c85806367a726ef6d7ff5f\""
Mar 17 17:53:34.200110 containerd[1488]: time="2025-03-17T17:53:34.200080313Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.2\""
Mar 17 17:53:34.208860 containerd[1488]: time="2025-03-17T17:53:34.208804595Z" level=info msg="CreateContainer within sandbox \"c96058a82157a33a2b005cc46aa76a55c4bad3f1ac915b988d8c55ec372406a2\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}"
Mar 17 17:53:34.236127 containerd[1488]: time="2025-03-17T17:53:34.236075353Z" level=info msg="CreateContainer within sandbox \"c96058a82157a33a2b005cc46aa76a55c4bad3f1ac915b988d8c55ec372406a2\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"5ad506229ae3741a4c7bbf10fb2e3f18f645122f12ccf9e5587db92168eceb16\""
Mar 17 17:53:34.236670 containerd[1488]: time="2025-03-17T17:53:34.236643960Z" level=info msg="StartContainer for \"5ad506229ae3741a4c7bbf10fb2e3f18f645122f12ccf9e5587db92168eceb16\""
Mar 17 17:53:34.270063 systemd[1]: Started cri-containerd-5ad506229ae3741a4c7bbf10fb2e3f18f645122f12ccf9e5587db92168eceb16.scope - libcontainer container 5ad506229ae3741a4c7bbf10fb2e3f18f645122f12ccf9e5587db92168eceb16.
Mar 17 17:53:34.324713 containerd[1488]: time="2025-03-17T17:53:34.324665425Z" level=info msg="StartContainer for \"5ad506229ae3741a4c7bbf10fb2e3f18f645122f12ccf9e5587db92168eceb16\" returns successfully"
Mar 17 17:53:34.670253 containerd[1488]: time="2025-03-17T17:53:34.670175866Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 17 17:53:34.671007 containerd[1488]: time="2025-03-17T17:53:34.670959667Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.2: active requests=0, bytes read=77"
Mar 17 17:53:34.674410 containerd[1488]: time="2025-03-17T17:53:34.674355921Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.2\" with image id \"sha256:d27fc480d1ad33921c40abef2ab6828fadf6524674fdcc622f571a5abc34ad55\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:3623f5b60fad0da3387a8649371b53171a4b1226f4d989d2acad9145dc0ef56f\", size \"44486324\" in 474.233659ms"
Mar 17 17:53:34.675477 containerd[1488]: time="2025-03-17T17:53:34.674514999Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.2\" returns image reference \"sha256:d27fc480d1ad33921c40abef2ab6828fadf6524674fdcc622f571a5abc34ad55\""
Mar 17 17:53:34.677435 containerd[1488]: time="2025-03-17T17:53:34.677196191Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.2\""
Mar 17 17:53:34.678473 containerd[1488]: time="2025-03-17T17:53:34.678436980Z" level=info msg="CreateContainer within sandbox \"2607a5fd2a9b2d3c22c00add038853f5b50f6d311fcf249732c5ada2b72f772b\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}"
Mar 17 17:53:34.714776 containerd[1488]: time="2025-03-17T17:53:34.714692909Z" level=info msg="CreateContainer within sandbox \"2607a5fd2a9b2d3c22c00add038853f5b50f6d311fcf249732c5ada2b72f772b\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"d626843ddcef0f249ffd88150302d31060667412e9789fbbe56d10c0dddb91cd\""
Mar 17 17:53:34.715452 containerd[1488]: time="2025-03-17T17:53:34.715373737Z" level=info msg="StartContainer for \"d626843ddcef0f249ffd88150302d31060667412e9789fbbe56d10c0dddb91cd\""
Mar 17 17:53:34.747138 systemd[1]: Started cri-containerd-d626843ddcef0f249ffd88150302d31060667412e9789fbbe56d10c0dddb91cd.scope - libcontainer container d626843ddcef0f249ffd88150302d31060667412e9789fbbe56d10c0dddb91cd.
Mar 17 17:53:34.791768 containerd[1488]: time="2025-03-17T17:53:34.791706006Z" level=info msg="StartContainer for \"d626843ddcef0f249ffd88150302d31060667412e9789fbbe56d10c0dddb91cd\" returns successfully"
Mar 17 17:53:34.854131 kubelet[2671]: I0317 17:53:34.853788 2671 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-69489b6559-jrsd6" podStartSLOduration=27.925092084 podStartE2EDuration="32.85373419s" podCreationTimestamp="2025-03-17 17:53:02 +0000 UTC" firstStartedPulling="2025-03-17 17:53:25.94142586 +0000 UTC m=+42.457431266" lastFinishedPulling="2025-03-17 17:53:30.870067956 +0000 UTC m=+47.386073372" observedRunningTime="2025-03-17 17:53:32.043583294 +0000 UTC m=+48.559588710" watchObservedRunningTime="2025-03-17 17:53:34.85373419 +0000 UTC m=+51.369739606"
Mar 17 17:53:34.869227 kubelet[2671]: I0317 17:53:34.868576 2671 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-69489b6559-dszht" podStartSLOduration=24.187015425 podStartE2EDuration="32.868555514s" podCreationTimestamp="2025-03-17 17:53:02 +0000 UTC" firstStartedPulling="2025-03-17 17:53:25.995416502 +0000 UTC m=+42.511421898" lastFinishedPulling="2025-03-17 17:53:34.676956571 +0000 UTC m=+51.192961987" observedRunningTime="2025-03-17 17:53:34.856015241 +0000 UTC m=+51.372020647" watchObservedRunningTime="2025-03-17 17:53:34.868555514 +0000 UTC m=+51.384560920"
Mar 17 17:53:34.918072 kubelet[2671]: I0317 17:53:34.917999 2671 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-766ddbd77b-gscbd" podStartSLOduration=23.694480833 podStartE2EDuration="31.917978604s" podCreationTimestamp="2025-03-17 17:53:03 +0000 UTC" firstStartedPulling="2025-03-17 17:53:25.976391263 +0000 UTC m=+42.492396669" lastFinishedPulling="2025-03-17 17:53:34.199889034 +0000 UTC m=+50.715894440" observedRunningTime="2025-03-17 17:53:34.869388668 +0000 UTC m=+51.385394084" watchObservedRunningTime="2025-03-17 17:53:34.917978604 +0000 UTC m=+51.433984010"
Mar 17 17:53:36.176966 systemd[1]: Started sshd@11-10.0.0.117:22-10.0.0.1:57600.service - OpenSSH per-connection server daemon (10.0.0.1:57600).
Mar 17 17:53:36.246336 sshd[5450]: Accepted publickey for core from 10.0.0.1 port 57600 ssh2: RSA SHA256:pvoNHoTmHcKIZ8E4rah4Xh4kuY0L81ONuagOsU2gN/o
Mar 17 17:53:36.248279 sshd-session[5450]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 17 17:53:36.252671 systemd-logind[1472]: New session 12 of user core.
Mar 17 17:53:36.262978 systemd[1]: Started session-12.scope - Session 12 of User core.
Mar 17 17:53:36.403506 sshd[5452]: Connection closed by 10.0.0.1 port 57600
Mar 17 17:53:36.403997 sshd-session[5450]: pam_unix(sshd:session): session closed for user core
Mar 17 17:53:36.410684 systemd[1]: sshd@11-10.0.0.117:22-10.0.0.1:57600.service: Deactivated successfully.
Mar 17 17:53:36.412551 systemd[1]: session-12.scope: Deactivated successfully.
Mar 17 17:53:36.414240 systemd-logind[1472]: Session 12 logged out. Waiting for processes to exit.
Mar 17 17:53:36.419058 systemd[1]: Started sshd@12-10.0.0.117:22-10.0.0.1:57616.service - OpenSSH per-connection server daemon (10.0.0.1:57616).
Mar 17 17:53:36.420270 systemd-logind[1472]: Removed session 12.
Mar 17 17:53:36.455780 sshd[5468]: Accepted publickey for core from 10.0.0.1 port 57616 ssh2: RSA SHA256:pvoNHoTmHcKIZ8E4rah4Xh4kuY0L81ONuagOsU2gN/o
Mar 17 17:53:36.457554 sshd-session[5468]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 17 17:53:36.461613 systemd-logind[1472]: New session 13 of user core.
Mar 17 17:53:36.470182 systemd[1]: Started session-13.scope - Session 13 of User core.
Mar 17 17:53:36.623519 sshd[5470]: Connection closed by 10.0.0.1 port 57616
Mar 17 17:53:36.624392 sshd-session[5468]: pam_unix(sshd:session): session closed for user core
Mar 17 17:53:36.634456 systemd[1]: sshd@12-10.0.0.117:22-10.0.0.1:57616.service: Deactivated successfully.
Mar 17 17:53:36.636881 systemd[1]: session-13.scope: Deactivated successfully.
Mar 17 17:53:36.639873 systemd-logind[1472]: Session 13 logged out. Waiting for processes to exit.
Mar 17 17:53:36.642112 systemd-logind[1472]: Removed session 13.
Mar 17 17:53:36.651269 systemd[1]: Started sshd@13-10.0.0.117:22-10.0.0.1:57622.service - OpenSSH per-connection server daemon (10.0.0.1:57622).
Mar 17 17:53:36.725102 sshd[5480]: Accepted publickey for core from 10.0.0.1 port 57622 ssh2: RSA SHA256:pvoNHoTmHcKIZ8E4rah4Xh4kuY0L81ONuagOsU2gN/o
Mar 17 17:53:36.727422 sshd-session[5480]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 17 17:53:36.734549 systemd-logind[1472]: New session 14 of user core.
Mar 17 17:53:36.738921 systemd[1]: Started session-14.scope - Session 14 of User core.
Mar 17 17:53:36.963844 containerd[1488]: time="2025-03-17T17:53:36.963793982Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 17 17:53:36.965182 containerd[1488]: time="2025-03-17T17:53:36.965030392Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.2: active requests=0, bytes read=13986843"
Mar 17 17:53:36.966584 containerd[1488]: time="2025-03-17T17:53:36.966489510Z" level=info msg="ImageCreate event name:\"sha256:09a5a6ea58a48ac826468e05538c78d1378e103737124f1744efea8699fc29a8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 17 17:53:36.968810 sshd[5486]: Connection closed by 10.0.0.1 port 57622
Mar 17 17:53:36.969473 containerd[1488]: time="2025-03-17T17:53:36.969209304Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:54ef0afa50feb3f691782e8d6df9a7f27d127a3af9bbcbd0bcdadac98e8be8e3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 17 17:53:36.969338 sshd-session[5480]: pam_unix(sshd:session): session closed for user core
Mar 17 17:53:36.971027 containerd[1488]: time="2025-03-17T17:53:36.970091799Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.2\" with image id \"sha256:09a5a6ea58a48ac826468e05538c78d1378e103737124f1744efea8699fc29a8\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.2\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:54ef0afa50feb3f691782e8d6df9a7f27d127a3af9bbcbd0bcdadac98e8be8e3\", size \"15479899\" in 2.292851916s"
Mar 17 17:53:36.971027 containerd[1488]: time="2025-03-17T17:53:36.970131834Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.2\" returns image reference \"sha256:09a5a6ea58a48ac826468e05538c78d1378e103737124f1744efea8699fc29a8\""
Mar 17 17:53:36.972531 containerd[1488]: time="2025-03-17T17:53:36.972497042Z" level=info msg="CreateContainer within sandbox \"5897afb16daeed0010d870af0ad8b2a6607cc41ba51539ba0ba848a5ccccc4c5\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}"
Mar 17 17:53:36.975442 systemd[1]: sshd@13-10.0.0.117:22-10.0.0.1:57622.service: Deactivated successfully.
Mar 17 17:53:36.978140 systemd[1]: session-14.scope: Deactivated successfully.
Mar 17 17:53:36.979473 systemd-logind[1472]: Session 14 logged out. Waiting for processes to exit.
Mar 17 17:53:36.980735 systemd-logind[1472]: Removed session 14.
Mar 17 17:53:36.997314 containerd[1488]: time="2025-03-17T17:53:36.997245251Z" level=info msg="CreateContainer within sandbox \"5897afb16daeed0010d870af0ad8b2a6607cc41ba51539ba0ba848a5ccccc4c5\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"706ea023065587e17e83c142130880e3c2765c32ba0cdb951a2278172622ff67\""
Mar 17 17:53:36.997805 containerd[1488]: time="2025-03-17T17:53:36.997779504Z" level=info msg="StartContainer for \"706ea023065587e17e83c142130880e3c2765c32ba0cdb951a2278172622ff67\""
Mar 17 17:53:37.030967 systemd[1]: Started cri-containerd-706ea023065587e17e83c142130880e3c2765c32ba0cdb951a2278172622ff67.scope - libcontainer container 706ea023065587e17e83c142130880e3c2765c32ba0cdb951a2278172622ff67.
Mar 17 17:53:37.073331 containerd[1488]: time="2025-03-17T17:53:37.073261615Z" level=info msg="StartContainer for \"706ea023065587e17e83c142130880e3c2765c32ba0cdb951a2278172622ff67\" returns successfully"
Mar 17 17:53:37.632511 kubelet[2671]: I0317 17:53:37.632450 2671 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0
Mar 17 17:53:37.632511 kubelet[2671]: I0317 17:53:37.632497 2671 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock
Mar 17 17:53:37.870210 kubelet[2671]: I0317 17:53:37.869797 2671 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-qsx6f" podStartSLOduration=24.800637858 podStartE2EDuration="35.869779356s" podCreationTimestamp="2025-03-17 17:53:02 +0000 UTC" firstStartedPulling="2025-03-17 17:53:25.902044264 +0000 UTC m=+42.418049670" lastFinishedPulling="2025-03-17 17:53:36.971185752 +0000 UTC m=+53.487191168" observedRunningTime="2025-03-17 17:53:37.869444328 +0000 UTC m=+54.385449734" watchObservedRunningTime="2025-03-17 17:53:37.869779356 +0000 UTC m=+54.385784762"
Mar 17 17:53:41.985432 systemd[1]: Started sshd@14-10.0.0.117:22-10.0.0.1:57636.service - OpenSSH per-connection server daemon (10.0.0.1:57636).
Mar 17 17:53:42.042437 sshd[5548]: Accepted publickey for core from 10.0.0.1 port 57636 ssh2: RSA SHA256:pvoNHoTmHcKIZ8E4rah4Xh4kuY0L81ONuagOsU2gN/o
Mar 17 17:53:42.044991 sshd-session[5548]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 17 17:53:42.052873 systemd-logind[1472]: New session 15 of user core.
Mar 17 17:53:42.069082 systemd[1]: Started session-15.scope - Session 15 of User core.
Mar 17 17:53:42.132392 kubelet[2671]: I0317 17:53:42.132332 2671 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Mar 17 17:53:43.031079 sshd[5550]: Connection closed by 10.0.0.1 port 57636
Mar 17 17:53:43.031597 sshd-session[5548]: pam_unix(sshd:session): session closed for user core
Mar 17 17:53:43.039030 systemd[1]: sshd@14-10.0.0.117:22-10.0.0.1:57636.service: Deactivated successfully.
Mar 17 17:53:43.042528 systemd[1]: session-15.scope: Deactivated successfully.
Mar 17 17:53:43.044251 systemd-logind[1472]: Session 15 logged out. Waiting for processes to exit.
Mar 17 17:53:43.045553 systemd-logind[1472]: Removed session 15.
Mar 17 17:53:43.566074 containerd[1488]: time="2025-03-17T17:53:43.566021561Z" level=info msg="StopPodSandbox for \"5962b8587e49b332a9082bc85f0dfd8758737d6294c296fdd2975f2284c3de54\""
Mar 17 17:53:43.566576 containerd[1488]: time="2025-03-17T17:53:43.566148829Z" level=info msg="TearDown network for sandbox \"5962b8587e49b332a9082bc85f0dfd8758737d6294c296fdd2975f2284c3de54\" successfully"
Mar 17 17:53:43.566576 containerd[1488]: time="2025-03-17T17:53:43.566159630Z" level=info msg="StopPodSandbox for \"5962b8587e49b332a9082bc85f0dfd8758737d6294c296fdd2975f2284c3de54\" returns successfully"
Mar 17 17:53:43.566674 containerd[1488]: time="2025-03-17T17:53:43.566634229Z" level=info msg="RemovePodSandbox for \"5962b8587e49b332a9082bc85f0dfd8758737d6294c296fdd2975f2284c3de54\""
Mar 17 17:53:43.577428 containerd[1488]: time="2025-03-17T17:53:43.577367875Z" level=info msg="Forcibly stopping sandbox \"5962b8587e49b332a9082bc85f0dfd8758737d6294c296fdd2975f2284c3de54\""
Mar 17 17:53:43.577571 containerd[1488]: time="2025-03-17T17:53:43.577523969Z" level=info msg="TearDown network for sandbox \"5962b8587e49b332a9082bc85f0dfd8758737d6294c296fdd2975f2284c3de54\" successfully"
Mar 17 17:53:43.638587 containerd[1488]: time="2025-03-17T17:53:43.638492731Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"5962b8587e49b332a9082bc85f0dfd8758737d6294c296fdd2975f2284c3de54\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Mar 17 17:53:43.638794 containerd[1488]: time="2025-03-17T17:53:43.638635449Z" level=info msg="RemovePodSandbox \"5962b8587e49b332a9082bc85f0dfd8758737d6294c296fdd2975f2284c3de54\" returns successfully"
Mar 17 17:53:43.640297 containerd[1488]: time="2025-03-17T17:53:43.639981043Z" level=info msg="StopPodSandbox for \"176cab8d16eb48ec4a1ad772e737215f15a45779cf10e66d9d17ae04d002535e\""
Mar 17 17:53:43.640297 containerd[1488]: time="2025-03-17T17:53:43.640242282Z" level=info msg="TearDown network for sandbox \"176cab8d16eb48ec4a1ad772e737215f15a45779cf10e66d9d17ae04d002535e\" successfully"
Mar 17 17:53:43.640297 containerd[1488]: time="2025-03-17T17:53:43.640256770Z" level=info msg="StopPodSandbox for \"176cab8d16eb48ec4a1ad772e737215f15a45779cf10e66d9d17ae04d002535e\" returns successfully"
Mar 17 17:53:43.640724 containerd[1488]: time="2025-03-17T17:53:43.640662151Z" level=info msg="RemovePodSandbox for \"176cab8d16eb48ec4a1ad772e737215f15a45779cf10e66d9d17ae04d002535e\""
Mar 17 17:53:43.640724 containerd[1488]: time="2025-03-17T17:53:43.640720189Z" level=info msg="Forcibly stopping sandbox \"176cab8d16eb48ec4a1ad772e737215f15a45779cf10e66d9d17ae04d002535e\""
Mar 17 17:53:43.640907 containerd[1488]: time="2025-03-17T17:53:43.640833602Z" level=info msg="TearDown network for sandbox \"176cab8d16eb48ec4a1ad772e737215f15a45779cf10e66d9d17ae04d002535e\" successfully"
Mar 17 17:53:43.649437 containerd[1488]: time="2025-03-17T17:53:43.649351832Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"176cab8d16eb48ec4a1ad772e737215f15a45779cf10e66d9d17ae04d002535e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Mar 17 17:53:43.649619 containerd[1488]: time="2025-03-17T17:53:43.649452961Z" level=info msg="RemovePodSandbox \"176cab8d16eb48ec4a1ad772e737215f15a45779cf10e66d9d17ae04d002535e\" returns successfully"
Mar 17 17:53:43.650214 containerd[1488]: time="2025-03-17T17:53:43.650159447Z" level=info msg="StopPodSandbox for \"cbfcdfa185d47daba995028114994971b84306063bc1d18e03db01de2f340836\""
Mar 17 17:53:43.650356 containerd[1488]: time="2025-03-17T17:53:43.650332381Z" level=info msg="TearDown network for sandbox \"cbfcdfa185d47daba995028114994971b84306063bc1d18e03db01de2f340836\" successfully"
Mar 17 17:53:43.650356 containerd[1488]: time="2025-03-17T17:53:43.650350646Z" level=info msg="StopPodSandbox for \"cbfcdfa185d47daba995028114994971b84306063bc1d18e03db01de2f340836\" returns successfully"
Mar 17 17:53:43.650902 containerd[1488]: time="2025-03-17T17:53:43.650846094Z" level=info msg="RemovePodSandbox for \"cbfcdfa185d47daba995028114994971b84306063bc1d18e03db01de2f340836\""
Mar 17 17:53:43.650902 containerd[1488]: time="2025-03-17T17:53:43.650889506Z" level=info msg="Forcibly stopping sandbox \"cbfcdfa185d47daba995028114994971b84306063bc1d18e03db01de2f340836\""
Mar 17 17:53:43.651104 containerd[1488]: time="2025-03-17T17:53:43.650991277Z" level=info msg="TearDown network for sandbox \"cbfcdfa185d47daba995028114994971b84306063bc1d18e03db01de2f340836\" successfully"
Mar 17 17:53:43.656716 containerd[1488]: time="2025-03-17T17:53:43.656630777Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"cbfcdfa185d47daba995028114994971b84306063bc1d18e03db01de2f340836\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Mar 17 17:53:43.656716 containerd[1488]: time="2025-03-17T17:53:43.656730103Z" level=info msg="RemovePodSandbox \"cbfcdfa185d47daba995028114994971b84306063bc1d18e03db01de2f340836\" returns successfully"
Mar 17 17:53:43.657373 containerd[1488]: time="2025-03-17T17:53:43.657325499Z" level=info msg="StopPodSandbox for \"aaec777f840a59eec027949e0f6cad120ff347386dd452f4fbbe9e66ad082614\""
Mar 17 17:53:43.657520 containerd[1488]: time="2025-03-17T17:53:43.657458539Z" level=info msg="TearDown network for sandbox \"aaec777f840a59eec027949e0f6cad120ff347386dd452f4fbbe9e66ad082614\" successfully"
Mar 17 17:53:43.657520 containerd[1488]: time="2025-03-17T17:53:43.657472986Z" level=info msg="StopPodSandbox for \"aaec777f840a59eec027949e0f6cad120ff347386dd452f4fbbe9e66ad082614\" returns successfully"
Mar 17 17:53:43.658904 containerd[1488]: time="2025-03-17T17:53:43.658808432Z" level=info msg="RemovePodSandbox for \"aaec777f840a59eec027949e0f6cad120ff347386dd452f4fbbe9e66ad082614\""
Mar 17 17:53:43.658904 containerd[1488]: time="2025-03-17T17:53:43.658867322Z" level=info msg="Forcibly stopping sandbox \"aaec777f840a59eec027949e0f6cad120ff347386dd452f4fbbe9e66ad082614\""
Mar 17 17:53:43.659100 containerd[1488]: time="2025-03-17T17:53:43.659029416Z" level=info msg="TearDown network for sandbox \"aaec777f840a59eec027949e0f6cad120ff347386dd452f4fbbe9e66ad082614\" successfully"
Mar 17 17:53:43.666279 containerd[1488]: time="2025-03-17T17:53:43.666197743Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"aaec777f840a59eec027949e0f6cad120ff347386dd452f4fbbe9e66ad082614\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Mar 17 17:53:43.666443 containerd[1488]: time="2025-03-17T17:53:43.666302539Z" level=info msg="RemovePodSandbox \"aaec777f840a59eec027949e0f6cad120ff347386dd452f4fbbe9e66ad082614\" returns successfully"
Mar 17 17:53:43.666936 containerd[1488]: time="2025-03-17T17:53:43.666887838Z" level=info msg="StopPodSandbox for \"9b1baf90cf663bc9dfb45347bae6405dc471fcfba01fa47176b720b95c1e4f16\""
Mar 17 17:53:43.667069 containerd[1488]: time="2025-03-17T17:53:43.667039452Z" level=info msg="TearDown network for sandbox \"9b1baf90cf663bc9dfb45347bae6405dc471fcfba01fa47176b720b95c1e4f16\" successfully"
Mar 17 17:53:43.667178 containerd[1488]: time="2025-03-17T17:53:43.667064910Z" level=info msg="StopPodSandbox for \"9b1baf90cf663bc9dfb45347bae6405dc471fcfba01fa47176b720b95c1e4f16\" returns successfully"
Mar 17 17:53:43.667616 containerd[1488]: time="2025-03-17T17:53:43.667438641Z" level=info msg="RemovePodSandbox for \"9b1baf90cf663bc9dfb45347bae6405dc471fcfba01fa47176b720b95c1e4f16\""
Mar 17 17:53:43.667616 containerd[1488]: time="2025-03-17T17:53:43.667474489Z" level=info msg="Forcibly stopping sandbox \"9b1baf90cf663bc9dfb45347bae6405dc471fcfba01fa47176b720b95c1e4f16\""
Mar 17 17:53:43.667616 containerd[1488]: time="2025-03-17T17:53:43.667567263Z" level=info msg="TearDown network for sandbox \"9b1baf90cf663bc9dfb45347bae6405dc471fcfba01fa47176b720b95c1e4f16\" successfully"
Mar 17 17:53:43.674220 containerd[1488]: time="2025-03-17T17:53:43.674150412Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"9b1baf90cf663bc9dfb45347bae6405dc471fcfba01fa47176b720b95c1e4f16\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Mar 17 17:53:43.674392 containerd[1488]: time="2025-03-17T17:53:43.674240441Z" level=info msg="RemovePodSandbox \"9b1baf90cf663bc9dfb45347bae6405dc471fcfba01fa47176b720b95c1e4f16\" returns successfully"
Mar 17 17:53:43.676199 containerd[1488]: time="2025-03-17T17:53:43.676130847Z" level=info msg="StopPodSandbox for \"57f44e3d7d20aa335e973a8243ab91cb12b6ece161a16a83acbf9d841fd3ff52\""
Mar 17 17:53:43.676350 containerd[1488]: time="2025-03-17T17:53:43.676306036Z" level=info msg="TearDown network for sandbox \"57f44e3d7d20aa335e973a8243ab91cb12b6ece161a16a83acbf9d841fd3ff52\" successfully"
Mar 17 17:53:43.676350 containerd[1488]: time="2025-03-17T17:53:43.676325873Z" level=info msg="StopPodSandbox for \"57f44e3d7d20aa335e973a8243ab91cb12b6ece161a16a83acbf9d841fd3ff52\" returns successfully"
Mar 17 17:53:43.676741 containerd[1488]: time="2025-03-17T17:53:43.676634361Z" level=info msg="RemovePodSandbox for \"57f44e3d7d20aa335e973a8243ab91cb12b6ece161a16a83acbf9d841fd3ff52\""
Mar 17 17:53:43.676741 containerd[1488]: time="2025-03-17T17:53:43.676671471Z" level=info msg="Forcibly stopping sandbox \"57f44e3d7d20aa335e973a8243ab91cb12b6ece161a16a83acbf9d841fd3ff52\""
Mar 17 17:53:43.676961 containerd[1488]: time="2025-03-17T17:53:43.676847591Z" level=info msg="TearDown network for sandbox \"57f44e3d7d20aa335e973a8243ab91cb12b6ece161a16a83acbf9d841fd3ff52\" successfully"
Mar 17 17:53:43.687863 containerd[1488]: time="2025-03-17T17:53:43.687784338Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"57f44e3d7d20aa335e973a8243ab91cb12b6ece161a16a83acbf9d841fd3ff52\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Mar 17 17:53:43.687863 containerd[1488]: time="2025-03-17T17:53:43.687868385Z" level=info msg="RemovePodSandbox \"57f44e3d7d20aa335e973a8243ab91cb12b6ece161a16a83acbf9d841fd3ff52\" returns successfully"
Mar 17 17:53:43.688427 containerd[1488]: time="2025-03-17T17:53:43.688381829Z" level=info msg="StopPodSandbox for \"63613426308ea68474b35acdb7e40f20b69558a0485f098231782a66c337bfc3\""
Mar 17 17:53:43.688606 containerd[1488]: time="2025-03-17T17:53:43.688551086Z" level=info msg="TearDown network for sandbox \"63613426308ea68474b35acdb7e40f20b69558a0485f098231782a66c337bfc3\" successfully"
Mar 17 17:53:43.688606 containerd[1488]: time="2025-03-17T17:53:43.688574530Z" level=info msg="StopPodSandbox for \"63613426308ea68474b35acdb7e40f20b69558a0485f098231782a66c337bfc3\" returns successfully"
Mar 17 17:53:43.688997 containerd[1488]: time="2025-03-17T17:53:43.688963780Z" level=info msg="RemovePodSandbox for \"63613426308ea68474b35acdb7e40f20b69558a0485f098231782a66c337bfc3\""
Mar 17 17:53:43.688997 containerd[1488]: time="2025-03-17T17:53:43.688986823Z" level=info msg="Forcibly stopping sandbox \"63613426308ea68474b35acdb7e40f20b69558a0485f098231782a66c337bfc3\""
Mar 17 17:53:43.689121 containerd[1488]: time="2025-03-17T17:53:43.689054941Z" level=info msg="TearDown network for sandbox \"63613426308ea68474b35acdb7e40f20b69558a0485f098231782a66c337bfc3\" successfully"
Mar 17 17:53:43.694258 containerd[1488]: time="2025-03-17T17:53:43.694180436Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"63613426308ea68474b35acdb7e40f20b69558a0485f098231782a66c337bfc3\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Mar 17 17:53:43.694258 containerd[1488]: time="2025-03-17T17:53:43.694250548Z" level=info msg="RemovePodSandbox \"63613426308ea68474b35acdb7e40f20b69558a0485f098231782a66c337bfc3\" returns successfully"
Mar 17 17:53:43.694855 containerd[1488]: time="2025-03-17T17:53:43.694627115Z" level=info msg="StopPodSandbox for \"623e64fb9e5e6f41bea8c6f12eb691e03c1dd51f85d1f439c4e6c0d3dfcaaea6\""
Mar 17 17:53:43.694855 containerd[1488]: time="2025-03-17T17:53:43.694764623Z" level=info msg="TearDown network for sandbox \"623e64fb9e5e6f41bea8c6f12eb691e03c1dd51f85d1f439c4e6c0d3dfcaaea6\" successfully"
Mar 17 17:53:43.694855 containerd[1488]: time="2025-03-17T17:53:43.694778449Z" level=info msg="StopPodSandbox for \"623e64fb9e5e6f41bea8c6f12eb691e03c1dd51f85d1f439c4e6c0d3dfcaaea6\" returns successfully"
Mar 17 17:53:43.695183 containerd[1488]: time="2025-03-17T17:53:43.695156417Z" level=info msg="RemovePodSandbox for \"623e64fb9e5e6f41bea8c6f12eb691e03c1dd51f85d1f439c4e6c0d3dfcaaea6\""
Mar 17 17:53:43.695235 containerd[1488]: time="2025-03-17T17:53:43.695183047Z" level=info msg="Forcibly stopping sandbox \"623e64fb9e5e6f41bea8c6f12eb691e03c1dd51f85d1f439c4e6c0d3dfcaaea6\""
Mar 17 17:53:43.695307 containerd[1488]: time="2025-03-17T17:53:43.695260883Z" level=info msg="TearDown network for sandbox \"623e64fb9e5e6f41bea8c6f12eb691e03c1dd51f85d1f439c4e6c0d3dfcaaea6\" successfully"
Mar 17 17:53:43.701678 containerd[1488]: time="2025-03-17T17:53:43.701540924Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"623e64fb9e5e6f41bea8c6f12eb691e03c1dd51f85d1f439c4e6c0d3dfcaaea6\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Mar 17 17:53:43.701678 containerd[1488]: time="2025-03-17T17:53:43.701611266Z" level=info msg="RemovePodSandbox \"623e64fb9e5e6f41bea8c6f12eb691e03c1dd51f85d1f439c4e6c0d3dfcaaea6\" returns successfully"
Mar 17 17:53:43.702453 containerd[1488]: time="2025-03-17T17:53:43.702409945Z" level=info msg="StopPodSandbox for \"c1034c3eebf7eb83c1c802440d743441949ce78d89de03e10162996b5658f0f8\""
Mar 17 17:53:43.702656 containerd[1488]: time="2025-03-17T17:53:43.702553915Z" level=info msg="TearDown network for sandbox \"c1034c3eebf7eb83c1c802440d743441949ce78d89de03e10162996b5658f0f8\" successfully"
Mar 17 17:53:43.702656 containerd[1488]: time="2025-03-17T17:53:43.702618977Z" level=info msg="StopPodSandbox for \"c1034c3eebf7eb83c1c802440d743441949ce78d89de03e10162996b5658f0f8\" returns successfully"
Mar 17 17:53:43.703376 containerd[1488]: time="2025-03-17T17:53:43.703316265Z" level=info msg="RemovePodSandbox for \"c1034c3eebf7eb83c1c802440d743441949ce78d89de03e10162996b5658f0f8\""
Mar 17 17:53:43.703376 containerd[1488]: time="2025-03-17T17:53:43.703352573Z" level=info msg="Forcibly stopping sandbox \"c1034c3eebf7eb83c1c802440d743441949ce78d89de03e10162996b5658f0f8\""
Mar 17 17:53:43.703601 containerd[1488]: time="2025-03-17T17:53:43.703428325Z" level=info msg="TearDown network for sandbox \"c1034c3eebf7eb83c1c802440d743441949ce78d89de03e10162996b5658f0f8\" successfully"
Mar 17 17:53:43.708635 containerd[1488]: time="2025-03-17T17:53:43.708562446Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c1034c3eebf7eb83c1c802440d743441949ce78d89de03e10162996b5658f0f8\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Mar 17 17:53:43.708635 containerd[1488]: time="2025-03-17T17:53:43.708635303Z" level=info msg="RemovePodSandbox \"c1034c3eebf7eb83c1c802440d743441949ce78d89de03e10162996b5658f0f8\" returns successfully"
Mar 17 17:53:43.709346 containerd[1488]: time="2025-03-17T17:53:43.709294639Z" level=info msg="StopPodSandbox for \"b18aaf4dc1e6bac2d1f261a436d415aa7e4affae8e761cea7a281aa750d3614f\""
Mar 17 17:53:43.709506 containerd[1488]: time="2025-03-17T17:53:43.709480087Z" level=info msg="TearDown network for sandbox \"b18aaf4dc1e6bac2d1f261a436d415aa7e4affae8e761cea7a281aa750d3614f\" successfully"
Mar 17 17:53:43.709506 containerd[1488]: time="2025-03-17T17:53:43.709499734Z" level=info msg="StopPodSandbox for \"b18aaf4dc1e6bac2d1f261a436d415aa7e4affae8e761cea7a281aa750d3614f\" returns successfully"
Mar 17 17:53:43.710051 containerd[1488]: time="2025-03-17T17:53:43.709995444Z" level=info msg="RemovePodSandbox for \"b18aaf4dc1e6bac2d1f261a436d415aa7e4affae8e761cea7a281aa750d3614f\""
Mar 17 17:53:43.710051 containerd[1488]: time="2025-03-17T17:53:43.710048063Z" level=info msg="Forcibly stopping sandbox \"b18aaf4dc1e6bac2d1f261a436d415aa7e4affae8e761cea7a281aa750d3614f\""
Mar 17 17:53:43.710238 containerd[1488]: time="2025-03-17T17:53:43.710153952Z" level=info msg="TearDown network for sandbox \"b18aaf4dc1e6bac2d1f261a436d415aa7e4affae8e761cea7a281aa750d3614f\" successfully"
Mar 17 17:53:43.715567 containerd[1488]: time="2025-03-17T17:53:43.715496469Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b18aaf4dc1e6bac2d1f261a436d415aa7e4affae8e761cea7a281aa750d3614f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Mar 17 17:53:43.715650 containerd[1488]: time="2025-03-17T17:53:43.715593051Z" level=info msg="RemovePodSandbox \"b18aaf4dc1e6bac2d1f261a436d415aa7e4affae8e761cea7a281aa750d3614f\" returns successfully"
Mar 17 17:53:43.716326 containerd[1488]: time="2025-03-17T17:53:43.716272309Z" level=info msg="StopPodSandbox for \"5b1fcdeab2ed9d45495104daee6d6b97bdd93dbb8b9f3495cf0599df51059be3\""
Mar 17 17:53:43.716506 containerd[1488]: time="2025-03-17T17:53:43.716477917Z" level=info msg="TearDown network for sandbox \"5b1fcdeab2ed9d45495104daee6d6b97bdd93dbb8b9f3495cf0599df51059be3\" successfully"
Mar 17 17:53:43.716506 containerd[1488]: time="2025-03-17T17:53:43.716496812Z" level=info msg="StopPodSandbox for \"5b1fcdeab2ed9d45495104daee6d6b97bdd93dbb8b9f3495cf0599df51059be3\" returns successfully"
Mar 17 17:53:43.716874 containerd[1488]: time="2025-03-17T17:53:43.716837534Z" level=info msg="RemovePodSandbox for \"5b1fcdeab2ed9d45495104daee6d6b97bdd93dbb8b9f3495cf0599df51059be3\""
Mar 17 17:53:43.716924 containerd[1488]: time="2025-03-17T17:53:43.716872089Z" level=info msg="Forcibly stopping sandbox \"5b1fcdeab2ed9d45495104daee6d6b97bdd93dbb8b9f3495cf0599df51059be3\""
Mar 17 17:53:43.717009 containerd[1488]: time="2025-03-17T17:53:43.716959624Z" level=info msg="TearDown network for sandbox \"5b1fcdeab2ed9d45495104daee6d6b97bdd93dbb8b9f3495cf0599df51059be3\" successfully"
Mar 17 17:53:43.722582 containerd[1488]: time="2025-03-17T17:53:43.722422912Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"5b1fcdeab2ed9d45495104daee6d6b97bdd93dbb8b9f3495cf0599df51059be3\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Mar 17 17:53:43.722582 containerd[1488]: time="2025-03-17T17:53:43.722529533Z" level=info msg="RemovePodSandbox \"5b1fcdeab2ed9d45495104daee6d6b97bdd93dbb8b9f3495cf0599df51059be3\" returns successfully"
Mar 17 17:53:43.723091 containerd[1488]: time="2025-03-17T17:53:43.723036227Z" level=info msg="StopPodSandbox for \"f6ade871d04fe30430528350a31e805e23bad6295918256ec7cc4923b4a642d7\""
Mar 17 17:53:43.723238 containerd[1488]: time="2025-03-17T17:53:43.723194184Z" level=info msg="TearDown network for sandbox \"f6ade871d04fe30430528350a31e805e23bad6295918256ec7cc4923b4a642d7\" successfully"
Mar 17 17:53:43.723238 containerd[1488]: time="2025-03-17T17:53:43.723210454Z" level=info msg="StopPodSandbox for \"f6ade871d04fe30430528350a31e805e23bad6295918256ec7cc4923b4a642d7\" returns successfully"
Mar 17 17:53:43.723767 containerd[1488]: time="2025-03-17T17:53:43.723713081Z" level=info msg="RemovePodSandbox for \"f6ade871d04fe30430528350a31e805e23bad6295918256ec7cc4923b4a642d7\""
Mar 17 17:53:43.723833 containerd[1488]: time="2025-03-17T17:53:43.723780338Z" level=info msg="Forcibly stopping sandbox \"f6ade871d04fe30430528350a31e805e23bad6295918256ec7cc4923b4a642d7\""
Mar 17 17:53:43.723968 containerd[1488]: time="2025-03-17T17:53:43.723906435Z" level=info msg="TearDown network for sandbox \"f6ade871d04fe30430528350a31e805e23bad6295918256ec7cc4923b4a642d7\" successfully"
Mar 17 17:53:43.729991 containerd[1488]: time="2025-03-17T17:53:43.729921451Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"f6ade871d04fe30430528350a31e805e23bad6295918256ec7cc4923b4a642d7\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Mar 17 17:53:43.730128 containerd[1488]: time="2025-03-17T17:53:43.730006472Z" level=info msg="RemovePodSandbox \"f6ade871d04fe30430528350a31e805e23bad6295918256ec7cc4923b4a642d7\" returns successfully"
Mar 17 17:53:43.730672 containerd[1488]: time="2025-03-17T17:53:43.730622141Z" level=info msg="StopPodSandbox for \"9cb7b7521a0edc0a9c7a9d2e5cd84a22265bcf5e96d3aec1ac00f3807c3638f5\""
Mar 17 17:53:43.730810 containerd[1488]: time="2025-03-17T17:53:43.730786040Z" level=info msg="TearDown network for sandbox \"9cb7b7521a0edc0a9c7a9d2e5cd84a22265bcf5e96d3aec1ac00f3807c3638f5\" successfully"
Mar 17 17:53:43.730810 containerd[1488]: time="2025-03-17T17:53:43.730801319Z" level=info msg="StopPodSandbox for \"9cb7b7521a0edc0a9c7a9d2e5cd84a22265bcf5e96d3aec1ac00f3807c3638f5\" returns successfully"
Mar 17 17:53:43.731326 containerd[1488]: time="2025-03-17T17:53:43.731290480Z" level=info msg="RemovePodSandbox for \"9cb7b7521a0edc0a9c7a9d2e5cd84a22265bcf5e96d3aec1ac00f3807c3638f5\""
Mar 17 17:53:43.731429 containerd[1488]: time="2025-03-17T17:53:43.731389446Z" level=info msg="Forcibly stopping sandbox \"9cb7b7521a0edc0a9c7a9d2e5cd84a22265bcf5e96d3aec1ac00f3807c3638f5\""
Mar 17 17:53:43.731576 containerd[1488]: time="2025-03-17T17:53:43.731502589Z" level=info msg="TearDown network for sandbox \"9cb7b7521a0edc0a9c7a9d2e5cd84a22265bcf5e96d3aec1ac00f3807c3638f5\" successfully"
Mar 17 17:53:43.736104 containerd[1488]: time="2025-03-17T17:53:43.736046726Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"9cb7b7521a0edc0a9c7a9d2e5cd84a22265bcf5e96d3aec1ac00f3807c3638f5\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Mar 17 17:53:43.736199 containerd[1488]: time="2025-03-17T17:53:43.736118581Z" level=info msg="RemovePodSandbox \"9cb7b7521a0edc0a9c7a9d2e5cd84a22265bcf5e96d3aec1ac00f3807c3638f5\" returns successfully"
Mar 17 17:53:43.736621 containerd[1488]: time="2025-03-17T17:53:43.736578547Z" level=info msg="StopPodSandbox for \"13f8945e1ab3a958676fec549614b5e42e210cc72e620db9b796aef13f21942a\""
Mar 17 17:53:43.736826 containerd[1488]: time="2025-03-17T17:53:43.736672054Z" level=info msg="TearDown network for sandbox \"13f8945e1ab3a958676fec549614b5e42e210cc72e620db9b796aef13f21942a\" successfully"
Mar 17 17:53:43.736826 containerd[1488]: time="2025-03-17T17:53:43.736682463Z" level=info msg="StopPodSandbox for \"13f8945e1ab3a958676fec549614b5e42e210cc72e620db9b796aef13f21942a\" returns successfully"
Mar 17 17:53:43.737120 containerd[1488]: time="2025-03-17T17:53:43.737016583Z" level=info msg="RemovePodSandbox for \"13f8945e1ab3a958676fec549614b5e42e210cc72e620db9b796aef13f21942a\""
Mar 17 17:53:43.737120 containerd[1488]: time="2025-03-17T17:53:43.737037823Z" level=info msg="Forcibly stopping sandbox \"13f8945e1ab3a958676fec549614b5e42e210cc72e620db9b796aef13f21942a\""
Mar 17 17:53:43.737188 containerd[1488]: time="2025-03-17T17:53:43.737117172Z" level=info msg="TearDown network for sandbox \"13f8945e1ab3a958676fec549614b5e42e210cc72e620db9b796aef13f21942a\" successfully"
Mar 17 17:53:43.741416 containerd[1488]: time="2025-03-17T17:53:43.741359681Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"13f8945e1ab3a958676fec549614b5e42e210cc72e620db9b796aef13f21942a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Mar 17 17:53:43.741416 containerd[1488]: time="2025-03-17T17:53:43.741414734Z" level=info msg="RemovePodSandbox \"13f8945e1ab3a958676fec549614b5e42e210cc72e620db9b796aef13f21942a\" returns successfully"
Mar 17 17:53:43.742046 containerd[1488]: time="2025-03-17T17:53:43.741981211Z" level=info msg="StopPodSandbox for \"f216292d57491cc6fc3659530ec469fedb94fdc844a69e71fc7d4ac5b63c4a04\""
Mar 17 17:53:43.742197 containerd[1488]: time="2025-03-17T17:53:43.742173894Z" level=info msg="TearDown network for sandbox \"f216292d57491cc6fc3659530ec469fedb94fdc844a69e71fc7d4ac5b63c4a04\" successfully"
Mar 17 17:53:43.742197 containerd[1488]: time="2025-03-17T17:53:43.742190956Z" level=info msg="StopPodSandbox for \"f216292d57491cc6fc3659530ec469fedb94fdc844a69e71fc7d4ac5b63c4a04\" returns successfully"
Mar 17 17:53:43.742761 containerd[1488]: time="2025-03-17T17:53:43.742715775Z" level=info msg="RemovePodSandbox for \"f216292d57491cc6fc3659530ec469fedb94fdc844a69e71fc7d4ac5b63c4a04\""
Mar 17 17:53:43.742810 containerd[1488]: time="2025-03-17T17:53:43.742765137Z" level=info msg="Forcibly stopping sandbox \"f216292d57491cc6fc3659530ec469fedb94fdc844a69e71fc7d4ac5b63c4a04\""
Mar 17 17:53:43.742901 containerd[1488]: time="2025-03-17T17:53:43.742854766Z" level=info msg="TearDown network for sandbox \"f216292d57491cc6fc3659530ec469fedb94fdc844a69e71fc7d4ac5b63c4a04\" successfully"
Mar 17 17:53:43.746950 containerd[1488]: time="2025-03-17T17:53:43.746899462Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"f216292d57491cc6fc3659530ec469fedb94fdc844a69e71fc7d4ac5b63c4a04\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Mar 17 17:53:43.747055 containerd[1488]: time="2025-03-17T17:53:43.746975876Z" level=info msg="RemovePodSandbox \"f216292d57491cc6fc3659530ec469fedb94fdc844a69e71fc7d4ac5b63c4a04\" returns successfully"
Mar 17 17:53:43.747566 containerd[1488]: time="2025-03-17T17:53:43.747510593Z" level=info msg="StopPodSandbox for \"addc4388aaecc53e4027b4317abc812f9b90d7924ae049e5363a6f08b86b9e26\""
Mar 17 17:53:43.747723 containerd[1488]: time="2025-03-17T17:53:43.747677438Z" level=info msg="TearDown network for sandbox \"addc4388aaecc53e4027b4317abc812f9b90d7924ae049e5363a6f08b86b9e26\" successfully"
Mar 17 17:53:43.747723 containerd[1488]: time="2025-03-17T17:53:43.747704338Z" level=info msg="StopPodSandbox for \"addc4388aaecc53e4027b4317abc812f9b90d7924ae049e5363a6f08b86b9e26\" returns successfully"
Mar 17 17:53:43.748169 containerd[1488]: time="2025-03-17T17:53:43.748134979Z" level=info msg="RemovePodSandbox for \"addc4388aaecc53e4027b4317abc812f9b90d7924ae049e5363a6f08b86b9e26\""
Mar 17 17:53:43.748169 containerd[1488]: time="2025-03-17T17:53:43.748166258Z" level=info msg="Forcibly stopping sandbox \"addc4388aaecc53e4027b4317abc812f9b90d7924ae049e5363a6f08b86b9e26\""
Mar 17 17:53:43.748331 containerd[1488]: time="2025-03-17T17:53:43.748249063Z" level=info msg="TearDown network for sandbox \"addc4388aaecc53e4027b4317abc812f9b90d7924ae049e5363a6f08b86b9e26\" successfully"
Mar 17 17:53:43.753130 containerd[1488]: time="2025-03-17T17:53:43.753055705Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"addc4388aaecc53e4027b4317abc812f9b90d7924ae049e5363a6f08b86b9e26\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Mar 17 17:53:43.753289 containerd[1488]: time="2025-03-17T17:53:43.753138120Z" level=info msg="RemovePodSandbox \"addc4388aaecc53e4027b4317abc812f9b90d7924ae049e5363a6f08b86b9e26\" returns successfully"
Mar 17 17:53:43.753731 containerd[1488]: time="2025-03-17T17:53:43.753676594Z" level=info msg="StopPodSandbox for \"d1b3fd7f85c932731e8c07457956860f6659034f3ef9af6a5a1aab299e19f071\""
Mar 17 17:53:43.753909 containerd[1488]: time="2025-03-17T17:53:43.753878505Z" level=info msg="TearDown network for sandbox \"d1b3fd7f85c932731e8c07457956860f6659034f3ef9af6a5a1aab299e19f071\" successfully"
Mar 17 17:53:43.753909 containerd[1488]: time="2025-03-17T17:53:43.753890818Z" level=info msg="StopPodSandbox for \"d1b3fd7f85c932731e8c07457956860f6659034f3ef9af6a5a1aab299e19f071\" returns successfully"
Mar 17 17:53:43.754290 containerd[1488]: time="2025-03-17T17:53:43.754238212Z" level=info msg="RemovePodSandbox for \"d1b3fd7f85c932731e8c07457956860f6659034f3ef9af6a5a1aab299e19f071\""
Mar 17 17:53:43.754290 containerd[1488]: time="2025-03-17T17:53:43.754268829Z" level=info msg="Forcibly stopping sandbox \"d1b3fd7f85c932731e8c07457956860f6659034f3ef9af6a5a1aab299e19f071\""
Mar 17 17:53:43.754400 containerd[1488]: time="2025-03-17T17:53:43.754353669Z" level=info msg="TearDown network for sandbox \"d1b3fd7f85c932731e8c07457956860f6659034f3ef9af6a5a1aab299e19f071\" successfully"
Mar 17 17:53:43.759567 containerd[1488]: time="2025-03-17T17:53:43.759512994Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d1b3fd7f85c932731e8c07457956860f6659034f3ef9af6a5a1aab299e19f071\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Mar 17 17:53:43.759567 containerd[1488]: time="2025-03-17T17:53:43.759580512Z" level=info msg="RemovePodSandbox \"d1b3fd7f85c932731e8c07457956860f6659034f3ef9af6a5a1aab299e19f071\" returns successfully"
Mar 17 17:53:43.760109 containerd[1488]: time="2025-03-17T17:53:43.760058151Z" level=info msg="StopPodSandbox for \"961747fb411bb6f89447f707684a2226650cafaa31636216d5776b0f926bbb52\""
Mar 17 17:53:43.760309 containerd[1488]: time="2025-03-17T17:53:43.760212752Z" level=info msg="TearDown network for sandbox \"961747fb411bb6f89447f707684a2226650cafaa31636216d5776b0f926bbb52\" successfully"
Mar 17 17:53:43.760309 containerd[1488]: time="2025-03-17T17:53:43.760226819Z" level=info msg="StopPodSandbox for \"961747fb411bb6f89447f707684a2226650cafaa31636216d5776b0f926bbb52\" returns successfully"
Mar 17 17:53:43.760586 containerd[1488]: time="2025-03-17T17:53:43.760541451Z" level=info msg="RemovePodSandbox for \"961747fb411bb6f89447f707684a2226650cafaa31636216d5776b0f926bbb52\""
Mar 17 17:53:43.760586 containerd[1488]: time="2025-03-17T17:53:43.760578421Z" level=info msg="Forcibly stopping sandbox \"961747fb411bb6f89447f707684a2226650cafaa31636216d5776b0f926bbb52\""
Mar 17 17:53:43.760787 containerd[1488]: time="2025-03-17T17:53:43.760681966Z" level=info msg="TearDown network for sandbox \"961747fb411bb6f89447f707684a2226650cafaa31636216d5776b0f926bbb52\" successfully"
Mar 17 17:53:43.766280 containerd[1488]: time="2025-03-17T17:53:43.766218090Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"961747fb411bb6f89447f707684a2226650cafaa31636216d5776b0f926bbb52\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Mar 17 17:53:43.766280 containerd[1488]: time="2025-03-17T17:53:43.766283514Z" level=info msg="RemovePodSandbox \"961747fb411bb6f89447f707684a2226650cafaa31636216d5776b0f926bbb52\" returns successfully"
Mar 17 17:53:43.766630 containerd[1488]: time="2025-03-17T17:53:43.766593096Z" level=info msg="StopPodSandbox for \"50fa988d10b27c15b48a1ea2bec4930d2f20f85a00d0e3ced81e01ac385b6687\""
Mar 17 17:53:43.766840 containerd[1488]: time="2025-03-17T17:53:43.766718783Z" level=info msg="TearDown network for sandbox \"50fa988d10b27c15b48a1ea2bec4930d2f20f85a00d0e3ced81e01ac385b6687\" successfully"
Mar 17 17:53:43.766840 containerd[1488]: time="2025-03-17T17:53:43.766731226Z" level=info msg="StopPodSandbox for \"50fa988d10b27c15b48a1ea2bec4930d2f20f85a00d0e3ced81e01ac385b6687\" returns successfully"
Mar 17 17:53:43.767132 containerd[1488]: time="2025-03-17T17:53:43.767106263Z" level=info msg="RemovePodSandbox for \"50fa988d10b27c15b48a1ea2bec4930d2f20f85a00d0e3ced81e01ac385b6687\""
Mar 17 17:53:43.767168 containerd[1488]: time="2025-03-17T17:53:43.767136620Z" level=info msg="Forcibly stopping sandbox \"50fa988d10b27c15b48a1ea2bec4930d2f20f85a00d0e3ced81e01ac385b6687\""
Mar 17 17:53:43.767359 containerd[1488]: time="2025-03-17T17:53:43.767220878Z" level=info msg="TearDown network for sandbox \"50fa988d10b27c15b48a1ea2bec4930d2f20f85a00d0e3ced81e01ac385b6687\" successfully"
Mar 17 17:53:43.773009 containerd[1488]: time="2025-03-17T17:53:43.772935751Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"50fa988d10b27c15b48a1ea2bec4930d2f20f85a00d0e3ced81e01ac385b6687\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Mar 17 17:53:43.773159 containerd[1488]: time="2025-03-17T17:53:43.773024166Z" level=info msg="RemovePodSandbox \"50fa988d10b27c15b48a1ea2bec4930d2f20f85a00d0e3ced81e01ac385b6687\" returns successfully"
Mar 17 17:53:43.773543 containerd[1488]: time="2025-03-17T17:53:43.773508498Z" level=info msg="StopPodSandbox for \"2cc9b00535c46d082be3a47e288cfcb08bce63a018f869868b0b96e94a1250a8\""
Mar 17 17:53:43.773659 containerd[1488]: time="2025-03-17T17:53:43.773638163Z" level=info msg="TearDown network for sandbox \"2cc9b00535c46d082be3a47e288cfcb08bce63a018f869868b0b96e94a1250a8\" successfully"
Mar 17 17:53:43.773684 containerd[1488]: time="2025-03-17T17:53:43.773656327Z" level=info msg="StopPodSandbox for \"2cc9b00535c46d082be3a47e288cfcb08bce63a018f869868b0b96e94a1250a8\" returns successfully"
Mar 17 17:53:43.774105 containerd[1488]: time="2025-03-17T17:53:43.774057753Z" level=info msg="RemovePodSandbox for \"2cc9b00535c46d082be3a47e288cfcb08bce63a018f869868b0b96e94a1250a8\""
Mar 17 17:53:43.774105 containerd[1488]: time="2025-03-17T17:53:43.774098339Z" level=info msg="Forcibly stopping sandbox \"2cc9b00535c46d082be3a47e288cfcb08bce63a018f869868b0b96e94a1250a8\""
Mar 17 17:53:43.774369 containerd[1488]: time="2025-03-17T17:53:43.774220169Z" level=info msg="TearDown network for sandbox \"2cc9b00535c46d082be3a47e288cfcb08bce63a018f869868b0b96e94a1250a8\" successfully"
Mar 17 17:53:43.779094 containerd[1488]: time="2025-03-17T17:53:43.779028564Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"2cc9b00535c46d082be3a47e288cfcb08bce63a018f869868b0b96e94a1250a8\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Mar 17 17:53:43.779260 containerd[1488]: time="2025-03-17T17:53:43.779109525Z" level=info msg="RemovePodSandbox \"2cc9b00535c46d082be3a47e288cfcb08bce63a018f869868b0b96e94a1250a8\" returns successfully"
Mar 17 17:53:43.779860 containerd[1488]: time="2025-03-17T17:53:43.779833088Z" level=info msg="StopPodSandbox for \"a84da3acd9cbe62971167bb8959e5f9495adbc898495c2ec84809002361e9cd9\""
Mar 17 17:53:43.779970 containerd[1488]: time="2025-03-17T17:53:43.779949718Z" level=info msg="TearDown network for sandbox \"a84da3acd9cbe62971167bb8959e5f9495adbc898495c2ec84809002361e9cd9\" successfully"
Mar 17 17:53:43.779970 containerd[1488]: time="2025-03-17T17:53:43.779966649Z" level=info msg="StopPodSandbox for \"a84da3acd9cbe62971167bb8959e5f9495adbc898495c2ec84809002361e9cd9\" returns successfully"
Mar 17 17:53:43.780626 containerd[1488]: time="2025-03-17T17:53:43.780425854Z" level=info msg="RemovePodSandbox for \"a84da3acd9cbe62971167bb8959e5f9495adbc898495c2ec84809002361e9cd9\""
Mar 17 17:53:43.780678 containerd[1488]: time="2025-03-17T17:53:43.780648885Z" level=info msg="Forcibly stopping sandbox \"a84da3acd9cbe62971167bb8959e5f9495adbc898495c2ec84809002361e9cd9\""
Mar 17 17:53:43.780863 containerd[1488]: time="2025-03-17T17:53:43.780809086Z" level=info msg="TearDown network for sandbox \"a84da3acd9cbe62971167bb8959e5f9495adbc898495c2ec84809002361e9cd9\" successfully"
Mar 17 17:53:43.785457 containerd[1488]: time="2025-03-17T17:53:43.785370796Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a84da3acd9cbe62971167bb8959e5f9495adbc898495c2ec84809002361e9cd9\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Mar 17 17:53:43.785457 containerd[1488]: time="2025-03-17T17:53:43.785455516Z" level=info msg="RemovePodSandbox \"a84da3acd9cbe62971167bb8959e5f9495adbc898495c2ec84809002361e9cd9\" returns successfully"
Mar 17 17:53:43.786044 containerd[1488]: time="2025-03-17T17:53:43.786008567Z" level=info msg="StopPodSandbox for \"5bf8d4c046883d5d0be09e30054996dffec7065148980daaf45a69a75939d8c6\""
Mar 17 17:53:43.786218 containerd[1488]: time="2025-03-17T17:53:43.786139804Z" level=info msg="TearDown network for sandbox \"5bf8d4c046883d5d0be09e30054996dffec7065148980daaf45a69a75939d8c6\" successfully"
Mar 17 17:53:43.786218 containerd[1488]: time="2025-03-17T17:53:43.786155634Z" level=info msg="StopPodSandbox for \"5bf8d4c046883d5d0be09e30054996dffec7065148980daaf45a69a75939d8c6\" returns successfully"
Mar 17 17:53:43.787170 containerd[1488]: time="2025-03-17T17:53:43.786892672Z" level=info msg="RemovePodSandbox for \"5bf8d4c046883d5d0be09e30054996dffec7065148980daaf45a69a75939d8c6\""
Mar 17 17:53:43.787170 containerd[1488]: time="2025-03-17T17:53:43.786922678Z" level=info msg="Forcibly stopping sandbox \"5bf8d4c046883d5d0be09e30054996dffec7065148980daaf45a69a75939d8c6\""
Mar 17 17:53:43.787170 containerd[1488]: time="2025-03-17T17:53:43.787013850Z" level=info msg="TearDown network for sandbox \"5bf8d4c046883d5d0be09e30054996dffec7065148980daaf45a69a75939d8c6\" successfully"
Mar 17 17:53:43.791640 containerd[1488]: time="2025-03-17T17:53:43.791555031Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"5bf8d4c046883d5d0be09e30054996dffec7065148980daaf45a69a75939d8c6\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Mar 17 17:53:43.791640 containerd[1488]: time="2025-03-17T17:53:43.791630122Z" level=info msg="RemovePodSandbox \"5bf8d4c046883d5d0be09e30054996dffec7065148980daaf45a69a75939d8c6\" returns successfully"
Mar 17 17:53:43.792214 containerd[1488]: time="2025-03-17T17:53:43.792175860Z" level=info msg="StopPodSandbox for \"1d42c9f9298c30ac58c0af1beb3148eb9d10158f696d4c3025adf550cdae0287\""
Mar 17 17:53:43.792356 containerd[1488]: time="2025-03-17T17:53:43.792330331Z" level=info msg="TearDown network for sandbox \"1d42c9f9298c30ac58c0af1beb3148eb9d10158f696d4c3025adf550cdae0287\" successfully"
Mar 17 17:53:43.792391 containerd[1488]: time="2025-03-17T17:53:43.792353555Z" level=info msg="StopPodSandbox for \"1d42c9f9298c30ac58c0af1beb3148eb9d10158f696d4c3025adf550cdae0287\" returns successfully"
Mar 17 17:53:43.792719 containerd[1488]: time="2025-03-17T17:53:43.792689948Z" level=info msg="RemovePodSandbox for \"1d42c9f9298c30ac58c0af1beb3148eb9d10158f696d4c3025adf550cdae0287\""
Mar 17 17:53:43.792780 containerd[1488]: time="2025-03-17T17:53:43.792719975Z" level=info msg="Forcibly stopping sandbox \"1d42c9f9298c30ac58c0af1beb3148eb9d10158f696d4c3025adf550cdae0287\""
Mar 17 17:53:43.792803 containerd[1488]: time="2025-03-17T17:53:43.792795327Z" level=info msg="TearDown network for sandbox \"1d42c9f9298c30ac58c0af1beb3148eb9d10158f696d4c3025adf550cdae0287\" successfully"
Mar 17 17:53:43.797108 containerd[1488]: time="2025-03-17T17:53:43.797046492Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"1d42c9f9298c30ac58c0af1beb3148eb9d10158f696d4c3025adf550cdae0287\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Mar 17 17:53:43.797184 containerd[1488]: time="2025-03-17T17:53:43.797135490Z" level=info msg="RemovePodSandbox \"1d42c9f9298c30ac58c0af1beb3148eb9d10158f696d4c3025adf550cdae0287\" returns successfully"
Mar 17 17:53:43.797857 containerd[1488]: time="2025-03-17T17:53:43.797807304Z" level=info msg="StopPodSandbox for \"ecc718cef2e091b545b9384d23c03df742d33491f0f30bae0c5806eb2f85d02a\""
Mar 17 17:53:43.797998 containerd[1488]: time="2025-03-17T17:53:43.797957026Z" level=info msg="TearDown network for sandbox \"ecc718cef2e091b545b9384d23c03df742d33491f0f30bae0c5806eb2f85d02a\" successfully"
Mar 17 17:53:43.797998 containerd[1488]: time="2025-03-17T17:53:43.797970833Z" level=info msg="StopPodSandbox for \"ecc718cef2e091b545b9384d23c03df742d33491f0f30bae0c5806eb2f85d02a\" returns successfully"
Mar 17 17:53:43.798407 containerd[1488]: time="2025-03-17T17:53:43.798379993Z" level=info msg="RemovePodSandbox for \"ecc718cef2e091b545b9384d23c03df742d33491f0f30bae0c5806eb2f85d02a\""
Mar 17 17:53:43.798457 containerd[1488]: time="2025-03-17T17:53:43.798407735Z" level=info msg="Forcibly stopping sandbox \"ecc718cef2e091b545b9384d23c03df742d33491f0f30bae0c5806eb2f85d02a\""
Mar 17 17:53:43.798552 containerd[1488]: time="2025-03-17T17:53:43.798505590Z" level=info msg="TearDown network for sandbox \"ecc718cef2e091b545b9384d23c03df742d33491f0f30bae0c5806eb2f85d02a\" successfully"
Mar 17 17:53:43.802614 containerd[1488]: time="2025-03-17T17:53:43.802561537Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ecc718cef2e091b545b9384d23c03df742d33491f0f30bae0c5806eb2f85d02a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Mar 17 17:53:43.802671 containerd[1488]: time="2025-03-17T17:53:43.802630056Z" level=info msg="RemovePodSandbox \"ecc718cef2e091b545b9384d23c03df742d33491f0f30bae0c5806eb2f85d02a\" returns successfully"
Mar 17 17:53:48.053185 systemd[1]: Started sshd@15-10.0.0.117:22-10.0.0.1:41594.service - OpenSSH per-connection server daemon (10.0.0.1:41594).
Mar 17 17:53:48.117053 sshd[5594]: Accepted publickey for core from 10.0.0.1 port 41594 ssh2: RSA SHA256:pvoNHoTmHcKIZ8E4rah4Xh4kuY0L81ONuagOsU2gN/o
Mar 17 17:53:48.118967 sshd-session[5594]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 17 17:53:48.124021 systemd-logind[1472]: New session 16 of user core.
Mar 17 17:53:48.131984 systemd[1]: Started session-16.scope - Session 16 of User core.
Mar 17 17:53:48.258362 sshd[5596]: Connection closed by 10.0.0.1 port 41594
Mar 17 17:53:48.258795 sshd-session[5594]: pam_unix(sshd:session): session closed for user core
Mar 17 17:53:48.269787 systemd[1]: sshd@15-10.0.0.117:22-10.0.0.1:41594.service: Deactivated successfully.
Mar 17 17:53:48.272170 systemd[1]: session-16.scope: Deactivated successfully.
Mar 17 17:53:48.274011 systemd-logind[1472]: Session 16 logged out. Waiting for processes to exit.
Mar 17 17:53:48.280148 systemd[1]: Started sshd@16-10.0.0.117:22-10.0.0.1:41604.service - OpenSSH per-connection server daemon (10.0.0.1:41604).
Mar 17 17:53:48.281256 systemd-logind[1472]: Removed session 16.
Mar 17 17:53:48.315854 sshd[5609]: Accepted publickey for core from 10.0.0.1 port 41604 ssh2: RSA SHA256:pvoNHoTmHcKIZ8E4rah4Xh4kuY0L81ONuagOsU2gN/o
Mar 17 17:53:48.317487 sshd-session[5609]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 17 17:53:48.322021 systemd-logind[1472]: New session 17 of user core.
Mar 17 17:53:48.333878 systemd[1]: Started session-17.scope - Session 17 of User core.
Mar 17 17:53:48.746185 sshd[5611]: Connection closed by 10.0.0.1 port 41604
Mar 17 17:53:48.746586 sshd-session[5609]: pam_unix(sshd:session): session closed for user core
Mar 17 17:53:48.756871 systemd[1]: sshd@16-10.0.0.117:22-10.0.0.1:41604.service: Deactivated successfully.
Mar 17 17:53:48.759575 systemd[1]: session-17.scope: Deactivated successfully.
Mar 17 17:53:48.762394 systemd-logind[1472]: Session 17 logged out. Waiting for processes to exit.
Mar 17 17:53:48.772165 systemd[1]: Started sshd@17-10.0.0.117:22-10.0.0.1:41608.service - OpenSSH per-connection server daemon (10.0.0.1:41608).
Mar 17 17:53:48.773312 systemd-logind[1472]: Removed session 17.
Mar 17 17:53:48.813480 sshd[5621]: Accepted publickey for core from 10.0.0.1 port 41608 ssh2: RSA SHA256:pvoNHoTmHcKIZ8E4rah4Xh4kuY0L81ONuagOsU2gN/o
Mar 17 17:53:48.815321 sshd-session[5621]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 17 17:53:48.819497 systemd-logind[1472]: New session 18 of user core.
Mar 17 17:53:48.828967 systemd[1]: Started session-18.scope - Session 18 of User core.
Mar 17 17:53:50.973046 sshd[5623]: Connection closed by 10.0.0.1 port 41608
Mar 17 17:53:50.973719 sshd-session[5621]: pam_unix(sshd:session): session closed for user core
Mar 17 17:53:50.985330 systemd[1]: sshd@17-10.0.0.117:22-10.0.0.1:41608.service: Deactivated successfully.
Mar 17 17:53:50.993619 systemd[1]: session-18.scope: Deactivated successfully.
Mar 17 17:53:50.996555 systemd-logind[1472]: Session 18 logged out. Waiting for processes to exit.
Mar 17 17:53:51.005315 systemd[1]: Started sshd@18-10.0.0.117:22-10.0.0.1:41620.service - OpenSSH per-connection server daemon (10.0.0.1:41620).
Mar 17 17:53:51.007394 systemd-logind[1472]: Removed session 18.
Mar 17 17:53:51.050544 sshd[5663]: Accepted publickey for core from 10.0.0.1 port 41620 ssh2: RSA SHA256:pvoNHoTmHcKIZ8E4rah4Xh4kuY0L81ONuagOsU2gN/o
Mar 17 17:53:51.052692 sshd-session[5663]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 17 17:53:51.057892 systemd-logind[1472]: New session 19 of user core.
Mar 17 17:53:51.065947 systemd[1]: Started session-19.scope - Session 19 of User core.
Mar 17 17:53:51.308616 sshd[5665]: Connection closed by 10.0.0.1 port 41620
Mar 17 17:53:51.309580 sshd-session[5663]: pam_unix(sshd:session): session closed for user core
Mar 17 17:53:51.318824 systemd[1]: sshd@18-10.0.0.117:22-10.0.0.1:41620.service: Deactivated successfully.
Mar 17 17:53:51.321381 systemd[1]: session-19.scope: Deactivated successfully.
Mar 17 17:53:51.323961 systemd-logind[1472]: Session 19 logged out. Waiting for processes to exit.
Mar 17 17:53:51.333148 systemd[1]: Started sshd@19-10.0.0.117:22-10.0.0.1:41622.service - OpenSSH per-connection server daemon (10.0.0.1:41622).
Mar 17 17:53:51.334284 systemd-logind[1472]: Removed session 19.
Mar 17 17:53:51.367992 sshd[5676]: Accepted publickey for core from 10.0.0.1 port 41622 ssh2: RSA SHA256:pvoNHoTmHcKIZ8E4rah4Xh4kuY0L81ONuagOsU2gN/o
Mar 17 17:53:51.369778 sshd-session[5676]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 17 17:53:51.374981 systemd-logind[1472]: New session 20 of user core.
Mar 17 17:53:51.387975 systemd[1]: Started session-20.scope - Session 20 of User core.
Mar 17 17:53:51.502489 sshd[5678]: Connection closed by 10.0.0.1 port 41622
Mar 17 17:53:51.502954 sshd-session[5676]: pam_unix(sshd:session): session closed for user core
Mar 17 17:53:51.507968 systemd[1]: sshd@19-10.0.0.117:22-10.0.0.1:41622.service: Deactivated successfully.
Mar 17 17:53:51.510672 systemd[1]: session-20.scope: Deactivated successfully.
Mar 17 17:53:51.511377 systemd-logind[1472]: Session 20 logged out. Waiting for processes to exit.
Mar 17 17:53:51.512608 systemd-logind[1472]: Removed session 20.
Mar 17 17:53:56.516010 systemd[1]: Started sshd@20-10.0.0.117:22-10.0.0.1:43366.service - OpenSSH per-connection server daemon (10.0.0.1:43366).
Mar 17 17:53:56.565713 sshd[5694]: Accepted publickey for core from 10.0.0.1 port 43366 ssh2: RSA SHA256:pvoNHoTmHcKIZ8E4rah4Xh4kuY0L81ONuagOsU2gN/o
Mar 17 17:53:56.567822 sshd-session[5694]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 17 17:53:56.572872 systemd-logind[1472]: New session 21 of user core.
Mar 17 17:53:56.577895 systemd[1]: Started session-21.scope - Session 21 of User core.
Mar 17 17:53:56.726335 sshd[5696]: Connection closed by 10.0.0.1 port 43366
Mar 17 17:53:56.726828 sshd-session[5694]: pam_unix(sshd:session): session closed for user core
Mar 17 17:53:56.731566 systemd[1]: sshd@20-10.0.0.117:22-10.0.0.1:43366.service: Deactivated successfully.
Mar 17 17:53:56.733853 systemd[1]: session-21.scope: Deactivated successfully.
Mar 17 17:53:56.734633 systemd-logind[1472]: Session 21 logged out. Waiting for processes to exit.
Mar 17 17:53:56.735602 systemd-logind[1472]: Removed session 21.
Mar 17 17:53:58.576428 kubelet[2671]: E0317 17:53:58.576378    2671 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 17:54:01.738148 systemd[1]: Started sshd@21-10.0.0.117:22-10.0.0.1:43374.service - OpenSSH per-connection server daemon (10.0.0.1:43374).
Mar 17 17:54:01.786993 sshd[5735]: Accepted publickey for core from 10.0.0.1 port 43374 ssh2: RSA SHA256:pvoNHoTmHcKIZ8E4rah4Xh4kuY0L81ONuagOsU2gN/o
Mar 17 17:54:01.788959 sshd-session[5735]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 17 17:54:01.793666 systemd-logind[1472]: New session 22 of user core.
Mar 17 17:54:01.810004 systemd[1]: Started session-22.scope - Session 22 of User core.
Mar 17 17:54:01.938356 sshd[5737]: Connection closed by 10.0.0.1 port 43374
Mar 17 17:54:01.938765 sshd-session[5735]: pam_unix(sshd:session): session closed for user core
Mar 17 17:54:01.943380 systemd[1]: sshd@21-10.0.0.117:22-10.0.0.1:43374.service: Deactivated successfully.
Mar 17 17:54:01.946739 systemd[1]: session-22.scope: Deactivated successfully.
Mar 17 17:54:01.947616 systemd-logind[1472]: Session 22 logged out. Waiting for processes to exit.
Mar 17 17:54:01.948713 systemd-logind[1472]: Removed session 22.
Mar 17 17:54:06.575981 kubelet[2671]: E0317 17:54:06.575929    2671 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 17:54:06.950009 systemd[1]: Started sshd@22-10.0.0.117:22-10.0.0.1:58076.service - OpenSSH per-connection server daemon (10.0.0.1:58076).
Mar 17 17:54:06.990061 sshd[5757]: Accepted publickey for core from 10.0.0.1 port 58076 ssh2: RSA SHA256:pvoNHoTmHcKIZ8E4rah4Xh4kuY0L81ONuagOsU2gN/o
Mar 17 17:54:06.991876 sshd-session[5757]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 17 17:54:06.996926 systemd-logind[1472]: New session 23 of user core.
Mar 17 17:54:07.007983 systemd[1]: Started session-23.scope - Session 23 of User core.
Mar 17 17:54:07.122243 sshd[5759]: Connection closed by 10.0.0.1 port 58076
Mar 17 17:54:07.122622 sshd-session[5757]: pam_unix(sshd:session): session closed for user core
Mar 17 17:54:07.125907 systemd[1]: sshd@22-10.0.0.117:22-10.0.0.1:58076.service: Deactivated successfully.
Mar 17 17:54:07.128708 systemd[1]: session-23.scope: Deactivated successfully.
Mar 17 17:54:07.130576 systemd-logind[1472]: Session 23 logged out. Waiting for processes to exit.
Mar 17 17:54:07.131544 systemd-logind[1472]: Removed session 23.
Mar 17 17:54:09.576826 kubelet[2671]: E0317 17:54:09.576731    2671 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 17:54:12.135368 systemd[1]: Started sshd@23-10.0.0.117:22-10.0.0.1:58082.service - OpenSSH per-connection server daemon (10.0.0.1:58082).
Mar 17 17:54:12.181201 sshd[5771]: Accepted publickey for core from 10.0.0.1 port 58082 ssh2: RSA SHA256:pvoNHoTmHcKIZ8E4rah4Xh4kuY0L81ONuagOsU2gN/o
Mar 17 17:54:12.183349 sshd-session[5771]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 17 17:54:12.191504 systemd-logind[1472]: New session 24 of user core.
Mar 17 17:54:12.200151 systemd[1]: Started session-24.scope - Session 24 of User core.
Mar 17 17:54:12.440584 sshd[5773]: Connection closed by 10.0.0.1 port 58082
Mar 17 17:54:12.441955 sshd-session[5771]: pam_unix(sshd:session): session closed for user core
Mar 17 17:54:12.451793 systemd[1]: sshd@23-10.0.0.117:22-10.0.0.1:58082.service: Deactivated successfully.
Mar 17 17:54:12.455060 systemd[1]: session-24.scope: Deactivated successfully.
Mar 17 17:54:12.458204 systemd-logind[1472]: Session 24 logged out. Waiting for processes to exit.
Mar 17 17:54:12.467861 systemd-logind[1472]: Removed session 24.