Feb 13 19:33:01.951107 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241116 p3) 14.2.1 20241116, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT_DYNAMIC Thu Feb 13 17:41:03 -00 2025
Feb 13 19:33:01.951128 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=015d1d9e5e601f6a4e226c935072d3d0819e7eb2da20e68715973498f21aa3fe
Feb 13 19:33:01.951140 kernel: BIOS-provided physical RAM map:
Feb 13 19:33:01.951147 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Feb 13 19:33:01.951153 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Feb 13 19:33:01.951160 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Feb 13 19:33:01.951167 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable
Feb 13 19:33:01.951174 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved
Feb 13 19:33:01.951181 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Feb 13 19:33:01.951190 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Feb 13 19:33:01.951197 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Feb 13 19:33:01.951203 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Feb 13 19:33:01.951210 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Feb 13 19:33:01.951216 kernel: NX (Execute Disable) protection: active
Feb 13 19:33:01.951224 kernel: APIC: Static calls initialized
Feb 13 19:33:01.951235 kernel: SMBIOS 2.8 present.
Feb 13 19:33:01.951242 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
Feb 13 19:33:01.951249 kernel: Hypervisor detected: KVM
Feb 13 19:33:01.951256 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Feb 13 19:33:01.951263 kernel: kvm-clock: using sched offset of 2358312007 cycles
Feb 13 19:33:01.951271 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Feb 13 19:33:01.951279 kernel: tsc: Detected 2794.748 MHz processor
Feb 13 19:33:01.951286 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Feb 13 19:33:01.951294 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Feb 13 19:33:01.951301 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000
Feb 13 19:33:01.951312 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Feb 13 19:33:01.951319 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Feb 13 19:33:01.951326 kernel: Using GB pages for direct mapping
Feb 13 19:33:01.951334 kernel: ACPI: Early table checksum verification disabled
Feb 13 19:33:01.951341 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS )
Feb 13 19:33:01.951348 kernel: ACPI: RSDT 0x000000009CFE2408 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 19:33:01.951356 kernel: ACPI: FACP 0x000000009CFE21E8 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 19:33:01.951363 kernel: ACPI: DSDT 0x000000009CFE0040 0021A8 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 19:33:01.951370 kernel: ACPI: FACS 0x000000009CFE0000 000040
Feb 13 19:33:01.951380 kernel: ACPI: APIC 0x000000009CFE22DC 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 19:33:01.951388 kernel: ACPI: HPET 0x000000009CFE236C 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 19:33:01.951395 kernel: ACPI: MCFG 0x000000009CFE23A4 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 19:33:01.951402 kernel: ACPI: WAET 0x000000009CFE23E0 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 19:33:01.951410 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21e8-0x9cfe22db]
Feb 13 19:33:01.951417 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21e7]
Feb 13 19:33:01.951428 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f]
Feb 13 19:33:01.951439 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22dc-0x9cfe236b]
Feb 13 19:33:01.951446 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe236c-0x9cfe23a3]
Feb 13 19:33:01.951454 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23a4-0x9cfe23df]
Feb 13 19:33:01.951461 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23e0-0x9cfe2407]
Feb 13 19:33:01.951468 kernel: No NUMA configuration found
Feb 13 19:33:01.951476 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff]
Feb 13 19:33:01.951484 kernel: NODE_DATA(0) allocated [mem 0x9cfd6000-0x9cfdbfff]
Feb 13 19:33:01.951494 kernel: Zone ranges:
Feb 13 19:33:01.951501 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Feb 13 19:33:01.951509 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff]
Feb 13 19:33:01.951516 kernel: Normal empty
Feb 13 19:33:01.951524 kernel: Movable zone start for each node
Feb 13 19:33:01.951531 kernel: Early memory node ranges
Feb 13 19:33:01.951539 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Feb 13 19:33:01.951546 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff]
Feb 13 19:33:01.951553 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff]
Feb 13 19:33:01.951564 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Feb 13 19:33:01.951571 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Feb 13 19:33:01.951579 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
Feb 13 19:33:01.951586 kernel: ACPI: PM-Timer IO Port: 0x608
Feb 13 19:33:01.951594 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Feb 13 19:33:01.951601 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Feb 13 19:33:01.951609 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Feb 13 19:33:01.951616 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Feb 13 19:33:01.951624 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Feb 13 19:33:01.951633 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Feb 13 19:33:01.951641 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Feb 13 19:33:01.951648 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Feb 13 19:33:01.951656 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Feb 13 19:33:01.951663 kernel: TSC deadline timer available
Feb 13 19:33:01.951670 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
Feb 13 19:33:01.951678 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Feb 13 19:33:01.951686 kernel: kvm-guest: KVM setup pv remote TLB flush
Feb 13 19:33:01.951693 kernel: kvm-guest: setup PV sched yield
Feb 13 19:33:01.951701 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Feb 13 19:33:01.951711 kernel: Booting paravirtualized kernel on KVM
Feb 13 19:33:01.951719 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Feb 13 19:33:01.951726 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Feb 13 19:33:01.951734 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u524288
Feb 13 19:33:01.951741 kernel: pcpu-alloc: s197032 r8192 d32344 u524288 alloc=1*2097152
Feb 13 19:33:01.951749 kernel: pcpu-alloc: [0] 0 1 2 3
Feb 13 19:33:01.951756 kernel: kvm-guest: PV spinlocks enabled
Feb 13 19:33:01.951763 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Feb 13 19:33:01.951772 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=015d1d9e5e601f6a4e226c935072d3d0819e7eb2da20e68715973498f21aa3fe
Feb 13 19:33:01.951783 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Feb 13 19:33:01.951791 kernel: random: crng init done
Feb 13 19:33:01.951799 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Feb 13 19:33:01.951806 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Feb 13 19:33:01.951814 kernel: Fallback order for Node 0: 0
Feb 13 19:33:01.951821 kernel: Built 1 zonelists, mobility grouping on. Total pages: 632732
Feb 13 19:33:01.951829 kernel: Policy zone: DMA32
Feb 13 19:33:01.951836 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Feb 13 19:33:01.951846 kernel: Memory: 2432544K/2571752K available (14336K kernel code, 2301K rwdata, 22800K rodata, 43320K init, 1752K bss, 138948K reserved, 0K cma-reserved)
Feb 13 19:33:01.951854 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Feb 13 19:33:01.951862 kernel: ftrace: allocating 37893 entries in 149 pages
Feb 13 19:33:01.951869 kernel: ftrace: allocated 149 pages with 4 groups
Feb 13 19:33:01.951877 kernel: Dynamic Preempt: voluntary
Feb 13 19:33:01.951884 kernel: rcu: Preemptible hierarchical RCU implementation.
Feb 13 19:33:01.951892 kernel: rcu: RCU event tracing is enabled.
Feb 13 19:33:01.951900 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Feb 13 19:33:01.951917 kernel: Trampoline variant of Tasks RCU enabled.
Feb 13 19:33:01.951928 kernel: Rude variant of Tasks RCU enabled.
Feb 13 19:33:01.951936 kernel: Tracing variant of Tasks RCU enabled.
Feb 13 19:33:01.951944 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Feb 13 19:33:01.951951 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Feb 13 19:33:01.951976 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Feb 13 19:33:01.951984 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Feb 13 19:33:01.951991 kernel: Console: colour VGA+ 80x25
Feb 13 19:33:01.951999 kernel: printk: console [ttyS0] enabled
Feb 13 19:33:01.952006 kernel: ACPI: Core revision 20230628
Feb 13 19:33:01.952017 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Feb 13 19:33:01.952024 kernel: APIC: Switch to symmetric I/O mode setup
Feb 13 19:33:01.952032 kernel: x2apic enabled
Feb 13 19:33:01.952039 kernel: APIC: Switched APIC routing to: physical x2apic
Feb 13 19:33:01.952047 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Feb 13 19:33:01.952055 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Feb 13 19:33:01.952062 kernel: kvm-guest: setup PV IPIs
Feb 13 19:33:01.952090 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Feb 13 19:33:01.952098 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Feb 13 19:33:01.952107 kernel: Calibrating delay loop (skipped) preset value.. 5589.49 BogoMIPS (lpj=2794748)
Feb 13 19:33:01.952121 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Feb 13 19:33:01.952135 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Feb 13 19:33:01.952153 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Feb 13 19:33:01.952168 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Feb 13 19:33:01.952176 kernel: Spectre V2 : Mitigation: Retpolines
Feb 13 19:33:01.952197 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Feb 13 19:33:01.952212 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Feb 13 19:33:01.952230 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Feb 13 19:33:01.952238 kernel: RETBleed: Mitigation: untrained return thunk
Feb 13 19:33:01.952259 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Feb 13 19:33:01.952268 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Feb 13 19:33:01.952276 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Feb 13 19:33:01.952284 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Feb 13 19:33:01.952292 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Feb 13 19:33:01.952300 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Feb 13 19:33:01.952311 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Feb 13 19:33:01.952319 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Feb 13 19:33:01.952326 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Feb 13 19:33:01.952334 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Feb 13 19:33:01.952342 kernel: Freeing SMP alternatives memory: 32K
Feb 13 19:33:01.952350 kernel: pid_max: default: 32768 minimum: 301
Feb 13 19:33:01.952358 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Feb 13 19:33:01.952366 kernel: landlock: Up and running.
Feb 13 19:33:01.952374 kernel: SELinux: Initializing.
Feb 13 19:33:01.952384 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Feb 13 19:33:01.952392 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Feb 13 19:33:01.952400 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Feb 13 19:33:01.952408 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Feb 13 19:33:01.952417 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Feb 13 19:33:01.952424 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Feb 13 19:33:01.952432 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Feb 13 19:33:01.952440 kernel: ... version: 0
Feb 13 19:33:01.952448 kernel: ... bit width: 48
Feb 13 19:33:01.952459 kernel: ... generic registers: 6
Feb 13 19:33:01.952467 kernel: ... value mask: 0000ffffffffffff
Feb 13 19:33:01.952474 kernel: ... max period: 00007fffffffffff
Feb 13 19:33:01.952482 kernel: ... fixed-purpose events: 0
Feb 13 19:33:01.952490 kernel: ... event mask: 000000000000003f
Feb 13 19:33:01.952497 kernel: signal: max sigframe size: 1776
Feb 13 19:33:01.952505 kernel: rcu: Hierarchical SRCU implementation.
Feb 13 19:33:01.952513 kernel: rcu: Max phase no-delay instances is 400.
Feb 13 19:33:01.952521 kernel: smp: Bringing up secondary CPUs ...
Feb 13 19:33:01.952531 kernel: smpboot: x86: Booting SMP configuration:
Feb 13 19:33:01.952539 kernel: .... node #0, CPUs: #1 #2 #3
Feb 13 19:33:01.952547 kernel: smp: Brought up 1 node, 4 CPUs
Feb 13 19:33:01.952555 kernel: smpboot: Max logical packages: 1
Feb 13 19:33:01.952563 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS)
Feb 13 19:33:01.952570 kernel: devtmpfs: initialized
Feb 13 19:33:01.952578 kernel: x86/mm: Memory block size: 128MB
Feb 13 19:33:01.952586 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Feb 13 19:33:01.952594 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Feb 13 19:33:01.952604 kernel: pinctrl core: initialized pinctrl subsystem
Feb 13 19:33:01.952612 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Feb 13 19:33:01.952620 kernel: audit: initializing netlink subsys (disabled)
Feb 13 19:33:01.952628 kernel: audit: type=2000 audit(1739475180.705:1): state=initialized audit_enabled=0 res=1
Feb 13 19:33:01.952636 kernel: thermal_sys: Registered thermal governor 'step_wise'
Feb 13 19:33:01.952644 kernel: thermal_sys: Registered thermal governor 'user_space'
Feb 13 19:33:01.952651 kernel: cpuidle: using governor menu
Feb 13 19:33:01.952660 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Feb 13 19:33:01.952668 kernel: dca service started, version 1.12.1
Feb 13 19:33:01.952678 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Feb 13 19:33:01.952686 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Feb 13 19:33:01.952694 kernel: PCI: Using configuration type 1 for base access
Feb 13 19:33:01.952702 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Feb 13 19:33:01.952710 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Feb 13 19:33:01.952717 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Feb 13 19:33:01.952725 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Feb 13 19:33:01.952733 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Feb 13 19:33:01.952741 kernel: ACPI: Added _OSI(Module Device)
Feb 13 19:33:01.952752 kernel: ACPI: Added _OSI(Processor Device)
Feb 13 19:33:01.952759 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Feb 13 19:33:01.952767 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Feb 13 19:33:01.952775 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Feb 13 19:33:01.952783 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Feb 13 19:33:01.952790 kernel: ACPI: Interpreter enabled
Feb 13 19:33:01.952798 kernel: ACPI: PM: (supports S0 S3 S5)
Feb 13 19:33:01.952806 kernel: ACPI: Using IOAPIC for interrupt routing
Feb 13 19:33:01.952814 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Feb 13 19:33:01.952824 kernel: PCI: Using E820 reservations for host bridge windows
Feb 13 19:33:01.952832 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Feb 13 19:33:01.952840 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Feb 13 19:33:01.953043 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Feb 13 19:33:01.953176 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Feb 13 19:33:01.953299 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Feb 13 19:33:01.953310 kernel: PCI host bridge to bus 0000:00
Feb 13 19:33:01.953511 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Feb 13 19:33:01.953626 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Feb 13 19:33:01.953772 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Feb 13 19:33:01.953893 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Feb 13 19:33:01.954029 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Feb 13 19:33:01.954141 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
Feb 13 19:33:01.954253 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Feb 13 19:33:01.954395 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Feb 13 19:33:01.954527 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
Feb 13 19:33:01.954648 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref]
Feb 13 19:33:01.954770 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff]
Feb 13 19:33:01.954891 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref]
Feb 13 19:33:01.955092 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Feb 13 19:33:01.955235 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00
Feb 13 19:33:01.955365 kernel: pci 0000:00:02.0: reg 0x10: [io 0xc0c0-0xc0df]
Feb 13 19:33:01.955488 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff]
Feb 13 19:33:01.955611 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref]
Feb 13 19:33:01.955746 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000
Feb 13 19:33:01.955870 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc000-0xc07f]
Feb 13 19:33:01.956061 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff]
Feb 13 19:33:01.956188 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref]
Feb 13 19:33:01.956323 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Feb 13 19:33:01.956446 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc0e0-0xc0ff]
Feb 13 19:33:01.956568 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebd3000-0xfebd3fff]
Feb 13 19:33:01.956691 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe008000-0xfe00bfff 64bit pref]
Feb 13 19:33:01.956811 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref]
Feb 13 19:33:01.956954 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Feb 13 19:33:01.957100 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Feb 13 19:33:01.957231 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Feb 13 19:33:01.957354 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc100-0xc11f]
Feb 13 19:33:01.957476 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfebd4000-0xfebd4fff]
Feb 13 19:33:01.957639 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Feb 13 19:33:01.957767 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f]
Feb 13 19:33:01.957778 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Feb 13 19:33:01.957791 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Feb 13 19:33:01.957798 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Feb 13 19:33:01.957806 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Feb 13 19:33:01.957814 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Feb 13 19:33:01.957822 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Feb 13 19:33:01.957830 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Feb 13 19:33:01.957838 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Feb 13 19:33:01.957845 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Feb 13 19:33:01.957853 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Feb 13 19:33:01.957864 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Feb 13 19:33:01.957871 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Feb 13 19:33:01.957879 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Feb 13 19:33:01.957887 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Feb 13 19:33:01.957895 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Feb 13 19:33:01.957902 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Feb 13 19:33:01.957921 kernel: iommu: Default domain type: Translated
Feb 13 19:33:01.957930 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Feb 13 19:33:01.957938 kernel: PCI: Using ACPI for IRQ routing
Feb 13 19:33:01.957948 kernel: PCI: pci_cache_line_size set to 64 bytes
Feb 13 19:33:01.957967 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Feb 13 19:33:01.957983 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff]
Feb 13 19:33:01.958116 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Feb 13 19:33:01.958240 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Feb 13 19:33:01.958361 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Feb 13 19:33:01.958371 kernel: vgaarb: loaded
Feb 13 19:33:01.958379 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Feb 13 19:33:01.958391 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Feb 13 19:33:01.958399 kernel: clocksource: Switched to clocksource kvm-clock
Feb 13 19:33:01.958407 kernel: VFS: Disk quotas dquot_6.6.0
Feb 13 19:33:01.958415 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Feb 13 19:33:01.958423 kernel: pnp: PnP ACPI init
Feb 13 19:33:01.958553 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Feb 13 19:33:01.958565 kernel: pnp: PnP ACPI: found 6 devices
Feb 13 19:33:01.958573 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Feb 13 19:33:01.958584 kernel: NET: Registered PF_INET protocol family
Feb 13 19:33:01.958592 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Feb 13 19:33:01.958600 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Feb 13 19:33:01.958608 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Feb 13 19:33:01.958616 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Feb 13 19:33:01.958624 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Feb 13 19:33:01.958632 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Feb 13 19:33:01.958640 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Feb 13 19:33:01.958647 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Feb 13 19:33:01.958658 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Feb 13 19:33:01.958666 kernel: NET: Registered PF_XDP protocol family
Feb 13 19:33:01.958777 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Feb 13 19:33:01.958888 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Feb 13 19:33:01.959021 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Feb 13 19:33:01.959133 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Feb 13 19:33:01.959242 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Feb 13 19:33:01.959351 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
Feb 13 19:33:01.959366 kernel: PCI: CLS 0 bytes, default 64
Feb 13 19:33:01.959374 kernel: Initialise system trusted keyrings
Feb 13 19:33:01.959382 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Feb 13 19:33:01.959389 kernel: Key type asymmetric registered
Feb 13 19:33:01.959397 kernel: Asymmetric key parser 'x509' registered
Feb 13 19:33:01.959405 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Feb 13 19:33:01.959413 kernel: io scheduler mq-deadline registered
Feb 13 19:33:01.959421 kernel: io scheduler kyber registered
Feb 13 19:33:01.959429 kernel: io scheduler bfq registered
Feb 13 19:33:01.959436 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Feb 13 19:33:01.959447 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Feb 13 19:33:01.959455 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Feb 13 19:33:01.959463 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Feb 13 19:33:01.959471 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Feb 13 19:33:01.959479 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Feb 13 19:33:01.959487 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Feb 13 19:33:01.959495 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Feb 13 19:33:01.959503 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Feb 13 19:33:01.959629 kernel: rtc_cmos 00:04: RTC can wake from S4
Feb 13 19:33:01.959748 kernel: rtc_cmos 00:04: registered as rtc0
Feb 13 19:33:01.959759 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Feb 13 19:33:01.959871 kernel: rtc_cmos 00:04: setting system clock to 2025-02-13T19:33:01 UTC (1739475181)
Feb 13 19:33:01.960006 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Feb 13 19:33:01.960017 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Feb 13 19:33:01.960025 kernel: NET: Registered PF_INET6 protocol family
Feb 13 19:33:01.960033 kernel: Segment Routing with IPv6
Feb 13 19:33:01.960045 kernel: In-situ OAM (IOAM) with IPv6
Feb 13 19:33:01.960053 kernel: NET: Registered PF_PACKET protocol family
Feb 13 19:33:01.960060 kernel: Key type dns_resolver registered
Feb 13 19:33:01.960068 kernel: IPI shorthand broadcast: enabled
Feb 13 19:33:01.960076 kernel: sched_clock: Marking stable (859003751, 107316849)->(1019595356, -53274756)
Feb 13 19:33:01.960084 kernel: registered taskstats version 1
Feb 13 19:33:01.960092 kernel: Loading compiled-in X.509 certificates
Feb 13 19:33:01.960100 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: b3acedbed401b3cd9632ee9302ddcce254d8924d'
Feb 13 19:33:01.960108 kernel: Key type .fscrypt registered
Feb 13 19:33:01.960116 kernel: Key type fscrypt-provisioning registered
Feb 13 19:33:01.960127 kernel: ima: No TPM chip found, activating TPM-bypass!
Feb 13 19:33:01.960135 kernel: ima: Allocated hash algorithm: sha1
Feb 13 19:33:01.960143 kernel: ima: No architecture policies found
Feb 13 19:33:01.960151 kernel: clk: Disabling unused clocks
Feb 13 19:33:01.960158 kernel: Freeing unused kernel image (initmem) memory: 43320K
Feb 13 19:33:01.960166 kernel: Write protecting the kernel read-only data: 38912k
Feb 13 19:33:01.960174 kernel: Freeing unused kernel image (rodata/data gap) memory: 1776K
Feb 13 19:33:01.960182 kernel: Run /init as init process
Feb 13 19:33:01.960192 kernel: with arguments:
Feb 13 19:33:01.960200 kernel: /init
Feb 13 19:33:01.960208 kernel: with environment:
Feb 13 19:33:01.960216 kernel: HOME=/
Feb 13 19:33:01.960223 kernel: TERM=linux
Feb 13 19:33:01.960231 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Feb 13 19:33:01.960241 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Feb 13 19:33:01.960252 systemd[1]: Detected virtualization kvm.
Feb 13 19:33:01.960263 systemd[1]: Detected architecture x86-64.
Feb 13 19:33:01.960271 systemd[1]: Running in initrd.
Feb 13 19:33:01.960279 systemd[1]: No hostname configured, using default hostname.
Feb 13 19:33:01.960288 systemd[1]: Hostname set to .
Feb 13 19:33:01.960296 systemd[1]: Initializing machine ID from VM UUID.
Feb 13 19:33:01.960305 systemd[1]: Queued start job for default target initrd.target.
Feb 13 19:33:01.960313 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Feb 13 19:33:01.960322 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Feb 13 19:33:01.960334 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Feb 13 19:33:01.960355 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Feb 13 19:33:01.960366 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Feb 13 19:33:01.960375 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Feb 13 19:33:01.960385 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Feb 13 19:33:01.960397 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Feb 13 19:33:01.960405 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Feb 13 19:33:01.960414 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Feb 13 19:33:01.960423 systemd[1]: Reached target paths.target - Path Units.
Feb 13 19:33:01.960431 systemd[1]: Reached target slices.target - Slice Units.
Feb 13 19:33:01.960440 systemd[1]: Reached target swap.target - Swaps.
Feb 13 19:33:01.960449 systemd[1]: Reached target timers.target - Timer Units.
Feb 13 19:33:01.960457 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Feb 13 19:33:01.960468 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Feb 13 19:33:01.960477 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Feb 13 19:33:01.960486 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Feb 13 19:33:01.960495 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Feb 13 19:33:01.960504 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Feb 13 19:33:01.960512 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Feb 13 19:33:01.960521 systemd[1]: Reached target sockets.target - Socket Units. Feb 13 19:33:01.960529 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Feb 13 19:33:01.960538 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Feb 13 19:33:01.960549 systemd[1]: Finished network-cleanup.service - Network Cleanup. Feb 13 19:33:01.960558 systemd[1]: Starting systemd-fsck-usr.service... Feb 13 19:33:01.960567 systemd[1]: Starting systemd-journald.service - Journal Service... Feb 13 19:33:01.960575 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Feb 13 19:33:01.960584 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 19:33:01.960592 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Feb 13 19:33:01.960601 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Feb 13 19:33:01.960610 systemd[1]: Finished systemd-fsck-usr.service. Feb 13 19:33:01.960639 systemd-journald[194]: Collecting audit messages is disabled. Feb 13 19:33:01.960662 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Feb 13 19:33:01.960673 systemd-journald[194]: Journal started Feb 13 19:33:01.960695 systemd-journald[194]: Runtime Journal (/run/log/journal/57adda220a3e4a44941a03c1dc52ed1c) is 6.0M, max 48.3M, 42.3M free. Feb 13 19:33:01.953861 systemd-modules-load[195]: Inserted module 'overlay' Feb 13 19:33:01.993239 systemd[1]: Started systemd-journald.service - Journal Service. Feb 13 19:33:01.993258 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Feb 13 19:33:01.993270 kernel: Bridge firewalling registered Feb 13 19:33:01.980965 systemd-modules-load[195]: Inserted module 'br_netfilter' Feb 13 19:33:01.993537 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. 
Feb 13 19:33:01.997445 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 19:33:01.999951 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Feb 13 19:33:02.015117 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Feb 13 19:33:02.017031 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Feb 13 19:33:02.018838 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Feb 13 19:33:02.022103 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Feb 13 19:33:02.033810 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Feb 13 19:33:02.034316 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Feb 13 19:33:02.039637 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 13 19:33:02.048127 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Feb 13 19:33:02.049319 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Feb 13 19:33:02.053527 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Feb 13 19:33:02.059576 dracut-cmdline[229]: dracut-dracut-053 Feb 13 19:33:02.062629 dracut-cmdline[229]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=015d1d9e5e601f6a4e226c935072d3d0819e7eb2da20e68715973498f21aa3fe Feb 13 19:33:02.095547 systemd-resolved[235]: Positive Trust Anchors: Feb 13 19:33:02.095561 systemd-resolved[235]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 13 19:33:02.095592 systemd-resolved[235]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Feb 13 19:33:02.098022 systemd-resolved[235]: Defaulting to hostname 'linux'. Feb 13 19:33:02.099056 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Feb 13 19:33:02.106394 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Feb 13 19:33:02.152020 kernel: SCSI subsystem initialized Feb 13 19:33:02.160999 kernel: Loading iSCSI transport class v2.0-870. Feb 13 19:33:02.171990 kernel: iscsi: registered transport (tcp) Feb 13 19:33:02.193985 kernel: iscsi: registered transport (qla4xxx) Feb 13 19:33:02.194006 kernel: QLogic iSCSI HBA Driver Feb 13 19:33:02.251542 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Feb 13 19:33:02.261087 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Feb 13 19:33:02.286052 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
Feb 13 19:33:02.286122 kernel: device-mapper: uevent: version 1.0.3 Feb 13 19:33:02.287098 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Feb 13 19:33:02.329997 kernel: raid6: avx2x4 gen() 29213 MB/s Feb 13 19:33:02.349983 kernel: raid6: avx2x2 gen() 30657 MB/s Feb 13 19:33:02.367153 kernel: raid6: avx2x1 gen() 25182 MB/s Feb 13 19:33:02.367232 kernel: raid6: using algorithm avx2x2 gen() 30657 MB/s Feb 13 19:33:02.385098 kernel: raid6: .... xor() 18864 MB/s, rmw enabled Feb 13 19:33:02.385124 kernel: raid6: using avx2x2 recovery algorithm Feb 13 19:33:02.407006 kernel: xor: automatically using best checksumming function avx Feb 13 19:33:02.565994 kernel: Btrfs loaded, zoned=no, fsverity=no Feb 13 19:33:02.579516 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Feb 13 19:33:02.593139 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Feb 13 19:33:02.606052 systemd-udevd[415]: Using default interface naming scheme 'v255'. Feb 13 19:33:02.610591 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Feb 13 19:33:02.620139 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Feb 13 19:33:02.633988 dracut-pre-trigger[419]: rd.md=0: removing MD RAID activation Feb 13 19:33:02.671035 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Feb 13 19:33:02.680122 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Feb 13 19:33:02.746497 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Feb 13 19:33:02.757703 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Feb 13 19:33:02.769754 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Feb 13 19:33:02.774401 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. 
Feb 13 19:33:02.777381 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Feb 13 19:33:02.780268 systemd[1]: Reached target remote-fs.target - Remote File Systems. Feb 13 19:33:02.785093 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Feb 13 19:33:02.803923 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Feb 13 19:33:02.804088 kernel: cryptd: max_cpu_qlen set to 1000 Feb 13 19:33:02.804101 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Feb 13 19:33:02.804112 kernel: GPT:9289727 != 19775487 Feb 13 19:33:02.804122 kernel: GPT:Alternate GPT header not at the end of the disk. Feb 13 19:33:02.804139 kernel: GPT:9289727 != 19775487 Feb 13 19:33:02.804149 kernel: GPT: Use GNU Parted to correct GPT errors. Feb 13 19:33:02.804159 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Feb 13 19:33:02.794121 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Feb 13 19:33:02.809899 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Feb 13 19:33:02.817316 kernel: AVX2 version of gcm_enc/dec engaged. Feb 13 19:33:02.817351 kernel: AES CTR mode by8 optimization enabled Feb 13 19:33:02.817362 kernel: libata version 3.00 loaded. Feb 13 19:33:02.817121 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Feb 13 19:33:02.817225 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 13 19:33:02.821488 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... 
Feb 13 19:33:02.826156 kernel: ahci 0000:00:1f.2: version 3.0 Feb 13 19:33:02.895301 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Feb 13 19:33:02.895323 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Feb 13 19:33:02.895480 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Feb 13 19:33:02.895632 kernel: scsi host0: ahci Feb 13 19:33:02.895812 kernel: BTRFS: device fsid c7adc9b8-df7f-4a5f-93bf-204def2767a9 devid 1 transid 39 /dev/vda3 scanned by (udev-worker) (461) Feb 13 19:33:02.895830 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by (udev-worker) (474) Feb 13 19:33:02.895843 kernel: scsi host1: ahci Feb 13 19:33:02.896024 kernel: scsi host2: ahci Feb 13 19:33:02.896166 kernel: scsi host3: ahci Feb 13 19:33:02.896327 kernel: scsi host4: ahci Feb 13 19:33:02.896471 kernel: scsi host5: ahci Feb 13 19:33:02.896616 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34 Feb 13 19:33:02.896628 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34 Feb 13 19:33:02.896639 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34 Feb 13 19:33:02.896650 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34 Feb 13 19:33:02.896660 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34 Feb 13 19:33:02.896672 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34 Feb 13 19:33:02.825922 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 13 19:33:02.826131 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 19:33:02.827723 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 19:33:02.838701 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 19:33:02.861284 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. 
Feb 13 19:33:02.871366 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Feb 13 19:33:02.876580 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Feb 13 19:33:02.877929 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Feb 13 19:33:02.884649 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Feb 13 19:33:02.897129 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Feb 13 19:33:02.920004 disk-uuid[566]: Primary Header is updated. Feb 13 19:33:02.920004 disk-uuid[566]: Secondary Entries is updated. Feb 13 19:33:02.920004 disk-uuid[566]: Secondary Header is updated. Feb 13 19:33:02.949476 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Feb 13 19:33:02.949505 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Feb 13 19:33:02.955287 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 19:33:02.985128 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Feb 13 19:33:03.008846 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Feb 13 19:33:03.206568 kernel: ata1: SATA link down (SStatus 0 SControl 300) Feb 13 19:33:03.206659 kernel: ata5: SATA link down (SStatus 0 SControl 300) Feb 13 19:33:03.206670 kernel: ata4: SATA link down (SStatus 0 SControl 300) Feb 13 19:33:03.206680 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Feb 13 19:33:03.207976 kernel: ata6: SATA link down (SStatus 0 SControl 300) Feb 13 19:33:03.208978 kernel: ata2: SATA link down (SStatus 0 SControl 300) Feb 13 19:33:03.208991 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Feb 13 19:33:03.210065 kernel: ata3.00: applying bridge limits Feb 13 19:33:03.210977 kernel: ata3.00: configured for UDMA/100 Feb 13 19:33:03.213000 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Feb 13 19:33:03.256547 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Feb 13 19:33:03.273819 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Feb 13 19:33:03.273841 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Feb 13 19:33:03.929004 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Feb 13 19:33:03.929526 disk-uuid[567]: The operation has completed successfully. Feb 13 19:33:03.958754 systemd[1]: disk-uuid.service: Deactivated successfully. Feb 13 19:33:03.958904 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Feb 13 19:33:03.988348 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Feb 13 19:33:03.992146 sh[593]: Success Feb 13 19:33:04.006999 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" Feb 13 19:33:04.043236 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Feb 13 19:33:04.057631 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Feb 13 19:33:04.061203 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Feb 13 19:33:04.072784 kernel: BTRFS info (device dm-0): first mount of filesystem c7adc9b8-df7f-4a5f-93bf-204def2767a9 Feb 13 19:33:04.072825 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Feb 13 19:33:04.072836 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Feb 13 19:33:04.074668 kernel: BTRFS info (device dm-0): disabling log replay at mount time Feb 13 19:33:04.074696 kernel: BTRFS info (device dm-0): using free space tree Feb 13 19:33:04.080841 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Feb 13 19:33:04.083054 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Feb 13 19:33:04.092107 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Feb 13 19:33:04.094771 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Feb 13 19:33:04.104250 kernel: BTRFS info (device vda6): first mount of filesystem 60a376b4-1193-4e0b-af89-a0e6d698bf0f Feb 13 19:33:04.104298 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Feb 13 19:33:04.104309 kernel: BTRFS info (device vda6): using free space tree Feb 13 19:33:04.107988 kernel: BTRFS info (device vda6): auto enabling async discard Feb 13 19:33:04.118283 systemd[1]: mnt-oem.mount: Deactivated successfully. Feb 13 19:33:04.120215 kernel: BTRFS info (device vda6): last unmount of filesystem 60a376b4-1193-4e0b-af89-a0e6d698bf0f Feb 13 19:33:04.131040 systemd[1]: Finished ignition-setup.service - Ignition (setup). Feb 13 19:33:04.141162 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Feb 13 19:33:04.251503 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. 
Feb 13 19:33:04.262931 ignition[689]: Ignition 2.20.0 Feb 13 19:33:04.262941 ignition[689]: Stage: fetch-offline Feb 13 19:33:04.263133 systemd[1]: Starting systemd-networkd.service - Network Configuration... Feb 13 19:33:04.264063 ignition[689]: no configs at "/usr/lib/ignition/base.d" Feb 13 19:33:04.264081 ignition[689]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 13 19:33:04.264189 ignition[689]: parsed url from cmdline: "" Feb 13 19:33:04.264193 ignition[689]: no config URL provided Feb 13 19:33:04.264199 ignition[689]: reading system config file "/usr/lib/ignition/user.ign" Feb 13 19:33:04.264210 ignition[689]: no config at "/usr/lib/ignition/user.ign" Feb 13 19:33:04.264259 ignition[689]: op(1): [started] loading QEMU firmware config module Feb 13 19:33:04.264266 ignition[689]: op(1): executing: "modprobe" "qemu_fw_cfg" Feb 13 19:33:04.273783 ignition[689]: op(1): [finished] loading QEMU firmware config module Feb 13 19:33:04.292093 systemd-networkd[781]: lo: Link UP Feb 13 19:33:04.292104 systemd-networkd[781]: lo: Gained carrier Feb 13 19:33:04.293923 systemd-networkd[781]: Enumeration completed Feb 13 19:33:04.294306 systemd-networkd[781]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 19:33:04.294309 systemd-networkd[781]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 13 19:33:04.294630 systemd[1]: Started systemd-networkd.service - Network Configuration. Feb 13 19:33:04.295429 systemd-networkd[781]: eth0: Link UP Feb 13 19:33:04.295433 systemd-networkd[781]: eth0: Gained carrier Feb 13 19:33:04.295440 systemd-networkd[781]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 19:33:04.297894 systemd[1]: Reached target network.target - Network. 
Feb 13 19:33:04.313045 systemd-networkd[781]: eth0: DHCPv4 address 10.0.0.36/16, gateway 10.0.0.1 acquired from 10.0.0.1 Feb 13 19:33:04.335316 ignition[689]: parsing config with SHA512: 4e7bb3a145416e9991c4862c4a100f07861ad06acf349d8091f93dcc3b2fb8c8b076ea174174326743d321df0ebcda4f5e5d37f2575b3994853c1d6c1ba37b5a Feb 13 19:33:04.341633 unknown[689]: fetched base config from "system" Feb 13 19:33:04.341672 unknown[689]: fetched user config from "qemu" Feb 13 19:33:04.344156 ignition[689]: fetch-offline: fetch-offline passed Feb 13 19:33:04.345293 ignition[689]: Ignition finished successfully Feb 13 19:33:04.348064 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Feb 13 19:33:04.349743 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Feb 13 19:33:04.361145 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Feb 13 19:33:04.382048 ignition[787]: Ignition 2.20.0 Feb 13 19:33:04.382062 ignition[787]: Stage: kargs Feb 13 19:33:04.382267 ignition[787]: no configs at "/usr/lib/ignition/base.d" Feb 13 19:33:04.382281 ignition[787]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 13 19:33:04.383285 ignition[787]: kargs: kargs passed Feb 13 19:33:04.383341 ignition[787]: Ignition finished successfully Feb 13 19:33:04.386893 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Feb 13 19:33:04.402140 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Feb 13 19:33:04.416324 ignition[796]: Ignition 2.20.0 Feb 13 19:33:04.416340 ignition[796]: Stage: disks Feb 13 19:33:04.416564 ignition[796]: no configs at "/usr/lib/ignition/base.d" Feb 13 19:33:04.416581 ignition[796]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 13 19:33:04.420125 systemd[1]: Finished ignition-disks.service - Ignition (disks). 
Feb 13 19:33:04.417703 ignition[796]: disks: disks passed Feb 13 19:33:04.421838 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Feb 13 19:33:04.417763 ignition[796]: Ignition finished successfully Feb 13 19:33:04.424054 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Feb 13 19:33:04.426127 systemd[1]: Reached target local-fs.target - Local File Systems. Feb 13 19:33:04.428429 systemd[1]: Reached target sysinit.target - System Initialization. Feb 13 19:33:04.430887 systemd[1]: Reached target basic.target - Basic System. Feb 13 19:33:04.441112 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Feb 13 19:33:04.454832 systemd-fsck[806]: ROOT: clean, 14/553520 files, 52654/553472 blocks Feb 13 19:33:04.462426 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Feb 13 19:33:04.481226 systemd[1]: Mounting sysroot.mount - /sysroot... Feb 13 19:33:04.612038 kernel: EXT4-fs (vda9): mounted filesystem 7d46b70d-4c30-46e6-9935-e1f7fb523560 r/w with ordered data mode. Quota mode: none. Feb 13 19:33:04.613285 systemd[1]: Mounted sysroot.mount - /sysroot. Feb 13 19:33:04.615254 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Feb 13 19:33:04.626074 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Feb 13 19:33:04.628339 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Feb 13 19:33:04.629838 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. 
Feb 13 19:33:04.635332 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by mount (814) Feb 13 19:33:04.635358 kernel: BTRFS info (device vda6): first mount of filesystem 60a376b4-1193-4e0b-af89-a0e6d698bf0f Feb 13 19:33:04.629903 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Feb 13 19:33:04.642565 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Feb 13 19:33:04.642586 kernel: BTRFS info (device vda6): using free space tree Feb 13 19:33:04.642596 kernel: BTRFS info (device vda6): auto enabling async discard Feb 13 19:33:04.629933 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Feb 13 19:33:04.638575 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Feb 13 19:33:04.643633 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Feb 13 19:33:04.646150 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Feb 13 19:33:04.687085 initrd-setup-root[838]: cut: /sysroot/etc/passwd: No such file or directory Feb 13 19:33:04.705284 initrd-setup-root[845]: cut: /sysroot/etc/group: No such file or directory Feb 13 19:33:04.710151 initrd-setup-root[852]: cut: /sysroot/etc/shadow: No such file or directory Feb 13 19:33:04.715632 initrd-setup-root[859]: cut: /sysroot/etc/gshadow: No such file or directory Feb 13 19:33:04.816756 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Feb 13 19:33:04.829058 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Feb 13 19:33:04.830793 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Feb 13 19:33:04.838903 kernel: BTRFS info (device vda6): last unmount of filesystem 60a376b4-1193-4e0b-af89-a0e6d698bf0f Feb 13 19:33:04.859733 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. 
Feb 13 19:33:04.868111 ignition[927]: INFO : Ignition 2.20.0 Feb 13 19:33:04.868111 ignition[927]: INFO : Stage: mount Feb 13 19:33:04.883114 ignition[927]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 13 19:33:04.883114 ignition[927]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 13 19:33:04.883114 ignition[927]: INFO : mount: mount passed Feb 13 19:33:04.883114 ignition[927]: INFO : Ignition finished successfully Feb 13 19:33:04.885349 systemd[1]: Finished ignition-mount.service - Ignition (mount). Feb 13 19:33:04.894178 systemd[1]: Starting ignition-files.service - Ignition (files)... Feb 13 19:33:05.072748 systemd[1]: sysroot-oem.mount: Deactivated successfully. Feb 13 19:33:05.081263 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Feb 13 19:33:05.090643 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/vda6 scanned by mount (941) Feb 13 19:33:05.090690 kernel: BTRFS info (device vda6): first mount of filesystem 60a376b4-1193-4e0b-af89-a0e6d698bf0f Feb 13 19:33:05.090718 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Feb 13 19:33:05.091715 kernel: BTRFS info (device vda6): using free space tree Feb 13 19:33:05.095992 kernel: BTRFS info (device vda6): auto enabling async discard Feb 13 19:33:05.097808 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Feb 13 19:33:05.141808 ignition[958]: INFO : Ignition 2.20.0 Feb 13 19:33:05.141808 ignition[958]: INFO : Stage: files Feb 13 19:33:05.144357 ignition[958]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 13 19:33:05.144357 ignition[958]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 13 19:33:05.144357 ignition[958]: DEBUG : files: compiled without relabeling support, skipping Feb 13 19:33:05.144357 ignition[958]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Feb 13 19:33:05.144357 ignition[958]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Feb 13 19:33:05.152491 ignition[958]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Feb 13 19:33:05.152491 ignition[958]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Feb 13 19:33:05.152491 ignition[958]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Feb 13 19:33:05.152491 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Feb 13 19:33:05.152491 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1 Feb 13 19:33:05.147849 unknown[958]: wrote ssh authorized keys file for user: core Feb 13 19:33:05.226191 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Feb 13 19:33:05.389749 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Feb 13 19:33:05.392207 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Feb 13 19:33:05.392207 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Feb 13 
19:33:05.392207 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Feb 13 19:33:05.392207 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Feb 13 19:33:05.392207 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Feb 13 19:33:05.392207 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Feb 13 19:33:05.392207 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Feb 13 19:33:05.392207 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Feb 13 19:33:05.407419 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Feb 13 19:33:05.407419 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Feb 13 19:33:05.407419 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw" Feb 13 19:33:05.407419 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw" Feb 13 19:33:05.407419 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw" Feb 13 19:33:05.407419 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET 
https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.32.0-x86-64.raw: attempt #1 Feb 13 19:33:05.654350 systemd-networkd[781]: eth0: Gained IPv6LL Feb 13 19:33:05.786624 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Feb 13 19:33:06.346406 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw" Feb 13 19:33:06.346406 ignition[958]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Feb 13 19:33:06.350521 ignition[958]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Feb 13 19:33:06.350521 ignition[958]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Feb 13 19:33:06.350521 ignition[958]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Feb 13 19:33:06.350521 ignition[958]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Feb 13 19:33:06.350521 ignition[958]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Feb 13 19:33:06.350521 ignition[958]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Feb 13 19:33:06.350521 ignition[958]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Feb 13 19:33:06.350521 ignition[958]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service" Feb 13 19:33:06.400204 ignition[958]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service" Feb 13 19:33:06.404720 ignition[958]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service" Feb 13 
19:33:06.407221 ignition[958]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service" Feb 13 19:33:06.407221 ignition[958]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" Feb 13 19:33:06.410982 ignition[958]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" Feb 13 19:33:06.413071 ignition[958]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" Feb 13 19:33:06.415559 ignition[958]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" Feb 13 19:33:06.417813 ignition[958]: INFO : files: files passed Feb 13 19:33:06.418823 ignition[958]: INFO : Ignition finished successfully Feb 13 19:33:06.423663 systemd[1]: Finished ignition-files.service - Ignition (files). Feb 13 19:33:06.440147 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Feb 13 19:33:06.441032 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Feb 13 19:33:06.449650 systemd[1]: ignition-quench.service: Deactivated successfully. Feb 13 19:33:06.449782 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Feb 13 19:33:06.453303 initrd-setup-root-after-ignition[987]: grep: /sysroot/oem/oem-release: No such file or directory Feb 13 19:33:06.454891 initrd-setup-root-after-ignition[989]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Feb 13 19:33:06.454891 initrd-setup-root-after-ignition[989]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Feb 13 19:33:06.459572 initrd-setup-root-after-ignition[993]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Feb 13 19:33:06.458223 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. 
Feb 13 19:33:06.459743 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Feb 13 19:33:06.482144 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Feb 13 19:33:06.506479 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Feb 13 19:33:06.506607 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Feb 13 19:33:06.507928 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Feb 13 19:33:06.510216 systemd[1]: Reached target initrd.target - Initrd Default Target.
Feb 13 19:33:06.512355 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Feb 13 19:33:06.513236 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Feb 13 19:33:06.533650 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Feb 13 19:33:06.547105 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Feb 13 19:33:06.560121 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Feb 13 19:33:06.561793 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Feb 13 19:33:06.562404 systemd[1]: Stopped target timers.target - Timer Units.
Feb 13 19:33:06.562877 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Feb 13 19:33:06.562990 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Feb 13 19:33:06.571307 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Feb 13 19:33:06.574541 systemd[1]: Stopped target basic.target - Basic System.
Feb 13 19:33:06.575994 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Feb 13 19:33:06.580031 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Feb 13 19:33:06.581719 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Feb 13 19:33:06.582495 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Feb 13 19:33:06.582952 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Feb 13 19:33:06.583754 systemd[1]: Stopped target sysinit.target - System Initialization.
Feb 13 19:33:06.584469 systemd[1]: Stopped target local-fs.target - Local File Systems.
Feb 13 19:33:06.584949 systemd[1]: Stopped target swap.target - Swaps.
Feb 13 19:33:06.585695 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Feb 13 19:33:06.585892 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Feb 13 19:33:06.602593 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Feb 13 19:33:06.603872 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Feb 13 19:33:06.604693 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Feb 13 19:33:06.604878 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Feb 13 19:33:06.610852 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Feb 13 19:33:06.611020 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Feb 13 19:33:06.614702 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Feb 13 19:33:06.614857 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Feb 13 19:33:06.617469 systemd[1]: Stopped target paths.target - Path Units.
Feb 13 19:33:06.619669 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Feb 13 19:33:06.624079 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Feb 13 19:33:06.625062 systemd[1]: Stopped target slices.target - Slice Units.
Feb 13 19:33:06.625636 systemd[1]: Stopped target sockets.target - Socket Units.
Feb 13 19:33:06.626070 systemd[1]: iscsid.socket: Deactivated successfully.
Feb 13 19:33:06.626226 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Feb 13 19:33:06.626694 systemd[1]: iscsiuio.socket: Deactivated successfully.
Feb 13 19:33:06.626842 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Feb 13 19:33:06.635214 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Feb 13 19:33:06.635362 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Feb 13 19:33:06.636438 systemd[1]: ignition-files.service: Deactivated successfully.
Feb 13 19:33:06.636639 systemd[1]: Stopped ignition-files.service - Ignition (files).
Feb 13 19:33:06.655119 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Feb 13 19:33:06.655955 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Feb 13 19:33:06.658250 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Feb 13 19:33:06.658367 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Feb 13 19:33:06.659515 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Feb 13 19:33:06.659624 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Feb 13 19:33:06.668814 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Feb 13 19:33:06.668937 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Feb 13 19:33:06.687133 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Feb 13 19:33:06.689665 ignition[1014]: INFO : Ignition 2.20.0
Feb 13 19:33:06.689665 ignition[1014]: INFO : Stage: umount
Feb 13 19:33:06.691475 ignition[1014]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 13 19:33:06.691475 ignition[1014]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Feb 13 19:33:06.691475 ignition[1014]: INFO : umount: umount passed
Feb 13 19:33:06.691475 ignition[1014]: INFO : Ignition finished successfully
Feb 13 19:33:06.693682 systemd[1]: ignition-mount.service: Deactivated successfully.
Feb 13 19:33:06.693803 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Feb 13 19:33:06.694779 systemd[1]: Stopped target network.target - Network.
Feb 13 19:33:06.697426 systemd[1]: ignition-disks.service: Deactivated successfully.
Feb 13 19:33:06.697478 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Feb 13 19:33:06.697708 systemd[1]: ignition-kargs.service: Deactivated successfully.
Feb 13 19:33:06.697749 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Feb 13 19:33:06.698249 systemd[1]: ignition-setup.service: Deactivated successfully.
Feb 13 19:33:06.698290 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Feb 13 19:33:06.698579 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Feb 13 19:33:06.698620 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Feb 13 19:33:06.699055 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Feb 13 19:33:06.699489 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Feb 13 19:33:06.711613 systemd[1]: systemd-resolved.service: Deactivated successfully.
Feb 13 19:33:06.711746 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Feb 13 19:33:06.714045 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Feb 13 19:33:06.714105 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Feb 13 19:33:06.718116 systemd-networkd[781]: eth0: DHCPv6 lease lost
Feb 13 19:33:06.721411 systemd[1]: systemd-networkd.service: Deactivated successfully.
Feb 13 19:33:06.721619 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Feb 13 19:33:06.724229 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Feb 13 19:33:06.724280 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Feb 13 19:33:06.730068 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Feb 13 19:33:06.730157 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Feb 13 19:33:06.730210 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Feb 13 19:33:06.730537 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Feb 13 19:33:06.730581 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Feb 13 19:33:06.730861 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Feb 13 19:33:06.730906 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Feb 13 19:33:06.731660 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Feb 13 19:33:06.746728 systemd[1]: network-cleanup.service: Deactivated successfully.
Feb 13 19:33:06.746902 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Feb 13 19:33:06.752908 systemd[1]: systemd-udevd.service: Deactivated successfully.
Feb 13 19:33:06.753113 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Feb 13 19:33:06.754242 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Feb 13 19:33:06.754292 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Feb 13 19:33:06.756388 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Feb 13 19:33:06.756431 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Feb 13 19:33:06.756695 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Feb 13 19:33:06.756742 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Feb 13 19:33:06.757578 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Feb 13 19:33:06.757636 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Feb 13 19:33:06.764669 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Feb 13 19:33:06.764726 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 19:33:06.772168 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Feb 13 19:33:06.773285 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Feb 13 19:33:06.773356 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Feb 13 19:33:06.775764 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Feb 13 19:33:06.775831 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Feb 13 19:33:06.777122 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Feb 13 19:33:06.777175 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Feb 13 19:33:06.779378 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Feb 13 19:33:06.779435 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 19:33:06.782151 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Feb 13 19:33:06.782279 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Feb 13 19:33:06.911134 systemd[1]: sysroot-boot.service: Deactivated successfully.
Feb 13 19:33:06.911286 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Feb 13 19:33:06.912496 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Feb 13 19:33:06.915073 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Feb 13 19:33:06.915125 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Feb 13 19:33:06.928105 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Feb 13 19:33:06.937226 systemd[1]: Switching root.
Feb 13 19:33:06.969687 systemd-journald[194]: Journal stopped
Feb 13 19:33:08.217836 systemd-journald[194]: Received SIGTERM from PID 1 (systemd).
Feb 13 19:33:08.217919 kernel: SELinux: policy capability network_peer_controls=1
Feb 13 19:33:08.217942 kernel: SELinux: policy capability open_perms=1
Feb 13 19:33:08.217993 kernel: SELinux: policy capability extended_socket_class=1
Feb 13 19:33:08.218007 kernel: SELinux: policy capability always_check_network=0
Feb 13 19:33:08.218019 kernel: SELinux: policy capability cgroup_seclabel=1
Feb 13 19:33:08.218033 kernel: SELinux: policy capability nnp_nosuid_transition=1
Feb 13 19:33:08.218045 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Feb 13 19:33:08.218062 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Feb 13 19:33:08.218073 kernel: audit: type=1403 audit(1739475187.401:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Feb 13 19:33:08.218086 systemd[1]: Successfully loaded SELinux policy in 45.485ms.
Feb 13 19:33:08.218108 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 13.568ms.
Feb 13 19:33:08.218126 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Feb 13 19:33:08.218143 systemd[1]: Detected virtualization kvm.
Feb 13 19:33:08.218163 systemd[1]: Detected architecture x86-64.
Feb 13 19:33:08.218183 systemd[1]: Detected first boot.
Feb 13 19:33:08.218201 systemd[1]: Initializing machine ID from VM UUID.
Feb 13 19:33:08.218216 zram_generator::config[1059]: No configuration found.
Feb 13 19:33:08.218235 systemd[1]: Populated /etc with preset unit settings.
Feb 13 19:33:08.218247 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Feb 13 19:33:08.218261 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Feb 13 19:33:08.218273 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Feb 13 19:33:08.218286 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Feb 13 19:33:08.218300 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Feb 13 19:33:08.218313 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Feb 13 19:33:08.218325 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Feb 13 19:33:08.218338 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Feb 13 19:33:08.218350 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Feb 13 19:33:08.218363 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Feb 13 19:33:08.218375 systemd[1]: Created slice user.slice - User and Session Slice.
Feb 13 19:33:08.218387 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Feb 13 19:33:08.218402 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Feb 13 19:33:08.218414 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Feb 13 19:33:08.218426 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Feb 13 19:33:08.218439 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Feb 13 19:33:08.218452 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Feb 13 19:33:08.218465 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Feb 13 19:33:08.218477 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Feb 13 19:33:08.218489 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Feb 13 19:33:08.218506 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Feb 13 19:33:08.218521 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Feb 13 19:33:08.218533 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Feb 13 19:33:08.218545 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Feb 13 19:33:08.218557 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Feb 13 19:33:08.218569 systemd[1]: Reached target slices.target - Slice Units.
Feb 13 19:33:08.218581 systemd[1]: Reached target swap.target - Swaps.
Feb 13 19:33:08.218594 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Feb 13 19:33:08.218606 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Feb 13 19:33:08.218620 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Feb 13 19:33:08.218633 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Feb 13 19:33:08.218645 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Feb 13 19:33:08.218658 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Feb 13 19:33:08.218670 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Feb 13 19:33:08.218682 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Feb 13 19:33:08.218694 systemd[1]: Mounting media.mount - External Media Directory...
Feb 13 19:33:08.218706 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 13 19:33:08.218720 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Feb 13 19:33:08.218735 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Feb 13 19:33:08.218747 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Feb 13 19:33:08.218769 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Feb 13 19:33:08.218782 systemd[1]: Reached target machines.target - Containers.
Feb 13 19:33:08.218794 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Feb 13 19:33:08.218806 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Feb 13 19:33:08.218819 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Feb 13 19:33:08.218831 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Feb 13 19:33:08.218846 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Feb 13 19:33:08.218858 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Feb 13 19:33:08.218870 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Feb 13 19:33:08.218883 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Feb 13 19:33:08.218895 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Feb 13 19:33:08.218908 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Feb 13 19:33:08.218920 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Feb 13 19:33:08.218932 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Feb 13 19:33:08.218945 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Feb 13 19:33:08.218984 systemd[1]: Stopped systemd-fsck-usr.service.
Feb 13 19:33:08.218997 kernel: fuse: init (API version 7.39)
Feb 13 19:33:08.219009 systemd[1]: Starting systemd-journald.service - Journal Service...
Feb 13 19:33:08.219022 kernel: loop: module loaded
Feb 13 19:33:08.219034 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Feb 13 19:33:08.219070 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Feb 13 19:33:08.219101 systemd-journald[1129]: Collecting audit messages is disabled.
Feb 13 19:33:08.219124 kernel: ACPI: bus type drm_connector registered
Feb 13 19:33:08.219140 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Feb 13 19:33:08.219152 systemd-journald[1129]: Journal started
Feb 13 19:33:08.219176 systemd-journald[1129]: Runtime Journal (/run/log/journal/57adda220a3e4a44941a03c1dc52ed1c) is 6.0M, max 48.3M, 42.3M free.
Feb 13 19:33:07.963449 systemd[1]: Queued start job for default target multi-user.target.
Feb 13 19:33:07.982332 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Feb 13 19:33:07.982827 systemd[1]: systemd-journald.service: Deactivated successfully.
Feb 13 19:33:08.223052 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Feb 13 19:33:08.224979 systemd[1]: verity-setup.service: Deactivated successfully.
Feb 13 19:33:08.225000 systemd[1]: Stopped verity-setup.service.
Feb 13 19:33:08.227980 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 13 19:33:08.230981 systemd[1]: Started systemd-journald.service - Journal Service.
Feb 13 19:33:08.232277 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Feb 13 19:33:08.233505 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Feb 13 19:33:08.234723 systemd[1]: Mounted media.mount - External Media Directory.
Feb 13 19:33:08.235856 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Feb 13 19:33:08.237101 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Feb 13 19:33:08.238332 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Feb 13 19:33:08.239607 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Feb 13 19:33:08.241081 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Feb 13 19:33:08.242657 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Feb 13 19:33:08.242838 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Feb 13 19:33:08.244318 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Feb 13 19:33:08.244489 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Feb 13 19:33:08.246137 systemd[1]: modprobe@drm.service: Deactivated successfully.
Feb 13 19:33:08.246311 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Feb 13 19:33:08.247702 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Feb 13 19:33:08.247883 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Feb 13 19:33:08.249438 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Feb 13 19:33:08.249609 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Feb 13 19:33:08.251056 systemd[1]: modprobe@loop.service: Deactivated successfully.
Feb 13 19:33:08.251224 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Feb 13 19:33:08.252651 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Feb 13 19:33:08.254087 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Feb 13 19:33:08.255865 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Feb 13 19:33:08.271427 systemd[1]: Reached target network-pre.target - Preparation for Network.
Feb 13 19:33:08.282039 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Feb 13 19:33:08.284639 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Feb 13 19:33:08.285801 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Feb 13 19:33:08.285838 systemd[1]: Reached target local-fs.target - Local File Systems.
Feb 13 19:33:08.287858 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Feb 13 19:33:08.290210 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Feb 13 19:33:08.291415 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Feb 13 19:33:08.293261 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Feb 13 19:33:08.294734 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Feb 13 19:33:08.297857 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Feb 13 19:33:08.299214 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Feb 13 19:33:08.300918 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Feb 13 19:33:08.302155 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Feb 13 19:33:08.316235 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Feb 13 19:33:08.321554 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Feb 13 19:33:08.326177 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Feb 13 19:33:08.329709 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Feb 13 19:33:08.332744 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Feb 13 19:33:08.334449 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Feb 13 19:33:08.349640 kernel: loop0: detected capacity change from 0 to 141000
Feb 13 19:33:08.365702 systemd-journald[1129]: Time spent on flushing to /var/log/journal/57adda220a3e4a44941a03c1dc52ed1c is 31.820ms for 956 entries.
Feb 13 19:33:08.365702 systemd-journald[1129]: System Journal (/var/log/journal/57adda220a3e4a44941a03c1dc52ed1c) is 8.0M, max 195.6M, 187.6M free.
Feb 13 19:33:08.409997 systemd-journald[1129]: Received client request to flush runtime journal.
Feb 13 19:33:08.410064 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Feb 13 19:33:08.361764 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Feb 13 19:33:08.364617 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Feb 13 19:33:08.377254 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Feb 13 19:33:08.378890 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Feb 13 19:33:08.384230 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Feb 13 19:33:08.412559 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Feb 13 19:33:08.414831 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Feb 13 19:33:08.420064 udevadm[1184]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Feb 13 19:33:08.421213 systemd-tmpfiles[1174]: ACLs are not supported, ignoring.
Feb 13 19:33:08.422056 systemd-tmpfiles[1174]: ACLs are not supported, ignoring.
Feb 13 19:33:08.430332 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Feb 13 19:33:08.438327 kernel: loop1: detected capacity change from 0 to 218376
Feb 13 19:33:08.438281 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Feb 13 19:33:08.444949 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Feb 13 19:33:08.445728 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Feb 13 19:33:08.476697 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Feb 13 19:33:08.485014 kernel: loop2: detected capacity change from 0 to 138184
Feb 13 19:33:08.490321 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Feb 13 19:33:08.525997 kernel: loop3: detected capacity change from 0 to 141000
Feb 13 19:33:08.531477 systemd-tmpfiles[1197]: ACLs are not supported, ignoring.
Feb 13 19:33:08.531508 systemd-tmpfiles[1197]: ACLs are not supported, ignoring.
Feb 13 19:33:08.540234 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Feb 13 19:33:08.545283 kernel: loop4: detected capacity change from 0 to 218376
Feb 13 19:33:08.553981 kernel: loop5: detected capacity change from 0 to 138184
Feb 13 19:33:08.564505 (sd-merge)[1200]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Feb 13 19:33:08.565170 (sd-merge)[1200]: Merged extensions into '/usr'.
Feb 13 19:33:08.612922 systemd[1]: Reloading requested from client PID 1173 ('systemd-sysext') (unit systemd-sysext.service)...
Feb 13 19:33:08.612941 systemd[1]: Reloading...
Feb 13 19:33:08.729988 zram_generator::config[1230]: No configuration found.
Feb 13 19:33:08.900437 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 13 19:33:08.901194 ldconfig[1168]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Feb 13 19:33:08.964356 systemd[1]: Reloading finished in 350 ms.
Feb 13 19:33:09.001540 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Feb 13 19:33:09.003132 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Feb 13 19:33:09.017133 systemd[1]: Starting ensure-sysext.service...
Feb 13 19:33:09.019226 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Feb 13 19:33:09.027878 systemd[1]: Reloading requested from client PID 1264 ('systemctl') (unit ensure-sysext.service)...
Feb 13 19:33:09.027887 systemd[1]: Reloading...
Feb 13 19:33:09.049703 systemd-tmpfiles[1265]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Feb 13 19:33:09.050341 systemd-tmpfiles[1265]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Feb 13 19:33:09.051404 systemd-tmpfiles[1265]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Feb 13 19:33:09.051772 systemd-tmpfiles[1265]: ACLs are not supported, ignoring.
Feb 13 19:33:09.051904 systemd-tmpfiles[1265]: ACLs are not supported, ignoring.
Feb 13 19:33:09.057761 systemd-tmpfiles[1265]: Detected autofs mount point /boot during canonicalization of boot.
Feb 13 19:33:09.057775 systemd-tmpfiles[1265]: Skipping /boot
Feb 13 19:33:09.077818 systemd-tmpfiles[1265]: Detected autofs mount point /boot during canonicalization of boot.
Feb 13 19:33:09.077941 systemd-tmpfiles[1265]: Skipping /boot
Feb 13 19:33:09.129991 zram_generator::config[1294]: No configuration found.
Feb 13 19:33:09.248046 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 13 19:33:09.298916 systemd[1]: Reloading finished in 270 ms.
Feb 13 19:33:09.322604 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Feb 13 19:33:09.345539 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Feb 13 19:33:09.356238 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Feb 13 19:33:09.358791 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Feb 13 19:33:09.361218 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Feb 13 19:33:09.366285 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Feb 13 19:33:09.370264 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Feb 13 19:33:09.377224 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Feb 13 19:33:09.381067 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 13 19:33:09.381276 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Feb 13 19:33:09.382625 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Feb 13 19:33:09.385588 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Feb 13 19:33:09.392199 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Feb 13 19:33:09.393701 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Feb 13 19:33:09.393822 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 13 19:33:09.399017 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Feb 13 19:33:09.401621 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Feb 13 19:33:09.401849 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Feb 13 19:33:09.404009 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Feb 13 19:33:09.404322 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Feb 13 19:33:09.417938 systemd-udevd[1335]: Using default interface naming scheme 'v255'.
Feb 13 19:33:09.420713 systemd[1]: modprobe@loop.service: Deactivated successfully.
Feb 13 19:33:09.420940 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Feb 13 19:33:09.425215 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Feb 13 19:33:09.432640 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 13 19:33:09.433073 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Feb 13 19:33:09.437111 augenrules[1364]: No rules
Feb 13 19:33:09.441365 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Feb 13 19:33:09.444328 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Feb 13 19:33:09.447228 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Feb 13 19:33:09.458036 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Feb 13 19:33:09.459259 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Feb 13 19:33:09.462175 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Feb 13 19:33:09.467251 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Feb 13 19:33:09.468350 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 13 19:33:09.470397 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Feb 13 19:33:09.473675 systemd[1]: audit-rules.service: Deactivated successfully.
Feb 13 19:33:09.473900 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Feb 13 19:33:09.475531 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Feb 13 19:33:09.477712 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Feb 13 19:33:09.477902 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Feb 13 19:33:09.479948 systemd[1]: modprobe@drm.service: Deactivated successfully.
Feb 13 19:33:09.480159 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Feb 13 19:33:09.485422 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Feb 13 19:33:09.485595 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Feb 13 19:33:09.487602 systemd[1]: modprobe@loop.service: Deactivated successfully.
Feb 13 19:33:09.487781 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Feb 13 19:33:09.497157 systemd[1]: Finished ensure-sysext.service.
Feb 13 19:33:09.517174 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Feb 13 19:33:09.518380 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Feb 13 19:33:09.518476 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Feb 13 19:33:09.523679 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Feb 13 19:33:09.526127 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Feb 13 19:33:09.526593 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Feb 13 19:33:09.528150 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Feb 13 19:33:09.534143 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Feb 13 19:33:09.650980 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
Feb 13 19:33:09.655991 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (1387)
Feb 13 19:33:09.656021 kernel: ACPI: button: Power Button [PWRF]
Feb 13 19:33:09.678010 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Feb 13 19:33:09.693599 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Feb 13 19:33:09.693838 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Feb 13 19:33:09.737996 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4
Feb 13 19:33:09.782318 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 19:33:09.805465 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Feb 13 19:33:09.822500 systemd-networkd[1402]: lo: Link UP
Feb 13 19:33:09.860044 kernel: mousedev: PS/2 mouse device common for all mice
Feb 13 19:33:09.822517 systemd-networkd[1402]: lo: Gained carrier
Feb 13 19:33:09.824181 systemd-networkd[1402]: Enumeration completed
Feb 13 19:33:09.849344 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Feb 13 19:33:09.849480 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Feb 13 19:33:09.850075 systemd[1]: Started systemd-networkd.service - Network Configuration.
Feb 13 19:33:09.850378 systemd[1]: Reached target time-set.target - System Time Set.
Feb 13 19:33:09.852553 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Feb 13 19:33:09.857283 systemd-networkd[1402]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 19:33:09.857288 systemd-networkd[1402]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Feb 13 19:33:09.858051 systemd-networkd[1402]: eth0: Link UP
Feb 13 19:33:09.858055 systemd-networkd[1402]: eth0: Gained carrier
Feb 13 19:33:09.858067 systemd-networkd[1402]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 19:33:09.872436 kernel: kvm_amd: TSC scaling supported
Feb 13 19:33:09.872532 kernel: kvm_amd: Nested Virtualization enabled
Feb 13 19:33:09.872559 kernel: kvm_amd: Nested Paging enabled
Feb 13 19:33:09.872582 kernel: kvm_amd: LBR virtualization supported
Feb 13 19:33:09.872606 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported
Feb 13 19:33:09.872628 kernel: kvm_amd: Virtual GIF supported
Feb 13 19:33:09.874682 systemd-resolved[1333]: Positive Trust Anchors:
Feb 13 19:33:09.875233 systemd-resolved[1333]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Feb 13 19:33:09.875357 systemd-resolved[1333]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Feb 13 19:33:09.885455 systemd-resolved[1333]: Defaulting to hostname 'linux'.
Feb 13 19:33:09.886132 systemd-networkd[1402]: eth0: DHCPv4 address 10.0.0.36/16, gateway 10.0.0.1 acquired from 10.0.0.1
Feb 13 19:33:09.887158 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Feb 13 19:33:09.887932 systemd-timesyncd[1404]: Network configuration changed, trying to establish connection.
Feb 13 19:33:09.888809 systemd-timesyncd[1404]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Feb 13 19:33:09.888853 systemd-timesyncd[1404]: Initial clock synchronization to Thu 2025-02-13 19:33:09.985243 UTC.
Feb 13 19:33:09.890690 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Feb 13 19:33:09.890841 systemd[1]: Reached target network.target - Network.
Feb 13 19:33:09.891766 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Feb 13 19:33:09.901999 kernel: EDAC MC: Ver: 3.0.0
Feb 13 19:33:09.992137 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Feb 13 19:33:10.019130 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Feb 13 19:33:10.020683 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 19:33:10.031383 lvm[1433]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Feb 13 19:33:10.070940 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Feb 13 19:33:10.072609 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Feb 13 19:33:10.073774 systemd[1]: Reached target sysinit.target - System Initialization.
Feb 13 19:33:10.074986 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Feb 13 19:33:10.076265 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Feb 13 19:33:10.077789 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Feb 13 19:33:10.079009 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Feb 13 19:33:10.080352 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Feb 13 19:33:10.081655 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Feb 13 19:33:10.081683 systemd[1]: Reached target paths.target - Path Units.
Feb 13 19:33:10.082620 systemd[1]: Reached target timers.target - Timer Units.
Feb 13 19:33:10.084550 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Feb 13 19:33:10.087478 systemd[1]: Starting docker.socket - Docker Socket for the API...
Feb 13 19:33:10.095738 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Feb 13 19:33:10.098320 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Feb 13 19:33:10.100080 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Feb 13 19:33:10.101299 systemd[1]: Reached target sockets.target - Socket Units.
Feb 13 19:33:10.102330 systemd[1]: Reached target basic.target - Basic System.
Feb 13 19:33:10.103320 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Feb 13 19:33:10.103361 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Feb 13 19:33:10.104445 systemd[1]: Starting containerd.service - containerd container runtime...
Feb 13 19:33:10.106592 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Feb 13 19:33:10.111206 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Feb 13 19:33:10.113607 lvm[1438]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Feb 13 19:33:10.115183 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Feb 13 19:33:10.116990 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Feb 13 19:33:10.119217 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Feb 13 19:33:10.121303 jq[1441]: false
Feb 13 19:33:10.122200 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Feb 13 19:33:10.126672 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Feb 13 19:33:10.130194 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Feb 13 19:33:10.137894 systemd[1]: Starting systemd-logind.service - User Login Management...
Feb 13 19:33:10.139525 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Feb 13 19:33:10.140042 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Feb 13 19:33:10.140741 systemd[1]: Starting update-engine.service - Update Engine...
Feb 13 19:33:10.141845 extend-filesystems[1442]: Found loop3
Feb 13 19:33:10.142837 extend-filesystems[1442]: Found loop4
Feb 13 19:33:10.142837 extend-filesystems[1442]: Found loop5
Feb 13 19:33:10.142837 extend-filesystems[1442]: Found sr0
Feb 13 19:33:10.142837 extend-filesystems[1442]: Found vda
Feb 13 19:33:10.142837 extend-filesystems[1442]: Found vda1
Feb 13 19:33:10.142837 extend-filesystems[1442]: Found vda2
Feb 13 19:33:10.142837 extend-filesystems[1442]: Found vda3
Feb 13 19:33:10.142837 extend-filesystems[1442]: Found usr
Feb 13 19:33:10.142837 extend-filesystems[1442]: Found vda4
Feb 13 19:33:10.142837 extend-filesystems[1442]: Found vda6
Feb 13 19:33:10.142837 extend-filesystems[1442]: Found vda7
Feb 13 19:33:10.142837 extend-filesystems[1442]: Found vda9
Feb 13 19:33:10.156571 extend-filesystems[1442]: Checking size of /dev/vda9
Feb 13 19:33:10.147300 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Feb 13 19:33:10.144852 dbus-daemon[1440]: [system] SELinux support is enabled
Feb 13 19:33:10.153143 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Feb 13 19:33:10.168010 extend-filesystems[1442]: Resized partition /dev/vda9
Feb 13 19:33:10.169132 update_engine[1450]: I20250213 19:33:10.168640 1450 main.cc:92] Flatcar Update Engine starting
Feb 13 19:33:10.170513 update_engine[1450]: I20250213 19:33:10.170290 1450 update_check_scheduler.cc:74] Next update check in 6m16s
Feb 13 19:33:10.170197 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Feb 13 19:33:10.170408 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Feb 13 19:33:10.171366 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Feb 13 19:33:10.172327 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Feb 13 19:33:10.180350 extend-filesystems[1464]: resize2fs 1.47.1 (20-May-2024)
Feb 13 19:33:10.213890 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Feb 13 19:33:10.213923 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (1377)
Feb 13 19:33:10.214423 jq[1453]: true
Feb 13 19:33:10.183134 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Feb 13 19:33:10.196183 systemd[1]: motdgen.service: Deactivated successfully.
Feb 13 19:33:10.216455 jq[1470]: true
Feb 13 19:33:10.228193 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Feb 13 19:33:10.196431 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Feb 13 19:33:10.248226 tar[1465]: linux-amd64/LICENSE
Feb 13 19:33:10.250810 extend-filesystems[1464]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Feb 13 19:33:10.250810 extend-filesystems[1464]: old_desc_blocks = 1, new_desc_blocks = 1
Feb 13 19:33:10.250810 extend-filesystems[1464]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Feb 13 19:33:10.203943 (ntainerd)[1471]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Feb 13 19:33:10.270806 tar[1465]: linux-amd64/helm
Feb 13 19:33:10.270934 extend-filesystems[1442]: Resized filesystem in /dev/vda9
Feb 13 19:33:10.204104 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Feb 13 19:33:10.207050 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Feb 13 19:33:10.207074 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Feb 13 19:33:10.209262 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Feb 13 19:33:10.209685 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Feb 13 19:33:10.236465 systemd[1]: Started update-engine.service - Update Engine.
Feb 13 19:33:10.248981 systemd-logind[1449]: Watching system buttons on /dev/input/event1 (Power Button)
Feb 13 19:33:10.249005 systemd-logind[1449]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Feb 13 19:33:10.249262 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Feb 13 19:33:10.252084 systemd-logind[1449]: New seat seat0.
Feb 13 19:33:10.254722 systemd[1]: Started systemd-logind.service - User Login Management.
Feb 13 19:33:10.258342 systemd[1]: extend-filesystems.service: Deactivated successfully.
Feb 13 19:33:10.258604 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Feb 13 19:33:10.277896 bash[1496]: Updated "/home/core/.ssh/authorized_keys"
Feb 13 19:33:10.281993 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Feb 13 19:33:10.284953 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Feb 13 19:33:10.314559 locksmithd[1495]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Feb 13 19:33:10.410709 sshd_keygen[1457]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Feb 13 19:33:10.456789 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Feb 13 19:33:10.467344 systemd[1]: Starting issuegen.service - Generate /run/issue...
Feb 13 19:33:10.473142 systemd[1]: Started sshd@0-10.0.0.36:22-10.0.0.1:43442.service - OpenSSH per-connection server daemon (10.0.0.1:43442).
Feb 13 19:33:10.479801 systemd[1]: issuegen.service: Deactivated successfully.
Feb 13 19:33:10.481430 systemd[1]: Finished issuegen.service - Generate /run/issue.
Feb 13 19:33:10.490515 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Feb 13 19:33:10.531691 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Feb 13 19:33:10.540300 systemd[1]: Started getty@tty1.service - Getty on tty1.
Feb 13 19:33:10.551380 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Feb 13 19:33:10.552888 systemd[1]: Reached target getty.target - Login Prompts.
Feb 13 19:33:10.574385 sshd[1518]: Accepted publickey for core from 10.0.0.1 port 43442 ssh2: RSA SHA256:M1JiL2vDeW2xAdjMAPzmPoLJm0DJz/C0inZYUswlgoA
Feb 13 19:33:10.626921 sshd-session[1518]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:33:10.628663 containerd[1471]: time="2025-02-13T19:33:10.628503825Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23
Feb 13 19:33:10.637540 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Feb 13 19:33:10.650108 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Feb 13 19:33:10.655205 systemd-logind[1449]: New session 1 of user core.
Feb 13 19:33:10.659756 containerd[1471]: time="2025-02-13T19:33:10.659707204Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Feb 13 19:33:10.662127 containerd[1471]: time="2025-02-13T19:33:10.662045876Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.74-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Feb 13 19:33:10.662207 containerd[1471]: time="2025-02-13T19:33:10.662188457Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Feb 13 19:33:10.662303 containerd[1471]: time="2025-02-13T19:33:10.662284219Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Feb 13 19:33:10.662582 containerd[1471]: time="2025-02-13T19:33:10.662562134Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Feb 13 19:33:10.662656 containerd[1471]: time="2025-02-13T19:33:10.662640913Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Feb 13 19:33:10.662828 containerd[1471]: time="2025-02-13T19:33:10.662803360Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Feb 13 19:33:10.662914 containerd[1471]: time="2025-02-13T19:33:10.662896230Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Feb 13 19:33:10.663242 containerd[1471]: time="2025-02-13T19:33:10.663216024Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Feb 13 19:33:10.663316 containerd[1471]: time="2025-02-13T19:33:10.663299137Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Feb 13 19:33:10.663385 containerd[1471]: time="2025-02-13T19:33:10.663368815Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Feb 13 19:33:10.663462 containerd[1471]: time="2025-02-13T19:33:10.663445841Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Feb 13 19:33:10.663645 containerd[1471]: time="2025-02-13T19:33:10.663624716Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Feb 13 19:33:10.664087 containerd[1471]: time="2025-02-13T19:33:10.664064705Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Feb 13 19:33:10.664330 containerd[1471]: time="2025-02-13T19:33:10.664304782Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Feb 13 19:33:10.664398 containerd[1471]: time="2025-02-13T19:33:10.664383531Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Feb 13 19:33:10.664586 containerd[1471]: time="2025-02-13T19:33:10.664565381Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Feb 13 19:33:10.664743 containerd[1471]: time="2025-02-13T19:33:10.664721205Z" level=info msg="metadata content store policy set" policy=shared
Feb 13 19:33:10.671144 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Feb 13 19:33:10.671776 containerd[1471]: time="2025-02-13T19:33:10.671718083Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Feb 13 19:33:10.671925 containerd[1471]: time="2025-02-13T19:33:10.671898834Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Feb 13 19:33:10.672052 containerd[1471]: time="2025-02-13T19:33:10.672027959Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Feb 13 19:33:10.672120 containerd[1471]: time="2025-02-13T19:33:10.672107827Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Feb 13 19:33:10.672247 containerd[1471]: time="2025-02-13T19:33:10.672180730Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Feb 13 19:33:10.672811 containerd[1471]: time="2025-02-13T19:33:10.672481403Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Feb 13 19:33:10.672867 containerd[1471]: time="2025-02-13T19:33:10.672787932Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Feb 13 19:33:10.673130 containerd[1471]: time="2025-02-13T19:33:10.673102001Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Feb 13 19:33:10.673197 containerd[1471]: time="2025-02-13T19:33:10.673184480Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Feb 13 19:33:10.673267 containerd[1471]: time="2025-02-13T19:33:10.673254509Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Feb 13 19:33:10.673343 containerd[1471]: time="2025-02-13T19:33:10.673308736Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Feb 13 19:33:10.673412 containerd[1471]: time="2025-02-13T19:33:10.673389732Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Feb 13 19:33:10.673475 containerd[1471]: time="2025-02-13T19:33:10.673462797Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Feb 13 19:33:10.673549 containerd[1471]: time="2025-02-13T19:33:10.673536244Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Feb 13 19:33:10.673640 containerd[1471]: time="2025-02-13T19:33:10.673624418Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Feb 13 19:33:10.673764 containerd[1471]: time="2025-02-13T19:33:10.673699800Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Feb 13 19:33:10.673764 containerd[1471]: time="2025-02-13T19:33:10.673719635Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Feb 13 19:33:10.673764 containerd[1471]: time="2025-02-13T19:33:10.673740994Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Feb 13 19:33:10.673913 containerd[1471]: time="2025-02-13T19:33:10.673860785Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Feb 13 19:33:10.673913 containerd[1471]: time="2025-02-13T19:33:10.673881034Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Feb 13 19:33:10.673913 containerd[1471]: time="2025-02-13T19:33:10.673893432Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Feb 13 19:33:10.674174 containerd[1471]: time="2025-02-13T19:33:10.674032374Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Feb 13 19:33:10.674174 containerd[1471]: time="2025-02-13T19:33:10.674054155Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Feb 13 19:33:10.674174 containerd[1471]: time="2025-02-13T19:33:10.674068185Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Feb 13 19:33:10.674174 containerd[1471]: time="2025-02-13T19:33:10.674080018Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Feb 13 19:33:10.674174 containerd[1471]: time="2025-02-13T19:33:10.674115033Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Feb 13 19:33:10.674174 containerd[1471]: time="2025-02-13T19:33:10.674141350Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Feb 13 19:33:10.674394 containerd[1471]: time="2025-02-13T19:33:10.674158778Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Feb 13 19:33:10.674394 containerd[1471]: time="2025-02-13T19:33:10.674341051Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Feb 13 19:33:10.674394 containerd[1471]: time="2025-02-13T19:33:10.674353508Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Feb 13 19:33:10.674394 containerd[1471]: time="2025-02-13T19:33:10.674365069Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Feb 13 19:33:10.674650 containerd[1471]: time="2025-02-13T19:33:10.674378837Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Feb 13 19:33:10.674650 containerd[1471]: time="2025-02-13T19:33:10.674529743Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Feb 13 19:33:10.674650 containerd[1471]: time="2025-02-13T19:33:10.674544278Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Feb 13 19:33:10.674650 containerd[1471]: time="2025-02-13T19:33:10.674570987Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Feb 13 19:33:10.674828 containerd[1471]: time="2025-02-13T19:33:10.674760225Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Feb 13 19:33:10.674828 containerd[1471]: time="2025-02-13T19:33:10.674785474Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Feb 13 19:33:10.674828 containerd[1471]: time="2025-02-13T19:33:10.674796117Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Feb 13 19:33:10.674972 containerd[1471]: time="2025-02-13T19:33:10.674808747Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Feb 13 19:33:10.674972 containerd[1471]: time="2025-02-13T19:33:10.674919911Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Feb 13 19:33:10.674972 containerd[1471]: time="2025-02-13T19:33:10.674938869Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Feb 13 19:33:10.674972 containerd[1471]: time="2025-02-13T19:33:10.674949190Z" level=info msg="NRI interface is disabled by configuration."
Feb 13 19:33:10.675138 containerd[1471]: time="2025-02-13T19:33:10.675070423Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Feb 13 19:33:10.675583 containerd[1471]: time="2025-02-13T19:33:10.675526529Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false
UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Feb 13 19:33:10.676072 containerd[1471]: time="2025-02-13T19:33:10.675844498Z" level=info msg="Connect containerd service" Feb 13 19:33:10.676072 containerd[1471]: time="2025-02-13T19:33:10.675918540Z" level=info msg="using legacy CRI server" Feb 13 19:33:10.676072 containerd[1471]: time="2025-02-13T19:33:10.675929314Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Feb 13 19:33:10.676230 containerd[1471]: time="2025-02-13T19:33:10.676212853Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Feb 13 19:33:10.677167 containerd[1471]: time="2025-02-13T19:33:10.677020129Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 13 19:33:10.677297 containerd[1471]: time="2025-02-13T19:33:10.677208379Z" level=info msg="Start subscribing containerd event" Feb 13 19:33:10.677398 containerd[1471]: time="2025-02-13T19:33:10.677350173Z" level=info msg="Start recovering state" Feb 13 19:33:10.677575 containerd[1471]: time="2025-02-13T19:33:10.677548442Z" level=info msg=serving... 
address=/run/containerd/containerd.sock.ttrpc Feb 13 19:33:10.677609 containerd[1471]: time="2025-02-13T19:33:10.677600279Z" level=info msg="Start event monitor" Feb 13 19:33:10.677788 containerd[1471]: time="2025-02-13T19:33:10.677742749Z" level=info msg=serving... address=/run/containerd/containerd.sock Feb 13 19:33:10.678338 containerd[1471]: time="2025-02-13T19:33:10.678301048Z" level=info msg="Start snapshots syncer" Feb 13 19:33:10.678338 containerd[1471]: time="2025-02-13T19:33:10.678337332Z" level=info msg="Start cni network conf syncer for default" Feb 13 19:33:10.678401 containerd[1471]: time="2025-02-13T19:33:10.678348873Z" level=info msg="Start streaming server" Feb 13 19:33:10.679053 containerd[1471]: time="2025-02-13T19:33:10.678426261Z" level=info msg="containerd successfully booted in 0.111863s" Feb 13 19:33:10.682333 systemd[1]: Starting user@500.service - User Manager for UID 500... Feb 13 19:33:10.684420 systemd[1]: Started containerd.service - containerd container runtime. Feb 13 19:33:10.694696 (systemd)[1533]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Feb 13 19:33:10.912371 systemd[1533]: Queued start job for default target default.target. Feb 13 19:33:10.922336 tar[1465]: linux-amd64/README.md Feb 13 19:33:10.923509 systemd[1533]: Created slice app.slice - User Application Slice. Feb 13 19:33:10.923545 systemd[1533]: Reached target paths.target - Paths. Feb 13 19:33:10.923560 systemd[1533]: Reached target timers.target - Timers. Feb 13 19:33:10.925206 systemd[1533]: Starting dbus.socket - D-Bus User Message Bus Socket... Feb 13 19:33:11.002388 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Feb 13 19:33:11.006349 systemd[1533]: Listening on dbus.socket - D-Bus User Message Bus Socket. Feb 13 19:33:11.006498 systemd[1533]: Reached target sockets.target - Sockets. Feb 13 19:33:11.006517 systemd[1533]: Reached target basic.target - Basic System. 
Feb 13 19:33:11.006556 systemd[1533]: Reached target default.target - Main User Target. Feb 13 19:33:11.006592 systemd[1533]: Startup finished in 301ms. Feb 13 19:33:11.006958 systemd[1]: Started user@500.service - User Manager for UID 500. Feb 13 19:33:11.018102 systemd[1]: Started session-1.scope - Session 1 of User core. Feb 13 19:33:11.080474 systemd[1]: Started sshd@1-10.0.0.36:22-10.0.0.1:43454.service - OpenSSH per-connection server daemon (10.0.0.1:43454). Feb 13 19:33:11.095155 systemd-networkd[1402]: eth0: Gained IPv6LL Feb 13 19:33:11.099227 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Feb 13 19:33:11.101284 systemd[1]: Reached target network-online.target - Network is Online. Feb 13 19:33:11.118241 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Feb 13 19:33:11.120956 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:33:11.123320 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Feb 13 19:33:11.144502 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Feb 13 19:33:11.146578 systemd[1]: coreos-metadata.service: Deactivated successfully. Feb 13 19:33:11.146785 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Feb 13 19:33:11.150045 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Feb 13 19:33:11.152213 sshd[1547]: Accepted publickey for core from 10.0.0.1 port 43454 ssh2: RSA SHA256:M1JiL2vDeW2xAdjMAPzmPoLJm0DJz/C0inZYUswlgoA Feb 13 19:33:11.153909 sshd-session[1547]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:33:11.158422 systemd-logind[1449]: New session 2 of user core. Feb 13 19:33:11.178103 systemd[1]: Started session-2.scope - Session 2 of User core. 
Feb 13 19:33:11.233600 sshd[1566]: Connection closed by 10.0.0.1 port 43454 Feb 13 19:33:11.234015 sshd-session[1547]: pam_unix(sshd:session): session closed for user core Feb 13 19:33:11.245848 systemd[1]: sshd@1-10.0.0.36:22-10.0.0.1:43454.service: Deactivated successfully. Feb 13 19:33:11.247714 systemd[1]: session-2.scope: Deactivated successfully. Feb 13 19:33:11.249275 systemd-logind[1449]: Session 2 logged out. Waiting for processes to exit. Feb 13 19:33:11.260237 systemd[1]: Started sshd@2-10.0.0.36:22-10.0.0.1:43462.service - OpenSSH per-connection server daemon (10.0.0.1:43462). Feb 13 19:33:11.262727 systemd-logind[1449]: Removed session 2. Feb 13 19:33:11.296423 sshd[1571]: Accepted publickey for core from 10.0.0.1 port 43462 ssh2: RSA SHA256:M1JiL2vDeW2xAdjMAPzmPoLJm0DJz/C0inZYUswlgoA Feb 13 19:33:11.297861 sshd-session[1571]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:33:11.301620 systemd-logind[1449]: New session 3 of user core. Feb 13 19:33:11.313107 systemd[1]: Started session-3.scope - Session 3 of User core. Feb 13 19:33:11.369085 sshd[1573]: Connection closed by 10.0.0.1 port 43462 Feb 13 19:33:11.369467 sshd-session[1571]: pam_unix(sshd:session): session closed for user core Feb 13 19:33:11.372889 systemd[1]: sshd@2-10.0.0.36:22-10.0.0.1:43462.service: Deactivated successfully. Feb 13 19:33:11.374620 systemd[1]: session-3.scope: Deactivated successfully. Feb 13 19:33:11.375226 systemd-logind[1449]: Session 3 logged out. Waiting for processes to exit. Feb 13 19:33:11.376020 systemd-logind[1449]: Removed session 3. Feb 13 19:33:12.503324 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:33:12.505387 systemd[1]: Reached target multi-user.target - Multi-User System. Feb 13 19:33:12.507827 systemd[1]: Startup finished in 1.020s (kernel) + 5.667s (initrd) + 5.149s (userspace) = 11.836s. 
Feb 13 19:33:12.526339 (kubelet)[1582]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 19:33:12.555655 agetty[1526]: failed to open credentials directory Feb 13 19:33:12.555813 agetty[1524]: failed to open credentials directory Feb 13 19:33:13.254225 kubelet[1582]: E0213 19:33:13.254145 1582 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 19:33:13.259142 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 19:33:13.259365 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 19:33:13.259724 systemd[1]: kubelet.service: Consumed 1.960s CPU time. Feb 13 19:33:21.427462 systemd[1]: Started sshd@3-10.0.0.36:22-10.0.0.1:41750.service - OpenSSH per-connection server daemon (10.0.0.1:41750). Feb 13 19:33:21.471497 sshd[1595]: Accepted publickey for core from 10.0.0.1 port 41750 ssh2: RSA SHA256:M1JiL2vDeW2xAdjMAPzmPoLJm0DJz/C0inZYUswlgoA Feb 13 19:33:21.473472 sshd-session[1595]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:33:21.478134 systemd-logind[1449]: New session 4 of user core. Feb 13 19:33:21.488095 systemd[1]: Started session-4.scope - Session 4 of User core. Feb 13 19:33:21.544503 sshd[1597]: Connection closed by 10.0.0.1 port 41750 Feb 13 19:33:21.544883 sshd-session[1595]: pam_unix(sshd:session): session closed for user core Feb 13 19:33:21.559780 systemd[1]: sshd@3-10.0.0.36:22-10.0.0.1:41750.service: Deactivated successfully. Feb 13 19:33:21.561596 systemd[1]: session-4.scope: Deactivated successfully. Feb 13 19:33:21.563039 systemd-logind[1449]: Session 4 logged out. Waiting for processes to exit. 
Feb 13 19:33:21.576277 systemd[1]: Started sshd@4-10.0.0.36:22-10.0.0.1:41752.service - OpenSSH per-connection server daemon (10.0.0.1:41752). Feb 13 19:33:21.577236 systemd-logind[1449]: Removed session 4. Feb 13 19:33:21.615281 sshd[1602]: Accepted publickey for core from 10.0.0.1 port 41752 ssh2: RSA SHA256:M1JiL2vDeW2xAdjMAPzmPoLJm0DJz/C0inZYUswlgoA Feb 13 19:33:21.617148 sshd-session[1602]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:33:21.621341 systemd-logind[1449]: New session 5 of user core. Feb 13 19:33:21.629118 systemd[1]: Started session-5.scope - Session 5 of User core. Feb 13 19:33:21.677612 sshd[1604]: Connection closed by 10.0.0.1 port 41752 Feb 13 19:33:21.677883 sshd-session[1602]: pam_unix(sshd:session): session closed for user core Feb 13 19:33:21.692750 systemd[1]: sshd@4-10.0.0.36:22-10.0.0.1:41752.service: Deactivated successfully. Feb 13 19:33:21.694549 systemd[1]: session-5.scope: Deactivated successfully. Feb 13 19:33:21.695928 systemd-logind[1449]: Session 5 logged out. Waiting for processes to exit. Feb 13 19:33:21.705242 systemd[1]: Started sshd@5-10.0.0.36:22-10.0.0.1:41754.service - OpenSSH per-connection server daemon (10.0.0.1:41754). Feb 13 19:33:21.706241 systemd-logind[1449]: Removed session 5. Feb 13 19:33:21.743363 sshd[1609]: Accepted publickey for core from 10.0.0.1 port 41754 ssh2: RSA SHA256:M1JiL2vDeW2xAdjMAPzmPoLJm0DJz/C0inZYUswlgoA Feb 13 19:33:21.744773 sshd-session[1609]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:33:21.748752 systemd-logind[1449]: New session 6 of user core. Feb 13 19:33:21.762086 systemd[1]: Started session-6.scope - Session 6 of User core. Feb 13 19:33:21.815629 sshd[1611]: Connection closed by 10.0.0.1 port 41754 Feb 13 19:33:21.816026 sshd-session[1609]: pam_unix(sshd:session): session closed for user core Feb 13 19:33:21.827569 systemd[1]: sshd@5-10.0.0.36:22-10.0.0.1:41754.service: Deactivated successfully. 
Feb 13 19:33:21.829223 systemd[1]: session-6.scope: Deactivated successfully. Feb 13 19:33:21.830497 systemd-logind[1449]: Session 6 logged out. Waiting for processes to exit. Feb 13 19:33:21.831733 systemd[1]: Started sshd@6-10.0.0.36:22-10.0.0.1:41758.service - OpenSSH per-connection server daemon (10.0.0.1:41758). Feb 13 19:33:21.832416 systemd-logind[1449]: Removed session 6. Feb 13 19:33:21.873036 sshd[1616]: Accepted publickey for core from 10.0.0.1 port 41758 ssh2: RSA SHA256:M1JiL2vDeW2xAdjMAPzmPoLJm0DJz/C0inZYUswlgoA Feb 13 19:33:21.874399 sshd-session[1616]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:33:21.878196 systemd-logind[1449]: New session 7 of user core. Feb 13 19:33:21.893070 systemd[1]: Started session-7.scope - Session 7 of User core. Feb 13 19:33:21.949252 sudo[1619]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Feb 13 19:33:21.949570 sudo[1619]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 19:33:21.968739 sudo[1619]: pam_unix(sudo:session): session closed for user root Feb 13 19:33:21.970139 sshd[1618]: Connection closed by 10.0.0.1 port 41758 Feb 13 19:33:21.970579 sshd-session[1616]: pam_unix(sshd:session): session closed for user core Feb 13 19:33:21.983636 systemd[1]: sshd@6-10.0.0.36:22-10.0.0.1:41758.service: Deactivated successfully. Feb 13 19:33:21.985392 systemd[1]: session-7.scope: Deactivated successfully. Feb 13 19:33:21.986768 systemd-logind[1449]: Session 7 logged out. Waiting for processes to exit. Feb 13 19:33:21.988182 systemd[1]: Started sshd@7-10.0.0.36:22-10.0.0.1:41772.service - OpenSSH per-connection server daemon (10.0.0.1:41772). Feb 13 19:33:21.989013 systemd-logind[1449]: Removed session 7. 
Feb 13 19:33:22.029799 sshd[1624]: Accepted publickey for core from 10.0.0.1 port 41772 ssh2: RSA SHA256:M1JiL2vDeW2xAdjMAPzmPoLJm0DJz/C0inZYUswlgoA Feb 13 19:33:22.031337 sshd-session[1624]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:33:22.035816 systemd-logind[1449]: New session 8 of user core. Feb 13 19:33:22.046220 systemd[1]: Started session-8.scope - Session 8 of User core. Feb 13 19:33:22.100514 sudo[1628]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Feb 13 19:33:22.100868 sudo[1628]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 19:33:22.105106 sudo[1628]: pam_unix(sudo:session): session closed for user root Feb 13 19:33:22.111591 sudo[1627]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Feb 13 19:33:22.111938 sudo[1627]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 19:33:22.132327 systemd[1]: Starting audit-rules.service - Load Audit Rules... Feb 13 19:33:22.164275 augenrules[1650]: No rules Feb 13 19:33:22.165381 systemd[1]: audit-rules.service: Deactivated successfully. Feb 13 19:33:22.165616 systemd[1]: Finished audit-rules.service - Load Audit Rules. Feb 13 19:33:22.166889 sudo[1627]: pam_unix(sudo:session): session closed for user root Feb 13 19:33:22.168453 sshd[1626]: Connection closed by 10.0.0.1 port 41772 Feb 13 19:33:22.168840 sshd-session[1624]: pam_unix(sshd:session): session closed for user core Feb 13 19:33:22.180218 systemd[1]: sshd@7-10.0.0.36:22-10.0.0.1:41772.service: Deactivated successfully. Feb 13 19:33:22.182897 systemd[1]: session-8.scope: Deactivated successfully. Feb 13 19:33:22.185247 systemd-logind[1449]: Session 8 logged out. Waiting for processes to exit. 
Feb 13 19:33:22.200398 systemd[1]: Started sshd@8-10.0.0.36:22-10.0.0.1:41786.service - OpenSSH per-connection server daemon (10.0.0.1:41786). Feb 13 19:33:22.201523 systemd-logind[1449]: Removed session 8. Feb 13 19:33:22.236329 sshd[1658]: Accepted publickey for core from 10.0.0.1 port 41786 ssh2: RSA SHA256:M1JiL2vDeW2xAdjMAPzmPoLJm0DJz/C0inZYUswlgoA Feb 13 19:33:22.237610 sshd-session[1658]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:33:22.242293 systemd-logind[1449]: New session 9 of user core. Feb 13 19:33:22.251149 systemd[1]: Started session-9.scope - Session 9 of User core. Feb 13 19:33:22.305262 sudo[1661]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Feb 13 19:33:22.305586 sudo[1661]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 19:33:22.849247 systemd[1]: Starting docker.service - Docker Application Container Engine... Feb 13 19:33:22.849379 (dockerd)[1683]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Feb 13 19:33:23.510009 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Feb 13 19:33:23.519957 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:33:23.605493 dockerd[1683]: time="2025-02-13T19:33:23.605363391Z" level=info msg="Starting up" Feb 13 19:33:23.815040 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Feb 13 19:33:23.820531 (kubelet)[1715]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 19:33:24.057879 kubelet[1715]: E0213 19:33:24.057825 1715 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 19:33:24.065403 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 19:33:24.065654 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 19:33:24.084428 dockerd[1683]: time="2025-02-13T19:33:24.084383253Z" level=info msg="Loading containers: start." Feb 13 19:33:24.268994 kernel: Initializing XFRM netlink socket Feb 13 19:33:24.356302 systemd-networkd[1402]: docker0: Link UP Feb 13 19:33:24.400070 dockerd[1683]: time="2025-02-13T19:33:24.400013272Z" level=info msg="Loading containers: done." Feb 13 19:33:24.416657 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1470536064-merged.mount: Deactivated successfully. 
Feb 13 19:33:24.419807 dockerd[1683]: time="2025-02-13T19:33:24.419737619Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Feb 13 19:33:24.419930 dockerd[1683]: time="2025-02-13T19:33:24.419876346Z" level=info msg="Docker daemon" commit=41ca978a0a5400cc24b274137efa9f25517fcc0b containerd-snapshotter=false storage-driver=overlay2 version=27.3.1 Feb 13 19:33:24.420054 dockerd[1683]: time="2025-02-13T19:33:24.420025550Z" level=info msg="Daemon has completed initialization" Feb 13 19:33:24.587127 dockerd[1683]: time="2025-02-13T19:33:24.587030815Z" level=info msg="API listen on /run/docker.sock" Feb 13 19:33:24.587402 systemd[1]: Started docker.service - Docker Application Container Engine. Feb 13 19:33:25.452415 containerd[1471]: time="2025-02-13T19:33:25.452341339Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.2\"" Feb 13 19:33:26.990046 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2977381720.mount: Deactivated successfully. 
Feb 13 19:33:31.729147 containerd[1471]: time="2025-02-13T19:33:31.729071292Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:33:31.729986 containerd[1471]: time="2025-02-13T19:33:31.729866376Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.2: active requests=0, bytes read=28673931" Feb 13 19:33:31.731623 containerd[1471]: time="2025-02-13T19:33:31.731565365Z" level=info msg="ImageCreate event name:\"sha256:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:33:31.735367 containerd[1471]: time="2025-02-13T19:33:31.735317679Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:c47449f3e751588ea0cb74e325e0f83db335a415f4f4c7fb147375dd6c84757f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:33:31.736606 containerd[1471]: time="2025-02-13T19:33:31.736555951Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.2\" with image id \"sha256:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.2\", repo digest \"registry.k8s.io/kube-apiserver@sha256:c47449f3e751588ea0cb74e325e0f83db335a415f4f4c7fb147375dd6c84757f\", size \"28670731\" in 6.284137813s" Feb 13 19:33:31.736668 containerd[1471]: time="2025-02-13T19:33:31.736612284Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.2\" returns image reference \"sha256:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef\"" Feb 13 19:33:31.737576 containerd[1471]: time="2025-02-13T19:33:31.737530379Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.2\"" Feb 13 19:33:33.435468 containerd[1471]: time="2025-02-13T19:33:33.435354140Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.2\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:33:33.436937 containerd[1471]: time="2025-02-13T19:33:33.436882018Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.2: active requests=0, bytes read=24771784" Feb 13 19:33:33.439449 containerd[1471]: time="2025-02-13T19:33:33.439400824Z" level=info msg="ImageCreate event name:\"sha256:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:33:33.443314 containerd[1471]: time="2025-02-13T19:33:33.443233785Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:399aa50f4d1361c59dc458e634506d02de32613d03a9a614a21058741162ef90\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:33:33.444681 containerd[1471]: time="2025-02-13T19:33:33.444616041Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.2\" with image id \"sha256:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.2\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:399aa50f4d1361c59dc458e634506d02de32613d03a9a614a21058741162ef90\", size \"26259392\" in 1.707040339s" Feb 13 19:33:33.444681 containerd[1471]: time="2025-02-13T19:33:33.444677952Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.2\" returns image reference \"sha256:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389\"" Feb 13 19:33:33.445389 containerd[1471]: time="2025-02-13T19:33:33.445295772Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.2\"" Feb 13 19:33:34.316190 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Feb 13 19:33:34.334317 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:33:34.564825 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Feb 13 19:33:34.570677 (kubelet)[1964]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 19:33:34.613327 kubelet[1964]: E0213 19:33:34.613262 1964 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 19:33:34.617568 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 19:33:34.617843 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 19:33:37.528752 containerd[1471]: time="2025-02-13T19:33:37.528684899Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:33:37.567179 containerd[1471]: time="2025-02-13T19:33:37.567086533Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.2: active requests=0, bytes read=19170276" Feb 13 19:33:37.590844 containerd[1471]: time="2025-02-13T19:33:37.590793637Z" level=info msg="ImageCreate event name:\"sha256:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:33:37.621372 containerd[1471]: time="2025-02-13T19:33:37.621307985Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:45710d74cfd5aa10a001d0cf81747b77c28617444ffee0503d12f1dcd7450f76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:33:37.622530 containerd[1471]: time="2025-02-13T19:33:37.622448816Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.2\" with image id \"sha256:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.2\", repo 
digest \"registry.k8s.io/kube-scheduler@sha256:45710d74cfd5aa10a001d0cf81747b77c28617444ffee0503d12f1dcd7450f76\", size \"20657902\" in 4.17710678s" Feb 13 19:33:37.622530 containerd[1471]: time="2025-02-13T19:33:37.622519402Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.2\" returns image reference \"sha256:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d\"" Feb 13 19:33:37.623176 containerd[1471]: time="2025-02-13T19:33:37.623091478Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.2\"" Feb 13 19:33:39.224902 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount798994779.mount: Deactivated successfully. Feb 13 19:33:39.834613 containerd[1471]: time="2025-02-13T19:33:39.834501354Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:33:39.835468 containerd[1471]: time="2025-02-13T19:33:39.835385883Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.2: active requests=0, bytes read=30908839" Feb 13 19:33:39.836736 containerd[1471]: time="2025-02-13T19:33:39.836685593Z" level=info msg="ImageCreate event name:\"sha256:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:33:39.840488 containerd[1471]: time="2025-02-13T19:33:39.840418369Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:83c025f0faa6799fab6645102a98138e39a9a7db2be3bc792c79d72659b1805d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:33:39.841220 containerd[1471]: time="2025-02-13T19:33:39.841164027Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.2\" with image id \"sha256:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5\", repo tag \"registry.k8s.io/kube-proxy:v1.32.2\", repo digest 
\"registry.k8s.io/kube-proxy@sha256:83c025f0faa6799fab6645102a98138e39a9a7db2be3bc792c79d72659b1805d\", size \"30907858\" in 2.218030528s" Feb 13 19:33:39.841220 containerd[1471]: time="2025-02-13T19:33:39.841206727Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.2\" returns image reference \"sha256:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5\"" Feb 13 19:33:39.841935 containerd[1471]: time="2025-02-13T19:33:39.841889890Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Feb 13 19:33:40.725536 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1663390486.mount: Deactivated successfully. Feb 13 19:33:42.045922 containerd[1471]: time="2025-02-13T19:33:42.045844787Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:33:42.046838 containerd[1471]: time="2025-02-13T19:33:42.046790933Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241" Feb 13 19:33:42.048449 containerd[1471]: time="2025-02-13T19:33:42.048406465Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:33:42.052277 containerd[1471]: time="2025-02-13T19:33:42.052204174Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:33:42.053764 containerd[1471]: time="2025-02-13T19:33:42.053711110Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest 
\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 2.21178131s" Feb 13 19:33:42.053764 containerd[1471]: time="2025-02-13T19:33:42.053753853Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Feb 13 19:33:42.054421 containerd[1471]: time="2025-02-13T19:33:42.054399828Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Feb 13 19:33:44.386058 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2084649081.mount: Deactivated successfully. Feb 13 19:33:44.393093 containerd[1471]: time="2025-02-13T19:33:44.393026354Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:33:44.394139 containerd[1471]: time="2025-02-13T19:33:44.394097501Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Feb 13 19:33:44.395758 containerd[1471]: time="2025-02-13T19:33:44.395723170Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:33:44.398592 containerd[1471]: time="2025-02-13T19:33:44.398557916Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:33:44.399434 containerd[1471]: time="2025-02-13T19:33:44.399370095Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 2.344940259s" Feb 13 
19:33:44.399434 containerd[1471]: time="2025-02-13T19:33:44.399429656Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Feb 13 19:33:44.400006 containerd[1471]: time="2025-02-13T19:33:44.399951033Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Feb 13 19:33:44.741418 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Feb 13 19:33:44.756329 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:33:44.969041 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:33:44.974652 (kubelet)[2049]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 19:33:45.102092 kubelet[2049]: E0213 19:33:45.101390 2049 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 19:33:45.106483 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 19:33:45.106680 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 19:33:45.273463 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2108342456.mount: Deactivated successfully. 
Feb 13 19:33:48.287232 containerd[1471]: time="2025-02-13T19:33:48.287146032Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:33:48.288068 containerd[1471]: time="2025-02-13T19:33:48.287987937Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57551320" Feb 13 19:33:48.289372 containerd[1471]: time="2025-02-13T19:33:48.289316366Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:33:48.292750 containerd[1471]: time="2025-02-13T19:33:48.292685120Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:33:48.294297 containerd[1471]: time="2025-02-13T19:33:48.294262975Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 3.894112837s" Feb 13 19:33:48.294297 containerd[1471]: time="2025-02-13T19:33:48.294296095Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" Feb 13 19:33:50.820346 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:33:50.832273 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:33:50.862739 systemd[1]: Reloading requested from client PID 2141 ('systemctl') (unit session-9.scope)... Feb 13 19:33:50.862757 systemd[1]: Reloading... 
Feb 13 19:33:50.973027 zram_generator::config[2183]: No configuration found. Feb 13 19:33:51.580265 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 19:33:51.663073 systemd[1]: Reloading finished in 799 ms. Feb 13 19:33:51.719860 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Feb 13 19:33:51.720120 systemd[1]: kubelet.service: Failed with result 'signal'. Feb 13 19:33:51.720391 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:33:51.723183 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:33:51.898923 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:33:51.905381 (kubelet)[2229]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Feb 13 19:33:51.958725 kubelet[2229]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 19:33:51.958725 kubelet[2229]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Feb 13 19:33:51.958725 kubelet[2229]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Feb 13 19:33:51.959335 kubelet[2229]: I0213 19:33:51.958792 2229 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 13 19:33:52.471764 kubelet[2229]: I0213 19:33:52.471695 2229 server.go:520] "Kubelet version" kubeletVersion="v1.32.0" Feb 13 19:33:52.471764 kubelet[2229]: I0213 19:33:52.471747 2229 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 13 19:33:52.472117 kubelet[2229]: I0213 19:33:52.472082 2229 server.go:954] "Client rotation is on, will bootstrap in background" Feb 13 19:33:52.498131 kubelet[2229]: E0213 19:33:52.498072 2229 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.36:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.36:6443: connect: connection refused" logger="UnhandledError" Feb 13 19:33:52.499190 kubelet[2229]: I0213 19:33:52.499136 2229 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 13 19:33:52.510265 kubelet[2229]: E0213 19:33:52.510162 2229 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Feb 13 19:33:52.510265 kubelet[2229]: I0213 19:33:52.510248 2229 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Feb 13 19:33:52.516928 kubelet[2229]: I0213 19:33:52.516856 2229 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Feb 13 19:33:52.519713 kubelet[2229]: I0213 19:33:52.519628 2229 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 13 19:33:52.519938 kubelet[2229]: I0213 19:33:52.519682 2229 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Feb 13 19:33:52.520073 kubelet[2229]: I0213 19:33:52.519948 2229 topology_manager.go:138] "Creating topology manager with none policy" 
Feb 13 19:33:52.520073 kubelet[2229]: I0213 19:33:52.520053 2229 container_manager_linux.go:304] "Creating device plugin manager" Feb 13 19:33:52.520295 kubelet[2229]: I0213 19:33:52.520265 2229 state_mem.go:36] "Initialized new in-memory state store" Feb 13 19:33:52.525552 kubelet[2229]: I0213 19:33:52.525481 2229 kubelet.go:446] "Attempting to sync node with API server" Feb 13 19:33:52.525552 kubelet[2229]: I0213 19:33:52.525519 2229 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 13 19:33:52.525552 kubelet[2229]: I0213 19:33:52.525557 2229 kubelet.go:352] "Adding apiserver pod source" Feb 13 19:33:52.525734 kubelet[2229]: I0213 19:33:52.525581 2229 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 13 19:33:52.530651 kubelet[2229]: I0213 19:33:52.530612 2229 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Feb 13 19:33:52.531860 kubelet[2229]: I0213 19:33:52.531211 2229 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 13 19:33:52.531860 kubelet[2229]: W0213 19:33:52.531321 2229 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Feb 13 19:33:52.533833 kubelet[2229]: W0213 19:33:52.533603 2229 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.36:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.36:6443: connect: connection refused Feb 13 19:33:52.533833 kubelet[2229]: E0213 19:33:52.533683 2229 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.36:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.36:6443: connect: connection refused" logger="UnhandledError" Feb 13 19:33:52.534538 kubelet[2229]: I0213 19:33:52.534371 2229 watchdog_linux.go:99] "Systemd watchdog is not enabled" Feb 13 19:33:52.534538 kubelet[2229]: I0213 19:33:52.534426 2229 server.go:1287] "Started kubelet" Feb 13 19:33:52.535233 kubelet[2229]: I0213 19:33:52.535186 2229 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Feb 13 19:33:52.535741 kubelet[2229]: W0213 19:33:52.535281 2229 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.36:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.36:6443: connect: connection refused Feb 13 19:33:52.535741 kubelet[2229]: E0213 19:33:52.535325 2229 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.36:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.36:6443: connect: connection refused" logger="UnhandledError" Feb 13 19:33:52.536168 kubelet[2229]: I0213 19:33:52.536097 2229 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 13 19:33:52.536625 kubelet[2229]: I0213 19:33:52.536609 
2229 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 13 19:33:52.537068 kubelet[2229]: I0213 19:33:52.537044 2229 server.go:490] "Adding debug handlers to kubelet server" Feb 13 19:33:52.538818 kubelet[2229]: I0213 19:33:52.538572 2229 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 13 19:33:52.538818 kubelet[2229]: I0213 19:33:52.538718 2229 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Feb 13 19:33:52.540380 kubelet[2229]: E0213 19:33:52.539035 2229 kubelet.go:1561] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 13 19:33:52.540380 kubelet[2229]: E0213 19:33:52.539097 2229 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 19:33:52.540380 kubelet[2229]: I0213 19:33:52.539126 2229 volume_manager.go:297] "Starting Kubelet Volume Manager" Feb 13 19:33:52.540380 kubelet[2229]: I0213 19:33:52.539282 2229 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Feb 13 19:33:52.540380 kubelet[2229]: I0213 19:33:52.539357 2229 reconciler.go:26] "Reconciler: start to sync state" Feb 13 19:33:52.540380 kubelet[2229]: W0213 19:33:52.539657 2229 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.36:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.36:6443: connect: connection refused Feb 13 19:33:52.540380 kubelet[2229]: E0213 19:33:52.539686 2229 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.36:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.36:6443: connect: 
connection refused" logger="UnhandledError" Feb 13 19:33:52.540380 kubelet[2229]: E0213 19:33:52.539884 2229 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.36:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.36:6443: connect: connection refused" interval="200ms" Feb 13 19:33:52.540590 kubelet[2229]: E0213 19:33:52.539088 2229 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.36:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.36:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1823db8202cd83c1 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-02-13 19:33:52.534397889 +0000 UTC m=+0.623528417,LastTimestamp:2025-02-13 19:33:52.534397889 +0000 UTC m=+0.623528417,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Feb 13 19:33:52.541089 kubelet[2229]: I0213 19:33:52.541040 2229 factory.go:221] Registration of the systemd container factory successfully Feb 13 19:33:52.541379 kubelet[2229]: I0213 19:33:52.541355 2229 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Feb 13 19:33:52.542340 kubelet[2229]: I0213 19:33:52.542312 2229 factory.go:221] Registration of the containerd container factory successfully Feb 13 19:33:52.557797 kubelet[2229]: I0213 19:33:52.557759 2229 cpu_manager.go:221] "Starting CPU manager" policy="none" Feb 13 19:33:52.557797 kubelet[2229]: I0213 19:33:52.557783 2229 cpu_manager.go:222] "Reconciling" 
reconcilePeriod="10s" Feb 13 19:33:52.557797 kubelet[2229]: I0213 19:33:52.557808 2229 state_mem.go:36] "Initialized new in-memory state store" Feb 13 19:33:52.561811 kubelet[2229]: I0213 19:33:52.561748 2229 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 13 19:33:52.563536 kubelet[2229]: I0213 19:33:52.563517 2229 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Feb 13 19:33:52.563536 kubelet[2229]: I0213 19:33:52.563587 2229 status_manager.go:227] "Starting to sync pod status with apiserver" Feb 13 19:33:52.563536 kubelet[2229]: I0213 19:33:52.563623 2229 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Feb 13 19:33:52.563536 kubelet[2229]: I0213 19:33:52.563639 2229 kubelet.go:2388] "Starting kubelet main sync loop" Feb 13 19:33:52.563536 kubelet[2229]: E0213 19:33:52.563715 2229 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 13 19:33:52.565009 kubelet[2229]: W0213 19:33:52.564982 2229 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.36:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.36:6443: connect: connection refused Feb 13 19:33:52.565154 kubelet[2229]: E0213 19:33:52.565132 2229 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.36:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.36:6443: connect: connection refused" logger="UnhandledError" Feb 13 19:33:52.639594 kubelet[2229]: E0213 19:33:52.639522 2229 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 19:33:52.663994 kubelet[2229]: 
E0213 19:33:52.663899 2229 kubelet.go:2412] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Feb 13 19:33:52.740457 kubelet[2229]: E0213 19:33:52.740291 2229 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 19:33:52.740781 kubelet[2229]: E0213 19:33:52.740751 2229 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.36:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.36:6443: connect: connection refused" interval="400ms" Feb 13 19:33:52.841167 kubelet[2229]: E0213 19:33:52.841098 2229 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 19:33:52.864230 kubelet[2229]: E0213 19:33:52.864189 2229 kubelet.go:2412] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Feb 13 19:33:52.942139 kubelet[2229]: E0213 19:33:52.942091 2229 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 19:33:53.043271 kubelet[2229]: E0213 19:33:53.043219 2229 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 19:33:53.142132 kubelet[2229]: E0213 19:33:53.142081 2229 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.36:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.36:6443: connect: connection refused" interval="800ms" Feb 13 19:33:53.144211 kubelet[2229]: E0213 19:33:53.144165 2229 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 19:33:53.244643 kubelet[2229]: E0213 19:33:53.244549 2229 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 
13 19:33:53.264818 kubelet[2229]: E0213 19:33:53.264754 2229 kubelet.go:2412] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Feb 13 19:33:53.345445 kubelet[2229]: E0213 19:33:53.345280 2229 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 19:33:53.445845 kubelet[2229]: E0213 19:33:53.445786 2229 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 19:33:53.466411 kubelet[2229]: E0213 19:33:53.466287 2229 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.36:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.36:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1823db8202cd83c1 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-02-13 19:33:52.534397889 +0000 UTC m=+0.623528417,LastTimestamp:2025-02-13 19:33:52.534397889 +0000 UTC m=+0.623528417,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Feb 13 19:33:53.508266 kubelet[2229]: W0213 19:33:53.508177 2229 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.36:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.36:6443: connect: connection refused Feb 13 19:33:53.508266 kubelet[2229]: E0213 19:33:53.508261 2229 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get 
\"https://10.0.0.36:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.36:6443: connect: connection refused" logger="UnhandledError" Feb 13 19:33:53.546882 kubelet[2229]: E0213 19:33:53.546791 2229 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 19:33:53.644353 kubelet[2229]: W0213 19:33:53.644173 2229 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.36:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.36:6443: connect: connection refused Feb 13 19:33:53.644353 kubelet[2229]: E0213 19:33:53.644249 2229 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.36:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.36:6443: connect: connection refused" logger="UnhandledError" Feb 13 19:33:53.647676 kubelet[2229]: E0213 19:33:53.647625 2229 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 19:33:53.748451 kubelet[2229]: E0213 19:33:53.748332 2229 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 19:33:53.788279 kubelet[2229]: W0213 19:33:53.788177 2229 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.36:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.36:6443: connect: connection refused Feb 13 19:33:53.788279 kubelet[2229]: E0213 19:33:53.788263 2229 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get 
\"https://10.0.0.36:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.36:6443: connect: connection refused" logger="UnhandledError" Feb 13 19:33:53.811146 kubelet[2229]: W0213 19:33:53.811090 2229 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.36:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.36:6443: connect: connection refused Feb 13 19:33:53.811146 kubelet[2229]: E0213 19:33:53.811141 2229 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.36:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.36:6443: connect: connection refused" logger="UnhandledError" Feb 13 19:33:53.848775 kubelet[2229]: E0213 19:33:53.848733 2229 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 19:33:53.943719 kubelet[2229]: E0213 19:33:53.943564 2229 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.36:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.36:6443: connect: connection refused" interval="1.6s" Feb 13 19:33:53.949678 kubelet[2229]: E0213 19:33:53.949605 2229 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 19:33:54.049993 kubelet[2229]: E0213 19:33:54.049892 2229 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 19:33:54.065169 kubelet[2229]: E0213 19:33:54.065103 2229 kubelet.go:2412] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Feb 13 19:33:54.150841 kubelet[2229]: E0213 19:33:54.150778 2229 kubelet_node_status.go:467] "Error getting 
the current node from lister" err="node \"localhost\" not found" Feb 13 19:33:54.251296 kubelet[2229]: E0213 19:33:54.251237 2229 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 19:33:54.351918 kubelet[2229]: E0213 19:33:54.351843 2229 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 19:33:54.452457 kubelet[2229]: E0213 19:33:54.452381 2229 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 19:33:54.493061 kubelet[2229]: I0213 19:33:54.492948 2229 policy_none.go:49] "None policy: Start" Feb 13 19:33:54.493061 kubelet[2229]: I0213 19:33:54.493035 2229 memory_manager.go:186] "Starting memorymanager" policy="None" Feb 13 19:33:54.493061 kubelet[2229]: I0213 19:33:54.493057 2229 state_mem.go:35] "Initializing new in-memory state store" Feb 13 19:33:54.553161 kubelet[2229]: E0213 19:33:54.553035 2229 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 19:33:54.602505 kubelet[2229]: E0213 19:33:54.602466 2229 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.36:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.36:6443: connect: connection refused" logger="UnhandledError" Feb 13 19:33:54.654142 kubelet[2229]: E0213 19:33:54.654082 2229 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 19:33:54.733938 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Feb 13 19:33:54.750646 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. 
Feb 13 19:33:54.753733 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Feb 13 19:33:54.754272 kubelet[2229]: E0213 19:33:54.754240 2229 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 19:33:54.763026 kubelet[2229]: I0213 19:33:54.762988 2229 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 13 19:33:54.763277 kubelet[2229]: I0213 19:33:54.763261 2229 eviction_manager.go:189] "Eviction manager: starting control loop" Feb 13 19:33:54.763333 kubelet[2229]: I0213 19:33:54.763284 2229 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 13 19:33:54.763567 kubelet[2229]: I0213 19:33:54.763518 2229 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 13 19:33:54.764275 kubelet[2229]: E0213 19:33:54.764251 2229 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Feb 13 19:33:54.764342 kubelet[2229]: E0213 19:33:54.764330 2229 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Feb 13 19:33:54.864944 kubelet[2229]: I0213 19:33:54.864765 2229 kubelet_node_status.go:76] "Attempting to register node" node="localhost" Feb 13 19:33:54.865193 kubelet[2229]: E0213 19:33:54.865146 2229 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.0.0.36:6443/api/v1/nodes\": dial tcp 10.0.0.36:6443: connect: connection refused" node="localhost" Feb 13 19:33:55.067379 kubelet[2229]: I0213 19:33:55.067321 2229 kubelet_node_status.go:76] "Attempting to register node" node="localhost" Feb 13 19:33:55.067888 kubelet[2229]: E0213 19:33:55.067725 2229 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.0.0.36:6443/api/v1/nodes\": dial tcp 10.0.0.36:6443: connect: connection refused" node="localhost" Feb 13 19:33:55.393156 kubelet[2229]: W0213 19:33:55.393084 2229 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.36:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.36:6443: connect: connection refused Feb 13 19:33:55.393156 kubelet[2229]: E0213 19:33:55.393140 2229 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.36:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.36:6443: connect: connection refused" logger="UnhandledError" Feb 13 19:33:55.468922 kubelet[2229]: I0213 19:33:55.468895 2229 kubelet_node_status.go:76] "Attempting to register node" node="localhost" Feb 13 19:33:55.469280 kubelet[2229]: E0213 19:33:55.469235 2229 kubelet_node_status.go:108] "Unable to register node with API server" 
err="Post \"https://10.0.0.36:6443/api/v1/nodes\": dial tcp 10.0.0.36:6443: connect: connection refused" node="localhost" Feb 13 19:33:55.495741 kubelet[2229]: W0213 19:33:55.495693 2229 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.36:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.36:6443: connect: connection refused Feb 13 19:33:55.495741 kubelet[2229]: E0213 19:33:55.495735 2229 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.36:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.36:6443: connect: connection refused" logger="UnhandledError" Feb 13 19:33:55.544387 kubelet[2229]: E0213 19:33:55.544351 2229 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.36:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.36:6443: connect: connection refused" interval="3.2s" Feb 13 19:33:55.672381 update_engine[1450]: I20250213 19:33:55.672060 1450 update_attempter.cc:509] Updating boot flags... Feb 13 19:33:55.675514 systemd[1]: Created slice kubepods-burstable-podabb8cae9b69c5ef831bae3cd53c12fea.slice - libcontainer container kubepods-burstable-podabb8cae9b69c5ef831bae3cd53c12fea.slice. Feb 13 19:33:55.684369 kubelet[2229]: E0213 19:33:55.684330 2229 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Feb 13 19:33:55.688460 systemd[1]: Created slice kubepods-burstable-pod95ef9ac46cd4dbaadc63cb713310ae59.slice - libcontainer container kubepods-burstable-pod95ef9ac46cd4dbaadc63cb713310ae59.slice. 
Feb 13 19:33:55.691053 kubelet[2229]: E0213 19:33:55.691035 2229 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Feb 13 19:33:55.698513 systemd[1]: Created slice kubepods-burstable-podc72911152bbceda2f57fd8d59261e015.slice - libcontainer container kubepods-burstable-podc72911152bbceda2f57fd8d59261e015.slice. Feb 13 19:33:55.705997 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (2271) Feb 13 19:33:55.747006 kubelet[2229]: E0213 19:33:55.712935 2229 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Feb 13 19:33:55.758757 kubelet[2229]: I0213 19:33:55.758481 2229 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c72911152bbceda2f57fd8d59261e015-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"c72911152bbceda2f57fd8d59261e015\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 19:33:55.758757 kubelet[2229]: I0213 19:33:55.758524 2229 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c72911152bbceda2f57fd8d59261e015-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"c72911152bbceda2f57fd8d59261e015\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 19:33:55.758757 kubelet[2229]: I0213 19:33:55.758546 2229 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c72911152bbceda2f57fd8d59261e015-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"c72911152bbceda2f57fd8d59261e015\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 
19:33:55.758757 kubelet[2229]: I0213 19:33:55.758561 2229 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/95ef9ac46cd4dbaadc63cb713310ae59-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"95ef9ac46cd4dbaadc63cb713310ae59\") " pod="kube-system/kube-scheduler-localhost" Feb 13 19:33:55.758757 kubelet[2229]: I0213 19:33:55.758578 2229 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/c72911152bbceda2f57fd8d59261e015-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"c72911152bbceda2f57fd8d59261e015\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 19:33:55.758941 kubelet[2229]: I0213 19:33:55.758591 2229 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/abb8cae9b69c5ef831bae3cd53c12fea-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"abb8cae9b69c5ef831bae3cd53c12fea\") " pod="kube-system/kube-apiserver-localhost" Feb 13 19:33:55.758941 kubelet[2229]: I0213 19:33:55.758606 2229 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/abb8cae9b69c5ef831bae3cd53c12fea-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"abb8cae9b69c5ef831bae3cd53c12fea\") " pod="kube-system/kube-apiserver-localhost" Feb 13 19:33:55.758941 kubelet[2229]: I0213 19:33:55.758619 2229 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c72911152bbceda2f57fd8d59261e015-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"c72911152bbceda2f57fd8d59261e015\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 19:33:55.758941 
kubelet[2229]: I0213 19:33:55.758632 2229 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/abb8cae9b69c5ef831bae3cd53c12fea-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"abb8cae9b69c5ef831bae3cd53c12fea\") " pod="kube-system/kube-apiserver-localhost" Feb 13 19:33:55.792059 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (2270) Feb 13 19:33:55.840998 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (2270) Feb 13 19:33:55.986045 kubelet[2229]: E0213 19:33:55.985799 2229 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:33:55.986746 containerd[1471]: time="2025-02-13T19:33:55.986650755Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:abb8cae9b69c5ef831bae3cd53c12fea,Namespace:kube-system,Attempt:0,}" Feb 13 19:33:55.992030 kubelet[2229]: E0213 19:33:55.991993 2229 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:33:55.992584 containerd[1471]: time="2025-02-13T19:33:55.992549332Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:95ef9ac46cd4dbaadc63cb713310ae59,Namespace:kube-system,Attempt:0,}" Feb 13 19:33:56.014058 kubelet[2229]: E0213 19:33:56.014006 2229 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:33:56.014460 containerd[1471]: time="2025-02-13T19:33:56.014424150Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:c72911152bbceda2f57fd8d59261e015,Namespace:kube-system,Attempt:0,}" Feb 13 19:33:56.148526 kubelet[2229]: W0213 19:33:56.148467 2229 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.36:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.36:6443: connect: connection refused Feb 13 19:33:56.148526 kubelet[2229]: E0213 19:33:56.148513 2229 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.36:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.36:6443: connect: connection refused" logger="UnhandledError" Feb 13 19:33:56.270921 kubelet[2229]: I0213 19:33:56.270861 2229 kubelet_node_status.go:76] "Attempting to register node" node="localhost" Feb 13 19:33:56.271394 kubelet[2229]: E0213 19:33:56.271344 2229 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.0.0.36:6443/api/v1/nodes\": dial tcp 10.0.0.36:6443: connect: connection refused" node="localhost" Feb 13 19:33:56.429561 kubelet[2229]: W0213 19:33:56.429505 2229 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.36:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.36:6443: connect: connection refused Feb 13 19:33:56.429561 kubelet[2229]: E0213 19:33:56.429555 2229 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.36:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.36:6443: connect: connection refused" logger="UnhandledError" Feb 13 19:33:56.455904 systemd[1]: 
var-lib-containerd-tmpmounts-containerd\x2dmount1164752008.mount: Deactivated successfully. Feb 13 19:33:56.464172 containerd[1471]: time="2025-02-13T19:33:56.464098141Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 19:33:56.467077 containerd[1471]: time="2025-02-13T19:33:56.467024434Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Feb 13 19:33:56.468235 containerd[1471]: time="2025-02-13T19:33:56.468164815Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 19:33:56.470065 containerd[1471]: time="2025-02-13T19:33:56.470012692Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 19:33:56.471227 containerd[1471]: time="2025-02-13T19:33:56.471174109Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Feb 13 19:33:56.472210 containerd[1471]: time="2025-02-13T19:33:56.472166758Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 19:33:56.473245 containerd[1471]: time="2025-02-13T19:33:56.473148803Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Feb 13 19:33:56.474396 containerd[1471]: time="2025-02-13T19:33:56.474351951Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" 
value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 19:33:56.475334 containerd[1471]: time="2025-02-13T19:33:56.475288888Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 488.49612ms" Feb 13 19:33:56.478187 containerd[1471]: time="2025-02-13T19:33:56.478140758Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 485.466029ms" Feb 13 19:33:56.480590 containerd[1471]: time="2025-02-13T19:33:56.480541453Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 466.053844ms" Feb 13 19:33:56.709595 containerd[1471]: time="2025-02-13T19:33:56.705934984Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:33:56.709595 containerd[1471]: time="2025-02-13T19:33:56.709020845Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:33:56.709595 containerd[1471]: time="2025-02-13T19:33:56.709057946Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:33:56.709595 containerd[1471]: time="2025-02-13T19:33:56.709310488Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:33:56.732329 containerd[1471]: time="2025-02-13T19:33:56.732083236Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:33:56.732329 containerd[1471]: time="2025-02-13T19:33:56.732279706Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:33:56.732329 containerd[1471]: time="2025-02-13T19:33:56.732299799Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:33:56.732744 containerd[1471]: time="2025-02-13T19:33:56.732552992Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:33:56.743213 systemd[1]: Started cri-containerd-cdcecee28f8376dc9fb1543f673fb58584f2fc3902a23b71278ff4f7805ec785.scope - libcontainer container cdcecee28f8376dc9fb1543f673fb58584f2fc3902a23b71278ff4f7805ec785. Feb 13 19:33:56.744069 containerd[1471]: time="2025-02-13T19:33:56.743707068Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:33:56.744273 containerd[1471]: time="2025-02-13T19:33:56.744228628Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:33:56.745997 containerd[1471]: time="2025-02-13T19:33:56.745167008Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:33:56.745997 containerd[1471]: time="2025-02-13T19:33:56.745289536Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:33:56.771127 systemd[1]: Started cri-containerd-79c8dcdfe0375d00516765ee2f3901466557c13caa4f8543f5b1f7d4ccfd88dd.scope - libcontainer container 79c8dcdfe0375d00516765ee2f3901466557c13caa4f8543f5b1f7d4ccfd88dd. Feb 13 19:33:56.776430 systemd[1]: Started cri-containerd-816213d14dee07c9a2dd4fa39c4ff50c63c0bb9dde2ce308b3c99b7f33202256.scope - libcontainer container 816213d14dee07c9a2dd4fa39c4ff50c63c0bb9dde2ce308b3c99b7f33202256. Feb 13 19:33:56.838392 containerd[1471]: time="2025-02-13T19:33:56.838112602Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:c72911152bbceda2f57fd8d59261e015,Namespace:kube-system,Attempt:0,} returns sandbox id \"cdcecee28f8376dc9fb1543f673fb58584f2fc3902a23b71278ff4f7805ec785\"" Feb 13 19:33:56.842334 kubelet[2229]: E0213 19:33:56.842093 2229 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:33:56.844911 containerd[1471]: time="2025-02-13T19:33:56.844755994Z" level=info msg="CreateContainer within sandbox \"cdcecee28f8376dc9fb1543f673fb58584f2fc3902a23b71278ff4f7805ec785\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Feb 13 19:33:56.847700 containerd[1471]: time="2025-02-13T19:33:56.847652602Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:95ef9ac46cd4dbaadc63cb713310ae59,Namespace:kube-system,Attempt:0,} returns sandbox id \"79c8dcdfe0375d00516765ee2f3901466557c13caa4f8543f5b1f7d4ccfd88dd\"" Feb 13 19:33:56.849649 kubelet[2229]: E0213 19:33:56.849615 2229 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were 
exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:33:56.853444 containerd[1471]: time="2025-02-13T19:33:56.853394335Z" level=info msg="CreateContainer within sandbox \"79c8dcdfe0375d00516765ee2f3901466557c13caa4f8543f5b1f7d4ccfd88dd\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Feb 13 19:33:56.857693 containerd[1471]: time="2025-02-13T19:33:56.857653239Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:abb8cae9b69c5ef831bae3cd53c12fea,Namespace:kube-system,Attempt:0,} returns sandbox id \"816213d14dee07c9a2dd4fa39c4ff50c63c0bb9dde2ce308b3c99b7f33202256\"" Feb 13 19:33:56.858629 kubelet[2229]: E0213 19:33:56.858535 2229 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:33:56.860800 containerd[1471]: time="2025-02-13T19:33:56.860730641Z" level=info msg="CreateContainer within sandbox \"816213d14dee07c9a2dd4fa39c4ff50c63c0bb9dde2ce308b3c99b7f33202256\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Feb 13 19:33:57.022615 containerd[1471]: time="2025-02-13T19:33:57.022552320Z" level=info msg="CreateContainer within sandbox \"cdcecee28f8376dc9fb1543f673fb58584f2fc3902a23b71278ff4f7805ec785\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"51afb6989a3988e5b4d4382f23185ef3447eafd20536ad5027037c75e301ff19\"" Feb 13 19:33:57.023286 containerd[1471]: time="2025-02-13T19:33:57.023260774Z" level=info msg="StartContainer for \"51afb6989a3988e5b4d4382f23185ef3447eafd20536ad5027037c75e301ff19\"" Feb 13 19:33:57.056138 systemd[1]: Started cri-containerd-51afb6989a3988e5b4d4382f23185ef3447eafd20536ad5027037c75e301ff19.scope - libcontainer container 51afb6989a3988e5b4d4382f23185ef3447eafd20536ad5027037c75e301ff19. 
Feb 13 19:33:57.190833 containerd[1471]: time="2025-02-13T19:33:57.190756439Z" level=info msg="StartContainer for \"51afb6989a3988e5b4d4382f23185ef3447eafd20536ad5027037c75e301ff19\" returns successfully" Feb 13 19:33:57.211832 containerd[1471]: time="2025-02-13T19:33:57.211761931Z" level=info msg="CreateContainer within sandbox \"816213d14dee07c9a2dd4fa39c4ff50c63c0bb9dde2ce308b3c99b7f33202256\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"57db943eaa5a0a47f6a810186b20c3924c454ea60739af34ca7d35f12dcb8497\"" Feb 13 19:33:57.212530 containerd[1471]: time="2025-02-13T19:33:57.212453318Z" level=info msg="StartContainer for \"57db943eaa5a0a47f6a810186b20c3924c454ea60739af34ca7d35f12dcb8497\"" Feb 13 19:33:57.217006 containerd[1471]: time="2025-02-13T19:33:57.216914318Z" level=info msg="CreateContainer within sandbox \"79c8dcdfe0375d00516765ee2f3901466557c13caa4f8543f5b1f7d4ccfd88dd\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"7eb6d9cbb7dccf63516dc1d723cd480ed5cf987ade10468850bb8eeca3d2ed27\"" Feb 13 19:33:57.217912 containerd[1471]: time="2025-02-13T19:33:57.217845343Z" level=info msg="StartContainer for \"7eb6d9cbb7dccf63516dc1d723cd480ed5cf987ade10468850bb8eeca3d2ed27\"" Feb 13 19:33:57.259623 systemd[1]: Started cri-containerd-57db943eaa5a0a47f6a810186b20c3924c454ea60739af34ca7d35f12dcb8497.scope - libcontainer container 57db943eaa5a0a47f6a810186b20c3924c454ea60739af34ca7d35f12dcb8497. Feb 13 19:33:57.279164 systemd[1]: Started cri-containerd-7eb6d9cbb7dccf63516dc1d723cd480ed5cf987ade10468850bb8eeca3d2ed27.scope - libcontainer container 7eb6d9cbb7dccf63516dc1d723cd480ed5cf987ade10468850bb8eeca3d2ed27. 
Feb 13 19:33:57.367390 containerd[1471]: time="2025-02-13T19:33:57.367329259Z" level=info msg="StartContainer for \"7eb6d9cbb7dccf63516dc1d723cd480ed5cf987ade10468850bb8eeca3d2ed27\" returns successfully" Feb 13 19:33:57.367578 containerd[1471]: time="2025-02-13T19:33:57.367440790Z" level=info msg="StartContainer for \"57db943eaa5a0a47f6a810186b20c3924c454ea60739af34ca7d35f12dcb8497\" returns successfully" Feb 13 19:33:57.590376 kubelet[2229]: E0213 19:33:57.590244 2229 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Feb 13 19:33:57.590376 kubelet[2229]: E0213 19:33:57.590361 2229 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:33:57.590878 kubelet[2229]: E0213 19:33:57.590704 2229 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Feb 13 19:33:57.590878 kubelet[2229]: E0213 19:33:57.590812 2229 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:33:57.592978 kubelet[2229]: E0213 19:33:57.592187 2229 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Feb 13 19:33:57.592978 kubelet[2229]: E0213 19:33:57.592268 2229 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:33:57.876852 kubelet[2229]: I0213 19:33:57.872998 2229 kubelet_node_status.go:76] "Attempting to register node" node="localhost" Feb 13 19:33:58.595463 kubelet[2229]: E0213 19:33:58.595424 2229 kubelet.go:3196] "No need to 
create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Feb 13 19:33:58.595936 kubelet[2229]: E0213 19:33:58.595597 2229 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:33:58.595936 kubelet[2229]: E0213 19:33:58.595702 2229 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Feb 13 19:33:58.595936 kubelet[2229]: E0213 19:33:58.595892 2229 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:33:58.910851 kubelet[2229]: E0213 19:33:58.910695 2229 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Feb 13 19:33:59.190309 kubelet[2229]: I0213 19:33:59.190124 2229 kubelet_node_status.go:79] "Successfully registered node" node="localhost" Feb 13 19:33:59.190309 kubelet[2229]: E0213 19:33:59.190170 2229 kubelet_node_status.go:549] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Feb 13 19:33:59.205349 kubelet[2229]: E0213 19:33:59.205300 2229 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 19:33:59.306374 kubelet[2229]: E0213 19:33:59.306319 2229 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 19:33:59.407175 kubelet[2229]: E0213 19:33:59.407120 2229 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 19:33:59.508126 kubelet[2229]: E0213 19:33:59.508046 2229 kubelet_node_status.go:467] "Error getting the current node 
from lister" err="node \"localhost\" not found" Feb 13 19:33:59.596684 kubelet[2229]: E0213 19:33:59.596648 2229 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Feb 13 19:33:59.597212 kubelet[2229]: E0213 19:33:59.596768 2229 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:33:59.608936 kubelet[2229]: E0213 19:33:59.608856 2229 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 19:33:59.709619 kubelet[2229]: E0213 19:33:59.709551 2229 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 19:33:59.810603 kubelet[2229]: E0213 19:33:59.810411 2229 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 19:33:59.911103 kubelet[2229]: E0213 19:33:59.911044 2229 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 19:34:00.012093 kubelet[2229]: E0213 19:34:00.012031 2229 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 19:34:00.113142 kubelet[2229]: E0213 19:34:00.113013 2229 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 19:34:00.213584 kubelet[2229]: E0213 19:34:00.213530 2229 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 19:34:00.314332 kubelet[2229]: E0213 19:34:00.314243 2229 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 19:34:00.415085 kubelet[2229]: E0213 19:34:00.414909 2229 kubelet_node_status.go:467] "Error getting the 
current node from lister" err="node \"localhost\" not found" Feb 13 19:34:00.515832 kubelet[2229]: E0213 19:34:00.515769 2229 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 19:34:00.613040 kubelet[2229]: E0213 19:34:00.613005 2229 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Feb 13 19:34:00.613504 kubelet[2229]: E0213 19:34:00.613143 2229 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:34:00.616145 kubelet[2229]: E0213 19:34:00.616124 2229 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 19:34:00.717026 kubelet[2229]: E0213 19:34:00.716835 2229 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 19:34:00.817338 kubelet[2229]: E0213 19:34:00.817262 2229 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 19:34:00.918235 kubelet[2229]: E0213 19:34:00.918177 2229 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 19:34:01.018611 kubelet[2229]: E0213 19:34:01.018546 2229 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 19:34:01.119764 kubelet[2229]: E0213 19:34:01.119674 2229 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 19:34:01.220836 kubelet[2229]: E0213 19:34:01.220761 2229 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 19:34:01.340984 kubelet[2229]: I0213 19:34:01.340829 2229 kubelet.go:3200] "Creating a mirror 
pod for static pod" pod="kube-system/kube-apiserver-localhost" Feb 13 19:34:01.457380 kubelet[2229]: I0213 19:34:01.457315 2229 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Feb 13 19:34:01.530053 kubelet[2229]: I0213 19:34:01.529996 2229 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Feb 13 19:34:01.535384 kubelet[2229]: I0213 19:34:01.534321 2229 apiserver.go:52] "Watching apiserver" Feb 13 19:34:01.536546 kubelet[2229]: E0213 19:34:01.536489 2229 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:34:01.539966 kubelet[2229]: I0213 19:34:01.539931 2229 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Feb 13 19:34:01.583778 kubelet[2229]: E0213 19:34:01.583729 2229 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:34:01.599251 kubelet[2229]: E0213 19:34:01.599110 2229 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:34:01.895410 systemd[1]: Reloading requested from client PID 2523 ('systemctl') (unit session-9.scope)... Feb 13 19:34:01.895430 systemd[1]: Reloading... Feb 13 19:34:01.990005 zram_generator::config[2566]: No configuration found. Feb 13 19:34:02.183415 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 19:34:02.284349 systemd[1]: Reloading finished in 388 ms. 
Feb 13 19:34:02.336839 kubelet[2229]: I0213 19:34:02.336728 2229 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 13 19:34:02.336811 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:34:02.354830 systemd[1]: kubelet.service: Deactivated successfully. Feb 13 19:34:02.355201 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:34:02.355266 systemd[1]: kubelet.service: Consumed 1.223s CPU time, 127.3M memory peak, 0B memory swap peak. Feb 13 19:34:02.363296 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:34:02.525912 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:34:02.533155 (kubelet)[2607]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Feb 13 19:34:02.587269 kubelet[2607]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 19:34:02.587269 kubelet[2607]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Feb 13 19:34:02.587269 kubelet[2607]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Feb 13 19:34:02.587269 kubelet[2607]: I0213 19:34:02.587183 2607 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 13 19:34:02.598695 kubelet[2607]: I0213 19:34:02.598601 2607 server.go:520] "Kubelet version" kubeletVersion="v1.32.0" Feb 13 19:34:02.598695 kubelet[2607]: I0213 19:34:02.598636 2607 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 13 19:34:02.599602 kubelet[2607]: I0213 19:34:02.599088 2607 server.go:954] "Client rotation is on, will bootstrap in background" Feb 13 19:34:02.601349 kubelet[2607]: I0213 19:34:02.601313 2607 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Feb 13 19:34:02.603927 kubelet[2607]: I0213 19:34:02.603879 2607 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 13 19:34:02.607302 kubelet[2607]: E0213 19:34:02.607268 2607 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Feb 13 19:34:02.607302 kubelet[2607]: I0213 19:34:02.607301 2607 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Feb 13 19:34:02.612670 kubelet[2607]: I0213 19:34:02.612631 2607 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Feb 13 19:34:02.612930 kubelet[2607]: I0213 19:34:02.612885 2607 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 13 19:34:02.613107 kubelet[2607]: I0213 19:34:02.612927 2607 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Feb 13 19:34:02.613107 kubelet[2607]: I0213 19:34:02.613104 2607 topology_manager.go:138] "Creating topology manager with none policy" 
Feb 13 19:34:02.613250 kubelet[2607]: I0213 19:34:02.613115 2607 container_manager_linux.go:304] "Creating device plugin manager" Feb 13 19:34:02.613250 kubelet[2607]: I0213 19:34:02.613153 2607 state_mem.go:36] "Initialized new in-memory state store" Feb 13 19:34:02.613349 kubelet[2607]: I0213 19:34:02.613334 2607 kubelet.go:446] "Attempting to sync node with API server" Feb 13 19:34:02.613349 kubelet[2607]: I0213 19:34:02.613349 2607 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 13 19:34:02.613435 kubelet[2607]: I0213 19:34:02.613365 2607 kubelet.go:352] "Adding apiserver pod source" Feb 13 19:34:02.613435 kubelet[2607]: I0213 19:34:02.613376 2607 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 13 19:34:02.618288 kubelet[2607]: I0213 19:34:02.617341 2607 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Feb 13 19:34:02.618288 kubelet[2607]: I0213 19:34:02.618023 2607 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 13 19:34:02.619047 kubelet[2607]: I0213 19:34:02.618724 2607 watchdog_linux.go:99] "Systemd watchdog is not enabled" Feb 13 19:34:02.619047 kubelet[2607]: I0213 19:34:02.618774 2607 server.go:1287] "Started kubelet" Feb 13 19:34:02.621634 kubelet[2607]: I0213 19:34:02.621586 2607 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Feb 13 19:34:02.622842 kubelet[2607]: I0213 19:34:02.622788 2607 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 13 19:34:02.623432 kubelet[2607]: I0213 19:34:02.623407 2607 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 13 19:34:02.626032 kubelet[2607]: I0213 19:34:02.624436 2607 server.go:490] "Adding debug handlers to kubelet server" Feb 13 19:34:02.626503 kubelet[2607]: I0213 19:34:02.626294 2607 
fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 13 19:34:02.629419 kubelet[2607]: I0213 19:34:02.627438 2607 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Feb 13 19:34:02.629419 kubelet[2607]: I0213 19:34:02.629320 2607 volume_manager.go:297] "Starting Kubelet Volume Manager" Feb 13 19:34:02.630146 kubelet[2607]: E0213 19:34:02.630111 2607 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 19:34:02.632079 kubelet[2607]: I0213 19:34:02.630585 2607 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Feb 13 19:34:02.632079 kubelet[2607]: E0213 19:34:02.630668 2607 kubelet.go:1561] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 13 19:34:02.632079 kubelet[2607]: I0213 19:34:02.631053 2607 reconciler.go:26] "Reconciler: start to sync state" Feb 13 19:34:02.632079 kubelet[2607]: I0213 19:34:02.631616 2607 factory.go:221] Registration of the systemd container factory successfully Feb 13 19:34:02.632079 kubelet[2607]: I0213 19:34:02.631721 2607 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Feb 13 19:34:02.633771 kubelet[2607]: I0213 19:34:02.633742 2607 factory.go:221] Registration of the containerd container factory successfully Feb 13 19:34:02.643310 kubelet[2607]: I0213 19:34:02.643243 2607 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 13 19:34:02.645352 kubelet[2607]: I0213 19:34:02.644889 2607 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Feb 13 19:34:02.645352 kubelet[2607]: I0213 19:34:02.644919 2607 status_manager.go:227] "Starting to sync pod status with apiserver" Feb 13 19:34:02.645352 kubelet[2607]: I0213 19:34:02.644936 2607 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Feb 13 19:34:02.645352 kubelet[2607]: I0213 19:34:02.644944 2607 kubelet.go:2388] "Starting kubelet main sync loop" Feb 13 19:34:02.645352 kubelet[2607]: E0213 19:34:02.645002 2607 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 13 19:34:02.681168 kubelet[2607]: I0213 19:34:02.681117 2607 cpu_manager.go:221] "Starting CPU manager" policy="none" Feb 13 19:34:02.681168 kubelet[2607]: I0213 19:34:02.681149 2607 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Feb 13 19:34:02.681168 kubelet[2607]: I0213 19:34:02.681176 2607 state_mem.go:36] "Initialized new in-memory state store" Feb 13 19:34:02.681396 kubelet[2607]: I0213 19:34:02.681375 2607 state_mem.go:88] "Updated default CPUSet" cpuSet="" Feb 13 19:34:02.681430 kubelet[2607]: I0213 19:34:02.681393 2607 state_mem.go:96] "Updated CPUSet assignments" assignments={} Feb 13 19:34:02.681430 kubelet[2607]: I0213 19:34:02.681419 2607 policy_none.go:49] "None policy: Start" Feb 13 19:34:02.681430 kubelet[2607]: I0213 19:34:02.681430 2607 memory_manager.go:186] "Starting memorymanager" policy="None" Feb 13 19:34:02.681500 kubelet[2607]: I0213 19:34:02.681444 2607 state_mem.go:35] "Initializing new in-memory state store" Feb 13 19:34:02.681590 kubelet[2607]: I0213 19:34:02.681574 2607 state_mem.go:75] "Updated machine memory state" Feb 13 19:34:02.686409 kubelet[2607]: I0213 19:34:02.686371 2607 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 13 19:34:02.686589 kubelet[2607]: I0213 
19:34:02.686565 2607 eviction_manager.go:189] "Eviction manager: starting control loop" Feb 13 19:34:02.686631 kubelet[2607]: I0213 19:34:02.686589 2607 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 13 19:34:02.687126 kubelet[2607]: I0213 19:34:02.686880 2607 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 13 19:34:02.688092 kubelet[2607]: E0213 19:34:02.688056 2607 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Feb 13 19:34:02.746794 kubelet[2607]: I0213 19:34:02.746717 2607 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Feb 13 19:34:02.746794 kubelet[2607]: I0213 19:34:02.746745 2607 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Feb 13 19:34:02.746794 kubelet[2607]: I0213 19:34:02.746779 2607 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Feb 13 19:34:02.755402 kubelet[2607]: E0213 19:34:02.755353 2607 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Feb 13 19:34:02.755603 kubelet[2607]: E0213 19:34:02.755553 2607 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Feb 13 19:34:02.755603 kubelet[2607]: E0213 19:34:02.755562 2607 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Feb 13 19:34:02.792254 kubelet[2607]: I0213 19:34:02.792118 2607 kubelet_node_status.go:76] "Attempting to register node" node="localhost" Feb 13 19:34:02.800228 kubelet[2607]: I0213 19:34:02.800194 2607 kubelet_node_status.go:125] "Node was 
previously registered" node="localhost" Feb 13 19:34:02.800362 kubelet[2607]: I0213 19:34:02.800282 2607 kubelet_node_status.go:79] "Successfully registered node" node="localhost" Feb 13 19:34:02.832326 kubelet[2607]: I0213 19:34:02.832248 2607 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/95ef9ac46cd4dbaadc63cb713310ae59-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"95ef9ac46cd4dbaadc63cb713310ae59\") " pod="kube-system/kube-scheduler-localhost" Feb 13 19:34:02.832326 kubelet[2607]: I0213 19:34:02.832305 2607 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c72911152bbceda2f57fd8d59261e015-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"c72911152bbceda2f57fd8d59261e015\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 19:34:02.832326 kubelet[2607]: I0213 19:34:02.832332 2607 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/c72911152bbceda2f57fd8d59261e015-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"c72911152bbceda2f57fd8d59261e015\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 19:34:02.832550 kubelet[2607]: I0213 19:34:02.832353 2607 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c72911152bbceda2f57fd8d59261e015-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"c72911152bbceda2f57fd8d59261e015\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 19:34:02.832550 kubelet[2607]: I0213 19:34:02.832379 2607 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: 
\"kubernetes.io/host-path/c72911152bbceda2f57fd8d59261e015-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"c72911152bbceda2f57fd8d59261e015\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 19:34:02.832550 kubelet[2607]: I0213 19:34:02.832472 2607 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c72911152bbceda2f57fd8d59261e015-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"c72911152bbceda2f57fd8d59261e015\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 19:34:02.832550 kubelet[2607]: I0213 19:34:02.832523 2607 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/abb8cae9b69c5ef831bae3cd53c12fea-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"abb8cae9b69c5ef831bae3cd53c12fea\") " pod="kube-system/kube-apiserver-localhost" Feb 13 19:34:02.832550 kubelet[2607]: I0213 19:34:02.832548 2607 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/abb8cae9b69c5ef831bae3cd53c12fea-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"abb8cae9b69c5ef831bae3cd53c12fea\") " pod="kube-system/kube-apiserver-localhost" Feb 13 19:34:02.832685 kubelet[2607]: I0213 19:34:02.832563 2607 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/abb8cae9b69c5ef831bae3cd53c12fea-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"abb8cae9b69c5ef831bae3cd53c12fea\") " pod="kube-system/kube-apiserver-localhost" Feb 13 19:34:03.056461 kubelet[2607]: E0213 19:34:03.056235 2607 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 
1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:34:03.056461 kubelet[2607]: E0213 19:34:03.056235 2607 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:34:03.056461 kubelet[2607]: E0213 19:34:03.056377 2607 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:34:03.614747 kubelet[2607]: I0213 19:34:03.614687 2607 apiserver.go:52] "Watching apiserver" Feb 13 19:34:03.630991 kubelet[2607]: I0213 19:34:03.630930 2607 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Feb 13 19:34:03.718697 kubelet[2607]: I0213 19:34:03.718655 2607 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Feb 13 19:34:03.718883 kubelet[2607]: E0213 19:34:03.718772 2607 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:34:03.719655 kubelet[2607]: E0213 19:34:03.719087 2607 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:34:03.948611 kubelet[2607]: E0213 19:34:03.948469 2607 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Feb 13 19:34:03.948739 kubelet[2607]: E0213 19:34:03.948633 2607 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:34:04.195489 kubelet[2607]: I0213 19:34:04.195397 2607 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="kube-system/kube-apiserver-localhost" podStartSLOduration=3.195336084 podStartE2EDuration="3.195336084s" podCreationTimestamp="2025-02-13 19:34:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 19:34:03.948088408 +0000 UTC m=+1.406906319" watchObservedRunningTime="2025-02-13 19:34:04.195336084 +0000 UTC m=+1.654153985" Feb 13 19:34:04.253782 kubelet[2607]: I0213 19:34:04.253466 2607 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=3.253437175 podStartE2EDuration="3.253437175s" podCreationTimestamp="2025-02-13 19:34:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 19:34:04.25271596 +0000 UTC m=+1.711533871" watchObservedRunningTime="2025-02-13 19:34:04.253437175 +0000 UTC m=+1.712255096" Feb 13 19:34:04.253782 kubelet[2607]: I0213 19:34:04.253653 2607 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=3.2536428490000002 podStartE2EDuration="3.253642849s" podCreationTimestamp="2025-02-13 19:34:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 19:34:04.195673939 +0000 UTC m=+1.654491840" watchObservedRunningTime="2025-02-13 19:34:04.253642849 +0000 UTC m=+1.712460780" Feb 13 19:34:04.720234 kubelet[2607]: E0213 19:34:04.720109 2607 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:34:04.720234 kubelet[2607]: E0213 19:34:04.720214 2607 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver 
line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:34:05.724676 kubelet[2607]: E0213 19:34:05.724630 2607 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:34:06.298169 kubelet[2607]: E0213 19:34:06.298131 2607 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:34:07.125365 kubelet[2607]: I0213 19:34:07.125328 2607 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Feb 13 19:34:07.125844 kubelet[2607]: I0213 19:34:07.125826 2607 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Feb 13 19:34:07.125874 containerd[1471]: time="2025-02-13T19:34:07.125648783Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Feb 13 19:34:08.172522 systemd[1]: Created slice kubepods-besteffort-pod3d67220b_9106_45f4_83ca_d9168aab2eaa.slice - libcontainer container kubepods-besteffort-pod3d67220b_9106_45f4_83ca_d9168aab2eaa.slice. 
Feb 13 19:34:08.265128 kubelet[2607]: I0213 19:34:08.265072 2607 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6rptk\" (UniqueName: \"kubernetes.io/projected/3d67220b-9106-45f4-83ca-d9168aab2eaa-kube-api-access-6rptk\") pod \"kube-proxy-rlm2w\" (UID: \"3d67220b-9106-45f4-83ca-d9168aab2eaa\") " pod="kube-system/kube-proxy-rlm2w" Feb 13 19:34:08.265128 kubelet[2607]: I0213 19:34:08.265119 2607 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3d67220b-9106-45f4-83ca-d9168aab2eaa-xtables-lock\") pod \"kube-proxy-rlm2w\" (UID: \"3d67220b-9106-45f4-83ca-d9168aab2eaa\") " pod="kube-system/kube-proxy-rlm2w" Feb 13 19:34:08.265608 kubelet[2607]: I0213 19:34:08.265157 2607 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3d67220b-9106-45f4-83ca-d9168aab2eaa-lib-modules\") pod \"kube-proxy-rlm2w\" (UID: \"3d67220b-9106-45f4-83ca-d9168aab2eaa\") " pod="kube-system/kube-proxy-rlm2w" Feb 13 19:34:08.265608 kubelet[2607]: I0213 19:34:08.265175 2607 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/3d67220b-9106-45f4-83ca-d9168aab2eaa-kube-proxy\") pod \"kube-proxy-rlm2w\" (UID: \"3d67220b-9106-45f4-83ca-d9168aab2eaa\") " pod="kube-system/kube-proxy-rlm2w" Feb 13 19:34:08.323368 systemd[1]: Created slice kubepods-besteffort-pod858ba0bb_7876_460f_9b00_8115c82c7e34.slice - libcontainer container kubepods-besteffort-pod858ba0bb_7876_460f_9b00_8115c82c7e34.slice. 
Feb 13 19:34:08.465983 kubelet[2607]: I0213 19:34:08.465830 2607 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jhqb8\" (UniqueName: \"kubernetes.io/projected/858ba0bb-7876-460f-9b00-8115c82c7e34-kube-api-access-jhqb8\") pod \"tigera-operator-7d68577dc5-h27vj\" (UID: \"858ba0bb-7876-460f-9b00-8115c82c7e34\") " pod="tigera-operator/tigera-operator-7d68577dc5-h27vj" Feb 13 19:34:08.465983 kubelet[2607]: I0213 19:34:08.465873 2607 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/858ba0bb-7876-460f-9b00-8115c82c7e34-var-lib-calico\") pod \"tigera-operator-7d68577dc5-h27vj\" (UID: \"858ba0bb-7876-460f-9b00-8115c82c7e34\") " pod="tigera-operator/tigera-operator-7d68577dc5-h27vj" Feb 13 19:34:08.489154 kubelet[2607]: E0213 19:34:08.489102 2607 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:34:08.489830 containerd[1471]: time="2025-02-13T19:34:08.489780667Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-rlm2w,Uid:3d67220b-9106-45f4-83ca-d9168aab2eaa,Namespace:kube-system,Attempt:0,}" Feb 13 19:34:08.626598 containerd[1471]: time="2025-02-13T19:34:08.626537798Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7d68577dc5-h27vj,Uid:858ba0bb-7876-460f-9b00-8115c82c7e34,Namespace:tigera-operator,Attempt:0,}" Feb 13 19:34:08.706066 containerd[1471]: time="2025-02-13T19:34:08.705853550Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:34:08.706989 containerd[1471]: time="2025-02-13T19:34:08.706920323Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:34:08.707165 containerd[1471]: time="2025-02-13T19:34:08.706985394Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:34:08.707165 containerd[1471]: time="2025-02-13T19:34:08.707114304Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:34:08.730827 systemd[1]: Started cri-containerd-09c9c19f5fd9d76b9ac50ee886fcf3a3d3b694288075da9c9d3fbbee2627b483.scope - libcontainer container 09c9c19f5fd9d76b9ac50ee886fcf3a3d3b694288075da9c9d3fbbee2627b483. Feb 13 19:34:08.758476 containerd[1471]: time="2025-02-13T19:34:08.758328048Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:34:08.758476 containerd[1471]: time="2025-02-13T19:34:08.758392789Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:34:08.758476 containerd[1471]: time="2025-02-13T19:34:08.758406316Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:34:08.758725 containerd[1471]: time="2025-02-13T19:34:08.758501188Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:34:08.759079 containerd[1471]: time="2025-02-13T19:34:08.759038151Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-rlm2w,Uid:3d67220b-9106-45f4-83ca-d9168aab2eaa,Namespace:kube-system,Attempt:0,} returns sandbox id \"09c9c19f5fd9d76b9ac50ee886fcf3a3d3b694288075da9c9d3fbbee2627b483\"" Feb 13 19:34:08.760145 kubelet[2607]: E0213 19:34:08.760110 2607 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:34:08.764063 containerd[1471]: time="2025-02-13T19:34:08.763855522Z" level=info msg="CreateContainer within sandbox \"09c9c19f5fd9d76b9ac50ee886fcf3a3d3b694288075da9c9d3fbbee2627b483\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Feb 13 19:34:08.789267 systemd[1]: Started cri-containerd-2a46965076fd2d0e859cd0cb12071cdef7a65710380b93cb7cc496bc36be73d7.scope - libcontainer container 2a46965076fd2d0e859cd0cb12071cdef7a65710380b93cb7cc496bc36be73d7. Feb 13 19:34:08.789475 containerd[1471]: time="2025-02-13T19:34:08.789373826Z" level=info msg="CreateContainer within sandbox \"09c9c19f5fd9d76b9ac50ee886fcf3a3d3b694288075da9c9d3fbbee2627b483\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"165842221f29b16bd10bc3719f26081e9473b3f719d1da30b7a6a1a02e51a231\"" Feb 13 19:34:08.790247 containerd[1471]: time="2025-02-13T19:34:08.790203820Z" level=info msg="StartContainer for \"165842221f29b16bd10bc3719f26081e9473b3f719d1da30b7a6a1a02e51a231\"" Feb 13 19:34:08.824205 systemd[1]: Started cri-containerd-165842221f29b16bd10bc3719f26081e9473b3f719d1da30b7a6a1a02e51a231.scope - libcontainer container 165842221f29b16bd10bc3719f26081e9473b3f719d1da30b7a6a1a02e51a231. 
Feb 13 19:34:08.835647 containerd[1471]: time="2025-02-13T19:34:08.835575275Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7d68577dc5-h27vj,Uid:858ba0bb-7876-460f-9b00-8115c82c7e34,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"2a46965076fd2d0e859cd0cb12071cdef7a65710380b93cb7cc496bc36be73d7\"" Feb 13 19:34:08.837266 containerd[1471]: time="2025-02-13T19:34:08.837234133Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\"" Feb 13 19:34:08.861352 containerd[1471]: time="2025-02-13T19:34:08.861304664Z" level=info msg="StartContainer for \"165842221f29b16bd10bc3719f26081e9473b3f719d1da30b7a6a1a02e51a231\" returns successfully" Feb 13 19:34:09.094557 sudo[1661]: pam_unix(sudo:session): session closed for user root Feb 13 19:34:09.096157 sshd[1660]: Connection closed by 10.0.0.1 port 41786 Feb 13 19:34:09.096804 sshd-session[1658]: pam_unix(sshd:session): session closed for user core Feb 13 19:34:09.101071 systemd[1]: sshd@8-10.0.0.36:22-10.0.0.1:41786.service: Deactivated successfully. Feb 13 19:34:09.103440 systemd[1]: session-9.scope: Deactivated successfully. Feb 13 19:34:09.103684 systemd[1]: session-9.scope: Consumed 5.437s CPU time, 153.1M memory peak, 0B memory swap peak. Feb 13 19:34:09.104307 systemd-logind[1449]: Session 9 logged out. Waiting for processes to exit. Feb 13 19:34:09.105403 systemd-logind[1449]: Removed session 9. 
Feb 13 19:34:09.777288 kubelet[2607]: E0213 19:34:09.777218 2607 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:34:09.941091 kubelet[2607]: I0213 19:34:09.940982 2607 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-rlm2w" podStartSLOduration=1.940937934 podStartE2EDuration="1.940937934s" podCreationTimestamp="2025-02-13 19:34:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 19:34:09.940828074 +0000 UTC m=+7.399645975" watchObservedRunningTime="2025-02-13 19:34:09.940937934 +0000 UTC m=+7.399755835" Feb 13 19:34:11.624802 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3264519578.mount: Deactivated successfully. Feb 13 19:34:12.128445 containerd[1471]: time="2025-02-13T19:34:12.128369819Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:34:12.129183 containerd[1471]: time="2025-02-13T19:34:12.129089488Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.2: active requests=0, bytes read=21762497" Feb 13 19:34:12.130621 containerd[1471]: time="2025-02-13T19:34:12.130575749Z" level=info msg="ImageCreate event name:\"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:34:12.133340 containerd[1471]: time="2025-02-13T19:34:12.133295890Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:34:12.134127 containerd[1471]: time="2025-02-13T19:34:12.134088092Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.2\" with 
image id \"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\", repo tag \"quay.io/tigera/operator:v1.36.2\", repo digest \"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\", size \"21758492\" in 3.296827156s" Feb 13 19:34:12.134162 containerd[1471]: time="2025-02-13T19:34:12.134128824Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\" returns image reference \"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\"" Feb 13 19:34:12.136876 containerd[1471]: time="2025-02-13T19:34:12.136835146Z" level=info msg="CreateContainer within sandbox \"2a46965076fd2d0e859cd0cb12071cdef7a65710380b93cb7cc496bc36be73d7\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Feb 13 19:34:12.154093 containerd[1471]: time="2025-02-13T19:34:12.154037071Z" level=info msg="CreateContainer within sandbox \"2a46965076fd2d0e859cd0cb12071cdef7a65710380b93cb7cc496bc36be73d7\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"e139b823452267d3b743b2bdb1ff4d57983e835ecc2345037661f4c1f7c55c62\"" Feb 13 19:34:12.154599 containerd[1471]: time="2025-02-13T19:34:12.154571081Z" level=info msg="StartContainer for \"e139b823452267d3b743b2bdb1ff4d57983e835ecc2345037661f4c1f7c55c62\"" Feb 13 19:34:12.196143 systemd[1]: Started cri-containerd-e139b823452267d3b743b2bdb1ff4d57983e835ecc2345037661f4c1f7c55c62.scope - libcontainer container e139b823452267d3b743b2bdb1ff4d57983e835ecc2345037661f4c1f7c55c62. 
Feb 13 19:34:12.519104 containerd[1471]: time="2025-02-13T19:34:12.518941959Z" level=info msg="StartContainer for \"e139b823452267d3b743b2bdb1ff4d57983e835ecc2345037661f4c1f7c55c62\" returns successfully" Feb 13 19:34:13.094711 kubelet[2607]: E0213 19:34:13.094667 2607 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:34:13.106465 kubelet[2607]: I0213 19:34:13.106395 2607 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7d68577dc5-h27vj" podStartSLOduration=1.808082163 podStartE2EDuration="5.106373974s" podCreationTimestamp="2025-02-13 19:34:08 +0000 UTC" firstStartedPulling="2025-02-13 19:34:08.83675214 +0000 UTC m=+6.295570041" lastFinishedPulling="2025-02-13 19:34:12.135043951 +0000 UTC m=+9.593861852" observedRunningTime="2025-02-13 19:34:12.797046868 +0000 UTC m=+10.255864770" watchObservedRunningTime="2025-02-13 19:34:13.106373974 +0000 UTC m=+10.565191875" Feb 13 19:34:15.613920 kubelet[2607]: E0213 19:34:15.613696 2607 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:34:15.790252 kubelet[2607]: E0213 19:34:15.790140 2607 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:34:15.822522 systemd[1]: Created slice kubepods-besteffort-pod998978e6_d611_4482_aac6_d21bc341d902.slice - libcontainer container kubepods-besteffort-pod998978e6_d611_4482_aac6_d21bc341d902.slice. Feb 13 19:34:15.832778 systemd[1]: Created slice kubepods-besteffort-pod7d2639d3_502e_4925_8386_1a6f7ae6587c.slice - libcontainer container kubepods-besteffort-pod7d2639d3_502e_4925_8386_1a6f7ae6587c.slice. 
Feb 13 19:34:15.903308 kubelet[2607]: E0213 19:34:15.903130 2607 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-qqgd5" podUID="f2dd1bcc-42fa-437c-a3ad-b39a937ac4ed" Feb 13 19:34:15.915083 kubelet[2607]: I0213 19:34:15.913612 2607 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/f2dd1bcc-42fa-437c-a3ad-b39a937ac4ed-socket-dir\") pod \"csi-node-driver-qqgd5\" (UID: \"f2dd1bcc-42fa-437c-a3ad-b39a937ac4ed\") " pod="calico-system/csi-node-driver-qqgd5" Feb 13 19:34:15.915083 kubelet[2607]: I0213 19:34:15.913666 2607 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zvkg5\" (UniqueName: \"kubernetes.io/projected/f2dd1bcc-42fa-437c-a3ad-b39a937ac4ed-kube-api-access-zvkg5\") pod \"csi-node-driver-qqgd5\" (UID: \"f2dd1bcc-42fa-437c-a3ad-b39a937ac4ed\") " pod="calico-system/csi-node-driver-qqgd5" Feb 13 19:34:15.915083 kubelet[2607]: I0213 19:34:15.913687 2607 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/998978e6-d611-4482-aac6-d21bc341d902-tigera-ca-bundle\") pod \"calico-typha-5fb98674cf-mhqn4\" (UID: \"998978e6-d611-4482-aac6-d21bc341d902\") " pod="calico-system/calico-typha-5fb98674cf-mhqn4" Feb 13 19:34:15.915083 kubelet[2607]: I0213 19:34:15.913704 2607 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7d2639d3-502e-4925-8386-1a6f7ae6587c-xtables-lock\") pod \"calico-node-97z4f\" (UID: \"7d2639d3-502e-4925-8386-1a6f7ae6587c\") " pod="calico-system/calico-node-97z4f" Feb 13 19:34:15.915083 
kubelet[2607]: I0213 19:34:15.913722 2607 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/7d2639d3-502e-4925-8386-1a6f7ae6587c-cni-bin-dir\") pod \"calico-node-97z4f\" (UID: \"7d2639d3-502e-4925-8386-1a6f7ae6587c\") " pod="calico-system/calico-node-97z4f" Feb 13 19:34:15.915362 kubelet[2607]: I0213 19:34:15.913742 2607 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/7d2639d3-502e-4925-8386-1a6f7ae6587c-cni-net-dir\") pod \"calico-node-97z4f\" (UID: \"7d2639d3-502e-4925-8386-1a6f7ae6587c\") " pod="calico-system/calico-node-97z4f" Feb 13 19:34:15.915362 kubelet[2607]: I0213 19:34:15.913762 2607 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/7d2639d3-502e-4925-8386-1a6f7ae6587c-flexvol-driver-host\") pod \"calico-node-97z4f\" (UID: \"7d2639d3-502e-4925-8386-1a6f7ae6587c\") " pod="calico-system/calico-node-97z4f" Feb 13 19:34:15.915362 kubelet[2607]: I0213 19:34:15.913782 2607 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f2dd1bcc-42fa-437c-a3ad-b39a937ac4ed-kubelet-dir\") pod \"csi-node-driver-qqgd5\" (UID: \"f2dd1bcc-42fa-437c-a3ad-b39a937ac4ed\") " pod="calico-system/csi-node-driver-qqgd5" Feb 13 19:34:15.915362 kubelet[2607]: I0213 19:34:15.913813 2607 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/998978e6-d611-4482-aac6-d21bc341d902-typha-certs\") pod \"calico-typha-5fb98674cf-mhqn4\" (UID: \"998978e6-d611-4482-aac6-d21bc341d902\") " pod="calico-system/calico-typha-5fb98674cf-mhqn4" Feb 13 19:34:15.915362 kubelet[2607]: I0213 19:34:15.913835 2607 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/7d2639d3-502e-4925-8386-1a6f7ae6587c-var-run-calico\") pod \"calico-node-97z4f\" (UID: \"7d2639d3-502e-4925-8386-1a6f7ae6587c\") " pod="calico-system/calico-node-97z4f" Feb 13 19:34:15.915529 kubelet[2607]: I0213 19:34:15.913852 2607 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/7d2639d3-502e-4925-8386-1a6f7ae6587c-node-certs\") pod \"calico-node-97z4f\" (UID: \"7d2639d3-502e-4925-8386-1a6f7ae6587c\") " pod="calico-system/calico-node-97z4f" Feb 13 19:34:15.915529 kubelet[2607]: I0213 19:34:15.913869 2607 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7d2639d3-502e-4925-8386-1a6f7ae6587c-lib-modules\") pod \"calico-node-97z4f\" (UID: \"7d2639d3-502e-4925-8386-1a6f7ae6587c\") " pod="calico-system/calico-node-97z4f" Feb 13 19:34:15.915529 kubelet[2607]: I0213 19:34:15.913898 2607 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6nkr6\" (UniqueName: \"kubernetes.io/projected/998978e6-d611-4482-aac6-d21bc341d902-kube-api-access-6nkr6\") pod \"calico-typha-5fb98674cf-mhqn4\" (UID: \"998978e6-d611-4482-aac6-d21bc341d902\") " pod="calico-system/calico-typha-5fb98674cf-mhqn4" Feb 13 19:34:15.915529 kubelet[2607]: I0213 19:34:15.913913 2607 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/7d2639d3-502e-4925-8386-1a6f7ae6587c-policysync\") pod \"calico-node-97z4f\" (UID: \"7d2639d3-502e-4925-8386-1a6f7ae6587c\") " pod="calico-system/calico-node-97z4f" Feb 13 19:34:15.915529 kubelet[2607]: I0213 19:34:15.913927 2607 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/7d2639d3-502e-4925-8386-1a6f7ae6587c-var-lib-calico\") pod \"calico-node-97z4f\" (UID: \"7d2639d3-502e-4925-8386-1a6f7ae6587c\") " pod="calico-system/calico-node-97z4f" Feb 13 19:34:15.915662 kubelet[2607]: I0213 19:34:15.913940 2607 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/f2dd1bcc-42fa-437c-a3ad-b39a937ac4ed-varrun\") pod \"csi-node-driver-qqgd5\" (UID: \"f2dd1bcc-42fa-437c-a3ad-b39a937ac4ed\") " pod="calico-system/csi-node-driver-qqgd5" Feb 13 19:34:15.915662 kubelet[2607]: I0213 19:34:15.913953 2607 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/f2dd1bcc-42fa-437c-a3ad-b39a937ac4ed-registration-dir\") pod \"csi-node-driver-qqgd5\" (UID: \"f2dd1bcc-42fa-437c-a3ad-b39a937ac4ed\") " pod="calico-system/csi-node-driver-qqgd5" Feb 13 19:34:15.915662 kubelet[2607]: I0213 19:34:15.914003 2607 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/7d2639d3-502e-4925-8386-1a6f7ae6587c-cni-log-dir\") pod \"calico-node-97z4f\" (UID: \"7d2639d3-502e-4925-8386-1a6f7ae6587c\") " pod="calico-system/calico-node-97z4f" Feb 13 19:34:15.915662 kubelet[2607]: I0213 19:34:15.914031 2607 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pd92t\" (UniqueName: \"kubernetes.io/projected/7d2639d3-502e-4925-8386-1a6f7ae6587c-kube-api-access-pd92t\") pod \"calico-node-97z4f\" (UID: \"7d2639d3-502e-4925-8386-1a6f7ae6587c\") " pod="calico-system/calico-node-97z4f" Feb 13 19:34:15.915662 kubelet[2607]: I0213 19:34:15.914047 2607 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7d2639d3-502e-4925-8386-1a6f7ae6587c-tigera-ca-bundle\") pod \"calico-node-97z4f\" (UID: \"7d2639d3-502e-4925-8386-1a6f7ae6587c\") " pod="calico-system/calico-node-97z4f" Feb 13 19:34:16.016976 kubelet[2607]: E0213 19:34:16.016819 2607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:34:16.016976 kubelet[2607]: W0213 19:34:16.016846 2607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:34:16.016976 kubelet[2607]: E0213 19:34:16.016872 2607 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:34:16.017371 kubelet[2607]: E0213 19:34:16.017357 2607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:34:16.017371 kubelet[2607]: W0213 19:34:16.017369 2607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:34:16.017689 kubelet[2607]: E0213 19:34:16.017665 2607 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:34:16.017913 kubelet[2607]: E0213 19:34:16.017843 2607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:34:16.017913 kubelet[2607]: W0213 19:34:16.017858 2607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:34:16.018105 kubelet[2607]: E0213 19:34:16.017877 2607 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:34:16.018382 kubelet[2607]: E0213 19:34:16.018370 2607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:34:16.018462 kubelet[2607]: W0213 19:34:16.018450 2607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:34:16.018580 kubelet[2607]: E0213 19:34:16.018539 2607 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:34:16.018899 kubelet[2607]: E0213 19:34:16.018862 2607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:34:16.018899 kubelet[2607]: W0213 19:34:16.018873 2607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:34:16.019106 kubelet[2607]: E0213 19:34:16.019076 2607 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:34:16.019467 kubelet[2607]: E0213 19:34:16.019391 2607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:34:16.019467 kubelet[2607]: W0213 19:34:16.019426 2607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:34:16.019596 kubelet[2607]: E0213 19:34:16.019575 2607 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:34:16.019875 kubelet[2607]: E0213 19:34:16.019843 2607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:34:16.020041 kubelet[2607]: W0213 19:34:16.019933 2607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:34:16.020126 kubelet[2607]: E0213 19:34:16.020097 2607 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:34:16.020612 kubelet[2607]: E0213 19:34:16.020482 2607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:34:16.020612 kubelet[2607]: W0213 19:34:16.020498 2607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:34:16.020612 kubelet[2607]: E0213 19:34:16.020572 2607 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:34:16.020982 kubelet[2607]: E0213 19:34:16.020942 2607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:34:16.020982 kubelet[2607]: W0213 19:34:16.020976 2607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:34:16.021090 kubelet[2607]: E0213 19:34:16.021082 2607 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:34:16.021263 kubelet[2607]: E0213 19:34:16.021246 2607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:34:16.021263 kubelet[2607]: W0213 19:34:16.021259 2607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:34:16.021435 kubelet[2607]: E0213 19:34:16.021407 2607 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:34:16.021521 kubelet[2607]: E0213 19:34:16.021510 2607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:34:16.021521 kubelet[2607]: W0213 19:34:16.021518 2607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:34:16.021603 kubelet[2607]: E0213 19:34:16.021576 2607 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:34:16.021799 kubelet[2607]: E0213 19:34:16.021774 2607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:34:16.021799 kubelet[2607]: W0213 19:34:16.021784 2607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:34:16.021935 kubelet[2607]: E0213 19:34:16.021920 2607 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:34:16.022125 kubelet[2607]: E0213 19:34:16.022097 2607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:34:16.022194 kubelet[2607]: W0213 19:34:16.022125 2607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:34:16.022545 kubelet[2607]: E0213 19:34:16.022246 2607 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:34:16.022545 kubelet[2607]: E0213 19:34:16.022373 2607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:34:16.022545 kubelet[2607]: W0213 19:34:16.022385 2607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:34:16.022668 kubelet[2607]: E0213 19:34:16.022602 2607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:34:16.022668 kubelet[2607]: W0213 19:34:16.022612 2607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:34:16.022831 kubelet[2607]: E0213 19:34:16.022699 2607 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:34:16.022831 kubelet[2607]: E0213 19:34:16.022717 2607 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:34:16.022883 kubelet[2607]: E0213 19:34:16.022855 2607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:34:16.022883 kubelet[2607]: W0213 19:34:16.022865 2607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:34:16.023070 kubelet[2607]: E0213 19:34:16.023055 2607 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:34:16.023168 kubelet[2607]: E0213 19:34:16.023152 2607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:34:16.023214 kubelet[2607]: W0213 19:34:16.023167 2607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:34:16.023655 kubelet[2607]: E0213 19:34:16.023365 2607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:34:16.023655 kubelet[2607]: W0213 19:34:16.023378 2607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:34:16.023655 kubelet[2607]: E0213 19:34:16.023571 2607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: 
unexpected end of JSON input Feb 13 19:34:16.023655 kubelet[2607]: W0213 19:34:16.023579 2607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:34:16.025399 kubelet[2607]: E0213 19:34:16.025365 2607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:34:16.025512 kubelet[2607]: W0213 19:34:16.025381 2607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:34:16.026038 kubelet[2607]: E0213 19:34:16.026016 2607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:34:16.026088 kubelet[2607]: W0213 19:34:16.026055 2607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:34:16.026713 kubelet[2607]: E0213 19:34:16.026686 2607 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:34:16.026713 kubelet[2607]: E0213 19:34:16.026708 2607 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:34:16.026788 kubelet[2607]: E0213 19:34:16.026746 2607 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:34:16.026788 kubelet[2607]: E0213 19:34:16.026755 2607 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:34:16.027790 kubelet[2607]: E0213 19:34:16.026930 2607 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:34:16.027790 kubelet[2607]: E0213 19:34:16.027037 2607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:34:16.027790 kubelet[2607]: W0213 19:34:16.027075 2607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:34:16.027790 kubelet[2607]: E0213 19:34:16.027204 2607 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:34:16.029594 kubelet[2607]: E0213 19:34:16.029537 2607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:34:16.029594 kubelet[2607]: W0213 19:34:16.029553 2607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:34:16.029688 kubelet[2607]: E0213 19:34:16.029635 2607 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:34:16.030158 kubelet[2607]: E0213 19:34:16.030137 2607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:34:16.030158 kubelet[2607]: W0213 19:34:16.030153 2607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:34:16.030254 kubelet[2607]: E0213 19:34:16.030247 2607 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:34:16.031086 kubelet[2607]: E0213 19:34:16.030380 2607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:34:16.031086 kubelet[2607]: W0213 19:34:16.030398 2607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:34:16.031086 kubelet[2607]: E0213 19:34:16.030483 2607 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:34:16.031086 kubelet[2607]: E0213 19:34:16.030691 2607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:34:16.031086 kubelet[2607]: W0213 19:34:16.030699 2607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:34:16.031086 kubelet[2607]: E0213 19:34:16.030805 2607 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:34:16.031086 kubelet[2607]: E0213 19:34:16.031037 2607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:34:16.031086 kubelet[2607]: W0213 19:34:16.031045 2607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:34:16.031400 kubelet[2607]: E0213 19:34:16.031129 2607 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:34:16.031400 kubelet[2607]: E0213 19:34:16.031298 2607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:34:16.031400 kubelet[2607]: W0213 19:34:16.031306 2607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:34:16.031400 kubelet[2607]: E0213 19:34:16.031397 2607 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:34:16.031562 kubelet[2607]: E0213 19:34:16.031515 2607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:34:16.031562 kubelet[2607]: W0213 19:34:16.031522 2607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:34:16.031625 kubelet[2607]: E0213 19:34:16.031595 2607 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:34:16.033059 kubelet[2607]: E0213 19:34:16.031745 2607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:34:16.033059 kubelet[2607]: W0213 19:34:16.031757 2607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:34:16.033059 kubelet[2607]: E0213 19:34:16.031814 2607 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:34:16.033059 kubelet[2607]: E0213 19:34:16.032005 2607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:34:16.033059 kubelet[2607]: W0213 19:34:16.032013 2607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:34:16.033059 kubelet[2607]: E0213 19:34:16.032102 2607 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:34:16.033059 kubelet[2607]: E0213 19:34:16.032226 2607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:34:16.033059 kubelet[2607]: W0213 19:34:16.032233 2607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:34:16.033059 kubelet[2607]: E0213 19:34:16.032308 2607 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:34:16.033059 kubelet[2607]: E0213 19:34:16.032450 2607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:34:16.033313 kubelet[2607]: W0213 19:34:16.032457 2607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:34:16.033313 kubelet[2607]: E0213 19:34:16.032465 2607 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:34:16.039164 kubelet[2607]: E0213 19:34:16.039119 2607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:34:16.039242 kubelet[2607]: W0213 19:34:16.039160 2607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:34:16.039242 kubelet[2607]: E0213 19:34:16.039196 2607 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:34:16.128389 kubelet[2607]: E0213 19:34:16.128318 2607 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:34:16.129098 containerd[1471]: time="2025-02-13T19:34:16.128941058Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5fb98674cf-mhqn4,Uid:998978e6-d611-4482-aac6-d21bc341d902,Namespace:calico-system,Attempt:0,}" Feb 13 19:34:16.135608 kubelet[2607]: E0213 19:34:16.135574 2607 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:34:16.136135 containerd[1471]: time="2025-02-13T19:34:16.136079519Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-97z4f,Uid:7d2639d3-502e-4925-8386-1a6f7ae6587c,Namespace:calico-system,Attempt:0,}" Feb 13 19:34:16.165470 containerd[1471]: time="2025-02-13T19:34:16.164942133Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:34:16.165470 containerd[1471]: time="2025-02-13T19:34:16.165043764Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:34:16.165470 containerd[1471]: time="2025-02-13T19:34:16.165084775Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:34:16.167039 containerd[1471]: time="2025-02-13T19:34:16.166755367Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:34:16.173353 containerd[1471]: time="2025-02-13T19:34:16.172390887Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:34:16.173353 containerd[1471]: time="2025-02-13T19:34:16.173135379Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:34:16.173353 containerd[1471]: time="2025-02-13T19:34:16.173149286Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:34:16.173353 containerd[1471]: time="2025-02-13T19:34:16.173267179Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:34:16.195151 systemd[1]: Started cri-containerd-0c53a8a5bb17f1cb8c52747d710ad052d0588b4248cffbd7acd617605357e8b3.scope - libcontainer container 0c53a8a5bb17f1cb8c52747d710ad052d0588b4248cffbd7acd617605357e8b3. Feb 13 19:34:16.198639 systemd[1]: Started cri-containerd-8dcd57549d4fcc6e70465924d73a1e25442d8e69969f78e99ae72d2a3240843f.scope - libcontainer container 8dcd57549d4fcc6e70465924d73a1e25442d8e69969f78e99ae72d2a3240843f. 
Feb 13 19:34:16.224725 containerd[1471]: time="2025-02-13T19:34:16.224675290Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-97z4f,Uid:7d2639d3-502e-4925-8386-1a6f7ae6587c,Namespace:calico-system,Attempt:0,} returns sandbox id \"8dcd57549d4fcc6e70465924d73a1e25442d8e69969f78e99ae72d2a3240843f\"" Feb 13 19:34:16.225690 kubelet[2607]: E0213 19:34:16.225664 2607 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:34:16.226902 containerd[1471]: time="2025-02-13T19:34:16.226788236Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\"" Feb 13 19:34:16.241458 containerd[1471]: time="2025-02-13T19:34:16.241410350Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5fb98674cf-mhqn4,Uid:998978e6-d611-4482-aac6-d21bc341d902,Namespace:calico-system,Attempt:0,} returns sandbox id \"0c53a8a5bb17f1cb8c52747d710ad052d0588b4248cffbd7acd617605357e8b3\"" Feb 13 19:34:16.242114 kubelet[2607]: E0213 19:34:16.242092 2607 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:34:16.301892 kubelet[2607]: E0213 19:34:16.301859 2607 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:34:16.316232 kubelet[2607]: E0213 19:34:16.316188 2607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:34:16.316232 kubelet[2607]: W0213 19:34:16.316213 2607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:34:16.316337 
kubelet[2607]: E0213 19:34:16.316238 2607 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:34:16.316563 kubelet[2607]: E0213 19:34:16.316528 2607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:34:16.316563 kubelet[2607]: W0213 19:34:16.316547 2607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:34:16.316643 kubelet[2607]: E0213 19:34:16.316572 2607 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:34:16.316850 kubelet[2607]: E0213 19:34:16.316812 2607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:34:16.316850 kubelet[2607]: W0213 19:34:16.316833 2607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:34:16.316850 kubelet[2607]: E0213 19:34:16.316843 2607 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:34:16.317150 kubelet[2607]: E0213 19:34:16.317130 2607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:34:16.317150 kubelet[2607]: W0213 19:34:16.317142 2607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:34:16.317240 kubelet[2607]: E0213 19:34:16.317153 2607 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:34:16.317418 kubelet[2607]: E0213 19:34:16.317398 2607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:34:16.317418 kubelet[2607]: W0213 19:34:16.317413 2607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:34:16.317503 kubelet[2607]: E0213 19:34:16.317424 2607 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:34:16.317665 kubelet[2607]: E0213 19:34:16.317650 2607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:34:16.317665 kubelet[2607]: W0213 19:34:16.317662 2607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:34:16.317766 kubelet[2607]: E0213 19:34:16.317672 2607 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:34:16.318017 kubelet[2607]: E0213 19:34:16.317942 2607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:34:16.318017 kubelet[2607]: W0213 19:34:16.317968 2607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:34:16.318017 kubelet[2607]: E0213 19:34:16.317982 2607 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:34:16.318557 kubelet[2607]: E0213 19:34:16.318339 2607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:34:16.318557 kubelet[2607]: W0213 19:34:16.318354 2607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:34:16.318557 kubelet[2607]: E0213 19:34:16.318365 2607 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:34:16.318945 kubelet[2607]: E0213 19:34:16.318682 2607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:34:16.318945 kubelet[2607]: W0213 19:34:16.318693 2607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:34:16.318945 kubelet[2607]: E0213 19:34:16.318704 2607 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:34:16.319383 kubelet[2607]: E0213 19:34:16.318993 2607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:34:16.319383 kubelet[2607]: W0213 19:34:16.319003 2607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:34:16.319383 kubelet[2607]: E0213 19:34:16.319014 2607 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:34:16.319383 kubelet[2607]: E0213 19:34:16.319273 2607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:34:16.319383 kubelet[2607]: W0213 19:34:16.319283 2607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:34:16.319383 kubelet[2607]: E0213 19:34:16.319294 2607 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:34:16.320005 kubelet[2607]: E0213 19:34:16.319537 2607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:34:16.320005 kubelet[2607]: W0213 19:34:16.319555 2607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:34:16.320005 kubelet[2607]: E0213 19:34:16.319566 2607 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:34:16.320005 kubelet[2607]: E0213 19:34:16.319840 2607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:34:16.320005 kubelet[2607]: W0213 19:34:16.319873 2607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:34:16.320005 kubelet[2607]: E0213 19:34:16.319895 2607 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:34:16.320288 kubelet[2607]: E0213 19:34:16.320246 2607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:34:16.320398 kubelet[2607]: W0213 19:34:16.320257 2607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:34:16.320398 kubelet[2607]: E0213 19:34:16.320394 2607 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:34:16.320696 kubelet[2607]: E0213 19:34:16.320667 2607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:34:16.320696 kubelet[2607]: W0213 19:34:16.320686 2607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:34:16.320696 kubelet[2607]: E0213 19:34:16.320703 2607 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:34:16.321509 kubelet[2607]: E0213 19:34:16.321007 2607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:34:16.321509 kubelet[2607]: W0213 19:34:16.321030 2607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:34:16.321509 kubelet[2607]: E0213 19:34:16.321045 2607 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:34:16.321509 kubelet[2607]: E0213 19:34:16.321300 2607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:34:16.321509 kubelet[2607]: W0213 19:34:16.321323 2607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:34:16.321509 kubelet[2607]: E0213 19:34:16.321335 2607 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:34:16.321740 kubelet[2607]: E0213 19:34:16.321558 2607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:34:16.321740 kubelet[2607]: W0213 19:34:16.321570 2607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:34:16.321740 kubelet[2607]: E0213 19:34:16.321582 2607 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:34:16.321888 kubelet[2607]: E0213 19:34:16.321872 2607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:34:16.321888 kubelet[2607]: W0213 19:34:16.321885 2607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:34:16.322046 kubelet[2607]: E0213 19:34:16.321898 2607 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:34:16.322162 kubelet[2607]: E0213 19:34:16.322109 2607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:34:16.322162 kubelet[2607]: W0213 19:34:16.322157 2607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:34:16.322246 kubelet[2607]: E0213 19:34:16.322167 2607 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:34:16.322381 kubelet[2607]: E0213 19:34:16.322368 2607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:34:16.322381 kubelet[2607]: W0213 19:34:16.322377 2607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:34:16.322482 kubelet[2607]: E0213 19:34:16.322385 2607 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:34:16.322607 kubelet[2607]: E0213 19:34:16.322594 2607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:34:16.322607 kubelet[2607]: W0213 19:34:16.322604 2607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:34:16.322699 kubelet[2607]: E0213 19:34:16.322613 2607 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:34:16.322865 kubelet[2607]: E0213 19:34:16.322850 2607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:34:16.322865 kubelet[2607]: W0213 19:34:16.322862 2607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:34:16.322980 kubelet[2607]: E0213 19:34:16.322870 2607 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:34:16.323100 kubelet[2607]: E0213 19:34:16.323086 2607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:34:16.323100 kubelet[2607]: W0213 19:34:16.323095 2607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:34:16.323100 kubelet[2607]: E0213 19:34:16.323102 2607 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:34:16.323332 kubelet[2607]: E0213 19:34:16.323320 2607 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:34:16.323332 kubelet[2607]: W0213 19:34:16.323329 2607 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:34:16.323409 kubelet[2607]: E0213 19:34:16.323338 2607 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:34:17.645819 kubelet[2607]: E0213 19:34:17.645754 2607 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-qqgd5" podUID="f2dd1bcc-42fa-437c-a3ad-b39a937ac4ed" Feb 13 19:34:17.945066 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2754902829.mount: Deactivated successfully. 
Feb 13 19:34:18.033064 containerd[1471]: time="2025-02-13T19:34:18.033005569Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:34:18.034183 containerd[1471]: time="2025-02-13T19:34:18.034102901Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1: active requests=0, bytes read=6855343" Feb 13 19:34:18.035558 containerd[1471]: time="2025-02-13T19:34:18.035524023Z" level=info msg="ImageCreate event name:\"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:34:18.038032 containerd[1471]: time="2025-02-13T19:34:18.037996457Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:34:18.038769 containerd[1471]: time="2025-02-13T19:34:18.038735894Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" with image id \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\", size \"6855165\" in 1.811911918s" Feb 13 19:34:18.038818 containerd[1471]: time="2025-02-13T19:34:18.038771194Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" returns image reference \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\"" Feb 13 19:34:18.040063 containerd[1471]: time="2025-02-13T19:34:18.039872145Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\"" Feb 13 19:34:18.041250 containerd[1471]: time="2025-02-13T19:34:18.041212977Z" level=info msg="CreateContainer within 
sandbox \"8dcd57549d4fcc6e70465924d73a1e25442d8e69969f78e99ae72d2a3240843f\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Feb 13 19:34:18.230117 containerd[1471]: time="2025-02-13T19:34:18.229322947Z" level=info msg="CreateContainer within sandbox \"8dcd57549d4fcc6e70465924d73a1e25442d8e69969f78e99ae72d2a3240843f\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"a0ecd62659cf848fb70320a7c1bbe87800a6ab1d9298d8ae8598198260443097\"" Feb 13 19:34:18.230252 containerd[1471]: time="2025-02-13T19:34:18.230191528Z" level=info msg="StartContainer for \"a0ecd62659cf848fb70320a7c1bbe87800a6ab1d9298d8ae8598198260443097\"" Feb 13 19:34:18.277262 systemd[1]: Started cri-containerd-a0ecd62659cf848fb70320a7c1bbe87800a6ab1d9298d8ae8598198260443097.scope - libcontainer container a0ecd62659cf848fb70320a7c1bbe87800a6ab1d9298d8ae8598198260443097. Feb 13 19:34:18.334865 systemd[1]: cri-containerd-a0ecd62659cf848fb70320a7c1bbe87800a6ab1d9298d8ae8598198260443097.scope: Deactivated successfully. Feb 13 19:34:18.356786 containerd[1471]: time="2025-02-13T19:34:18.356712220Z" level=info msg="StartContainer for \"a0ecd62659cf848fb70320a7c1bbe87800a6ab1d9298d8ae8598198260443097\" returns successfully" Feb 13 19:34:18.381626 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a0ecd62659cf848fb70320a7c1bbe87800a6ab1d9298d8ae8598198260443097-rootfs.mount: Deactivated successfully. 
Feb 13 19:34:18.401230 containerd[1471]: time="2025-02-13T19:34:18.401155578Z" level=info msg="shim disconnected" id=a0ecd62659cf848fb70320a7c1bbe87800a6ab1d9298d8ae8598198260443097 namespace=k8s.io Feb 13 19:34:18.401230 containerd[1471]: time="2025-02-13T19:34:18.401222389Z" level=warning msg="cleaning up after shim disconnected" id=a0ecd62659cf848fb70320a7c1bbe87800a6ab1d9298d8ae8598198260443097 namespace=k8s.io Feb 13 19:34:18.401230 containerd[1471]: time="2025-02-13T19:34:18.401230285Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 19:34:18.836992 kubelet[2607]: E0213 19:34:18.836942 2607 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:34:19.646151 kubelet[2607]: E0213 19:34:19.646086 2607 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-qqgd5" podUID="f2dd1bcc-42fa-437c-a3ad-b39a937ac4ed" Feb 13 19:34:20.985997 containerd[1471]: time="2025-02-13T19:34:20.985932624Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:34:20.986791 containerd[1471]: time="2025-02-13T19:34:20.986753438Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.1: active requests=0, bytes read=29850141" Feb 13 19:34:20.988062 containerd[1471]: time="2025-02-13T19:34:20.988035869Z" level=info msg="ImageCreate event name:\"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:34:20.990379 containerd[1471]: time="2025-02-13T19:34:20.990328527Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:34:20.990899 containerd[1471]: time="2025-02-13T19:34:20.990878558Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.1\" with image id \"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\", size \"31343217\" in 2.950962197s" Feb 13 19:34:20.990937 containerd[1471]: time="2025-02-13T19:34:20.990904870Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\" returns image reference \"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\"" Feb 13 19:34:20.992275 containerd[1471]: time="2025-02-13T19:34:20.992111823Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\"" Feb 13 19:34:21.003017 containerd[1471]: time="2025-02-13T19:34:21.002974064Z" level=info msg="CreateContainer within sandbox \"0c53a8a5bb17f1cb8c52747d710ad052d0588b4248cffbd7acd617605357e8b3\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Feb 13 19:34:21.021257 containerd[1471]: time="2025-02-13T19:34:21.021206735Z" level=info msg="CreateContainer within sandbox \"0c53a8a5bb17f1cb8c52747d710ad052d0588b4248cffbd7acd617605357e8b3\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"0858e840aaee4d907b518f4da02eb604b9ba343ad0cfae2ee59f40eafaf8c647\"" Feb 13 19:34:21.022231 containerd[1471]: time="2025-02-13T19:34:21.022197040Z" level=info msg="StartContainer for \"0858e840aaee4d907b518f4da02eb604b9ba343ad0cfae2ee59f40eafaf8c647\"" Feb 13 19:34:21.055167 systemd[1]: Started cri-containerd-0858e840aaee4d907b518f4da02eb604b9ba343ad0cfae2ee59f40eafaf8c647.scope - libcontainer container 
0858e840aaee4d907b518f4da02eb604b9ba343ad0cfae2ee59f40eafaf8c647. Feb 13 19:34:21.122586 containerd[1471]: time="2025-02-13T19:34:21.122527108Z" level=info msg="StartContainer for \"0858e840aaee4d907b518f4da02eb604b9ba343ad0cfae2ee59f40eafaf8c647\" returns successfully" Feb 13 19:34:21.645274 kubelet[2607]: E0213 19:34:21.645222 2607 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-qqgd5" podUID="f2dd1bcc-42fa-437c-a3ad-b39a937ac4ed" Feb 13 19:34:21.843814 kubelet[2607]: E0213 19:34:21.843739 2607 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:34:22.104239 kubelet[2607]: I0213 19:34:22.104165 2607 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-5fb98674cf-mhqn4" podStartSLOduration=2.254500993 podStartE2EDuration="7.003589861s" podCreationTimestamp="2025-02-13 19:34:15 +0000 UTC" firstStartedPulling="2025-02-13 19:34:16.242685591 +0000 UTC m=+13.701503492" lastFinishedPulling="2025-02-13 19:34:20.991774459 +0000 UTC m=+18.450592360" observedRunningTime="2025-02-13 19:34:22.003525365 +0000 UTC m=+19.462343277" watchObservedRunningTime="2025-02-13 19:34:22.003589861 +0000 UTC m=+19.462407762" Feb 13 19:34:22.913390 kubelet[2607]: I0213 19:34:22.913327 2607 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 13 19:34:22.913921 kubelet[2607]: E0213 19:34:22.913842 2607 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:34:23.645799 kubelet[2607]: E0213 19:34:23.645725 2607 pod_workers.go:1301] "Error syncing pod, 
skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-qqgd5" podUID="f2dd1bcc-42fa-437c-a3ad-b39a937ac4ed" Feb 13 19:34:25.646031 kubelet[2607]: E0213 19:34:25.645973 2607 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-qqgd5" podUID="f2dd1bcc-42fa-437c-a3ad-b39a937ac4ed" Feb 13 19:34:26.685373 containerd[1471]: time="2025-02-13T19:34:26.685287259Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:34:26.686399 containerd[1471]: time="2025-02-13T19:34:26.686347830Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.1: active requests=0, bytes read=96154154" Feb 13 19:34:26.687854 containerd[1471]: time="2025-02-13T19:34:26.687818931Z" level=info msg="ImageCreate event name:\"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:34:26.690919 containerd[1471]: time="2025-02-13T19:34:26.690824830Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:34:26.691639 containerd[1471]: time="2025-02-13T19:34:26.691585295Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.1\" with image id \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.1\", repo digest 
\"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\", size \"97647238\" in 5.699443523s" Feb 13 19:34:26.691639 containerd[1471]: time="2025-02-13T19:34:26.691635814Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\" returns image reference \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\"" Feb 13 19:34:26.694243 containerd[1471]: time="2025-02-13T19:34:26.694199619Z" level=info msg="CreateContainer within sandbox \"8dcd57549d4fcc6e70465924d73a1e25442d8e69969f78e99ae72d2a3240843f\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Feb 13 19:34:26.711733 containerd[1471]: time="2025-02-13T19:34:26.711691150Z" level=info msg="CreateContainer within sandbox \"8dcd57549d4fcc6e70465924d73a1e25442d8e69969f78e99ae72d2a3240843f\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"b919f699ab76bf3a0cd8870ec46b3c803ddf5ac1d3d5ae8addddb2f61e18674c\"" Feb 13 19:34:26.712471 containerd[1471]: time="2025-02-13T19:34:26.712295279Z" level=info msg="StartContainer for \"b919f699ab76bf3a0cd8870ec46b3c803ddf5ac1d3d5ae8addddb2f61e18674c\"" Feb 13 19:34:26.756183 systemd[1]: Started cri-containerd-b919f699ab76bf3a0cd8870ec46b3c803ddf5ac1d3d5ae8addddb2f61e18674c.scope - libcontainer container b919f699ab76bf3a0cd8870ec46b3c803ddf5ac1d3d5ae8addddb2f61e18674c. 
Feb 13 19:34:26.931769 containerd[1471]: time="2025-02-13T19:34:26.931701568Z" level=info msg="StartContainer for \"b919f699ab76bf3a0cd8870ec46b3c803ddf5ac1d3d5ae8addddb2f61e18674c\" returns successfully" Feb 13 19:34:27.848618 kubelet[2607]: E0213 19:34:27.848545 2607 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-qqgd5" podUID="f2dd1bcc-42fa-437c-a3ad-b39a937ac4ed" Feb 13 19:34:27.943435 kubelet[2607]: E0213 19:34:27.943390 2607 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:34:28.944876 kubelet[2607]: E0213 19:34:28.944831 2607 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:34:29.116426 kubelet[2607]: E0213 19:34:29.116345 2607 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-qqgd5" podUID="f2dd1bcc-42fa-437c-a3ad-b39a937ac4ed" Feb 13 19:34:29.559512 systemd[1]: cri-containerd-b919f699ab76bf3a0cd8870ec46b3c803ddf5ac1d3d5ae8addddb2f61e18674c.scope: Deactivated successfully. Feb 13 19:34:29.580341 kubelet[2607]: I0213 19:34:29.580291 2607 kubelet_node_status.go:502] "Fast updating node status as it just became ready" Feb 13 19:34:29.588389 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b919f699ab76bf3a0cd8870ec46b3c803ddf5ac1d3d5ae8addddb2f61e18674c-rootfs.mount: Deactivated successfully. 
Feb 13 19:34:29.667617 systemd[1]: Created slice kubepods-besteffort-pod1aaf1ef7_2705_4815_a824_0f60456d76fc.slice - libcontainer container kubepods-besteffort-pod1aaf1ef7_2705_4815_a824_0f60456d76fc.slice. Feb 13 19:34:29.676487 systemd[1]: Created slice kubepods-besteffort-pod30182d20_c572_4a40_ab8d_be90016a3c84.slice - libcontainer container kubepods-besteffort-pod30182d20_c572_4a40_ab8d_be90016a3c84.slice. Feb 13 19:34:29.681326 systemd[1]: Created slice kubepods-burstable-pod0d54ca70_3d73_40a5_9a0e_85776bf4fb5e.slice - libcontainer container kubepods-burstable-pod0d54ca70_3d73_40a5_9a0e_85776bf4fb5e.slice. Feb 13 19:34:29.686637 systemd[1]: Created slice kubepods-besteffort-pod54b6b75e_6c3e_4c6f_9680_59e42f2a9685.slice - libcontainer container kubepods-besteffort-pod54b6b75e_6c3e_4c6f_9680_59e42f2a9685.slice. Feb 13 19:34:29.697130 systemd[1]: Created slice kubepods-burstable-podf4864561_a999_4e48_83d2_08fa358e2d4a.slice - libcontainer container kubepods-burstable-podf4864561_a999_4e48_83d2_08fa358e2d4a.slice. 
Feb 13 19:34:29.707323 kubelet[2607]: I0213 19:34:29.707232 2607 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/30182d20-c572-4a40-ab8d-be90016a3c84-tigera-ca-bundle\") pod \"calico-kube-controllers-6f695fb64c-rb8fz\" (UID: \"30182d20-c572-4a40-ab8d-be90016a3c84\") " pod="calico-system/calico-kube-controllers-6f695fb64c-rb8fz" Feb 13 19:34:29.707323 kubelet[2607]: I0213 19:34:29.707308 2607 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w4sjh\" (UniqueName: \"kubernetes.io/projected/30182d20-c572-4a40-ab8d-be90016a3c84-kube-api-access-w4sjh\") pod \"calico-kube-controllers-6f695fb64c-rb8fz\" (UID: \"30182d20-c572-4a40-ab8d-be90016a3c84\") " pod="calico-system/calico-kube-controllers-6f695fb64c-rb8fz" Feb 13 19:34:29.707599 kubelet[2607]: I0213 19:34:29.707341 2607 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-58b2x\" (UniqueName: \"kubernetes.io/projected/1aaf1ef7-2705-4815-a824-0f60456d76fc-kube-api-access-58b2x\") pod \"calico-apiserver-5875c56fd9-b2rnd\" (UID: \"1aaf1ef7-2705-4815-a824-0f60456d76fc\") " pod="calico-apiserver/calico-apiserver-5875c56fd9-b2rnd" Feb 13 19:34:29.707599 kubelet[2607]: I0213 19:34:29.707390 2607 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0d54ca70-3d73-40a5-9a0e-85776bf4fb5e-config-volume\") pod \"coredns-668d6bf9bc-szjm2\" (UID: \"0d54ca70-3d73-40a5-9a0e-85776bf4fb5e\") " pod="kube-system/coredns-668d6bf9bc-szjm2" Feb 13 19:34:29.707599 kubelet[2607]: I0213 19:34:29.707463 2607 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f4864561-a999-4e48-83d2-08fa358e2d4a-config-volume\") pod 
\"coredns-668d6bf9bc-jsldt\" (UID: \"f4864561-a999-4e48-83d2-08fa358e2d4a\") " pod="kube-system/coredns-668d6bf9bc-jsldt" Feb 13 19:34:29.707599 kubelet[2607]: I0213 19:34:29.707504 2607 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/54b6b75e-6c3e-4c6f-9680-59e42f2a9685-calico-apiserver-certs\") pod \"calico-apiserver-5875c56fd9-cljgm\" (UID: \"54b6b75e-6c3e-4c6f-9680-59e42f2a9685\") " pod="calico-apiserver/calico-apiserver-5875c56fd9-cljgm" Feb 13 19:34:29.707599 kubelet[2607]: I0213 19:34:29.707534 2607 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6fxw8\" (UniqueName: \"kubernetes.io/projected/54b6b75e-6c3e-4c6f-9680-59e42f2a9685-kube-api-access-6fxw8\") pod \"calico-apiserver-5875c56fd9-cljgm\" (UID: \"54b6b75e-6c3e-4c6f-9680-59e42f2a9685\") " pod="calico-apiserver/calico-apiserver-5875c56fd9-cljgm" Feb 13 19:34:29.707724 kubelet[2607]: I0213 19:34:29.707586 2607 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m9ds5\" (UniqueName: \"kubernetes.io/projected/0d54ca70-3d73-40a5-9a0e-85776bf4fb5e-kube-api-access-m9ds5\") pod \"coredns-668d6bf9bc-szjm2\" (UID: \"0d54ca70-3d73-40a5-9a0e-85776bf4fb5e\") " pod="kube-system/coredns-668d6bf9bc-szjm2" Feb 13 19:34:29.707724 kubelet[2607]: I0213 19:34:29.707630 2607 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/1aaf1ef7-2705-4815-a824-0f60456d76fc-calico-apiserver-certs\") pod \"calico-apiserver-5875c56fd9-b2rnd\" (UID: \"1aaf1ef7-2705-4815-a824-0f60456d76fc\") " pod="calico-apiserver/calico-apiserver-5875c56fd9-b2rnd" Feb 13 19:34:29.707724 kubelet[2607]: I0213 19:34:29.707656 2607 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"kube-api-access-s4rc2\" (UniqueName: \"kubernetes.io/projected/f4864561-a999-4e48-83d2-08fa358e2d4a-kube-api-access-s4rc2\") pod \"coredns-668d6bf9bc-jsldt\" (UID: \"f4864561-a999-4e48-83d2-08fa358e2d4a\") " pod="kube-system/coredns-668d6bf9bc-jsldt" Feb 13 19:34:29.835041 containerd[1471]: time="2025-02-13T19:34:29.834876608Z" level=info msg="shim disconnected" id=b919f699ab76bf3a0cd8870ec46b3c803ddf5ac1d3d5ae8addddb2f61e18674c namespace=k8s.io Feb 13 19:34:29.835041 containerd[1471]: time="2025-02-13T19:34:29.834945642Z" level=warning msg="cleaning up after shim disconnected" id=b919f699ab76bf3a0cd8870ec46b3c803ddf5ac1d3d5ae8addddb2f61e18674c namespace=k8s.io Feb 13 19:34:29.835041 containerd[1471]: time="2025-02-13T19:34:29.834984016Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 19:34:29.949293 kubelet[2607]: E0213 19:34:29.949250 2607 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:34:29.950008 containerd[1471]: time="2025-02-13T19:34:29.949940637Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\"" Feb 13 19:34:29.974772 containerd[1471]: time="2025-02-13T19:34:29.974717486Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5875c56fd9-b2rnd,Uid:1aaf1ef7-2705-4815-a824-0f60456d76fc,Namespace:calico-apiserver,Attempt:0,}" Feb 13 19:34:29.981857 containerd[1471]: time="2025-02-13T19:34:29.981798188Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6f695fb64c-rb8fz,Uid:30182d20-c572-4a40-ab8d-be90016a3c84,Namespace:calico-system,Attempt:0,}" Feb 13 19:34:29.983991 kubelet[2607]: E0213 19:34:29.983945 2607 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:34:29.984985 containerd[1471]: 
time="2025-02-13T19:34:29.984597777Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-szjm2,Uid:0d54ca70-3d73-40a5-9a0e-85776bf4fb5e,Namespace:kube-system,Attempt:0,}" Feb 13 19:34:29.995216 containerd[1471]: time="2025-02-13T19:34:29.995046386Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5875c56fd9-cljgm,Uid:54b6b75e-6c3e-4c6f-9680-59e42f2a9685,Namespace:calico-apiserver,Attempt:0,}" Feb 13 19:34:29.999687 kubelet[2607]: E0213 19:34:29.999596 2607 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:34:30.000226 containerd[1471]: time="2025-02-13T19:34:30.000100203Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-jsldt,Uid:f4864561-a999-4e48-83d2-08fa358e2d4a,Namespace:kube-system,Attempt:0,}" Feb 13 19:34:30.140087 containerd[1471]: time="2025-02-13T19:34:30.138750378Z" level=error msg="Failed to destroy network for sandbox \"4247e6051f4e3e2abe906fef95e20e5b3e5882a4b8e203441fadef632bbf9deb\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:34:30.140087 containerd[1471]: time="2025-02-13T19:34:30.139781193Z" level=error msg="encountered an error cleaning up failed sandbox \"4247e6051f4e3e2abe906fef95e20e5b3e5882a4b8e203441fadef632bbf9deb\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:34:30.140087 containerd[1471]: time="2025-02-13T19:34:30.139935753Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-5875c56fd9-b2rnd,Uid:1aaf1ef7-2705-4815-a824-0f60456d76fc,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"4247e6051f4e3e2abe906fef95e20e5b3e5882a4b8e203441fadef632bbf9deb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:34:30.140500 kubelet[2607]: E0213 19:34:30.140445 2607 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4247e6051f4e3e2abe906fef95e20e5b3e5882a4b8e203441fadef632bbf9deb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:34:30.140664 kubelet[2607]: E0213 19:34:30.140644 2607 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4247e6051f4e3e2abe906fef95e20e5b3e5882a4b8e203441fadef632bbf9deb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5875c56fd9-b2rnd" Feb 13 19:34:30.141296 containerd[1471]: time="2025-02-13T19:34:30.141258706Z" level=error msg="Failed to destroy network for sandbox \"edf3205debe8fad6533b2fdb2ea0be6081befe8af003e319ea5f631de16cbd2f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:34:30.141480 kubelet[2607]: E0213 19:34:30.141450 2607 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"4247e6051f4e3e2abe906fef95e20e5b3e5882a4b8e203441fadef632bbf9deb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5875c56fd9-b2rnd" Feb 13 19:34:30.141682 containerd[1471]: time="2025-02-13T19:34:30.141653715Z" level=error msg="encountered an error cleaning up failed sandbox \"edf3205debe8fad6533b2fdb2ea0be6081befe8af003e319ea5f631de16cbd2f\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:34:30.141746 containerd[1471]: time="2025-02-13T19:34:30.141712860Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5875c56fd9-cljgm,Uid:54b6b75e-6c3e-4c6f-9680-59e42f2a9685,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"edf3205debe8fad6533b2fdb2ea0be6081befe8af003e319ea5f631de16cbd2f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:34:30.142742 kubelet[2607]: E0213 19:34:30.141949 2607 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"edf3205debe8fad6533b2fdb2ea0be6081befe8af003e319ea5f631de16cbd2f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:34:30.142742 kubelet[2607]: E0213 19:34:30.142040 2607 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"edf3205debe8fad6533b2fdb2ea0be6081befe8af003e319ea5f631de16cbd2f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5875c56fd9-cljgm" Feb 13 19:34:30.142742 kubelet[2607]: E0213 19:34:30.142112 2607 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"edf3205debe8fad6533b2fdb2ea0be6081befe8af003e319ea5f631de16cbd2f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5875c56fd9-cljgm" Feb 13 19:34:30.142848 kubelet[2607]: E0213 19:34:30.142206 2607 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5875c56fd9-cljgm_calico-apiserver(54b6b75e-6c3e-4c6f-9680-59e42f2a9685)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5875c56fd9-cljgm_calico-apiserver(54b6b75e-6c3e-4c6f-9680-59e42f2a9685)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"edf3205debe8fad6533b2fdb2ea0be6081befe8af003e319ea5f631de16cbd2f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5875c56fd9-cljgm" podUID="54b6b75e-6c3e-4c6f-9680-59e42f2a9685" Feb 13 19:34:30.143020 kubelet[2607]: E0213 19:34:30.141587 2607 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5875c56fd9-b2rnd_calico-apiserver(1aaf1ef7-2705-4815-a824-0f60456d76fc)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"calico-apiserver-5875c56fd9-b2rnd_calico-apiserver(1aaf1ef7-2705-4815-a824-0f60456d76fc)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4247e6051f4e3e2abe906fef95e20e5b3e5882a4b8e203441fadef632bbf9deb\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5875c56fd9-b2rnd" podUID="1aaf1ef7-2705-4815-a824-0f60456d76fc" Feb 13 19:34:30.154728 containerd[1471]: time="2025-02-13T19:34:30.154678874Z" level=error msg="Failed to destroy network for sandbox \"3bf8c6f6fa3408c17c0a5cb18984a1b18b1c350fbd4f4341429f107be6ca1f1b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:34:30.155087 containerd[1471]: time="2025-02-13T19:34:30.155031930Z" level=error msg="Failed to destroy network for sandbox \"f7978f4d3caf80c32d6384364e56674980211f2c5f335c86d6058bfdec2fee2e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:34:30.155332 containerd[1471]: time="2025-02-13T19:34:30.155301595Z" level=error msg="encountered an error cleaning up failed sandbox \"3bf8c6f6fa3408c17c0a5cb18984a1b18b1c350fbd4f4341429f107be6ca1f1b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:34:30.155399 containerd[1471]: time="2025-02-13T19:34:30.155368896Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-szjm2,Uid:0d54ca70-3d73-40a5-9a0e-85776bf4fb5e,Namespace:kube-system,Attempt:0,} failed, 
error" error="failed to setup network for sandbox \"3bf8c6f6fa3408c17c0a5cb18984a1b18b1c350fbd4f4341429f107be6ca1f1b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:34:30.155665 containerd[1471]: time="2025-02-13T19:34:30.155484561Z" level=error msg="encountered an error cleaning up failed sandbox \"f7978f4d3caf80c32d6384364e56674980211f2c5f335c86d6058bfdec2fee2e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:34:30.155665 containerd[1471]: time="2025-02-13T19:34:30.155562502Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-jsldt,Uid:f4864561-a999-4e48-83d2-08fa358e2d4a,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"f7978f4d3caf80c32d6384364e56674980211f2c5f335c86d6058bfdec2fee2e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:34:30.155791 kubelet[2607]: E0213 19:34:30.155685 2607 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3bf8c6f6fa3408c17c0a5cb18984a1b18b1c350fbd4f4341429f107be6ca1f1b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:34:30.155791 kubelet[2607]: E0213 19:34:30.155768 2607 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f7978f4d3caf80c32d6384364e56674980211f2c5f335c86d6058bfdec2fee2e\": plugin 
type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:34:30.155901 kubelet[2607]: E0213 19:34:30.155831 2607 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f7978f4d3caf80c32d6384364e56674980211f2c5f335c86d6058bfdec2fee2e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-jsldt" Feb 13 19:34:30.155901 kubelet[2607]: E0213 19:34:30.155849 2607 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3bf8c6f6fa3408c17c0a5cb18984a1b18b1c350fbd4f4341429f107be6ca1f1b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-szjm2" Feb 13 19:34:30.155901 kubelet[2607]: E0213 19:34:30.155859 2607 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f7978f4d3caf80c32d6384364e56674980211f2c5f335c86d6058bfdec2fee2e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-jsldt" Feb 13 19:34:30.156048 kubelet[2607]: E0213 19:34:30.155900 2607 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3bf8c6f6fa3408c17c0a5cb18984a1b18b1c350fbd4f4341429f107be6ca1f1b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-szjm2" Feb 13 19:34:30.156048 kubelet[2607]: E0213 19:34:30.155929 2607 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-jsldt_kube-system(f4864561-a999-4e48-83d2-08fa358e2d4a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-jsldt_kube-system(f4864561-a999-4e48-83d2-08fa358e2d4a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f7978f4d3caf80c32d6384364e56674980211f2c5f335c86d6058bfdec2fee2e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-jsldt" podUID="f4864561-a999-4e48-83d2-08fa358e2d4a" Feb 13 19:34:30.156176 containerd[1471]: time="2025-02-13T19:34:30.155941358Z" level=error msg="Failed to destroy network for sandbox \"74317ff9bf14938d11d9e036deef95eddf63bb81bba8dd99f45333872239830c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:34:30.156333 kubelet[2607]: E0213 19:34:30.156256 2607 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-szjm2_kube-system(0d54ca70-3d73-40a5-9a0e-85776bf4fb5e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-szjm2_kube-system(0d54ca70-3d73-40a5-9a0e-85776bf4fb5e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3bf8c6f6fa3408c17c0a5cb18984a1b18b1c350fbd4f4341429f107be6ca1f1b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and 
has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-szjm2" podUID="0d54ca70-3d73-40a5-9a0e-85776bf4fb5e" Feb 13 19:34:30.156510 containerd[1471]: time="2025-02-13T19:34:30.156473894Z" level=error msg="encountered an error cleaning up failed sandbox \"74317ff9bf14938d11d9e036deef95eddf63bb81bba8dd99f45333872239830c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:34:30.156606 containerd[1471]: time="2025-02-13T19:34:30.156554100Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6f695fb64c-rb8fz,Uid:30182d20-c572-4a40-ab8d-be90016a3c84,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"74317ff9bf14938d11d9e036deef95eddf63bb81bba8dd99f45333872239830c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:34:30.156812 kubelet[2607]: E0213 19:34:30.156769 2607 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"74317ff9bf14938d11d9e036deef95eddf63bb81bba8dd99f45333872239830c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:34:30.156899 kubelet[2607]: E0213 19:34:30.156816 2607 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"74317ff9bf14938d11d9e036deef95eddf63bb81bba8dd99f45333872239830c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" pod="calico-system/calico-kube-controllers-6f695fb64c-rb8fz" Feb 13 19:34:30.156899 kubelet[2607]: E0213 19:34:30.156833 2607 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"74317ff9bf14938d11d9e036deef95eddf63bb81bba8dd99f45333872239830c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6f695fb64c-rb8fz" Feb 13 19:34:30.156899 kubelet[2607]: E0213 19:34:30.156870 2607 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-6f695fb64c-rb8fz_calico-system(30182d20-c572-4a40-ab8d-be90016a3c84)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-6f695fb64c-rb8fz_calico-system(30182d20-c572-4a40-ab8d-be90016a3c84)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"74317ff9bf14938d11d9e036deef95eddf63bb81bba8dd99f45333872239830c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6f695fb64c-rb8fz" podUID="30182d20-c572-4a40-ab8d-be90016a3c84" Feb 13 19:34:30.653686 systemd[1]: Created slice kubepods-besteffort-podf2dd1bcc_42fa_437c_a3ad_b39a937ac4ed.slice - libcontainer container kubepods-besteffort-podf2dd1bcc_42fa_437c_a3ad_b39a937ac4ed.slice. 
Feb 13 19:34:30.656738 containerd[1471]: time="2025-02-13T19:34:30.656684612Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-qqgd5,Uid:f2dd1bcc-42fa-437c-a3ad-b39a937ac4ed,Namespace:calico-system,Attempt:0,}"
Feb 13 19:34:30.952446 kubelet[2607]: I0213 19:34:30.952311 2607 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f7978f4d3caf80c32d6384364e56674980211f2c5f335c86d6058bfdec2fee2e"
Feb 13 19:34:30.953282 containerd[1471]: time="2025-02-13T19:34:30.953081095Z" level=info msg="StopPodSandbox for \"f7978f4d3caf80c32d6384364e56674980211f2c5f335c86d6058bfdec2fee2e\""
Feb 13 19:34:30.953594 containerd[1471]: time="2025-02-13T19:34:30.953341501Z" level=info msg="Ensure that sandbox f7978f4d3caf80c32d6384364e56674980211f2c5f335c86d6058bfdec2fee2e in task-service has been cleanup successfully"
Feb 13 19:34:30.953638 containerd[1471]: time="2025-02-13T19:34:30.953594273Z" level=info msg="TearDown network for sandbox \"f7978f4d3caf80c32d6384364e56674980211f2c5f335c86d6058bfdec2fee2e\" successfully"
Feb 13 19:34:30.953638 containerd[1471]: time="2025-02-13T19:34:30.953607950Z" level=info msg="StopPodSandbox for \"f7978f4d3caf80c32d6384364e56674980211f2c5f335c86d6058bfdec2fee2e\" returns successfully"
Feb 13 19:34:30.954797 kubelet[2607]: E0213 19:34:30.954030 2607 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:34:30.954928 containerd[1471]: time="2025-02-13T19:34:30.954403607Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-jsldt,Uid:f4864561-a999-4e48-83d2-08fa358e2d4a,Namespace:kube-system,Attempt:1,}"
Feb 13 19:34:30.954973 kubelet[2607]: I0213 19:34:30.954893 2607 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="edf3205debe8fad6533b2fdb2ea0be6081befe8af003e319ea5f631de16cbd2f"
Feb 13 19:34:30.955354 containerd[1471]: time="2025-02-13T19:34:30.955321171Z" level=info msg="StopPodSandbox for \"edf3205debe8fad6533b2fdb2ea0be6081befe8af003e319ea5f631de16cbd2f\""
Feb 13 19:34:30.955566 containerd[1471]: time="2025-02-13T19:34:30.955546018Z" level=info msg="Ensure that sandbox edf3205debe8fad6533b2fdb2ea0be6081befe8af003e319ea5f631de16cbd2f in task-service has been cleanup successfully"
Feb 13 19:34:30.955750 containerd[1471]: time="2025-02-13T19:34:30.955730938Z" level=info msg="TearDown network for sandbox \"edf3205debe8fad6533b2fdb2ea0be6081befe8af003e319ea5f631de16cbd2f\" successfully"
Feb 13 19:34:30.955750 containerd[1471]: time="2025-02-13T19:34:30.955746879Z" level=info msg="StopPodSandbox for \"edf3205debe8fad6533b2fdb2ea0be6081befe8af003e319ea5f631de16cbd2f\" returns successfully"
Feb 13 19:34:30.956254 systemd[1]: run-netns-cni\x2d5ae4c45d\x2d1464\x2d3073\x2df8b5\x2d15aa0c5fc8a9.mount: Deactivated successfully.
Feb 13 19:34:30.956382 containerd[1471]: time="2025-02-13T19:34:30.956245509Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5875c56fd9-cljgm,Uid:54b6b75e-6c3e-4c6f-9680-59e42f2a9685,Namespace:calico-apiserver,Attempt:1,}"
Feb 13 19:34:30.958040 kubelet[2607]: I0213 19:34:30.958001 2607 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3bf8c6f6fa3408c17c0a5cb18984a1b18b1c350fbd4f4341429f107be6ca1f1b"
Feb 13 19:34:30.958479 containerd[1471]: time="2025-02-13T19:34:30.958454014Z" level=info msg="StopPodSandbox for \"3bf8c6f6fa3408c17c0a5cb18984a1b18b1c350fbd4f4341429f107be6ca1f1b\""
Feb 13 19:34:30.958666 containerd[1471]: time="2025-02-13T19:34:30.958635947Z" level=info msg="Ensure that sandbox 3bf8c6f6fa3408c17c0a5cb18984a1b18b1c350fbd4f4341429f107be6ca1f1b in task-service has been cleanup successfully"
Feb 13 19:34:30.958753 kubelet[2607]: I0213 19:34:30.958704 2607 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4247e6051f4e3e2abe906fef95e20e5b3e5882a4b8e203441fadef632bbf9deb"
Feb 13 19:34:30.958864 systemd[1]: run-netns-cni\x2d3ef7af7c\x2d5f6e\x2dfc18\x2d5940\x2d9ceab6eddd6f.mount: Deactivated successfully.
Feb 13 19:34:30.959292 containerd[1471]: time="2025-02-13T19:34:30.959102905Z" level=info msg="StopPodSandbox for \"4247e6051f4e3e2abe906fef95e20e5b3e5882a4b8e203441fadef632bbf9deb\""
Feb 13 19:34:30.959474 containerd[1471]: time="2025-02-13T19:34:30.959334185Z" level=info msg="Ensure that sandbox 4247e6051f4e3e2abe906fef95e20e5b3e5882a4b8e203441fadef632bbf9deb in task-service has been cleanup successfully"
Feb 13 19:34:30.960108 containerd[1471]: time="2025-02-13T19:34:30.960073543Z" level=info msg="TearDown network for sandbox \"3bf8c6f6fa3408c17c0a5cb18984a1b18b1c350fbd4f4341429f107be6ca1f1b\" successfully"
Feb 13 19:34:30.960108 containerd[1471]: time="2025-02-13T19:34:30.960092038Z" level=info msg="StopPodSandbox for \"3bf8c6f6fa3408c17c0a5cb18984a1b18b1c350fbd4f4341429f107be6ca1f1b\" returns successfully"
Feb 13 19:34:30.960291 kubelet[2607]: I0213 19:34:30.960269 2607 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="74317ff9bf14938d11d9e036deef95eddf63bb81bba8dd99f45333872239830c"
Feb 13 19:34:30.960583 kubelet[2607]: E0213 19:34:30.960555 2607 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:34:30.961218 containerd[1471]: time="2025-02-13T19:34:30.960763013Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-szjm2,Uid:0d54ca70-3d73-40a5-9a0e-85776bf4fb5e,Namespace:kube-system,Attempt:1,}"
Feb 13 19:34:30.961218 containerd[1471]: time="2025-02-13T19:34:30.960915921Z" level=info msg="StopPodSandbox for \"74317ff9bf14938d11d9e036deef95eddf63bb81bba8dd99f45333872239830c\""
Feb 13 19:34:30.961218 containerd[1471]: time="2025-02-13T19:34:30.961017519Z" level=info msg="TearDown network for sandbox \"4247e6051f4e3e2abe906fef95e20e5b3e5882a4b8e203441fadef632bbf9deb\" successfully"
Feb 13 19:34:30.961218 containerd[1471]: time="2025-02-13T19:34:30.961036986Z" level=info msg="StopPodSandbox for \"4247e6051f4e3e2abe906fef95e20e5b3e5882a4b8e203441fadef632bbf9deb\" returns successfully"
Feb 13 19:34:30.961218 containerd[1471]: time="2025-02-13T19:34:30.961106221Z" level=info msg="Ensure that sandbox 74317ff9bf14938d11d9e036deef95eddf63bb81bba8dd99f45333872239830c in task-service has been cleanup successfully"
Feb 13 19:34:30.961372 containerd[1471]: time="2025-02-13T19:34:30.961355766Z" level=info msg="TearDown network for sandbox \"74317ff9bf14938d11d9e036deef95eddf63bb81bba8dd99f45333872239830c\" successfully"
Feb 13 19:34:30.961398 containerd[1471]: time="2025-02-13T19:34:30.961372138Z" level=info msg="StopPodSandbox for \"74317ff9bf14938d11d9e036deef95eddf63bb81bba8dd99f45333872239830c\" returns successfully"
Feb 13 19:34:30.961600 systemd[1]: run-netns-cni\x2d509a1290\x2d4033\x2d0250\x2d2950\x2de2df6ba804dc.mount: Deactivated successfully.
Feb 13 19:34:30.961702 systemd[1]: run-netns-cni\x2d26cb6e9a\x2dd3d8\x2d041b\x2d7e0f\x2d20805b99f8aa.mount: Deactivated successfully.
Feb 13 19:34:30.962088 containerd[1471]: time="2025-02-13T19:34:30.961999268Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6f695fb64c-rb8fz,Uid:30182d20-c572-4a40-ab8d-be90016a3c84,Namespace:calico-system,Attempt:1,}"
Feb 13 19:34:30.962139 containerd[1471]: time="2025-02-13T19:34:30.962003395Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5875c56fd9-b2rnd,Uid:1aaf1ef7-2705-4815-a824-0f60456d76fc,Namespace:calico-apiserver,Attempt:1,}"
Feb 13 19:34:30.964523 systemd[1]: run-netns-cni\x2ddaf39790\x2d8e23\x2da95c\x2d952c\x2d7ca14c161e98.mount: Deactivated successfully.
Feb 13 19:34:31.661831 containerd[1471]: time="2025-02-13T19:34:31.661748781Z" level=error msg="Failed to destroy network for sandbox \"aa566766537fcbea79ee7dfd711b3aa1da552d84a540cd31d2773ed5d6234a37\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 19:34:31.662350 containerd[1471]: time="2025-02-13T19:34:31.662226839Z" level=error msg="encountered an error cleaning up failed sandbox \"aa566766537fcbea79ee7dfd711b3aa1da552d84a540cd31d2773ed5d6234a37\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 19:34:31.662350 containerd[1471]: time="2025-02-13T19:34:31.662304180Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-qqgd5,Uid:f2dd1bcc-42fa-437c-a3ad-b39a937ac4ed,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"aa566766537fcbea79ee7dfd711b3aa1da552d84a540cd31d2773ed5d6234a37\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 19:34:31.662876 kubelet[2607]: E0213 19:34:31.662837 2607 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"aa566766537fcbea79ee7dfd711b3aa1da552d84a540cd31d2773ed5d6234a37\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 19:34:31.662942 kubelet[2607]: E0213 19:34:31.662923 2607 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"aa566766537fcbea79ee7dfd711b3aa1da552d84a540cd31d2773ed5d6234a37\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-qqgd5"
Feb 13 19:34:31.663074 kubelet[2607]: E0213 19:34:31.662949 2607 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"aa566766537fcbea79ee7dfd711b3aa1da552d84a540cd31d2773ed5d6234a37\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-qqgd5"
Feb 13 19:34:31.663129 kubelet[2607]: E0213 19:34:31.663100 2607 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-qqgd5_calico-system(f2dd1bcc-42fa-437c-a3ad-b39a937ac4ed)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-qqgd5_calico-system(f2dd1bcc-42fa-437c-a3ad-b39a937ac4ed)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"aa566766537fcbea79ee7dfd711b3aa1da552d84a540cd31d2773ed5d6234a37\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-qqgd5" podUID="f2dd1bcc-42fa-437c-a3ad-b39a937ac4ed"
Feb 13 19:34:31.712445 containerd[1471]: time="2025-02-13T19:34:31.711555648Z" level=error msg="Failed to destroy network for sandbox \"57d82f82dcc70e58e1433a34e35ff480974978c9bcfafbd8a88124a3d4bb74c2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 19:34:31.712803 containerd[1471]: time="2025-02-13T19:34:31.712756832Z" level=error msg="encountered an error cleaning up failed sandbox \"57d82f82dcc70e58e1433a34e35ff480974978c9bcfafbd8a88124a3d4bb74c2\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 19:34:31.712887 containerd[1471]: time="2025-02-13T19:34:31.712861265Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-jsldt,Uid:f4864561-a999-4e48-83d2-08fa358e2d4a,Namespace:kube-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"57d82f82dcc70e58e1433a34e35ff480974978c9bcfafbd8a88124a3d4bb74c2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 19:34:31.713183 kubelet[2607]: E0213 19:34:31.713140 2607 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"57d82f82dcc70e58e1433a34e35ff480974978c9bcfafbd8a88124a3d4bb74c2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 19:34:31.713257 kubelet[2607]: E0213 19:34:31.713208 2607 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"57d82f82dcc70e58e1433a34e35ff480974978c9bcfafbd8a88124a3d4bb74c2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-jsldt"
Feb 13 19:34:31.713294 kubelet[2607]: E0213 19:34:31.713253 2607 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"57d82f82dcc70e58e1433a34e35ff480974978c9bcfafbd8a88124a3d4bb74c2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-jsldt"
Feb 13 19:34:31.713358 kubelet[2607]: E0213 19:34:31.713326 2607 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-jsldt_kube-system(f4864561-a999-4e48-83d2-08fa358e2d4a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-jsldt_kube-system(f4864561-a999-4e48-83d2-08fa358e2d4a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"57d82f82dcc70e58e1433a34e35ff480974978c9bcfafbd8a88124a3d4bb74c2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-jsldt" podUID="f4864561-a999-4e48-83d2-08fa358e2d4a"
Feb 13 19:34:31.718016 containerd[1471]: time="2025-02-13T19:34:31.717945924Z" level=error msg="Failed to destroy network for sandbox \"c207751ef81c77cf11834cc9d214df5236d504a114b5d8c9091976394140218b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 19:34:31.718712 containerd[1471]: time="2025-02-13T19:34:31.718560257Z" level=error msg="encountered an error cleaning up failed sandbox \"c207751ef81c77cf11834cc9d214df5236d504a114b5d8c9091976394140218b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 19:34:31.718971 containerd[1471]: time="2025-02-13T19:34:31.718932890Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-szjm2,Uid:0d54ca70-3d73-40a5-9a0e-85776bf4fb5e,Namespace:kube-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"c207751ef81c77cf11834cc9d214df5236d504a114b5d8c9091976394140218b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 19:34:31.719443 kubelet[2607]: E0213 19:34:31.719397 2607 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c207751ef81c77cf11834cc9d214df5236d504a114b5d8c9091976394140218b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 19:34:31.719489 kubelet[2607]: E0213 19:34:31.719456 2607 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c207751ef81c77cf11834cc9d214df5236d504a114b5d8c9091976394140218b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-szjm2"
Feb 13 19:34:31.719489 kubelet[2607]: E0213 19:34:31.719478 2607 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c207751ef81c77cf11834cc9d214df5236d504a114b5d8c9091976394140218b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-szjm2"
Feb 13 19:34:31.719560 kubelet[2607]: E0213 19:34:31.719528 2607 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-szjm2_kube-system(0d54ca70-3d73-40a5-9a0e-85776bf4fb5e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-szjm2_kube-system(0d54ca70-3d73-40a5-9a0e-85776bf4fb5e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c207751ef81c77cf11834cc9d214df5236d504a114b5d8c9091976394140218b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-szjm2" podUID="0d54ca70-3d73-40a5-9a0e-85776bf4fb5e"
Feb 13 19:34:31.726845 containerd[1471]: time="2025-02-13T19:34:31.726789866Z" level=error msg="Failed to destroy network for sandbox \"61b0a4a05bfa4906e5a299438b6dbb931cae8d64bcd9531789f49e1f12f0c1b7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 19:34:31.727538 containerd[1471]: time="2025-02-13T19:34:31.727307472Z" level=error msg="encountered an error cleaning up failed sandbox \"61b0a4a05bfa4906e5a299438b6dbb931cae8d64bcd9531789f49e1f12f0c1b7\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 19:34:31.727538 containerd[1471]: time="2025-02-13T19:34:31.727399129Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6f695fb64c-rb8fz,Uid:30182d20-c572-4a40-ab8d-be90016a3c84,Namespace:calico-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"61b0a4a05bfa4906e5a299438b6dbb931cae8d64bcd9531789f49e1f12f0c1b7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 19:34:31.729013 kubelet[2607]: E0213 19:34:31.727802 2607 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"61b0a4a05bfa4906e5a299438b6dbb931cae8d64bcd9531789f49e1f12f0c1b7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 19:34:31.729013 kubelet[2607]: E0213 19:34:31.727898 2607 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"61b0a4a05bfa4906e5a299438b6dbb931cae8d64bcd9531789f49e1f12f0c1b7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6f695fb64c-rb8fz"
Feb 13 19:34:31.729013 kubelet[2607]: E0213 19:34:31.727944 2607 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"61b0a4a05bfa4906e5a299438b6dbb931cae8d64bcd9531789f49e1f12f0c1b7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6f695fb64c-rb8fz"
Feb 13 19:34:31.729144 kubelet[2607]: E0213 19:34:31.728029 2607 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-6f695fb64c-rb8fz_calico-system(30182d20-c572-4a40-ab8d-be90016a3c84)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-6f695fb64c-rb8fz_calico-system(30182d20-c572-4a40-ab8d-be90016a3c84)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"61b0a4a05bfa4906e5a299438b6dbb931cae8d64bcd9531789f49e1f12f0c1b7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6f695fb64c-rb8fz" podUID="30182d20-c572-4a40-ab8d-be90016a3c84"
Feb 13 19:34:31.732158 containerd[1471]: time="2025-02-13T19:34:31.732121644Z" level=error msg="Failed to destroy network for sandbox \"a03ca714ca808a2148d76b09fbccf7564f4b62a6558236571eb5725797eee6e7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 19:34:31.732694 containerd[1471]: time="2025-02-13T19:34:31.732667094Z" level=error msg="encountered an error cleaning up failed sandbox \"a03ca714ca808a2148d76b09fbccf7564f4b62a6558236571eb5725797eee6e7\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 19:34:31.732771 containerd[1471]: time="2025-02-13T19:34:31.732732060Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5875c56fd9-cljgm,Uid:54b6b75e-6c3e-4c6f-9680-59e42f2a9685,Namespace:calico-apiserver,Attempt:1,} failed, error" error="failed to setup network for sandbox \"a03ca714ca808a2148d76b09fbccf7564f4b62a6558236571eb5725797eee6e7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 19:34:31.733006 kubelet[2607]: E0213 19:34:31.732940 2607 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a03ca714ca808a2148d76b09fbccf7564f4b62a6558236571eb5725797eee6e7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 19:34:31.733006 kubelet[2607]: E0213 19:34:31.732988 2607 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a03ca714ca808a2148d76b09fbccf7564f4b62a6558236571eb5725797eee6e7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5875c56fd9-cljgm"
Feb 13 19:34:31.733006 kubelet[2607]: E0213 19:34:31.733004 2607 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a03ca714ca808a2148d76b09fbccf7564f4b62a6558236571eb5725797eee6e7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5875c56fd9-cljgm"
Feb 13 19:34:31.733125 kubelet[2607]: E0213 19:34:31.733063 2607 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5875c56fd9-cljgm_calico-apiserver(54b6b75e-6c3e-4c6f-9680-59e42f2a9685)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5875c56fd9-cljgm_calico-apiserver(54b6b75e-6c3e-4c6f-9680-59e42f2a9685)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a03ca714ca808a2148d76b09fbccf7564f4b62a6558236571eb5725797eee6e7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5875c56fd9-cljgm" podUID="54b6b75e-6c3e-4c6f-9680-59e42f2a9685"
Feb 13 19:34:31.733523 containerd[1471]: time="2025-02-13T19:34:31.733498489Z" level=error msg="Failed to destroy network for sandbox \"64fd378561b5462bd9a9a451a83f2f63bc302a1d356185e49501f35817f9f4b7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 19:34:31.733846 containerd[1471]: time="2025-02-13T19:34:31.733825625Z" level=error msg="encountered an error cleaning up failed sandbox \"64fd378561b5462bd9a9a451a83f2f63bc302a1d356185e49501f35817f9f4b7\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 19:34:31.733879 containerd[1471]: time="2025-02-13T19:34:31.733866094Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5875c56fd9-b2rnd,Uid:1aaf1ef7-2705-4815-a824-0f60456d76fc,Namespace:calico-apiserver,Attempt:1,} failed, error" error="failed to setup network for sandbox \"64fd378561b5462bd9a9a451a83f2f63bc302a1d356185e49501f35817f9f4b7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 19:34:31.734067 kubelet[2607]: E0213 19:34:31.734039 2607 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"64fd378561b5462bd9a9a451a83f2f63bc302a1d356185e49501f35817f9f4b7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 19:34:31.734067 kubelet[2607]: E0213 19:34:31.734074 2607 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"64fd378561b5462bd9a9a451a83f2f63bc302a1d356185e49501f35817f9f4b7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5875c56fd9-b2rnd"
Feb 13 19:34:31.734153 kubelet[2607]: E0213 19:34:31.734088 2607 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"64fd378561b5462bd9a9a451a83f2f63bc302a1d356185e49501f35817f9f4b7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5875c56fd9-b2rnd"
Feb 13 19:34:31.734186 kubelet[2607]: E0213 19:34:31.734145 2607 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5875c56fd9-b2rnd_calico-apiserver(1aaf1ef7-2705-4815-a824-0f60456d76fc)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5875c56fd9-b2rnd_calico-apiserver(1aaf1ef7-2705-4815-a824-0f60456d76fc)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"64fd378561b5462bd9a9a451a83f2f63bc302a1d356185e49501f35817f9f4b7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5875c56fd9-b2rnd" podUID="1aaf1ef7-2705-4815-a824-0f60456d76fc"
Feb 13 19:34:31.963310 kubelet[2607]: I0213 19:34:31.963169 2607 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="57d82f82dcc70e58e1433a34e35ff480974978c9bcfafbd8a88124a3d4bb74c2"
Feb 13 19:34:31.964534 containerd[1471]: time="2025-02-13T19:34:31.964054093Z" level=info msg="StopPodSandbox for \"57d82f82dcc70e58e1433a34e35ff480974978c9bcfafbd8a88124a3d4bb74c2\""
Feb 13 19:34:31.964534 containerd[1471]: time="2025-02-13T19:34:31.964384274Z" level=info msg="Ensure that sandbox 57d82f82dcc70e58e1433a34e35ff480974978c9bcfafbd8a88124a3d4bb74c2 in task-service has been cleanup successfully"
Feb 13 19:34:31.965159 containerd[1471]: time="2025-02-13T19:34:31.964630874Z" level=info msg="TearDown network for sandbox \"57d82f82dcc70e58e1433a34e35ff480974978c9bcfafbd8a88124a3d4bb74c2\" successfully"
Feb 13 19:34:31.965159 containerd[1471]: time="2025-02-13T19:34:31.964644200Z" level=info msg="StopPodSandbox for \"57d82f82dcc70e58e1433a34e35ff480974978c9bcfafbd8a88124a3d4bb74c2\" returns successfully"
Feb 13 19:34:31.965233 containerd[1471]: time="2025-02-13T19:34:31.965200119Z" level=info msg="StopPodSandbox for \"f7978f4d3caf80c32d6384364e56674980211f2c5f335c86d6058bfdec2fee2e\""
Feb 13 19:34:31.965412 containerd[1471]: time="2025-02-13T19:34:31.965334710Z" level=info msg="TearDown network for sandbox \"f7978f4d3caf80c32d6384364e56674980211f2c5f335c86d6058bfdec2fee2e\" successfully"
Feb 13 19:34:31.965412 containerd[1471]: time="2025-02-13T19:34:31.965380419Z" level=info msg="StopPodSandbox for \"f7978f4d3caf80c32d6384364e56674980211f2c5f335c86d6058bfdec2fee2e\" returns successfully"
Feb 13 19:34:31.965491 kubelet[2607]: I0213 19:34:31.965382 2607 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a03ca714ca808a2148d76b09fbccf7564f4b62a6558236571eb5725797eee6e7"
Feb 13 19:34:31.965794 kubelet[2607]: E0213 19:34:31.965779 2607 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:34:31.965834 containerd[1471]: time="2025-02-13T19:34:31.965811176Z" level=info msg="StopPodSandbox for \"a03ca714ca808a2148d76b09fbccf7564f4b62a6558236571eb5725797eee6e7\""
Feb 13 19:34:31.966000 containerd[1471]: time="2025-02-13T19:34:31.965983752Z" level=info msg="Ensure that sandbox a03ca714ca808a2148d76b09fbccf7564f4b62a6558236571eb5725797eee6e7 in task-service has been cleanup successfully"
Feb 13 19:34:31.966201 containerd[1471]: time="2025-02-13T19:34:31.966176887Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-jsldt,Uid:f4864561-a999-4e48-83d2-08fa358e2d4a,Namespace:kube-system,Attempt:2,}"
Feb 13 19:34:31.966369 containerd[1471]: time="2025-02-13T19:34:31.966348941Z" level=info msg="TearDown network for sandbox \"a03ca714ca808a2148d76b09fbccf7564f4b62a6558236571eb5725797eee6e7\" successfully"
Feb 13 19:34:31.966369 containerd[1471]: time="2025-02-13T19:34:31.966367046Z" level=info msg="StopPodSandbox for \"a03ca714ca808a2148d76b09fbccf7564f4b62a6558236571eb5725797eee6e7\" returns successfully"
Feb 13 19:34:31.966772 containerd[1471]: time="2025-02-13T19:34:31.966615409Z" level=info msg="StopPodSandbox for \"edf3205debe8fad6533b2fdb2ea0be6081befe8af003e319ea5f631de16cbd2f\""
Feb 13 19:34:31.966772 containerd[1471]: time="2025-02-13T19:34:31.966701146Z" level=info msg="TearDown network for sandbox \"edf3205debe8fad6533b2fdb2ea0be6081befe8af003e319ea5f631de16cbd2f\" successfully"
Feb 13 19:34:31.966772 containerd[1471]: time="2025-02-13T19:34:31.966712106Z" level=info msg="StopPodSandbox for \"edf3205debe8fad6533b2fdb2ea0be6081befe8af003e319ea5f631de16cbd2f\" returns successfully"
Feb 13 19:34:31.967466 containerd[1471]: time="2025-02-13T19:34:31.967435993Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5875c56fd9-cljgm,Uid:54b6b75e-6c3e-4c6f-9680-59e42f2a9685,Namespace:calico-apiserver,Attempt:2,}"
Feb 13 19:34:31.970671 kubelet[2607]: I0213 19:34:31.970638 2607 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c207751ef81c77cf11834cc9d214df5236d504a114b5d8c9091976394140218b"
Feb 13 19:34:31.971247 containerd[1471]: time="2025-02-13T19:34:31.971217741Z" level=info msg="StopPodSandbox for \"c207751ef81c77cf11834cc9d214df5236d504a114b5d8c9091976394140218b\""
Feb 13 19:34:31.971582 containerd[1471]: time="2025-02-13T19:34:31.971431505Z" level=info msg="Ensure that sandbox c207751ef81c77cf11834cc9d214df5236d504a114b5d8c9091976394140218b in task-service has been cleanup successfully"
Feb 13 19:34:31.971710 containerd[1471]: time="2025-02-13T19:34:31.971688004Z" level=info msg="TearDown network for sandbox \"c207751ef81c77cf11834cc9d214df5236d504a114b5d8c9091976394140218b\" successfully"
Feb 13 19:34:31.971710 containerd[1471]: time="2025-02-13T19:34:31.971708263Z" level=info msg="StopPodSandbox for \"c207751ef81c77cf11834cc9d214df5236d504a114b5d8c9091976394140218b\" returns successfully"
Feb 13 19:34:31.972061 containerd[1471]: time="2025-02-13T19:34:31.972014879Z" level=info msg="StopPodSandbox for \"3bf8c6f6fa3408c17c0a5cb18984a1b18b1c350fbd4f4341429f107be6ca1f1b\""
Feb 13 19:34:31.972187 containerd[1471]: time="2025-02-13T19:34:31.972137597Z" level=info msg="TearDown network for sandbox \"3bf8c6f6fa3408c17c0a5cb18984a1b18b1c350fbd4f4341429f107be6ca1f1b\" successfully"
Feb 13 19:34:31.972187 containerd[1471]: time="2025-02-13T19:34:31.972184098Z" level=info msg="StopPodSandbox for \"3bf8c6f6fa3408c17c0a5cb18984a1b18b1c350fbd4f4341429f107be6ca1f1b\" returns successfully"
Feb 13 19:34:31.972608 kubelet[2607]: I0213 19:34:31.972437 2607 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="61b0a4a05bfa4906e5a299438b6dbb931cae8d64bcd9531789f49e1f12f0c1b7"
Feb 13 19:34:31.972608 kubelet[2607]: E0213 19:34:31.972442 2607 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:34:31.972741 containerd[1471]: time="2025-02-13T19:34:31.972721222Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-szjm2,Uid:0d54ca70-3d73-40a5-9a0e-85776bf4fb5e,Namespace:kube-system,Attempt:2,}"
Feb 13 19:34:31.973012 containerd[1471]: time="2025-02-13T19:34:31.972937491Z" level=info msg="StopPodSandbox for \"61b0a4a05bfa4906e5a299438b6dbb931cae8d64bcd9531789f49e1f12f0c1b7\""
Feb 13 19:34:31.973168 containerd[1471]: time="2025-02-13T19:34:31.973150977Z" level=info msg="Ensure that sandbox 61b0a4a05bfa4906e5a299438b6dbb931cae8d64bcd9531789f49e1f12f0c1b7 in task-service has been cleanup successfully"
Feb 13 19:34:31.973333 containerd[1471]: time="2025-02-13T19:34:31.973298734Z" level=info msg="TearDown network for sandbox \"61b0a4a05bfa4906e5a299438b6dbb931cae8d64bcd9531789f49e1f12f0c1b7\" successfully"
Feb 13 19:34:31.973365 containerd[1471]: time="2025-02-13T19:34:31.973333351Z" level=info msg="StopPodSandbox for \"61b0a4a05bfa4906e5a299438b6dbb931cae8d64bcd9531789f49e1f12f0c1b7\" returns successfully"
Feb 13 19:34:31.973616 containerd[1471]: time="2025-02-13T19:34:31.973586623Z" level=info msg="StopPodSandbox for \"74317ff9bf14938d11d9e036deef95eddf63bb81bba8dd99f45333872239830c\""
Feb 13 19:34:31.973699 containerd[1471]: time="2025-02-13T19:34:31.973657100Z" level=info msg="TearDown network for sandbox \"74317ff9bf14938d11d9e036deef95eddf63bb81bba8dd99f45333872239830c\" successfully"
Feb 13 19:34:31.973699 containerd[1471]: time="2025-02-13T19:34:31.973695345Z" level=info msg="StopPodSandbox for \"74317ff9bf14938d11d9e036deef95eddf63bb81bba8dd99f45333872239830c\" returns successfully"
Feb 13 19:34:31.974007 kubelet[2607]: I0213 19:34:31.973850 2607 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="64fd378561b5462bd9a9a451a83f2f63bc302a1d356185e49501f35817f9f4b7"
Feb 13 19:34:31.974094 containerd[1471]: time="2025-02-13T19:34:31.974073028Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6f695fb64c-rb8fz,Uid:30182d20-c572-4a40-ab8d-be90016a3c84,Namespace:calico-system,Attempt:2,}"
Feb 13 19:34:31.974253 containerd[1471]: time="2025-02-13T19:34:31.974230474Z" level=info msg="StopPodSandbox for \"64fd378561b5462bd9a9a451a83f2f63bc302a1d356185e49501f35817f9f4b7\""
Feb 13 19:34:31.974409 containerd[1471]: time="2025-02-13T19:34:31.974388500Z" level=info msg="Ensure that sandbox 64fd378561b5462bd9a9a451a83f2f63bc302a1d356185e49501f35817f9f4b7 in task-service has been cleanup successfully"
Feb 13 19:34:31.974650 containerd[1471]: time="2025-02-13T19:34:31.974571146Z" level=info msg="TearDown network for sandbox \"64fd378561b5462bd9a9a451a83f2f63bc302a1d356185e49501f35817f9f4b7\" successfully"
Feb 13 19:34:31.974650 containerd[1471]: time="2025-02-13T19:34:31.974587718Z" level=info msg="StopPodSandbox for \"64fd378561b5462bd9a9a451a83f2f63bc302a1d356185e49501f35817f9f4b7\" returns successfully"
Feb 13 19:34:31.974813 containerd[1471]: time="2025-02-13T19:34:31.974779090Z" level=info msg="StopPodSandbox for \"4247e6051f4e3e2abe906fef95e20e5b3e5882a4b8e203441fadef632bbf9deb\""
Feb 13 19:34:31.974875 containerd[1471]: time="2025-02-13T19:34:31.974853915Z" level=info msg="TearDown network for sandbox \"4247e6051f4e3e2abe906fef95e20e5b3e5882a4b8e203441fadef632bbf9deb\" successfully"
Feb 13 19:34:31.974875 containerd[1471]: time="2025-02-13T19:34:31.974870427Z" level=info msg="StopPodSandbox for \"4247e6051f4e3e2abe906fef95e20e5b3e5882a4b8e203441fadef632bbf9deb\" returns successfully"
Feb 13 19:34:31.974972 kubelet[2607]: I0213 19:34:31.974930 2607 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="aa566766537fcbea79ee7dfd711b3aa1da552d84a540cd31d2773ed5d6234a37"
Feb 13 19:34:31.975532 containerd[1471]: time="2025-02-13T19:34:31.975262799Z" level=info msg="StopPodSandbox for \"aa566766537fcbea79ee7dfd711b3aa1da552d84a540cd31d2773ed5d6234a37\"" Feb
13 19:34:31.975532 containerd[1471]: time="2025-02-13T19:34:31.975302997Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5875c56fd9-b2rnd,Uid:1aaf1ef7-2705-4815-a824-0f60456d76fc,Namespace:calico-apiserver,Attempt:2,}" Feb 13 19:34:31.975532 containerd[1471]: time="2025-02-13T19:34:31.975417270Z" level=info msg="Ensure that sandbox aa566766537fcbea79ee7dfd711b3aa1da552d84a540cd31d2773ed5d6234a37 in task-service has been cleanup successfully" Feb 13 19:34:31.975772 containerd[1471]: time="2025-02-13T19:34:31.975753924Z" level=info msg="TearDown network for sandbox \"aa566766537fcbea79ee7dfd711b3aa1da552d84a540cd31d2773ed5d6234a37\" successfully" Feb 13 19:34:31.975772 containerd[1471]: time="2025-02-13T19:34:31.975770406Z" level=info msg="StopPodSandbox for \"aa566766537fcbea79ee7dfd711b3aa1da552d84a540cd31d2773ed5d6234a37\" returns successfully" Feb 13 19:34:31.976112 containerd[1471]: time="2025-02-13T19:34:31.976086940Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-qqgd5,Uid:f2dd1bcc-42fa-437c-a3ad-b39a937ac4ed,Namespace:calico-system,Attempt:1,}" Feb 13 19:34:32.219233 containerd[1471]: time="2025-02-13T19:34:32.218881163Z" level=error msg="Failed to destroy network for sandbox \"06ef3d0e0ebe408a479b06c02f0339a62b7b2fcde9d04ece8f023dcc846230ff\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:34:32.220903 containerd[1471]: time="2025-02-13T19:34:32.220742225Z" level=error msg="encountered an error cleaning up failed sandbox \"06ef3d0e0ebe408a479b06c02f0339a62b7b2fcde9d04ece8f023dcc846230ff\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:34:32.221253 
containerd[1471]: time="2025-02-13T19:34:32.221186338Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5875c56fd9-cljgm,Uid:54b6b75e-6c3e-4c6f-9680-59e42f2a9685,Namespace:calico-apiserver,Attempt:2,} failed, error" error="failed to setup network for sandbox \"06ef3d0e0ebe408a479b06c02f0339a62b7b2fcde9d04ece8f023dcc846230ff\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:34:32.221779 containerd[1471]: time="2025-02-13T19:34:32.221752346Z" level=error msg="Failed to destroy network for sandbox \"345c2d0a90e62e6b6da2cd0d7a3cf3b4c8615b6b47c2f70fb43fdef45c00c98f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:34:32.221996 kubelet[2607]: E0213 19:34:32.221944 2607 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"06ef3d0e0ebe408a479b06c02f0339a62b7b2fcde9d04ece8f023dcc846230ff\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:34:32.222455 kubelet[2607]: E0213 19:34:32.222024 2607 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"06ef3d0e0ebe408a479b06c02f0339a62b7b2fcde9d04ece8f023dcc846230ff\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5875c56fd9-cljgm" Feb 13 19:34:32.222455 kubelet[2607]: E0213 19:34:32.222064 2607 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" 
err="rpc error: code = Unknown desc = failed to setup network for sandbox \"06ef3d0e0ebe408a479b06c02f0339a62b7b2fcde9d04ece8f023dcc846230ff\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5875c56fd9-cljgm" Feb 13 19:34:32.222455 kubelet[2607]: E0213 19:34:32.222110 2607 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5875c56fd9-cljgm_calico-apiserver(54b6b75e-6c3e-4c6f-9680-59e42f2a9685)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5875c56fd9-cljgm_calico-apiserver(54b6b75e-6c3e-4c6f-9680-59e42f2a9685)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"06ef3d0e0ebe408a479b06c02f0339a62b7b2fcde9d04ece8f023dcc846230ff\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5875c56fd9-cljgm" podUID="54b6b75e-6c3e-4c6f-9680-59e42f2a9685" Feb 13 19:34:32.222602 containerd[1471]: time="2025-02-13T19:34:32.222341139Z" level=error msg="encountered an error cleaning up failed sandbox \"345c2d0a90e62e6b6da2cd0d7a3cf3b4c8615b6b47c2f70fb43fdef45c00c98f\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:34:32.222695 containerd[1471]: time="2025-02-13T19:34:32.222673264Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-jsldt,Uid:f4864561-a999-4e48-83d2-08fa358e2d4a,Namespace:kube-system,Attempt:2,} failed, error" error="failed to setup network for sandbox 
\"345c2d0a90e62e6b6da2cd0d7a3cf3b4c8615b6b47c2f70fb43fdef45c00c98f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:34:32.224417 kubelet[2607]: E0213 19:34:32.224384 2607 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"345c2d0a90e62e6b6da2cd0d7a3cf3b4c8615b6b47c2f70fb43fdef45c00c98f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:34:32.224469 kubelet[2607]: E0213 19:34:32.224420 2607 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"345c2d0a90e62e6b6da2cd0d7a3cf3b4c8615b6b47c2f70fb43fdef45c00c98f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-jsldt" Feb 13 19:34:32.224469 kubelet[2607]: E0213 19:34:32.224440 2607 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"345c2d0a90e62e6b6da2cd0d7a3cf3b4c8615b6b47c2f70fb43fdef45c00c98f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-jsldt" Feb 13 19:34:32.224513 kubelet[2607]: E0213 19:34:32.224476 2607 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-jsldt_kube-system(f4864561-a999-4e48-83d2-08fa358e2d4a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"coredns-668d6bf9bc-jsldt_kube-system(f4864561-a999-4e48-83d2-08fa358e2d4a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"345c2d0a90e62e6b6da2cd0d7a3cf3b4c8615b6b47c2f70fb43fdef45c00c98f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-jsldt" podUID="f4864561-a999-4e48-83d2-08fa358e2d4a" Feb 13 19:34:32.234239 containerd[1471]: time="2025-02-13T19:34:32.234185300Z" level=error msg="Failed to destroy network for sandbox \"129ca2d51794333b0651f81fbb83f78853ee604014c43d4d7646cf5817de245d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:34:32.235224 containerd[1471]: time="2025-02-13T19:34:32.235113101Z" level=error msg="encountered an error cleaning up failed sandbox \"129ca2d51794333b0651f81fbb83f78853ee604014c43d4d7646cf5817de245d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:34:32.235224 containerd[1471]: time="2025-02-13T19:34:32.235176724Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-szjm2,Uid:0d54ca70-3d73-40a5-9a0e-85776bf4fb5e,Namespace:kube-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"129ca2d51794333b0651f81fbb83f78853ee604014c43d4d7646cf5817de245d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:34:32.235628 kubelet[2607]: E0213 19:34:32.235578 2607 log.go:32] "RunPodSandbox from runtime service failed" 
err="rpc error: code = Unknown desc = failed to setup network for sandbox \"129ca2d51794333b0651f81fbb83f78853ee604014c43d4d7646cf5817de245d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:34:32.235723 kubelet[2607]: E0213 19:34:32.235649 2607 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"129ca2d51794333b0651f81fbb83f78853ee604014c43d4d7646cf5817de245d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-szjm2" Feb 13 19:34:32.235723 kubelet[2607]: E0213 19:34:32.235673 2607 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"129ca2d51794333b0651f81fbb83f78853ee604014c43d4d7646cf5817de245d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-szjm2" Feb 13 19:34:32.235770 kubelet[2607]: E0213 19:34:32.235726 2607 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-szjm2_kube-system(0d54ca70-3d73-40a5-9a0e-85776bf4fb5e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-szjm2_kube-system(0d54ca70-3d73-40a5-9a0e-85776bf4fb5e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"129ca2d51794333b0651f81fbb83f78853ee604014c43d4d7646cf5817de245d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-szjm2" podUID="0d54ca70-3d73-40a5-9a0e-85776bf4fb5e" Feb 13 19:34:32.235942 containerd[1471]: time="2025-02-13T19:34:32.235903315Z" level=error msg="Failed to destroy network for sandbox \"f0010b009040dc3632feee176ef184fca2c082b092fb675b7d8242d302a9ab90\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:34:32.238535 containerd[1471]: time="2025-02-13T19:34:32.238304405Z" level=error msg="encountered an error cleaning up failed sandbox \"f0010b009040dc3632feee176ef184fca2c082b092fb675b7d8242d302a9ab90\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:34:32.238535 containerd[1471]: time="2025-02-13T19:34:32.238382858Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-qqgd5,Uid:f2dd1bcc-42fa-437c-a3ad-b39a937ac4ed,Namespace:calico-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"f0010b009040dc3632feee176ef184fca2c082b092fb675b7d8242d302a9ab90\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:34:32.238846 kubelet[2607]: E0213 19:34:32.238788 2607 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f0010b009040dc3632feee176ef184fca2c082b092fb675b7d8242d302a9ab90\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:34:32.239084 kubelet[2607]: E0213 19:34:32.239050 2607 
kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f0010b009040dc3632feee176ef184fca2c082b092fb675b7d8242d302a9ab90\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-qqgd5" Feb 13 19:34:32.239119 kubelet[2607]: E0213 19:34:32.239100 2607 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f0010b009040dc3632feee176ef184fca2c082b092fb675b7d8242d302a9ab90\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-qqgd5" Feb 13 19:34:32.239186 kubelet[2607]: E0213 19:34:32.239153 2607 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-qqgd5_calico-system(f2dd1bcc-42fa-437c-a3ad-b39a937ac4ed)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-qqgd5_calico-system(f2dd1bcc-42fa-437c-a3ad-b39a937ac4ed)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f0010b009040dc3632feee176ef184fca2c082b092fb675b7d8242d302a9ab90\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-qqgd5" podUID="f2dd1bcc-42fa-437c-a3ad-b39a937ac4ed" Feb 13 19:34:32.239508 containerd[1471]: time="2025-02-13T19:34:32.239458586Z" level=error msg="Failed to destroy network for sandbox \"a562c6ed21eb8ef4ab42744d889219e92d5054c0611e8d3149cecda822e32115\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such 
file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:34:32.240602 containerd[1471]: time="2025-02-13T19:34:32.240562348Z" level=error msg="encountered an error cleaning up failed sandbox \"a562c6ed21eb8ef4ab42744d889219e92d5054c0611e8d3149cecda822e32115\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:34:32.240668 containerd[1471]: time="2025-02-13T19:34:32.240645550Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6f695fb64c-rb8fz,Uid:30182d20-c572-4a40-ab8d-be90016a3c84,Namespace:calico-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"a562c6ed21eb8ef4ab42744d889219e92d5054c0611e8d3149cecda822e32115\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:34:32.240948 kubelet[2607]: E0213 19:34:32.240913 2607 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a562c6ed21eb8ef4ab42744d889219e92d5054c0611e8d3149cecda822e32115\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:34:32.241045 kubelet[2607]: E0213 19:34:32.240999 2607 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a562c6ed21eb8ef4ab42744d889219e92d5054c0611e8d3149cecda822e32115\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-system/calico-kube-controllers-6f695fb64c-rb8fz" Feb 13 19:34:32.241086 kubelet[2607]: E0213 19:34:32.241046 2607 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a562c6ed21eb8ef4ab42744d889219e92d5054c0611e8d3149cecda822e32115\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6f695fb64c-rb8fz" Feb 13 19:34:32.241086 kubelet[2607]: E0213 19:34:32.241079 2607 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-6f695fb64c-rb8fz_calico-system(30182d20-c572-4a40-ab8d-be90016a3c84)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-6f695fb64c-rb8fz_calico-system(30182d20-c572-4a40-ab8d-be90016a3c84)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a562c6ed21eb8ef4ab42744d889219e92d5054c0611e8d3149cecda822e32115\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6f695fb64c-rb8fz" podUID="30182d20-c572-4a40-ab8d-be90016a3c84" Feb 13 19:34:32.255793 containerd[1471]: time="2025-02-13T19:34:32.255566252Z" level=error msg="Failed to destroy network for sandbox \"54caebe45dc792160151fcf140d8c7051ca636a79446a01f5896bd625433a35c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:34:32.256101 containerd[1471]: time="2025-02-13T19:34:32.256070572Z" level=error msg="encountered an error cleaning up failed sandbox 
\"54caebe45dc792160151fcf140d8c7051ca636a79446a01f5896bd625433a35c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:34:32.256142 containerd[1471]: time="2025-02-13T19:34:32.256129226Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5875c56fd9-b2rnd,Uid:1aaf1ef7-2705-4815-a824-0f60456d76fc,Namespace:calico-apiserver,Attempt:2,} failed, error" error="failed to setup network for sandbox \"54caebe45dc792160151fcf140d8c7051ca636a79446a01f5896bd625433a35c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:34:32.256424 kubelet[2607]: E0213 19:34:32.256368 2607 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"54caebe45dc792160151fcf140d8c7051ca636a79446a01f5896bd625433a35c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:34:32.256424 kubelet[2607]: E0213 19:34:32.256438 2607 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"54caebe45dc792160151fcf140d8c7051ca636a79446a01f5896bd625433a35c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5875c56fd9-b2rnd" Feb 13 19:34:32.256637 kubelet[2607]: E0213 19:34:32.256461 2607 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for 
sandbox \"54caebe45dc792160151fcf140d8c7051ca636a79446a01f5896bd625433a35c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5875c56fd9-b2rnd" Feb 13 19:34:32.256637 kubelet[2607]: E0213 19:34:32.256518 2607 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5875c56fd9-b2rnd_calico-apiserver(1aaf1ef7-2705-4815-a824-0f60456d76fc)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5875c56fd9-b2rnd_calico-apiserver(1aaf1ef7-2705-4815-a824-0f60456d76fc)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"54caebe45dc792160151fcf140d8c7051ca636a79446a01f5896bd625433a35c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5875c56fd9-b2rnd" podUID="1aaf1ef7-2705-4815-a824-0f60456d76fc" Feb 13 19:34:32.590097 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-61b0a4a05bfa4906e5a299438b6dbb931cae8d64bcd9531789f49e1f12f0c1b7-shm.mount: Deactivated successfully. Feb 13 19:34:32.590212 systemd[1]: run-netns-cni\x2d3bf6c2c4\x2ddf43\x2d4124\x2da36c\x2dbd9877eb0c68.mount: Deactivated successfully. Feb 13 19:34:32.590288 systemd[1]: run-netns-cni\x2d7b1c34a5\x2d731b\x2d9c66\x2d806b\x2d02032fd02c8c.mount: Deactivated successfully. Feb 13 19:34:32.590374 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a03ca714ca808a2148d76b09fbccf7564f4b62a6558236571eb5725797eee6e7-shm.mount: Deactivated successfully. Feb 13 19:34:32.590454 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-57d82f82dcc70e58e1433a34e35ff480974978c9bcfafbd8a88124a3d4bb74c2-shm.mount: Deactivated successfully. 
Feb 13 19:34:32.590531 systemd[1]: run-netns-cni\x2de25b62b8\x2d8156\x2d2c7e\x2d0bc3\x2d9dbf293435bd.mount: Deactivated successfully. Feb 13 19:34:32.590602 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-c207751ef81c77cf11834cc9d214df5236d504a114b5d8c9091976394140218b-shm.mount: Deactivated successfully. Feb 13 19:34:32.590702 systemd[1]: run-netns-cni\x2d44939c50\x2dbeee\x2d1e67\x2d8680\x2dad74c3dca36e.mount: Deactivated successfully. Feb 13 19:34:32.590772 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-aa566766537fcbea79ee7dfd711b3aa1da552d84a540cd31d2773ed5d6234a37-shm.mount: Deactivated successfully. Feb 13 19:34:32.978371 kubelet[2607]: I0213 19:34:32.978231 2607 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a562c6ed21eb8ef4ab42744d889219e92d5054c0611e8d3149cecda822e32115" Feb 13 19:34:32.978889 containerd[1471]: time="2025-02-13T19:34:32.978848941Z" level=info msg="StopPodSandbox for \"a562c6ed21eb8ef4ab42744d889219e92d5054c0611e8d3149cecda822e32115\"" Feb 13 19:34:32.979179 containerd[1471]: time="2025-02-13T19:34:32.979127552Z" level=info msg="Ensure that sandbox a562c6ed21eb8ef4ab42744d889219e92d5054c0611e8d3149cecda822e32115 in task-service has been cleanup successfully" Feb 13 19:34:32.979448 containerd[1471]: time="2025-02-13T19:34:32.979414909Z" level=info msg="TearDown network for sandbox \"a562c6ed21eb8ef4ab42744d889219e92d5054c0611e8d3149cecda822e32115\" successfully" Feb 13 19:34:32.979448 containerd[1471]: time="2025-02-13T19:34:32.979438115Z" level=info msg="StopPodSandbox for \"a562c6ed21eb8ef4ab42744d889219e92d5054c0611e8d3149cecda822e32115\" returns successfully" Feb 13 19:34:32.981529 containerd[1471]: time="2025-02-13T19:34:32.981461112Z" level=info msg="StopPodSandbox for \"61b0a4a05bfa4906e5a299438b6dbb931cae8d64bcd9531789f49e1f12f0c1b7\"" Feb 13 19:34:32.981733 containerd[1471]: time="2025-02-13T19:34:32.981632544Z" level=info msg="TearDown network for sandbox 
\"61b0a4a05bfa4906e5a299438b6dbb931cae8d64bcd9531789f49e1f12f0c1b7\" successfully" Feb 13 19:34:32.981733 containerd[1471]: time="2025-02-13T19:34:32.981653916Z" level=info msg="StopPodSandbox for \"61b0a4a05bfa4906e5a299438b6dbb931cae8d64bcd9531789f49e1f12f0c1b7\" returns successfully" Feb 13 19:34:32.982050 systemd[1]: run-netns-cni\x2d44dcb66d\x2d9267\x2d9512\x2d3b20\x2db30a18582780.mount: Deactivated successfully. Feb 13 19:34:32.982186 kubelet[2607]: I0213 19:34:32.982068 2607 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="54caebe45dc792160151fcf140d8c7051ca636a79446a01f5896bd625433a35c" Feb 13 19:34:32.982706 containerd[1471]: time="2025-02-13T19:34:32.982671732Z" level=info msg="StopPodSandbox for \"54caebe45dc792160151fcf140d8c7051ca636a79446a01f5896bd625433a35c\"" Feb 13 19:34:32.983118 containerd[1471]: time="2025-02-13T19:34:32.982807696Z" level=info msg="StopPodSandbox for \"74317ff9bf14938d11d9e036deef95eddf63bb81bba8dd99f45333872239830c\"" Feb 13 19:34:32.983205 containerd[1471]: time="2025-02-13T19:34:32.983188375Z" level=info msg="TearDown network for sandbox \"74317ff9bf14938d11d9e036deef95eddf63bb81bba8dd99f45333872239830c\" successfully" Feb 13 19:34:32.983235 containerd[1471]: time="2025-02-13T19:34:32.983205718Z" level=info msg="StopPodSandbox for \"74317ff9bf14938d11d9e036deef95eddf63bb81bba8dd99f45333872239830c\" returns successfully" Feb 13 19:34:32.983833 kubelet[2607]: I0213 19:34:32.983535 2607 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f0010b009040dc3632feee176ef184fca2c082b092fb675b7d8242d302a9ab90" Feb 13 19:34:32.983887 containerd[1471]: time="2025-02-13T19:34:32.983736318Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6f695fb64c-rb8fz,Uid:30182d20-c572-4a40-ab8d-be90016a3c84,Namespace:calico-system,Attempt:3,}" Feb 13 19:34:32.984032 containerd[1471]: time="2025-02-13T19:34:32.983995150Z" level=info msg="StopPodSandbox 
for \"f0010b009040dc3632feee176ef184fca2c082b092fb675b7d8242d302a9ab90\"" Feb 13 19:34:32.984213 containerd[1471]: time="2025-02-13T19:34:32.984192844Z" level=info msg="Ensure that sandbox f0010b009040dc3632feee176ef184fca2c082b092fb675b7d8242d302a9ab90 in task-service has been cleanup successfully" Feb 13 19:34:32.985864 kubelet[2607]: I0213 19:34:32.985629 2607 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="345c2d0a90e62e6b6da2cd0d7a3cf3b4c8615b6b47c2f70fb43fdef45c00c98f" Feb 13 19:34:32.986307 containerd[1471]: time="2025-02-13T19:34:32.986264886Z" level=info msg="StopPodSandbox for \"345c2d0a90e62e6b6da2cd0d7a3cf3b4c8615b6b47c2f70fb43fdef45c00c98f\"" Feb 13 19:34:32.986545 containerd[1471]: time="2025-02-13T19:34:32.986515282Z" level=info msg="Ensure that sandbox 345c2d0a90e62e6b6da2cd0d7a3cf3b4c8615b6b47c2f70fb43fdef45c00c98f in task-service has been cleanup successfully" Feb 13 19:34:32.986572 containerd[1471]: time="2025-02-13T19:34:32.986523629Z" level=info msg="TearDown network for sandbox \"f0010b009040dc3632feee176ef184fca2c082b092fb675b7d8242d302a9ab90\" successfully" Feb 13 19:34:32.986572 containerd[1471]: time="2025-02-13T19:34:32.986561752Z" level=info msg="StopPodSandbox for \"f0010b009040dc3632feee176ef184fca2c082b092fb675b7d8242d302a9ab90\" returns successfully" Feb 13 19:34:32.986712 systemd[1]: run-netns-cni\x2d53e69a6d\x2d5840\x2de6f0\x2d79c6\x2d8c107151cdab.mount: Deactivated successfully. 
Feb 13 19:34:32.987114 containerd[1471]: time="2025-02-13T19:34:32.986846525Z" level=info msg="TearDown network for sandbox \"345c2d0a90e62e6b6da2cd0d7a3cf3b4c8615b6b47c2f70fb43fdef45c00c98f\" successfully" Feb 13 19:34:32.987114 containerd[1471]: time="2025-02-13T19:34:32.986863809Z" level=info msg="StopPodSandbox for \"345c2d0a90e62e6b6da2cd0d7a3cf3b4c8615b6b47c2f70fb43fdef45c00c98f\" returns successfully" Feb 13 19:34:32.987114 containerd[1471]: time="2025-02-13T19:34:32.986915729Z" level=info msg="StopPodSandbox for \"aa566766537fcbea79ee7dfd711b3aa1da552d84a540cd31d2773ed5d6234a37\"" Feb 13 19:34:32.987114 containerd[1471]: time="2025-02-13T19:34:32.987008579Z" level=info msg="TearDown network for sandbox \"aa566766537fcbea79ee7dfd711b3aa1da552d84a540cd31d2773ed5d6234a37\" successfully" Feb 13 19:34:32.987114 containerd[1471]: time="2025-02-13T19:34:32.987018138Z" level=info msg="StopPodSandbox for \"aa566766537fcbea79ee7dfd711b3aa1da552d84a540cd31d2773ed5d6234a37\" returns successfully" Feb 13 19:34:32.987519 containerd[1471]: time="2025-02-13T19:34:32.987495676Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-qqgd5,Uid:f2dd1bcc-42fa-437c-a3ad-b39a937ac4ed,Namespace:calico-system,Attempt:2,}" Feb 13 19:34:32.987778 containerd[1471]: time="2025-02-13T19:34:32.987759948Z" level=info msg="StopPodSandbox for \"57d82f82dcc70e58e1433a34e35ff480974978c9bcfafbd8a88124a3d4bb74c2\"" Feb 13 19:34:32.987867 containerd[1471]: time="2025-02-13T19:34:32.987843019Z" level=info msg="TearDown network for sandbox \"57d82f82dcc70e58e1433a34e35ff480974978c9bcfafbd8a88124a3d4bb74c2\" successfully" Feb 13 19:34:32.987910 containerd[1471]: time="2025-02-13T19:34:32.987864702Z" level=info msg="StopPodSandbox for \"57d82f82dcc70e58e1433a34e35ff480974978c9bcfafbd8a88124a3d4bb74c2\" returns successfully" Feb 13 19:34:32.988267 containerd[1471]: time="2025-02-13T19:34:32.988226354Z" level=info msg="StopPodSandbox for 
\"f7978f4d3caf80c32d6384364e56674980211f2c5f335c86d6058bfdec2fee2e\"" Feb 13 19:34:32.988376 containerd[1471]: time="2025-02-13T19:34:32.988310767Z" level=info msg="TearDown network for sandbox \"f7978f4d3caf80c32d6384364e56674980211f2c5f335c86d6058bfdec2fee2e\" successfully" Feb 13 19:34:32.988376 containerd[1471]: time="2025-02-13T19:34:32.988329855Z" level=info msg="StopPodSandbox for \"f7978f4d3caf80c32d6384364e56674980211f2c5f335c86d6058bfdec2fee2e\" returns successfully" Feb 13 19:34:32.988502 kubelet[2607]: E0213 19:34:32.988480 2607 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:34:32.988667 kubelet[2607]: I0213 19:34:32.988636 2607 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="06ef3d0e0ebe408a479b06c02f0339a62b7b2fcde9d04ece8f023dcc846230ff" Feb 13 19:34:32.989007 containerd[1471]: time="2025-02-13T19:34:32.988984125Z" level=info msg="StopPodSandbox for \"06ef3d0e0ebe408a479b06c02f0339a62b7b2fcde9d04ece8f023dcc846230ff\"" Feb 13 19:34:32.989062 containerd[1471]: time="2025-02-13T19:34:32.989031346Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-jsldt,Uid:f4864561-a999-4e48-83d2-08fa358e2d4a,Namespace:kube-system,Attempt:3,}" Feb 13 19:34:32.989158 containerd[1471]: time="2025-02-13T19:34:32.989137753Z" level=info msg="Ensure that sandbox 06ef3d0e0ebe408a479b06c02f0339a62b7b2fcde9d04ece8f023dcc846230ff in task-service has been cleanup successfully" Feb 13 19:34:32.989513 containerd[1471]: time="2025-02-13T19:34:32.989482752Z" level=info msg="TearDown network for sandbox \"06ef3d0e0ebe408a479b06c02f0339a62b7b2fcde9d04ece8f023dcc846230ff\" successfully" Feb 13 19:34:32.989513 containerd[1471]: time="2025-02-13T19:34:32.989503082Z" level=info msg="StopPodSandbox for \"06ef3d0e0ebe408a479b06c02f0339a62b7b2fcde9d04ece8f023dcc846230ff\" returns successfully" Feb 
13 19:34:32.989735 systemd[1]: run-netns-cni\x2db7ec0d4f\x2d522a\x2d63e2\x2da000\x2df03686f768eb.mount: Deactivated successfully. Feb 13 19:34:32.989882 containerd[1471]: time="2025-02-13T19:34:32.989862579Z" level=info msg="StopPodSandbox for \"a03ca714ca808a2148d76b09fbccf7564f4b62a6558236571eb5725797eee6e7\"" Feb 13 19:34:32.989991 containerd[1471]: time="2025-02-13T19:34:32.989955490Z" level=info msg="TearDown network for sandbox \"a03ca714ca808a2148d76b09fbccf7564f4b62a6558236571eb5725797eee6e7\" successfully" Feb 13 19:34:32.989991 containerd[1471]: time="2025-02-13T19:34:32.989985819Z" level=info msg="StopPodSandbox for \"a03ca714ca808a2148d76b09fbccf7564f4b62a6558236571eb5725797eee6e7\" returns successfully" Feb 13 19:34:32.990523 containerd[1471]: time="2025-02-13T19:34:32.990308245Z" level=info msg="StopPodSandbox for \"edf3205debe8fad6533b2fdb2ea0be6081befe8af003e319ea5f631de16cbd2f\"" Feb 13 19:34:32.990523 containerd[1471]: time="2025-02-13T19:34:32.990392768Z" level=info msg="TearDown network for sandbox \"edf3205debe8fad6533b2fdb2ea0be6081befe8af003e319ea5f631de16cbd2f\" successfully" Feb 13 19:34:32.990523 containerd[1471]: time="2025-02-13T19:34:32.990402147Z" level=info msg="StopPodSandbox for \"edf3205debe8fad6533b2fdb2ea0be6081befe8af003e319ea5f631de16cbd2f\" returns successfully" Feb 13 19:34:32.991120 kubelet[2607]: I0213 19:34:32.991104 2607 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="129ca2d51794333b0651f81fbb83f78853ee604014c43d4d7646cf5817de245d" Feb 13 19:34:32.991235 containerd[1471]: time="2025-02-13T19:34:32.991218292Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5875c56fd9-cljgm,Uid:54b6b75e-6c3e-4c6f-9680-59e42f2a9685,Namespace:calico-apiserver,Attempt:3,}" Feb 13 19:34:32.991727 containerd[1471]: time="2025-02-13T19:34:32.991684145Z" level=info msg="StopPodSandbox for \"129ca2d51794333b0651f81fbb83f78853ee604014c43d4d7646cf5817de245d\"" Feb 13 19:34:32.991866 
containerd[1471]: time="2025-02-13T19:34:32.991851480Z" level=info msg="Ensure that sandbox 129ca2d51794333b0651f81fbb83f78853ee604014c43d4d7646cf5817de245d in task-service has been cleanup successfully" Feb 13 19:34:32.992010 containerd[1471]: time="2025-02-13T19:34:32.991996251Z" level=info msg="TearDown network for sandbox \"129ca2d51794333b0651f81fbb83f78853ee604014c43d4d7646cf5817de245d\" successfully" Feb 13 19:34:32.992059 containerd[1471]: time="2025-02-13T19:34:32.992009247Z" level=info msg="StopPodSandbox for \"129ca2d51794333b0651f81fbb83f78853ee604014c43d4d7646cf5817de245d\" returns successfully" Feb 13 19:34:32.992219 containerd[1471]: time="2025-02-13T19:34:32.992198383Z" level=info msg="StopPodSandbox for \"c207751ef81c77cf11834cc9d214df5236d504a114b5d8c9091976394140218b\"" Feb 13 19:34:32.992303 containerd[1471]: time="2025-02-13T19:34:32.992286625Z" level=info msg="TearDown network for sandbox \"c207751ef81c77cf11834cc9d214df5236d504a114b5d8c9091976394140218b\" successfully" Feb 13 19:34:32.992337 containerd[1471]: time="2025-02-13T19:34:32.992303107Z" level=info msg="StopPodSandbox for \"c207751ef81c77cf11834cc9d214df5236d504a114b5d8c9091976394140218b\" returns successfully" Feb 13 19:34:32.992580 containerd[1471]: time="2025-02-13T19:34:32.992560607Z" level=info msg="StopPodSandbox for \"3bf8c6f6fa3408c17c0a5cb18984a1b18b1c350fbd4f4341429f107be6ca1f1b\"" Feb 13 19:34:32.992666 containerd[1471]: time="2025-02-13T19:34:32.992651052Z" level=info msg="TearDown network for sandbox \"3bf8c6f6fa3408c17c0a5cb18984a1b18b1c350fbd4f4341429f107be6ca1f1b\" successfully" Feb 13 19:34:32.992688 containerd[1471]: time="2025-02-13T19:34:32.992666903Z" level=info msg="StopPodSandbox for \"3bf8c6f6fa3408c17c0a5cb18984a1b18b1c350fbd4f4341429f107be6ca1f1b\" returns successfully" Feb 13 19:34:32.993170 kubelet[2607]: E0213 19:34:32.992941 2607 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied 
nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:34:32.993348 systemd[1]: run-netns-cni\x2d7192a1d0\x2dd1db\x2dff73\x2ddbdc\x2dd96943f1e19f.mount: Deactivated successfully. Feb 13 19:34:32.993551 containerd[1471]: time="2025-02-13T19:34:32.993517194Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-szjm2,Uid:0d54ca70-3d73-40a5-9a0e-85776bf4fb5e,Namespace:kube-system,Attempt:3,}" Feb 13 19:34:33.022845 containerd[1471]: time="2025-02-13T19:34:33.022751223Z" level=info msg="Ensure that sandbox 54caebe45dc792160151fcf140d8c7051ca636a79446a01f5896bd625433a35c in task-service has been cleanup successfully" Feb 13 19:34:33.027052 containerd[1471]: time="2025-02-13T19:34:33.027014041Z" level=info msg="TearDown network for sandbox \"54caebe45dc792160151fcf140d8c7051ca636a79446a01f5896bd625433a35c\" successfully" Feb 13 19:34:33.027052 containerd[1471]: time="2025-02-13T19:34:33.027040733Z" level=info msg="StopPodSandbox for \"54caebe45dc792160151fcf140d8c7051ca636a79446a01f5896bd625433a35c\" returns successfully" Feb 13 19:34:33.027736 containerd[1471]: time="2025-02-13T19:34:33.027703798Z" level=info msg="StopPodSandbox for \"64fd378561b5462bd9a9a451a83f2f63bc302a1d356185e49501f35817f9f4b7\"" Feb 13 19:34:33.027807 containerd[1471]: time="2025-02-13T19:34:33.027787582Z" level=info msg="TearDown network for sandbox \"64fd378561b5462bd9a9a451a83f2f63bc302a1d356185e49501f35817f9f4b7\" successfully" Feb 13 19:34:33.027807 containerd[1471]: time="2025-02-13T19:34:33.027800325Z" level=info msg="StopPodSandbox for \"64fd378561b5462bd9a9a451a83f2f63bc302a1d356185e49501f35817f9f4b7\" returns successfully" Feb 13 19:34:33.028287 containerd[1471]: time="2025-02-13T19:34:33.028251320Z" level=info msg="StopPodSandbox for \"4247e6051f4e3e2abe906fef95e20e5b3e5882a4b8e203441fadef632bbf9deb\"" Feb 13 19:34:33.028555 containerd[1471]: time="2025-02-13T19:34:33.028507478Z" level=info msg="TearDown network for sandbox 
\"4247e6051f4e3e2abe906fef95e20e5b3e5882a4b8e203441fadef632bbf9deb\" successfully" Feb 13 19:34:33.028555 containerd[1471]: time="2025-02-13T19:34:33.028529029Z" level=info msg="StopPodSandbox for \"4247e6051f4e3e2abe906fef95e20e5b3e5882a4b8e203441fadef632bbf9deb\" returns successfully" Feb 13 19:34:33.030000 containerd[1471]: time="2025-02-13T19:34:33.029938824Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5875c56fd9-b2rnd,Uid:1aaf1ef7-2705-4815-a824-0f60456d76fc,Namespace:calico-apiserver,Attempt:3,}" Feb 13 19:34:33.214561 containerd[1471]: time="2025-02-13T19:34:33.214223229Z" level=error msg="Failed to destroy network for sandbox \"d2d9fde54ceac4a62dbea2f435664c6a7f34a7446a3af59e0f58b5f00402b958\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:34:33.215464 containerd[1471]: time="2025-02-13T19:34:33.215440609Z" level=error msg="Failed to destroy network for sandbox \"0bb1d45c2c150f5d872fac8c961399f27779cb73a87b22395960ffa2be15d1a8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:34:33.216385 containerd[1471]: time="2025-02-13T19:34:33.216362328Z" level=error msg="encountered an error cleaning up failed sandbox \"0bb1d45c2c150f5d872fac8c961399f27779cb73a87b22395960ffa2be15d1a8\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:34:33.216512 containerd[1471]: time="2025-02-13T19:34:33.216493001Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-kube-controllers-6f695fb64c-rb8fz,Uid:30182d20-c572-4a40-ab8d-be90016a3c84,Namespace:calico-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"0bb1d45c2c150f5d872fac8c961399f27779cb73a87b22395960ffa2be15d1a8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:34:33.218538 kubelet[2607]: E0213 19:34:33.218115 2607 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0bb1d45c2c150f5d872fac8c961399f27779cb73a87b22395960ffa2be15d1a8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:34:33.218538 kubelet[2607]: E0213 19:34:33.218183 2607 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0bb1d45c2c150f5d872fac8c961399f27779cb73a87b22395960ffa2be15d1a8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6f695fb64c-rb8fz" Feb 13 19:34:33.218538 kubelet[2607]: E0213 19:34:33.218208 2607 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0bb1d45c2c150f5d872fac8c961399f27779cb73a87b22395960ffa2be15d1a8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6f695fb64c-rb8fz" Feb 13 19:34:33.218699 kubelet[2607]: E0213 19:34:33.218252 2607 pod_workers.go:1301] 
"Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-6f695fb64c-rb8fz_calico-system(30182d20-c572-4a40-ab8d-be90016a3c84)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-6f695fb64c-rb8fz_calico-system(30182d20-c572-4a40-ab8d-be90016a3c84)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0bb1d45c2c150f5d872fac8c961399f27779cb73a87b22395960ffa2be15d1a8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6f695fb64c-rb8fz" podUID="30182d20-c572-4a40-ab8d-be90016a3c84" Feb 13 19:34:33.220666 containerd[1471]: time="2025-02-13T19:34:33.220586691Z" level=error msg="encountered an error cleaning up failed sandbox \"d2d9fde54ceac4a62dbea2f435664c6a7f34a7446a3af59e0f58b5f00402b958\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:34:33.221115 containerd[1471]: time="2025-02-13T19:34:33.221084045Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5875c56fd9-cljgm,Uid:54b6b75e-6c3e-4c6f-9680-59e42f2a9685,Namespace:calico-apiserver,Attempt:3,} failed, error" error="failed to setup network for sandbox \"d2d9fde54ceac4a62dbea2f435664c6a7f34a7446a3af59e0f58b5f00402b958\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:34:33.224107 containerd[1471]: time="2025-02-13T19:34:33.223110094Z" level=error msg="Failed to destroy network for sandbox \"d4839a30a8fbd6f329284e6ddb9cc2ab04aa9452fd825eee6553156e626184f7\"" error="plugin type=\"calico\" 
failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:34:33.224107 containerd[1471]: time="2025-02-13T19:34:33.223554125Z" level=error msg="encountered an error cleaning up failed sandbox \"d4839a30a8fbd6f329284e6ddb9cc2ab04aa9452fd825eee6553156e626184f7\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:34:33.224107 containerd[1471]: time="2025-02-13T19:34:33.223636676Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-jsldt,Uid:f4864561-a999-4e48-83d2-08fa358e2d4a,Namespace:kube-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"d4839a30a8fbd6f329284e6ddb9cc2ab04aa9452fd825eee6553156e626184f7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:34:33.224462 kubelet[2607]: E0213 19:34:33.224435 2607 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d2d9fde54ceac4a62dbea2f435664c6a7f34a7446a3af59e0f58b5f00402b958\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:34:33.224655 kubelet[2607]: E0213 19:34:33.224542 2607 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d2d9fde54ceac4a62dbea2f435664c6a7f34a7446a3af59e0f58b5f00402b958\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running 
and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5875c56fd9-cljgm" Feb 13 19:34:33.224655 kubelet[2607]: E0213 19:34:33.224565 2607 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d2d9fde54ceac4a62dbea2f435664c6a7f34a7446a3af59e0f58b5f00402b958\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5875c56fd9-cljgm" Feb 13 19:34:33.224655 kubelet[2607]: E0213 19:34:33.224612 2607 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5875c56fd9-cljgm_calico-apiserver(54b6b75e-6c3e-4c6f-9680-59e42f2a9685)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5875c56fd9-cljgm_calico-apiserver(54b6b75e-6c3e-4c6f-9680-59e42f2a9685)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d2d9fde54ceac4a62dbea2f435664c6a7f34a7446a3af59e0f58b5f00402b958\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5875c56fd9-cljgm" podUID="54b6b75e-6c3e-4c6f-9680-59e42f2a9685" Feb 13 19:34:33.225769 kubelet[2607]: E0213 19:34:33.224401 2607 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d4839a30a8fbd6f329284e6ddb9cc2ab04aa9452fd825eee6553156e626184f7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:34:33.225769 kubelet[2607]: E0213 19:34:33.224854 2607 kuberuntime_sandbox.go:72] "Failed to create sandbox for 
pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d4839a30a8fbd6f329284e6ddb9cc2ab04aa9452fd825eee6553156e626184f7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-jsldt" Feb 13 19:34:33.225769 kubelet[2607]: E0213 19:34:33.224875 2607 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d4839a30a8fbd6f329284e6ddb9cc2ab04aa9452fd825eee6553156e626184f7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-jsldt" Feb 13 19:34:33.225877 kubelet[2607]: E0213 19:34:33.224915 2607 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-jsldt_kube-system(f4864561-a999-4e48-83d2-08fa358e2d4a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-jsldt_kube-system(f4864561-a999-4e48-83d2-08fa358e2d4a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d4839a30a8fbd6f329284e6ddb9cc2ab04aa9452fd825eee6553156e626184f7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-jsldt" podUID="f4864561-a999-4e48-83d2-08fa358e2d4a" Feb 13 19:34:33.255018 containerd[1471]: time="2025-02-13T19:34:33.254935568Z" level=error msg="Failed to destroy network for sandbox \"b0d6fd3ef2acae598dec1902e1434b0974fcd3025a8b93c199149d21301516da\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/" Feb 13 19:34:33.255173 containerd[1471]: time="2025-02-13T19:34:33.255073254Z" level=error msg="Failed to destroy network for sandbox \"7833ac8fa2136c722d2ca862f3f76532a469e1f4b1ac9dffb34c7d5c310ab08e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:34:33.255458 containerd[1471]: time="2025-02-13T19:34:33.255422973Z" level=error msg="encountered an error cleaning up failed sandbox \"b0d6fd3ef2acae598dec1902e1434b0974fcd3025a8b93c199149d21301516da\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:34:33.255506 containerd[1471]: time="2025-02-13T19:34:33.255493649Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-qqgd5,Uid:f2dd1bcc-42fa-437c-a3ad-b39a937ac4ed,Namespace:calico-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"b0d6fd3ef2acae598dec1902e1434b0974fcd3025a8b93c199149d21301516da\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:34:33.255567 containerd[1471]: time="2025-02-13T19:34:33.255539999Z" level=error msg="encountered an error cleaning up failed sandbox \"7833ac8fa2136c722d2ca862f3f76532a469e1f4b1ac9dffb34c7d5c310ab08e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:34:33.255632 containerd[1471]: time="2025-02-13T19:34:33.255612741Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-5875c56fd9-b2rnd,Uid:1aaf1ef7-2705-4815-a824-0f60456d76fc,Namespace:calico-apiserver,Attempt:3,} failed, error" error="failed to setup network for sandbox \"7833ac8fa2136c722d2ca862f3f76532a469e1f4b1ac9dffb34c7d5c310ab08e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:34:33.255820 kubelet[2607]: E0213 19:34:33.255758 2607 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b0d6fd3ef2acae598dec1902e1434b0974fcd3025a8b93c199149d21301516da\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:34:33.255883 kubelet[2607]: E0213 19:34:33.255855 2607 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b0d6fd3ef2acae598dec1902e1434b0974fcd3025a8b93c199149d21301516da\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-qqgd5" Feb 13 19:34:33.255916 kubelet[2607]: E0213 19:34:33.255879 2607 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b0d6fd3ef2acae598dec1902e1434b0974fcd3025a8b93c199149d21301516da\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-qqgd5" Feb 13 19:34:33.255999 kubelet[2607]: E0213 19:34:33.255925 2607 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"CreatePodSandbox\" for \"csi-node-driver-qqgd5_calico-system(f2dd1bcc-42fa-437c-a3ad-b39a937ac4ed)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-qqgd5_calico-system(f2dd1bcc-42fa-437c-a3ad-b39a937ac4ed)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b0d6fd3ef2acae598dec1902e1434b0974fcd3025a8b93c199149d21301516da\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-qqgd5" podUID="f2dd1bcc-42fa-437c-a3ad-b39a937ac4ed" Feb 13 19:34:33.257357 kubelet[2607]: E0213 19:34:33.257189 2607 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7833ac8fa2136c722d2ca862f3f76532a469e1f4b1ac9dffb34c7d5c310ab08e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:34:33.257357 kubelet[2607]: E0213 19:34:33.257232 2607 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7833ac8fa2136c722d2ca862f3f76532a469e1f4b1ac9dffb34c7d5c310ab08e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5875c56fd9-b2rnd" Feb 13 19:34:33.257357 kubelet[2607]: E0213 19:34:33.257251 2607 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7833ac8fa2136c722d2ca862f3f76532a469e1f4b1ac9dffb34c7d5c310ab08e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5875c56fd9-b2rnd" Feb 13 19:34:33.257594 kubelet[2607]: E0213 19:34:33.257300 2607 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5875c56fd9-b2rnd_calico-apiserver(1aaf1ef7-2705-4815-a824-0f60456d76fc)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5875c56fd9-b2rnd_calico-apiserver(1aaf1ef7-2705-4815-a824-0f60456d76fc)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7833ac8fa2136c722d2ca862f3f76532a469e1f4b1ac9dffb34c7d5c310ab08e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5875c56fd9-b2rnd" podUID="1aaf1ef7-2705-4815-a824-0f60456d76fc" Feb 13 19:34:33.264577 containerd[1471]: time="2025-02-13T19:34:33.264493965Z" level=error msg="Failed to destroy network for sandbox \"1031254106aaef81fc07c832cb6e70669aeedf00af14ec36dfccb4f897bf3a9e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:34:33.265047 containerd[1471]: time="2025-02-13T19:34:33.265008603Z" level=error msg="encountered an error cleaning up failed sandbox \"1031254106aaef81fc07c832cb6e70669aeedf00af14ec36dfccb4f897bf3a9e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:34:33.265144 containerd[1471]: time="2025-02-13T19:34:33.265090472Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-668d6bf9bc-szjm2,Uid:0d54ca70-3d73-40a5-9a0e-85776bf4fb5e,Namespace:kube-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"1031254106aaef81fc07c832cb6e70669aeedf00af14ec36dfccb4f897bf3a9e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:34:33.265540 kubelet[2607]: E0213 19:34:33.265492 2607 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1031254106aaef81fc07c832cb6e70669aeedf00af14ec36dfccb4f897bf3a9e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:34:33.265631 kubelet[2607]: E0213 19:34:33.265569 2607 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1031254106aaef81fc07c832cb6e70669aeedf00af14ec36dfccb4f897bf3a9e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-szjm2" Feb 13 19:34:33.265631 kubelet[2607]: E0213 19:34:33.265595 2607 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1031254106aaef81fc07c832cb6e70669aeedf00af14ec36dfccb4f897bf3a9e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-szjm2" Feb 13 19:34:33.265676 kubelet[2607]: E0213 19:34:33.265638 2607 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"CreatePodSandbox\" for \"coredns-668d6bf9bc-szjm2_kube-system(0d54ca70-3d73-40a5-9a0e-85776bf4fb5e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-szjm2_kube-system(0d54ca70-3d73-40a5-9a0e-85776bf4fb5e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1031254106aaef81fc07c832cb6e70669aeedf00af14ec36dfccb4f897bf3a9e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-szjm2" podUID="0d54ca70-3d73-40a5-9a0e-85776bf4fb5e" Feb 13 19:34:33.593104 systemd[1]: run-netns-cni\x2d8cab5073\x2d0be4\x2dea83\x2dd4cc\x2d42a9de241575.mount: Deactivated successfully. Feb 13 19:34:33.593246 systemd[1]: run-netns-cni\x2dfa4d1bde\x2d4cfb\x2d5094\x2d2ab6\x2d3b68f67524eb.mount: Deactivated successfully. Feb 13 19:34:33.998049 kubelet[2607]: I0213 19:34:33.997995 2607 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0bb1d45c2c150f5d872fac8c961399f27779cb73a87b22395960ffa2be15d1a8" Feb 13 19:34:34.003130 kubelet[2607]: I0213 19:34:34.003086 2607 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7833ac8fa2136c722d2ca862f3f76532a469e1f4b1ac9dffb34c7d5c310ab08e" Feb 13 19:34:34.004114 containerd[1471]: time="2025-02-13T19:34:34.004020335Z" level=info msg="StopPodSandbox for \"7833ac8fa2136c722d2ca862f3f76532a469e1f4b1ac9dffb34c7d5c310ab08e\"" Feb 13 19:34:34.005248 containerd[1471]: time="2025-02-13T19:34:34.004313302Z" level=info msg="Ensure that sandbox 7833ac8fa2136c722d2ca862f3f76532a469e1f4b1ac9dffb34c7d5c310ab08e in task-service has been cleanup successfully" Feb 13 19:34:34.005248 containerd[1471]: time="2025-02-13T19:34:34.004881905Z" level=info msg="TearDown network for sandbox \"7833ac8fa2136c722d2ca862f3f76532a469e1f4b1ac9dffb34c7d5c310ab08e\" successfully" Feb 13 
19:34:34.005248 containerd[1471]: time="2025-02-13T19:34:34.004900080Z" level=info msg="StopPodSandbox for \"7833ac8fa2136c722d2ca862f3f76532a469e1f4b1ac9dffb34c7d5c310ab08e\" returns successfully" Feb 13 19:34:34.006167 containerd[1471]: time="2025-02-13T19:34:34.005464263Z" level=info msg="StopPodSandbox for \"54caebe45dc792160151fcf140d8c7051ca636a79446a01f5896bd625433a35c\"" Feb 13 19:34:34.006167 containerd[1471]: time="2025-02-13T19:34:34.005562163Z" level=info msg="TearDown network for sandbox \"54caebe45dc792160151fcf140d8c7051ca636a79446a01f5896bd625433a35c\" successfully" Feb 13 19:34:34.006167 containerd[1471]: time="2025-02-13T19:34:34.005576380Z" level=info msg="StopPodSandbox for \"54caebe45dc792160151fcf140d8c7051ca636a79446a01f5896bd625433a35c\" returns successfully" Feb 13 19:34:34.008439 containerd[1471]: time="2025-02-13T19:34:34.008349664Z" level=info msg="StopPodSandbox for \"64fd378561b5462bd9a9a451a83f2f63bc302a1d356185e49501f35817f9f4b7\"" Feb 13 19:34:34.008790 containerd[1471]: time="2025-02-13T19:34:34.008541065Z" level=info msg="TearDown network for sandbox \"64fd378561b5462bd9a9a451a83f2f63bc302a1d356185e49501f35817f9f4b7\" successfully" Feb 13 19:34:34.008790 containerd[1471]: time="2025-02-13T19:34:34.008563488Z" level=info msg="StopPodSandbox for \"64fd378561b5462bd9a9a451a83f2f63bc302a1d356185e49501f35817f9f4b7\" returns successfully" Feb 13 19:34:34.011553 systemd[1]: run-netns-cni\x2d958e526a\x2dcfff\x2d7319\x2da460\x2dd39336ef6e70.mount: Deactivated successfully. 
Feb 13 19:34:34.012663 containerd[1471]: time="2025-02-13T19:34:34.012428047Z" level=info msg="StopPodSandbox for \"b0d6fd3ef2acae598dec1902e1434b0974fcd3025a8b93c199149d21301516da\"" Feb 13 19:34:34.012663 containerd[1471]: time="2025-02-13T19:34:34.012535937Z" level=info msg="StopPodSandbox for \"4247e6051f4e3e2abe906fef95e20e5b3e5882a4b8e203441fadef632bbf9deb\"" Feb 13 19:34:34.012663 containerd[1471]: time="2025-02-13T19:34:34.012625580Z" level=info msg="TearDown network for sandbox \"4247e6051f4e3e2abe906fef95e20e5b3e5882a4b8e203441fadef632bbf9deb\" successfully" Feb 13 19:34:34.012663 containerd[1471]: time="2025-02-13T19:34:34.012641581Z" level=info msg="StopPodSandbox for \"4247e6051f4e3e2abe906fef95e20e5b3e5882a4b8e203441fadef632bbf9deb\" returns successfully" Feb 13 19:34:34.012776 kubelet[2607]: I0213 19:34:34.011836 2607 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b0d6fd3ef2acae598dec1902e1434b0974fcd3025a8b93c199149d21301516da" Feb 13 19:34:34.014324 containerd[1471]: time="2025-02-13T19:34:34.014068116Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5875c56fd9-b2rnd,Uid:1aaf1ef7-2705-4815-a824-0f60456d76fc,Namespace:calico-apiserver,Attempt:4,}" Feb 13 19:34:34.019449 kubelet[2607]: I0213 19:34:34.019411 2607 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d4839a30a8fbd6f329284e6ddb9cc2ab04aa9452fd825eee6553156e626184f7" Feb 13 19:34:34.020024 containerd[1471]: time="2025-02-13T19:34:34.019992085Z" level=info msg="StopPodSandbox for \"d4839a30a8fbd6f329284e6ddb9cc2ab04aa9452fd825eee6553156e626184f7\"" Feb 13 19:34:34.020258 containerd[1471]: time="2025-02-13T19:34:34.020229345Z" level=info msg="Ensure that sandbox d4839a30a8fbd6f329284e6ddb9cc2ab04aa9452fd825eee6553156e626184f7 in task-service has been cleanup successfully" Feb 13 19:34:34.020683 containerd[1471]: time="2025-02-13T19:34:34.020460603Z" level=info msg="TearDown network for sandbox 
\"d4839a30a8fbd6f329284e6ddb9cc2ab04aa9452fd825eee6553156e626184f7\" successfully" Feb 13 19:34:34.020683 containerd[1471]: time="2025-02-13T19:34:34.020481484Z" level=info msg="StopPodSandbox for \"d4839a30a8fbd6f329284e6ddb9cc2ab04aa9452fd825eee6553156e626184f7\" returns successfully" Feb 13 19:34:34.021249 containerd[1471]: time="2025-02-13T19:34:34.021207169Z" level=info msg="StopPodSandbox for \"345c2d0a90e62e6b6da2cd0d7a3cf3b4c8615b6b47c2f70fb43fdef45c00c98f\"" Feb 13 19:34:34.021327 containerd[1471]: time="2025-02-13T19:34:34.021306612Z" level=info msg="TearDown network for sandbox \"345c2d0a90e62e6b6da2cd0d7a3cf3b4c8615b6b47c2f70fb43fdef45c00c98f\" successfully" Feb 13 19:34:34.021327 containerd[1471]: time="2025-02-13T19:34:34.021324898Z" level=info msg="StopPodSandbox for \"345c2d0a90e62e6b6da2cd0d7a3cf3b4c8615b6b47c2f70fb43fdef45c00c98f\" returns successfully" Feb 13 19:34:34.021749 containerd[1471]: time="2025-02-13T19:34:34.021713341Z" level=info msg="StopPodSandbox for \"57d82f82dcc70e58e1433a34e35ff480974978c9bcfafbd8a88124a3d4bb74c2\"" Feb 13 19:34:34.021949 containerd[1471]: time="2025-02-13T19:34:34.021812573Z" level=info msg="TearDown network for sandbox \"57d82f82dcc70e58e1433a34e35ff480974978c9bcfafbd8a88124a3d4bb74c2\" successfully" Feb 13 19:34:34.021949 containerd[1471]: time="2025-02-13T19:34:34.021830437Z" level=info msg="StopPodSandbox for \"57d82f82dcc70e58e1433a34e35ff480974978c9bcfafbd8a88124a3d4bb74c2\" returns successfully" Feb 13 19:34:34.023019 containerd[1471]: time="2025-02-13T19:34:34.022201777Z" level=info msg="StopPodSandbox for \"f7978f4d3caf80c32d6384364e56674980211f2c5f335c86d6058bfdec2fee2e\"" Feb 13 19:34:34.023019 containerd[1471]: time="2025-02-13T19:34:34.022293515Z" level=info msg="TearDown network for sandbox \"f7978f4d3caf80c32d6384364e56674980211f2c5f335c86d6058bfdec2fee2e\" successfully" Feb 13 19:34:34.023019 containerd[1471]: time="2025-02-13T19:34:34.022305668Z" level=info msg="StopPodSandbox for 
\"f7978f4d3caf80c32d6384364e56674980211f2c5f335c86d6058bfdec2fee2e\" returns successfully" Feb 13 19:34:34.023136 kubelet[2607]: E0213 19:34:34.022543 2607 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:34:34.023163 containerd[1471]: time="2025-02-13T19:34:34.023133523Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-jsldt,Uid:f4864561-a999-4e48-83d2-08fa358e2d4a,Namespace:kube-system,Attempt:4,}" Feb 13 19:34:34.023587 kubelet[2607]: I0213 19:34:34.023564 2607 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d2d9fde54ceac4a62dbea2f435664c6a7f34a7446a3af59e0f58b5f00402b958" Feb 13 19:34:34.023661 systemd[1]: run-netns-cni\x2de1f351a5\x2d7b61\x2d869f\x2dbb56\x2d5cdeda509c97.mount: Deactivated successfully. Feb 13 19:34:34.029685 containerd[1471]: time="2025-02-13T19:34:34.024088473Z" level=info msg="StopPodSandbox for \"d2d9fde54ceac4a62dbea2f435664c6a7f34a7446a3af59e0f58b5f00402b958\"" Feb 13 19:34:34.029685 containerd[1471]: time="2025-02-13T19:34:34.024263412Z" level=info msg="Ensure that sandbox d2d9fde54ceac4a62dbea2f435664c6a7f34a7446a3af59e0f58b5f00402b958 in task-service has been cleanup successfully" Feb 13 19:34:34.029685 containerd[1471]: time="2025-02-13T19:34:34.024498998Z" level=info msg="TearDown network for sandbox \"d2d9fde54ceac4a62dbea2f435664c6a7f34a7446a3af59e0f58b5f00402b958\" successfully" Feb 13 19:34:34.029685 containerd[1471]: time="2025-02-13T19:34:34.024515500Z" level=info msg="StopPodSandbox for \"d2d9fde54ceac4a62dbea2f435664c6a7f34a7446a3af59e0f58b5f00402b958\" returns successfully" Feb 13 19:34:34.029685 containerd[1471]: time="2025-02-13T19:34:34.025155230Z" level=info msg="StopPodSandbox for \"06ef3d0e0ebe408a479b06c02f0339a62b7b2fcde9d04ece8f023dcc846230ff\"" Feb 13 19:34:34.029685 containerd[1471]: 
time="2025-02-13T19:34:34.025274371Z" level=info msg="TearDown network for sandbox \"06ef3d0e0ebe408a479b06c02f0339a62b7b2fcde9d04ece8f023dcc846230ff\" successfully" Feb 13 19:34:34.029685 containerd[1471]: time="2025-02-13T19:34:34.025287065Z" level=info msg="StopPodSandbox for \"06ef3d0e0ebe408a479b06c02f0339a62b7b2fcde9d04ece8f023dcc846230ff\" returns successfully" Feb 13 19:34:34.029685 containerd[1471]: time="2025-02-13T19:34:34.027448845Z" level=info msg="StopPodSandbox for \"a03ca714ca808a2148d76b09fbccf7564f4b62a6558236571eb5725797eee6e7\"" Feb 13 19:34:34.029685 containerd[1471]: time="2025-02-13T19:34:34.027542135Z" level=info msg="TearDown network for sandbox \"a03ca714ca808a2148d76b09fbccf7564f4b62a6558236571eb5725797eee6e7\" successfully" Feb 13 19:34:34.029685 containerd[1471]: time="2025-02-13T19:34:34.027553848Z" level=info msg="StopPodSandbox for \"a03ca714ca808a2148d76b09fbccf7564f4b62a6558236571eb5725797eee6e7\" returns successfully" Feb 13 19:34:34.026684 systemd[1]: run-netns-cni\x2d8bbbd7e1\x2d5044\x2d36c9\x2ddf09\x2db80987c0ed32.mount: Deactivated successfully. 
Feb 13 19:34:34.030074 containerd[1471]: time="2025-02-13T19:34:34.029833025Z" level=info msg="StopPodSandbox for \"edf3205debe8fad6533b2fdb2ea0be6081befe8af003e319ea5f631de16cbd2f\"" Feb 13 19:34:34.030074 containerd[1471]: time="2025-02-13T19:34:34.029925765Z" level=info msg="TearDown network for sandbox \"edf3205debe8fad6533b2fdb2ea0be6081befe8af003e319ea5f631de16cbd2f\" successfully" Feb 13 19:34:34.030074 containerd[1471]: time="2025-02-13T19:34:34.029936846Z" level=info msg="StopPodSandbox for \"edf3205debe8fad6533b2fdb2ea0be6081befe8af003e319ea5f631de16cbd2f\" returns successfully" Feb 13 19:34:34.030560 containerd[1471]: time="2025-02-13T19:34:34.030521218Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5875c56fd9-cljgm,Uid:54b6b75e-6c3e-4c6f-9680-59e42f2a9685,Namespace:calico-apiserver,Attempt:4,}" Feb 13 19:34:34.034509 kubelet[2607]: I0213 19:34:34.034476 2607 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1031254106aaef81fc07c832cb6e70669aeedf00af14ec36dfccb4f897bf3a9e" Feb 13 19:34:34.035020 containerd[1471]: time="2025-02-13T19:34:34.034990669Z" level=info msg="StopPodSandbox for \"1031254106aaef81fc07c832cb6e70669aeedf00af14ec36dfccb4f897bf3a9e\"" Feb 13 19:34:34.035233 containerd[1471]: time="2025-02-13T19:34:34.035204814Z" level=info msg="Ensure that sandbox 1031254106aaef81fc07c832cb6e70669aeedf00af14ec36dfccb4f897bf3a9e in task-service has been cleanup successfully" Feb 13 19:34:34.037373 containerd[1471]: time="2025-02-13T19:34:34.035809196Z" level=info msg="TearDown network for sandbox \"1031254106aaef81fc07c832cb6e70669aeedf00af14ec36dfccb4f897bf3a9e\" successfully" Feb 13 19:34:34.037373 containerd[1471]: time="2025-02-13T19:34:34.035832891Z" level=info msg="StopPodSandbox for \"1031254106aaef81fc07c832cb6e70669aeedf00af14ec36dfccb4f897bf3a9e\" returns successfully" Feb 13 19:34:34.037373 containerd[1471]: time="2025-02-13T19:34:34.036588986Z" level=info msg="StopPodSandbox 
for \"129ca2d51794333b0651f81fbb83f78853ee604014c43d4d7646cf5817de245d\"" Feb 13 19:34:34.037373 containerd[1471]: time="2025-02-13T19:34:34.036678319Z" level=info msg="TearDown network for sandbox \"129ca2d51794333b0651f81fbb83f78853ee604014c43d4d7646cf5817de245d\" successfully" Feb 13 19:34:34.037373 containerd[1471]: time="2025-02-13T19:34:34.036691605Z" level=info msg="StopPodSandbox for \"129ca2d51794333b0651f81fbb83f78853ee604014c43d4d7646cf5817de245d\" returns successfully" Feb 13 19:34:34.038076 containerd[1471]: time="2025-02-13T19:34:34.038029308Z" level=info msg="StopPodSandbox for \"c207751ef81c77cf11834cc9d214df5236d504a114b5d8c9091976394140218b\"" Feb 13 19:34:34.038237 containerd[1471]: time="2025-02-13T19:34:34.038125615Z" level=info msg="TearDown network for sandbox \"c207751ef81c77cf11834cc9d214df5236d504a114b5d8c9091976394140218b\" successfully" Feb 13 19:34:34.038237 containerd[1471]: time="2025-02-13T19:34:34.038138950Z" level=info msg="StopPodSandbox for \"c207751ef81c77cf11834cc9d214df5236d504a114b5d8c9091976394140218b\" returns successfully" Feb 13 19:34:34.038873 containerd[1471]: time="2025-02-13T19:34:34.038539255Z" level=info msg="StopPodSandbox for \"3bf8c6f6fa3408c17c0a5cb18984a1b18b1c350fbd4f4341429f107be6ca1f1b\"" Feb 13 19:34:34.038873 containerd[1471]: time="2025-02-13T19:34:34.038629460Z" level=info msg="TearDown network for sandbox \"3bf8c6f6fa3408c17c0a5cb18984a1b18b1c350fbd4f4341429f107be6ca1f1b\" successfully" Feb 13 19:34:34.038873 containerd[1471]: time="2025-02-13T19:34:34.038640803Z" level=info msg="StopPodSandbox for \"3bf8c6f6fa3408c17c0a5cb18984a1b18b1c350fbd4f4341429f107be6ca1f1b\" returns successfully" Feb 13 19:34:34.038558 systemd[1]: run-netns-cni\x2db439cb6d\x2d1805\x2da5a6\x2d4899\x2d4cce31f85fd5.mount: Deactivated successfully. 
Feb 13 19:34:34.039118 kubelet[2607]: E0213 19:34:34.038908 2607 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:34:34.039523 containerd[1471]: time="2025-02-13T19:34:34.039208262Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-szjm2,Uid:0d54ca70-3d73-40a5-9a0e-85776bf4fb5e,Namespace:kube-system,Attempt:4,}" Feb 13 19:34:34.112470 containerd[1471]: time="2025-02-13T19:34:34.112225014Z" level=info msg="Ensure that sandbox b0d6fd3ef2acae598dec1902e1434b0974fcd3025a8b93c199149d21301516da in task-service has been cleanup successfully" Feb 13 19:34:34.112617 containerd[1471]: time="2025-02-13T19:34:34.112601322Z" level=info msg="TearDown network for sandbox \"b0d6fd3ef2acae598dec1902e1434b0974fcd3025a8b93c199149d21301516da\" successfully" Feb 13 19:34:34.112677 containerd[1471]: time="2025-02-13T19:34:34.112623296Z" level=info msg="StopPodSandbox for \"b0d6fd3ef2acae598dec1902e1434b0974fcd3025a8b93c199149d21301516da\" returns successfully" Feb 13 19:34:34.113133 containerd[1471]: time="2025-02-13T19:34:34.113102384Z" level=info msg="StopPodSandbox for \"f0010b009040dc3632feee176ef184fca2c082b092fb675b7d8242d302a9ab90\"" Feb 13 19:34:34.113324 containerd[1471]: time="2025-02-13T19:34:34.113187128Z" level=info msg="TearDown network for sandbox \"f0010b009040dc3632feee176ef184fca2c082b092fb675b7d8242d302a9ab90\" successfully" Feb 13 19:34:34.113324 containerd[1471]: time="2025-02-13T19:34:34.113196126Z" level=info msg="StopPodSandbox for \"f0010b009040dc3632feee176ef184fca2c082b092fb675b7d8242d302a9ab90\" returns successfully" Feb 13 19:34:34.114437 containerd[1471]: time="2025-02-13T19:34:34.114385069Z" level=info msg="StopPodSandbox for \"aa566766537fcbea79ee7dfd711b3aa1da552d84a540cd31d2773ed5d6234a37\"" Feb 13 19:34:34.114478 containerd[1471]: time="2025-02-13T19:34:34.114462660Z" level=info msg="TearDown 
network for sandbox \"aa566766537fcbea79ee7dfd711b3aa1da552d84a540cd31d2773ed5d6234a37\" successfully" Feb 13 19:34:34.114478 containerd[1471]: time="2025-02-13T19:34:34.114472017Z" level=info msg="StopPodSandbox for \"aa566766537fcbea79ee7dfd711b3aa1da552d84a540cd31d2773ed5d6234a37\" returns successfully" Feb 13 19:34:34.115899 containerd[1471]: time="2025-02-13T19:34:34.115762769Z" level=info msg="StopPodSandbox for \"0bb1d45c2c150f5d872fac8c961399f27779cb73a87b22395960ffa2be15d1a8\"" Feb 13 19:34:34.116120 containerd[1471]: time="2025-02-13T19:34:34.116091166Z" level=info msg="Ensure that sandbox 0bb1d45c2c150f5d872fac8c961399f27779cb73a87b22395960ffa2be15d1a8 in task-service has been cleanup successfully" Feb 13 19:34:34.116800 containerd[1471]: time="2025-02-13T19:34:34.116778467Z" level=info msg="TearDown network for sandbox \"0bb1d45c2c150f5d872fac8c961399f27779cb73a87b22395960ffa2be15d1a8\" successfully" Feb 13 19:34:34.116800 containerd[1471]: time="2025-02-13T19:34:34.116797264Z" level=info msg="StopPodSandbox for \"0bb1d45c2c150f5d872fac8c961399f27779cb73a87b22395960ffa2be15d1a8\" returns successfully" Feb 13 19:34:34.117248 containerd[1471]: time="2025-02-13T19:34:34.117217868Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-qqgd5,Uid:f2dd1bcc-42fa-437c-a3ad-b39a937ac4ed,Namespace:calico-system,Attempt:3,}" Feb 13 19:34:34.117713 containerd[1471]: time="2025-02-13T19:34:34.117664264Z" level=info msg="StopPodSandbox for \"a562c6ed21eb8ef4ab42744d889219e92d5054c0611e8d3149cecda822e32115\"" Feb 13 19:34:34.117896 containerd[1471]: time="2025-02-13T19:34:34.117757524Z" level=info msg="TearDown network for sandbox \"a562c6ed21eb8ef4ab42744d889219e92d5054c0611e8d3149cecda822e32115\" successfully" Feb 13 19:34:34.117896 containerd[1471]: time="2025-02-13T19:34:34.117773495Z" level=info msg="StopPodSandbox for \"a562c6ed21eb8ef4ab42744d889219e92d5054c0611e8d3149cecda822e32115\" returns successfully" Feb 13 19:34:34.118280 
containerd[1471]: time="2025-02-13T19:34:34.118256261Z" level=info msg="StopPodSandbox for \"61b0a4a05bfa4906e5a299438b6dbb931cae8d64bcd9531789f49e1f12f0c1b7\"" Feb 13 19:34:34.118373 containerd[1471]: time="2025-02-13T19:34:34.118343560Z" level=info msg="TearDown network for sandbox \"61b0a4a05bfa4906e5a299438b6dbb931cae8d64bcd9531789f49e1f12f0c1b7\" successfully" Feb 13 19:34:34.118422 containerd[1471]: time="2025-02-13T19:34:34.118362617Z" level=info msg="StopPodSandbox for \"61b0a4a05bfa4906e5a299438b6dbb931cae8d64bcd9531789f49e1f12f0c1b7\" returns successfully" Feb 13 19:34:34.118755 containerd[1471]: time="2025-02-13T19:34:34.118720370Z" level=info msg="StopPodSandbox for \"74317ff9bf14938d11d9e036deef95eddf63bb81bba8dd99f45333872239830c\"" Feb 13 19:34:34.118908 containerd[1471]: time="2025-02-13T19:34:34.118796037Z" level=info msg="TearDown network for sandbox \"74317ff9bf14938d11d9e036deef95eddf63bb81bba8dd99f45333872239830c\" successfully" Feb 13 19:34:34.118908 containerd[1471]: time="2025-02-13T19:34:34.118804984Z" level=info msg="StopPodSandbox for \"74317ff9bf14938d11d9e036deef95eddf63bb81bba8dd99f45333872239830c\" returns successfully" Feb 13 19:34:34.120095 containerd[1471]: time="2025-02-13T19:34:34.119934873Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6f695fb64c-rb8fz,Uid:30182d20-c572-4a40-ab8d-be90016a3c84,Namespace:calico-system,Attempt:4,}" Feb 13 19:34:34.283801 containerd[1471]: time="2025-02-13T19:34:34.283675059Z" level=error msg="Failed to destroy network for sandbox \"c92443ca87370878a9ab845df4e7daaa7766a7e95a75bab8b94717ca20ae0639\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:34:34.284884 containerd[1471]: time="2025-02-13T19:34:34.284853682Z" level=error msg="encountered an error cleaning up failed sandbox 
\"c92443ca87370878a9ab845df4e7daaa7766a7e95a75bab8b94717ca20ae0639\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:34:34.284999 containerd[1471]: time="2025-02-13T19:34:34.284945921Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-jsldt,Uid:f4864561-a999-4e48-83d2-08fa358e2d4a,Namespace:kube-system,Attempt:4,} failed, error" error="failed to setup network for sandbox \"c92443ca87370878a9ab845df4e7daaa7766a7e95a75bab8b94717ca20ae0639\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:34:34.285294 kubelet[2607]: E0213 19:34:34.285253 2607 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c92443ca87370878a9ab845df4e7daaa7766a7e95a75bab8b94717ca20ae0639\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:34:34.285422 kubelet[2607]: E0213 19:34:34.285326 2607 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c92443ca87370878a9ab845df4e7daaa7766a7e95a75bab8b94717ca20ae0639\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-jsldt" Feb 13 19:34:34.285422 kubelet[2607]: E0213 19:34:34.285358 2607 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"c92443ca87370878a9ab845df4e7daaa7766a7e95a75bab8b94717ca20ae0639\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-jsldt" Feb 13 19:34:34.285511 kubelet[2607]: E0213 19:34:34.285419 2607 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-jsldt_kube-system(f4864561-a999-4e48-83d2-08fa358e2d4a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-jsldt_kube-system(f4864561-a999-4e48-83d2-08fa358e2d4a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c92443ca87370878a9ab845df4e7daaa7766a7e95a75bab8b94717ca20ae0639\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-jsldt" podUID="f4864561-a999-4e48-83d2-08fa358e2d4a" Feb 13 19:34:34.595876 systemd[1]: run-netns-cni\x2d77289eaa\x2d3d38\x2d86bb\x2d7e24\x2d4f8da27acee0.mount: Deactivated successfully. Feb 13 19:34:34.596476 systemd[1]: run-netns-cni\x2dde79da49\x2dd0cb\x2d0ffe\x2d118c\x2d63a9f3fb1bba.mount: Deactivated successfully. Feb 13 19:34:34.618113 containerd[1471]: time="2025-02-13T19:34:34.618049547Z" level=error msg="Failed to destroy network for sandbox \"a91f030bc7a23a09f6646474e2c71a65b36f8e3df470d28d464359cc5803641c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:34:34.621533 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a91f030bc7a23a09f6646474e2c71a65b36f8e3df470d28d464359cc5803641c-shm.mount: Deactivated successfully. 
Feb 13 19:34:34.622037 containerd[1471]: time="2025-02-13T19:34:34.621971497Z" level=error msg="encountered an error cleaning up failed sandbox \"a91f030bc7a23a09f6646474e2c71a65b36f8e3df470d28d464359cc5803641c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:34:34.622203 containerd[1471]: time="2025-02-13T19:34:34.622180923Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5875c56fd9-b2rnd,Uid:1aaf1ef7-2705-4815-a824-0f60456d76fc,Namespace:calico-apiserver,Attempt:4,} failed, error" error="failed to setup network for sandbox \"a91f030bc7a23a09f6646474e2c71a65b36f8e3df470d28d464359cc5803641c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:34:34.623683 kubelet[2607]: E0213 19:34:34.622558 2607 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a91f030bc7a23a09f6646474e2c71a65b36f8e3df470d28d464359cc5803641c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:34:34.623683 kubelet[2607]: E0213 19:34:34.622625 2607 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a91f030bc7a23a09f6646474e2c71a65b36f8e3df470d28d464359cc5803641c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5875c56fd9-b2rnd" Feb 13 19:34:34.623683 kubelet[2607]: E0213 
19:34:34.622646 2607 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a91f030bc7a23a09f6646474e2c71a65b36f8e3df470d28d464359cc5803641c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5875c56fd9-b2rnd" Feb 13 19:34:34.623796 kubelet[2607]: E0213 19:34:34.622687 2607 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5875c56fd9-b2rnd_calico-apiserver(1aaf1ef7-2705-4815-a824-0f60456d76fc)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5875c56fd9-b2rnd_calico-apiserver(1aaf1ef7-2705-4815-a824-0f60456d76fc)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a91f030bc7a23a09f6646474e2c71a65b36f8e3df470d28d464359cc5803641c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5875c56fd9-b2rnd" podUID="1aaf1ef7-2705-4815-a824-0f60456d76fc" Feb 13 19:34:34.650147 containerd[1471]: time="2025-02-13T19:34:34.650098481Z" level=error msg="Failed to destroy network for sandbox \"501ac9e2d1b70d54a27304edd774f71526678c0f70b0590756ecc2311b11da0b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:34:34.650722 containerd[1471]: time="2025-02-13T19:34:34.650693063Z" level=error msg="encountered an error cleaning up failed sandbox \"501ac9e2d1b70d54a27304edd774f71526678c0f70b0590756ecc2311b11da0b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:34:34.650871 containerd[1471]: time="2025-02-13T19:34:34.650845539Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-qqgd5,Uid:f2dd1bcc-42fa-437c-a3ad-b39a937ac4ed,Namespace:calico-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"501ac9e2d1b70d54a27304edd774f71526678c0f70b0590756ecc2311b11da0b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:34:34.651644 kubelet[2607]: E0213 19:34:34.651191 2607 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"501ac9e2d1b70d54a27304edd774f71526678c0f70b0590756ecc2311b11da0b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:34:34.651644 kubelet[2607]: E0213 19:34:34.651253 2607 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"501ac9e2d1b70d54a27304edd774f71526678c0f70b0590756ecc2311b11da0b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-qqgd5" Feb 13 19:34:34.651644 kubelet[2607]: E0213 19:34:34.651275 2607 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"501ac9e2d1b70d54a27304edd774f71526678c0f70b0590756ecc2311b11da0b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-qqgd5" Feb 13 19:34:34.651805 kubelet[2607]: E0213 19:34:34.651320 2607 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-qqgd5_calico-system(f2dd1bcc-42fa-437c-a3ad-b39a937ac4ed)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-qqgd5_calico-system(f2dd1bcc-42fa-437c-a3ad-b39a937ac4ed)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"501ac9e2d1b70d54a27304edd774f71526678c0f70b0590756ecc2311b11da0b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-qqgd5" podUID="f2dd1bcc-42fa-437c-a3ad-b39a937ac4ed" Feb 13 19:34:34.653564 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-501ac9e2d1b70d54a27304edd774f71526678c0f70b0590756ecc2311b11da0b-shm.mount: Deactivated successfully. 
Feb 13 19:34:34.663974 containerd[1471]: time="2025-02-13T19:34:34.663916929Z" level=error msg="Failed to destroy network for sandbox \"ead65a76249d719a53d5c094cc281aefba7a78a7130280cc86306f191dd30278\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:34:34.664576 containerd[1471]: time="2025-02-13T19:34:34.664467466Z" level=error msg="encountered an error cleaning up failed sandbox \"ead65a76249d719a53d5c094cc281aefba7a78a7130280cc86306f191dd30278\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:34:34.664576 containerd[1471]: time="2025-02-13T19:34:34.664530057Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6f695fb64c-rb8fz,Uid:30182d20-c572-4a40-ab8d-be90016a3c84,Namespace:calico-system,Attempt:4,} failed, error" error="failed to setup network for sandbox \"ead65a76249d719a53d5c094cc281aefba7a78a7130280cc86306f191dd30278\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:34:34.668681 kubelet[2607]: E0213 19:34:34.664892 2607 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ead65a76249d719a53d5c094cc281aefba7a78a7130280cc86306f191dd30278\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:34:34.668681 kubelet[2607]: E0213 19:34:34.664967 2607 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc 
= failed to setup network for sandbox \"ead65a76249d719a53d5c094cc281aefba7a78a7130280cc86306f191dd30278\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6f695fb64c-rb8fz" Feb 13 19:34:34.668681 kubelet[2607]: E0213 19:34:34.664993 2607 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ead65a76249d719a53d5c094cc281aefba7a78a7130280cc86306f191dd30278\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6f695fb64c-rb8fz" Feb 13 19:34:34.668188 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-ead65a76249d719a53d5c094cc281aefba7a78a7130280cc86306f191dd30278-shm.mount: Deactivated successfully. 
Feb 13 19:34:34.668996 kubelet[2607]: E0213 19:34:34.665046 2607 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-6f695fb64c-rb8fz_calico-system(30182d20-c572-4a40-ab8d-be90016a3c84)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-6f695fb64c-rb8fz_calico-system(30182d20-c572-4a40-ab8d-be90016a3c84)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ead65a76249d719a53d5c094cc281aefba7a78a7130280cc86306f191dd30278\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6f695fb64c-rb8fz" podUID="30182d20-c572-4a40-ab8d-be90016a3c84" Feb 13 19:34:34.677235 containerd[1471]: time="2025-02-13T19:34:34.677068383Z" level=error msg="Failed to destroy network for sandbox \"31ac702d7bb22517e09749816249e112fb40e9f991e46e2561c8bf0e3eca0037\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:34:34.678993 containerd[1471]: time="2025-02-13T19:34:34.678758098Z" level=error msg="Failed to destroy network for sandbox \"97dd6737930f1325b26522932f2dc46e8827490262c12ec40f5c15955f54a59e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:34:34.679058 containerd[1471]: time="2025-02-13T19:34:34.678940461Z" level=error msg="encountered an error cleaning up failed sandbox \"31ac702d7bb22517e09749816249e112fb40e9f991e46e2561c8bf0e3eca0037\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check 
that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:34:34.679178 containerd[1471]: time="2025-02-13T19:34:34.679140850Z" level=error msg="encountered an error cleaning up failed sandbox \"97dd6737930f1325b26522932f2dc46e8827490262c12ec40f5c15955f54a59e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:34:34.679230 containerd[1471]: time="2025-02-13T19:34:34.679205745Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-szjm2,Uid:0d54ca70-3d73-40a5-9a0e-85776bf4fb5e,Namespace:kube-system,Attempt:4,} failed, error" error="failed to setup network for sandbox \"97dd6737930f1325b26522932f2dc46e8827490262c12ec40f5c15955f54a59e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:34:34.679324 containerd[1471]: time="2025-02-13T19:34:34.679284147Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5875c56fd9-cljgm,Uid:54b6b75e-6c3e-4c6f-9680-59e42f2a9685,Namespace:calico-apiserver,Attempt:4,} failed, error" error="failed to setup network for sandbox \"31ac702d7bb22517e09749816249e112fb40e9f991e46e2561c8bf0e3eca0037\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:34:34.679539 kubelet[2607]: E0213 19:34:34.679481 2607 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"97dd6737930f1325b26522932f2dc46e8827490262c12ec40f5c15955f54a59e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:34:34.679595 kubelet[2607]: E0213 19:34:34.679564 2607 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"97dd6737930f1325b26522932f2dc46e8827490262c12ec40f5c15955f54a59e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-szjm2" Feb 13 19:34:34.679653 kubelet[2607]: E0213 19:34:34.679589 2607 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"97dd6737930f1325b26522932f2dc46e8827490262c12ec40f5c15955f54a59e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-szjm2" Feb 13 19:34:34.679722 kubelet[2607]: E0213 19:34:34.679644 2607 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-szjm2_kube-system(0d54ca70-3d73-40a5-9a0e-85776bf4fb5e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-szjm2_kube-system(0d54ca70-3d73-40a5-9a0e-85776bf4fb5e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"97dd6737930f1325b26522932f2dc46e8827490262c12ec40f5c15955f54a59e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-szjm2" podUID="0d54ca70-3d73-40a5-9a0e-85776bf4fb5e" Feb 13 19:34:34.680141 kubelet[2607]: E0213 19:34:34.680072 2607 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown 
desc = failed to setup network for sandbox \"31ac702d7bb22517e09749816249e112fb40e9f991e46e2561c8bf0e3eca0037\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:34:34.680242 kubelet[2607]: E0213 19:34:34.680220 2607 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"31ac702d7bb22517e09749816249e112fb40e9f991e46e2561c8bf0e3eca0037\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5875c56fd9-cljgm" Feb 13 19:34:34.680330 kubelet[2607]: E0213 19:34:34.680311 2607 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"31ac702d7bb22517e09749816249e112fb40e9f991e46e2561c8bf0e3eca0037\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5875c56fd9-cljgm" Feb 13 19:34:34.680451 kubelet[2607]: E0213 19:34:34.680427 2607 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5875c56fd9-cljgm_calico-apiserver(54b6b75e-6c3e-4c6f-9680-59e42f2a9685)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5875c56fd9-cljgm_calico-apiserver(54b6b75e-6c3e-4c6f-9680-59e42f2a9685)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"31ac702d7bb22517e09749816249e112fb40e9f991e46e2561c8bf0e3eca0037\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is 
running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5875c56fd9-cljgm" podUID="54b6b75e-6c3e-4c6f-9680-59e42f2a9685" Feb 13 19:34:34.681282 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-31ac702d7bb22517e09749816249e112fb40e9f991e46e2561c8bf0e3eca0037-shm.mount: Deactivated successfully. Feb 13 19:34:35.040556 kubelet[2607]: I0213 19:34:35.040499 2607 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="97dd6737930f1325b26522932f2dc46e8827490262c12ec40f5c15955f54a59e" Feb 13 19:34:35.043164 containerd[1471]: time="2025-02-13T19:34:35.041271871Z" level=info msg="StopPodSandbox for \"97dd6737930f1325b26522932f2dc46e8827490262c12ec40f5c15955f54a59e\"" Feb 13 19:34:35.043164 containerd[1471]: time="2025-02-13T19:34:35.042821442Z" level=info msg="Ensure that sandbox 97dd6737930f1325b26522932f2dc46e8827490262c12ec40f5c15955f54a59e in task-service has been cleanup successfully" Feb 13 19:34:35.044108 containerd[1471]: time="2025-02-13T19:34:35.044024120Z" level=info msg="TearDown network for sandbox \"97dd6737930f1325b26522932f2dc46e8827490262c12ec40f5c15955f54a59e\" successfully" Feb 13 19:34:35.044108 containerd[1471]: time="2025-02-13T19:34:35.044052876Z" level=info msg="StopPodSandbox for \"97dd6737930f1325b26522932f2dc46e8827490262c12ec40f5c15955f54a59e\" returns successfully" Feb 13 19:34:35.044787 kubelet[2607]: I0213 19:34:35.044298 2607 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c92443ca87370878a9ab845df4e7daaa7766a7e95a75bab8b94717ca20ae0639" Feb 13 19:34:35.044951 containerd[1471]: time="2025-02-13T19:34:35.044925125Z" level=info msg="StopPodSandbox for \"1031254106aaef81fc07c832cb6e70669aeedf00af14ec36dfccb4f897bf3a9e\"" Feb 13 19:34:35.045170 containerd[1471]: time="2025-02-13T19:34:35.045007545Z" level=info msg="StopPodSandbox for \"c92443ca87370878a9ab845df4e7daaa7766a7e95a75bab8b94717ca20ae0639\"" Feb 13 19:34:35.045170 containerd[1471]: 
time="2025-02-13T19:34:35.045109913Z" level=info msg="TearDown network for sandbox \"1031254106aaef81fc07c832cb6e70669aeedf00af14ec36dfccb4f897bf3a9e\" successfully" Feb 13 19:34:35.045170 containerd[1471]: time="2025-02-13T19:34:35.045123900Z" level=info msg="StopPodSandbox for \"1031254106aaef81fc07c832cb6e70669aeedf00af14ec36dfccb4f897bf3a9e\" returns successfully" Feb 13 19:34:35.045266 containerd[1471]: time="2025-02-13T19:34:35.045250385Z" level=info msg="Ensure that sandbox c92443ca87370878a9ab845df4e7daaa7766a7e95a75bab8b94717ca20ae0639 in task-service has been cleanup successfully" Feb 13 19:34:35.045887 containerd[1471]: time="2025-02-13T19:34:35.045849625Z" level=info msg="TearDown network for sandbox \"c92443ca87370878a9ab845df4e7daaa7766a7e95a75bab8b94717ca20ae0639\" successfully" Feb 13 19:34:35.045887 containerd[1471]: time="2025-02-13T19:34:35.045883160Z" level=info msg="StopPodSandbox for \"c92443ca87370878a9ab845df4e7daaa7766a7e95a75bab8b94717ca20ae0639\" returns successfully" Feb 13 19:34:35.046080 containerd[1471]: time="2025-02-13T19:34:35.045933597Z" level=info msg="StopPodSandbox for \"129ca2d51794333b0651f81fbb83f78853ee604014c43d4d7646cf5817de245d\"" Feb 13 19:34:35.046828 containerd[1471]: time="2025-02-13T19:34:35.046053530Z" level=info msg="TearDown network for sandbox \"129ca2d51794333b0651f81fbb83f78853ee604014c43d4d7646cf5817de245d\" successfully" Feb 13 19:34:35.046828 containerd[1471]: time="2025-02-13T19:34:35.046721953Z" level=info msg="StopPodSandbox for \"129ca2d51794333b0651f81fbb83f78853ee604014c43d4d7646cf5817de245d\" returns successfully" Feb 13 19:34:35.047139 containerd[1471]: time="2025-02-13T19:34:35.047113552Z" level=info msg="StopPodSandbox for \"d4839a30a8fbd6f329284e6ddb9cc2ab04aa9452fd825eee6553156e626184f7\"" Feb 13 19:34:35.047236 containerd[1471]: time="2025-02-13T19:34:35.047196272Z" level=info msg="TearDown network for sandbox \"d4839a30a8fbd6f329284e6ddb9cc2ab04aa9452fd825eee6553156e626184f7\" successfully" 
Feb 13 19:34:35.047236 containerd[1471]: time="2025-02-13T19:34:35.047211151Z" level=info msg="StopPodSandbox for \"d4839a30a8fbd6f329284e6ddb9cc2ab04aa9452fd825eee6553156e626184f7\" returns successfully" Feb 13 19:34:35.048798 containerd[1471]: time="2025-02-13T19:34:35.048766082Z" level=info msg="StopPodSandbox for \"c207751ef81c77cf11834cc9d214df5236d504a114b5d8c9091976394140218b\"" Feb 13 19:34:35.048860 containerd[1471]: time="2025-02-13T19:34:35.048799757Z" level=info msg="StopPodSandbox for \"345c2d0a90e62e6b6da2cd0d7a3cf3b4c8615b6b47c2f70fb43fdef45c00c98f\"" Feb 13 19:34:35.048892 containerd[1471]: time="2025-02-13T19:34:35.048867197Z" level=info msg="TearDown network for sandbox \"c207751ef81c77cf11834cc9d214df5236d504a114b5d8c9091976394140218b\" successfully" Feb 13 19:34:35.048892 containerd[1471]: time="2025-02-13T19:34:35.048878469Z" level=info msg="StopPodSandbox for \"c207751ef81c77cf11834cc9d214df5236d504a114b5d8c9091976394140218b\" returns successfully" Feb 13 19:34:35.048946 containerd[1471]: time="2025-02-13T19:34:35.048919999Z" level=info msg="TearDown network for sandbox \"345c2d0a90e62e6b6da2cd0d7a3cf3b4c8615b6b47c2f70fb43fdef45c00c98f\" successfully" Feb 13 19:34:35.048946 containerd[1471]: time="2025-02-13T19:34:35.048936773Z" level=info msg="StopPodSandbox for \"345c2d0a90e62e6b6da2cd0d7a3cf3b4c8615b6b47c2f70fb43fdef45c00c98f\" returns successfully" Feb 13 19:34:35.052941 containerd[1471]: time="2025-02-13T19:34:35.052542203Z" level=info msg="StopPodSandbox for \"57d82f82dcc70e58e1433a34e35ff480974978c9bcfafbd8a88124a3d4bb74c2\"" Feb 13 19:34:35.052941 containerd[1471]: time="2025-02-13T19:34:35.052672896Z" level=info msg="StopPodSandbox for \"3bf8c6f6fa3408c17c0a5cb18984a1b18b1c350fbd4f4341429f107be6ca1f1b\"" Feb 13 19:34:35.052941 containerd[1471]: time="2025-02-13T19:34:35.052706351Z" level=info msg="TearDown network for sandbox \"57d82f82dcc70e58e1433a34e35ff480974978c9bcfafbd8a88124a3d4bb74c2\" successfully" Feb 13 19:34:35.052941 
containerd[1471]: time="2025-02-13T19:34:35.052722642Z" level=info msg="StopPodSandbox for \"57d82f82dcc70e58e1433a34e35ff480974978c9bcfafbd8a88124a3d4bb74c2\" returns successfully" Feb 13 19:34:35.053241 containerd[1471]: time="2025-02-13T19:34:35.053086548Z" level=info msg="TearDown network for sandbox \"3bf8c6f6fa3408c17c0a5cb18984a1b18b1c350fbd4f4341429f107be6ca1f1b\" successfully" Feb 13 19:34:35.053241 containerd[1471]: time="2025-02-13T19:34:35.053113409Z" level=info msg="StopPodSandbox for \"3bf8c6f6fa3408c17c0a5cb18984a1b18b1c350fbd4f4341429f107be6ca1f1b\" returns successfully" Feb 13 19:34:35.053360 kubelet[2607]: E0213 19:34:35.053330 2607 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:34:35.056295 containerd[1471]: time="2025-02-13T19:34:35.056246956Z" level=info msg="StopPodSandbox for \"f7978f4d3caf80c32d6384364e56674980211f2c5f335c86d6058bfdec2fee2e\"" Feb 13 19:34:35.056428 containerd[1471]: time="2025-02-13T19:34:35.056384663Z" level=info msg="TearDown network for sandbox \"f7978f4d3caf80c32d6384364e56674980211f2c5f335c86d6058bfdec2fee2e\" successfully" Feb 13 19:34:35.056428 containerd[1471]: time="2025-02-13T19:34:35.056416465Z" level=info msg="StopPodSandbox for \"f7978f4d3caf80c32d6384364e56674980211f2c5f335c86d6058bfdec2fee2e\" returns successfully" Feb 13 19:34:35.056500 containerd[1471]: time="2025-02-13T19:34:35.056253449Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-szjm2,Uid:0d54ca70-3d73-40a5-9a0e-85776bf4fb5e,Namespace:kube-system,Attempt:5,}" Feb 13 19:34:35.057175 kubelet[2607]: E0213 19:34:35.057148 2607 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:34:35.057694 kubelet[2607]: I0213 19:34:35.057388 2607 pod_container_deletor.go:80] 
"Container not found in pod's containers" containerID="31ac702d7bb22517e09749816249e112fb40e9f991e46e2561c8bf0e3eca0037" Feb 13 19:34:35.057847 containerd[1471]: time="2025-02-13T19:34:35.057418575Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-jsldt,Uid:f4864561-a999-4e48-83d2-08fa358e2d4a,Namespace:kube-system,Attempt:5,}" Feb 13 19:34:35.058199 containerd[1471]: time="2025-02-13T19:34:35.058180861Z" level=info msg="StopPodSandbox for \"31ac702d7bb22517e09749816249e112fb40e9f991e46e2561c8bf0e3eca0037\"" Feb 13 19:34:35.058482 containerd[1471]: time="2025-02-13T19:34:35.058460662Z" level=info msg="Ensure that sandbox 31ac702d7bb22517e09749816249e112fb40e9f991e46e2561c8bf0e3eca0037 in task-service has been cleanup successfully" Feb 13 19:34:35.059236 containerd[1471]: time="2025-02-13T19:34:35.059218880Z" level=info msg="TearDown network for sandbox \"31ac702d7bb22517e09749816249e112fb40e9f991e46e2561c8bf0e3eca0037\" successfully" Feb 13 19:34:35.059315 containerd[1471]: time="2025-02-13T19:34:35.059300919Z" level=info msg="StopPodSandbox for \"31ac702d7bb22517e09749816249e112fb40e9f991e46e2561c8bf0e3eca0037\" returns successfully" Feb 13 19:34:35.060988 containerd[1471]: time="2025-02-13T19:34:35.060968709Z" level=info msg="StopPodSandbox for \"d2d9fde54ceac4a62dbea2f435664c6a7f34a7446a3af59e0f58b5f00402b958\"" Feb 13 19:34:35.061127 containerd[1471]: time="2025-02-13T19:34:35.061106516Z" level=info msg="TearDown network for sandbox \"d2d9fde54ceac4a62dbea2f435664c6a7f34a7446a3af59e0f58b5f00402b958\" successfully" Feb 13 19:34:35.061758 containerd[1471]: time="2025-02-13T19:34:35.061658544Z" level=info msg="StopPodSandbox for \"d2d9fde54ceac4a62dbea2f435664c6a7f34a7446a3af59e0f58b5f00402b958\" returns successfully" Feb 13 19:34:35.071464 containerd[1471]: time="2025-02-13T19:34:35.071371611Z" level=info msg="StopPodSandbox for \"06ef3d0e0ebe408a479b06c02f0339a62b7b2fcde9d04ece8f023dcc846230ff\"" Feb 13 19:34:35.071609 
containerd[1471]: time="2025-02-13T19:34:35.071578361Z" level=info msg="TearDown network for sandbox \"06ef3d0e0ebe408a479b06c02f0339a62b7b2fcde9d04ece8f023dcc846230ff\" successfully" Feb 13 19:34:35.071609 containerd[1471]: time="2025-02-13T19:34:35.071594152Z" level=info msg="StopPodSandbox for \"06ef3d0e0ebe408a479b06c02f0339a62b7b2fcde9d04ece8f023dcc846230ff\" returns successfully" Feb 13 19:34:35.072396 containerd[1471]: time="2025-02-13T19:34:35.072336689Z" level=info msg="StopPodSandbox for \"a03ca714ca808a2148d76b09fbccf7564f4b62a6558236571eb5725797eee6e7\"" Feb 13 19:34:35.072584 containerd[1471]: time="2025-02-13T19:34:35.072465439Z" level=info msg="TearDown network for sandbox \"a03ca714ca808a2148d76b09fbccf7564f4b62a6558236571eb5725797eee6e7\" successfully" Feb 13 19:34:35.072584 containerd[1471]: time="2025-02-13T19:34:35.072478184Z" level=info msg="StopPodSandbox for \"a03ca714ca808a2148d76b09fbccf7564f4b62a6558236571eb5725797eee6e7\" returns successfully" Feb 13 19:34:35.073107 containerd[1471]: time="2025-02-13T19:34:35.073060050Z" level=info msg="StopPodSandbox for \"edf3205debe8fad6533b2fdb2ea0be6081befe8af003e319ea5f631de16cbd2f\"" Feb 13 19:34:35.073165 containerd[1471]: time="2025-02-13T19:34:35.073152459Z" level=info msg="TearDown network for sandbox \"edf3205debe8fad6533b2fdb2ea0be6081befe8af003e319ea5f631de16cbd2f\" successfully" Feb 13 19:34:35.073207 containerd[1471]: time="2025-02-13T19:34:35.073165204Z" level=info msg="StopPodSandbox for \"edf3205debe8fad6533b2fdb2ea0be6081befe8af003e319ea5f631de16cbd2f\" returns successfully" Feb 13 19:34:35.073660 containerd[1471]: time="2025-02-13T19:34:35.073635163Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5875c56fd9-cljgm,Uid:54b6b75e-6c3e-4c6f-9680-59e42f2a9685,Namespace:calico-apiserver,Attempt:5,}" Feb 13 19:34:35.074828 kubelet[2607]: I0213 19:34:35.074791 2607 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="ead65a76249d719a53d5c094cc281aefba7a78a7130280cc86306f191dd30278" Feb 13 19:34:35.075758 containerd[1471]: time="2025-02-13T19:34:35.075665665Z" level=info msg="StopPodSandbox for \"ead65a76249d719a53d5c094cc281aefba7a78a7130280cc86306f191dd30278\"" Feb 13 19:34:35.075921 containerd[1471]: time="2025-02-13T19:34:35.075891512Z" level=info msg="Ensure that sandbox ead65a76249d719a53d5c094cc281aefba7a78a7130280cc86306f191dd30278 in task-service has been cleanup successfully" Feb 13 19:34:35.081467 containerd[1471]: time="2025-02-13T19:34:35.081123092Z" level=info msg="TearDown network for sandbox \"ead65a76249d719a53d5c094cc281aefba7a78a7130280cc86306f191dd30278\" successfully" Feb 13 19:34:35.081467 containerd[1471]: time="2025-02-13T19:34:35.081161606Z" level=info msg="StopPodSandbox for \"ead65a76249d719a53d5c094cc281aefba7a78a7130280cc86306f191dd30278\" returns successfully" Feb 13 19:34:35.082712 containerd[1471]: time="2025-02-13T19:34:35.082125944Z" level=info msg="StopPodSandbox for \"0bb1d45c2c150f5d872fac8c961399f27779cb73a87b22395960ffa2be15d1a8\"" Feb 13 19:34:35.082712 containerd[1471]: time="2025-02-13T19:34:35.082274581Z" level=info msg="TearDown network for sandbox \"0bb1d45c2c150f5d872fac8c961399f27779cb73a87b22395960ffa2be15d1a8\" successfully" Feb 13 19:34:35.082712 containerd[1471]: time="2025-02-13T19:34:35.082291053Z" level=info msg="StopPodSandbox for \"0bb1d45c2c150f5d872fac8c961399f27779cb73a87b22395960ffa2be15d1a8\" returns successfully" Feb 13 19:34:35.083562 containerd[1471]: time="2025-02-13T19:34:35.083530754Z" level=info msg="StopPodSandbox for \"a562c6ed21eb8ef4ab42744d889219e92d5054c0611e8d3149cecda822e32115\"" Feb 13 19:34:35.083654 containerd[1471]: time="2025-02-13T19:34:35.083630737Z" level=info msg="TearDown network for sandbox \"a562c6ed21eb8ef4ab42744d889219e92d5054c0611e8d3149cecda822e32115\" successfully" Feb 13 19:34:35.083654 containerd[1471]: time="2025-02-13T19:34:35.083650445Z" level=info msg="StopPodSandbox 
for \"a562c6ed21eb8ef4ab42744d889219e92d5054c0611e8d3149cecda822e32115\" returns successfully" Feb 13 19:34:35.084101 kubelet[2607]: I0213 19:34:35.084069 2607 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a91f030bc7a23a09f6646474e2c71a65b36f8e3df470d28d464359cc5803641c" Feb 13 19:34:35.084236 containerd[1471]: time="2025-02-13T19:34:35.084114664Z" level=info msg="StopPodSandbox for \"61b0a4a05bfa4906e5a299438b6dbb931cae8d64bcd9531789f49e1f12f0c1b7\"" Feb 13 19:34:35.084522 containerd[1471]: time="2025-02-13T19:34:35.084289463Z" level=info msg="TearDown network for sandbox \"61b0a4a05bfa4906e5a299438b6dbb931cae8d64bcd9531789f49e1f12f0c1b7\" successfully" Feb 13 19:34:35.084522 containerd[1471]: time="2025-02-13T19:34:35.084507815Z" level=info msg="StopPodSandbox for \"61b0a4a05bfa4906e5a299438b6dbb931cae8d64bcd9531789f49e1f12f0c1b7\" returns successfully" Feb 13 19:34:35.085274 containerd[1471]: time="2025-02-13T19:34:35.085226146Z" level=info msg="StopPodSandbox for \"a91f030bc7a23a09f6646474e2c71a65b36f8e3df470d28d464359cc5803641c\"" Feb 13 19:34:35.086066 containerd[1471]: time="2025-02-13T19:34:35.085495077Z" level=info msg="Ensure that sandbox a91f030bc7a23a09f6646474e2c71a65b36f8e3df470d28d464359cc5803641c in task-service has been cleanup successfully" Feb 13 19:34:35.086066 containerd[1471]: time="2025-02-13T19:34:35.085563860Z" level=info msg="StopPodSandbox for \"74317ff9bf14938d11d9e036deef95eddf63bb81bba8dd99f45333872239830c\"" Feb 13 19:34:35.086066 containerd[1471]: time="2025-02-13T19:34:35.085664295Z" level=info msg="TearDown network for sandbox \"74317ff9bf14938d11d9e036deef95eddf63bb81bba8dd99f45333872239830c\" successfully" Feb 13 19:34:35.086066 containerd[1471]: time="2025-02-13T19:34:35.085674023Z" level=info msg="StopPodSandbox for \"74317ff9bf14938d11d9e036deef95eddf63bb81bba8dd99f45333872239830c\" returns successfully" Feb 13 19:34:35.086066 containerd[1471]: time="2025-02-13T19:34:35.085742817Z" 
level=info msg="TearDown network for sandbox \"a91f030bc7a23a09f6646474e2c71a65b36f8e3df470d28d464359cc5803641c\" successfully" Feb 13 19:34:35.086066 containerd[1471]: time="2025-02-13T19:34:35.085756684Z" level=info msg="StopPodSandbox for \"a91f030bc7a23a09f6646474e2c71a65b36f8e3df470d28d464359cc5803641c\" returns successfully" Feb 13 19:34:35.086461 containerd[1471]: time="2025-02-13T19:34:35.086435959Z" level=info msg="StopPodSandbox for \"7833ac8fa2136c722d2ca862f3f76532a469e1f4b1ac9dffb34c7d5c310ab08e\"" Feb 13 19:34:35.086534 containerd[1471]: time="2025-02-13T19:34:35.086511976Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6f695fb64c-rb8fz,Uid:30182d20-c572-4a40-ab8d-be90016a3c84,Namespace:calico-system,Attempt:5,}" Feb 13 19:34:35.086534 containerd[1471]: time="2025-02-13T19:34:35.086521635Z" level=info msg="TearDown network for sandbox \"7833ac8fa2136c722d2ca862f3f76532a469e1f4b1ac9dffb34c7d5c310ab08e\" successfully" Feb 13 19:34:35.086619 containerd[1471]: time="2025-02-13T19:34:35.086540301Z" level=info msg="StopPodSandbox for \"7833ac8fa2136c722d2ca862f3f76532a469e1f4b1ac9dffb34c7d5c310ab08e\" returns successfully" Feb 13 19:34:35.087018 containerd[1471]: time="2025-02-13T19:34:35.086984871Z" level=info msg="StopPodSandbox for \"54caebe45dc792160151fcf140d8c7051ca636a79446a01f5896bd625433a35c\"" Feb 13 19:34:35.087930 containerd[1471]: time="2025-02-13T19:34:35.087899322Z" level=info msg="TearDown network for sandbox \"54caebe45dc792160151fcf140d8c7051ca636a79446a01f5896bd625433a35c\" successfully" Feb 13 19:34:35.087930 containerd[1471]: time="2025-02-13T19:34:35.087920994Z" level=info msg="StopPodSandbox for \"54caebe45dc792160151fcf140d8c7051ca636a79446a01f5896bd625433a35c\" returns successfully" Feb 13 19:34:35.088451 containerd[1471]: time="2025-02-13T19:34:35.088421644Z" level=info msg="StopPodSandbox for \"64fd378561b5462bd9a9a451a83f2f63bc302a1d356185e49501f35817f9f4b7\"" Feb 13 19:34:35.088516 
containerd[1471]: time="2025-02-13T19:34:35.088502630Z" level=info msg="TearDown network for sandbox \"64fd378561b5462bd9a9a451a83f2f63bc302a1d356185e49501f35817f9f4b7\" successfully" Feb 13 19:34:35.088516 containerd[1471]: time="2025-02-13T19:34:35.088512089Z" level=info msg="StopPodSandbox for \"64fd378561b5462bd9a9a451a83f2f63bc302a1d356185e49501f35817f9f4b7\" returns successfully" Feb 13 19:34:35.089203 kubelet[2607]: I0213 19:34:35.088766 2607 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="501ac9e2d1b70d54a27304edd774f71526678c0f70b0590756ecc2311b11da0b" Feb 13 19:34:35.089402 containerd[1471]: time="2025-02-13T19:34:35.089368968Z" level=info msg="StopPodSandbox for \"4247e6051f4e3e2abe906fef95e20e5b3e5882a4b8e203441fadef632bbf9deb\"" Feb 13 19:34:35.089402 containerd[1471]: time="2025-02-13T19:34:35.089400259Z" level=info msg="StopPodSandbox for \"501ac9e2d1b70d54a27304edd774f71526678c0f70b0590756ecc2311b11da0b\"" Feb 13 19:34:35.089507 containerd[1471]: time="2025-02-13T19:34:35.089458371Z" level=info msg="TearDown network for sandbox \"4247e6051f4e3e2abe906fef95e20e5b3e5882a4b8e203441fadef632bbf9deb\" successfully" Feb 13 19:34:35.089507 containerd[1471]: time="2025-02-13T19:34:35.089470655Z" level=info msg="StopPodSandbox for \"4247e6051f4e3e2abe906fef95e20e5b3e5882a4b8e203441fadef632bbf9deb\" returns successfully" Feb 13 19:34:35.089639 containerd[1471]: time="2025-02-13T19:34:35.089610515Z" level=info msg="Ensure that sandbox 501ac9e2d1b70d54a27304edd774f71526678c0f70b0590756ecc2311b11da0b in task-service has been cleanup successfully" Feb 13 19:34:35.089857 containerd[1471]: time="2025-02-13T19:34:35.089834228Z" level=info msg="TearDown network for sandbox \"501ac9e2d1b70d54a27304edd774f71526678c0f70b0590756ecc2311b11da0b\" successfully" Feb 13 19:34:35.089857 containerd[1471]: time="2025-02-13T19:34:35.089851923Z" level=info msg="StopPodSandbox for 
\"501ac9e2d1b70d54a27304edd774f71526678c0f70b0590756ecc2311b11da0b\" returns successfully" Feb 13 19:34:35.090011 containerd[1471]: time="2025-02-13T19:34:35.089977636Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5875c56fd9-b2rnd,Uid:1aaf1ef7-2705-4815-a824-0f60456d76fc,Namespace:calico-apiserver,Attempt:5,}" Feb 13 19:34:35.090215 containerd[1471]: time="2025-02-13T19:34:35.090195478Z" level=info msg="StopPodSandbox for \"b0d6fd3ef2acae598dec1902e1434b0974fcd3025a8b93c199149d21301516da\"" Feb 13 19:34:35.090316 containerd[1471]: time="2025-02-13T19:34:35.090291714Z" level=info msg="TearDown network for sandbox \"b0d6fd3ef2acae598dec1902e1434b0974fcd3025a8b93c199149d21301516da\" successfully" Feb 13 19:34:35.090316 containerd[1471]: time="2025-02-13T19:34:35.090312544Z" level=info msg="StopPodSandbox for \"b0d6fd3ef2acae598dec1902e1434b0974fcd3025a8b93c199149d21301516da\" returns successfully" Feb 13 19:34:35.090691 containerd[1471]: time="2025-02-13T19:34:35.090653986Z" level=info msg="StopPodSandbox for \"f0010b009040dc3632feee176ef184fca2c082b092fb675b7d8242d302a9ab90\"" Feb 13 19:34:35.090813 containerd[1471]: time="2025-02-13T19:34:35.090790760Z" level=info msg="TearDown network for sandbox \"f0010b009040dc3632feee176ef184fca2c082b092fb675b7d8242d302a9ab90\" successfully" Feb 13 19:34:35.090859 containerd[1471]: time="2025-02-13T19:34:35.090814156Z" level=info msg="StopPodSandbox for \"f0010b009040dc3632feee176ef184fca2c082b092fb675b7d8242d302a9ab90\" returns successfully" Feb 13 19:34:35.091199 containerd[1471]: time="2025-02-13T19:34:35.091174504Z" level=info msg="StopPodSandbox for \"aa566766537fcbea79ee7dfd711b3aa1da552d84a540cd31d2773ed5d6234a37\"" Feb 13 19:34:35.091290 containerd[1471]: time="2025-02-13T19:34:35.091269688Z" level=info msg="TearDown network for sandbox \"aa566766537fcbea79ee7dfd711b3aa1da552d84a540cd31d2773ed5d6234a37\" successfully" Feb 13 19:34:35.091353 containerd[1471]: 
time="2025-02-13T19:34:35.091287793Z" level=info msg="StopPodSandbox for \"aa566766537fcbea79ee7dfd711b3aa1da552d84a540cd31d2773ed5d6234a37\" returns successfully" Feb 13 19:34:35.091726 containerd[1471]: time="2025-02-13T19:34:35.091687727Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-qqgd5,Uid:f2dd1bcc-42fa-437c-a3ad-b39a937ac4ed,Namespace:calico-system,Attempt:4,}" Feb 13 19:34:35.588445 systemd[1]: run-netns-cni\x2de226f05c\x2d7867\x2df213\x2d978e\x2dd0e8175e48cf.mount: Deactivated successfully. Feb 13 19:34:35.588566 systemd[1]: run-netns-cni\x2d409ee740\x2d719a\x2d9441\x2db530\x2d3d8ab2995896.mount: Deactivated successfully. Feb 13 19:34:35.588638 systemd[1]: run-netns-cni\x2d50a6933a\x2d7662\x2d3cdc\x2d3bb9\x2db381420fe531.mount: Deactivated successfully. Feb 13 19:34:35.588714 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-97dd6737930f1325b26522932f2dc46e8827490262c12ec40f5c15955f54a59e-shm.mount: Deactivated successfully. Feb 13 19:34:35.588790 systemd[1]: run-netns-cni\x2dc8b3901a\x2dacaa\x2de273\x2d922f\x2d7271eb2c0dcc.mount: Deactivated successfully. Feb 13 19:34:35.588862 systemd[1]: run-netns-cni\x2d4b4e2864\x2d7f17\x2d4c18\x2df909\x2d7a110a8f1654.mount: Deactivated successfully. Feb 13 19:34:35.588939 systemd[1]: run-netns-cni\x2dd9906c38\x2d41c5\x2d19c0\x2dbf19\x2d1cb283d4d10c.mount: Deactivated successfully. Feb 13 19:34:35.589028 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1238608275.mount: Deactivated successfully. 
Feb 13 19:34:36.015730 containerd[1471]: time="2025-02-13T19:34:36.015669463Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 19:34:36.031316 containerd[1471]: time="2025-02-13T19:34:36.031229016Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.1: active requests=0, bytes read=142742010"
Feb 13 19:34:36.041649 containerd[1471]: time="2025-02-13T19:34:36.041599830Z" level=info msg="ImageCreate event name:\"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 19:34:36.049094 containerd[1471]: time="2025-02-13T19:34:36.048373736Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 19:34:36.049094 containerd[1471]: time="2025-02-13T19:34:36.048913800Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.1\" with image id \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\", size \"142741872\" in 6.098904217s"
Feb 13 19:34:36.049094 containerd[1471]: time="2025-02-13T19:34:36.048942295Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\" returns image reference \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\""
Feb 13 19:34:36.071287 containerd[1471]: time="2025-02-13T19:34:36.071234423Z" level=info msg="CreateContainer within sandbox \"8dcd57549d4fcc6e70465924d73a1e25442d8e69969f78e99ae72d2a3240843f\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}"
Feb 13 19:34:36.112485 containerd[1471]: time="2025-02-13T19:34:36.112426147Z" level=info msg="CreateContainer within sandbox \"8dcd57549d4fcc6e70465924d73a1e25442d8e69969f78e99ae72d2a3240843f\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"1797fb4d6ac60b525d7e85feae8af05273722585d1d09d765048c0cdb3615c01\""
Feb 13 19:34:36.116180 containerd[1471]: time="2025-02-13T19:34:36.116131435Z" level=info msg="StartContainer for \"1797fb4d6ac60b525d7e85feae8af05273722585d1d09d765048c0cdb3615c01\""
Feb 13 19:34:36.149796 containerd[1471]: time="2025-02-13T19:34:36.149725319Z" level=error msg="Failed to destroy network for sandbox \"86a6ca50deb416c653bd1f7b57fc5061ed1acbb2841494199dff84d76e7acebd\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 19:34:36.151180 containerd[1471]: time="2025-02-13T19:34:36.151073016Z" level=error msg="encountered an error cleaning up failed sandbox \"86a6ca50deb416c653bd1f7b57fc5061ed1acbb2841494199dff84d76e7acebd\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 19:34:36.151180 containerd[1471]: time="2025-02-13T19:34:36.151146177Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-qqgd5,Uid:f2dd1bcc-42fa-437c-a3ad-b39a937ac4ed,Namespace:calico-system,Attempt:4,} failed, error" error="failed to setup network for sandbox \"86a6ca50deb416c653bd1f7b57fc5061ed1acbb2841494199dff84d76e7acebd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 19:34:36.151437 kubelet[2607]: E0213 19:34:36.151384 2607 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"86a6ca50deb416c653bd1f7b57fc5061ed1acbb2841494199dff84d76e7acebd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 19:34:36.151811 kubelet[2607]: E0213 19:34:36.151474 2607 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"86a6ca50deb416c653bd1f7b57fc5061ed1acbb2841494199dff84d76e7acebd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-qqgd5"
Feb 13 19:34:36.151811 kubelet[2607]: E0213 19:34:36.151501 2607 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"86a6ca50deb416c653bd1f7b57fc5061ed1acbb2841494199dff84d76e7acebd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-qqgd5"
Feb 13 19:34:36.151811 kubelet[2607]: E0213 19:34:36.151544 2607 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-qqgd5_calico-system(f2dd1bcc-42fa-437c-a3ad-b39a937ac4ed)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-qqgd5_calico-system(f2dd1bcc-42fa-437c-a3ad-b39a937ac4ed)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"86a6ca50deb416c653bd1f7b57fc5061ed1acbb2841494199dff84d76e7acebd\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-qqgd5" podUID="f2dd1bcc-42fa-437c-a3ad-b39a937ac4ed"
Feb 13 19:34:36.175951 containerd[1471]: time="2025-02-13T19:34:36.175787401Z" level=error msg="Failed to destroy network for sandbox \"e8758c1e58cd992a88100a40db0a296fb3ff15c8c61693316efe45a1dabc6060\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 19:34:36.176735 containerd[1471]: time="2025-02-13T19:34:36.176601034Z" level=error msg="encountered an error cleaning up failed sandbox \"e8758c1e58cd992a88100a40db0a296fb3ff15c8c61693316efe45a1dabc6060\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 19:34:36.177110 containerd[1471]: time="2025-02-13T19:34:36.176696219Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5875c56fd9-cljgm,Uid:54b6b75e-6c3e-4c6f-9680-59e42f2a9685,Namespace:calico-apiserver,Attempt:5,} failed, error" error="failed to setup network for sandbox \"e8758c1e58cd992a88100a40db0a296fb3ff15c8c61693316efe45a1dabc6060\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 19:34:36.178115 kubelet[2607]: E0213 19:34:36.177125 2607 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e8758c1e58cd992a88100a40db0a296fb3ff15c8c61693316efe45a1dabc6060\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 19:34:36.178115 kubelet[2607]: E0213 19:34:36.177205 2607 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e8758c1e58cd992a88100a40db0a296fb3ff15c8c61693316efe45a1dabc6060\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5875c56fd9-cljgm"
Feb 13 19:34:36.178115 kubelet[2607]: E0213 19:34:36.177237 2607 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e8758c1e58cd992a88100a40db0a296fb3ff15c8c61693316efe45a1dabc6060\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5875c56fd9-cljgm"
Feb 13 19:34:36.178239 kubelet[2607]: E0213 19:34:36.177298 2607 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5875c56fd9-cljgm_calico-apiserver(54b6b75e-6c3e-4c6f-9680-59e42f2a9685)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5875c56fd9-cljgm_calico-apiserver(54b6b75e-6c3e-4c6f-9680-59e42f2a9685)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e8758c1e58cd992a88100a40db0a296fb3ff15c8c61693316efe45a1dabc6060\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5875c56fd9-cljgm" podUID="54b6b75e-6c3e-4c6f-9680-59e42f2a9685"
Feb 13 19:34:36.186236 containerd[1471]: time="2025-02-13T19:34:36.186185908Z" level=error msg="Failed to destroy network for sandbox \"858152d291a9a1a310f60b77858fdb6e08a56c2f3ec9303b5eace93674f2ef16\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 19:34:36.186697 containerd[1471]: time="2025-02-13T19:34:36.186658973Z" level=error msg="encountered an error cleaning up failed sandbox \"858152d291a9a1a310f60b77858fdb6e08a56c2f3ec9303b5eace93674f2ef16\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 19:34:36.186785 containerd[1471]: time="2025-02-13T19:34:36.186755350Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-jsldt,Uid:f4864561-a999-4e48-83d2-08fa358e2d4a,Namespace:kube-system,Attempt:5,} failed, error" error="failed to setup network for sandbox \"858152d291a9a1a310f60b77858fdb6e08a56c2f3ec9303b5eace93674f2ef16\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 19:34:36.187357 kubelet[2607]: E0213 19:34:36.187009 2607 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"858152d291a9a1a310f60b77858fdb6e08a56c2f3ec9303b5eace93674f2ef16\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 19:34:36.187357 kubelet[2607]: E0213 19:34:36.187071 2607 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"858152d291a9a1a310f60b77858fdb6e08a56c2f3ec9303b5eace93674f2ef16\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-jsldt"
Feb 13 19:34:36.187357 kubelet[2607]: E0213 19:34:36.187090 2607 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"858152d291a9a1a310f60b77858fdb6e08a56c2f3ec9303b5eace93674f2ef16\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-jsldt"
Feb 13 19:34:36.187554 kubelet[2607]: E0213 19:34:36.187137 2607 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-jsldt_kube-system(f4864561-a999-4e48-83d2-08fa358e2d4a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-jsldt_kube-system(f4864561-a999-4e48-83d2-08fa358e2d4a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"858152d291a9a1a310f60b77858fdb6e08a56c2f3ec9303b5eace93674f2ef16\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-jsldt" podUID="f4864561-a999-4e48-83d2-08fa358e2d4a"
Feb 13 19:34:36.189403 containerd[1471]: time="2025-02-13T19:34:36.189373746Z" level=error msg="Failed to destroy network for sandbox \"f63de61b705d714d0a8b201be806c0e5c6e0db7123b13fe53d6b45edc90fb2cc\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 19:34:36.189923 containerd[1471]: time="2025-02-13T19:34:36.189896227Z" level=error msg="encountered an error cleaning up failed sandbox \"f63de61b705d714d0a8b201be806c0e5c6e0db7123b13fe53d6b45edc90fb2cc\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 19:34:36.190009 containerd[1471]: time="2025-02-13T19:34:36.189982855Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5875c56fd9-b2rnd,Uid:1aaf1ef7-2705-4815-a824-0f60456d76fc,Namespace:calico-apiserver,Attempt:5,} failed, error" error="failed to setup network for sandbox \"f63de61b705d714d0a8b201be806c0e5c6e0db7123b13fe53d6b45edc90fb2cc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 19:34:36.190356 kubelet[2607]: E0213 19:34:36.190314 2607 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f63de61b705d714d0a8b201be806c0e5c6e0db7123b13fe53d6b45edc90fb2cc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 19:34:36.190400 kubelet[2607]: E0213 19:34:36.190384 2607 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f63de61b705d714d0a8b201be806c0e5c6e0db7123b13fe53d6b45edc90fb2cc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5875c56fd9-b2rnd"
Feb 13 19:34:36.190428 kubelet[2607]: E0213 19:34:36.190410 2607 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f63de61b705d714d0a8b201be806c0e5c6e0db7123b13fe53d6b45edc90fb2cc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5875c56fd9-b2rnd"
Feb 13 19:34:36.190498 kubelet[2607]: E0213 19:34:36.190466 2607 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5875c56fd9-b2rnd_calico-apiserver(1aaf1ef7-2705-4815-a824-0f60456d76fc)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5875c56fd9-b2rnd_calico-apiserver(1aaf1ef7-2705-4815-a824-0f60456d76fc)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f63de61b705d714d0a8b201be806c0e5c6e0db7123b13fe53d6b45edc90fb2cc\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5875c56fd9-b2rnd" podUID="1aaf1ef7-2705-4815-a824-0f60456d76fc"
Feb 13 19:34:36.215125 systemd[1]: Started cri-containerd-1797fb4d6ac60b525d7e85feae8af05273722585d1d09d765048c0cdb3615c01.scope - libcontainer container 1797fb4d6ac60b525d7e85feae8af05273722585d1d09d765048c0cdb3615c01.
Feb 13 19:34:36.220518 containerd[1471]: time="2025-02-13T19:34:36.220445941Z" level=error msg="Failed to destroy network for sandbox \"cd8b258c9599b776a51885aaf8a68589e7fddaa178420bfa0ee33ad50d617e00\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 19:34:36.220887 containerd[1471]: time="2025-02-13T19:34:36.220854361Z" level=error msg="encountered an error cleaning up failed sandbox \"cd8b258c9599b776a51885aaf8a68589e7fddaa178420bfa0ee33ad50d617e00\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 19:34:36.220938 containerd[1471]: time="2025-02-13T19:34:36.220920369Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-szjm2,Uid:0d54ca70-3d73-40a5-9a0e-85776bf4fb5e,Namespace:kube-system,Attempt:5,} failed, error" error="failed to setup network for sandbox \"cd8b258c9599b776a51885aaf8a68589e7fddaa178420bfa0ee33ad50d617e00\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 19:34:36.221207 kubelet[2607]: E0213 19:34:36.221156 2607 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cd8b258c9599b776a51885aaf8a68589e7fddaa178420bfa0ee33ad50d617e00\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 19:34:36.221265 kubelet[2607]: E0213 19:34:36.221233 2607 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cd8b258c9599b776a51885aaf8a68589e7fddaa178420bfa0ee33ad50d617e00\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-szjm2"
Feb 13 19:34:36.221265 kubelet[2607]: E0213 19:34:36.221258 2607 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cd8b258c9599b776a51885aaf8a68589e7fddaa178420bfa0ee33ad50d617e00\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-szjm2"
Feb 13 19:34:36.221553 kubelet[2607]: E0213 19:34:36.221300 2607 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-szjm2_kube-system(0d54ca70-3d73-40a5-9a0e-85776bf4fb5e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-szjm2_kube-system(0d54ca70-3d73-40a5-9a0e-85776bf4fb5e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"cd8b258c9599b776a51885aaf8a68589e7fddaa178420bfa0ee33ad50d617e00\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-szjm2" podUID="0d54ca70-3d73-40a5-9a0e-85776bf4fb5e"
Feb 13 19:34:36.223241 containerd[1471]: time="2025-02-13T19:34:36.223115425Z" level=error msg="Failed to destroy network for sandbox \"031441debf703c105e4025a6aa092a2cce8426cb78e7c00dc5573e7f67b8abb2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 19:34:36.223907 containerd[1471]: time="2025-02-13T19:34:36.223773869Z" level=error msg="encountered an error cleaning up failed sandbox \"031441debf703c105e4025a6aa092a2cce8426cb78e7c00dc5573e7f67b8abb2\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 19:34:36.223907 containerd[1471]: time="2025-02-13T19:34:36.223830319Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6f695fb64c-rb8fz,Uid:30182d20-c572-4a40-ab8d-be90016a3c84,Namespace:calico-system,Attempt:5,} failed, error" error="failed to setup network for sandbox \"031441debf703c105e4025a6aa092a2cce8426cb78e7c00dc5573e7f67b8abb2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 19:34:36.224293 kubelet[2607]: E0213 19:34:36.224093 2607 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"031441debf703c105e4025a6aa092a2cce8426cb78e7c00dc5573e7f67b8abb2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 19:34:36.224293 kubelet[2607]: E0213 19:34:36.224131 2607 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"031441debf703c105e4025a6aa092a2cce8426cb78e7c00dc5573e7f67b8abb2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6f695fb64c-rb8fz"
Feb 13 19:34:36.224293 kubelet[2607]: E0213 19:34:36.224148 2607 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"031441debf703c105e4025a6aa092a2cce8426cb78e7c00dc5573e7f67b8abb2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6f695fb64c-rb8fz"
Feb 13 19:34:36.224400 kubelet[2607]: E0213 19:34:36.224176 2607 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-6f695fb64c-rb8fz_calico-system(30182d20-c572-4a40-ab8d-be90016a3c84)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-6f695fb64c-rb8fz_calico-system(30182d20-c572-4a40-ab8d-be90016a3c84)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"031441debf703c105e4025a6aa092a2cce8426cb78e7c00dc5573e7f67b8abb2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6f695fb64c-rb8fz" podUID="30182d20-c572-4a40-ab8d-be90016a3c84"
Feb 13 19:34:36.297616 containerd[1471]: time="2025-02-13T19:34:36.297420352Z" level=info msg="StartContainer for \"1797fb4d6ac60b525d7e85feae8af05273722585d1d09d765048c0cdb3615c01\" returns successfully"
Feb 13 19:34:36.328665 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information.
Feb 13 19:34:36.328823 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved.
Feb 13 19:34:36.593906 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-858152d291a9a1a310f60b77858fdb6e08a56c2f3ec9303b5eace93674f2ef16-shm.mount: Deactivated successfully.
Feb 13 19:34:36.594078 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-cd8b258c9599b776a51885aaf8a68589e7fddaa178420bfa0ee33ad50d617e00-shm.mount: Deactivated successfully.
Feb 13 19:34:36.594176 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-e8758c1e58cd992a88100a40db0a296fb3ff15c8c61693316efe45a1dabc6060-shm.mount: Deactivated successfully.
Feb 13 19:34:37.096973 kubelet[2607]: I0213 19:34:37.096934 2607 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="858152d291a9a1a310f60b77858fdb6e08a56c2f3ec9303b5eace93674f2ef16"
Feb 13 19:34:37.097860 containerd[1471]: time="2025-02-13T19:34:37.097820993Z" level=info msg="StopPodSandbox for \"858152d291a9a1a310f60b77858fdb6e08a56c2f3ec9303b5eace93674f2ef16\""
Feb 13 19:34:37.098376 containerd[1471]: time="2025-02-13T19:34:37.098047240Z" level=info msg="Ensure that sandbox 858152d291a9a1a310f60b77858fdb6e08a56c2f3ec9303b5eace93674f2ef16 in task-service has been cleanup successfully"
Feb 13 19:34:37.099990 kubelet[2607]: I0213 19:34:37.099491 2607 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e8758c1e58cd992a88100a40db0a296fb3ff15c8c61693316efe45a1dabc6060"
Feb 13 19:34:37.100108 containerd[1471]: time="2025-02-13T19:34:37.099984486Z" level=info msg="StopPodSandbox for \"e8758c1e58cd992a88100a40db0a296fb3ff15c8c61693316efe45a1dabc6060\""
Feb 13 19:34:37.100216 containerd[1471]: time="2025-02-13T19:34:37.100196125Z" level=info msg="Ensure that sandbox e8758c1e58cd992a88100a40db0a296fb3ff15c8c61693316efe45a1dabc6060 in task-service has been cleanup successfully"
Feb 13 19:34:37.100920 systemd[1]: run-netns-cni\x2d9039dbb1\x2dec78\x2d51bb\x2d962b\x2d4c4871785b5a.mount: Deactivated successfully.
Feb 13 19:34:37.101131 containerd[1471]: time="2025-02-13T19:34:37.101096676Z" level=info msg="TearDown network for sandbox \"858152d291a9a1a310f60b77858fdb6e08a56c2f3ec9303b5eace93674f2ef16\" successfully"
Feb 13 19:34:37.101131 containerd[1471]: time="2025-02-13T19:34:37.101129489Z" level=info msg="StopPodSandbox for \"858152d291a9a1a310f60b77858fdb6e08a56c2f3ec9303b5eace93674f2ef16\" returns successfully"
Feb 13 19:34:37.101404 containerd[1471]: time="2025-02-13T19:34:37.101296332Z" level=info msg="TearDown network for sandbox \"e8758c1e58cd992a88100a40db0a296fb3ff15c8c61693316efe45a1dabc6060\" successfully"
Feb 13 19:34:37.101404 containerd[1471]: time="2025-02-13T19:34:37.101313455Z" level=info msg="StopPodSandbox for \"e8758c1e58cd992a88100a40db0a296fb3ff15c8c61693316efe45a1dabc6060\" returns successfully"
Feb 13 19:34:37.101864 containerd[1471]: time="2025-02-13T19:34:37.101839302Z" level=info msg="StopPodSandbox for \"31ac702d7bb22517e09749816249e112fb40e9f991e46e2561c8bf0e3eca0037\""
Feb 13 19:34:37.102007 containerd[1471]: time="2025-02-13T19:34:37.101988210Z" level=info msg="TearDown network for sandbox \"31ac702d7bb22517e09749816249e112fb40e9f991e46e2561c8bf0e3eca0037\" successfully"
Feb 13 19:34:37.102007 containerd[1471]: time="2025-02-13T19:34:37.102005373Z" level=info msg="StopPodSandbox for \"31ac702d7bb22517e09749816249e112fb40e9f991e46e2561c8bf0e3eca0037\" returns successfully"
Feb 13 19:34:37.102584 containerd[1471]: time="2025-02-13T19:34:37.102356141Z" level=info msg="StopPodSandbox for \"d2d9fde54ceac4a62dbea2f435664c6a7f34a7446a3af59e0f58b5f00402b958\""
Feb 13 19:34:37.102584 containerd[1471]: time="2025-02-13T19:34:37.102456405Z" level=info msg="TearDown network for sandbox \"d2d9fde54ceac4a62dbea2f435664c6a7f34a7446a3af59e0f58b5f00402b958\" successfully"
Feb 13 19:34:37.102584 containerd[1471]: time="2025-02-13T19:34:37.102471153Z" level=info msg="StopPodSandbox for \"d2d9fde54ceac4a62dbea2f435664c6a7f34a7446a3af59e0f58b5f00402b958\" returns successfully"
Feb 13 19:34:37.103054 containerd[1471]: time="2025-02-13T19:34:37.103003523Z" level=info msg="StopPodSandbox for \"c92443ca87370878a9ab845df4e7daaa7766a7e95a75bab8b94717ca20ae0639\""
Feb 13 19:34:37.103151 containerd[1471]: time="2025-02-13T19:34:37.103123845Z" level=info msg="TearDown network for sandbox \"c92443ca87370878a9ab845df4e7daaa7766a7e95a75bab8b94717ca20ae0639\" successfully"
Feb 13 19:34:37.103151 containerd[1471]: time="2025-02-13T19:34:37.103136309Z" level=info msg="StopPodSandbox for \"c92443ca87370878a9ab845df4e7daaa7766a7e95a75bab8b94717ca20ae0639\" returns successfully"
Feb 13 19:34:37.103621 containerd[1471]: time="2025-02-13T19:34:37.103464173Z" level=info msg="StopPodSandbox for \"d4839a30a8fbd6f329284e6ddb9cc2ab04aa9452fd825eee6553156e626184f7\""
Feb 13 19:34:37.103621 containerd[1471]: time="2025-02-13T19:34:37.103587842Z" level=info msg="TearDown network for sandbox \"d4839a30a8fbd6f329284e6ddb9cc2ab04aa9452fd825eee6553156e626184f7\" successfully"
Feb 13 19:34:37.103621 containerd[1471]: time="2025-02-13T19:34:37.103602159Z" level=info msg="StopPodSandbox for \"d4839a30a8fbd6f329284e6ddb9cc2ab04aa9452fd825eee6553156e626184f7\" returns successfully"
Feb 13 19:34:37.104393 containerd[1471]: time="2025-02-13T19:34:37.103751538Z" level=info msg="StopPodSandbox for \"06ef3d0e0ebe408a479b06c02f0339a62b7b2fcde9d04ece8f023dcc846230ff\""
Feb 13 19:34:37.104393 containerd[1471]: time="2025-02-13T19:34:37.103841933Z" level=info msg="TearDown network for sandbox \"06ef3d0e0ebe408a479b06c02f0339a62b7b2fcde9d04ece8f023dcc846230ff\" successfully"
Feb 13 19:34:37.104393 containerd[1471]: time="2025-02-13T19:34:37.103853325Z" level=info msg="StopPodSandbox for \"06ef3d0e0ebe408a479b06c02f0339a62b7b2fcde9d04ece8f023dcc846230ff\" returns successfully"
Feb 13 19:34:37.105567 containerd[1471]: time="2025-02-13T19:34:37.104542758Z" level=info msg="StopPodSandbox for \"345c2d0a90e62e6b6da2cd0d7a3cf3b4c8615b6b47c2f70fb43fdef45c00c98f\""
Feb 13 19:34:37.105567 containerd[1471]: time="2025-02-13T19:34:37.104633053Z" level=info msg="TearDown network for sandbox \"345c2d0a90e62e6b6da2cd0d7a3cf3b4c8615b6b47c2f70fb43fdef45c00c98f\" successfully"
Feb 13 19:34:37.105567 containerd[1471]: time="2025-02-13T19:34:37.104645146Z" level=info msg="StopPodSandbox for \"345c2d0a90e62e6b6da2cd0d7a3cf3b4c8615b6b47c2f70fb43fdef45c00c98f\" returns successfully"
Feb 13 19:34:37.105567 containerd[1471]: time="2025-02-13T19:34:37.105286927Z" level=info msg="StopPodSandbox for \"a03ca714ca808a2148d76b09fbccf7564f4b62a6558236571eb5725797eee6e7\""
Feb 13 19:34:37.105567 containerd[1471]: time="2025-02-13T19:34:37.105364065Z" level=info msg="TearDown network for sandbox \"a03ca714ca808a2148d76b09fbccf7564f4b62a6558236571eb5725797eee6e7\" successfully"
Feb 13 19:34:37.105567 containerd[1471]: time="2025-02-13T19:34:37.105372913Z" level=info msg="StopPodSandbox for \"a03ca714ca808a2148d76b09fbccf7564f4b62a6558236571eb5725797eee6e7\" returns successfully"
Feb 13 19:34:37.105794 systemd[1]: run-netns-cni\x2da7d46fca\x2d915b\x2d7401\x2df829\x2ddebb0762e950.mount: Deactivated successfully.
Feb 13 19:34:37.106375 containerd[1471]: time="2025-02-13T19:34:37.106215482Z" level=info msg="StopPodSandbox for \"edf3205debe8fad6533b2fdb2ea0be6081befe8af003e319ea5f631de16cbd2f\""
Feb 13 19:34:37.106631 containerd[1471]: time="2025-02-13T19:34:37.106448021Z" level=info msg="StopPodSandbox for \"57d82f82dcc70e58e1433a34e35ff480974978c9bcfafbd8a88124a3d4bb74c2\""
Feb 13 19:34:37.106678 containerd[1471]: time="2025-02-13T19:34:37.106665963Z" level=info msg="TearDown network for sandbox \"57d82f82dcc70e58e1433a34e35ff480974978c9bcfafbd8a88124a3d4bb74c2\" successfully"
Feb 13 19:34:37.106678 containerd[1471]: time="2025-02-13T19:34:37.106677065Z" level=info msg="StopPodSandbox for \"57d82f82dcc70e58e1433a34e35ff480974978c9bcfafbd8a88124a3d4bb74c2\" returns successfully"
Feb 13 19:34:37.106924 containerd[1471]: time="2025-02-13T19:34:37.106898261Z" level=info msg="TearDown network for sandbox \"edf3205debe8fad6533b2fdb2ea0be6081befe8af003e319ea5f631de16cbd2f\" successfully"
Feb 13 19:34:37.106924 containerd[1471]: time="2025-02-13T19:34:37.106910796Z" level=info msg="StopPodSandbox for \"edf3205debe8fad6533b2fdb2ea0be6081befe8af003e319ea5f631de16cbd2f\" returns successfully"
Feb 13 19:34:37.107312 kubelet[2607]: I0213 19:34:37.107267 2607 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cd8b258c9599b776a51885aaf8a68589e7fddaa178420bfa0ee33ad50d617e00"
Feb 13 19:34:37.107381 containerd[1471]: time="2025-02-13T19:34:37.107322131Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5875c56fd9-cljgm,Uid:54b6b75e-6c3e-4c6f-9680-59e42f2a9685,Namespace:calico-apiserver,Attempt:6,}"
Feb 13 19:34:37.107897 containerd[1471]: time="2025-02-13T19:34:37.107861203Z" level=info msg="StopPodSandbox for \"f7978f4d3caf80c32d6384364e56674980211f2c5f335c86d6058bfdec2fee2e\""
Feb 13 19:34:37.108804 containerd[1471]: time="2025-02-13T19:34:37.107997487Z" level=info msg="TearDown network for sandbox \"f7978f4d3caf80c32d6384364e56674980211f2c5f335c86d6058bfdec2fee2e\" successfully"
Feb 13 19:34:37.108804 containerd[1471]: time="2025-02-13T19:34:37.108013588Z" level=info msg="StopPodSandbox for \"f7978f4d3caf80c32d6384364e56674980211f2c5f335c86d6058bfdec2fee2e\" returns successfully"
Feb 13 19:34:37.108804 containerd[1471]: time="2025-02-13T19:34:37.108495480Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-jsldt,Uid:f4864561-a999-4e48-83d2-08fa358e2d4a,Namespace:kube-system,Attempt:6,}"
Feb 13 19:34:37.108804 containerd[1471]: time="2025-02-13T19:34:37.108541950Z" level=info msg="StopPodSandbox for \"cd8b258c9599b776a51885aaf8a68589e7fddaa178420bfa0ee33ad50d617e00\""
Feb 13 19:34:37.108804 containerd[1471]: time="2025-02-13T19:34:37.108783076Z" level=info msg="Ensure that sandbox cd8b258c9599b776a51885aaf8a68589e7fddaa178420bfa0ee33ad50d617e00 in task-service has been cleanup successfully"
Feb 13 19:34:37.109112 kubelet[2607]: E0213 19:34:37.108186 2607 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:34:37.109621 containerd[1471]: time="2025-02-13T19:34:37.109266881Z" level=info msg="TearDown network for sandbox \"cd8b258c9599b776a51885aaf8a68589e7fddaa178420bfa0ee33ad50d617e00\" successfully"
Feb 13 19:34:37.109621 containerd[1471]: time="2025-02-13T19:34:37.109314492Z" level=info msg="StopPodSandbox for \"cd8b258c9599b776a51885aaf8a68589e7fddaa178420bfa0ee33ad50d617e00\" returns successfully"
Feb 13 19:34:37.110586 containerd[1471]: time="2025-02-13T19:34:37.110560612Z" level=info msg="StopPodSandbox for \"97dd6737930f1325b26522932f2dc46e8827490262c12ec40f5c15955f54a59e\""
Feb 13 19:34:37.110833 containerd[1471]: time="2025-02-13T19:34:37.110810926Z" level=info msg="TearDown network for sandbox \"97dd6737930f1325b26522932f2dc46e8827490262c12ec40f5c15955f54a59e\" successfully"
Feb 13 19:34:37.110908 containerd[1471]: time="2025-02-13T19:34:37.110890450Z" level=info msg="StopPodSandbox for \"97dd6737930f1325b26522932f2dc46e8827490262c12ec40f5c15955f54a59e\" returns successfully"
Feb 13 19:34:37.111627 kubelet[2607]: E0213 19:34:37.111594 2607 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:34:37.111706 systemd[1]: run-netns-cni\x2d8b2ba3e2\x2d4f10\x2de0c7\x2d69f6\x2d948318c3943e.mount: Deactivated successfully.
Feb 13 19:34:37.114248 containerd[1471]: time="2025-02-13T19:34:37.114216159Z" level=info msg="StopPodSandbox for \"1031254106aaef81fc07c832cb6e70669aeedf00af14ec36dfccb4f897bf3a9e\""
Feb 13 19:34:37.114348 containerd[1471]: time="2025-02-13T19:34:37.114322655Z" level=info msg="TearDown network for sandbox \"1031254106aaef81fc07c832cb6e70669aeedf00af14ec36dfccb4f897bf3a9e\" successfully"
Feb 13 19:34:37.114348 containerd[1471]: time="2025-02-13T19:34:37.114335259Z" level=info msg="StopPodSandbox for \"1031254106aaef81fc07c832cb6e70669aeedf00af14ec36dfccb4f897bf3a9e\" returns successfully"
Feb 13 19:34:37.114816 containerd[1471]: time="2025-02-13T19:34:37.114796140Z" level=info msg="StopPodSandbox for \"129ca2d51794333b0651f81fbb83f78853ee604014c43d4d7646cf5817de245d\""
Feb 13 19:34:37.115001 containerd[1471]: time="2025-02-13T19:34:37.114984183Z" level=info msg="TearDown network for sandbox \"129ca2d51794333b0651f81fbb83f78853ee604014c43d4d7646cf5817de245d\" successfully"
Feb 13 19:34:37.115116 containerd[1471]: time="2025-02-13T19:34:37.115060761Z" level=info msg="StopPodSandbox for \"129ca2d51794333b0651f81fbb83f78853ee604014c43d4d7646cf5817de245d\" returns successfully"
Feb 13 19:34:37.115352 containerd[1471]: time="2025-02-13T19:34:37.115325213Z" level=info msg="StopPodSandbox for \"c207751ef81c77cf11834cc9d214df5236d504a114b5d8c9091976394140218b\""
Feb 13 19:34:37.115528 containerd[1471]: time="2025-02-13T19:34:37.115421789Z" level=info msg="TearDown network for sandbox \"c207751ef81c77cf11834cc9d214df5236d504a114b5d8c9091976394140218b\" successfully"
Feb 13 19:34:37.115528 containerd[1471]: time="2025-02-13T19:34:37.115440144Z" level=info msg="StopPodSandbox for \"c207751ef81c77cf11834cc9d214df5236d504a114b5d8c9091976394140218b\" returns successfully"
Feb 13 19:34:37.117599 containerd[1471]: time="2025-02-13T19:34:37.117122708Z" level=info msg="StopPodSandbox for \"3bf8c6f6fa3408c17c0a5cb18984a1b18b1c350fbd4f4341429f107be6ca1f1b\""
Feb 13 19:34:37.117599 containerd[1471]: time="2025-02-13T19:34:37.117214254Z" level=info msg="TearDown network for sandbox \"3bf8c6f6fa3408c17c0a5cb18984a1b18b1c350fbd4f4341429f107be6ca1f1b\" successfully"
Feb 13 19:34:37.117599 containerd[1471]: time="2025-02-13T19:34:37.117223533Z" level=info msg="StopPodSandbox for \"3bf8c6f6fa3408c17c0a5cb18984a1b18b1c350fbd4f4341429f107be6ca1f1b\" returns successfully"
Feb 13 19:34:37.119086 kubelet[2607]: E0213 19:34:37.117569 2607 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:34:37.119176 containerd[1471]: time="2025-02-13T19:34:37.118117411Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-szjm2,Uid:0d54ca70-3d73-40a5-9a0e-85776bf4fb5e,Namespace:kube-system,Attempt:6,}"
Feb 13 19:34:37.119745 kubelet[2607]: I0213 19:34:37.119374 2607 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="031441debf703c105e4025a6aa092a2cce8426cb78e7c00dc5573e7f67b8abb2"
Feb 13 19:34:37.120152 containerd[1471]: time="2025-02-13T19:34:37.120114100Z" level=info msg="StopPodSandbox for \"031441debf703c105e4025a6aa092a2cce8426cb78e7c00dc5573e7f67b8abb2\""
Feb 13 19:34:37.122120 containerd[1471]: time="2025-02-13T19:34:37.122085713Z" level=info msg="Ensure that sandbox 031441debf703c105e4025a6aa092a2cce8426cb78e7c00dc5573e7f67b8abb2 in task-service has been cleanup successfully"
Feb 13 19:34:37.123769 kubelet[2607]: I0213 19:34:37.123718 2607 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f63de61b705d714d0a8b201be806c0e5c6e0db7123b13fe53d6b45edc90fb2cc"
Feb 13 19:34:37.124122 containerd[1471]: time="2025-02-13T19:34:37.124102912Z" level=info msg="TearDown network for sandbox \"031441debf703c105e4025a6aa092a2cce8426cb78e7c00dc5573e7f67b8abb2\" successfully"
Feb 13 19:34:37.124467 containerd[1471]: time="2025-02-13T19:34:37.124248513Z" level=info msg="StopPodSandbox for \"031441debf703c105e4025a6aa092a2cce8426cb78e7c00dc5573e7f67b8abb2\" returns successfully"
Feb 13 19:34:37.124844 systemd[1]: run-netns-cni\x2da8007c24\x2d58ed\x2d3e00\x2ddb16\x2d02dbcd69a18f.mount: Deactivated successfully.
Feb 13 19:34:37.125550 containerd[1471]: time="2025-02-13T19:34:37.124292278Z" level=info msg="StopPodSandbox for \"f63de61b705d714d0a8b201be806c0e5c6e0db7123b13fe53d6b45edc90fb2cc\""
Feb 13 19:34:37.125740 containerd[1471]: time="2025-02-13T19:34:37.125435879Z" level=info msg="StopPodSandbox for \"ead65a76249d719a53d5c094cc281aefba7a78a7130280cc86306f191dd30278\""
Feb 13 19:34:37.125740 containerd[1471]: time="2025-02-13T19:34:37.125685030Z" level=info msg="Ensure that sandbox f63de61b705d714d0a8b201be806c0e5c6e0db7123b13fe53d6b45edc90fb2cc in task-service has been cleanup successfully"
Feb 13 19:34:37.125824 containerd[1471]: time="2025-02-13T19:34:37.125690511Z" level=info msg="TearDown network for sandbox \"ead65a76249d719a53d5c094cc281aefba7a78a7130280cc86306f191dd30278\" successfully"
Feb 13 19:34:37.126241 containerd[1471]: time="2025-02-13T19:34:37.125875859Z" level=info msg="StopPodSandbox for \"ead65a76249d719a53d5c094cc281aefba7a78a7130280cc86306f191dd30278\" returns successfully"
Feb 13 19:34:37.126241 containerd[1471]: time="2025-02-13T19:34:37.126120101Z" level=info msg="TearDown network for
sandbox \"f63de61b705d714d0a8b201be806c0e5c6e0db7123b13fe53d6b45edc90fb2cc\" successfully" Feb 13 19:34:37.126241 containerd[1471]: time="2025-02-13T19:34:37.126137615Z" level=info msg="StopPodSandbox for \"f63de61b705d714d0a8b201be806c0e5c6e0db7123b13fe53d6b45edc90fb2cc\" returns successfully" Feb 13 19:34:37.127150 containerd[1471]: time="2025-02-13T19:34:37.126746713Z" level=info msg="StopPodSandbox for \"0bb1d45c2c150f5d872fac8c961399f27779cb73a87b22395960ffa2be15d1a8\"" Feb 13 19:34:37.127150 containerd[1471]: time="2025-02-13T19:34:37.126848149Z" level=info msg="TearDown network for sandbox \"0bb1d45c2c150f5d872fac8c961399f27779cb73a87b22395960ffa2be15d1a8\" successfully" Feb 13 19:34:37.127150 containerd[1471]: time="2025-02-13T19:34:37.126860242Z" level=info msg="StopPodSandbox for \"0bb1d45c2c150f5d872fac8c961399f27779cb73a87b22395960ffa2be15d1a8\" returns successfully" Feb 13 19:34:37.127150 containerd[1471]: time="2025-02-13T19:34:37.126911351Z" level=info msg="StopPodSandbox for \"a91f030bc7a23a09f6646474e2c71a65b36f8e3df470d28d464359cc5803641c\"" Feb 13 19:34:37.127150 containerd[1471]: time="2025-02-13T19:34:37.127038618Z" level=info msg="TearDown network for sandbox \"a91f030bc7a23a09f6646474e2c71a65b36f8e3df470d28d464359cc5803641c\" successfully" Feb 13 19:34:37.127150 containerd[1471]: time="2025-02-13T19:34:37.127051001Z" level=info msg="StopPodSandbox for \"a91f030bc7a23a09f6646474e2c71a65b36f8e3df470d28d464359cc5803641c\" returns successfully" Feb 13 19:34:37.128745 containerd[1471]: time="2025-02-13T19:34:37.128371634Z" level=info msg="StopPodSandbox for \"7833ac8fa2136c722d2ca862f3f76532a469e1f4b1ac9dffb34c7d5c310ab08e\"" Feb 13 19:34:37.128745 containerd[1471]: time="2025-02-13T19:34:37.128518548Z" level=info msg="TearDown network for sandbox \"7833ac8fa2136c722d2ca862f3f76532a469e1f4b1ac9dffb34c7d5c310ab08e\" successfully" Feb 13 19:34:37.128745 containerd[1471]: time="2025-02-13T19:34:37.128533537Z" level=info msg="StopPodSandbox for 
\"7833ac8fa2136c722d2ca862f3f76532a469e1f4b1ac9dffb34c7d5c310ab08e\" returns successfully" Feb 13 19:34:37.128745 containerd[1471]: time="2025-02-13T19:34:37.128605968Z" level=info msg="StopPodSandbox for \"a562c6ed21eb8ef4ab42744d889219e92d5054c0611e8d3149cecda822e32115\"" Feb 13 19:34:37.128745 containerd[1471]: time="2025-02-13T19:34:37.128671464Z" level=info msg="TearDown network for sandbox \"a562c6ed21eb8ef4ab42744d889219e92d5054c0611e8d3149cecda822e32115\" successfully" Feb 13 19:34:37.128745 containerd[1471]: time="2025-02-13T19:34:37.128680712Z" level=info msg="StopPodSandbox for \"a562c6ed21eb8ef4ab42744d889219e92d5054c0611e8d3149cecda822e32115\" returns successfully" Feb 13 19:34:37.129124 containerd[1471]: time="2025-02-13T19:34:37.129008475Z" level=info msg="StopPodSandbox for \"54caebe45dc792160151fcf140d8c7051ca636a79446a01f5896bd625433a35c\"" Feb 13 19:34:37.129124 containerd[1471]: time="2025-02-13T19:34:37.129110673Z" level=info msg="StopPodSandbox for \"61b0a4a05bfa4906e5a299438b6dbb931cae8d64bcd9531789f49e1f12f0c1b7\"" Feb 13 19:34:37.129241 containerd[1471]: time="2025-02-13T19:34:37.129191318Z" level=info msg="TearDown network for sandbox \"61b0a4a05bfa4906e5a299438b6dbb931cae8d64bcd9531789f49e1f12f0c1b7\" successfully" Feb 13 19:34:37.129241 containerd[1471]: time="2025-02-13T19:34:37.129202450Z" level=info msg="StopPodSandbox for \"61b0a4a05bfa4906e5a299438b6dbb931cae8d64bcd9531789f49e1f12f0c1b7\" returns successfully" Feb 13 19:34:37.129928 containerd[1471]: time="2025-02-13T19:34:37.129903396Z" level=info msg="TearDown network for sandbox \"54caebe45dc792160151fcf140d8c7051ca636a79446a01f5896bd625433a35c\" successfully" Feb 13 19:34:37.129928 containerd[1471]: time="2025-02-13T19:34:37.129921471Z" level=info msg="StopPodSandbox for \"54caebe45dc792160151fcf140d8c7051ca636a79446a01f5896bd625433a35c\" returns successfully" Feb 13 19:34:37.130090 containerd[1471]: time="2025-02-13T19:34:37.130069035Z" level=info msg="StopPodSandbox for 
\"74317ff9bf14938d11d9e036deef95eddf63bb81bba8dd99f45333872239830c\"" Feb 13 19:34:37.130440 containerd[1471]: time="2025-02-13T19:34:37.130417911Z" level=info msg="StopPodSandbox for \"64fd378561b5462bd9a9a451a83f2f63bc302a1d356185e49501f35817f9f4b7\"" Feb 13 19:34:37.130544 containerd[1471]: time="2025-02-13T19:34:37.130508596Z" level=info msg="TearDown network for sandbox \"64fd378561b5462bd9a9a451a83f2f63bc302a1d356185e49501f35817f9f4b7\" successfully" Feb 13 19:34:37.130544 containerd[1471]: time="2025-02-13T19:34:37.130518354Z" level=info msg="StopPodSandbox for \"64fd378561b5462bd9a9a451a83f2f63bc302a1d356185e49501f35817f9f4b7\" returns successfully" Feb 13 19:34:37.130848 containerd[1471]: time="2025-02-13T19:34:37.130760452Z" level=info msg="StopPodSandbox for \"4247e6051f4e3e2abe906fef95e20e5b3e5882a4b8e203441fadef632bbf9deb\"" Feb 13 19:34:37.130848 containerd[1471]: time="2025-02-13T19:34:37.130846528Z" level=info msg="TearDown network for sandbox \"4247e6051f4e3e2abe906fef95e20e5b3e5882a4b8e203441fadef632bbf9deb\" successfully" Feb 13 19:34:37.131004 containerd[1471]: time="2025-02-13T19:34:37.130856518Z" level=info msg="StopPodSandbox for \"4247e6051f4e3e2abe906fef95e20e5b3e5882a4b8e203441fadef632bbf9deb\" returns successfully" Feb 13 19:34:37.131035 containerd[1471]: time="2025-02-13T19:34:37.131017699Z" level=info msg="TearDown network for sandbox \"74317ff9bf14938d11d9e036deef95eddf63bb81bba8dd99f45333872239830c\" successfully" Feb 13 19:34:37.131035 containerd[1471]: time="2025-02-13T19:34:37.131031516Z" level=info msg="StopPodSandbox for \"74317ff9bf14938d11d9e036deef95eddf63bb81bba8dd99f45333872239830c\" returns successfully" Feb 13 19:34:37.132103 containerd[1471]: time="2025-02-13T19:34:37.132071416Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6f695fb64c-rb8fz,Uid:30182d20-c572-4a40-ab8d-be90016a3c84,Namespace:calico-system,Attempt:6,}" Feb 13 19:34:37.132714 containerd[1471]: 
time="2025-02-13T19:34:37.132130711Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5875c56fd9-b2rnd,Uid:1aaf1ef7-2705-4815-a824-0f60456d76fc,Namespace:calico-apiserver,Attempt:6,}" Feb 13 19:34:37.132767 kubelet[2607]: I0213 19:34:37.132609 2607 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="86a6ca50deb416c653bd1f7b57fc5061ed1acbb2841494199dff84d76e7acebd" Feb 13 19:34:37.133076 containerd[1471]: time="2025-02-13T19:34:37.133053495Z" level=info msg="StopPodSandbox for \"86a6ca50deb416c653bd1f7b57fc5061ed1acbb2841494199dff84d76e7acebd\"" Feb 13 19:34:37.133319 containerd[1471]: time="2025-02-13T19:34:37.133299331Z" level=info msg="Ensure that sandbox 86a6ca50deb416c653bd1f7b57fc5061ed1acbb2841494199dff84d76e7acebd in task-service has been cleanup successfully" Feb 13 19:34:37.133826 containerd[1471]: time="2025-02-13T19:34:37.133798966Z" level=info msg="TearDown network for sandbox \"86a6ca50deb416c653bd1f7b57fc5061ed1acbb2841494199dff84d76e7acebd\" successfully" Feb 13 19:34:37.133903 containerd[1471]: time="2025-02-13T19:34:37.133883289Z" level=info msg="StopPodSandbox for \"86a6ca50deb416c653bd1f7b57fc5061ed1acbb2841494199dff84d76e7acebd\" returns successfully" Feb 13 19:34:37.134385 containerd[1471]: time="2025-02-13T19:34:37.134260889Z" level=info msg="StopPodSandbox for \"501ac9e2d1b70d54a27304edd774f71526678c0f70b0590756ecc2311b11da0b\"" Feb 13 19:34:37.134385 containerd[1471]: time="2025-02-13T19:34:37.134338600Z" level=info msg="TearDown network for sandbox \"501ac9e2d1b70d54a27304edd774f71526678c0f70b0590756ecc2311b11da0b\" successfully" Feb 13 19:34:37.134385 containerd[1471]: time="2025-02-13T19:34:37.134347897Z" level=info msg="StopPodSandbox for \"501ac9e2d1b70d54a27304edd774f71526678c0f70b0590756ecc2311b11da0b\" returns successfully" Feb 13 19:34:37.135078 containerd[1471]: time="2025-02-13T19:34:37.134928270Z" level=info msg="StopPodSandbox for 
\"b0d6fd3ef2acae598dec1902e1434b0974fcd3025a8b93c199149d21301516da\"" Feb 13 19:34:37.135078 containerd[1471]: time="2025-02-13T19:34:37.135024646Z" level=info msg="TearDown network for sandbox \"b0d6fd3ef2acae598dec1902e1434b0974fcd3025a8b93c199149d21301516da\" successfully" Feb 13 19:34:37.135078 containerd[1471]: time="2025-02-13T19:34:37.135034044Z" level=info msg="StopPodSandbox for \"b0d6fd3ef2acae598dec1902e1434b0974fcd3025a8b93c199149d21301516da\" returns successfully" Feb 13 19:34:37.135892 containerd[1471]: time="2025-02-13T19:34:37.135841766Z" level=info msg="StopPodSandbox for \"f0010b009040dc3632feee176ef184fca2c082b092fb675b7d8242d302a9ab90\"" Feb 13 19:34:37.136124 containerd[1471]: time="2025-02-13T19:34:37.136035500Z" level=info msg="TearDown network for sandbox \"f0010b009040dc3632feee176ef184fca2c082b092fb675b7d8242d302a9ab90\" successfully" Feb 13 19:34:37.136124 containerd[1471]: time="2025-02-13T19:34:37.136075888Z" level=info msg="StopPodSandbox for \"f0010b009040dc3632feee176ef184fca2c082b092fb675b7d8242d302a9ab90\" returns successfully" Feb 13 19:34:37.136617 containerd[1471]: time="2025-02-13T19:34:37.136585453Z" level=info msg="StopPodSandbox for \"aa566766537fcbea79ee7dfd711b3aa1da552d84a540cd31d2773ed5d6234a37\"" Feb 13 19:34:37.136795 containerd[1471]: time="2025-02-13T19:34:37.136675207Z" level=info msg="TearDown network for sandbox \"aa566766537fcbea79ee7dfd711b3aa1da552d84a540cd31d2773ed5d6234a37\" successfully" Feb 13 19:34:37.136795 containerd[1471]: time="2025-02-13T19:34:37.136691889Z" level=info msg="StopPodSandbox for \"aa566766537fcbea79ee7dfd711b3aa1da552d84a540cd31d2773ed5d6234a37\" returns successfully" Feb 13 19:34:37.137638 containerd[1471]: time="2025-02-13T19:34:37.137245519Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-qqgd5,Uid:f2dd1bcc-42fa-437c-a3ad-b39a937ac4ed,Namespace:calico-system,Attempt:5,}" Feb 13 19:34:37.141184 kubelet[2607]: I0213 19:34:37.140430 2607 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-97z4f" podStartSLOduration=2.309033282 podStartE2EDuration="22.140409657s" podCreationTimestamp="2025-02-13 19:34:15 +0000 UTC" firstStartedPulling="2025-02-13 19:34:16.226458695 +0000 UTC m=+13.685276596" lastFinishedPulling="2025-02-13 19:34:36.05783507 +0000 UTC m=+33.516652971" observedRunningTime="2025-02-13 19:34:37.139921442 +0000 UTC m=+34.598739353" watchObservedRunningTime="2025-02-13 19:34:37.140409657 +0000 UTC m=+34.599227558" Feb 13 19:34:37.443166 systemd-networkd[1402]: cali4c0a1993edc: Link UP Feb 13 19:34:37.444007 systemd-networkd[1402]: cali4c0a1993edc: Gained carrier Feb 13 19:34:37.547532 containerd[1471]: 2025-02-13 19:34:37.285 [INFO][4778] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Feb 13 19:34:37.547532 containerd[1471]: 2025-02-13 19:34:37.307 [INFO][4778] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--6f695fb64c--rb8fz-eth0 calico-kube-controllers-6f695fb64c- calico-system 30182d20-c572-4a40-ab8d-be90016a3c84 697 0 2025-02-13 19:34:15 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:6f695fb64c projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-6f695fb64c-rb8fz eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali4c0a1993edc [] []}} ContainerID="591a1c843325b0d221755ca794e87867bad3825efa5a4c96aefe9550abef9236" Namespace="calico-system" Pod="calico-kube-controllers-6f695fb64c-rb8fz" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6f695fb64c--rb8fz-" Feb 13 19:34:37.547532 containerd[1471]: 2025-02-13 19:34:37.307 [INFO][4778] cni-plugin/k8s.go 77: Extracted identifiers 
for CmdAddK8s ContainerID="591a1c843325b0d221755ca794e87867bad3825efa5a4c96aefe9550abef9236" Namespace="calico-system" Pod="calico-kube-controllers-6f695fb64c-rb8fz" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6f695fb64c--rb8fz-eth0" Feb 13 19:34:37.547532 containerd[1471]: 2025-02-13 19:34:37.377 [INFO][4846] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="591a1c843325b0d221755ca794e87867bad3825efa5a4c96aefe9550abef9236" HandleID="k8s-pod-network.591a1c843325b0d221755ca794e87867bad3825efa5a4c96aefe9550abef9236" Workload="localhost-k8s-calico--kube--controllers--6f695fb64c--rb8fz-eth0" Feb 13 19:34:37.547532 containerd[1471]: 2025-02-13 19:34:37.394 [INFO][4846] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="591a1c843325b0d221755ca794e87867bad3825efa5a4c96aefe9550abef9236" HandleID="k8s-pod-network.591a1c843325b0d221755ca794e87867bad3825efa5a4c96aefe9550abef9236" Workload="localhost-k8s-calico--kube--controllers--6f695fb64c--rb8fz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0001bc2e0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-6f695fb64c-rb8fz", "timestamp":"2025-02-13 19:34:37.377007017 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 19:34:37.547532 containerd[1471]: 2025-02-13 19:34:37.394 [INFO][4846] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 19:34:37.547532 containerd[1471]: 2025-02-13 19:34:37.394 [INFO][4846] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Feb 13 19:34:37.547532 containerd[1471]: 2025-02-13 19:34:37.394 [INFO][4846] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Feb 13 19:34:37.547532 containerd[1471]: 2025-02-13 19:34:37.400 [INFO][4846] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.591a1c843325b0d221755ca794e87867bad3825efa5a4c96aefe9550abef9236" host="localhost" Feb 13 19:34:37.547532 containerd[1471]: 2025-02-13 19:34:37.406 [INFO][4846] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Feb 13 19:34:37.547532 containerd[1471]: 2025-02-13 19:34:37.412 [INFO][4846] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Feb 13 19:34:37.547532 containerd[1471]: 2025-02-13 19:34:37.414 [INFO][4846] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Feb 13 19:34:37.547532 containerd[1471]: 2025-02-13 19:34:37.416 [INFO][4846] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Feb 13 19:34:37.547532 containerd[1471]: 2025-02-13 19:34:37.416 [INFO][4846] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.591a1c843325b0d221755ca794e87867bad3825efa5a4c96aefe9550abef9236" host="localhost" Feb 13 19:34:37.547532 containerd[1471]: 2025-02-13 19:34:37.417 [INFO][4846] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.591a1c843325b0d221755ca794e87867bad3825efa5a4c96aefe9550abef9236 Feb 13 19:34:37.547532 containerd[1471]: 2025-02-13 19:34:37.421 [INFO][4846] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.591a1c843325b0d221755ca794e87867bad3825efa5a4c96aefe9550abef9236" host="localhost" Feb 13 19:34:37.547532 containerd[1471]: 2025-02-13 19:34:37.425 [INFO][4846] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 
handle="k8s-pod-network.591a1c843325b0d221755ca794e87867bad3825efa5a4c96aefe9550abef9236" host="localhost" Feb 13 19:34:37.547532 containerd[1471]: 2025-02-13 19:34:37.425 [INFO][4846] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.591a1c843325b0d221755ca794e87867bad3825efa5a4c96aefe9550abef9236" host="localhost" Feb 13 19:34:37.547532 containerd[1471]: 2025-02-13 19:34:37.425 [INFO][4846] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 19:34:37.547532 containerd[1471]: 2025-02-13 19:34:37.425 [INFO][4846] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="591a1c843325b0d221755ca794e87867bad3825efa5a4c96aefe9550abef9236" HandleID="k8s-pod-network.591a1c843325b0d221755ca794e87867bad3825efa5a4c96aefe9550abef9236" Workload="localhost-k8s-calico--kube--controllers--6f695fb64c--rb8fz-eth0" Feb 13 19:34:37.548261 containerd[1471]: 2025-02-13 19:34:37.429 [INFO][4778] cni-plugin/k8s.go 386: Populated endpoint ContainerID="591a1c843325b0d221755ca794e87867bad3825efa5a4c96aefe9550abef9236" Namespace="calico-system" Pod="calico-kube-controllers-6f695fb64c-rb8fz" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6f695fb64c--rb8fz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--6f695fb64c--rb8fz-eth0", GenerateName:"calico-kube-controllers-6f695fb64c-", Namespace:"calico-system", SelfLink:"", UID:"30182d20-c572-4a40-ab8d-be90016a3c84", ResourceVersion:"697", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 19, 34, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6f695fb64c", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-6f695fb64c-rb8fz", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali4c0a1993edc", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 19:34:37.548261 containerd[1471]: 2025-02-13 19:34:37.429 [INFO][4778] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.129/32] ContainerID="591a1c843325b0d221755ca794e87867bad3825efa5a4c96aefe9550abef9236" Namespace="calico-system" Pod="calico-kube-controllers-6f695fb64c-rb8fz" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6f695fb64c--rb8fz-eth0" Feb 13 19:34:37.548261 containerd[1471]: 2025-02-13 19:34:37.429 [INFO][4778] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4c0a1993edc ContainerID="591a1c843325b0d221755ca794e87867bad3825efa5a4c96aefe9550abef9236" Namespace="calico-system" Pod="calico-kube-controllers-6f695fb64c-rb8fz" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6f695fb64c--rb8fz-eth0" Feb 13 19:34:37.548261 containerd[1471]: 2025-02-13 19:34:37.445 [INFO][4778] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="591a1c843325b0d221755ca794e87867bad3825efa5a4c96aefe9550abef9236" Namespace="calico-system" Pod="calico-kube-controllers-6f695fb64c-rb8fz" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6f695fb64c--rb8fz-eth0" Feb 13 19:34:37.548261 containerd[1471]: 2025-02-13 19:34:37.445 [INFO][4778] 
cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="591a1c843325b0d221755ca794e87867bad3825efa5a4c96aefe9550abef9236" Namespace="calico-system" Pod="calico-kube-controllers-6f695fb64c-rb8fz" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6f695fb64c--rb8fz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--6f695fb64c--rb8fz-eth0", GenerateName:"calico-kube-controllers-6f695fb64c-", Namespace:"calico-system", SelfLink:"", UID:"30182d20-c572-4a40-ab8d-be90016a3c84", ResourceVersion:"697", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 19, 34, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6f695fb64c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"591a1c843325b0d221755ca794e87867bad3825efa5a4c96aefe9550abef9236", Pod:"calico-kube-controllers-6f695fb64c-rb8fz", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali4c0a1993edc", MAC:"42:f4:1f:4a:5c:1a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 19:34:37.548261 containerd[1471]: 2025-02-13 19:34:37.544 [INFO][4778] cni-plugin/k8s.go 500: Wrote 
updated endpoint to datastore ContainerID="591a1c843325b0d221755ca794e87867bad3825efa5a4c96aefe9550abef9236" Namespace="calico-system" Pod="calico-kube-controllers-6f695fb64c-rb8fz" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6f695fb64c--rb8fz-eth0" Feb 13 19:34:37.596105 systemd[1]: run-netns-cni\x2dedc474d0\x2db35d\x2d4853\x2d7284\x2dfc33b58748cc.mount: Deactivated successfully. Feb 13 19:34:37.596221 systemd[1]: run-netns-cni\x2de5694d25\x2d1b33\x2d6d83\x2d72ab\x2d7babcfba8ff3.mount: Deactivated successfully. Feb 13 19:34:37.671145 containerd[1471]: time="2025-02-13T19:34:37.670803916Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:34:37.671145 containerd[1471]: time="2025-02-13T19:34:37.670894913Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:34:37.671145 containerd[1471]: time="2025-02-13T19:34:37.670911245Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:34:37.671145 containerd[1471]: time="2025-02-13T19:34:37.671047056Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:34:37.707648 systemd[1]: Started cri-containerd-591a1c843325b0d221755ca794e87867bad3825efa5a4c96aefe9550abef9236.scope - libcontainer container 591a1c843325b0d221755ca794e87867bad3825efa5a4c96aefe9550abef9236. 
Feb 13 19:34:37.722082 systemd-resolved[1333]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Feb 13 19:34:37.749420 containerd[1471]: time="2025-02-13T19:34:37.749371324Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6f695fb64c-rb8fz,Uid:30182d20-c572-4a40-ab8d-be90016a3c84,Namespace:calico-system,Attempt:6,} returns sandbox id \"591a1c843325b0d221755ca794e87867bad3825efa5a4c96aefe9550abef9236\"" Feb 13 19:34:37.752368 containerd[1471]: time="2025-02-13T19:34:37.752347357Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\"" Feb 13 19:34:37.818174 systemd-networkd[1402]: cali267d0eef4b0: Link UP Feb 13 19:34:37.819261 systemd-networkd[1402]: cali267d0eef4b0: Gained carrier Feb 13 19:34:37.880449 containerd[1471]: 2025-02-13 19:34:37.204 [INFO][4733] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Feb 13 19:34:37.880449 containerd[1471]: 2025-02-13 19:34:37.260 [INFO][4733] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--668d6bf9bc--jsldt-eth0 coredns-668d6bf9bc- kube-system f4864561-a999-4e48-83d2-08fa358e2d4a 703 0 2025-02-13 19:34:08 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-668d6bf9bc-jsldt eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali267d0eef4b0 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="17fecfb7a3dbaec25c721d2cf1cb6a968075271287fd4dfdd8f619fd7ac38ee0" Namespace="kube-system" Pod="coredns-668d6bf9bc-jsldt" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--jsldt-" Feb 13 19:34:37.880449 containerd[1471]: 2025-02-13 19:34:37.262 [INFO][4733] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s 
ContainerID="17fecfb7a3dbaec25c721d2cf1cb6a968075271287fd4dfdd8f619fd7ac38ee0" Namespace="kube-system" Pod="coredns-668d6bf9bc-jsldt" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--jsldt-eth0" Feb 13 19:34:37.880449 containerd[1471]: 2025-02-13 19:34:37.376 [INFO][4823] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="17fecfb7a3dbaec25c721d2cf1cb6a968075271287fd4dfdd8f619fd7ac38ee0" HandleID="k8s-pod-network.17fecfb7a3dbaec25c721d2cf1cb6a968075271287fd4dfdd8f619fd7ac38ee0" Workload="localhost-k8s-coredns--668d6bf9bc--jsldt-eth0" Feb 13 19:34:37.880449 containerd[1471]: 2025-02-13 19:34:37.394 [INFO][4823] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="17fecfb7a3dbaec25c721d2cf1cb6a968075271287fd4dfdd8f619fd7ac38ee0" HandleID="k8s-pod-network.17fecfb7a3dbaec25c721d2cf1cb6a968075271287fd4dfdd8f619fd7ac38ee0" Workload="localhost-k8s-coredns--668d6bf9bc--jsldt-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002dd130), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-668d6bf9bc-jsldt", "timestamp":"2025-02-13 19:34:37.376198564 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 19:34:37.880449 containerd[1471]: 2025-02-13 19:34:37.394 [INFO][4823] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 19:34:37.880449 containerd[1471]: 2025-02-13 19:34:37.425 [INFO][4823] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Feb 13 19:34:37.880449 containerd[1471]: 2025-02-13 19:34:37.426 [INFO][4823] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Feb 13 19:34:37.880449 containerd[1471]: 2025-02-13 19:34:37.543 [INFO][4823] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.17fecfb7a3dbaec25c721d2cf1cb6a968075271287fd4dfdd8f619fd7ac38ee0" host="localhost" Feb 13 19:34:37.880449 containerd[1471]: 2025-02-13 19:34:37.550 [INFO][4823] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Feb 13 19:34:37.880449 containerd[1471]: 2025-02-13 19:34:37.582 [INFO][4823] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Feb 13 19:34:37.880449 containerd[1471]: 2025-02-13 19:34:37.584 [INFO][4823] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Feb 13 19:34:37.880449 containerd[1471]: 2025-02-13 19:34:37.586 [INFO][4823] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Feb 13 19:34:37.880449 containerd[1471]: 2025-02-13 19:34:37.586 [INFO][4823] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.17fecfb7a3dbaec25c721d2cf1cb6a968075271287fd4dfdd8f619fd7ac38ee0" host="localhost" Feb 13 19:34:37.880449 containerd[1471]: 2025-02-13 19:34:37.602 [INFO][4823] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.17fecfb7a3dbaec25c721d2cf1cb6a968075271287fd4dfdd8f619fd7ac38ee0 Feb 13 19:34:37.880449 containerd[1471]: 2025-02-13 19:34:37.782 [INFO][4823] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.17fecfb7a3dbaec25c721d2cf1cb6a968075271287fd4dfdd8f619fd7ac38ee0" host="localhost" Feb 13 19:34:37.880449 containerd[1471]: 2025-02-13 19:34:37.811 [INFO][4823] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 
handle="k8s-pod-network.17fecfb7a3dbaec25c721d2cf1cb6a968075271287fd4dfdd8f619fd7ac38ee0" host="localhost" Feb 13 19:34:37.880449 containerd[1471]: 2025-02-13 19:34:37.811 [INFO][4823] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.17fecfb7a3dbaec25c721d2cf1cb6a968075271287fd4dfdd8f619fd7ac38ee0" host="localhost" Feb 13 19:34:37.880449 containerd[1471]: 2025-02-13 19:34:37.811 [INFO][4823] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 19:34:37.880449 containerd[1471]: 2025-02-13 19:34:37.811 [INFO][4823] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="17fecfb7a3dbaec25c721d2cf1cb6a968075271287fd4dfdd8f619fd7ac38ee0" HandleID="k8s-pod-network.17fecfb7a3dbaec25c721d2cf1cb6a968075271287fd4dfdd8f619fd7ac38ee0" Workload="localhost-k8s-coredns--668d6bf9bc--jsldt-eth0" Feb 13 19:34:37.881251 containerd[1471]: 2025-02-13 19:34:37.815 [INFO][4733] cni-plugin/k8s.go 386: Populated endpoint ContainerID="17fecfb7a3dbaec25c721d2cf1cb6a968075271287fd4dfdd8f619fd7ac38ee0" Namespace="kube-system" Pod="coredns-668d6bf9bc-jsldt" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--jsldt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--jsldt-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"f4864561-a999-4e48-83d2-08fa358e2d4a", ResourceVersion:"703", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 19, 34, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-668d6bf9bc-jsldt", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali267d0eef4b0", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 19:34:37.881251 containerd[1471]: 2025-02-13 19:34:37.815 [INFO][4733] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.130/32] ContainerID="17fecfb7a3dbaec25c721d2cf1cb6a968075271287fd4dfdd8f619fd7ac38ee0" Namespace="kube-system" Pod="coredns-668d6bf9bc-jsldt" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--jsldt-eth0" Feb 13 19:34:37.881251 containerd[1471]: 2025-02-13 19:34:37.815 [INFO][4733] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali267d0eef4b0 ContainerID="17fecfb7a3dbaec25c721d2cf1cb6a968075271287fd4dfdd8f619fd7ac38ee0" Namespace="kube-system" Pod="coredns-668d6bf9bc-jsldt" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--jsldt-eth0" Feb 13 19:34:37.881251 containerd[1471]: 2025-02-13 19:34:37.818 [INFO][4733] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="17fecfb7a3dbaec25c721d2cf1cb6a968075271287fd4dfdd8f619fd7ac38ee0" Namespace="kube-system" Pod="coredns-668d6bf9bc-jsldt" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--jsldt-eth0" Feb 13 
19:34:37.881251 containerd[1471]: 2025-02-13 19:34:37.819 [INFO][4733] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="17fecfb7a3dbaec25c721d2cf1cb6a968075271287fd4dfdd8f619fd7ac38ee0" Namespace="kube-system" Pod="coredns-668d6bf9bc-jsldt" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--jsldt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--jsldt-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"f4864561-a999-4e48-83d2-08fa358e2d4a", ResourceVersion:"703", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 19, 34, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"17fecfb7a3dbaec25c721d2cf1cb6a968075271287fd4dfdd8f619fd7ac38ee0", Pod:"coredns-668d6bf9bc-jsldt", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali267d0eef4b0", MAC:"92:5a:98:18:72:bb", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, 
v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 19:34:37.881251 containerd[1471]: 2025-02-13 19:34:37.876 [INFO][4733] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="17fecfb7a3dbaec25c721d2cf1cb6a968075271287fd4dfdd8f619fd7ac38ee0" Namespace="kube-system" Pod="coredns-668d6bf9bc-jsldt" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--jsldt-eth0" Feb 13 19:34:37.910304 containerd[1471]: time="2025-02-13T19:34:37.909943062Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:34:37.912989 containerd[1471]: time="2025-02-13T19:34:37.912253649Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:34:37.912989 containerd[1471]: time="2025-02-13T19:34:37.912287704Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:34:37.913159 containerd[1471]: time="2025-02-13T19:34:37.913070377Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:34:37.917667 systemd-networkd[1402]: calie8714484c67: Link UP Feb 13 19:34:37.919081 systemd-networkd[1402]: calie8714484c67: Gained carrier Feb 13 19:34:37.939374 containerd[1471]: 2025-02-13 19:34:37.230 [INFO][4749] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Feb 13 19:34:37.939374 containerd[1471]: 2025-02-13 19:34:37.259 [INFO][4749] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--5875c56fd9--cljgm-eth0 calico-apiserver-5875c56fd9- calico-apiserver 54b6b75e-6c3e-4c6f-9680-59e42f2a9685 699 0 2025-02-13 19:34:15 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5875c56fd9 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-5875c56fd9-cljgm eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calie8714484c67 [] []}} ContainerID="293373afd439aab1fc27b6e509f5be603052a4301f51ca29b3d2089195dfb425" Namespace="calico-apiserver" Pod="calico-apiserver-5875c56fd9-cljgm" WorkloadEndpoint="localhost-k8s-calico--apiserver--5875c56fd9--cljgm-" Feb 13 19:34:37.939374 containerd[1471]: 2025-02-13 19:34:37.259 [INFO][4749] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="293373afd439aab1fc27b6e509f5be603052a4301f51ca29b3d2089195dfb425" Namespace="calico-apiserver" Pod="calico-apiserver-5875c56fd9-cljgm" WorkloadEndpoint="localhost-k8s-calico--apiserver--5875c56fd9--cljgm-eth0" Feb 13 19:34:37.939374 containerd[1471]: 2025-02-13 19:34:37.376 [INFO][4820] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="293373afd439aab1fc27b6e509f5be603052a4301f51ca29b3d2089195dfb425" 
HandleID="k8s-pod-network.293373afd439aab1fc27b6e509f5be603052a4301f51ca29b3d2089195dfb425" Workload="localhost-k8s-calico--apiserver--5875c56fd9--cljgm-eth0" Feb 13 19:34:37.939374 containerd[1471]: 2025-02-13 19:34:37.394 [INFO][4820] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="293373afd439aab1fc27b6e509f5be603052a4301f51ca29b3d2089195dfb425" HandleID="k8s-pod-network.293373afd439aab1fc27b6e509f5be603052a4301f51ca29b3d2089195dfb425" Workload="localhost-k8s-calico--apiserver--5875c56fd9--cljgm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002f4d40), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-5875c56fd9-cljgm", "timestamp":"2025-02-13 19:34:37.376368483 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 19:34:37.939374 containerd[1471]: 2025-02-13 19:34:37.394 [INFO][4820] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 19:34:37.939374 containerd[1471]: 2025-02-13 19:34:37.811 [INFO][4820] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Feb 13 19:34:37.939374 containerd[1471]: 2025-02-13 19:34:37.811 [INFO][4820] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Feb 13 19:34:37.939374 containerd[1471]: 2025-02-13 19:34:37.814 [INFO][4820] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.293373afd439aab1fc27b6e509f5be603052a4301f51ca29b3d2089195dfb425" host="localhost" Feb 13 19:34:37.939374 containerd[1471]: 2025-02-13 19:34:37.879 [INFO][4820] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Feb 13 19:34:37.939374 containerd[1471]: 2025-02-13 19:34:37.888 [INFO][4820] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Feb 13 19:34:37.939374 containerd[1471]: 2025-02-13 19:34:37.890 [INFO][4820] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Feb 13 19:34:37.939374 containerd[1471]: 2025-02-13 19:34:37.893 [INFO][4820] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Feb 13 19:34:37.939374 containerd[1471]: 2025-02-13 19:34:37.893 [INFO][4820] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.293373afd439aab1fc27b6e509f5be603052a4301f51ca29b3d2089195dfb425" host="localhost" Feb 13 19:34:37.939374 containerd[1471]: 2025-02-13 19:34:37.895 [INFO][4820] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.293373afd439aab1fc27b6e509f5be603052a4301f51ca29b3d2089195dfb425 Feb 13 19:34:37.939374 containerd[1471]: 2025-02-13 19:34:37.901 [INFO][4820] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.293373afd439aab1fc27b6e509f5be603052a4301f51ca29b3d2089195dfb425" host="localhost" Feb 13 19:34:37.939374 containerd[1471]: 2025-02-13 19:34:37.909 [INFO][4820] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 
handle="k8s-pod-network.293373afd439aab1fc27b6e509f5be603052a4301f51ca29b3d2089195dfb425" host="localhost" Feb 13 19:34:37.939374 containerd[1471]: 2025-02-13 19:34:37.909 [INFO][4820] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.293373afd439aab1fc27b6e509f5be603052a4301f51ca29b3d2089195dfb425" host="localhost" Feb 13 19:34:37.939374 containerd[1471]: 2025-02-13 19:34:37.909 [INFO][4820] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 19:34:37.939374 containerd[1471]: 2025-02-13 19:34:37.909 [INFO][4820] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="293373afd439aab1fc27b6e509f5be603052a4301f51ca29b3d2089195dfb425" HandleID="k8s-pod-network.293373afd439aab1fc27b6e509f5be603052a4301f51ca29b3d2089195dfb425" Workload="localhost-k8s-calico--apiserver--5875c56fd9--cljgm-eth0" Feb 13 19:34:37.939946 containerd[1471]: 2025-02-13 19:34:37.914 [INFO][4749] cni-plugin/k8s.go 386: Populated endpoint ContainerID="293373afd439aab1fc27b6e509f5be603052a4301f51ca29b3d2089195dfb425" Namespace="calico-apiserver" Pod="calico-apiserver-5875c56fd9-cljgm" WorkloadEndpoint="localhost-k8s-calico--apiserver--5875c56fd9--cljgm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5875c56fd9--cljgm-eth0", GenerateName:"calico-apiserver-5875c56fd9-", Namespace:"calico-apiserver", SelfLink:"", UID:"54b6b75e-6c3e-4c6f-9680-59e42f2a9685", ResourceVersion:"699", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 19, 34, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5875c56fd9", "projectcalico.org/namespace":"calico-apiserver", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-5875c56fd9-cljgm", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie8714484c67", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 19:34:37.939946 containerd[1471]: 2025-02-13 19:34:37.914 [INFO][4749] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.131/32] ContainerID="293373afd439aab1fc27b6e509f5be603052a4301f51ca29b3d2089195dfb425" Namespace="calico-apiserver" Pod="calico-apiserver-5875c56fd9-cljgm" WorkloadEndpoint="localhost-k8s-calico--apiserver--5875c56fd9--cljgm-eth0" Feb 13 19:34:37.939946 containerd[1471]: 2025-02-13 19:34:37.914 [INFO][4749] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie8714484c67 ContainerID="293373afd439aab1fc27b6e509f5be603052a4301f51ca29b3d2089195dfb425" Namespace="calico-apiserver" Pod="calico-apiserver-5875c56fd9-cljgm" WorkloadEndpoint="localhost-k8s-calico--apiserver--5875c56fd9--cljgm-eth0" Feb 13 19:34:37.939946 containerd[1471]: 2025-02-13 19:34:37.918 [INFO][4749] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="293373afd439aab1fc27b6e509f5be603052a4301f51ca29b3d2089195dfb425" Namespace="calico-apiserver" Pod="calico-apiserver-5875c56fd9-cljgm" WorkloadEndpoint="localhost-k8s-calico--apiserver--5875c56fd9--cljgm-eth0" Feb 13 19:34:37.939946 containerd[1471]: 2025-02-13 19:34:37.919 [INFO][4749] cni-plugin/k8s.go 414: Added Mac, interface name, and active 
container ID to endpoint ContainerID="293373afd439aab1fc27b6e509f5be603052a4301f51ca29b3d2089195dfb425" Namespace="calico-apiserver" Pod="calico-apiserver-5875c56fd9-cljgm" WorkloadEndpoint="localhost-k8s-calico--apiserver--5875c56fd9--cljgm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5875c56fd9--cljgm-eth0", GenerateName:"calico-apiserver-5875c56fd9-", Namespace:"calico-apiserver", SelfLink:"", UID:"54b6b75e-6c3e-4c6f-9680-59e42f2a9685", ResourceVersion:"699", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 19, 34, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5875c56fd9", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"293373afd439aab1fc27b6e509f5be603052a4301f51ca29b3d2089195dfb425", Pod:"calico-apiserver-5875c56fd9-cljgm", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie8714484c67", MAC:"6a:5c:38:e7:ba:23", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 19:34:37.939946 containerd[1471]: 2025-02-13 19:34:37.930 [INFO][4749] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore 
ContainerID="293373afd439aab1fc27b6e509f5be603052a4301f51ca29b3d2089195dfb425" Namespace="calico-apiserver" Pod="calico-apiserver-5875c56fd9-cljgm" WorkloadEndpoint="localhost-k8s-calico--apiserver--5875c56fd9--cljgm-eth0" Feb 13 19:34:37.954445 systemd[1]: Started cri-containerd-17fecfb7a3dbaec25c721d2cf1cb6a968075271287fd4dfdd8f619fd7ac38ee0.scope - libcontainer container 17fecfb7a3dbaec25c721d2cf1cb6a968075271287fd4dfdd8f619fd7ac38ee0. Feb 13 19:34:37.983122 systemd-resolved[1333]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Feb 13 19:34:37.985340 containerd[1471]: time="2025-02-13T19:34:37.984866975Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:34:37.985340 containerd[1471]: time="2025-02-13T19:34:37.984978411Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:34:37.985340 containerd[1471]: time="2025-02-13T19:34:37.984998119Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:34:37.985340 containerd[1471]: time="2025-02-13T19:34:37.985113752Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:34:38.029220 systemd[1]: Started cri-containerd-293373afd439aab1fc27b6e509f5be603052a4301f51ca29b3d2089195dfb425.scope - libcontainer container 293373afd439aab1fc27b6e509f5be603052a4301f51ca29b3d2089195dfb425. 
Feb 13 19:34:38.058031 systemd-resolved[1333]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Feb 13 19:34:38.084788 containerd[1471]: time="2025-02-13T19:34:38.084508963Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-jsldt,Uid:f4864561-a999-4e48-83d2-08fa358e2d4a,Namespace:kube-system,Attempt:6,} returns sandbox id \"17fecfb7a3dbaec25c721d2cf1cb6a968075271287fd4dfdd8f619fd7ac38ee0\"" Feb 13 19:34:38.086134 systemd-networkd[1402]: cali0bd5a291cd3: Link UP Feb 13 19:34:38.088067 systemd-networkd[1402]: cali0bd5a291cd3: Gained carrier Feb 13 19:34:38.089550 kubelet[2607]: E0213 19:34:38.089165 2607 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:34:38.100528 containerd[1471]: time="2025-02-13T19:34:38.100352539Z" level=info msg="CreateContainer within sandbox \"17fecfb7a3dbaec25c721d2cf1cb6a968075271287fd4dfdd8f619fd7ac38ee0\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 13 19:34:38.148422 containerd[1471]: 2025-02-13 19:34:37.294 [INFO][4766] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Feb 13 19:34:38.148422 containerd[1471]: 2025-02-13 19:34:37.308 [INFO][4766] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--668d6bf9bc--szjm2-eth0 coredns-668d6bf9bc- kube-system 0d54ca70-3d73-40a5-9a0e-85776bf4fb5e 698 0 2025-02-13 19:34:08 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-668d6bf9bc-szjm2 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali0bd5a291cd3 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} 
ContainerID="17a3ac5c29a08804d34f60fe3e2315d825903305426c957301d2d50fd3228c74" Namespace="kube-system" Pod="coredns-668d6bf9bc-szjm2" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--szjm2-" Feb 13 19:34:38.148422 containerd[1471]: 2025-02-13 19:34:37.308 [INFO][4766] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="17a3ac5c29a08804d34f60fe3e2315d825903305426c957301d2d50fd3228c74" Namespace="kube-system" Pod="coredns-668d6bf9bc-szjm2" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--szjm2-eth0" Feb 13 19:34:38.148422 containerd[1471]: 2025-02-13 19:34:37.376 [INFO][4845] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="17a3ac5c29a08804d34f60fe3e2315d825903305426c957301d2d50fd3228c74" HandleID="k8s-pod-network.17a3ac5c29a08804d34f60fe3e2315d825903305426c957301d2d50fd3228c74" Workload="localhost-k8s-coredns--668d6bf9bc--szjm2-eth0" Feb 13 19:34:38.148422 containerd[1471]: 2025-02-13 19:34:37.395 [INFO][4845] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="17a3ac5c29a08804d34f60fe3e2315d825903305426c957301d2d50fd3228c74" HandleID="k8s-pod-network.17a3ac5c29a08804d34f60fe3e2315d825903305426c957301d2d50fd3228c74" Workload="localhost-k8s-coredns--668d6bf9bc--szjm2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00033ca10), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-668d6bf9bc-szjm2", "timestamp":"2025-02-13 19:34:37.376510838 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 19:34:38.148422 containerd[1471]: 2025-02-13 19:34:37.395 [INFO][4845] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 19:34:38.148422 containerd[1471]: 2025-02-13 19:34:37.909 [INFO][4845] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Feb 13 19:34:38.148422 containerd[1471]: 2025-02-13 19:34:37.909 [INFO][4845] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Feb 13 19:34:38.148422 containerd[1471]: 2025-02-13 19:34:37.916 [INFO][4845] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.17a3ac5c29a08804d34f60fe3e2315d825903305426c957301d2d50fd3228c74" host="localhost" Feb 13 19:34:38.148422 containerd[1471]: 2025-02-13 19:34:37.981 [INFO][4845] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Feb 13 19:34:38.148422 containerd[1471]: 2025-02-13 19:34:37.996 [INFO][4845] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Feb 13 19:34:38.148422 containerd[1471]: 2025-02-13 19:34:37.998 [INFO][4845] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Feb 13 19:34:38.148422 containerd[1471]: 2025-02-13 19:34:38.002 [INFO][4845] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Feb 13 19:34:38.148422 containerd[1471]: 2025-02-13 19:34:38.002 [INFO][4845] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.17a3ac5c29a08804d34f60fe3e2315d825903305426c957301d2d50fd3228c74" host="localhost" Feb 13 19:34:38.148422 containerd[1471]: 2025-02-13 19:34:38.003 [INFO][4845] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.17a3ac5c29a08804d34f60fe3e2315d825903305426c957301d2d50fd3228c74 Feb 13 19:34:38.148422 containerd[1471]: 2025-02-13 19:34:38.053 [INFO][4845] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.17a3ac5c29a08804d34f60fe3e2315d825903305426c957301d2d50fd3228c74" host="localhost" Feb 13 19:34:38.148422 containerd[1471]: 2025-02-13 19:34:38.070 [INFO][4845] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 
handle="k8s-pod-network.17a3ac5c29a08804d34f60fe3e2315d825903305426c957301d2d50fd3228c74" host="localhost" Feb 13 19:34:38.148422 containerd[1471]: 2025-02-13 19:34:38.071 [INFO][4845] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.17a3ac5c29a08804d34f60fe3e2315d825903305426c957301d2d50fd3228c74" host="localhost" Feb 13 19:34:38.148422 containerd[1471]: 2025-02-13 19:34:38.071 [INFO][4845] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 19:34:38.148422 containerd[1471]: 2025-02-13 19:34:38.071 [INFO][4845] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="17a3ac5c29a08804d34f60fe3e2315d825903305426c957301d2d50fd3228c74" HandleID="k8s-pod-network.17a3ac5c29a08804d34f60fe3e2315d825903305426c957301d2d50fd3228c74" Workload="localhost-k8s-coredns--668d6bf9bc--szjm2-eth0" Feb 13 19:34:38.149119 containerd[1471]: 2025-02-13 19:34:38.076 [INFO][4766] cni-plugin/k8s.go 386: Populated endpoint ContainerID="17a3ac5c29a08804d34f60fe3e2315d825903305426c957301d2d50fd3228c74" Namespace="kube-system" Pod="coredns-668d6bf9bc-szjm2" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--szjm2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--szjm2-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"0d54ca70-3d73-40a5-9a0e-85776bf4fb5e", ResourceVersion:"698", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 19, 34, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-668d6bf9bc-szjm2", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali0bd5a291cd3", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 19:34:38.149119 containerd[1471]: 2025-02-13 19:34:38.076 [INFO][4766] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.132/32] ContainerID="17a3ac5c29a08804d34f60fe3e2315d825903305426c957301d2d50fd3228c74" Namespace="kube-system" Pod="coredns-668d6bf9bc-szjm2" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--szjm2-eth0" Feb 13 19:34:38.149119 containerd[1471]: 2025-02-13 19:34:38.077 [INFO][4766] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0bd5a291cd3 ContainerID="17a3ac5c29a08804d34f60fe3e2315d825903305426c957301d2d50fd3228c74" Namespace="kube-system" Pod="coredns-668d6bf9bc-szjm2" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--szjm2-eth0" Feb 13 19:34:38.149119 containerd[1471]: 2025-02-13 19:34:38.087 [INFO][4766] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="17a3ac5c29a08804d34f60fe3e2315d825903305426c957301d2d50fd3228c74" Namespace="kube-system" Pod="coredns-668d6bf9bc-szjm2" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--szjm2-eth0" Feb 13 
19:34:38.149119 containerd[1471]: 2025-02-13 19:34:38.089 [INFO][4766] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="17a3ac5c29a08804d34f60fe3e2315d825903305426c957301d2d50fd3228c74" Namespace="kube-system" Pod="coredns-668d6bf9bc-szjm2" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--szjm2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--szjm2-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"0d54ca70-3d73-40a5-9a0e-85776bf4fb5e", ResourceVersion:"698", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 19, 34, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"17a3ac5c29a08804d34f60fe3e2315d825903305426c957301d2d50fd3228c74", Pod:"coredns-668d6bf9bc-szjm2", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali0bd5a291cd3", MAC:"ae:89:ec:17:fa:1b", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, 
v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 19:34:38.149119 containerd[1471]: 2025-02-13 19:34:38.130 [INFO][4766] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="17a3ac5c29a08804d34f60fe3e2315d825903305426c957301d2d50fd3228c74" Namespace="kube-system" Pod="coredns-668d6bf9bc-szjm2" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--szjm2-eth0" Feb 13 19:34:38.161040 containerd[1471]: time="2025-02-13T19:34:38.158035460Z" level=info msg="CreateContainer within sandbox \"17fecfb7a3dbaec25c721d2cf1cb6a968075271287fd4dfdd8f619fd7ac38ee0\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"25825d82604538630352e404d71167314187a92e425b60a48d9a11f5f1812b71\"" Feb 13 19:34:38.165394 containerd[1471]: time="2025-02-13T19:34:38.164357542Z" level=info msg="StartContainer for \"25825d82604538630352e404d71167314187a92e425b60a48d9a11f5f1812b71\"" Feb 13 19:34:38.176259 kubelet[2607]: E0213 19:34:38.174545 2607 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:34:38.194093 containerd[1471]: time="2025-02-13T19:34:38.193758128Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5875c56fd9-cljgm,Uid:54b6b75e-6c3e-4c6f-9680-59e42f2a9685,Namespace:calico-apiserver,Attempt:6,} returns sandbox id \"293373afd439aab1fc27b6e509f5be603052a4301f51ca29b3d2089195dfb425\"" Feb 13 19:34:38.218533 systemd[1]: Started cri-containerd-25825d82604538630352e404d71167314187a92e425b60a48d9a11f5f1812b71.scope - libcontainer container 25825d82604538630352e404d71167314187a92e425b60a48d9a11f5f1812b71. 
Feb 13 19:34:38.226470 systemd-networkd[1402]: cali242bdde1f60: Link UP Feb 13 19:34:38.228149 systemd-networkd[1402]: cali242bdde1f60: Gained carrier Feb 13 19:34:38.266209 containerd[1471]: time="2025-02-13T19:34:38.264821997Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:34:38.266781 containerd[1471]: 2025-02-13 19:34:37.278 [INFO][4783] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Feb 13 19:34:38.266781 containerd[1471]: 2025-02-13 19:34:37.295 [INFO][4783] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--5875c56fd9--b2rnd-eth0 calico-apiserver-5875c56fd9- calico-apiserver 1aaf1ef7-2705-4815-a824-0f60456d76fc 695 0 2025-02-13 19:34:15 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5875c56fd9 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-5875c56fd9-b2rnd eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali242bdde1f60 [] []}} ContainerID="d08065f1170006bf0a56ab0e2b9e944a3f83f94e403e6dbaf749b11f5d3afcba" Namespace="calico-apiserver" Pod="calico-apiserver-5875c56fd9-b2rnd" WorkloadEndpoint="localhost-k8s-calico--apiserver--5875c56fd9--b2rnd-" Feb 13 19:34:38.266781 containerd[1471]: 2025-02-13 19:34:37.295 [INFO][4783] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="d08065f1170006bf0a56ab0e2b9e944a3f83f94e403e6dbaf749b11f5d3afcba" Namespace="calico-apiserver" Pod="calico-apiserver-5875c56fd9-b2rnd" WorkloadEndpoint="localhost-k8s-calico--apiserver--5875c56fd9--b2rnd-eth0" Feb 13 19:34:38.266781 containerd[1471]: 2025-02-13 19:34:37.377 [INFO][4834] ipam/ipam_plugin.go 225: Calico CNI IPAM request count 
IPv4=1 IPv6=0 ContainerID="d08065f1170006bf0a56ab0e2b9e944a3f83f94e403e6dbaf749b11f5d3afcba" HandleID="k8s-pod-network.d08065f1170006bf0a56ab0e2b9e944a3f83f94e403e6dbaf749b11f5d3afcba" Workload="localhost-k8s-calico--apiserver--5875c56fd9--b2rnd-eth0" Feb 13 19:34:38.266781 containerd[1471]: 2025-02-13 19:34:37.395 [INFO][4834] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="d08065f1170006bf0a56ab0e2b9e944a3f83f94e403e6dbaf749b11f5d3afcba" HandleID="k8s-pod-network.d08065f1170006bf0a56ab0e2b9e944a3f83f94e403e6dbaf749b11f5d3afcba" Workload="localhost-k8s-calico--apiserver--5875c56fd9--b2rnd-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000051c60), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-5875c56fd9-b2rnd", "timestamp":"2025-02-13 19:34:37.377066572 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 19:34:38.266781 containerd[1471]: 2025-02-13 19:34:37.395 [INFO][4834] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 19:34:38.266781 containerd[1471]: 2025-02-13 19:34:38.072 [INFO][4834] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Feb 13 19:34:38.266781 containerd[1471]: 2025-02-13 19:34:38.072 [INFO][4834] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Feb 13 19:34:38.266781 containerd[1471]: 2025-02-13 19:34:38.090 [INFO][4834] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.d08065f1170006bf0a56ab0e2b9e944a3f83f94e403e6dbaf749b11f5d3afcba" host="localhost" Feb 13 19:34:38.266781 containerd[1471]: 2025-02-13 19:34:38.123 [INFO][4834] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Feb 13 19:34:38.266781 containerd[1471]: 2025-02-13 19:34:38.157 [INFO][4834] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Feb 13 19:34:38.266781 containerd[1471]: 2025-02-13 19:34:38.162 [INFO][4834] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Feb 13 19:34:38.266781 containerd[1471]: 2025-02-13 19:34:38.170 [INFO][4834] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Feb 13 19:34:38.266781 containerd[1471]: 2025-02-13 19:34:38.170 [INFO][4834] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.d08065f1170006bf0a56ab0e2b9e944a3f83f94e403e6dbaf749b11f5d3afcba" host="localhost" Feb 13 19:34:38.266781 containerd[1471]: 2025-02-13 19:34:38.177 [INFO][4834] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.d08065f1170006bf0a56ab0e2b9e944a3f83f94e403e6dbaf749b11f5d3afcba Feb 13 19:34:38.266781 containerd[1471]: 2025-02-13 19:34:38.191 [INFO][4834] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.d08065f1170006bf0a56ab0e2b9e944a3f83f94e403e6dbaf749b11f5d3afcba" host="localhost" Feb 13 19:34:38.266781 containerd[1471]: 2025-02-13 19:34:38.203 [INFO][4834] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 
handle="k8s-pod-network.d08065f1170006bf0a56ab0e2b9e944a3f83f94e403e6dbaf749b11f5d3afcba" host="localhost" Feb 13 19:34:38.266781 containerd[1471]: 2025-02-13 19:34:38.203 [INFO][4834] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.d08065f1170006bf0a56ab0e2b9e944a3f83f94e403e6dbaf749b11f5d3afcba" host="localhost" Feb 13 19:34:38.266781 containerd[1471]: 2025-02-13 19:34:38.204 [INFO][4834] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 19:34:38.266781 containerd[1471]: 2025-02-13 19:34:38.204 [INFO][4834] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="d08065f1170006bf0a56ab0e2b9e944a3f83f94e403e6dbaf749b11f5d3afcba" HandleID="k8s-pod-network.d08065f1170006bf0a56ab0e2b9e944a3f83f94e403e6dbaf749b11f5d3afcba" Workload="localhost-k8s-calico--apiserver--5875c56fd9--b2rnd-eth0" Feb 13 19:34:38.267931 containerd[1471]: 2025-02-13 19:34:38.216 [INFO][4783] cni-plugin/k8s.go 386: Populated endpoint ContainerID="d08065f1170006bf0a56ab0e2b9e944a3f83f94e403e6dbaf749b11f5d3afcba" Namespace="calico-apiserver" Pod="calico-apiserver-5875c56fd9-b2rnd" WorkloadEndpoint="localhost-k8s-calico--apiserver--5875c56fd9--b2rnd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5875c56fd9--b2rnd-eth0", GenerateName:"calico-apiserver-5875c56fd9-", Namespace:"calico-apiserver", SelfLink:"", UID:"1aaf1ef7-2705-4815-a824-0f60456d76fc", ResourceVersion:"695", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 19, 34, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5875c56fd9", "projectcalico.org/namespace":"calico-apiserver", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-5875c56fd9-b2rnd", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali242bdde1f60", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 19:34:38.267931 containerd[1471]: 2025-02-13 19:34:38.216 [INFO][4783] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.133/32] ContainerID="d08065f1170006bf0a56ab0e2b9e944a3f83f94e403e6dbaf749b11f5d3afcba" Namespace="calico-apiserver" Pod="calico-apiserver-5875c56fd9-b2rnd" WorkloadEndpoint="localhost-k8s-calico--apiserver--5875c56fd9--b2rnd-eth0" Feb 13 19:34:38.267931 containerd[1471]: 2025-02-13 19:34:38.216 [INFO][4783] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali242bdde1f60 ContainerID="d08065f1170006bf0a56ab0e2b9e944a3f83f94e403e6dbaf749b11f5d3afcba" Namespace="calico-apiserver" Pod="calico-apiserver-5875c56fd9-b2rnd" WorkloadEndpoint="localhost-k8s-calico--apiserver--5875c56fd9--b2rnd-eth0" Feb 13 19:34:38.267931 containerd[1471]: 2025-02-13 19:34:38.229 [INFO][4783] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d08065f1170006bf0a56ab0e2b9e944a3f83f94e403e6dbaf749b11f5d3afcba" Namespace="calico-apiserver" Pod="calico-apiserver-5875c56fd9-b2rnd" WorkloadEndpoint="localhost-k8s-calico--apiserver--5875c56fd9--b2rnd-eth0" Feb 13 19:34:38.267931 containerd[1471]: 2025-02-13 19:34:38.229 [INFO][4783] cni-plugin/k8s.go 414: Added Mac, interface name, and active 
container ID to endpoint ContainerID="d08065f1170006bf0a56ab0e2b9e944a3f83f94e403e6dbaf749b11f5d3afcba" Namespace="calico-apiserver" Pod="calico-apiserver-5875c56fd9-b2rnd" WorkloadEndpoint="localhost-k8s-calico--apiserver--5875c56fd9--b2rnd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5875c56fd9--b2rnd-eth0", GenerateName:"calico-apiserver-5875c56fd9-", Namespace:"calico-apiserver", SelfLink:"", UID:"1aaf1ef7-2705-4815-a824-0f60456d76fc", ResourceVersion:"695", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 19, 34, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5875c56fd9", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d08065f1170006bf0a56ab0e2b9e944a3f83f94e403e6dbaf749b11f5d3afcba", Pod:"calico-apiserver-5875c56fd9-b2rnd", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali242bdde1f60", MAC:"ce:df:e8:dd:8d:86", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 19:34:38.267931 containerd[1471]: 2025-02-13 19:34:38.257 [INFO][4783] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore 
ContainerID="d08065f1170006bf0a56ab0e2b9e944a3f83f94e403e6dbaf749b11f5d3afcba" Namespace="calico-apiserver" Pod="calico-apiserver-5875c56fd9-b2rnd" WorkloadEndpoint="localhost-k8s-calico--apiserver--5875c56fd9--b2rnd-eth0" Feb 13 19:34:38.272602 containerd[1471]: time="2025-02-13T19:34:38.272362323Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:34:38.276432 containerd[1471]: time="2025-02-13T19:34:38.273764852Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:34:38.276432 containerd[1471]: time="2025-02-13T19:34:38.273984657Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:34:38.285762 containerd[1471]: time="2025-02-13T19:34:38.285704276Z" level=info msg="StartContainer for \"25825d82604538630352e404d71167314187a92e425b60a48d9a11f5f1812b71\" returns successfully" Feb 13 19:34:38.304705 systemd[1]: Started cri-containerd-17a3ac5c29a08804d34f60fe3e2315d825903305426c957301d2d50fd3228c74.scope - libcontainer container 17a3ac5c29a08804d34f60fe3e2315d825903305426c957301d2d50fd3228c74. Feb 13 19:34:38.318741 containerd[1471]: time="2025-02-13T19:34:38.318343808Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:34:38.318741 containerd[1471]: time="2025-02-13T19:34:38.318604803Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:34:38.318878 systemd-networkd[1402]: calidc50746062b: Link UP Feb 13 19:34:38.319423 containerd[1471]: time="2025-02-13T19:34:38.319379849Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:34:38.319839 systemd-networkd[1402]: calidc50746062b: Gained carrier Feb 13 19:34:38.322092 containerd[1471]: time="2025-02-13T19:34:38.321006782Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:34:38.327359 systemd-resolved[1333]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Feb 13 19:34:38.345785 containerd[1471]: 2025-02-13 19:34:37.281 [INFO][4802] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Feb 13 19:34:38.345785 containerd[1471]: 2025-02-13 19:34:37.292 [INFO][4802] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--qqgd5-eth0 csi-node-driver- calico-system f2dd1bcc-42fa-437c-a3ad-b39a937ac4ed 597 0 2025-02-13 19:34:15 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:84cddb44f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-qqgd5 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calidc50746062b [] []}} ContainerID="dfa0dde059ae3b6720a0ec63c1331acad311c2fbcd66c518e89185783a3094f7" Namespace="calico-system" Pod="csi-node-driver-qqgd5" WorkloadEndpoint="localhost-k8s-csi--node--driver--qqgd5-" Feb 13 19:34:38.345785 containerd[1471]: 2025-02-13 19:34:37.292 [INFO][4802] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="dfa0dde059ae3b6720a0ec63c1331acad311c2fbcd66c518e89185783a3094f7" Namespace="calico-system" Pod="csi-node-driver-qqgd5" WorkloadEndpoint="localhost-k8s-csi--node--driver--qqgd5-eth0" Feb 13 19:34:38.345785 containerd[1471]: 2025-02-13 19:34:37.376 [INFO][4840] 
ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="dfa0dde059ae3b6720a0ec63c1331acad311c2fbcd66c518e89185783a3094f7" HandleID="k8s-pod-network.dfa0dde059ae3b6720a0ec63c1331acad311c2fbcd66c518e89185783a3094f7" Workload="localhost-k8s-csi--node--driver--qqgd5-eth0" Feb 13 19:34:38.345785 containerd[1471]: 2025-02-13 19:34:37.401 [INFO][4840] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="dfa0dde059ae3b6720a0ec63c1331acad311c2fbcd66c518e89185783a3094f7" HandleID="k8s-pod-network.dfa0dde059ae3b6720a0ec63c1331acad311c2fbcd66c518e89185783a3094f7" Workload="localhost-k8s-csi--node--driver--qqgd5-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000260300), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-qqgd5", "timestamp":"2025-02-13 19:34:37.376166392 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 19:34:38.345785 containerd[1471]: 2025-02-13 19:34:37.401 [INFO][4840] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 19:34:38.345785 containerd[1471]: 2025-02-13 19:34:38.204 [INFO][4840] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Feb 13 19:34:38.345785 containerd[1471]: 2025-02-13 19:34:38.204 [INFO][4840] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Feb 13 19:34:38.345785 containerd[1471]: 2025-02-13 19:34:38.209 [INFO][4840] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.dfa0dde059ae3b6720a0ec63c1331acad311c2fbcd66c518e89185783a3094f7" host="localhost" Feb 13 19:34:38.345785 containerd[1471]: 2025-02-13 19:34:38.244 [INFO][4840] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Feb 13 19:34:38.345785 containerd[1471]: 2025-02-13 19:34:38.281 [INFO][4840] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Feb 13 19:34:38.345785 containerd[1471]: 2025-02-13 19:34:38.283 [INFO][4840] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Feb 13 19:34:38.345785 containerd[1471]: 2025-02-13 19:34:38.288 [INFO][4840] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Feb 13 19:34:38.345785 containerd[1471]: 2025-02-13 19:34:38.288 [INFO][4840] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.dfa0dde059ae3b6720a0ec63c1331acad311c2fbcd66c518e89185783a3094f7" host="localhost" Feb 13 19:34:38.345785 containerd[1471]: 2025-02-13 19:34:38.292 [INFO][4840] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.dfa0dde059ae3b6720a0ec63c1331acad311c2fbcd66c518e89185783a3094f7 Feb 13 19:34:38.345785 containerd[1471]: 2025-02-13 19:34:38.298 [INFO][4840] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.dfa0dde059ae3b6720a0ec63c1331acad311c2fbcd66c518e89185783a3094f7" host="localhost" Feb 13 19:34:38.345785 containerd[1471]: 2025-02-13 19:34:38.308 [INFO][4840] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 
handle="k8s-pod-network.dfa0dde059ae3b6720a0ec63c1331acad311c2fbcd66c518e89185783a3094f7" host="localhost" Feb 13 19:34:38.345785 containerd[1471]: 2025-02-13 19:34:38.308 [INFO][4840] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.dfa0dde059ae3b6720a0ec63c1331acad311c2fbcd66c518e89185783a3094f7" host="localhost" Feb 13 19:34:38.345785 containerd[1471]: 2025-02-13 19:34:38.308 [INFO][4840] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 19:34:38.345785 containerd[1471]: 2025-02-13 19:34:38.308 [INFO][4840] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="dfa0dde059ae3b6720a0ec63c1331acad311c2fbcd66c518e89185783a3094f7" HandleID="k8s-pod-network.dfa0dde059ae3b6720a0ec63c1331acad311c2fbcd66c518e89185783a3094f7" Workload="localhost-k8s-csi--node--driver--qqgd5-eth0" Feb 13 19:34:38.346717 containerd[1471]: 2025-02-13 19:34:38.316 [INFO][4802] cni-plugin/k8s.go 386: Populated endpoint ContainerID="dfa0dde059ae3b6720a0ec63c1331acad311c2fbcd66c518e89185783a3094f7" Namespace="calico-system" Pod="csi-node-driver-qqgd5" WorkloadEndpoint="localhost-k8s-csi--node--driver--qqgd5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--qqgd5-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"f2dd1bcc-42fa-437c-a3ad-b39a937ac4ed", ResourceVersion:"597", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 19, 34, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"84cddb44f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-qqgd5", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calidc50746062b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 19:34:38.346717 containerd[1471]: 2025-02-13 19:34:38.316 [INFO][4802] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.134/32] ContainerID="dfa0dde059ae3b6720a0ec63c1331acad311c2fbcd66c518e89185783a3094f7" Namespace="calico-system" Pod="csi-node-driver-qqgd5" WorkloadEndpoint="localhost-k8s-csi--node--driver--qqgd5-eth0" Feb 13 19:34:38.346717 containerd[1471]: 2025-02-13 19:34:38.316 [INFO][4802] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calidc50746062b ContainerID="dfa0dde059ae3b6720a0ec63c1331acad311c2fbcd66c518e89185783a3094f7" Namespace="calico-system" Pod="csi-node-driver-qqgd5" WorkloadEndpoint="localhost-k8s-csi--node--driver--qqgd5-eth0" Feb 13 19:34:38.346717 containerd[1471]: 2025-02-13 19:34:38.320 [INFO][4802] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="dfa0dde059ae3b6720a0ec63c1331acad311c2fbcd66c518e89185783a3094f7" Namespace="calico-system" Pod="csi-node-driver-qqgd5" WorkloadEndpoint="localhost-k8s-csi--node--driver--qqgd5-eth0" Feb 13 19:34:38.346717 containerd[1471]: 2025-02-13 19:34:38.322 [INFO][4802] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="dfa0dde059ae3b6720a0ec63c1331acad311c2fbcd66c518e89185783a3094f7" Namespace="calico-system" 
Pod="csi-node-driver-qqgd5" WorkloadEndpoint="localhost-k8s-csi--node--driver--qqgd5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--qqgd5-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"f2dd1bcc-42fa-437c-a3ad-b39a937ac4ed", ResourceVersion:"597", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 19, 34, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"84cddb44f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"dfa0dde059ae3b6720a0ec63c1331acad311c2fbcd66c518e89185783a3094f7", Pod:"csi-node-driver-qqgd5", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calidc50746062b", MAC:"a6:ba:40:07:74:6b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 19:34:38.346717 containerd[1471]: 2025-02-13 19:34:38.340 [INFO][4802] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="dfa0dde059ae3b6720a0ec63c1331acad311c2fbcd66c518e89185783a3094f7" Namespace="calico-system" Pod="csi-node-driver-qqgd5" WorkloadEndpoint="localhost-k8s-csi--node--driver--qqgd5-eth0" Feb 13 19:34:38.349171 systemd[1]: Started 
cri-containerd-d08065f1170006bf0a56ab0e2b9e944a3f83f94e403e6dbaf749b11f5d3afcba.scope - libcontainer container d08065f1170006bf0a56ab0e2b9e944a3f83f94e403e6dbaf749b11f5d3afcba. Feb 13 19:34:38.369514 containerd[1471]: time="2025-02-13T19:34:38.369450484Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-szjm2,Uid:0d54ca70-3d73-40a5-9a0e-85776bf4fb5e,Namespace:kube-system,Attempt:6,} returns sandbox id \"17a3ac5c29a08804d34f60fe3e2315d825903305426c957301d2d50fd3228c74\"" Feb 13 19:34:38.370650 kubelet[2607]: E0213 19:34:38.370522 2607 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:34:38.371621 systemd-resolved[1333]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Feb 13 19:34:38.373995 containerd[1471]: time="2025-02-13T19:34:38.373946249Z" level=info msg="CreateContainer within sandbox \"17a3ac5c29a08804d34f60fe3e2315d825903305426c957301d2d50fd3228c74\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 13 19:34:38.388081 containerd[1471]: time="2025-02-13T19:34:38.387855549Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:34:38.388081 containerd[1471]: time="2025-02-13T19:34:38.387935914Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:34:38.388081 containerd[1471]: time="2025-02-13T19:34:38.387948178Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:34:38.388081 containerd[1471]: time="2025-02-13T19:34:38.388040536Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:34:38.398088 containerd[1471]: time="2025-02-13T19:34:38.397917636Z" level=info msg="CreateContainer within sandbox \"17a3ac5c29a08804d34f60fe3e2315d825903305426c957301d2d50fd3228c74\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"8aeb69743116730da4020a167c529e2be4822ea9f54ca2132b932ff7a162e2bc\"" Feb 13 19:34:38.398894 containerd[1471]: time="2025-02-13T19:34:38.398850619Z" level=info msg="StartContainer for \"8aeb69743116730da4020a167c529e2be4822ea9f54ca2132b932ff7a162e2bc\"" Feb 13 19:34:38.403268 containerd[1471]: time="2025-02-13T19:34:38.402854363Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5875c56fd9-b2rnd,Uid:1aaf1ef7-2705-4815-a824-0f60456d76fc,Namespace:calico-apiserver,Attempt:6,} returns sandbox id \"d08065f1170006bf0a56ab0e2b9e944a3f83f94e403e6dbaf749b11f5d3afcba\"" Feb 13 19:34:38.411422 systemd[1]: Started cri-containerd-dfa0dde059ae3b6720a0ec63c1331acad311c2fbcd66c518e89185783a3094f7.scope - libcontainer container dfa0dde059ae3b6720a0ec63c1331acad311c2fbcd66c518e89185783a3094f7. Feb 13 19:34:38.428609 systemd-resolved[1333]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Feb 13 19:34:38.438163 systemd[1]: Started cri-containerd-8aeb69743116730da4020a167c529e2be4822ea9f54ca2132b932ff7a162e2bc.scope - libcontainer container 8aeb69743116730da4020a167c529e2be4822ea9f54ca2132b932ff7a162e2bc. 
Feb 13 19:34:38.449126 containerd[1471]: time="2025-02-13T19:34:38.449048730Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-qqgd5,Uid:f2dd1bcc-42fa-437c-a3ad-b39a937ac4ed,Namespace:calico-system,Attempt:5,} returns sandbox id \"dfa0dde059ae3b6720a0ec63c1331acad311c2fbcd66c518e89185783a3094f7\"" Feb 13 19:34:38.482716 containerd[1471]: time="2025-02-13T19:34:38.482567531Z" level=info msg="StartContainer for \"8aeb69743116730da4020a167c529e2be4822ea9f54ca2132b932ff7a162e2bc\" returns successfully" Feb 13 19:34:38.663039 systemd[1]: Started sshd@9-10.0.0.36:22-10.0.0.1:36790.service - OpenSSH per-connection server daemon (10.0.0.1:36790). Feb 13 19:34:38.726316 sshd[5384]: Accepted publickey for core from 10.0.0.1 port 36790 ssh2: RSA SHA256:M1JiL2vDeW2xAdjMAPzmPoLJm0DJz/C0inZYUswlgoA Feb 13 19:34:38.741280 sshd-session[5384]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:34:38.760518 systemd-logind[1449]: New session 10 of user core. Feb 13 19:34:38.769141 systemd[1]: Started session-10.scope - Session 10 of User core. Feb 13 19:34:39.012543 sshd[5387]: Connection closed by 10.0.0.1 port 36790 Feb 13 19:34:39.012925 sshd-session[5384]: pam_unix(sshd:session): session closed for user core Feb 13 19:34:39.020561 systemd[1]: sshd@9-10.0.0.36:22-10.0.0.1:36790.service: Deactivated successfully. Feb 13 19:34:39.023372 systemd[1]: session-10.scope: Deactivated successfully. Feb 13 19:34:39.024601 systemd-logind[1449]: Session 10 logged out. Waiting for processes to exit. Feb 13 19:34:39.025738 systemd-logind[1449]: Removed session 10. 
Feb 13 19:34:39.184512 kubelet[2607]: E0213 19:34:39.184407 2607 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:34:39.190005 kubelet[2607]: E0213 19:34:39.189970 2607 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:34:39.286220 systemd-networkd[1402]: cali4c0a1993edc: Gained IPv6LL
Feb 13 19:34:39.308397 kubelet[2607]: I0213 19:34:39.307940 2607 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-jsldt" podStartSLOduration=31.307458233 podStartE2EDuration="31.307458233s" podCreationTimestamp="2025-02-13 19:34:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 19:34:39.307276713 +0000 UTC m=+36.766094614" watchObservedRunningTime="2025-02-13 19:34:39.307458233 +0000 UTC m=+36.766276144"
Feb 13 19:34:39.341252 kubelet[2607]: I0213 19:34:39.341040 2607 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-szjm2" podStartSLOduration=31.341012476 podStartE2EDuration="31.341012476s" podCreationTimestamp="2025-02-13 19:34:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 19:34:39.324949007 +0000 UTC m=+36.783766908" watchObservedRunningTime="2025-02-13 19:34:39.341012476 +0000 UTC m=+36.799830377"
Feb 13 19:34:39.427343 kubelet[2607]: I0213 19:34:39.427300 2607 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Feb 13 19:34:39.427726 kubelet[2607]: E0213 19:34:39.427697 2607 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:34:39.670183 systemd-networkd[1402]: cali267d0eef4b0: Gained IPv6LL
Feb 13 19:34:39.670580 systemd-networkd[1402]: cali242bdde1f60: Gained IPv6LL
Feb 13 19:34:39.926133 systemd-networkd[1402]: calie8714484c67: Gained IPv6LL
Feb 13 19:34:40.118172 systemd-networkd[1402]: cali0bd5a291cd3: Gained IPv6LL
Feb 13 19:34:40.118528 systemd-networkd[1402]: calidc50746062b: Gained IPv6LL
Feb 13 19:34:40.193122 kubelet[2607]: E0213 19:34:40.192646 2607 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:34:40.193122 kubelet[2607]: E0213 19:34:40.192827 2607 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:34:40.193122 kubelet[2607]: E0213 19:34:40.192839 2607 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:34:40.554585 containerd[1471]: time="2025-02-13T19:34:40.553685920Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 19:34:40.555410 containerd[1471]: time="2025-02-13T19:34:40.555367373Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.1: active requests=0, bytes read=34141192"
Feb 13 19:34:40.564498 containerd[1471]: time="2025-02-13T19:34:40.564449133Z" level=info msg="ImageCreate event name:\"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 19:34:40.570515 containerd[1471]: time="2025-02-13T19:34:40.570450078Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 19:34:40.574138 containerd[1471]: time="2025-02-13T19:34:40.573923419Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" with image id \"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\", size \"35634244\" in 2.821460069s"
Feb 13 19:34:40.574138 containerd[1471]: time="2025-02-13T19:34:40.573977653Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" returns image reference \"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\""
Feb 13 19:34:40.576422 containerd[1471]: time="2025-02-13T19:34:40.576239707Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\""
Feb 13 19:34:40.597013 containerd[1471]: time="2025-02-13T19:34:40.594481466Z" level=info msg="CreateContainer within sandbox \"591a1c843325b0d221755ca794e87867bad3825efa5a4c96aefe9550abef9236\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}"
Feb 13 19:34:40.619355 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2641488981.mount: Deactivated successfully.
Feb 13 19:34:40.624374 containerd[1471]: time="2025-02-13T19:34:40.624272334Z" level=info msg="CreateContainer within sandbox \"591a1c843325b0d221755ca794e87867bad3825efa5a4c96aefe9550abef9236\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"347700d7baf4e2d18187c43c91643810d7c9cbcab7f27e36db9758096fe5795b\""
Feb 13 19:34:40.625214 containerd[1471]: time="2025-02-13T19:34:40.625161578Z" level=info msg="StartContainer for \"347700d7baf4e2d18187c43c91643810d7c9cbcab7f27e36db9758096fe5795b\""
Feb 13 19:34:40.648997 kernel: bpftool[5514]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set
Feb 13 19:34:40.664287 systemd[1]: Started cri-containerd-347700d7baf4e2d18187c43c91643810d7c9cbcab7f27e36db9758096fe5795b.scope - libcontainer container 347700d7baf4e2d18187c43c91643810d7c9cbcab7f27e36db9758096fe5795b.
Feb 13 19:34:40.722457 containerd[1471]: time="2025-02-13T19:34:40.722394995Z" level=info msg="StartContainer for \"347700d7baf4e2d18187c43c91643810d7c9cbcab7f27e36db9758096fe5795b\" returns successfully"
Feb 13 19:34:40.938758 systemd-networkd[1402]: vxlan.calico: Link UP
Feb 13 19:34:40.938767 systemd-networkd[1402]: vxlan.calico: Gained carrier
Feb 13 19:34:41.210097 kubelet[2607]: E0213 19:34:41.208944 2607 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:34:41.210097 kubelet[2607]: E0213 19:34:41.209340 2607 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:34:42.210369 kubelet[2607]: I0213 19:34:42.210325 2607 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Feb 13 19:34:42.742511 systemd-networkd[1402]: vxlan.calico: Gained IPv6LL
Feb 13 19:34:44.032571 systemd[1]: Started sshd@10-10.0.0.36:22-10.0.0.1:36796.service - OpenSSH per-connection server daemon (10.0.0.1:36796).
Feb 13 19:34:44.087186 sshd[5626]: Accepted publickey for core from 10.0.0.1 port 36796 ssh2: RSA SHA256:M1JiL2vDeW2xAdjMAPzmPoLJm0DJz/C0inZYUswlgoA
Feb 13 19:34:44.090204 sshd-session[5626]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:34:44.097675 systemd-logind[1449]: New session 11 of user core.
Feb 13 19:34:44.102233 systemd[1]: Started session-11.scope - Session 11 of User core.
Feb 13 19:34:44.256629 sshd[5632]: Connection closed by 10.0.0.1 port 36796
Feb 13 19:34:44.257362 sshd-session[5626]: pam_unix(sshd:session): session closed for user core
Feb 13 19:34:44.261675 systemd[1]: sshd@10-10.0.0.36:22-10.0.0.1:36796.service: Deactivated successfully.
Feb 13 19:34:44.264404 systemd[1]: session-11.scope: Deactivated successfully.
Feb 13 19:34:44.265242 systemd-logind[1449]: Session 11 logged out. Waiting for processes to exit.
Feb 13 19:34:44.266856 systemd-logind[1449]: Removed session 11.
Feb 13 19:34:46.428755 containerd[1471]: time="2025-02-13T19:34:46.428688412Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 19:34:46.445128 containerd[1471]: time="2025-02-13T19:34:46.445012533Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=42001404"
Feb 13 19:34:46.448564 containerd[1471]: time="2025-02-13T19:34:46.448505125Z" level=info msg="ImageCreate event name:\"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 19:34:46.452474 containerd[1471]: time="2025-02-13T19:34:46.452411112Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 19:34:46.453229 containerd[1471]: time="2025-02-13T19:34:46.453179408Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"43494504\" in 5.876901699s"
Feb 13 19:34:46.453323 containerd[1471]: time="2025-02-13T19:34:46.453230587Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\""
Feb 13 19:34:46.455702 containerd[1471]: time="2025-02-13T19:34:46.455646309Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\""
Feb 13 19:34:46.456312 containerd[1471]: time="2025-02-13T19:34:46.456264708Z" level=info msg="CreateContainer within sandbox \"293373afd439aab1fc27b6e509f5be603052a4301f51ca29b3d2089195dfb425\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}"
Feb 13 19:34:46.483058 containerd[1471]: time="2025-02-13T19:34:46.482991581Z" level=info msg="CreateContainer within sandbox \"293373afd439aab1fc27b6e509f5be603052a4301f51ca29b3d2089195dfb425\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"327412dad3b84dd5a14de649276d2dbabb5e0098ec22a853df829935e13d1520\""
Feb 13 19:34:46.484349 containerd[1471]: time="2025-02-13T19:34:46.484317489Z" level=info msg="StartContainer for \"327412dad3b84dd5a14de649276d2dbabb5e0098ec22a853df829935e13d1520\""
Feb 13 19:34:46.529260 systemd[1]: Started cri-containerd-327412dad3b84dd5a14de649276d2dbabb5e0098ec22a853df829935e13d1520.scope - libcontainer container 327412dad3b84dd5a14de649276d2dbabb5e0098ec22a853df829935e13d1520.
Feb 13 19:34:46.598277 containerd[1471]: time="2025-02-13T19:34:46.598154971Z" level=info msg="StartContainer for \"327412dad3b84dd5a14de649276d2dbabb5e0098ec22a853df829935e13d1520\" returns successfully"
Feb 13 19:34:47.291331 kubelet[2607]: I0213 19:34:47.290110 2607 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-5875c56fd9-cljgm" podStartSLOduration=24.032022273 podStartE2EDuration="32.290087338s" podCreationTimestamp="2025-02-13 19:34:15 +0000 UTC" firstStartedPulling="2025-02-13 19:34:38.196799302 +0000 UTC m=+35.655617203" lastFinishedPulling="2025-02-13 19:34:46.454864367 +0000 UTC m=+43.913682268" observedRunningTime="2025-02-13 19:34:47.289987956 +0000 UTC m=+44.748805857" watchObservedRunningTime="2025-02-13 19:34:47.290087338 +0000 UTC m=+44.748905239"
Feb 13 19:34:47.291918 kubelet[2607]: I0213 19:34:47.291295 2607 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-6f695fb64c-rb8fz" podStartSLOduration=29.466761094 podStartE2EDuration="32.291263196s" podCreationTimestamp="2025-02-13 19:34:15 +0000 UTC" firstStartedPulling="2025-02-13 19:34:37.751004631 +0000 UTC m=+35.209822532" lastFinishedPulling="2025-02-13 19:34:40.575506733 +0000 UTC m=+38.034324634" observedRunningTime="2025-02-13 19:34:41.226022495 +0000 UTC m=+38.684840396" watchObservedRunningTime="2025-02-13 19:34:47.291263196 +0000 UTC m=+44.750081097"
Feb 13 19:34:47.496621 containerd[1471]: time="2025-02-13T19:34:47.496563799Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 19:34:47.498720 containerd[1471]: time="2025-02-13T19:34:47.498670175Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=77"
Feb 13 19:34:47.502184 containerd[1471]: time="2025-02-13T19:34:47.502142695Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"43494504\" in 1.046068091s"
Feb 13 19:34:47.502184 containerd[1471]: time="2025-02-13T19:34:47.502181629Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\""
Feb 13 19:34:47.504317 containerd[1471]: time="2025-02-13T19:34:47.503148767Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\""
Feb 13 19:34:47.504430 containerd[1471]: time="2025-02-13T19:34:47.504374581Z" level=info msg="CreateContainer within sandbox \"d08065f1170006bf0a56ab0e2b9e944a3f83f94e403e6dbaf749b11f5d3afcba\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}"
Feb 13 19:34:47.533544 containerd[1471]: time="2025-02-13T19:34:47.533504582Z" level=info msg="CreateContainer within sandbox \"d08065f1170006bf0a56ab0e2b9e944a3f83f94e403e6dbaf749b11f5d3afcba\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"37b788c5e3dbf932c422f70dde05980861d161f655a376ffd78b088a677b9f86\""
Feb 13 19:34:47.535511 containerd[1471]: time="2025-02-13T19:34:47.535001267Z" level=info msg="StartContainer for \"37b788c5e3dbf932c422f70dde05980861d161f655a376ffd78b088a677b9f86\""
Feb 13 19:34:47.577272 systemd[1]: Started cri-containerd-37b788c5e3dbf932c422f70dde05980861d161f655a376ffd78b088a677b9f86.scope - libcontainer container 37b788c5e3dbf932c422f70dde05980861d161f655a376ffd78b088a677b9f86.
Feb 13 19:34:47.643533 containerd[1471]: time="2025-02-13T19:34:47.643474933Z" level=info msg="StartContainer for \"37b788c5e3dbf932c422f70dde05980861d161f655a376ffd78b088a677b9f86\" returns successfully"
Feb 13 19:34:48.232110 kubelet[2607]: I0213 19:34:48.232070 2607 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Feb 13 19:34:49.233993 kubelet[2607]: I0213 19:34:49.233918 2607 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Feb 13 19:34:49.269916 systemd[1]: Started sshd@11-10.0.0.36:22-10.0.0.1:37054.service - OpenSSH per-connection server daemon (10.0.0.1:37054).
Feb 13 19:34:49.325830 sshd[5739]: Accepted publickey for core from 10.0.0.1 port 37054 ssh2: RSA SHA256:M1JiL2vDeW2xAdjMAPzmPoLJm0DJz/C0inZYUswlgoA
Feb 13 19:34:49.327669 sshd-session[5739]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:34:49.333050 systemd-logind[1449]: New session 12 of user core.
Feb 13 19:34:49.343246 systemd[1]: Started session-12.scope - Session 12 of User core.
Feb 13 19:34:49.379313 containerd[1471]: time="2025-02-13T19:34:49.379254710Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 19:34:49.380060 containerd[1471]: time="2025-02-13T19:34:49.380019117Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.1: active requests=0, bytes read=7902632"
Feb 13 19:34:49.381289 containerd[1471]: time="2025-02-13T19:34:49.381231153Z" level=info msg="ImageCreate event name:\"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 19:34:49.384269 containerd[1471]: time="2025-02-13T19:34:49.384233343Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 19:34:49.385095 containerd[1471]: time="2025-02-13T19:34:49.385049709Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.1\" with image id \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\", size \"9395716\" in 1.881868461s"
Feb 13 19:34:49.385095 containerd[1471]: time="2025-02-13T19:34:49.385084666Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\" returns image reference \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\""
Feb 13 19:34:49.387525 containerd[1471]: time="2025-02-13T19:34:49.387480263Z" level=info msg="CreateContainer within sandbox \"dfa0dde059ae3b6720a0ec63c1331acad311c2fbcd66c518e89185783a3094f7\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}"
Feb 13 19:34:49.417716 containerd[1471]: time="2025-02-13T19:34:49.417302872Z" level=info msg="CreateContainer within sandbox \"dfa0dde059ae3b6720a0ec63c1331acad311c2fbcd66c518e89185783a3094f7\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"9aa5b9110b490ceb385d77fa7eaf94fd9f2cd59ff9bf7197a02c9e0046155282\""
Feb 13 19:34:49.418314 containerd[1471]: time="2025-02-13T19:34:49.418276680Z" level=info msg="StartContainer for \"9aa5b9110b490ceb385d77fa7eaf94fd9f2cd59ff9bf7197a02c9e0046155282\""
Feb 13 19:34:49.457169 systemd[1]: Started cri-containerd-9aa5b9110b490ceb385d77fa7eaf94fd9f2cd59ff9bf7197a02c9e0046155282.scope - libcontainer container 9aa5b9110b490ceb385d77fa7eaf94fd9f2cd59ff9bf7197a02c9e0046155282.
Feb 13 19:34:49.505148 containerd[1471]: time="2025-02-13T19:34:49.505096538Z" level=info msg="StartContainer for \"9aa5b9110b490ceb385d77fa7eaf94fd9f2cd59ff9bf7197a02c9e0046155282\" returns successfully"
Feb 13 19:34:49.508219 containerd[1471]: time="2025-02-13T19:34:49.508174994Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\""
Feb 13 19:34:49.518148 sshd[5745]: Connection closed by 10.0.0.1 port 37054
Feb 13 19:34:49.519071 sshd-session[5739]: pam_unix(sshd:session): session closed for user core
Feb 13 19:34:49.524885 systemd-logind[1449]: Session 12 logged out. Waiting for processes to exit.
Feb 13 19:34:49.525278 systemd[1]: sshd@11-10.0.0.36:22-10.0.0.1:37054.service: Deactivated successfully.
Feb 13 19:34:49.527519 systemd[1]: session-12.scope: Deactivated successfully.
Feb 13 19:34:49.528444 systemd-logind[1449]: Removed session 12.
Feb 13 19:34:53.165375 containerd[1471]: time="2025-02-13T19:34:53.165303832Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 19:34:53.166321 containerd[1471]: time="2025-02-13T19:34:53.166241638Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1: active requests=0, bytes read=10501081"
Feb 13 19:34:53.167824 containerd[1471]: time="2025-02-13T19:34:53.167780274Z" level=info msg="ImageCreate event name:\"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 19:34:53.170365 containerd[1471]: time="2025-02-13T19:34:53.170331860Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 19:34:53.171013 containerd[1471]: time="2025-02-13T19:34:53.170971165Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" with image id \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\", size \"11994117\" in 3.662741115s"
Feb 13 19:34:53.171068 containerd[1471]: time="2025-02-13T19:34:53.171018335Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" returns image reference \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\""
Feb 13 19:34:53.173361 containerd[1471]: time="2025-02-13T19:34:53.173316266Z" level=info msg="CreateContainer within sandbox \"dfa0dde059ae3b6720a0ec63c1331acad311c2fbcd66c518e89185783a3094f7\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}"
Feb 13 19:34:53.375644 containerd[1471]: time="2025-02-13T19:34:53.375575784Z" level=info msg="CreateContainer within sandbox \"dfa0dde059ae3b6720a0ec63c1331acad311c2fbcd66c518e89185783a3094f7\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"970d1fc5f1c7d0af3fc8b7c3c8ac39d98bc8ccb776dbddb01938a88d3ba7ff2f\""
Feb 13 19:34:53.376170 containerd[1471]: time="2025-02-13T19:34:53.376137109Z" level=info msg="StartContainer for \"970d1fc5f1c7d0af3fc8b7c3c8ac39d98bc8ccb776dbddb01938a88d3ba7ff2f\""
Feb 13 19:34:53.411356 systemd[1]: Started cri-containerd-970d1fc5f1c7d0af3fc8b7c3c8ac39d98bc8ccb776dbddb01938a88d3ba7ff2f.scope - libcontainer container 970d1fc5f1c7d0af3fc8b7c3c8ac39d98bc8ccb776dbddb01938a88d3ba7ff2f.
Feb 13 19:34:53.451381 containerd[1471]: time="2025-02-13T19:34:53.451264090Z" level=info msg="StartContainer for \"970d1fc5f1c7d0af3fc8b7c3c8ac39d98bc8ccb776dbddb01938a88d3ba7ff2f\" returns successfully"
Feb 13 19:34:53.749824 kubelet[2607]: I0213 19:34:53.749790 2607 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0
Feb 13 19:34:53.749824 kubelet[2607]: I0213 19:34:53.749831 2607 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock
Feb 13 19:34:54.293470 kubelet[2607]: I0213 19:34:54.293399 2607 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-5875c56fd9-b2rnd" podStartSLOduration=30.194724274 podStartE2EDuration="39.293379414s" podCreationTimestamp="2025-02-13 19:34:15 +0000 UTC" firstStartedPulling="2025-02-13 19:34:38.404289395 +0000 UTC m=+35.863107296" lastFinishedPulling="2025-02-13 19:34:47.502944535 +0000 UTC m=+44.961762436" observedRunningTime="2025-02-13 19:34:48.392654602 +0000 UTC m=+45.851472513" watchObservedRunningTime="2025-02-13 19:34:54.293379414 +0000 UTC m=+51.752197315"
Feb 13 19:34:54.530990 systemd[1]: Started sshd@12-10.0.0.36:22-10.0.0.1:43452.service - OpenSSH per-connection server daemon (10.0.0.1:43452).
Feb 13 19:34:54.599029 sshd[5843]: Accepted publickey for core from 10.0.0.1 port 43452 ssh2: RSA SHA256:M1JiL2vDeW2xAdjMAPzmPoLJm0DJz/C0inZYUswlgoA
Feb 13 19:34:54.601039 sshd-session[5843]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:34:54.605667 systemd-logind[1449]: New session 13 of user core.
Feb 13 19:34:54.609100 systemd[1]: Started session-13.scope - Session 13 of User core.
Feb 13 19:34:54.781739 sshd[5845]: Connection closed by 10.0.0.1 port 43452
Feb 13 19:34:54.782190 sshd-session[5843]: pam_unix(sshd:session): session closed for user core
Feb 13 19:34:54.799635 systemd[1]: sshd@12-10.0.0.36:22-10.0.0.1:43452.service: Deactivated successfully.
Feb 13 19:34:54.802326 systemd[1]: session-13.scope: Deactivated successfully.
Feb 13 19:34:54.804039 systemd-logind[1449]: Session 13 logged out. Waiting for processes to exit.
Feb 13 19:34:54.805441 systemd[1]: Started sshd@13-10.0.0.36:22-10.0.0.1:43466.service - OpenSSH per-connection server daemon (10.0.0.1:43466).
Feb 13 19:34:54.807004 systemd-logind[1449]: Removed session 13.
Feb 13 19:34:54.862235 sshd[5860]: Accepted publickey for core from 10.0.0.1 port 43466 ssh2: RSA SHA256:M1JiL2vDeW2xAdjMAPzmPoLJm0DJz/C0inZYUswlgoA
Feb 13 19:34:54.864051 sshd-session[5860]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:34:54.868405 systemd-logind[1449]: New session 14 of user core.
Feb 13 19:34:54.880214 systemd[1]: Started session-14.scope - Session 14 of User core.
Feb 13 19:34:55.140047 sshd[5862]: Connection closed by 10.0.0.1 port 43466
Feb 13 19:34:55.140342 sshd-session[5860]: pam_unix(sshd:session): session closed for user core
Feb 13 19:34:55.155504 systemd[1]: sshd@13-10.0.0.36:22-10.0.0.1:43466.service: Deactivated successfully.
Feb 13 19:34:55.157759 systemd[1]: session-14.scope: Deactivated successfully.
Feb 13 19:34:55.159414 systemd-logind[1449]: Session 14 logged out. Waiting for processes to exit.
Feb 13 19:34:55.166158 systemd[1]: Started sshd@14-10.0.0.36:22-10.0.0.1:43470.service - OpenSSH per-connection server daemon (10.0.0.1:43470).
Feb 13 19:34:55.167268 systemd-logind[1449]: Removed session 14.
Feb 13 19:34:55.208904 sshd[5873]: Accepted publickey for core from 10.0.0.1 port 43470 ssh2: RSA SHA256:M1JiL2vDeW2xAdjMAPzmPoLJm0DJz/C0inZYUswlgoA
Feb 13 19:34:55.210349 sshd-session[5873]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:34:55.214724 systemd-logind[1449]: New session 15 of user core.
Feb 13 19:34:55.224202 systemd[1]: Started session-15.scope - Session 15 of User core.
Feb 13 19:34:55.401726 sshd[5875]: Connection closed by 10.0.0.1 port 43470
Feb 13 19:34:55.402102 sshd-session[5873]: pam_unix(sshd:session): session closed for user core
Feb 13 19:34:55.406645 systemd[1]: sshd@14-10.0.0.36:22-10.0.0.1:43470.service: Deactivated successfully.
Feb 13 19:34:55.408821 systemd[1]: session-15.scope: Deactivated successfully.
Feb 13 19:34:55.409724 systemd-logind[1449]: Session 15 logged out. Waiting for processes to exit.
Feb 13 19:34:55.410711 systemd-logind[1449]: Removed session 15.
Feb 13 19:34:57.648876 kubelet[2607]: I0213 19:34:57.648793 2607 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Feb 13 19:34:57.750365 kubelet[2607]: I0213 19:34:57.750062 2607 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-qqgd5" podStartSLOduration=28.028965289 podStartE2EDuration="42.750041074s" podCreationTimestamp="2025-02-13 19:34:15 +0000 UTC" firstStartedPulling="2025-02-13 19:34:38.450714578 +0000 UTC m=+35.909532479" lastFinishedPulling="2025-02-13 19:34:53.171790363 +0000 UTC m=+50.630608264" observedRunningTime="2025-02-13 19:34:54.293302468 +0000 UTC m=+51.752120379" watchObservedRunningTime="2025-02-13 19:34:57.750041074 +0000 UTC m=+55.208858975"
Feb 13 19:34:58.101464 kubelet[2607]: I0213 19:34:58.101422 2607 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Feb 13 19:35:00.419036 systemd[1]: Started sshd@15-10.0.0.36:22-10.0.0.1:43480.service - OpenSSH per-connection server daemon (10.0.0.1:43480).
Feb 13 19:35:00.470588 sshd[5929]: Accepted publickey for core from 10.0.0.1 port 43480 ssh2: RSA SHA256:M1JiL2vDeW2xAdjMAPzmPoLJm0DJz/C0inZYUswlgoA
Feb 13 19:35:00.472610 sshd-session[5929]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:35:00.478680 systemd-logind[1449]: New session 16 of user core.
Feb 13 19:35:00.486293 systemd[1]: Started session-16.scope - Session 16 of User core.
Feb 13 19:35:00.587120 kubelet[2607]: I0213 19:35:00.587060 2607 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Feb 13 19:35:00.690338 sshd[5931]: Connection closed by 10.0.0.1 port 43480
Feb 13 19:35:00.690536 sshd-session[5929]: pam_unix(sshd:session): session closed for user core
Feb 13 19:35:00.695941 systemd[1]: sshd@15-10.0.0.36:22-10.0.0.1:43480.service: Deactivated successfully.
Feb 13 19:35:00.699070 systemd[1]: session-16.scope: Deactivated successfully.
Feb 13 19:35:00.700879 systemd-logind[1449]: Session 16 logged out. Waiting for processes to exit.
Feb 13 19:35:00.701919 systemd-logind[1449]: Removed session 16.
Feb 13 19:35:02.638704 containerd[1471]: time="2025-02-13T19:35:02.638656205Z" level=info msg="StopPodSandbox for \"3bf8c6f6fa3408c17c0a5cb18984a1b18b1c350fbd4f4341429f107be6ca1f1b\""
Feb 13 19:35:02.639258 containerd[1471]: time="2025-02-13T19:35:02.638807353Z" level=info msg="TearDown network for sandbox \"3bf8c6f6fa3408c17c0a5cb18984a1b18b1c350fbd4f4341429f107be6ca1f1b\" successfully"
Feb 13 19:35:02.639258 containerd[1471]: time="2025-02-13T19:35:02.638822843Z" level=info msg="StopPodSandbox for \"3bf8c6f6fa3408c17c0a5cb18984a1b18b1c350fbd4f4341429f107be6ca1f1b\" returns successfully"
Feb 13 19:35:02.645364 containerd[1471]: time="2025-02-13T19:35:02.645297637Z" level=info msg="RemovePodSandbox for \"3bf8c6f6fa3408c17c0a5cb18984a1b18b1c350fbd4f4341429f107be6ca1f1b\""
Feb 13 19:35:02.657718 containerd[1471]: time="2025-02-13T19:35:02.657651859Z" level=info msg="Forcibly stopping sandbox \"3bf8c6f6fa3408c17c0a5cb18984a1b18b1c350fbd4f4341429f107be6ca1f1b\""
Feb 13 19:35:02.657901 containerd[1471]: time="2025-02-13T19:35:02.657829468Z" level=info msg="TearDown network for sandbox \"3bf8c6f6fa3408c17c0a5cb18984a1b18b1c350fbd4f4341429f107be6ca1f1b\" successfully"
Feb 13 19:35:02.743994 containerd[1471]: time="2025-02-13T19:35:02.743897200Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"3bf8c6f6fa3408c17c0a5cb18984a1b18b1c350fbd4f4341429f107be6ca1f1b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Feb 13 19:35:02.744193 containerd[1471]: time="2025-02-13T19:35:02.744017199Z" level=info msg="RemovePodSandbox \"3bf8c6f6fa3408c17c0a5cb18984a1b18b1c350fbd4f4341429f107be6ca1f1b\" returns successfully"
Feb 13 19:35:02.744614 containerd[1471]: time="2025-02-13T19:35:02.744579882Z" level=info msg="StopPodSandbox for \"c207751ef81c77cf11834cc9d214df5236d504a114b5d8c9091976394140218b\""
Feb 13 19:35:02.744731 containerd[1471]: time="2025-02-13T19:35:02.744705321Z" level=info msg="TearDown network for sandbox \"c207751ef81c77cf11834cc9d214df5236d504a114b5d8c9091976394140218b\" successfully"
Feb 13 19:35:02.744731 containerd[1471]: time="2025-02-13T19:35:02.744722855Z" level=info msg="StopPodSandbox for \"c207751ef81c77cf11834cc9d214df5236d504a114b5d8c9091976394140218b\" returns successfully"
Feb 13 19:35:02.745039 containerd[1471]: time="2025-02-13T19:35:02.744944558Z" level=info msg="RemovePodSandbox for \"c207751ef81c77cf11834cc9d214df5236d504a114b5d8c9091976394140218b\""
Feb 13 19:35:02.745039 containerd[1471]: time="2025-02-13T19:35:02.744996638Z" level=info msg="Forcibly stopping sandbox \"c207751ef81c77cf11834cc9d214df5236d504a114b5d8c9091976394140218b\""
Feb 13 19:35:02.745146 containerd[1471]: time="2025-02-13T19:35:02.745086689Z" level=info msg="TearDown network for sandbox \"c207751ef81c77cf11834cc9d214df5236d504a114b5d8c9091976394140218b\" successfully"
Feb 13 19:35:02.799077 containerd[1471]: time="2025-02-13T19:35:02.799016981Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c207751ef81c77cf11834cc9d214df5236d504a114b5d8c9091976394140218b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Feb 13 19:35:02.799340 containerd[1471]: time="2025-02-13T19:35:02.799101331Z" level=info msg="RemovePodSandbox \"c207751ef81c77cf11834cc9d214df5236d504a114b5d8c9091976394140218b\" returns successfully"
Feb 13 19:35:02.799680 containerd[1471]: time="2025-02-13T19:35:02.799655218Z" level=info msg="StopPodSandbox for \"129ca2d51794333b0651f81fbb83f78853ee604014c43d4d7646cf5817de245d\""
Feb 13 19:35:02.799816 containerd[1471]: time="2025-02-13T19:35:02.799794554Z" level=info msg="TearDown network for sandbox \"129ca2d51794333b0651f81fbb83f78853ee604014c43d4d7646cf5817de245d\" successfully"
Feb 13 19:35:02.799816 containerd[1471]: time="2025-02-13T19:35:02.799813049Z" level=info msg="StopPodSandbox for \"129ca2d51794333b0651f81fbb83f78853ee604014c43d4d7646cf5817de245d\" returns successfully"
Feb 13 19:35:02.800243 containerd[1471]: time="2025-02-13T19:35:02.800202722Z" level=info msg="RemovePodSandbox for \"129ca2d51794333b0651f81fbb83f78853ee604014c43d4d7646cf5817de245d\""
Feb 13 19:35:02.800299 containerd[1471]: time="2025-02-13T19:35:02.800247879Z" level=info msg="Forcibly stopping sandbox \"129ca2d51794333b0651f81fbb83f78853ee604014c43d4d7646cf5817de245d\""
Feb 13 19:35:02.800416 containerd[1471]: time="2025-02-13T19:35:02.800369661Z" level=info msg="TearDown network for sandbox \"129ca2d51794333b0651f81fbb83f78853ee604014c43d4d7646cf5817de245d\" successfully"
Feb 13 19:35:02.813113 containerd[1471]: time="2025-02-13T19:35:02.813051679Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"129ca2d51794333b0651f81fbb83f78853ee604014c43d4d7646cf5817de245d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Feb 13 19:35:02.813510 containerd[1471]: time="2025-02-13T19:35:02.813342414Z" level=info msg="RemovePodSandbox \"129ca2d51794333b0651f81fbb83f78853ee604014c43d4d7646cf5817de245d\" returns successfully"
Feb 13 19:35:02.814216 containerd[1471]: time="2025-02-13T19:35:02.813876703Z" level=info msg="StopPodSandbox for \"1031254106aaef81fc07c832cb6e70669aeedf00af14ec36dfccb4f897bf3a9e\""
Feb 13 19:35:02.814216 containerd[1471]: time="2025-02-13T19:35:02.814006360Z" level=info msg="TearDown network for sandbox \"1031254106aaef81fc07c832cb6e70669aeedf00af14ec36dfccb4f897bf3a9e\" successfully"
Feb 13 19:35:02.814216 containerd[1471]: time="2025-02-13T19:35:02.814018223Z" level=info msg="StopPodSandbox for \"1031254106aaef81fc07c832cb6e70669aeedf00af14ec36dfccb4f897bf3a9e\" returns successfully"
Feb 13 19:35:02.814561 containerd[1471]: time="2025-02-13T19:35:02.814522114Z" level=info msg="RemovePodSandbox for \"1031254106aaef81fc07c832cb6e70669aeedf00af14ec36dfccb4f897bf3a9e\""
Feb 13 19:35:02.814561 containerd[1471]: time="2025-02-13T19:35:02.814545920Z" level=info msg="Forcibly stopping sandbox \"1031254106aaef81fc07c832cb6e70669aeedf00af14ec36dfccb4f897bf3a9e\""
Feb 13 19:35:02.814729 containerd[1471]: time="2025-02-13T19:35:02.814611395Z" level=info msg="TearDown network for sandbox \"1031254106aaef81fc07c832cb6e70669aeedf00af14ec36dfccb4f897bf3a9e\" successfully"
Feb 13 19:35:02.823276 containerd[1471]: time="2025-02-13T19:35:02.823237765Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"1031254106aaef81fc07c832cb6e70669aeedf00af14ec36dfccb4f897bf3a9e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Feb 13 19:35:02.823366 containerd[1471]: time="2025-02-13T19:35:02.823295916Z" level=info msg="RemovePodSandbox \"1031254106aaef81fc07c832cb6e70669aeedf00af14ec36dfccb4f897bf3a9e\" returns successfully"
Feb 13 19:35:02.823778 containerd[1471]: time="2025-02-13T19:35:02.823750182Z" level=info msg="StopPodSandbox for \"97dd6737930f1325b26522932f2dc46e8827490262c12ec40f5c15955f54a59e\""
Feb 13 19:35:02.823886 containerd[1471]: time="2025-02-13T19:35:02.823869890Z" level=info msg="TearDown network for sandbox \"97dd6737930f1325b26522932f2dc46e8827490262c12ec40f5c15955f54a59e\" successfully"
Feb 13 19:35:02.823913 containerd[1471]: time="2025-02-13T19:35:02.823885360Z" level=info msg="StopPodSandbox for \"97dd6737930f1325b26522932f2dc46e8827490262c12ec40f5c15955f54a59e\" returns successfully"
Feb 13 19:35:02.824388 containerd[1471]: time="2025-02-13T19:35:02.824278480Z" level=info msg="RemovePodSandbox for \"97dd6737930f1325b26522932f2dc46e8827490262c12ec40f5c15955f54a59e\""
Feb 13 19:35:02.824388 containerd[1471]: time="2025-02-13T19:35:02.824384773Z" level=info msg="Forcibly stopping sandbox \"97dd6737930f1325b26522932f2dc46e8827490262c12ec40f5c15955f54a59e\""
Feb 13 19:35:02.824536 containerd[1471]: time="2025-02-13T19:35:02.824452954Z" level=info msg="TearDown network for sandbox \"97dd6737930f1325b26522932f2dc46e8827490262c12ec40f5c15955f54a59e\" successfully"
Feb 13 19:35:02.834095 containerd[1471]: time="2025-02-13T19:35:02.833817821Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"97dd6737930f1325b26522932f2dc46e8827490262c12ec40f5c15955f54a59e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Feb 13 19:35:02.834095 containerd[1471]: time="2025-02-13T19:35:02.833927321Z" level=info msg="RemovePodSandbox \"97dd6737930f1325b26522932f2dc46e8827490262c12ec40f5c15955f54a59e\" returns successfully" Feb 13 19:35:02.834718 containerd[1471]: time="2025-02-13T19:35:02.834665900Z" level=info msg="StopPodSandbox for \"cd8b258c9599b776a51885aaf8a68589e7fddaa178420bfa0ee33ad50d617e00\"" Feb 13 19:35:02.834881 containerd[1471]: time="2025-02-13T19:35:02.834851013Z" level=info msg="TearDown network for sandbox \"cd8b258c9599b776a51885aaf8a68589e7fddaa178420bfa0ee33ad50d617e00\" successfully" Feb 13 19:35:02.834928 containerd[1471]: time="2025-02-13T19:35:02.834876392Z" level=info msg="StopPodSandbox for \"cd8b258c9599b776a51885aaf8a68589e7fddaa178420bfa0ee33ad50d617e00\" returns successfully" Feb 13 19:35:02.835895 containerd[1471]: time="2025-02-13T19:35:02.835860549Z" level=info msg="RemovePodSandbox for \"cd8b258c9599b776a51885aaf8a68589e7fddaa178420bfa0ee33ad50d617e00\"" Feb 13 19:35:02.835940 containerd[1471]: time="2025-02-13T19:35:02.835894404Z" level=info msg="Forcibly stopping sandbox \"cd8b258c9599b776a51885aaf8a68589e7fddaa178420bfa0ee33ad50d617e00\"" Feb 13 19:35:02.836065 containerd[1471]: time="2025-02-13T19:35:02.835996058Z" level=info msg="TearDown network for sandbox \"cd8b258c9599b776a51885aaf8a68589e7fddaa178420bfa0ee33ad50d617e00\" successfully" Feb 13 19:35:02.842377 containerd[1471]: time="2025-02-13T19:35:02.842323782Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"cd8b258c9599b776a51885aaf8a68589e7fddaa178420bfa0ee33ad50d617e00\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 19:35:02.842550 containerd[1471]: time="2025-02-13T19:35:02.842399335Z" level=info msg="RemovePodSandbox \"cd8b258c9599b776a51885aaf8a68589e7fddaa178420bfa0ee33ad50d617e00\" returns successfully" Feb 13 19:35:02.842998 containerd[1471]: time="2025-02-13T19:35:02.842936781Z" level=info msg="StopPodSandbox for \"aa566766537fcbea79ee7dfd711b3aa1da552d84a540cd31d2773ed5d6234a37\"" Feb 13 19:35:02.843148 containerd[1471]: time="2025-02-13T19:35:02.843113529Z" level=info msg="TearDown network for sandbox \"aa566766537fcbea79ee7dfd711b3aa1da552d84a540cd31d2773ed5d6234a37\" successfully" Feb 13 19:35:02.843148 containerd[1471]: time="2025-02-13T19:35:02.843132635Z" level=info msg="StopPodSandbox for \"aa566766537fcbea79ee7dfd711b3aa1da552d84a540cd31d2773ed5d6234a37\" returns successfully" Feb 13 19:35:02.845775 containerd[1471]: time="2025-02-13T19:35:02.843444109Z" level=info msg="RemovePodSandbox for \"aa566766537fcbea79ee7dfd711b3aa1da552d84a540cd31d2773ed5d6234a37\"" Feb 13 19:35:02.845775 containerd[1471]: time="2025-02-13T19:35:02.843473936Z" level=info msg="Forcibly stopping sandbox \"aa566766537fcbea79ee7dfd711b3aa1da552d84a540cd31d2773ed5d6234a37\"" Feb 13 19:35:02.845775 containerd[1471]: time="2025-02-13T19:35:02.843570461Z" level=info msg="TearDown network for sandbox \"aa566766537fcbea79ee7dfd711b3aa1da552d84a540cd31d2773ed5d6234a37\" successfully" Feb 13 19:35:02.847809 containerd[1471]: time="2025-02-13T19:35:02.847783258Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"aa566766537fcbea79ee7dfd711b3aa1da552d84a540cd31d2773ed5d6234a37\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 19:35:02.847933 containerd[1471]: time="2025-02-13T19:35:02.847817614Z" level=info msg="RemovePodSandbox \"aa566766537fcbea79ee7dfd711b3aa1da552d84a540cd31d2773ed5d6234a37\" returns successfully" Feb 13 19:35:02.848318 containerd[1471]: time="2025-02-13T19:35:02.848269836Z" level=info msg="StopPodSandbox for \"f0010b009040dc3632feee176ef184fca2c082b092fb675b7d8242d302a9ab90\"" Feb 13 19:35:02.848472 containerd[1471]: time="2025-02-13T19:35:02.848442085Z" level=info msg="TearDown network for sandbox \"f0010b009040dc3632feee176ef184fca2c082b092fb675b7d8242d302a9ab90\" successfully" Feb 13 19:35:02.848472 containerd[1471]: time="2025-02-13T19:35:02.848466743Z" level=info msg="StopPodSandbox for \"f0010b009040dc3632feee176ef184fca2c082b092fb675b7d8242d302a9ab90\" returns successfully" Feb 13 19:35:02.848929 containerd[1471]: time="2025-02-13T19:35:02.848884670Z" level=info msg="RemovePodSandbox for \"f0010b009040dc3632feee176ef184fca2c082b092fb675b7d8242d302a9ab90\"" Feb 13 19:35:02.848929 containerd[1471]: time="2025-02-13T19:35:02.848912934Z" level=info msg="Forcibly stopping sandbox \"f0010b009040dc3632feee176ef184fca2c082b092fb675b7d8242d302a9ab90\"" Feb 13 19:35:02.849156 containerd[1471]: time="2025-02-13T19:35:02.849003426Z" level=info msg="TearDown network for sandbox \"f0010b009040dc3632feee176ef184fca2c082b092fb675b7d8242d302a9ab90\" successfully" Feb 13 19:35:02.852853 containerd[1471]: time="2025-02-13T19:35:02.852805220Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"f0010b009040dc3632feee176ef184fca2c082b092fb675b7d8242d302a9ab90\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 19:35:02.852853 containerd[1471]: time="2025-02-13T19:35:02.852854715Z" level=info msg="RemovePodSandbox \"f0010b009040dc3632feee176ef184fca2c082b092fb675b7d8242d302a9ab90\" returns successfully" Feb 13 19:35:02.853176 containerd[1471]: time="2025-02-13T19:35:02.853140530Z" level=info msg="StopPodSandbox for \"b0d6fd3ef2acae598dec1902e1434b0974fcd3025a8b93c199149d21301516da\"" Feb 13 19:35:02.853262 containerd[1471]: time="2025-02-13T19:35:02.853240300Z" level=info msg="TearDown network for sandbox \"b0d6fd3ef2acae598dec1902e1434b0974fcd3025a8b93c199149d21301516da\" successfully" Feb 13 19:35:02.853262 containerd[1471]: time="2025-02-13T19:35:02.853251653Z" level=info msg="StopPodSandbox for \"b0d6fd3ef2acae598dec1902e1434b0974fcd3025a8b93c199149d21301516da\" returns successfully" Feb 13 19:35:02.853761 containerd[1471]: time="2025-02-13T19:35:02.853726037Z" level=info msg="RemovePodSandbox for \"b0d6fd3ef2acae598dec1902e1434b0974fcd3025a8b93c199149d21301516da\"" Feb 13 19:35:02.853761 containerd[1471]: time="2025-02-13T19:35:02.853749893Z" level=info msg="Forcibly stopping sandbox \"b0d6fd3ef2acae598dec1902e1434b0974fcd3025a8b93c199149d21301516da\"" Feb 13 19:35:02.853858 containerd[1471]: time="2025-02-13T19:35:02.853824455Z" level=info msg="TearDown network for sandbox \"b0d6fd3ef2acae598dec1902e1434b0974fcd3025a8b93c199149d21301516da\" successfully" Feb 13 19:35:02.857809 containerd[1471]: time="2025-02-13T19:35:02.857723946Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b0d6fd3ef2acae598dec1902e1434b0974fcd3025a8b93c199149d21301516da\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 19:35:02.857809 containerd[1471]: time="2025-02-13T19:35:02.857764943Z" level=info msg="RemovePodSandbox \"b0d6fd3ef2acae598dec1902e1434b0974fcd3025a8b93c199149d21301516da\" returns successfully" Feb 13 19:35:02.858217 containerd[1471]: time="2025-02-13T19:35:02.858041541Z" level=info msg="StopPodSandbox for \"501ac9e2d1b70d54a27304edd774f71526678c0f70b0590756ecc2311b11da0b\"" Feb 13 19:35:02.858337 containerd[1471]: time="2025-02-13T19:35:02.858276510Z" level=info msg="TearDown network for sandbox \"501ac9e2d1b70d54a27304edd774f71526678c0f70b0590756ecc2311b11da0b\" successfully" Feb 13 19:35:02.858337 containerd[1471]: time="2025-02-13T19:35:02.858288934Z" level=info msg="StopPodSandbox for \"501ac9e2d1b70d54a27304edd774f71526678c0f70b0590756ecc2311b11da0b\" returns successfully" Feb 13 19:35:02.858599 containerd[1471]: time="2025-02-13T19:35:02.858571633Z" level=info msg="RemovePodSandbox for \"501ac9e2d1b70d54a27304edd774f71526678c0f70b0590756ecc2311b11da0b\"" Feb 13 19:35:02.858599 containerd[1471]: time="2025-02-13T19:35:02.858593495Z" level=info msg="Forcibly stopping sandbox \"501ac9e2d1b70d54a27304edd774f71526678c0f70b0590756ecc2311b11da0b\"" Feb 13 19:35:02.858785 containerd[1471]: time="2025-02-13T19:35:02.858658419Z" level=info msg="TearDown network for sandbox \"501ac9e2d1b70d54a27304edd774f71526678c0f70b0590756ecc2311b11da0b\" successfully" Feb 13 19:35:02.863067 containerd[1471]: time="2025-02-13T19:35:02.863001576Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"501ac9e2d1b70d54a27304edd774f71526678c0f70b0590756ecc2311b11da0b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 19:35:02.863163 containerd[1471]: time="2025-02-13T19:35:02.863103139Z" level=info msg="RemovePodSandbox \"501ac9e2d1b70d54a27304edd774f71526678c0f70b0590756ecc2311b11da0b\" returns successfully" Feb 13 19:35:02.863559 containerd[1471]: time="2025-02-13T19:35:02.863533811Z" level=info msg="StopPodSandbox for \"86a6ca50deb416c653bd1f7b57fc5061ed1acbb2841494199dff84d76e7acebd\"" Feb 13 19:35:02.864367 containerd[1471]: time="2025-02-13T19:35:02.863668929Z" level=info msg="TearDown network for sandbox \"86a6ca50deb416c653bd1f7b57fc5061ed1acbb2841494199dff84d76e7acebd\" successfully" Feb 13 19:35:02.864367 containerd[1471]: time="2025-02-13T19:35:02.863684047Z" level=info msg="StopPodSandbox for \"86a6ca50deb416c653bd1f7b57fc5061ed1acbb2841494199dff84d76e7acebd\" returns successfully" Feb 13 19:35:02.864596 containerd[1471]: time="2025-02-13T19:35:02.864560059Z" level=info msg="RemovePodSandbox for \"86a6ca50deb416c653bd1f7b57fc5061ed1acbb2841494199dff84d76e7acebd\"" Feb 13 19:35:02.864596 containerd[1471]: time="2025-02-13T19:35:02.864590287Z" level=info msg="Forcibly stopping sandbox \"86a6ca50deb416c653bd1f7b57fc5061ed1acbb2841494199dff84d76e7acebd\"" Feb 13 19:35:02.864771 containerd[1471]: time="2025-02-13T19:35:02.864675760Z" level=info msg="TearDown network for sandbox \"86a6ca50deb416c653bd1f7b57fc5061ed1acbb2841494199dff84d76e7acebd\" successfully" Feb 13 19:35:02.870070 containerd[1471]: time="2025-02-13T19:35:02.870004988Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"86a6ca50deb416c653bd1f7b57fc5061ed1acbb2841494199dff84d76e7acebd\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 19:35:02.870070 containerd[1471]: time="2025-02-13T19:35:02.870062799Z" level=info msg="RemovePodSandbox \"86a6ca50deb416c653bd1f7b57fc5061ed1acbb2841494199dff84d76e7acebd\" returns successfully" Feb 13 19:35:02.870386 containerd[1471]: time="2025-02-13T19:35:02.870359174Z" level=info msg="StopPodSandbox for \"74317ff9bf14938d11d9e036deef95eddf63bb81bba8dd99f45333872239830c\"" Feb 13 19:35:02.870464 containerd[1471]: time="2025-02-13T19:35:02.870443185Z" level=info msg="TearDown network for sandbox \"74317ff9bf14938d11d9e036deef95eddf63bb81bba8dd99f45333872239830c\" successfully" Feb 13 19:35:02.870464 containerd[1471]: time="2025-02-13T19:35:02.870456470Z" level=info msg="StopPodSandbox for \"74317ff9bf14938d11d9e036deef95eddf63bb81bba8dd99f45333872239830c\" returns successfully" Feb 13 19:35:02.870705 containerd[1471]: time="2025-02-13T19:35:02.870681990Z" level=info msg="RemovePodSandbox for \"74317ff9bf14938d11d9e036deef95eddf63bb81bba8dd99f45333872239830c\"" Feb 13 19:35:02.870705 containerd[1471]: time="2025-02-13T19:35:02.870702539Z" level=info msg="Forcibly stopping sandbox \"74317ff9bf14938d11d9e036deef95eddf63bb81bba8dd99f45333872239830c\"" Feb 13 19:35:02.870819 containerd[1471]: time="2025-02-13T19:35:02.870760350Z" level=info msg="TearDown network for sandbox \"74317ff9bf14938d11d9e036deef95eddf63bb81bba8dd99f45333872239830c\" successfully" Feb 13 19:35:02.874542 containerd[1471]: time="2025-02-13T19:35:02.874507650Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"74317ff9bf14938d11d9e036deef95eddf63bb81bba8dd99f45333872239830c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 19:35:02.874627 containerd[1471]: time="2025-02-13T19:35:02.874547586Z" level=info msg="RemovePodSandbox \"74317ff9bf14938d11d9e036deef95eddf63bb81bba8dd99f45333872239830c\" returns successfully" Feb 13 19:35:02.874900 containerd[1471]: time="2025-02-13T19:35:02.874765051Z" level=info msg="StopPodSandbox for \"61b0a4a05bfa4906e5a299438b6dbb931cae8d64bcd9531789f49e1f12f0c1b7\"" Feb 13 19:35:02.874900 containerd[1471]: time="2025-02-13T19:35:02.874844663Z" level=info msg="TearDown network for sandbox \"61b0a4a05bfa4906e5a299438b6dbb931cae8d64bcd9531789f49e1f12f0c1b7\" successfully" Feb 13 19:35:02.874900 containerd[1471]: time="2025-02-13T19:35:02.874854481Z" level=info msg="StopPodSandbox for \"61b0a4a05bfa4906e5a299438b6dbb931cae8d64bcd9531789f49e1f12f0c1b7\" returns successfully" Feb 13 19:35:02.875468 containerd[1471]: time="2025-02-13T19:35:02.875422476Z" level=info msg="RemovePodSandbox for \"61b0a4a05bfa4906e5a299438b6dbb931cae8d64bcd9531789f49e1f12f0c1b7\"" Feb 13 19:35:02.875508 containerd[1471]: time="2025-02-13T19:35:02.875477339Z" level=info msg="Forcibly stopping sandbox \"61b0a4a05bfa4906e5a299438b6dbb931cae8d64bcd9531789f49e1f12f0c1b7\"" Feb 13 19:35:02.875704 containerd[1471]: time="2025-02-13T19:35:02.875640982Z" level=info msg="TearDown network for sandbox \"61b0a4a05bfa4906e5a299438b6dbb931cae8d64bcd9531789f49e1f12f0c1b7\" successfully" Feb 13 19:35:02.880372 containerd[1471]: time="2025-02-13T19:35:02.880318537Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"61b0a4a05bfa4906e5a299438b6dbb931cae8d64bcd9531789f49e1f12f0c1b7\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 19:35:02.880437 containerd[1471]: time="2025-02-13T19:35:02.880397869Z" level=info msg="RemovePodSandbox \"61b0a4a05bfa4906e5a299438b6dbb931cae8d64bcd9531789f49e1f12f0c1b7\" returns successfully" Feb 13 19:35:02.880738 containerd[1471]: time="2025-02-13T19:35:02.880717919Z" level=info msg="StopPodSandbox for \"a562c6ed21eb8ef4ab42744d889219e92d5054c0611e8d3149cecda822e32115\"" Feb 13 19:35:02.880817 containerd[1471]: time="2025-02-13T19:35:02.880802711Z" level=info msg="TearDown network for sandbox \"a562c6ed21eb8ef4ab42744d889219e92d5054c0611e8d3149cecda822e32115\" successfully" Feb 13 19:35:02.880840 containerd[1471]: time="2025-02-13T19:35:02.880815014Z" level=info msg="StopPodSandbox for \"a562c6ed21eb8ef4ab42744d889219e92d5054c0611e8d3149cecda822e32115\" returns successfully" Feb 13 19:35:02.881289 containerd[1471]: time="2025-02-13T19:35:02.881264001Z" level=info msg="RemovePodSandbox for \"a562c6ed21eb8ef4ab42744d889219e92d5054c0611e8d3149cecda822e32115\"" Feb 13 19:35:02.881328 containerd[1471]: time="2025-02-13T19:35:02.881290923Z" level=info msg="Forcibly stopping sandbox \"a562c6ed21eb8ef4ab42744d889219e92d5054c0611e8d3149cecda822e32115\"" Feb 13 19:35:02.881438 containerd[1471]: time="2025-02-13T19:35:02.881358101Z" level=info msg="TearDown network for sandbox \"a562c6ed21eb8ef4ab42744d889219e92d5054c0611e8d3149cecda822e32115\" successfully" Feb 13 19:35:02.885534 containerd[1471]: time="2025-02-13T19:35:02.885481709Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a562c6ed21eb8ef4ab42744d889219e92d5054c0611e8d3149cecda822e32115\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 19:35:02.885600 containerd[1471]: time="2025-02-13T19:35:02.885561762Z" level=info msg="RemovePodSandbox \"a562c6ed21eb8ef4ab42744d889219e92d5054c0611e8d3149cecda822e32115\" returns successfully" Feb 13 19:35:02.886144 containerd[1471]: time="2025-02-13T19:35:02.886117983Z" level=info msg="StopPodSandbox for \"0bb1d45c2c150f5d872fac8c961399f27779cb73a87b22395960ffa2be15d1a8\"" Feb 13 19:35:02.886270 containerd[1471]: time="2025-02-13T19:35:02.886251687Z" level=info msg="TearDown network for sandbox \"0bb1d45c2c150f5d872fac8c961399f27779cb73a87b22395960ffa2be15d1a8\" successfully" Feb 13 19:35:02.886305 containerd[1471]: time="2025-02-13T19:35:02.886269522Z" level=info msg="StopPodSandbox for \"0bb1d45c2c150f5d872fac8c961399f27779cb73a87b22395960ffa2be15d1a8\" returns successfully" Feb 13 19:35:02.886712 containerd[1471]: time="2025-02-13T19:35:02.886672400Z" level=info msg="RemovePodSandbox for \"0bb1d45c2c150f5d872fac8c961399f27779cb73a87b22395960ffa2be15d1a8\"" Feb 13 19:35:02.886712 containerd[1471]: time="2025-02-13T19:35:02.886707918Z" level=info msg="Forcibly stopping sandbox \"0bb1d45c2c150f5d872fac8c961399f27779cb73a87b22395960ffa2be15d1a8\"" Feb 13 19:35:02.886927 containerd[1471]: time="2025-02-13T19:35:02.886793481Z" level=info msg="TearDown network for sandbox \"0bb1d45c2c150f5d872fac8c961399f27779cb73a87b22395960ffa2be15d1a8\" successfully" Feb 13 19:35:02.890760 containerd[1471]: time="2025-02-13T19:35:02.890660791Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"0bb1d45c2c150f5d872fac8c961399f27779cb73a87b22395960ffa2be15d1a8\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 19:35:02.890760 containerd[1471]: time="2025-02-13T19:35:02.890713901Z" level=info msg="RemovePodSandbox \"0bb1d45c2c150f5d872fac8c961399f27779cb73a87b22395960ffa2be15d1a8\" returns successfully" Feb 13 19:35:02.891095 containerd[1471]: time="2025-02-13T19:35:02.890936246Z" level=info msg="StopPodSandbox for \"ead65a76249d719a53d5c094cc281aefba7a78a7130280cc86306f191dd30278\"" Feb 13 19:35:02.891095 containerd[1471]: time="2025-02-13T19:35:02.891063198Z" level=info msg="TearDown network for sandbox \"ead65a76249d719a53d5c094cc281aefba7a78a7130280cc86306f191dd30278\" successfully" Feb 13 19:35:02.891095 containerd[1471]: time="2025-02-13T19:35:02.891080130Z" level=info msg="StopPodSandbox for \"ead65a76249d719a53d5c094cc281aefba7a78a7130280cc86306f191dd30278\" returns successfully" Feb 13 19:35:02.891403 containerd[1471]: time="2025-02-13T19:35:02.891370465Z" level=info msg="RemovePodSandbox for \"ead65a76249d719a53d5c094cc281aefba7a78a7130280cc86306f191dd30278\"" Feb 13 19:35:02.891403 containerd[1471]: time="2025-02-13T19:35:02.891395412Z" level=info msg="Forcibly stopping sandbox \"ead65a76249d719a53d5c094cc281aefba7a78a7130280cc86306f191dd30278\"" Feb 13 19:35:02.891510 containerd[1471]: time="2025-02-13T19:35:02.891481316Z" level=info msg="TearDown network for sandbox \"ead65a76249d719a53d5c094cc281aefba7a78a7130280cc86306f191dd30278\" successfully" Feb 13 19:35:02.898288 containerd[1471]: time="2025-02-13T19:35:02.898236946Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ead65a76249d719a53d5c094cc281aefba7a78a7130280cc86306f191dd30278\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 19:35:02.898288 containerd[1471]: time="2025-02-13T19:35:02.898278916Z" level=info msg="RemovePodSandbox \"ead65a76249d719a53d5c094cc281aefba7a78a7130280cc86306f191dd30278\" returns successfully" Feb 13 19:35:02.898590 containerd[1471]: time="2025-02-13T19:35:02.898554442Z" level=info msg="StopPodSandbox for \"031441debf703c105e4025a6aa092a2cce8426cb78e7c00dc5573e7f67b8abb2\"" Feb 13 19:35:02.898688 containerd[1471]: time="2025-02-13T19:35:02.898658801Z" level=info msg="TearDown network for sandbox \"031441debf703c105e4025a6aa092a2cce8426cb78e7c00dc5573e7f67b8abb2\" successfully" Feb 13 19:35:02.898688 containerd[1471]: time="2025-02-13T19:35:02.898678859Z" level=info msg="StopPodSandbox for \"031441debf703c105e4025a6aa092a2cce8426cb78e7c00dc5573e7f67b8abb2\" returns successfully" Feb 13 19:35:02.898985 containerd[1471]: time="2025-02-13T19:35:02.898940429Z" level=info msg="RemovePodSandbox for \"031441debf703c105e4025a6aa092a2cce8426cb78e7c00dc5573e7f67b8abb2\"" Feb 13 19:35:02.899020 containerd[1471]: time="2025-02-13T19:35:02.898994190Z" level=info msg="Forcibly stopping sandbox \"031441debf703c105e4025a6aa092a2cce8426cb78e7c00dc5573e7f67b8abb2\"" Feb 13 19:35:02.899107 containerd[1471]: time="2025-02-13T19:35:02.899078311Z" level=info msg="TearDown network for sandbox \"031441debf703c105e4025a6aa092a2cce8426cb78e7c00dc5573e7f67b8abb2\" successfully" Feb 13 19:35:02.903214 containerd[1471]: time="2025-02-13T19:35:02.903189796Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"031441debf703c105e4025a6aa092a2cce8426cb78e7c00dc5573e7f67b8abb2\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 19:35:02.903294 containerd[1471]: time="2025-02-13T19:35:02.903233459Z" level=info msg="RemovePodSandbox \"031441debf703c105e4025a6aa092a2cce8426cb78e7c00dc5573e7f67b8abb2\" returns successfully" Feb 13 19:35:02.903595 containerd[1471]: time="2025-02-13T19:35:02.903558941Z" level=info msg="StopPodSandbox for \"edf3205debe8fad6533b2fdb2ea0be6081befe8af003e319ea5f631de16cbd2f\"" Feb 13 19:35:02.903724 containerd[1471]: time="2025-02-13T19:35:02.903704949Z" level=info msg="TearDown network for sandbox \"edf3205debe8fad6533b2fdb2ea0be6081befe8af003e319ea5f631de16cbd2f\" successfully" Feb 13 19:35:02.903724 containerd[1471]: time="2025-02-13T19:35:02.903719367Z" level=info msg="StopPodSandbox for \"edf3205debe8fad6533b2fdb2ea0be6081befe8af003e319ea5f631de16cbd2f\" returns successfully" Feb 13 19:35:02.904008 containerd[1471]: time="2025-02-13T19:35:02.903978532Z" level=info msg="RemovePodSandbox for \"edf3205debe8fad6533b2fdb2ea0be6081befe8af003e319ea5f631de16cbd2f\"" Feb 13 19:35:02.904008 containerd[1471]: time="2025-02-13T19:35:02.903997477Z" level=info msg="Forcibly stopping sandbox \"edf3205debe8fad6533b2fdb2ea0be6081befe8af003e319ea5f631de16cbd2f\"" Feb 13 19:35:02.904119 containerd[1471]: time="2025-02-13T19:35:02.904066479Z" level=info msg="TearDown network for sandbox \"edf3205debe8fad6533b2fdb2ea0be6081befe8af003e319ea5f631de16cbd2f\" successfully" Feb 13 19:35:02.908895 containerd[1471]: time="2025-02-13T19:35:02.908846239Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"edf3205debe8fad6533b2fdb2ea0be6081befe8af003e319ea5f631de16cbd2f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 19:35:02.908972 containerd[1471]: time="2025-02-13T19:35:02.908935389Z" level=info msg="RemovePodSandbox \"edf3205debe8fad6533b2fdb2ea0be6081befe8af003e319ea5f631de16cbd2f\" returns successfully" Feb 13 19:35:02.909361 containerd[1471]: time="2025-02-13T19:35:02.909330864Z" level=info msg="StopPodSandbox for \"a03ca714ca808a2148d76b09fbccf7564f4b62a6558236571eb5725797eee6e7\"" Feb 13 19:35:02.909506 containerd[1471]: time="2025-02-13T19:35:02.909482022Z" level=info msg="TearDown network for sandbox \"a03ca714ca808a2148d76b09fbccf7564f4b62a6558236571eb5725797eee6e7\" successfully" Feb 13 19:35:02.909536 containerd[1471]: time="2025-02-13T19:35:02.909503073Z" level=info msg="StopPodSandbox for \"a03ca714ca808a2148d76b09fbccf7564f4b62a6558236571eb5725797eee6e7\" returns successfully" Feb 13 19:35:02.909749 containerd[1471]: time="2025-02-13T19:35:02.909724695Z" level=info msg="RemovePodSandbox for \"a03ca714ca808a2148d76b09fbccf7564f4b62a6558236571eb5725797eee6e7\"" Feb 13 19:35:02.909791 containerd[1471]: time="2025-02-13T19:35:02.909748240Z" level=info msg="Forcibly stopping sandbox \"a03ca714ca808a2148d76b09fbccf7564f4b62a6558236571eb5725797eee6e7\"" Feb 13 19:35:02.909889 containerd[1471]: time="2025-02-13T19:35:02.909835547Z" level=info msg="TearDown network for sandbox \"a03ca714ca808a2148d76b09fbccf7564f4b62a6558236571eb5725797eee6e7\" successfully" Feb 13 19:35:02.914326 containerd[1471]: time="2025-02-13T19:35:02.914284245Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a03ca714ca808a2148d76b09fbccf7564f4b62a6558236571eb5725797eee6e7\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 19:35:02.914383 containerd[1471]: time="2025-02-13T19:35:02.914332327Z" level=info msg="RemovePodSandbox \"a03ca714ca808a2148d76b09fbccf7564f4b62a6558236571eb5725797eee6e7\" returns successfully" Feb 13 19:35:02.914619 containerd[1471]: time="2025-02-13T19:35:02.914587774Z" level=info msg="StopPodSandbox for \"06ef3d0e0ebe408a479b06c02f0339a62b7b2fcde9d04ece8f023dcc846230ff\"" Feb 13 19:35:02.914707 containerd[1471]: time="2025-02-13T19:35:02.914686873Z" level=info msg="TearDown network for sandbox \"06ef3d0e0ebe408a479b06c02f0339a62b7b2fcde9d04ece8f023dcc846230ff\" successfully" Feb 13 19:35:02.914769 containerd[1471]: time="2025-02-13T19:35:02.914710548Z" level=info msg="StopPodSandbox for \"06ef3d0e0ebe408a479b06c02f0339a62b7b2fcde9d04ece8f023dcc846230ff\" returns successfully" Feb 13 19:35:02.914939 containerd[1471]: time="2025-02-13T19:35:02.914905841Z" level=info msg="RemovePodSandbox for \"06ef3d0e0ebe408a479b06c02f0339a62b7b2fcde9d04ece8f023dcc846230ff\"" Feb 13 19:35:02.914939 containerd[1471]: time="2025-02-13T19:35:02.914931229Z" level=info msg="Forcibly stopping sandbox \"06ef3d0e0ebe408a479b06c02f0339a62b7b2fcde9d04ece8f023dcc846230ff\"" Feb 13 19:35:02.915152 containerd[1471]: time="2025-02-13T19:35:02.915018105Z" level=info msg="TearDown network for sandbox \"06ef3d0e0ebe408a479b06c02f0339a62b7b2fcde9d04ece8f023dcc846230ff\" successfully" Feb 13 19:35:02.919133 containerd[1471]: time="2025-02-13T19:35:02.919104121Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"06ef3d0e0ebe408a479b06c02f0339a62b7b2fcde9d04ece8f023dcc846230ff\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 19:35:02.921996 containerd[1471]: time="2025-02-13T19:35:02.919470170Z" level=info msg="RemovePodSandbox \"06ef3d0e0ebe408a479b06c02f0339a62b7b2fcde9d04ece8f023dcc846230ff\" returns successfully" Feb 13 19:35:02.927457 containerd[1471]: time="2025-02-13T19:35:02.927425480Z" level=info msg="StopPodSandbox for \"d2d9fde54ceac4a62dbea2f435664c6a7f34a7446a3af59e0f58b5f00402b958\"" Feb 13 19:35:02.927752 containerd[1471]: time="2025-02-13T19:35:02.927678151Z" level=info msg="TearDown network for sandbox \"d2d9fde54ceac4a62dbea2f435664c6a7f34a7446a3af59e0f58b5f00402b958\" successfully" Feb 13 19:35:02.927752 containerd[1471]: time="2025-02-13T19:35:02.927727094Z" level=info msg="StopPodSandbox for \"d2d9fde54ceac4a62dbea2f435664c6a7f34a7446a3af59e0f58b5f00402b958\" returns successfully" Feb 13 19:35:02.928215 containerd[1471]: time="2025-02-13T19:35:02.928181301Z" level=info msg="RemovePodSandbox for \"d2d9fde54ceac4a62dbea2f435664c6a7f34a7446a3af59e0f58b5f00402b958\"" Feb 13 19:35:02.928215 containerd[1471]: time="2025-02-13T19:35:02.928207371Z" level=info msg="Forcibly stopping sandbox \"d2d9fde54ceac4a62dbea2f435664c6a7f34a7446a3af59e0f58b5f00402b958\"" Feb 13 19:35:02.928313 containerd[1471]: time="2025-02-13T19:35:02.928281884Z" level=info msg="TearDown network for sandbox \"d2d9fde54ceac4a62dbea2f435664c6a7f34a7446a3af59e0f58b5f00402b958\" successfully" Feb 13 19:35:02.932425 containerd[1471]: time="2025-02-13T19:35:02.932388388Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d2d9fde54ceac4a62dbea2f435664c6a7f34a7446a3af59e0f58b5f00402b958\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 19:35:02.932482 containerd[1471]: time="2025-02-13T19:35:02.932429557Z" level=info msg="RemovePodSandbox \"d2d9fde54ceac4a62dbea2f435664c6a7f34a7446a3af59e0f58b5f00402b958\" returns successfully" Feb 13 19:35:02.932708 containerd[1471]: time="2025-02-13T19:35:02.932677771Z" level=info msg="StopPodSandbox for \"31ac702d7bb22517e09749816249e112fb40e9f991e46e2561c8bf0e3eca0037\"" Feb 13 19:35:02.932809 containerd[1471]: time="2025-02-13T19:35:02.932760869Z" level=info msg="TearDown network for sandbox \"31ac702d7bb22517e09749816249e112fb40e9f991e46e2561c8bf0e3eca0037\" successfully" Feb 13 19:35:02.932809 containerd[1471]: time="2025-02-13T19:35:02.932800896Z" level=info msg="StopPodSandbox for \"31ac702d7bb22517e09749816249e112fb40e9f991e46e2561c8bf0e3eca0037\" returns successfully" Feb 13 19:35:02.933018 containerd[1471]: time="2025-02-13T19:35:02.933000667Z" level=info msg="RemovePodSandbox for \"31ac702d7bb22517e09749816249e112fb40e9f991e46e2561c8bf0e3eca0037\"" Feb 13 19:35:02.933080 containerd[1471]: time="2025-02-13T19:35:02.933020034Z" level=info msg="Forcibly stopping sandbox \"31ac702d7bb22517e09749816249e112fb40e9f991e46e2561c8bf0e3eca0037\"" Feb 13 19:35:02.933123 containerd[1471]: time="2025-02-13T19:35:02.933095457Z" level=info msg="TearDown network for sandbox \"31ac702d7bb22517e09749816249e112fb40e9f991e46e2561c8bf0e3eca0037\" successfully" Feb 13 19:35:02.936840 containerd[1471]: time="2025-02-13T19:35:02.936813772Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"31ac702d7bb22517e09749816249e112fb40e9f991e46e2561c8bf0e3eca0037\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 19:35:02.936893 containerd[1471]: time="2025-02-13T19:35:02.936850021Z" level=info msg="RemovePodSandbox \"31ac702d7bb22517e09749816249e112fb40e9f991e46e2561c8bf0e3eca0037\" returns successfully" Feb 13 19:35:02.937385 containerd[1471]: time="2025-02-13T19:35:02.937234726Z" level=info msg="StopPodSandbox for \"e8758c1e58cd992a88100a40db0a296fb3ff15c8c61693316efe45a1dabc6060\"" Feb 13 19:35:02.937385 containerd[1471]: time="2025-02-13T19:35:02.937315289Z" level=info msg="TearDown network for sandbox \"e8758c1e58cd992a88100a40db0a296fb3ff15c8c61693316efe45a1dabc6060\" successfully" Feb 13 19:35:02.937385 containerd[1471]: time="2025-02-13T19:35:02.937345206Z" level=info msg="StopPodSandbox for \"e8758c1e58cd992a88100a40db0a296fb3ff15c8c61693316efe45a1dabc6060\" returns successfully" Feb 13 19:35:02.937705 containerd[1471]: time="2025-02-13T19:35:02.937665668Z" level=info msg="RemovePodSandbox for \"e8758c1e58cd992a88100a40db0a296fb3ff15c8c61693316efe45a1dabc6060\"" Feb 13 19:35:02.937740 containerd[1471]: time="2025-02-13T19:35:02.937714190Z" level=info msg="Forcibly stopping sandbox \"e8758c1e58cd992a88100a40db0a296fb3ff15c8c61693316efe45a1dabc6060\"" Feb 13 19:35:02.937886 containerd[1471]: time="2025-02-13T19:35:02.937831033Z" level=info msg="TearDown network for sandbox \"e8758c1e58cd992a88100a40db0a296fb3ff15c8c61693316efe45a1dabc6060\" successfully" Feb 13 19:35:02.941857 containerd[1471]: time="2025-02-13T19:35:02.941829773Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e8758c1e58cd992a88100a40db0a296fb3ff15c8c61693316efe45a1dabc6060\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 19:35:02.941906 containerd[1471]: time="2025-02-13T19:35:02.941869459Z" level=info msg="RemovePodSandbox \"e8758c1e58cd992a88100a40db0a296fb3ff15c8c61693316efe45a1dabc6060\" returns successfully" Feb 13 19:35:02.942144 containerd[1471]: time="2025-02-13T19:35:02.942108454Z" level=info msg="StopPodSandbox for \"f7978f4d3caf80c32d6384364e56674980211f2c5f335c86d6058bfdec2fee2e\"" Feb 13 19:35:02.942308 containerd[1471]: time="2025-02-13T19:35:02.942187456Z" level=info msg="TearDown network for sandbox \"f7978f4d3caf80c32d6384364e56674980211f2c5f335c86d6058bfdec2fee2e\" successfully" Feb 13 19:35:02.942308 containerd[1471]: time="2025-02-13T19:35:02.942197865Z" level=info msg="StopPodSandbox for \"f7978f4d3caf80c32d6384364e56674980211f2c5f335c86d6058bfdec2fee2e\" returns successfully" Feb 13 19:35:02.942410 containerd[1471]: time="2025-02-13T19:35:02.942391345Z" level=info msg="RemovePodSandbox for \"f7978f4d3caf80c32d6384364e56674980211f2c5f335c86d6058bfdec2fee2e\"" Feb 13 19:35:02.942456 containerd[1471]: time="2025-02-13T19:35:02.942412294Z" level=info msg="Forcibly stopping sandbox \"f7978f4d3caf80c32d6384364e56674980211f2c5f335c86d6058bfdec2fee2e\"" Feb 13 19:35:02.942505 containerd[1471]: time="2025-02-13T19:35:02.942479233Z" level=info msg="TearDown network for sandbox \"f7978f4d3caf80c32d6384364e56674980211f2c5f335c86d6058bfdec2fee2e\" successfully" Feb 13 19:35:02.946492 containerd[1471]: time="2025-02-13T19:35:02.946470467Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"f7978f4d3caf80c32d6384364e56674980211f2c5f335c86d6058bfdec2fee2e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 19:35:02.946578 containerd[1471]: time="2025-02-13T19:35:02.946501677Z" level=info msg="RemovePodSandbox \"f7978f4d3caf80c32d6384364e56674980211f2c5f335c86d6058bfdec2fee2e\" returns successfully" Feb 13 19:35:02.946912 containerd[1471]: time="2025-02-13T19:35:02.946865752Z" level=info msg="StopPodSandbox for \"57d82f82dcc70e58e1433a34e35ff480974978c9bcfafbd8a88124a3d4bb74c2\"" Feb 13 19:35:02.947035 containerd[1471]: time="2025-02-13T19:35:02.947013364Z" level=info msg="TearDown network for sandbox \"57d82f82dcc70e58e1433a34e35ff480974978c9bcfafbd8a88124a3d4bb74c2\" successfully" Feb 13 19:35:02.947035 containerd[1471]: time="2025-02-13T19:35:02.947027551Z" level=info msg="StopPodSandbox for \"57d82f82dcc70e58e1433a34e35ff480974978c9bcfafbd8a88124a3d4bb74c2\" returns successfully" Feb 13 19:35:02.947989 containerd[1471]: time="2025-02-13T19:35:02.947309098Z" level=info msg="RemovePodSandbox for \"57d82f82dcc70e58e1433a34e35ff480974978c9bcfafbd8a88124a3d4bb74c2\"" Feb 13 19:35:02.947989 containerd[1471]: time="2025-02-13T19:35:02.947343333Z" level=info msg="Forcibly stopping sandbox \"57d82f82dcc70e58e1433a34e35ff480974978c9bcfafbd8a88124a3d4bb74c2\"" Feb 13 19:35:02.947989 containerd[1471]: time="2025-02-13T19:35:02.947414238Z" level=info msg="TearDown network for sandbox \"57d82f82dcc70e58e1433a34e35ff480974978c9bcfafbd8a88124a3d4bb74c2\" successfully" Feb 13 19:35:02.951465 containerd[1471]: time="2025-02-13T19:35:02.951428458Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"57d82f82dcc70e58e1433a34e35ff480974978c9bcfafbd8a88124a3d4bb74c2\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 19:35:02.951529 containerd[1471]: time="2025-02-13T19:35:02.951499854Z" level=info msg="RemovePodSandbox \"57d82f82dcc70e58e1433a34e35ff480974978c9bcfafbd8a88124a3d4bb74c2\" returns successfully" Feb 13 19:35:02.951814 containerd[1471]: time="2025-02-13T19:35:02.951779277Z" level=info msg="StopPodSandbox for \"345c2d0a90e62e6b6da2cd0d7a3cf3b4c8615b6b47c2f70fb43fdef45c00c98f\"" Feb 13 19:35:02.951941 containerd[1471]: time="2025-02-13T19:35:02.951874018Z" level=info msg="TearDown network for sandbox \"345c2d0a90e62e6b6da2cd0d7a3cf3b4c8615b6b47c2f70fb43fdef45c00c98f\" successfully" Feb 13 19:35:02.951941 containerd[1471]: time="2025-02-13T19:35:02.951883776Z" level=info msg="StopPodSandbox for \"345c2d0a90e62e6b6da2cd0d7a3cf3b4c8615b6b47c2f70fb43fdef45c00c98f\" returns successfully" Feb 13 19:35:02.952172 containerd[1471]: time="2025-02-13T19:35:02.952144053Z" level=info msg="RemovePodSandbox for \"345c2d0a90e62e6b6da2cd0d7a3cf3b4c8615b6b47c2f70fb43fdef45c00c98f\"" Feb 13 19:35:02.952172 containerd[1471]: time="2025-02-13T19:35:02.952171375Z" level=info msg="Forcibly stopping sandbox \"345c2d0a90e62e6b6da2cd0d7a3cf3b4c8615b6b47c2f70fb43fdef45c00c98f\"" Feb 13 19:35:02.952282 containerd[1471]: time="2025-02-13T19:35:02.952244234Z" level=info msg="TearDown network for sandbox \"345c2d0a90e62e6b6da2cd0d7a3cf3b4c8615b6b47c2f70fb43fdef45c00c98f\" successfully" Feb 13 19:35:02.955865 containerd[1471]: time="2025-02-13T19:35:02.955836188Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"345c2d0a90e62e6b6da2cd0d7a3cf3b4c8615b6b47c2f70fb43fdef45c00c98f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 19:35:02.955918 containerd[1471]: time="2025-02-13T19:35:02.955889851Z" level=info msg="RemovePodSandbox \"345c2d0a90e62e6b6da2cd0d7a3cf3b4c8615b6b47c2f70fb43fdef45c00c98f\" returns successfully" Feb 13 19:35:02.956354 containerd[1471]: time="2025-02-13T19:35:02.956189322Z" level=info msg="StopPodSandbox for \"d4839a30a8fbd6f329284e6ddb9cc2ab04aa9452fd825eee6553156e626184f7\"" Feb 13 19:35:02.956354 containerd[1471]: time="2025-02-13T19:35:02.956283251Z" level=info msg="TearDown network for sandbox \"d4839a30a8fbd6f329284e6ddb9cc2ab04aa9452fd825eee6553156e626184f7\" successfully" Feb 13 19:35:02.956354 containerd[1471]: time="2025-02-13T19:35:02.956295084Z" level=info msg="StopPodSandbox for \"d4839a30a8fbd6f329284e6ddb9cc2ab04aa9452fd825eee6553156e626184f7\" returns successfully" Feb 13 19:35:02.956551 containerd[1471]: time="2025-02-13T19:35:02.956529531Z" level=info msg="RemovePodSandbox for \"d4839a30a8fbd6f329284e6ddb9cc2ab04aa9452fd825eee6553156e626184f7\"" Feb 13 19:35:02.956587 containerd[1471]: time="2025-02-13T19:35:02.956557104Z" level=info msg="Forcibly stopping sandbox \"d4839a30a8fbd6f329284e6ddb9cc2ab04aa9452fd825eee6553156e626184f7\"" Feb 13 19:35:02.956686 containerd[1471]: time="2025-02-13T19:35:02.956639090Z" level=info msg="TearDown network for sandbox \"d4839a30a8fbd6f329284e6ddb9cc2ab04aa9452fd825eee6553156e626184f7\" successfully" Feb 13 19:35:02.960584 containerd[1471]: time="2025-02-13T19:35:02.960558008Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d4839a30a8fbd6f329284e6ddb9cc2ab04aa9452fd825eee6553156e626184f7\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 19:35:02.960661 containerd[1471]: time="2025-02-13T19:35:02.960601681Z" level=info msg="RemovePodSandbox \"d4839a30a8fbd6f329284e6ddb9cc2ab04aa9452fd825eee6553156e626184f7\" returns successfully" Feb 13 19:35:02.960841 containerd[1471]: time="2025-02-13T19:35:02.960807854Z" level=info msg="StopPodSandbox for \"c92443ca87370878a9ab845df4e7daaa7766a7e95a75bab8b94717ca20ae0639\"" Feb 13 19:35:02.960901 containerd[1471]: time="2025-02-13T19:35:02.960888167Z" level=info msg="TearDown network for sandbox \"c92443ca87370878a9ab845df4e7daaa7766a7e95a75bab8b94717ca20ae0639\" successfully" Feb 13 19:35:02.960924 containerd[1471]: time="2025-02-13T19:35:02.960899239Z" level=info msg="StopPodSandbox for \"c92443ca87370878a9ab845df4e7daaa7766a7e95a75bab8b94717ca20ae0639\" returns successfully" Feb 13 19:35:02.961211 containerd[1471]: time="2025-02-13T19:35:02.961190494Z" level=info msg="RemovePodSandbox for \"c92443ca87370878a9ab845df4e7daaa7766a7e95a75bab8b94717ca20ae0639\"" Feb 13 19:35:02.961255 containerd[1471]: time="2025-02-13T19:35:02.961213057Z" level=info msg="Forcibly stopping sandbox \"c92443ca87370878a9ab845df4e7daaa7766a7e95a75bab8b94717ca20ae0639\"" Feb 13 19:35:02.961305 containerd[1471]: time="2025-02-13T19:35:02.961274745Z" level=info msg="TearDown network for sandbox \"c92443ca87370878a9ab845df4e7daaa7766a7e95a75bab8b94717ca20ae0639\" successfully" Feb 13 19:35:02.964915 containerd[1471]: time="2025-02-13T19:35:02.964889241Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c92443ca87370878a9ab845df4e7daaa7766a7e95a75bab8b94717ca20ae0639\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 19:35:02.965007 containerd[1471]: time="2025-02-13T19:35:02.964924057Z" level=info msg="RemovePodSandbox \"c92443ca87370878a9ab845df4e7daaa7766a7e95a75bab8b94717ca20ae0639\" returns successfully" Feb 13 19:35:02.965182 containerd[1471]: time="2025-02-13T19:35:02.965156652Z" level=info msg="StopPodSandbox for \"858152d291a9a1a310f60b77858fdb6e08a56c2f3ec9303b5eace93674f2ef16\"" Feb 13 19:35:02.965268 containerd[1471]: time="2025-02-13T19:35:02.965236534Z" level=info msg="TearDown network for sandbox \"858152d291a9a1a310f60b77858fdb6e08a56c2f3ec9303b5eace93674f2ef16\" successfully" Feb 13 19:35:02.965268 containerd[1471]: time="2025-02-13T19:35:02.965249008Z" level=info msg="StopPodSandbox for \"858152d291a9a1a310f60b77858fdb6e08a56c2f3ec9303b5eace93674f2ef16\" returns successfully" Feb 13 19:35:02.965641 containerd[1471]: time="2025-02-13T19:35:02.965596782Z" level=info msg="RemovePodSandbox for \"858152d291a9a1a310f60b77858fdb6e08a56c2f3ec9303b5eace93674f2ef16\"" Feb 13 19:35:02.969690 containerd[1471]: time="2025-02-13T19:35:02.969659884Z" level=info msg="Forcibly stopping sandbox \"858152d291a9a1a310f60b77858fdb6e08a56c2f3ec9303b5eace93674f2ef16\"" Feb 13 19:35:02.969766 containerd[1471]: time="2025-02-13T19:35:02.969735669Z" level=info msg="TearDown network for sandbox \"858152d291a9a1a310f60b77858fdb6e08a56c2f3ec9303b5eace93674f2ef16\" successfully" Feb 13 19:35:02.973365 containerd[1471]: time="2025-02-13T19:35:02.973320880Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"858152d291a9a1a310f60b77858fdb6e08a56c2f3ec9303b5eace93674f2ef16\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 19:35:02.973365 containerd[1471]: time="2025-02-13T19:35:02.973361337Z" level=info msg="RemovePodSandbox \"858152d291a9a1a310f60b77858fdb6e08a56c2f3ec9303b5eace93674f2ef16\" returns successfully" Feb 13 19:35:02.973721 containerd[1471]: time="2025-02-13T19:35:02.973684363Z" level=info msg="StopPodSandbox for \"4247e6051f4e3e2abe906fef95e20e5b3e5882a4b8e203441fadef632bbf9deb\"" Feb 13 19:35:02.973844 containerd[1471]: time="2025-02-13T19:35:02.973826705Z" level=info msg="TearDown network for sandbox \"4247e6051f4e3e2abe906fef95e20e5b3e5882a4b8e203441fadef632bbf9deb\" successfully" Feb 13 19:35:02.973844 containerd[1471]: time="2025-02-13T19:35:02.973842014Z" level=info msg="StopPodSandbox for \"4247e6051f4e3e2abe906fef95e20e5b3e5882a4b8e203441fadef632bbf9deb\" returns successfully" Feb 13 19:35:02.974187 containerd[1471]: time="2025-02-13T19:35:02.974147687Z" level=info msg="RemovePodSandbox for \"4247e6051f4e3e2abe906fef95e20e5b3e5882a4b8e203441fadef632bbf9deb\"" Feb 13 19:35:02.974187 containerd[1471]: time="2025-02-13T19:35:02.974171672Z" level=info msg="Forcibly stopping sandbox \"4247e6051f4e3e2abe906fef95e20e5b3e5882a4b8e203441fadef632bbf9deb\"" Feb 13 19:35:02.974350 containerd[1471]: time="2025-02-13T19:35:02.974241697Z" level=info msg="TearDown network for sandbox \"4247e6051f4e3e2abe906fef95e20e5b3e5882a4b8e203441fadef632bbf9deb\" successfully" Feb 13 19:35:02.977738 containerd[1471]: time="2025-02-13T19:35:02.977689926Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"4247e6051f4e3e2abe906fef95e20e5b3e5882a4b8e203441fadef632bbf9deb\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 19:35:02.977738 containerd[1471]: time="2025-02-13T19:35:02.977729301Z" level=info msg="RemovePodSandbox \"4247e6051f4e3e2abe906fef95e20e5b3e5882a4b8e203441fadef632bbf9deb\" returns successfully" Feb 13 19:35:02.978092 containerd[1471]: time="2025-02-13T19:35:02.978066575Z" level=info msg="StopPodSandbox for \"64fd378561b5462bd9a9a451a83f2f63bc302a1d356185e49501f35817f9f4b7\"" Feb 13 19:35:02.978191 containerd[1471]: time="2025-02-13T19:35:02.978173278Z" level=info msg="TearDown network for sandbox \"64fd378561b5462bd9a9a451a83f2f63bc302a1d356185e49501f35817f9f4b7\" successfully" Feb 13 19:35:02.978231 containerd[1471]: time="2025-02-13T19:35:02.978189198Z" level=info msg="StopPodSandbox for \"64fd378561b5462bd9a9a451a83f2f63bc302a1d356185e49501f35817f9f4b7\" returns successfully" Feb 13 19:35:02.978497 containerd[1471]: time="2025-02-13T19:35:02.978473791Z" level=info msg="RemovePodSandbox for \"64fd378561b5462bd9a9a451a83f2f63bc302a1d356185e49501f35817f9f4b7\"" Feb 13 19:35:02.978534 containerd[1471]: time="2025-02-13T19:35:02.978502376Z" level=info msg="Forcibly stopping sandbox \"64fd378561b5462bd9a9a451a83f2f63bc302a1d356185e49501f35817f9f4b7\"" Feb 13 19:35:02.978613 containerd[1471]: time="2025-02-13T19:35:02.978582810Z" level=info msg="TearDown network for sandbox \"64fd378561b5462bd9a9a451a83f2f63bc302a1d356185e49501f35817f9f4b7\" successfully" Feb 13 19:35:02.982222 containerd[1471]: time="2025-02-13T19:35:02.982195893Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"64fd378561b5462bd9a9a451a83f2f63bc302a1d356185e49501f35817f9f4b7\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 19:35:02.982278 containerd[1471]: time="2025-02-13T19:35:02.982230910Z" level=info msg="RemovePodSandbox \"64fd378561b5462bd9a9a451a83f2f63bc302a1d356185e49501f35817f9f4b7\" returns successfully" Feb 13 19:35:02.982530 containerd[1471]: time="2025-02-13T19:35:02.982499002Z" level=info msg="StopPodSandbox for \"54caebe45dc792160151fcf140d8c7051ca636a79446a01f5896bd625433a35c\"" Feb 13 19:35:02.982669 containerd[1471]: time="2025-02-13T19:35:02.982597419Z" level=info msg="TearDown network for sandbox \"54caebe45dc792160151fcf140d8c7051ca636a79446a01f5896bd625433a35c\" successfully" Feb 13 19:35:02.982669 containerd[1471]: time="2025-02-13T19:35:02.982611236Z" level=info msg="StopPodSandbox for \"54caebe45dc792160151fcf140d8c7051ca636a79446a01f5896bd625433a35c\" returns successfully" Feb 13 19:35:02.982856 containerd[1471]: time="2025-02-13T19:35:02.982830364Z" level=info msg="RemovePodSandbox for \"54caebe45dc792160151fcf140d8c7051ca636a79446a01f5896bd625433a35c\"" Feb 13 19:35:02.982900 containerd[1471]: time="2025-02-13T19:35:02.982859430Z" level=info msg="Forcibly stopping sandbox \"54caebe45dc792160151fcf140d8c7051ca636a79446a01f5896bd625433a35c\"" Feb 13 19:35:02.982989 containerd[1471]: time="2025-02-13T19:35:02.982932388Z" level=info msg="TearDown network for sandbox \"54caebe45dc792160151fcf140d8c7051ca636a79446a01f5896bd625433a35c\" successfully" Feb 13 19:35:02.988199 containerd[1471]: time="2025-02-13T19:35:02.988159993Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"54caebe45dc792160151fcf140d8c7051ca636a79446a01f5896bd625433a35c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 19:35:02.988199 containerd[1471]: time="2025-02-13T19:35:02.988196052Z" level=info msg="RemovePodSandbox \"54caebe45dc792160151fcf140d8c7051ca636a79446a01f5896bd625433a35c\" returns successfully" Feb 13 19:35:02.988510 containerd[1471]: time="2025-02-13T19:35:02.988450868Z" level=info msg="StopPodSandbox for \"7833ac8fa2136c722d2ca862f3f76532a469e1f4b1ac9dffb34c7d5c310ab08e\"" Feb 13 19:35:02.988559 containerd[1471]: time="2025-02-13T19:35:02.988528085Z" level=info msg="TearDown network for sandbox \"7833ac8fa2136c722d2ca862f3f76532a469e1f4b1ac9dffb34c7d5c310ab08e\" successfully" Feb 13 19:35:02.988559 containerd[1471]: time="2025-02-13T19:35:02.988536472Z" level=info msg="StopPodSandbox for \"7833ac8fa2136c722d2ca862f3f76532a469e1f4b1ac9dffb34c7d5c310ab08e\" returns successfully" Feb 13 19:35:02.989042 containerd[1471]: time="2025-02-13T19:35:02.989008702Z" level=info msg="RemovePodSandbox for \"7833ac8fa2136c722d2ca862f3f76532a469e1f4b1ac9dffb34c7d5c310ab08e\"" Feb 13 19:35:02.989088 containerd[1471]: time="2025-02-13T19:35:02.989053378Z" level=info msg="Forcibly stopping sandbox \"7833ac8fa2136c722d2ca862f3f76532a469e1f4b1ac9dffb34c7d5c310ab08e\"" Feb 13 19:35:02.989226 containerd[1471]: time="2025-02-13T19:35:02.989168427Z" level=info msg="TearDown network for sandbox \"7833ac8fa2136c722d2ca862f3f76532a469e1f4b1ac9dffb34c7d5c310ab08e\" successfully" Feb 13 19:35:02.993557 containerd[1471]: time="2025-02-13T19:35:02.993522605Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"7833ac8fa2136c722d2ca862f3f76532a469e1f4b1ac9dffb34c7d5c310ab08e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 19:35:02.993630 containerd[1471]: time="2025-02-13T19:35:02.993563433Z" level=info msg="RemovePodSandbox \"7833ac8fa2136c722d2ca862f3f76532a469e1f4b1ac9dffb34c7d5c310ab08e\" returns successfully" Feb 13 19:35:02.993841 containerd[1471]: time="2025-02-13T19:35:02.993819863Z" level=info msg="StopPodSandbox for \"a91f030bc7a23a09f6646474e2c71a65b36f8e3df470d28d464359cc5803641c\"" Feb 13 19:35:02.994032 containerd[1471]: time="2025-02-13T19:35:02.994003493Z" level=info msg="TearDown network for sandbox \"a91f030bc7a23a09f6646474e2c71a65b36f8e3df470d28d464359cc5803641c\" successfully" Feb 13 19:35:02.994032 containerd[1471]: time="2025-02-13T19:35:02.994019723Z" level=info msg="StopPodSandbox for \"a91f030bc7a23a09f6646474e2c71a65b36f8e3df470d28d464359cc5803641c\" returns successfully" Feb 13 19:35:02.994299 containerd[1471]: time="2025-02-13T19:35:02.994277596Z" level=info msg="RemovePodSandbox for \"a91f030bc7a23a09f6646474e2c71a65b36f8e3df470d28d464359cc5803641c\"" Feb 13 19:35:02.994299 containerd[1471]: time="2025-02-13T19:35:02.994299467Z" level=info msg="Forcibly stopping sandbox \"a91f030bc7a23a09f6646474e2c71a65b36f8e3df470d28d464359cc5803641c\"" Feb 13 19:35:02.994390 containerd[1471]: time="2025-02-13T19:35:02.994363821Z" level=info msg="TearDown network for sandbox \"a91f030bc7a23a09f6646474e2c71a65b36f8e3df470d28d464359cc5803641c\" successfully" Feb 13 19:35:02.998167 containerd[1471]: time="2025-02-13T19:35:02.998107403Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a91f030bc7a23a09f6646474e2c71a65b36f8e3df470d28d464359cc5803641c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 19:35:02.998237 containerd[1471]: time="2025-02-13T19:35:02.998216151Z" level=info msg="RemovePodSandbox \"a91f030bc7a23a09f6646474e2c71a65b36f8e3df470d28d464359cc5803641c\" returns successfully" Feb 13 19:35:02.998985 containerd[1471]: time="2025-02-13T19:35:02.998780407Z" level=info msg="StopPodSandbox for \"f63de61b705d714d0a8b201be806c0e5c6e0db7123b13fe53d6b45edc90fb2cc\"" Feb 13 19:35:02.998985 containerd[1471]: time="2025-02-13T19:35:02.998910786Z" level=info msg="TearDown network for sandbox \"f63de61b705d714d0a8b201be806c0e5c6e0db7123b13fe53d6b45edc90fb2cc\" successfully" Feb 13 19:35:02.998985 containerd[1471]: time="2025-02-13T19:35:02.998927989Z" level=info msg="StopPodSandbox for \"f63de61b705d714d0a8b201be806c0e5c6e0db7123b13fe53d6b45edc90fb2cc\" returns successfully" Feb 13 19:35:02.999466 containerd[1471]: time="2025-02-13T19:35:02.999441028Z" level=info msg="RemovePodSandbox for \"f63de61b705d714d0a8b201be806c0e5c6e0db7123b13fe53d6b45edc90fb2cc\"" Feb 13 19:35:02.999505 containerd[1471]: time="2025-02-13T19:35:02.999472438Z" level=info msg="Forcibly stopping sandbox \"f63de61b705d714d0a8b201be806c0e5c6e0db7123b13fe53d6b45edc90fb2cc\"" Feb 13 19:35:02.999619 containerd[1471]: time="2025-02-13T19:35:02.999549915Z" level=info msg="TearDown network for sandbox \"f63de61b705d714d0a8b201be806c0e5c6e0db7123b13fe53d6b45edc90fb2cc\" successfully" Feb 13 19:35:03.004096 containerd[1471]: time="2025-02-13T19:35:03.004039870Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"f63de61b705d714d0a8b201be806c0e5c6e0db7123b13fe53d6b45edc90fb2cc\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 19:35:03.004182 containerd[1471]: time="2025-02-13T19:35:03.004113682Z" level=info msg="RemovePodSandbox \"f63de61b705d714d0a8b201be806c0e5c6e0db7123b13fe53d6b45edc90fb2cc\" returns successfully" Feb 13 19:35:05.704413 systemd[1]: Started sshd@16-10.0.0.36:22-10.0.0.1:44998.service - OpenSSH per-connection server daemon (10.0.0.1:44998). Feb 13 19:35:05.748712 sshd[5959]: Accepted publickey for core from 10.0.0.1 port 44998 ssh2: RSA SHA256:M1JiL2vDeW2xAdjMAPzmPoLJm0DJz/C0inZYUswlgoA Feb 13 19:35:05.750707 sshd-session[5959]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:35:05.755922 systemd-logind[1449]: New session 17 of user core. Feb 13 19:35:05.763127 systemd[1]: Started session-17.scope - Session 17 of User core. Feb 13 19:35:05.893014 sshd[5961]: Connection closed by 10.0.0.1 port 44998 Feb 13 19:35:05.892726 sshd-session[5959]: pam_unix(sshd:session): session closed for user core Feb 13 19:35:05.896826 systemd[1]: sshd@16-10.0.0.36:22-10.0.0.1:44998.service: Deactivated successfully. Feb 13 19:35:05.899484 systemd[1]: session-17.scope: Deactivated successfully. Feb 13 19:35:05.900254 systemd-logind[1449]: Session 17 logged out. Waiting for processes to exit. Feb 13 19:35:05.901238 systemd-logind[1449]: Removed session 17. Feb 13 19:35:10.905398 systemd[1]: Started sshd@17-10.0.0.36:22-10.0.0.1:45006.service - OpenSSH per-connection server daemon (10.0.0.1:45006). Feb 13 19:35:10.960120 sshd[6020]: Accepted publickey for core from 10.0.0.1 port 45006 ssh2: RSA SHA256:M1JiL2vDeW2xAdjMAPzmPoLJm0DJz/C0inZYUswlgoA Feb 13 19:35:10.962235 sshd-session[6020]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:35:10.966828 systemd-logind[1449]: New session 18 of user core. Feb 13 19:35:10.976130 systemd[1]: Started session-18.scope - Session 18 of User core. 
Feb 13 19:35:11.187865 sshd[6022]: Connection closed by 10.0.0.1 port 45006 Feb 13 19:35:11.188162 sshd-session[6020]: pam_unix(sshd:session): session closed for user core Feb 13 19:35:11.192729 systemd[1]: sshd@17-10.0.0.36:22-10.0.0.1:45006.service: Deactivated successfully. Feb 13 19:35:11.194736 systemd[1]: session-18.scope: Deactivated successfully. Feb 13 19:35:11.195390 systemd-logind[1449]: Session 18 logged out. Waiting for processes to exit. Feb 13 19:35:11.196368 systemd-logind[1449]: Removed session 18. Feb 13 19:35:16.200349 systemd[1]: Started sshd@18-10.0.0.36:22-10.0.0.1:52096.service - OpenSSH per-connection server daemon (10.0.0.1:52096). Feb 13 19:35:16.253091 sshd[6034]: Accepted publickey for core from 10.0.0.1 port 52096 ssh2: RSA SHA256:M1JiL2vDeW2xAdjMAPzmPoLJm0DJz/C0inZYUswlgoA Feb 13 19:35:16.255016 sshd-session[6034]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:35:16.259411 systemd-logind[1449]: New session 19 of user core. Feb 13 19:35:16.266129 systemd[1]: Started session-19.scope - Session 19 of User core. Feb 13 19:35:16.383353 sshd[6036]: Connection closed by 10.0.0.1 port 52096 Feb 13 19:35:16.383743 sshd-session[6034]: pam_unix(sshd:session): session closed for user core Feb 13 19:35:16.393616 systemd[1]: sshd@18-10.0.0.36:22-10.0.0.1:52096.service: Deactivated successfully. Feb 13 19:35:16.395745 systemd[1]: session-19.scope: Deactivated successfully. Feb 13 19:35:16.397077 systemd-logind[1449]: Session 19 logged out. Waiting for processes to exit. Feb 13 19:35:16.407355 systemd[1]: Started sshd@19-10.0.0.36:22-10.0.0.1:52110.service - OpenSSH per-connection server daemon (10.0.0.1:52110). Feb 13 19:35:16.408349 systemd-logind[1449]: Removed session 19. 
Feb 13 19:35:16.452482 sshd[6048]: Accepted publickey for core from 10.0.0.1 port 52110 ssh2: RSA SHA256:M1JiL2vDeW2xAdjMAPzmPoLJm0DJz/C0inZYUswlgoA Feb 13 19:35:16.455389 sshd-session[6048]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:35:16.460935 systemd-logind[1449]: New session 20 of user core. Feb 13 19:35:16.467294 systemd[1]: Started session-20.scope - Session 20 of User core. Feb 13 19:35:17.315583 sshd[6050]: Connection closed by 10.0.0.1 port 52110 Feb 13 19:35:17.316561 sshd-session[6048]: pam_unix(sshd:session): session closed for user core Feb 13 19:35:17.326163 systemd[1]: sshd@19-10.0.0.36:22-10.0.0.1:52110.service: Deactivated successfully. Feb 13 19:35:17.328291 systemd[1]: session-20.scope: Deactivated successfully. Feb 13 19:35:17.330476 systemd-logind[1449]: Session 20 logged out. Waiting for processes to exit. Feb 13 19:35:17.338301 systemd[1]: Started sshd@20-10.0.0.36:22-10.0.0.1:52118.service - OpenSSH per-connection server daemon (10.0.0.1:52118). Feb 13 19:35:17.340193 systemd-logind[1449]: Removed session 20. Feb 13 19:35:17.384631 sshd[6061]: Accepted publickey for core from 10.0.0.1 port 52118 ssh2: RSA SHA256:M1JiL2vDeW2xAdjMAPzmPoLJm0DJz/C0inZYUswlgoA Feb 13 19:35:17.386406 sshd-session[6061]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:35:17.391572 systemd-logind[1449]: New session 21 of user core. Feb 13 19:35:17.401101 systemd[1]: Started session-21.scope - Session 21 of User core. 
Feb 13 19:35:17.645923 kubelet[2607]: E0213 19:35:17.645779 2607 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:35:18.196538 sshd[6063]: Connection closed by 10.0.0.1 port 52118 Feb 13 19:35:18.199826 sshd-session[6061]: pam_unix(sshd:session): session closed for user core Feb 13 19:35:18.208253 systemd[1]: sshd@20-10.0.0.36:22-10.0.0.1:52118.service: Deactivated successfully. Feb 13 19:35:18.213022 systemd[1]: session-21.scope: Deactivated successfully. Feb 13 19:35:18.216382 systemd-logind[1449]: Session 21 logged out. Waiting for processes to exit. Feb 13 19:35:18.224204 systemd[1]: Started sshd@21-10.0.0.36:22-10.0.0.1:52128.service - OpenSSH per-connection server daemon (10.0.0.1:52128). Feb 13 19:35:18.225360 systemd-logind[1449]: Removed session 21. Feb 13 19:35:18.263134 sshd[6101]: Accepted publickey for core from 10.0.0.1 port 52128 ssh2: RSA SHA256:M1JiL2vDeW2xAdjMAPzmPoLJm0DJz/C0inZYUswlgoA Feb 13 19:35:18.265238 sshd-session[6101]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:35:18.269493 systemd-logind[1449]: New session 22 of user core. Feb 13 19:35:18.281077 systemd[1]: Started session-22.scope - Session 22 of User core. Feb 13 19:35:18.508663 sshd[6103]: Connection closed by 10.0.0.1 port 52128 Feb 13 19:35:18.509242 sshd-session[6101]: pam_unix(sshd:session): session closed for user core Feb 13 19:35:18.517138 systemd[1]: sshd@21-10.0.0.36:22-10.0.0.1:52128.service: Deactivated successfully. Feb 13 19:35:18.519575 systemd[1]: session-22.scope: Deactivated successfully. Feb 13 19:35:18.521903 systemd-logind[1449]: Session 22 logged out. Waiting for processes to exit. Feb 13 19:35:18.527239 systemd[1]: Started sshd@22-10.0.0.36:22-10.0.0.1:52144.service - OpenSSH per-connection server daemon (10.0.0.1:52144). Feb 13 19:35:18.528040 systemd-logind[1449]: Removed session 22. 
Feb 13 19:35:18.567879 sshd[6113]: Accepted publickey for core from 10.0.0.1 port 52144 ssh2: RSA SHA256:M1JiL2vDeW2xAdjMAPzmPoLJm0DJz/C0inZYUswlgoA Feb 13 19:35:18.569434 sshd-session[6113]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:35:18.573927 systemd-logind[1449]: New session 23 of user core. Feb 13 19:35:18.583209 systemd[1]: Started session-23.scope - Session 23 of User core. Feb 13 19:35:18.715087 sshd[6115]: Connection closed by 10.0.0.1 port 52144 Feb 13 19:35:18.715478 sshd-session[6113]: pam_unix(sshd:session): session closed for user core Feb 13 19:35:18.719931 systemd[1]: sshd@22-10.0.0.36:22-10.0.0.1:52144.service: Deactivated successfully. Feb 13 19:35:18.723029 systemd[1]: session-23.scope: Deactivated successfully. Feb 13 19:35:18.723778 systemd-logind[1449]: Session 23 logged out. Waiting for processes to exit. Feb 13 19:35:18.724785 systemd-logind[1449]: Removed session 23. Feb 13 19:35:19.646215 kubelet[2607]: E0213 19:35:19.646146 2607 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:35:23.730743 systemd[1]: Started sshd@23-10.0.0.36:22-10.0.0.1:52160.service - OpenSSH per-connection server daemon (10.0.0.1:52160). Feb 13 19:35:23.774188 sshd[6136]: Accepted publickey for core from 10.0.0.1 port 52160 ssh2: RSA SHA256:M1JiL2vDeW2xAdjMAPzmPoLJm0DJz/C0inZYUswlgoA Feb 13 19:35:23.775626 sshd-session[6136]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:35:23.779679 systemd-logind[1449]: New session 24 of user core. Feb 13 19:35:23.788087 systemd[1]: Started session-24.scope - Session 24 of User core. 
Feb 13 19:35:23.898766 sshd[6138]: Connection closed by 10.0.0.1 port 52160 Feb 13 19:35:23.899177 sshd-session[6136]: pam_unix(sshd:session): session closed for user core Feb 13 19:35:23.903381 systemd[1]: sshd@23-10.0.0.36:22-10.0.0.1:52160.service: Deactivated successfully. Feb 13 19:35:23.906065 systemd[1]: session-24.scope: Deactivated successfully. Feb 13 19:35:23.906728 systemd-logind[1449]: Session 24 logged out. Waiting for processes to exit. Feb 13 19:35:23.907756 systemd-logind[1449]: Removed session 24. Feb 13 19:35:28.646828 kubelet[2607]: E0213 19:35:28.646762 2607 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:35:28.917463 systemd[1]: Started sshd@24-10.0.0.36:22-10.0.0.1:45434.service - OpenSSH per-connection server daemon (10.0.0.1:45434). Feb 13 19:35:28.987220 sshd[6172]: Accepted publickey for core from 10.0.0.1 port 45434 ssh2: RSA SHA256:M1JiL2vDeW2xAdjMAPzmPoLJm0DJz/C0inZYUswlgoA Feb 13 19:35:28.989068 sshd-session[6172]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:35:28.993796 systemd-logind[1449]: New session 25 of user core. Feb 13 19:35:29.002127 systemd[1]: Started session-25.scope - Session 25 of User core. Feb 13 19:35:29.129153 sshd[6174]: Connection closed by 10.0.0.1 port 45434 Feb 13 19:35:29.129591 sshd-session[6172]: pam_unix(sshd:session): session closed for user core Feb 13 19:35:29.133639 systemd[1]: sshd@24-10.0.0.36:22-10.0.0.1:45434.service: Deactivated successfully. Feb 13 19:35:29.135947 systemd[1]: session-25.scope: Deactivated successfully. Feb 13 19:35:29.136654 systemd-logind[1449]: Session 25 logged out. Waiting for processes to exit. Feb 13 19:35:29.137570 systemd-logind[1449]: Removed session 25. Feb 13 19:35:34.147856 systemd[1]: Started sshd@25-10.0.0.36:22-10.0.0.1:45436.service - OpenSSH per-connection server daemon (10.0.0.1:45436). 
Feb 13 19:35:34.190500 sshd[6187]: Accepted publickey for core from 10.0.0.1 port 45436 ssh2: RSA SHA256:M1JiL2vDeW2xAdjMAPzmPoLJm0DJz/C0inZYUswlgoA Feb 13 19:35:34.192391 sshd-session[6187]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:35:34.196284 systemd-logind[1449]: New session 26 of user core. Feb 13 19:35:34.200093 systemd[1]: Started session-26.scope - Session 26 of User core. Feb 13 19:35:34.339990 sshd[6189]: Connection closed by 10.0.0.1 port 45436 Feb 13 19:35:34.340364 sshd-session[6187]: pam_unix(sshd:session): session closed for user core Feb 13 19:35:34.344389 systemd[1]: sshd@25-10.0.0.36:22-10.0.0.1:45436.service: Deactivated successfully. Feb 13 19:35:34.346460 systemd[1]: session-26.scope: Deactivated successfully. Feb 13 19:35:34.347260 systemd-logind[1449]: Session 26 logged out. Waiting for processes to exit. Feb 13 19:35:34.348423 systemd-logind[1449]: Removed session 26. Feb 13 19:35:36.652288 kubelet[2607]: E0213 19:35:36.652239 2607 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:35:38.247617 kubelet[2607]: E0213 19:35:38.247574 2607 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:35:39.352073 systemd[1]: Started sshd@26-10.0.0.36:22-10.0.0.1:58474.service - OpenSSH per-connection server daemon (10.0.0.1:58474). Feb 13 19:35:39.449683 sshd[6228]: Accepted publickey for core from 10.0.0.1 port 58474 ssh2: RSA SHA256:M1JiL2vDeW2xAdjMAPzmPoLJm0DJz/C0inZYUswlgoA Feb 13 19:35:39.451235 sshd-session[6228]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:35:39.455181 systemd-logind[1449]: New session 27 of user core. Feb 13 19:35:39.462105 systemd[1]: Started session-27.scope - Session 27 of User core. 
Feb 13 19:35:39.571524 sshd[6230]: Connection closed by 10.0.0.1 port 58474 Feb 13 19:35:39.571859 sshd-session[6228]: pam_unix(sshd:session): session closed for user core Feb 13 19:35:39.575389 systemd[1]: sshd@26-10.0.0.36:22-10.0.0.1:58474.service: Deactivated successfully. Feb 13 19:35:39.577480 systemd[1]: session-27.scope: Deactivated successfully. Feb 13 19:35:39.578250 systemd-logind[1449]: Session 27 logged out. Waiting for processes to exit. Feb 13 19:35:39.579194 systemd-logind[1449]: Removed session 27. Feb 13 19:35:40.649492 kubelet[2607]: E0213 19:35:40.649447 2607 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"