Feb 13 19:41:20.869338 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241116 p3) 14.2.1 20241116, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT_DYNAMIC Thu Feb 13 17:41:03 -00 2025 Feb 13 19:41:20.869366 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=015d1d9e5e601f6a4e226c935072d3d0819e7eb2da20e68715973498f21aa3fe Feb 13 19:41:20.869381 kernel: BIOS-provided physical RAM map: Feb 13 19:41:20.869391 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable Feb 13 19:41:20.869399 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved Feb 13 19:41:20.869408 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved Feb 13 19:41:20.869418 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable Feb 13 19:41:20.869427 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved Feb 13 19:41:20.869436 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved Feb 13 19:41:20.869448 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved Feb 13 19:41:20.869457 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Feb 13 19:41:20.869476 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved Feb 13 19:41:20.869485 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved Feb 13 19:41:20.869494 kernel: NX (Execute Disable) protection: active Feb 13 19:41:20.869505 kernel: APIC: Static calls initialized Feb 13 19:41:20.869518 kernel: SMBIOS 2.8 present. 
Feb 13 19:41:20.869527 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014 Feb 13 19:41:20.869537 kernel: Hypervisor detected: KVM Feb 13 19:41:20.869546 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Feb 13 19:41:20.869556 kernel: kvm-clock: using sched offset of 2324642355 cycles Feb 13 19:41:20.869565 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Feb 13 19:41:20.869643 kernel: tsc: Detected 2794.748 MHz processor Feb 13 19:41:20.869656 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Feb 13 19:41:20.869666 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Feb 13 19:41:20.869676 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000 Feb 13 19:41:20.869690 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs Feb 13 19:41:20.869700 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Feb 13 19:41:20.869710 kernel: Using GB pages for direct mapping Feb 13 19:41:20.869720 kernel: ACPI: Early table checksum verification disabled Feb 13 19:41:20.869730 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS ) Feb 13 19:41:20.869740 kernel: ACPI: RSDT 0x000000009CFE2408 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Feb 13 19:41:20.869750 kernel: ACPI: FACP 0x000000009CFE21E8 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) Feb 13 19:41:20.869760 kernel: ACPI: DSDT 0x000000009CFE0040 0021A8 (v01 BOCHS BXPC 00000001 BXPC 00000001) Feb 13 19:41:20.869770 kernel: ACPI: FACS 0x000000009CFE0000 000040 Feb 13 19:41:20.869783 kernel: ACPI: APIC 0x000000009CFE22DC 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001) Feb 13 19:41:20.869794 kernel: ACPI: HPET 0x000000009CFE236C 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Feb 13 19:41:20.869804 kernel: ACPI: MCFG 0x000000009CFE23A4 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Feb 13 19:41:20.869814 kernel: ACPI: WAET 0x000000009CFE23E0 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Feb 13 19:41:20.869824 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21e8-0x9cfe22db] Feb 13 19:41:20.869835 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21e7] Feb 13 19:41:20.869850 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f] Feb 13 19:41:20.869863 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22dc-0x9cfe236b] Feb 13 19:41:20.869873 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe236c-0x9cfe23a3] Feb 13 19:41:20.869884 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23a4-0x9cfe23df] Feb 13 19:41:20.869895 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23e0-0x9cfe2407] Feb 13 19:41:20.869905 kernel: No NUMA configuration found Feb 13 19:41:20.869916 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff] Feb 13 19:41:20.869926 kernel: NODE_DATA(0) allocated [mem 0x9cfd6000-0x9cfdbfff] Feb 13 19:41:20.869940 kernel: Zone ranges: Feb 13 19:41:20.869950 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Feb 13 19:41:20.869961 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff] Feb 13 19:41:20.869971 kernel: Normal empty Feb 13 19:41:20.869982 kernel: Movable zone start for each node Feb 13 19:41:20.869992 kernel: Early memory node ranges Feb 13 19:41:20.870003 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] Feb 13 19:41:20.870013 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff] Feb 13 19:41:20.870024 kernel: Initmem setup node 0 [mem 
0x0000000000001000-0x000000009cfdbfff] Feb 13 19:41:20.870037 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Feb 13 19:41:20.870048 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Feb 13 19:41:20.870058 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges Feb 13 19:41:20.870069 kernel: ACPI: PM-Timer IO Port: 0x608 Feb 13 19:41:20.870079 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Feb 13 19:41:20.870090 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Feb 13 19:41:20.870101 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Feb 13 19:41:20.870111 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Feb 13 19:41:20.870121 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Feb 13 19:41:20.870135 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Feb 13 19:41:20.870145 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Feb 13 19:41:20.870156 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Feb 13 19:41:20.870166 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Feb 13 19:41:20.870177 kernel: TSC deadline timer available Feb 13 19:41:20.870187 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs Feb 13 19:41:20.870197 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Feb 13 19:41:20.870207 kernel: kvm-guest: KVM setup pv remote TLB flush Feb 13 19:41:20.870217 kernel: kvm-guest: setup PV sched yield Feb 13 19:41:20.870228 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices Feb 13 19:41:20.870241 kernel: Booting paravirtualized kernel on KVM Feb 13 19:41:20.870252 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Feb 13 19:41:20.870262 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1 Feb 13 19:41:20.870272 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u524288 Feb 13 19:41:20.870282 kernel: pcpu-alloc: s197032 r8192 d32344 u524288 alloc=1*2097152 Feb 13 19:41:20.870292 kernel: pcpu-alloc: [0] 0 1 2 3 Feb 13 19:41:20.870302 kernel: kvm-guest: PV spinlocks enabled Feb 13 19:41:20.870312 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Feb 13 19:41:20.870323 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=015d1d9e5e601f6a4e226c935072d3d0819e7eb2da20e68715973498f21aa3fe Feb 13 19:41:20.870338 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Feb 13 19:41:20.870348 kernel: random: crng init done Feb 13 19:41:20.870358 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Feb 13 19:41:20.870368 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Feb 13 19:41:20.870389 kernel: Fallback order for Node 0: 0 Feb 13 19:41:20.870399 kernel: Built 1 zonelists, mobility grouping on. 
Total pages: 632732 Feb 13 19:41:20.870417 kernel: Policy zone: DMA32 Feb 13 19:41:20.870436 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Feb 13 19:41:20.870459 kernel: Memory: 2432544K/2571752K available (14336K kernel code, 2301K rwdata, 22800K rodata, 43320K init, 1752K bss, 138948K reserved, 0K cma-reserved) Feb 13 19:41:20.870486 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Feb 13 19:41:20.870518 kernel: ftrace: allocating 37893 entries in 149 pages Feb 13 19:41:20.870543 kernel: ftrace: allocated 149 pages with 4 groups Feb 13 19:41:20.870554 kernel: Dynamic Preempt: voluntary Feb 13 19:41:20.870591 kernel: rcu: Preemptible hierarchical RCU implementation. Feb 13 19:41:20.870608 kernel: rcu: RCU event tracing is enabled. Feb 13 19:41:20.870619 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Feb 13 19:41:20.870629 kernel: Trampoline variant of Tasks RCU enabled. Feb 13 19:41:20.870643 kernel: Rude variant of Tasks RCU enabled. Feb 13 19:41:20.870653 kernel: Tracing variant of Tasks RCU enabled. Feb 13 19:41:20.870662 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Feb 13 19:41:20.870672 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Feb 13 19:41:20.870683 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16 Feb 13 19:41:20.870693 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Feb 13 19:41:20.870703 kernel: Console: colour VGA+ 80x25 Feb 13 19:41:20.870712 kernel: printk: console [ttyS0] enabled Feb 13 19:41:20.870723 kernel: ACPI: Core revision 20230628 Feb 13 19:41:20.870736 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Feb 13 19:41:20.870746 kernel: APIC: Switch to symmetric I/O mode setup Feb 13 19:41:20.870756 kernel: x2apic enabled Feb 13 19:41:20.870766 kernel: APIC: Switched APIC routing to: physical x2apic Feb 13 19:41:20.870777 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask() Feb 13 19:41:20.870787 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself() Feb 13 19:41:20.870798 kernel: kvm-guest: setup PV IPIs Feb 13 19:41:20.870821 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Feb 13 19:41:20.870832 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized Feb 13 19:41:20.870843 kernel: Calibrating delay loop (skipped) preset value.. 
5589.49 BogoMIPS (lpj=2794748) Feb 13 19:41:20.870853 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Feb 13 19:41:20.870864 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 Feb 13 19:41:20.870878 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 Feb 13 19:41:20.870889 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Feb 13 19:41:20.870899 kernel: Spectre V2 : Mitigation: Retpolines Feb 13 19:41:20.870910 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Feb 13 19:41:20.870923 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Feb 13 19:41:20.870934 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls Feb 13 19:41:20.870945 kernel: RETBleed: Mitigation: untrained return thunk Feb 13 19:41:20.870955 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Feb 13 19:41:20.870966 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Feb 13 19:41:20.870976 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied! Feb 13 19:41:20.870987 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options. Feb 13 19:41:20.870998 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode Feb 13 19:41:20.871009 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Feb 13 19:41:20.871022 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Feb 13 19:41:20.871032 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Feb 13 19:41:20.871042 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Feb 13 19:41:20.871052 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format. Feb 13 19:41:20.871059 kernel: Freeing SMP alternatives memory: 32K Feb 13 19:41:20.871067 kernel: pid_max: default: 32768 minimum: 301 Feb 13 19:41:20.871075 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Feb 13 19:41:20.871083 kernel: landlock: Up and running. Feb 13 19:41:20.871091 kernel: SELinux: Initializing. Feb 13 19:41:20.871101 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Feb 13 19:41:20.871109 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Feb 13 19:41:20.871117 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0) Feb 13 19:41:20.871125 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Feb 13 19:41:20.871133 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Feb 13 19:41:20.871140 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Feb 13 19:41:20.871148 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver. Feb 13 19:41:20.871156 kernel: ... version: 0 Feb 13 19:41:20.871166 kernel: ... bit width: 48 Feb 13 19:41:20.871174 kernel: ... generic registers: 6 Feb 13 19:41:20.871182 kernel: ... value mask: 0000ffffffffffff Feb 13 19:41:20.871190 kernel: ... max period: 00007fffffffffff Feb 13 19:41:20.871197 kernel: ... fixed-purpose events: 0 Feb 13 19:41:20.871205 kernel: ... 
event mask: 000000000000003f Feb 13 19:41:20.871213 kernel: signal: max sigframe size: 1776 Feb 13 19:41:20.871221 kernel: rcu: Hierarchical SRCU implementation. Feb 13 19:41:20.871229 kernel: rcu: Max phase no-delay instances is 400. Feb 13 19:41:20.871236 kernel: smp: Bringing up secondary CPUs ... Feb 13 19:41:20.871246 kernel: smpboot: x86: Booting SMP configuration: Feb 13 19:41:20.871254 kernel: .... node #0, CPUs: #1 #2 #3 Feb 13 19:41:20.871262 kernel: smp: Brought up 1 node, 4 CPUs Feb 13 19:41:20.871269 kernel: smpboot: Max logical packages: 1 Feb 13 19:41:20.871277 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS) Feb 13 19:41:20.871285 kernel: devtmpfs: initialized Feb 13 19:41:20.871293 kernel: x86/mm: Memory block size: 128MB Feb 13 19:41:20.871301 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Feb 13 19:41:20.871308 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Feb 13 19:41:20.871319 kernel: pinctrl core: initialized pinctrl subsystem Feb 13 19:41:20.871327 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Feb 13 19:41:20.871337 kernel: audit: initializing netlink subsys (disabled) Feb 13 19:41:20.871345 kernel: audit: type=2000 audit(1739475680.634:1): state=initialized audit_enabled=0 res=1 Feb 13 19:41:20.871355 kernel: thermal_sys: Registered thermal governor 'step_wise' Feb 13 19:41:20.871363 kernel: thermal_sys: Registered thermal governor 'user_space' Feb 13 19:41:20.871370 kernel: cpuidle: using governor menu Feb 13 19:41:20.871378 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Feb 13 19:41:20.871386 kernel: dca service started, version 1.12.1 Feb 13 19:41:20.871396 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000) Feb 13 19:41:20.871404 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry Feb 13 19:41:20.871412 kernel: PCI: Using configuration type 1 for base access Feb 13 19:41:20.871420 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Feb 13 19:41:20.871428 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Feb 13 19:41:20.871435 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Feb 13 19:41:20.871443 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Feb 13 19:41:20.871451 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Feb 13 19:41:20.871459 kernel: ACPI: Added _OSI(Module Device) Feb 13 19:41:20.871477 kernel: ACPI: Added _OSI(Processor Device) Feb 13 19:41:20.871485 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Feb 13 19:41:20.871492 kernel: ACPI: Added _OSI(Processor Aggregator Device) Feb 13 19:41:20.871501 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Feb 13 19:41:20.871509 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Feb 13 19:41:20.871516 kernel: ACPI: Interpreter enabled Feb 13 19:41:20.871524 kernel: ACPI: PM: (supports S0 S3 S5) Feb 13 19:41:20.871532 kernel: ACPI: Using IOAPIC for interrupt routing Feb 13 19:41:20.871540 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Feb 13 19:41:20.871550 kernel: PCI: Using E820 reservations for host bridge windows Feb 13 19:41:20.871558 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Feb 13 19:41:20.871566 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Feb 13 19:41:20.871793 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Feb 13 19:41:20.871924 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR] Feb 13 19:41:20.872149 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability] Feb 13 19:41:20.872161 kernel: PCI host bridge to bus 0000:00 Feb 13 19:41:20.872303 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Feb 13 19:41:20.872465 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Feb 13 19:41:20.872614 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Feb 13 19:41:20.872732 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window] Feb 13 19:41:20.872842 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Feb 13 19:41:20.872954 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window] Feb 13 19:41:20.873065 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Feb 13 19:41:20.873211 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 Feb 13 19:41:20.873341 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 Feb 13 19:41:20.873463 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref] Feb 13 19:41:20.873622 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff] Feb 13 19:41:20.873751 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref] Feb 13 19:41:20.873926 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Feb 13 19:41:20.874093 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 Feb 13 19:41:20.874219 kernel: pci 0000:00:02.0: reg 0x10: [io 0xc0c0-0xc0df] Feb 13 19:41:20.874341 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff] Feb 13 19:41:20.874463 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref] Feb 13 19:41:20.874636 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 Feb 13 19:41:20.874764 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc000-0xc07f] Feb 13 19:41:20.874886 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff] Feb 13 
19:41:20.875073 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref] Feb 13 19:41:20.875222 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 Feb 13 19:41:20.875346 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc0e0-0xc0ff] Feb 13 19:41:20.875479 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebd3000-0xfebd3fff] Feb 13 19:41:20.875632 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe008000-0xfe00bfff 64bit pref] Feb 13 19:41:20.875762 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref] Feb 13 19:41:20.875892 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 Feb 13 19:41:20.876021 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Feb 13 19:41:20.876148 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 Feb 13 19:41:20.876271 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc100-0xc11f] Feb 13 19:41:20.876392 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfebd4000-0xfebd4fff] Feb 13 19:41:20.876531 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 Feb 13 19:41:20.876689 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f] Feb 13 19:41:20.876702 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Feb 13 19:41:20.876714 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Feb 13 19:41:20.876722 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Feb 13 19:41:20.876730 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Feb 13 19:41:20.876738 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Feb 13 19:41:20.876746 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Feb 13 19:41:20.876754 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Feb 13 19:41:20.876762 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 Feb 13 19:41:20.876770 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Feb 13 19:41:20.876777 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 Feb 13 19:41:20.876788 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 Feb 13 19:41:20.876795 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 Feb 13 19:41:20.876803 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Feb 13 19:41:20.876811 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Feb 13 19:41:20.876819 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Feb 13 19:41:20.876827 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 Feb 13 19:41:20.876835 kernel: iommu: Default domain type: Translated Feb 13 19:41:20.876842 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Feb 13 19:41:20.876850 kernel: PCI: Using ACPI for IRQ routing Feb 13 19:41:20.876860 kernel: PCI: pci_cache_line_size set to 64 bytes Feb 13 19:41:20.876868 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] Feb 13 19:41:20.876876 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff] Feb 13 19:41:20.877000 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device Feb 13 19:41:20.877121 kernel: pci 0000:00:01.0: vgaarb: bridge control possible Feb 13 19:41:20.877241 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Feb 13 19:41:20.877252 kernel: vgaarb: loaded Feb 13 19:41:20.877260 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Feb 13 19:41:20.877271 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Feb 13 19:41:20.877279 kernel: clocksource: Switched to clocksource kvm-clock Feb 13 19:41:20.877287 kernel: VFS: Disk quotas dquot_6.6.0 Feb 13 
19:41:20.877295 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Feb 13 19:41:20.877303 kernel: pnp: PnP ACPI init Feb 13 19:41:20.877438 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved Feb 13 19:41:20.877450 kernel: pnp: PnP ACPI: found 6 devices Feb 13 19:41:20.877458 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Feb 13 19:41:20.877478 kernel: NET: Registered PF_INET protocol family Feb 13 19:41:20.877486 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Feb 13 19:41:20.877494 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Feb 13 19:41:20.877502 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Feb 13 19:41:20.877510 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Feb 13 19:41:20.877518 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Feb 13 19:41:20.877526 kernel: TCP: Hash tables configured (established 32768 bind 32768) Feb 13 19:41:20.877534 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Feb 13 19:41:20.877542 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Feb 13 19:41:20.877552 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Feb 13 19:41:20.877560 kernel: NET: Registered PF_XDP protocol family Feb 13 19:41:20.877720 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Feb 13 19:41:20.877836 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Feb 13 19:41:20.877947 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Feb 13 19:41:20.878058 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window] Feb 13 19:41:20.878169 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window] Feb 13 19:41:20.878280 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window] Feb 13 19:41:20.878296 kernel: PCI: CLS 0 bytes, default 64 Feb 13 19:41:20.878304 kernel: Initialise system trusted keyrings Feb 13 19:41:20.878312 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Feb 13 19:41:20.878320 kernel: Key type asymmetric registered Feb 13 19:41:20.878328 kernel: Asymmetric key parser 'x509' registered Feb 13 19:41:20.878336 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Feb 13 19:41:20.878343 kernel: io scheduler mq-deadline registered Feb 13 19:41:20.878351 kernel: io scheduler kyber registered Feb 13 19:41:20.878359 kernel: io scheduler bfq registered Feb 13 19:41:20.878369 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Feb 13 19:41:20.878378 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Feb 13 19:41:20.878386 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Feb 13 19:41:20.878394 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Feb 13 19:41:20.878401 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Feb 13 19:41:20.878409 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Feb 13 19:41:20.878417 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Feb 13 19:41:20.878425 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Feb 13 19:41:20.878433 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Feb 13 19:41:20.878574 kernel: rtc_cmos 00:04: RTC can wake from S4 Feb 13 19:41:20.878687 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Feb 13 19:41:20.878810 kernel: 
rtc_cmos 00:04: registered as rtc0 Feb 13 19:41:20.878924 kernel: rtc_cmos 00:04: setting system clock to 2025-02-13T19:41:20 UTC (1739475680) Feb 13 19:41:20.879037 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs Feb 13 19:41:20.879048 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled Feb 13 19:41:20.879055 kernel: NET: Registered PF_INET6 protocol family Feb 13 19:41:20.879063 kernel: Segment Routing with IPv6 Feb 13 19:41:20.879076 kernel: In-situ OAM (IOAM) with IPv6 Feb 13 19:41:20.879084 kernel: NET: Registered PF_PACKET protocol family Feb 13 19:41:20.879091 kernel: Key type dns_resolver registered Feb 13 19:41:20.879099 kernel: IPI shorthand broadcast: enabled Feb 13 19:41:20.879107 kernel: sched_clock: Marking stable (577002221, 105323191)->(699270762, -16945350) Feb 13 19:41:20.879115 kernel: registered taskstats version 1 Feb 13 19:41:20.879123 kernel: Loading compiled-in X.509 certificates Feb 13 19:41:20.879131 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: b3acedbed401b3cd9632ee9302ddcce254d8924d' Feb 13 19:41:20.879138 kernel: Key type .fscrypt registered Feb 13 19:41:20.879148 kernel: Key type fscrypt-provisioning registered Feb 13 19:41:20.879156 kernel: ima: No TPM chip found, activating TPM-bypass! Feb 13 19:41:20.879164 kernel: ima: Allocated hash algorithm: sha1 Feb 13 19:41:20.879172 kernel: ima: No architecture policies found Feb 13 19:41:20.879180 kernel: clk: Disabling unused clocks Feb 13 19:41:20.879187 kernel: Freeing unused kernel image (initmem) memory: 43320K Feb 13 19:41:20.879195 kernel: Write protecting the kernel read-only data: 38912k Feb 13 19:41:20.879203 kernel: Freeing unused kernel image (rodata/data gap) memory: 1776K Feb 13 19:41:20.879211 kernel: Run /init as init process Feb 13 19:41:20.879221 kernel: with arguments: Feb 13 19:41:20.879229 kernel: /init Feb 13 19:41:20.879236 kernel: with environment: Feb 13 19:41:20.879244 kernel: HOME=/ Feb 13 19:41:20.879251 kernel: TERM=linux Feb 13 19:41:20.879259 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Feb 13 19:41:20.879269 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Feb 13 19:41:20.879279 systemd[1]: Detected virtualization kvm. Feb 13 19:41:20.879290 systemd[1]: Detected architecture x86-64. Feb 13 19:41:20.879298 systemd[1]: Running in initrd. Feb 13 19:41:20.879307 systemd[1]: No hostname configured, using default hostname. Feb 13 19:41:20.879315 systemd[1]: Hostname set to . Feb 13 19:41:20.879324 systemd[1]: Initializing machine ID from VM UUID. Feb 13 19:41:20.879334 systemd[1]: Queued start job for default target initrd.target. Feb 13 19:41:20.879343 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Feb 13 19:41:20.879354 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Feb 13 19:41:20.879366 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Feb 13 19:41:20.879386 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... 
Feb 13 19:41:20.879397 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Feb 13 19:41:20.879406 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Feb 13 19:41:20.879417 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Feb 13 19:41:20.879428 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Feb 13 19:41:20.879437 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Feb 13 19:41:20.879446 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Feb 13 19:41:20.879454 systemd[1]: Reached target paths.target - Path Units. Feb 13 19:41:20.879463 systemd[1]: Reached target slices.target - Slice Units. Feb 13 19:41:20.879480 systemd[1]: Reached target swap.target - Swaps. Feb 13 19:41:20.879489 systemd[1]: Reached target timers.target - Timer Units. Feb 13 19:41:20.879498 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Feb 13 19:41:20.879509 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Feb 13 19:41:20.879518 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Feb 13 19:41:20.879527 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Feb 13 19:41:20.879536 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Feb 13 19:41:20.879545 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Feb 13 19:41:20.879554 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Feb 13 19:41:20.879562 systemd[1]: Reached target sockets.target - Socket Units. Feb 13 19:41:20.879571 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Feb 13 19:41:20.879595 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Feb 13 19:41:20.879610 systemd[1]: Finished network-cleanup.service - Network Cleanup. Feb 13 19:41:20.879621 systemd[1]: Starting systemd-fsck-usr.service... Feb 13 19:41:20.879630 systemd[1]: Starting systemd-journald.service - Journal Service... Feb 13 19:41:20.879639 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Feb 13 19:41:20.879647 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 19:41:20.879656 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Feb 13 19:41:20.879665 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Feb 13 19:41:20.879673 systemd[1]: Finished systemd-fsck-usr.service. Feb 13 19:41:20.879703 systemd-journald[194]: Collecting audit messages is disabled. Feb 13 19:41:20.879725 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Feb 13 19:41:20.879736 systemd-journald[194]: Journal started Feb 13 19:41:20.879757 systemd-journald[194]: Runtime Journal (/run/log/journal/65be40ae87954519a069bba16229acfb) is 6.0M, max 48.3M, 42.3M free. Feb 13 19:41:20.872432 systemd-modules-load[195]: Inserted module 'overlay' Feb 13 19:41:20.909038 systemd[1]: Started systemd-journald.service - Journal Service. Feb 13 19:41:20.909059 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. 
Feb 13 19:41:20.909074 kernel: Bridge firewalling registered Feb 13 19:41:20.902015 systemd-modules-load[195]: Inserted module 'br_netfilter' Feb 13 19:41:20.908043 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Feb 13 19:41:20.909748 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 19:41:20.921732 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Feb 13 19:41:20.923022 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Feb 13 19:41:20.928199 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Feb 13 19:41:20.934273 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 13 19:41:20.937245 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Feb 13 19:41:20.941973 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Feb 13 19:41:20.945166 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Feb 13 19:41:20.947786 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Feb 13 19:41:20.951856 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Feb 13 19:41:20.955861 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Feb 13 19:41:20.958319 dracut-cmdline[221]: dracut-dracut-053 Feb 13 19:41:20.959296 dracut-cmdline[221]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=015d1d9e5e601f6a4e226c935072d3d0819e7eb2da20e68715973498f21aa3fe Feb 13 19:41:20.960939 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Feb 13 19:41:21.014869 systemd-resolved[235]: Positive Trust Anchors: Feb 13 19:41:21.014888 systemd-resolved[235]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 13 19:41:21.014925 systemd-resolved[235]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Feb 13 19:41:21.017759 systemd-resolved[235]: Defaulting to hostname 'linux'. Feb 13 19:41:21.018750 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Feb 13 19:41:21.023988 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Feb 13 19:41:21.049600 kernel: SCSI subsystem initialized Feb 13 19:41:21.058598 kernel: Loading iSCSI transport class v2.0-870. Feb 13 19:41:21.068605 kernel: iscsi: registered transport (tcp) Feb 13 19:41:21.088668 kernel: iscsi: registered transport (qla4xxx) Feb 13 19:41:21.088700 kernel: QLogic iSCSI HBA Driver Feb 13 19:41:21.129505 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. 
Feb 13 19:41:21.139704 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Feb 13 19:41:21.170613 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Feb 13 19:41:21.170666 kernel: device-mapper: uevent: version 1.0.3 Feb 13 19:41:21.172191 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Feb 13 19:41:21.213610 kernel: raid6: avx2x4 gen() 30583 MB/s Feb 13 19:41:21.230602 kernel: raid6: avx2x2 gen() 26720 MB/s Feb 13 19:41:21.247758 kernel: raid6: avx2x1 gen() 18156 MB/s Feb 13 19:41:21.247783 kernel: raid6: using algorithm avx2x4 gen() 30583 MB/s Feb 13 19:41:21.265910 kernel: raid6: .... xor() 7444 MB/s, rmw enabled Feb 13 19:41:21.265941 kernel: raid6: using avx2x2 recovery algorithm Feb 13 19:41:21.289613 kernel: xor: automatically using best checksumming function avx Feb 13 19:41:21.441611 kernel: Btrfs loaded, zoned=no, fsverity=no Feb 13 19:41:21.452570 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Feb 13 19:41:21.462722 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Feb 13 19:41:21.476291 systemd-udevd[412]: Using default interface naming scheme 'v255'. Feb 13 19:41:21.480824 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Feb 13 19:41:21.487697 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Feb 13 19:41:21.502662 dracut-pre-trigger[422]: rd.md=0: removing MD RAID activation Feb 13 19:41:21.530816 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Feb 13 19:41:21.541766 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Feb 13 19:41:21.602314 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Feb 13 19:41:21.615094 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Feb 13 19:41:21.626597 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Feb 13 19:41:21.629864 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Feb 13 19:41:21.631218 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Feb 13 19:41:21.633603 systemd[1]: Reached target remote-fs.target - Remote File Systems. Feb 13 19:41:21.647637 kernel: cryptd: max_cpu_qlen set to 1000 Feb 13 19:41:21.647738 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Feb 13 19:41:21.655634 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Feb 13 19:41:21.677305 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Feb 13 19:41:21.677511 kernel: AVX2 version of gcm_enc/dec engaged. Feb 13 19:41:21.677528 kernel: AES CTR mode by8 optimization enabled Feb 13 19:41:21.677542 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Feb 13 19:41:21.677557 kernel: GPT:9289727 != 19775487 Feb 13 19:41:21.677571 kernel: GPT:Alternate GPT header not at the end of the disk. Feb 13 19:41:21.677601 kernel: GPT:9289727 != 19775487 Feb 13 19:41:21.677615 kernel: GPT: Use GNU Parted to correct GPT errors. Feb 13 19:41:21.677629 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Feb 13 19:41:21.664935 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Feb 13 19:41:21.682629 kernel: libata version 3.00 loaded. Feb 13 19:41:21.682710 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. 
Feb 13 19:41:21.682829 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 13 19:41:21.685072 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Feb 13 19:41:21.685415 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 13 19:41:21.685544 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 19:41:21.691994 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 19:41:21.697718 kernel: ahci 0000:00:1f.2: version 3.0 Feb 13 19:41:21.728363 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Feb 13 19:41:21.728383 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Feb 13 19:41:21.728546 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Feb 13 19:41:21.728993 kernel: scsi host0: ahci Feb 13 19:41:21.729154 kernel: scsi host1: ahci Feb 13 19:41:21.729298 kernel: scsi host2: ahci Feb 13 19:41:21.729441 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by (udev-worker) (463) Feb 13 19:41:21.729463 kernel: scsi host3: ahci Feb 13 19:41:21.729626 kernel: BTRFS: device fsid c7adc9b8-df7f-4a5f-93bf-204def2767a9 devid 1 transid 39 /dev/vda3 scanned by (udev-worker) (465) Feb 13 19:41:21.729642 kernel: scsi host4: ahci Feb 13 19:41:21.730434 kernel: scsi host5: ahci Feb 13 19:41:21.730915 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34 Feb 13 19:41:21.730933 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34 Feb 13 19:41:21.730961 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34 Feb 13 19:41:21.730975 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34 Feb 13 19:41:21.730989 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34 Feb 13 19:41:21.731003 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34 Feb 13 19:41:21.703864 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 19:41:21.736602 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Feb 13 19:41:21.768749 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Feb 13 19:41:21.771530 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 19:41:21.777095 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Feb 13 19:41:21.781155 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Feb 13 19:41:21.781835 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Feb 13 19:41:21.798708 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Feb 13 19:41:21.799900 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Feb 13 19:41:21.815752 disk-uuid[570]: Primary Header is updated. Feb 13 19:41:21.815752 disk-uuid[570]: Secondary Entries is updated. Feb 13 19:41:21.815752 disk-uuid[570]: Secondary Header is updated. Feb 13 19:41:21.819011 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Feb 13 19:41:21.820168 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Feb 13 19:41:22.042015 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Feb 13 19:41:22.042101 kernel: ata6: SATA link down (SStatus 0 SControl 300) Feb 13 19:41:22.042113 kernel: ata4: SATA link down (SStatus 0 SControl 300) Feb 13 19:41:22.043621 kernel: ata1: SATA link down (SStatus 0 SControl 300) Feb 13 19:41:22.043702 kernel: ata5: SATA link down (SStatus 0 SControl 300) Feb 13 19:41:22.044618 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Feb 13 19:41:22.045603 kernel: ata3.00: applying bridge limits Feb 13 19:41:22.045618 kernel: ata2: SATA link down (SStatus 0 SControl 300) Feb 13 19:41:22.046613 kernel: ata3.00: configured for UDMA/100 Feb 13 19:41:22.047609 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Feb 13 19:41:22.092607 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Feb 13 19:41:22.106422 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Feb 13 19:41:22.106450 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Feb 13 19:41:22.832501 disk-uuid[579]: The operation has completed successfully. Feb 13 19:41:22.834350 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Feb 13 19:41:22.864234 systemd[1]: disk-uuid.service: Deactivated successfully. Feb 13 19:41:22.864357 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Feb 13 19:41:22.889735 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Feb 13 19:41:22.892858 sh[594]: Success Feb 13 19:41:22.905616 kernel: device-mapper: verity: sha256 using implementation "sha256-ni" Feb 13 19:41:22.939770 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Feb 13 19:41:22.955115 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Feb 13 19:41:22.958158 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Feb 13 19:41:22.968988 kernel: BTRFS info (device dm-0): first mount of filesystem c7adc9b8-df7f-4a5f-93bf-204def2767a9 Feb 13 19:41:22.969018 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Feb 13 19:41:22.969029 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Feb 13 19:41:22.970983 kernel: BTRFS info (device dm-0): disabling log replay at mount time Feb 13 19:41:22.971005 kernel: BTRFS info (device dm-0): using free space tree Feb 13 19:41:22.975649 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Feb 13 19:41:22.978123 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Feb 13 19:41:22.990742 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Feb 13 19:41:22.993620 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Feb 13 19:41:23.003444 kernel: BTRFS info (device vda6): first mount of filesystem 60a376b4-1193-4e0b-af89-a0e6d698bf0f Feb 13 19:41:23.003477 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Feb 13 19:41:23.003492 kernel: BTRFS info (device vda6): using free space tree Feb 13 19:41:23.006627 kernel: BTRFS info (device vda6): auto enabling async discard Feb 13 19:41:23.015657 systemd[1]: mnt-oem.mount: Deactivated successfully. Feb 13 19:41:23.017974 kernel: BTRFS info (device vda6): last unmount of filesystem 60a376b4-1193-4e0b-af89-a0e6d698bf0f Feb 13 19:41:23.026265 systemd[1]: Finished ignition-setup.service - Ignition (setup). 
Feb 13 19:41:23.037773 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Feb 13 19:41:23.092843 ignition[686]: Ignition 2.20.0 Feb 13 19:41:23.093650 ignition[686]: Stage: fetch-offline Feb 13 19:41:23.093718 ignition[686]: no configs at "/usr/lib/ignition/base.d" Feb 13 19:41:23.093731 ignition[686]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 13 19:41:23.093861 ignition[686]: parsed url from cmdline: "" Feb 13 19:41:23.093866 ignition[686]: no config URL provided Feb 13 19:41:23.093873 ignition[686]: reading system config file "/usr/lib/ignition/user.ign" Feb 13 19:41:23.093885 ignition[686]: no config at "/usr/lib/ignition/user.ign" Feb 13 19:41:23.093924 ignition[686]: op(1): [started] loading QEMU firmware config module Feb 13 19:41:23.093931 ignition[686]: op(1): executing: "modprobe" "qemu_fw_cfg" Feb 13 19:41:23.102680 ignition[686]: op(1): [finished] loading QEMU firmware config module Feb 13 19:41:23.118219 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Feb 13 19:41:23.129802 systemd[1]: Starting systemd-networkd.service - Network Configuration... Feb 13 19:41:23.146319 ignition[686]: parsing config with SHA512: 3073a2b62b4c4ba95a56b98e0d9e4341daa74e60ada850bced012b6ccde04a314012bff5b37e64d6342aff66c6bbad4ba227e8332f0daae596572da0bdb409df Feb 13 19:41:23.149905 unknown[686]: fetched base config from "system" Feb 13 19:41:23.149918 unknown[686]: fetched user config from "qemu" Feb 13 19:41:23.150234 ignition[686]: fetch-offline: fetch-offline passed Feb 13 19:41:23.153155 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Feb 13 19:41:23.150298 ignition[686]: Ignition finished successfully Feb 13 19:41:23.153959 systemd-networkd[783]: lo: Link UP Feb 13 19:41:23.153963 systemd-networkd[783]: lo: Gained carrier Feb 13 19:41:23.155497 systemd-networkd[783]: Enumeration completed Feb 13 19:41:23.155655 systemd[1]: Started systemd-networkd.service - Network Configuration. Feb 13 19:41:23.155883 systemd-networkd[783]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 19:41:23.155888 systemd-networkd[783]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 13 19:41:23.156706 systemd-networkd[783]: eth0: Link UP Feb 13 19:41:23.156709 systemd-networkd[783]: eth0: Gained carrier Feb 13 19:41:23.156716 systemd-networkd[783]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 19:41:23.157691 systemd[1]: Reached target network.target - Network. Feb 13 19:41:23.158608 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Feb 13 19:41:23.164716 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Feb 13 19:41:23.176894 ignition[787]: Ignition 2.20.0 Feb 13 19:41:23.171632 systemd-networkd[783]: eth0: DHCPv4 address 10.0.0.106/16, gateway 10.0.0.1 acquired from 10.0.0.1 Feb 13 19:41:23.176900 ignition[787]: Stage: kargs Feb 13 19:41:23.181206 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). 
Feb 13 19:41:23.177041 ignition[787]: no configs at "/usr/lib/ignition/base.d" Feb 13 19:41:23.177051 ignition[787]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 13 19:41:23.177849 ignition[787]: kargs: kargs passed Feb 13 19:41:23.177885 ignition[787]: Ignition finished successfully Feb 13 19:41:23.190794 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Feb 13 19:41:23.202529 ignition[796]: Ignition 2.20.0 Feb 13 19:41:23.202541 ignition[796]: Stage: disks Feb 13 19:41:23.202705 ignition[796]: no configs at "/usr/lib/ignition/base.d" Feb 13 19:41:23.202717 ignition[796]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 13 19:41:23.203467 ignition[796]: disks: disks passed Feb 13 19:41:23.205782 systemd[1]: Finished ignition-disks.service - Ignition (disks). Feb 13 19:41:23.203513 ignition[796]: Ignition finished successfully Feb 13 19:41:23.207205 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Feb 13 19:41:23.208730 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Feb 13 19:41:23.210865 systemd[1]: Reached target local-fs.target - Local File Systems. Feb 13 19:41:23.211887 systemd[1]: Reached target sysinit.target - System Initialization. Feb 13 19:41:23.213639 systemd[1]: Reached target basic.target - Basic System. Feb 13 19:41:23.223714 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Feb 13 19:41:23.235825 systemd-fsck[806]: ROOT: clean, 14/553520 files, 52654/553472 blocks Feb 13 19:41:23.242533 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Feb 13 19:41:23.256739 systemd[1]: Mounting sysroot.mount - /sysroot... Feb 13 19:41:23.357606 kernel: EXT4-fs (vda9): mounted filesystem 7d46b70d-4c30-46e6-9935-e1f7fb523560 r/w with ordered data mode. Quota mode: none. Feb 13 19:41:23.358105 systemd[1]: Mounted sysroot.mount - /sysroot. Feb 13 19:41:23.360275 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Feb 13 19:41:23.376662 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Feb 13 19:41:23.378345 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Feb 13 19:41:23.379573 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Feb 13 19:41:23.379623 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Feb 13 19:41:23.390508 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by mount (814) Feb 13 19:41:23.390525 kernel: BTRFS info (device vda6): first mount of filesystem 60a376b4-1193-4e0b-af89-a0e6d698bf0f Feb 13 19:41:23.390536 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Feb 13 19:41:23.390547 kernel: BTRFS info (device vda6): using free space tree Feb 13 19:41:23.379644 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Feb 13 19:41:23.393739 kernel: BTRFS info (device vda6): auto enabling async discard Feb 13 19:41:23.385920 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Feb 13 19:41:23.391271 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Feb 13 19:41:23.395243 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Feb 13 19:41:23.424800 initrd-setup-root[838]: cut: /sysroot/etc/passwd: No such file or directory Feb 13 19:41:23.429953 initrd-setup-root[845]: cut: /sysroot/etc/group: No such file or directory Feb 13 19:41:23.433572 initrd-setup-root[852]: cut: /sysroot/etc/shadow: No such file or directory Feb 13 19:41:23.438439 initrd-setup-root[859]: cut: /sysroot/etc/gshadow: No such file or directory Feb 13 19:41:23.520892 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Feb 13 19:41:23.533667 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Feb 13 19:41:23.540037 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Feb 13 19:41:23.542947 kernel: BTRFS info (device vda6): last unmount of filesystem 60a376b4-1193-4e0b-af89-a0e6d698bf0f Feb 13 19:41:23.573520 ignition[927]: INFO : Ignition 2.20.0 Feb 13 19:41:23.573520 ignition[927]: INFO : Stage: mount Feb 13 19:41:23.575254 ignition[927]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 13 19:41:23.575254 ignition[927]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 13 19:41:23.575254 ignition[927]: INFO : mount: mount passed Feb 13 19:41:23.575254 ignition[927]: INFO : Ignition finished successfully Feb 13 19:41:23.576497 systemd[1]: Finished ignition-mount.service - Ignition (mount). Feb 13 19:41:23.582693 systemd[1]: Starting ignition-files.service - Ignition (files)... Feb 13 19:41:23.584789 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Feb 13 19:41:23.968277 systemd[1]: sysroot-oem.mount: Deactivated successfully. Feb 13 19:41:23.985713 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Feb 13 19:41:23.993015 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/vda6 scanned by mount (940) Feb 13 19:41:23.993043 kernel: BTRFS info (device vda6): first mount of filesystem 60a376b4-1193-4e0b-af89-a0e6d698bf0f Feb 13 19:41:23.993054 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Feb 13 19:41:23.994589 kernel: BTRFS info (device vda6): using free space tree Feb 13 19:41:23.997599 kernel: BTRFS info (device vda6): auto enabling async discard Feb 13 19:41:23.998711 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Feb 13 19:41:24.018768 ignition[957]: INFO : Ignition 2.20.0 Feb 13 19:41:24.018768 ignition[957]: INFO : Stage: files Feb 13 19:41:24.020591 ignition[957]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 13 19:41:24.020591 ignition[957]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 13 19:41:24.020591 ignition[957]: DEBUG : files: compiled without relabeling support, skipping Feb 13 19:41:24.024930 ignition[957]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Feb 13 19:41:24.024930 ignition[957]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Feb 13 19:41:24.029411 ignition[957]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Feb 13 19:41:24.031184 ignition[957]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Feb 13 19:41:24.032770 ignition[957]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Feb 13 19:41:24.031772 unknown[957]: wrote ssh authorized keys file for user: core Feb 13 19:41:24.035705 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Feb 13 19:41:24.035705 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Feb 13 19:41:24.203424 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Feb 13 19:41:24.418763 systemd-networkd[783]: eth0: Gained IPv6LL Feb 13 19:41:24.526567 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Feb 13 19:41:24.526567 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Feb 13 19:41:24.530696 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Feb 13 19:41:24.530696 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Feb 13 19:41:24.530696 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Feb 13 19:41:24.530696 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Feb 13 19:41:24.530696 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Feb 13 19:41:24.530696 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Feb 13 19:41:24.530696 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Feb 13 19:41:24.530696 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Feb 13 19:41:24.530696 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Feb 13 19:41:24.530696 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Feb 13 19:41:24.530696 ignition[957]: INFO : files: 
createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Feb 13 19:41:24.530696 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Feb 13 19:41:24.530696 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1 Feb 13 19:41:24.733811 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Feb 13 19:41:25.000933 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Feb 13 19:41:25.000933 ignition[957]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Feb 13 19:41:25.004788 ignition[957]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Feb 13 19:41:25.006992 ignition[957]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Feb 13 19:41:25.006992 ignition[957]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Feb 13 19:41:25.006992 ignition[957]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Feb 13 19:41:25.006992 ignition[957]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Feb 13 19:41:25.006992 ignition[957]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Feb 13 19:41:25.006992 ignition[957]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Feb 13 19:41:25.006992 ignition[957]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service" Feb 13 19:41:25.036597 ignition[957]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service" Feb 13 19:41:25.040624 ignition[957]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service" Feb 13 19:41:25.042344 ignition[957]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service" Feb 13 19:41:25.042344 ignition[957]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" Feb 13 19:41:25.045170 ignition[957]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" Feb 13 19:41:25.046606 ignition[957]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" Feb 13 19:41:25.048437 ignition[957]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" Feb 13 19:41:25.050132 ignition[957]: INFO : files: files passed Feb 13 19:41:25.050132 ignition[957]: INFO : Ignition finished successfully Feb 13 19:41:25.052926 systemd[1]: Finished ignition-files.service - Ignition (files). Feb 13 19:41:25.060875 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Feb 13 19:41:25.063981 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... 
Feb 13 19:41:25.066795 systemd[1]: ignition-quench.service: Deactivated successfully. Feb 13 19:41:25.068611 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Feb 13 19:41:25.073794 initrd-setup-root-after-ignition[985]: grep: /sysroot/oem/oem-release: No such file or directory Feb 13 19:41:25.077706 initrd-setup-root-after-ignition[987]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Feb 13 19:41:25.077706 initrd-setup-root-after-ignition[987]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Feb 13 19:41:25.081460 initrd-setup-root-after-ignition[991]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Feb 13 19:41:25.082860 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Feb 13 19:41:25.083658 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Feb 13 19:41:25.096725 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Feb 13 19:41:25.120777 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Feb 13 19:41:25.120903 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Feb 13 19:41:25.123211 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Feb 13 19:41:25.125316 systemd[1]: Reached target initrd.target - Initrd Default Target. Feb 13 19:41:25.127335 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Feb 13 19:41:25.135692 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Feb 13 19:41:25.149770 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Feb 13 19:41:25.162710 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Feb 13 19:41:25.171906 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Feb 13 19:41:25.173154 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Feb 13 19:41:25.175392 systemd[1]: Stopped target timers.target - Timer Units. Feb 13 19:41:25.177466 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Feb 13 19:41:25.177570 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Feb 13 19:41:25.179706 systemd[1]: Stopped target initrd.target - Initrd Default Target. Feb 13 19:41:25.181397 systemd[1]: Stopped target basic.target - Basic System. Feb 13 19:41:25.183401 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Feb 13 19:41:25.185416 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Feb 13 19:41:25.187405 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Feb 13 19:41:25.189553 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Feb 13 19:41:25.191682 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Feb 13 19:41:25.193932 systemd[1]: Stopped target sysinit.target - System Initialization. Feb 13 19:41:25.195913 systemd[1]: Stopped target local-fs.target - Local File Systems. Feb 13 19:41:25.198130 systemd[1]: Stopped target swap.target - Swaps. Feb 13 19:41:25.199904 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Feb 13 19:41:25.200007 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Feb 13 19:41:25.202125 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. 
Feb 13 19:41:25.203733 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Feb 13 19:41:25.205784 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Feb 13 19:41:25.205901 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Feb 13 19:41:25.207996 systemd[1]: dracut-initqueue.service: Deactivated successfully. Feb 13 19:41:25.208102 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Feb 13 19:41:25.210254 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Feb 13 19:41:25.210355 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Feb 13 19:41:25.212341 systemd[1]: Stopped target paths.target - Path Units. Feb 13 19:41:25.214122 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Feb 13 19:41:25.217659 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Feb 13 19:41:25.219689 systemd[1]: Stopped target slices.target - Slice Units. Feb 13 19:41:25.221659 systemd[1]: Stopped target sockets.target - Socket Units. Feb 13 19:41:25.223384 systemd[1]: iscsid.socket: Deactivated successfully. Feb 13 19:41:25.223477 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Feb 13 19:41:25.225380 systemd[1]: iscsiuio.socket: Deactivated successfully. Feb 13 19:41:25.225470 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Feb 13 19:41:25.227800 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Feb 13 19:41:25.227909 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Feb 13 19:41:25.229804 systemd[1]: ignition-files.service: Deactivated successfully. Feb 13 19:41:25.229910 systemd[1]: Stopped ignition-files.service - Ignition (files). Feb 13 19:41:25.241721 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Feb 13 19:41:25.242803 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Feb 13 19:41:25.242912 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Feb 13 19:41:25.246684 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Feb 13 19:41:25.249036 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Feb 13 19:41:25.249169 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Feb 13 19:41:25.251707 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Feb 13 19:41:25.251811 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Feb 13 19:41:25.258019 systemd[1]: initrd-cleanup.service: Deactivated successfully. Feb 13 19:41:25.261051 ignition[1011]: INFO : Ignition 2.20.0 Feb 13 19:41:25.261051 ignition[1011]: INFO : Stage: umount Feb 13 19:41:25.261051 ignition[1011]: INFO : no configs at "/usr/lib/ignition/base.d" Feb 13 19:41:25.261051 ignition[1011]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Feb 13 19:41:25.261051 ignition[1011]: INFO : umount: umount passed Feb 13 19:41:25.261051 ignition[1011]: INFO : Ignition finished successfully Feb 13 19:41:25.258130 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Feb 13 19:41:25.262228 systemd[1]: ignition-mount.service: Deactivated successfully. Feb 13 19:41:25.262395 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Feb 13 19:41:25.264782 systemd[1]: Stopped target network.target - Network. 
Feb 13 19:41:25.266148 systemd[1]: ignition-disks.service: Deactivated successfully. Feb 13 19:41:25.266223 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Feb 13 19:41:25.268525 systemd[1]: ignition-kargs.service: Deactivated successfully. Feb 13 19:41:25.268617 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Feb 13 19:41:25.270436 systemd[1]: ignition-setup.service: Deactivated successfully. Feb 13 19:41:25.270496 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Feb 13 19:41:25.272433 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Feb 13 19:41:25.272495 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Feb 13 19:41:25.275238 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Feb 13 19:41:25.277451 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Feb 13 19:41:25.280613 systemd-networkd[783]: eth0: DHCPv6 lease lost Feb 13 19:41:25.280806 systemd[1]: sysroot-boot.mount: Deactivated successfully. Feb 13 19:41:25.282960 systemd[1]: systemd-networkd.service: Deactivated successfully. Feb 13 19:41:25.283113 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Feb 13 19:41:25.285712 systemd[1]: systemd-networkd.socket: Deactivated successfully. Feb 13 19:41:25.285764 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Feb 13 19:41:25.292694 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Feb 13 19:41:25.293776 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Feb 13 19:41:25.293839 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Feb 13 19:41:25.296406 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Feb 13 19:41:25.298755 systemd[1]: systemd-resolved.service: Deactivated successfully. Feb 13 19:41:25.298888 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Feb 13 19:41:25.318973 systemd[1]: systemd-udevd.service: Deactivated successfully. Feb 13 19:41:25.320035 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Feb 13 19:41:25.323134 systemd[1]: network-cleanup.service: Deactivated successfully. Feb 13 19:41:25.324251 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Feb 13 19:41:25.328332 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Feb 13 19:41:25.329421 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Feb 13 19:41:25.331666 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Feb 13 19:41:25.331720 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Feb 13 19:41:25.334915 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Feb 13 19:41:25.335964 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Feb 13 19:41:25.338334 systemd[1]: dracut-cmdline.service: Deactivated successfully. Feb 13 19:41:25.339411 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Feb 13 19:41:25.341726 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Feb 13 19:41:25.342872 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Feb 13 19:41:25.361697 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Feb 13 19:41:25.363860 systemd[1]: systemd-sysctl.service: Deactivated successfully. 
Feb 13 19:41:25.363911 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Feb 13 19:41:25.366775 systemd[1]: systemd-modules-load.service: Deactivated successfully. Feb 13 19:41:25.367739 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Feb 13 19:41:25.369814 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Feb 13 19:41:25.370840 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Feb 13 19:41:25.373237 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Feb 13 19:41:25.374246 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Feb 13 19:41:25.376658 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 13 19:41:25.377652 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 19:41:25.380189 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Feb 13 19:41:25.381266 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Feb 13 19:41:25.425561 systemd[1]: sysroot-boot.service: Deactivated successfully. Feb 13 19:41:25.426648 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Feb 13 19:41:25.429163 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Feb 13 19:41:25.431346 systemd[1]: initrd-setup-root.service: Deactivated successfully. Feb 13 19:41:25.432399 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Feb 13 19:41:25.445733 systemd[1]: Starting initrd-switch-root.service - Switch Root... Feb 13 19:41:25.454957 systemd[1]: Switching root. Feb 13 19:41:25.492743 systemd-journald[194]: Journal stopped Feb 13 19:41:26.585205 systemd-journald[194]: Received SIGTERM from PID 1 (systemd). Feb 13 19:41:26.585277 kernel: SELinux: policy capability network_peer_controls=1 Feb 13 19:41:26.585291 kernel: SELinux: policy capability open_perms=1 Feb 13 19:41:26.585306 kernel: SELinux: policy capability extended_socket_class=1 Feb 13 19:41:26.585317 kernel: SELinux: policy capability always_check_network=0 Feb 13 19:41:26.585328 kernel: SELinux: policy capability cgroup_seclabel=1 Feb 13 19:41:26.585349 kernel: SELinux: policy capability nnp_nosuid_transition=1 Feb 13 19:41:26.585360 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Feb 13 19:41:26.585385 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Feb 13 19:41:26.585404 kernel: audit: type=1403 audit(1739475685.838:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Feb 13 19:41:26.585421 systemd[1]: Successfully loaded SELinux policy in 44.906ms. Feb 13 19:41:26.585442 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 14.172ms. Feb 13 19:41:26.585458 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Feb 13 19:41:26.585470 systemd[1]: Detected virtualization kvm. Feb 13 19:41:26.585483 systemd[1]: Detected architecture x86-64. Feb 13 19:41:26.585494 systemd[1]: Detected first boot. Feb 13 19:41:26.585511 systemd[1]: Initializing machine ID from VM UUID. Feb 13 19:41:26.585523 zram_generator::config[1059]: No configuration found. Feb 13 19:41:26.585537 systemd[1]: Populated /etc with preset unit settings. 
Feb 13 19:41:26.585549 systemd[1]: initrd-switch-root.service: Deactivated successfully. Feb 13 19:41:26.585565 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Feb 13 19:41:26.585589 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Feb 13 19:41:26.585603 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Feb 13 19:41:26.585615 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Feb 13 19:41:26.585627 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Feb 13 19:41:26.585639 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Feb 13 19:41:26.585656 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Feb 13 19:41:26.585669 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Feb 13 19:41:26.585683 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Feb 13 19:41:26.585696 systemd[1]: Created slice user.slice - User and Session Slice. Feb 13 19:41:26.585708 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Feb 13 19:41:26.585721 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Feb 13 19:41:26.585733 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Feb 13 19:41:26.585746 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Feb 13 19:41:26.585758 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Feb 13 19:41:26.585771 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Feb 13 19:41:26.585783 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Feb 13 19:41:26.585798 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Feb 13 19:41:26.585810 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Feb 13 19:41:26.585822 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Feb 13 19:41:26.585835 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Feb 13 19:41:26.585849 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Feb 13 19:41:26.585861 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Feb 13 19:41:26.585873 systemd[1]: Reached target remote-fs.target - Remote File Systems. Feb 13 19:41:26.585885 systemd[1]: Reached target slices.target - Slice Units. Feb 13 19:41:26.585900 systemd[1]: Reached target swap.target - Swaps. Feb 13 19:41:26.585912 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Feb 13 19:41:26.585924 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Feb 13 19:41:26.585937 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Feb 13 19:41:26.585949 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Feb 13 19:41:26.585961 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Feb 13 19:41:26.585974 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Feb 13 19:41:26.585986 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... 
Feb 13 19:41:26.585998 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Feb 13 19:41:26.586012 systemd[1]: Mounting media.mount - External Media Directory... Feb 13 19:41:26.586026 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 19:41:26.586038 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Feb 13 19:41:26.586050 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Feb 13 19:41:26.586063 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Feb 13 19:41:26.586076 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Feb 13 19:41:26.586088 systemd[1]: Reached target machines.target - Containers. Feb 13 19:41:26.586100 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Feb 13 19:41:26.586113 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 19:41:26.586132 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Feb 13 19:41:26.586144 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Feb 13 19:41:26.586165 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 19:41:26.586177 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Feb 13 19:41:26.586189 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 13 19:41:26.586202 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Feb 13 19:41:26.586215 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Feb 13 19:41:26.586227 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Feb 13 19:41:26.586244 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Feb 13 19:41:26.586257 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Feb 13 19:41:26.586270 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Feb 13 19:41:26.586282 systemd[1]: Stopped systemd-fsck-usr.service. Feb 13 19:41:26.586295 systemd[1]: Starting systemd-journald.service - Journal Service... Feb 13 19:41:26.586307 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Feb 13 19:41:26.586319 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Feb 13 19:41:26.586331 kernel: loop: module loaded Feb 13 19:41:26.586350 kernel: fuse: init (API version 7.39) Feb 13 19:41:26.586364 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Feb 13 19:41:26.586377 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Feb 13 19:41:26.586389 systemd[1]: verity-setup.service: Deactivated successfully. Feb 13 19:41:26.586403 systemd[1]: Stopped verity-setup.service. Feb 13 19:41:26.586415 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 19:41:26.586428 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Feb 13 19:41:26.586440 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. 
Feb 13 19:41:26.586469 systemd-journald[1132]: Collecting audit messages is disabled. Feb 13 19:41:26.586493 kernel: ACPI: bus type drm_connector registered Feb 13 19:41:26.586505 systemd[1]: Mounted media.mount - External Media Directory. Feb 13 19:41:26.586518 systemd-journald[1132]: Journal started Feb 13 19:41:26.586540 systemd-journald[1132]: Runtime Journal (/run/log/journal/65be40ae87954519a069bba16229acfb) is 6.0M, max 48.3M, 42.3M free. Feb 13 19:41:26.356319 systemd[1]: Queued start job for default target multi-user.target. Feb 13 19:41:26.378052 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Feb 13 19:41:26.378482 systemd[1]: systemd-journald.service: Deactivated successfully. Feb 13 19:41:26.589028 systemd[1]: Started systemd-journald.service - Journal Service. Feb 13 19:41:26.589847 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Feb 13 19:41:26.591066 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Feb 13 19:41:26.592302 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Feb 13 19:41:26.593846 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Feb 13 19:41:26.595322 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Feb 13 19:41:26.596892 systemd[1]: modprobe@configfs.service: Deactivated successfully. Feb 13 19:41:26.597070 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Feb 13 19:41:26.598571 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 19:41:26.598765 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 19:41:26.600378 systemd[1]: modprobe@drm.service: Deactivated successfully. Feb 13 19:41:26.600550 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Feb 13 19:41:26.601934 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 19:41:26.602104 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 13 19:41:26.603654 systemd[1]: modprobe@fuse.service: Deactivated successfully. Feb 13 19:41:26.603824 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Feb 13 19:41:26.605351 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 19:41:26.605517 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 13 19:41:26.606917 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Feb 13 19:41:26.608308 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Feb 13 19:41:26.609928 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Feb 13 19:41:26.625684 systemd[1]: Reached target network-pre.target - Preparation for Network. Feb 13 19:41:26.632649 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Feb 13 19:41:26.634866 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Feb 13 19:41:26.635995 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Feb 13 19:41:26.636023 systemd[1]: Reached target local-fs.target - Local File Systems. Feb 13 19:41:26.638009 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Feb 13 19:41:26.640798 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... 
Feb 13 19:41:26.643344 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Feb 13 19:41:26.644500 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 19:41:26.646777 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Feb 13 19:41:26.648717 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Feb 13 19:41:26.650023 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 13 19:41:26.651865 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Feb 13 19:41:26.653517 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Feb 13 19:41:26.654795 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Feb 13 19:41:26.657802 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Feb 13 19:41:26.664204 systemd[1]: Starting systemd-sysusers.service - Create System Users... Feb 13 19:41:26.666920 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Feb 13 19:41:26.668353 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Feb 13 19:41:26.670071 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Feb 13 19:41:26.675175 systemd-journald[1132]: Time spent on flushing to /var/log/journal/65be40ae87954519a069bba16229acfb is 38.634ms for 952 entries. Feb 13 19:41:26.675175 systemd-journald[1132]: System Journal (/var/log/journal/65be40ae87954519a069bba16229acfb) is 8.0M, max 195.6M, 187.6M free. Feb 13 19:41:26.735284 systemd-journald[1132]: Received client request to flush runtime journal. Feb 13 19:41:26.735358 kernel: loop0: detected capacity change from 0 to 138184 Feb 13 19:41:26.735394 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Feb 13 19:41:26.675238 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Feb 13 19:41:26.677992 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Feb 13 19:41:26.681422 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Feb 13 19:41:26.690782 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Feb 13 19:41:26.696754 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Feb 13 19:41:26.708836 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Feb 13 19:41:26.722690 udevadm[1183]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Feb 13 19:41:26.742791 kernel: loop1: detected capacity change from 0 to 141000 Feb 13 19:41:26.727832 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Feb 13 19:41:26.729431 systemd[1]: Finished systemd-sysusers.service - Create System Users. Feb 13 19:41:26.732939 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Feb 13 19:41:26.742156 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Feb 13 19:41:26.744903 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. 
Feb 13 19:41:26.768886 systemd-tmpfiles[1190]: ACLs are not supported, ignoring. Feb 13 19:41:26.768906 systemd-tmpfiles[1190]: ACLs are not supported, ignoring. Feb 13 19:41:26.775678 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Feb 13 19:41:26.789391 kernel: loop2: detected capacity change from 0 to 210664 Feb 13 19:41:26.820606 kernel: loop3: detected capacity change from 0 to 138184 Feb 13 19:41:26.834602 kernel: loop4: detected capacity change from 0 to 141000 Feb 13 19:41:26.851599 kernel: loop5: detected capacity change from 0 to 210664 Feb 13 19:41:26.858452 (sd-merge)[1197]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Feb 13 19:41:26.859065 (sd-merge)[1197]: Merged extensions into '/usr'. Feb 13 19:41:26.864122 systemd[1]: Reloading requested from client PID 1172 ('systemd-sysext') (unit systemd-sysext.service)... Feb 13 19:41:26.864140 systemd[1]: Reloading... Feb 13 19:41:27.038444 zram_generator::config[1225]: No configuration found. Feb 13 19:41:27.157193 ldconfig[1167]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Feb 13 19:41:27.225448 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 19:41:27.289933 systemd[1]: Reloading finished in 425 ms. Feb 13 19:41:27.361679 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Feb 13 19:41:27.363432 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Feb 13 19:41:27.383815 systemd[1]: Starting ensure-sysext.service... Feb 13 19:41:27.386190 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Feb 13 19:41:27.394349 systemd[1]: Reloading requested from client PID 1260 ('systemctl') (unit ensure-sysext.service)... Feb 13 19:41:27.394366 systemd[1]: Reloading... Feb 13 19:41:27.431285 systemd-tmpfiles[1261]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Feb 13 19:41:27.431673 systemd-tmpfiles[1261]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Feb 13 19:41:27.432882 systemd-tmpfiles[1261]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Feb 13 19:41:27.433296 systemd-tmpfiles[1261]: ACLs are not supported, ignoring. Feb 13 19:41:27.433418 systemd-tmpfiles[1261]: ACLs are not supported, ignoring. Feb 13 19:41:27.444401 systemd-tmpfiles[1261]: Detected autofs mount point /boot during canonicalization of boot. Feb 13 19:41:27.444416 systemd-tmpfiles[1261]: Skipping /boot Feb 13 19:41:27.465615 systemd-tmpfiles[1261]: Detected autofs mount point /boot during canonicalization of boot. Feb 13 19:41:27.465633 systemd-tmpfiles[1261]: Skipping /boot Feb 13 19:41:27.470607 zram_generator::config[1288]: No configuration found. Feb 13 19:41:27.599458 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 19:41:27.649436 systemd[1]: Reloading finished in 254 ms. Feb 13 19:41:27.670209 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. 
Feb 13 19:41:27.682056 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Feb 13 19:41:27.691184 systemd[1]: Starting audit-rules.service - Load Audit Rules... Feb 13 19:41:27.693619 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Feb 13 19:41:27.696191 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Feb 13 19:41:27.700786 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Feb 13 19:41:27.704863 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Feb 13 19:41:27.708773 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Feb 13 19:41:27.712924 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 19:41:27.713098 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 19:41:27.720883 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 19:41:27.727872 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 13 19:41:27.732022 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Feb 13 19:41:27.733500 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 19:41:27.735834 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Feb 13 19:41:27.737245 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 19:41:27.738888 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Feb 13 19:41:27.741513 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 19:41:27.742874 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 19:41:27.744739 systemd-udevd[1331]: Using default interface naming scheme 'v255'. Feb 13 19:41:27.744995 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 19:41:27.745170 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 13 19:41:27.747523 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 19:41:27.747766 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 13 19:41:27.759817 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Feb 13 19:41:27.762787 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 19:41:27.763179 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 19:41:27.770838 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 19:41:27.774183 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 13 19:41:27.777289 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Feb 13 19:41:27.778791 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 19:41:27.780255 systemd[1]: Starting systemd-update-done.service - Update is Completed... 
Feb 13 19:41:27.782703 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 19:41:27.783476 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Feb 13 19:41:27.785566 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 19:41:27.785790 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 19:41:27.790747 systemd[1]: Started systemd-userdbd.service - User Database Manager. Feb 13 19:41:27.793238 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 19:41:27.795840 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 19:41:27.803449 augenrules[1387]: No rules Feb 13 19:41:27.805866 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 19:41:27.808895 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Feb 13 19:41:27.810806 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 19:41:27.814781 systemd[1]: Starting systemd-networkd.service - Network Configuration... Feb 13 19:41:27.816743 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 19:41:27.817897 systemd[1]: audit-rules.service: Deactivated successfully. Feb 13 19:41:27.818694 systemd[1]: Finished audit-rules.service - Load Audit Rules. Feb 13 19:41:27.821890 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Feb 13 19:41:27.823627 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 19:41:27.823817 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 13 19:41:27.826094 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 19:41:27.826283 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 13 19:41:27.828176 systemd[1]: Finished systemd-update-done.service - Update is Completed. Feb 13 19:41:27.830206 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 19:41:27.836090 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 19:41:27.838475 systemd[1]: modprobe@drm.service: Deactivated successfully. Feb 13 19:41:27.839962 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Feb 13 19:41:27.842057 systemd[1]: Finished ensure-sysext.service. Feb 13 19:41:27.904201 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Feb 13 19:41:27.908937 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 13 19:41:27.908999 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Feb 13 19:41:27.912780 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Feb 13 19:41:27.915655 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). 
Feb 13 19:41:27.944287 systemd-resolved[1330]: Positive Trust Anchors: Feb 13 19:41:27.946549 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (1390) Feb 13 19:41:27.945243 systemd-resolved[1330]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 13 19:41:27.945278 systemd-resolved[1330]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Feb 13 19:41:27.949158 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Feb 13 19:41:27.951290 systemd-resolved[1330]: Defaulting to hostname 'linux'. Feb 13 19:41:27.974750 systemd-networkd[1396]: lo: Link UP Feb 13 19:41:27.974765 systemd-networkd[1396]: lo: Gained carrier Feb 13 19:41:27.976345 systemd-networkd[1396]: Enumeration completed Feb 13 19:41:28.024773 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Feb 13 19:41:28.026203 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Feb 13 19:41:28.033594 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Feb 13 19:41:28.034447 systemd[1]: Started systemd-networkd.service - Network Configuration. Feb 13 19:41:28.046036 systemd-networkd[1396]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 19:41:28.046045 systemd-networkd[1396]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 13 19:41:28.046899 systemd-networkd[1396]: eth0: Link UP Feb 13 19:41:28.046903 systemd-networkd[1396]: eth0: Gained carrier Feb 13 19:41:28.046924 systemd-networkd[1396]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Feb 13 19:41:28.066661 systemd-networkd[1396]: eth0: DHCPv4 address 10.0.0.106/16, gateway 10.0.0.1 acquired from 10.0.0.1 Feb 13 19:41:28.067621 systemd-timesyncd[1414]: Network configuration changed, trying to establish connection. Feb 13 19:41:28.781569 systemd-resolved[1330]: Clock change detected. Flushing caches. Feb 13 19:41:28.781660 systemd-timesyncd[1414]: Contacted time server 10.0.0.1:123 (10.0.0.1). Feb 13 19:41:28.781705 systemd-timesyncd[1414]: Initial clock synchronization to Thu 2025-02-13 19:41:28.781531 UTC. Feb 13 19:41:28.795480 kernel: ACPI: button: Power Button [PWRF] Feb 13 19:41:28.795559 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Feb 13 19:41:28.806701 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Feb 13 19:41:28.806875 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Feb 13 19:41:28.807010 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2 Feb 13 19:41:28.798186 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Feb 13 19:41:28.803768 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. 
Feb 13 19:41:28.819962 systemd[1]: Reached target network.target - Network. Feb 13 19:41:28.821017 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Feb 13 19:41:28.822293 systemd[1]: Reached target time-set.target - System Time Set. Feb 13 19:41:28.852540 kernel: mousedev: PS/2 mouse device common for all mice Feb 13 19:41:28.853088 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Feb 13 19:41:28.858641 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 19:41:28.865657 kernel: kvm_amd: TSC scaling supported Feb 13 19:41:28.865691 kernel: kvm_amd: Nested Virtualization enabled Feb 13 19:41:28.865705 kernel: kvm_amd: Nested Paging enabled Feb 13 19:41:28.866630 kernel: kvm_amd: LBR virtualization supported Feb 13 19:41:28.866652 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Feb 13 19:41:28.867624 kernel: kvm_amd: Virtual GIF supported Feb 13 19:41:28.890557 kernel: EDAC MC: Ver: 3.0.0 Feb 13 19:41:28.972107 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Feb 13 19:41:28.984778 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Feb 13 19:41:29.019788 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 19:41:29.029386 lvm[1432]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 13 19:41:29.061102 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Feb 13 19:41:29.062728 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Feb 13 19:41:29.063858 systemd[1]: Reached target sysinit.target - System Initialization. Feb 13 19:41:29.065062 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Feb 13 19:41:29.066363 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Feb 13 19:41:29.067801 systemd[1]: Started logrotate.timer - Daily rotation of log files. Feb 13 19:41:29.069004 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Feb 13 19:41:29.070264 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Feb 13 19:41:29.071528 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Feb 13 19:41:29.071560 systemd[1]: Reached target paths.target - Path Units. Feb 13 19:41:29.072491 systemd[1]: Reached target timers.target - Timer Units. Feb 13 19:41:29.074375 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Feb 13 19:41:29.077127 systemd[1]: Starting docker.socket - Docker Socket for the API... Feb 13 19:41:29.087255 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Feb 13 19:41:29.089781 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Feb 13 19:41:29.091765 systemd[1]: Listening on docker.socket - Docker Socket for the API. Feb 13 19:41:29.092959 systemd[1]: Reached target sockets.target - Socket Units. Feb 13 19:41:29.093937 systemd[1]: Reached target basic.target - Basic System. Feb 13 19:41:29.094941 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. 
Feb 13 19:41:29.094969 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Feb 13 19:41:29.096056 systemd[1]: Starting containerd.service - containerd container runtime... Feb 13 19:41:29.098249 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Feb 13 19:41:29.102608 lvm[1437]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 13 19:41:29.102659 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Feb 13 19:41:29.106908 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Feb 13 19:41:29.109584 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Feb 13 19:41:29.110845 jq[1440]: false Feb 13 19:41:29.121702 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Feb 13 19:41:29.122303 dbus-daemon[1439]: [system] SELinux support is enabled Feb 13 19:41:29.124046 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Feb 13 19:41:29.127301 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Feb 13 19:41:29.131071 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Feb 13 19:41:29.136141 systemd[1]: Starting systemd-logind.service - User Login Management... Feb 13 19:41:29.137613 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Feb 13 19:41:29.138066 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Feb 13 19:41:29.139297 systemd[1]: Starting update-engine.service - Update Engine... Feb 13 19:41:29.139715 extend-filesystems[1441]: Found loop3 Feb 13 19:41:29.142283 extend-filesystems[1441]: Found loop4 Feb 13 19:41:29.142283 extend-filesystems[1441]: Found loop5 Feb 13 19:41:29.142283 extend-filesystems[1441]: Found sr0 Feb 13 19:41:29.142283 extend-filesystems[1441]: Found vda Feb 13 19:41:29.142283 extend-filesystems[1441]: Found vda1 Feb 13 19:41:29.142283 extend-filesystems[1441]: Found vda2 Feb 13 19:41:29.142283 extend-filesystems[1441]: Found vda3 Feb 13 19:41:29.142283 extend-filesystems[1441]: Found usr Feb 13 19:41:29.142283 extend-filesystems[1441]: Found vda4 Feb 13 19:41:29.142283 extend-filesystems[1441]: Found vda6 Feb 13 19:41:29.142283 extend-filesystems[1441]: Found vda7 Feb 13 19:41:29.142283 extend-filesystems[1441]: Found vda9 Feb 13 19:41:29.142283 extend-filesystems[1441]: Checking size of /dev/vda9 Feb 13 19:41:29.143633 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Feb 13 19:41:29.144700 systemd[1]: Started dbus.service - D-Bus System Message Bus. Feb 13 19:41:29.148875 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Feb 13 19:41:29.161647 jq[1455]: true Feb 13 19:41:29.152157 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Feb 13 19:41:29.152927 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Feb 13 19:41:29.153297 systemd[1]: motdgen.service: Deactivated successfully. Feb 13 19:41:29.155561 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Feb 13 19:41:29.162253 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. 
Feb 13 19:41:29.162584 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Feb 13 19:41:29.181634 jq[1462]: true Feb 13 19:41:29.187121 extend-filesystems[1441]: Resized partition /dev/vda9 Feb 13 19:41:29.190455 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (1378) Feb 13 19:41:29.196652 (ntainerd)[1467]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Feb 13 19:41:29.201762 extend-filesystems[1479]: resize2fs 1.47.1 (20-May-2024) Feb 13 19:41:29.203938 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Feb 13 19:41:29.205848 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Feb 13 19:41:29.205876 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Feb 13 19:41:29.208072 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Feb 13 19:41:29.208096 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Feb 13 19:41:29.210991 tar[1458]: linux-amd64/helm Feb 13 19:41:29.218633 update_engine[1453]: I20250213 19:41:29.216465 1453 main.cc:92] Flatcar Update Engine starting Feb 13 19:41:29.221622 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Feb 13 19:41:29.225019 update_engine[1453]: I20250213 19:41:29.224915 1453 update_check_scheduler.cc:74] Next update check in 5m7s Feb 13 19:41:29.225166 systemd[1]: Started update-engine.service - Update Engine. Feb 13 19:41:29.295915 systemd[1]: Started locksmithd.service - Cluster reboot manager. Feb 13 19:41:29.310930 systemd-logind[1449]: Watching system buttons on /dev/input/event1 (Power Button) Feb 13 19:41:29.310951 systemd-logind[1449]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Feb 13 19:41:29.313879 systemd-logind[1449]: New seat seat0. Feb 13 19:41:29.314901 systemd[1]: Started systemd-logind.service - User Login Management. Feb 13 19:41:29.330579 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Feb 13 19:41:29.354590 extend-filesystems[1479]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Feb 13 19:41:29.354590 extend-filesystems[1479]: old_desc_blocks = 1, new_desc_blocks = 1 Feb 13 19:41:29.354590 extend-filesystems[1479]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Feb 13 19:41:29.361199 extend-filesystems[1441]: Resized filesystem in /dev/vda9 Feb 13 19:41:29.357629 systemd[1]: extend-filesystems.service: Deactivated successfully. Feb 13 19:41:29.357890 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Feb 13 19:41:29.364788 bash[1494]: Updated "/home/core/.ssh/authorized_keys" Feb 13 19:41:29.368065 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Feb 13 19:41:29.370915 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. 
Feb 13 19:41:29.375984 locksmithd[1487]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Feb 13 19:41:29.464041 sshd_keygen[1461]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Feb 13 19:41:29.563028 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Feb 13 19:41:29.576706 systemd[1]: Starting issuegen.service - Generate /run/issue... Feb 13 19:41:29.579065 systemd[1]: Started sshd@0-10.0.0.106:22-10.0.0.1:51750.service - OpenSSH per-connection server daemon (10.0.0.1:51750). Feb 13 19:41:29.588485 systemd[1]: issuegen.service: Deactivated successfully. Feb 13 19:41:29.588732 systemd[1]: Finished issuegen.service - Generate /run/issue. Feb 13 19:41:29.597355 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Feb 13 19:41:29.642309 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Feb 13 19:41:29.651865 systemd[1]: Started getty@tty1.service - Getty on tty1. Feb 13 19:41:29.654397 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Feb 13 19:41:29.655691 systemd[1]: Reached target getty.target - Login Prompts. Feb 13 19:41:29.690199 sshd[1518]: Accepted publickey for core from 10.0.0.1 port 51750 ssh2: RSA SHA256:M1JiL2vDeW2xAdjMAPzmPoLJm0DJz/C0inZYUswlgoA Feb 13 19:41:29.691179 sshd-session[1518]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:41:29.702880 systemd-logind[1449]: New session 1 of user core. Feb 13 19:41:29.704400 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Feb 13 19:41:29.714398 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Feb 13 19:41:29.724570 containerd[1467]: time="2025-02-13T19:41:29.723751705Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 Feb 13 19:41:29.738560 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Feb 13 19:41:29.748765 systemd[1]: Starting user@500.service - User Manager for UID 500... Feb 13 19:41:29.752786 (systemd)[1530]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Feb 13 19:41:29.753629 containerd[1467]: time="2025-02-13T19:41:29.753586325Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Feb 13 19:41:29.791365 containerd[1467]: time="2025-02-13T19:41:29.791290324Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.74-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Feb 13 19:41:29.791365 containerd[1467]: time="2025-02-13T19:41:29.791359143Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Feb 13 19:41:29.791507 containerd[1467]: time="2025-02-13T19:41:29.791384560Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Feb 13 19:41:29.791701 containerd[1467]: time="2025-02-13T19:41:29.791665076Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Feb 13 19:41:29.791755 containerd[1467]: time="2025-02-13T19:41:29.791697427Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." 
type=io.containerd.snapshotter.v1 Feb 13 19:41:29.791813 containerd[1467]: time="2025-02-13T19:41:29.791791944Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 19:41:29.791838 containerd[1467]: time="2025-02-13T19:41:29.791813375Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Feb 13 19:41:29.792151 containerd[1467]: time="2025-02-13T19:41:29.792121122Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 19:41:29.792180 containerd[1467]: time="2025-02-13T19:41:29.792149455Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Feb 13 19:41:29.792180 containerd[1467]: time="2025-02-13T19:41:29.792168080Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 19:41:29.792180 containerd[1467]: time="2025-02-13T19:41:29.792180794Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Feb 13 19:41:29.792346 containerd[1467]: time="2025-02-13T19:41:29.792322830Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Feb 13 19:41:29.792689 containerd[1467]: time="2025-02-13T19:41:29.792667947Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Feb 13 19:41:29.792855 containerd[1467]: time="2025-02-13T19:41:29.792831784Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 19:41:29.792881 containerd[1467]: time="2025-02-13T19:41:29.792858224Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Feb 13 19:41:29.793005 containerd[1467]: time="2025-02-13T19:41:29.792976335Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Feb 13 19:41:29.793100 containerd[1467]: time="2025-02-13T19:41:29.793048761Z" level=info msg="metadata content store policy set" policy=shared Feb 13 19:41:29.932077 systemd[1530]: Queued start job for default target default.target. Feb 13 19:41:29.941804 tar[1458]: linux-amd64/LICENSE Feb 13 19:41:29.941804 tar[1458]: linux-amd64/README.md Feb 13 19:41:29.942783 systemd[1530]: Created slice app.slice - User Application Slice. Feb 13 19:41:29.942809 systemd[1530]: Reached target paths.target - Paths. Feb 13 19:41:29.942822 systemd[1530]: Reached target timers.target - Timers. Feb 13 19:41:29.944362 systemd[1530]: Starting dbus.socket - D-Bus User Message Bus Socket... Feb 13 19:41:29.953145 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Feb 13 19:41:29.959108 systemd[1530]: Listening on dbus.socket - D-Bus User Message Bus Socket. Feb 13 19:41:29.959254 systemd[1530]: Reached target sockets.target - Sockets. Feb 13 19:41:29.959275 systemd[1530]: Reached target basic.target - Basic System. 
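Two Flatcar-specific services appear above: update-engine, which scheduled its next update check 5m7s out, and locksmithd, the cluster reboot manager that reports strategy="reboot". Assuming a shell on the node, their state can be inspected roughly as below (client name and config path as documented for Flatcar; output format varies by release, and the config file may be absent when defaults are in use).

    # ask the update engine for its current state and last check result
    update_engine_client -status

    # locksmithd's reboot strategy is normally configured here
    cat /etc/flatcar/update.conf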
Feb 13 19:41:29.959316 systemd[1530]: Reached target default.target - Main User Target. Feb 13 19:41:29.959350 systemd[1530]: Startup finished in 198ms. Feb 13 19:41:29.959712 systemd[1]: Started user@500.service - User Manager for UID 500. Feb 13 19:41:29.967620 systemd[1]: Started session-1.scope - Session 1 of User core. Feb 13 19:41:29.984822 containerd[1467]: time="2025-02-13T19:41:29.984769887Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Feb 13 19:41:29.984903 containerd[1467]: time="2025-02-13T19:41:29.984844246Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Feb 13 19:41:29.984903 containerd[1467]: time="2025-02-13T19:41:29.984861308Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Feb 13 19:41:29.984903 containerd[1467]: time="2025-02-13T19:41:29.984880164Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Feb 13 19:41:29.984903 containerd[1467]: time="2025-02-13T19:41:29.984895162Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Feb 13 19:41:29.985109 containerd[1467]: time="2025-02-13T19:41:29.985073386Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Feb 13 19:41:29.986114 containerd[1467]: time="2025-02-13T19:41:29.985871042Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Feb 13 19:41:29.986114 containerd[1467]: time="2025-02-13T19:41:29.986100362Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Feb 13 19:41:29.986253 containerd[1467]: time="2025-02-13T19:41:29.986208525Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Feb 13 19:41:29.986299 containerd[1467]: time="2025-02-13T19:41:29.986281111Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Feb 13 19:41:29.986333 containerd[1467]: time="2025-02-13T19:41:29.986313742Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Feb 13 19:41:29.986387 containerd[1467]: time="2025-02-13T19:41:29.986370949Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Feb 13 19:41:29.986408 containerd[1467]: time="2025-02-13T19:41:29.986393953Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Feb 13 19:41:29.986442 containerd[1467]: time="2025-02-13T19:41:29.986425111Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Feb 13 19:41:29.986470 containerd[1467]: time="2025-02-13T19:41:29.986447663Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Feb 13 19:41:29.986490 containerd[1467]: time="2025-02-13T19:41:29.986469795Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Feb 13 19:41:29.986528 containerd[1467]: time="2025-02-13T19:41:29.986490995Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." 
type=io.containerd.service.v1 Feb 13 19:41:29.986528 containerd[1467]: time="2025-02-13T19:41:29.986522764Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Feb 13 19:41:29.986565 containerd[1467]: time="2025-02-13T19:41:29.986555946Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Feb 13 19:41:29.986585 containerd[1467]: time="2025-02-13T19:41:29.986573089Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Feb 13 19:41:29.986615 containerd[1467]: time="2025-02-13T19:41:29.986589620Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Feb 13 19:41:29.986615 containerd[1467]: time="2025-02-13T19:41:29.986607022Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Feb 13 19:41:29.986659 containerd[1467]: time="2025-02-13T19:41:29.986625126Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Feb 13 19:41:29.986878 containerd[1467]: time="2025-02-13T19:41:29.986833266Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Feb 13 19:41:29.986907 containerd[1467]: time="2025-02-13T19:41:29.986882028Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Feb 13 19:41:29.986907 containerd[1467]: time="2025-02-13T19:41:29.986900092Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Feb 13 19:41:29.986945 containerd[1467]: time="2025-02-13T19:41:29.986918075Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Feb 13 19:41:29.986945 containerd[1467]: time="2025-02-13T19:41:29.986937682Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Feb 13 19:41:29.986981 containerd[1467]: time="2025-02-13T19:41:29.986950737Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Feb 13 19:41:29.986981 containerd[1467]: time="2025-02-13T19:41:29.986965675Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Feb 13 19:41:29.987023 containerd[1467]: time="2025-02-13T19:41:29.986990712Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Feb 13 19:41:29.987023 containerd[1467]: time="2025-02-13T19:41:29.987010890Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Feb 13 19:41:29.987059 containerd[1467]: time="2025-02-13T19:41:29.987045865Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Feb 13 19:41:29.987079 containerd[1467]: time="2025-02-13T19:41:29.987061394Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Feb 13 19:41:29.987079 containerd[1467]: time="2025-02-13T19:41:29.987073016Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Feb 13 19:41:29.987208 containerd[1467]: time="2025-02-13T19:41:29.987180137Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." 
type=io.containerd.tracing.processor.v1 Feb 13 19:41:29.987231 containerd[1467]: time="2025-02-13T19:41:29.987216876Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Feb 13 19:41:29.987254 containerd[1467]: time="2025-02-13T19:41:29.987229319Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Feb 13 19:41:29.987254 containerd[1467]: time="2025-02-13T19:41:29.987243556Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Feb 13 19:41:29.987290 containerd[1467]: time="2025-02-13T19:41:29.987254236Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Feb 13 19:41:29.987290 containerd[1467]: time="2025-02-13T19:41:29.987270106Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Feb 13 19:41:29.987290 containerd[1467]: time="2025-02-13T19:41:29.987286126Z" level=info msg="NRI interface is disabled by configuration." Feb 13 19:41:29.987349 containerd[1467]: time="2025-02-13T19:41:29.987296084Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Feb 13 19:41:29.987691 containerd[1467]: time="2025-02-13T19:41:29.987649076Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false 
EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Feb 13 19:41:29.987815 containerd[1467]: time="2025-02-13T19:41:29.987696876Z" level=info msg="Connect containerd service" Feb 13 19:41:29.987815 containerd[1467]: time="2025-02-13T19:41:29.987738664Z" level=info msg="using legacy CRI server" Feb 13 19:41:29.987815 containerd[1467]: time="2025-02-13T19:41:29.987748202Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Feb 13 19:41:29.987911 containerd[1467]: time="2025-02-13T19:41:29.987894837Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Feb 13 19:41:29.988719 containerd[1467]: time="2025-02-13T19:41:29.988685520Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 13 19:41:29.988903 containerd[1467]: time="2025-02-13T19:41:29.988861670Z" level=info msg="Start subscribing containerd event" Feb 13 19:41:29.988956 containerd[1467]: time="2025-02-13T19:41:29.988936821Z" level=info msg="Start recovering state" Feb 13 19:41:29.989083 containerd[1467]: time="2025-02-13T19:41:29.989062016Z" level=info msg="Start event monitor" Feb 13 19:41:29.989109 containerd[1467]: time="2025-02-13T19:41:29.989067697Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Feb 13 19:41:29.989109 containerd[1467]: time="2025-02-13T19:41:29.989098044Z" level=info msg="Start snapshots syncer" Feb 13 19:41:29.989161 containerd[1467]: time="2025-02-13T19:41:29.989118402Z" level=info msg="Start cni network conf syncer for default" Feb 13 19:41:29.989161 containerd[1467]: time="2025-02-13T19:41:29.989128000Z" level=info msg="Start streaming server" Feb 13 19:41:29.989161 containerd[1467]: time="2025-02-13T19:41:29.989135995Z" level=info msg=serving... address=/run/containerd/containerd.sock Feb 13 19:41:29.989221 containerd[1467]: time="2025-02-13T19:41:29.989207629Z" level=info msg="containerd successfully booted in 0.266549s" Feb 13 19:41:29.989301 systemd[1]: Started containerd.service - containerd container runtime. Feb 13 19:41:30.038207 systemd[1]: Started sshd@1-10.0.0.106:22-10.0.0.1:51756.service - OpenSSH per-connection server daemon (10.0.0.1:51756). Feb 13 19:41:30.076530 sshd[1546]: Accepted publickey for core from 10.0.0.1 port 51756 ssh2: RSA SHA256:M1JiL2vDeW2xAdjMAPzmPoLJm0DJz/C0inZYUswlgoA Feb 13 19:41:30.078033 sshd-session[1546]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:41:30.082088 systemd-logind[1449]: New session 2 of user core. Feb 13 19:41:30.093624 systemd[1]: Started session-2.scope - Session 2 of User core. Feb 13 19:41:30.148380 sshd[1548]: Connection closed by 10.0.0.1 port 51756 Feb 13 19:41:30.148782 sshd-session[1546]: pam_unix(sshd:session): session closed for user core Feb 13 19:41:30.158055 systemd[1]: sshd@1-10.0.0.106:22-10.0.0.1:51756.service: Deactivated successfully. Feb 13 19:41:30.159797 systemd[1]: session-2.scope: Deactivated successfully. 
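The containerd CRI plugin above comes up with runc as the default runtime using the systemd cgroup driver (SystemdCgroup:true) and, as expected on a node with no pod network installed yet, logs that no CNI config was found in /etc/cni/net.d. A rough way to confirm both from the node, assuming crictl is installed and pointed at containerd's socket:

    # dump the CRI runtime config and status as containerd reports them
    crictl --runtime-endpoint unix:///run/containerd/containerd.sock info

    # the CNI error clears once a network plugin writes a config file here
    ls /etc/cni/net.d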
Feb 13 19:41:30.161311 systemd-logind[1449]: Session 2 logged out. Waiting for processes to exit. Feb 13 19:41:30.168731 systemd[1]: Started sshd@2-10.0.0.106:22-10.0.0.1:51762.service - OpenSSH per-connection server daemon (10.0.0.1:51762). Feb 13 19:41:30.170974 systemd-logind[1449]: Removed session 2. Feb 13 19:41:30.187625 systemd-networkd[1396]: eth0: Gained IPv6LL Feb 13 19:41:30.190699 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Feb 13 19:41:30.192571 systemd[1]: Reached target network-online.target - Network is Online. Feb 13 19:41:30.206693 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Feb 13 19:41:30.209149 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:41:30.211326 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Feb 13 19:41:30.225896 sshd[1553]: Accepted publickey for core from 10.0.0.1 port 51762 ssh2: RSA SHA256:M1JiL2vDeW2xAdjMAPzmPoLJm0DJz/C0inZYUswlgoA Feb 13 19:41:30.226649 sshd-session[1553]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:41:30.230143 systemd[1]: coreos-metadata.service: Deactivated successfully. Feb 13 19:41:30.230405 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Feb 13 19:41:30.232090 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Feb 13 19:41:30.236116 systemd-logind[1449]: New session 3 of user core. Feb 13 19:41:30.246819 systemd[1]: Started session-3.scope - Session 3 of User core. Feb 13 19:41:30.248325 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Feb 13 19:41:30.302667 sshd[1572]: Connection closed by 10.0.0.1 port 51762 Feb 13 19:41:30.303053 sshd-session[1553]: pam_unix(sshd:session): session closed for user core Feb 13 19:41:30.307166 systemd[1]: sshd@2-10.0.0.106:22-10.0.0.1:51762.service: Deactivated successfully. Feb 13 19:41:30.309024 systemd[1]: session-3.scope: Deactivated successfully. Feb 13 19:41:30.309711 systemd-logind[1449]: Session 3 logged out. Waiting for processes to exit. Feb 13 19:41:30.310617 systemd-logind[1449]: Removed session 3. Feb 13 19:41:31.221918 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:41:31.223783 systemd[1]: Reached target multi-user.target - Multi-User System. Feb 13 19:41:31.225065 systemd[1]: Startup finished in 708ms (kernel) + 5.142s (initrd) + 4.716s (userspace) = 10.567s. Feb 13 19:41:31.235627 agetty[1526]: failed to open credentials directory Feb 13 19:41:31.238400 (kubelet)[1581]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 19:41:31.251822 agetty[1525]: failed to open credentials directory Feb 13 19:41:31.937234 kubelet[1581]: E0213 19:41:31.937088 1581 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 19:41:31.941427 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 19:41:31.941650 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 19:41:31.942025 systemd[1]: kubelet.service: Consumed 1.590s CPU time. 
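The kubelet exit above is the normal state of a node that has not been bootstrapped yet: the unit keeps failing because /var/lib/kubelet/config.yaml does not exist, and that file is ordinarily written during kubeadm init or kubeadm join (kubeadm is assumed here as the bootstrap tool; the log itself only shows the missing file). A quick way to confirm what the unit is waiting for:

    # the unit restarts until the kubelet config appears
    systemctl status kubelet --no-pager
    journalctl -u kubelet --no-pager | tail -n 20

    # kubeadm writes this file (plus the kubeconfig) during bootstrap
    ls -l /var/lib/kubelet/config.yaml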
Feb 13 19:41:40.314517 systemd[1]: Started sshd@3-10.0.0.106:22-10.0.0.1:44582.service - OpenSSH per-connection server daemon (10.0.0.1:44582). Feb 13 19:41:40.354205 sshd[1595]: Accepted publickey for core from 10.0.0.1 port 44582 ssh2: RSA SHA256:M1JiL2vDeW2xAdjMAPzmPoLJm0DJz/C0inZYUswlgoA Feb 13 19:41:40.355760 sshd-session[1595]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:41:40.360095 systemd-logind[1449]: New session 4 of user core. Feb 13 19:41:40.369670 systemd[1]: Started session-4.scope - Session 4 of User core. Feb 13 19:41:40.423123 sshd[1597]: Connection closed by 10.0.0.1 port 44582 Feb 13 19:41:40.423542 sshd-session[1595]: pam_unix(sshd:session): session closed for user core Feb 13 19:41:40.441299 systemd[1]: sshd@3-10.0.0.106:22-10.0.0.1:44582.service: Deactivated successfully. Feb 13 19:41:40.443070 systemd[1]: session-4.scope: Deactivated successfully. Feb 13 19:41:40.444676 systemd-logind[1449]: Session 4 logged out. Waiting for processes to exit. Feb 13 19:41:40.470870 systemd[1]: Started sshd@4-10.0.0.106:22-10.0.0.1:44598.service - OpenSSH per-connection server daemon (10.0.0.1:44598). Feb 13 19:41:40.471766 systemd-logind[1449]: Removed session 4. Feb 13 19:41:40.505269 sshd[1602]: Accepted publickey for core from 10.0.0.1 port 44598 ssh2: RSA SHA256:M1JiL2vDeW2xAdjMAPzmPoLJm0DJz/C0inZYUswlgoA Feb 13 19:41:40.506473 sshd-session[1602]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:41:40.510318 systemd-logind[1449]: New session 5 of user core. Feb 13 19:41:40.519614 systemd[1]: Started session-5.scope - Session 5 of User core. Feb 13 19:41:40.568323 sshd[1604]: Connection closed by 10.0.0.1 port 44598 Feb 13 19:41:40.568653 sshd-session[1602]: pam_unix(sshd:session): session closed for user core Feb 13 19:41:40.585486 systemd[1]: sshd@4-10.0.0.106:22-10.0.0.1:44598.service: Deactivated successfully. Feb 13 19:41:40.587135 systemd[1]: session-5.scope: Deactivated successfully. Feb 13 19:41:40.588408 systemd-logind[1449]: Session 5 logged out. Waiting for processes to exit. Feb 13 19:41:40.589654 systemd[1]: Started sshd@5-10.0.0.106:22-10.0.0.1:44612.service - OpenSSH per-connection server daemon (10.0.0.1:44612). Feb 13 19:41:40.590238 systemd-logind[1449]: Removed session 5. Feb 13 19:41:40.629332 sshd[1609]: Accepted publickey for core from 10.0.0.1 port 44612 ssh2: RSA SHA256:M1JiL2vDeW2xAdjMAPzmPoLJm0DJz/C0inZYUswlgoA Feb 13 19:41:40.630730 sshd-session[1609]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:41:40.634529 systemd-logind[1449]: New session 6 of user core. Feb 13 19:41:40.644614 systemd[1]: Started session-6.scope - Session 6 of User core. Feb 13 19:41:40.697258 sshd[1611]: Connection closed by 10.0.0.1 port 44612 Feb 13 19:41:40.697558 sshd-session[1609]: pam_unix(sshd:session): session closed for user core Feb 13 19:41:40.711131 systemd[1]: sshd@5-10.0.0.106:22-10.0.0.1:44612.service: Deactivated successfully. Feb 13 19:41:40.712679 systemd[1]: session-6.scope: Deactivated successfully. Feb 13 19:41:40.714183 systemd-logind[1449]: Session 6 logged out. Waiting for processes to exit. Feb 13 19:41:40.715388 systemd[1]: Started sshd@6-10.0.0.106:22-10.0.0.1:44620.service - OpenSSH per-connection server daemon (10.0.0.1:44620). Feb 13 19:41:40.716039 systemd-logind[1449]: Removed session 6. 
Feb 13 19:41:40.754236 sshd[1616]: Accepted publickey for core from 10.0.0.1 port 44620 ssh2: RSA SHA256:M1JiL2vDeW2xAdjMAPzmPoLJm0DJz/C0inZYUswlgoA Feb 13 19:41:40.755590 sshd-session[1616]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:41:40.759162 systemd-logind[1449]: New session 7 of user core. Feb 13 19:41:40.775600 systemd[1]: Started session-7.scope - Session 7 of User core. Feb 13 19:41:40.833577 sudo[1619]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Feb 13 19:41:40.833907 sudo[1619]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 19:41:40.852648 sudo[1619]: pam_unix(sudo:session): session closed for user root Feb 13 19:41:40.854291 sshd[1618]: Connection closed by 10.0.0.1 port 44620 Feb 13 19:41:40.854684 sshd-session[1616]: pam_unix(sshd:session): session closed for user core Feb 13 19:41:40.871285 systemd[1]: sshd@6-10.0.0.106:22-10.0.0.1:44620.service: Deactivated successfully. Feb 13 19:41:40.872946 systemd[1]: session-7.scope: Deactivated successfully. Feb 13 19:41:40.874301 systemd-logind[1449]: Session 7 logged out. Waiting for processes to exit. Feb 13 19:41:40.875714 systemd[1]: Started sshd@7-10.0.0.106:22-10.0.0.1:44626.service - OpenSSH per-connection server daemon (10.0.0.1:44626). Feb 13 19:41:40.876379 systemd-logind[1449]: Removed session 7. Feb 13 19:41:40.913909 sshd[1624]: Accepted publickey for core from 10.0.0.1 port 44626 ssh2: RSA SHA256:M1JiL2vDeW2xAdjMAPzmPoLJm0DJz/C0inZYUswlgoA Feb 13 19:41:40.915190 sshd-session[1624]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:41:40.918936 systemd-logind[1449]: New session 8 of user core. Feb 13 19:41:40.934621 systemd[1]: Started session-8.scope - Session 8 of User core. Feb 13 19:41:40.987348 sudo[1628]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Feb 13 19:41:40.987700 sudo[1628]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 19:41:40.991361 sudo[1628]: pam_unix(sudo:session): session closed for user root Feb 13 19:41:40.997689 sudo[1627]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Feb 13 19:41:40.998018 sudo[1627]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 19:41:41.016773 systemd[1]: Starting audit-rules.service - Load Audit Rules... Feb 13 19:41:41.047254 augenrules[1650]: No rules Feb 13 19:41:41.049235 systemd[1]: audit-rules.service: Deactivated successfully. Feb 13 19:41:41.049464 systemd[1]: Finished audit-rules.service - Load Audit Rules. Feb 13 19:41:41.050738 sudo[1627]: pam_unix(sudo:session): session closed for user root Feb 13 19:41:41.052209 sshd[1626]: Connection closed by 10.0.0.1 port 44626 Feb 13 19:41:41.052621 sshd-session[1624]: pam_unix(sshd:session): session closed for user core Feb 13 19:41:41.063026 systemd[1]: sshd@7-10.0.0.106:22-10.0.0.1:44626.service: Deactivated successfully. Feb 13 19:41:41.064597 systemd[1]: session-8.scope: Deactivated successfully. Feb 13 19:41:41.066433 systemd-logind[1449]: Session 8 logged out. Waiting for processes to exit. Feb 13 19:41:41.076904 systemd[1]: Started sshd@8-10.0.0.106:22-10.0.0.1:44630.service - OpenSSH per-connection server daemon (10.0.0.1:44630). Feb 13 19:41:41.077748 systemd-logind[1449]: Removed session 8. 
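The sudo commands above put SELinux into enforcing mode, delete the shipped audit rule files and restart audit-rules, after which augenrules reports "No rules". On such a system audit rules live as fragments under /etc/audit/rules.d and are compiled and loaded by augenrules; a small sketch of checking and reloading them (any remaining rule file names are site-specific):

    # list remaining rule fragments and verify the compiled rules match them
    ls /etc/audit/rules.d/
    augenrules --check

    # recompile the fragments and load them into the kernel audit subsystem
    augenrules --load
    auditctl -l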
Feb 13 19:41:41.113545 sshd[1658]: Accepted publickey for core from 10.0.0.1 port 44630 ssh2: RSA SHA256:M1JiL2vDeW2xAdjMAPzmPoLJm0DJz/C0inZYUswlgoA Feb 13 19:41:41.115308 sshd-session[1658]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:41:41.119445 systemd-logind[1449]: New session 9 of user core. Feb 13 19:41:41.126663 systemd[1]: Started session-9.scope - Session 9 of User core. Feb 13 19:41:41.179632 sudo[1661]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Feb 13 19:41:41.179972 sudo[1661]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 19:41:41.643724 systemd[1]: Starting docker.service - Docker Application Container Engine... Feb 13 19:41:41.644020 (dockerd)[1681]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Feb 13 19:41:42.191917 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Feb 13 19:41:42.198958 dockerd[1681]: time="2025-02-13T19:41:42.198875715Z" level=info msg="Starting up" Feb 13 19:41:42.200782 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:41:42.586552 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:41:42.591269 (kubelet)[1713]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 19:41:42.611131 systemd[1]: var-lib-docker-metacopy\x2dcheck692525054-merged.mount: Deactivated successfully. Feb 13 19:41:42.644110 dockerd[1681]: time="2025-02-13T19:41:42.643809986Z" level=info msg="Loading containers: start." Feb 13 19:41:42.658126 kubelet[1713]: E0213 19:41:42.658077 1713 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 19:41:42.665624 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 19:41:42.665863 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 19:41:42.819532 kernel: Initializing XFRM netlink socket Feb 13 19:41:42.902231 systemd-networkd[1396]: docker0: Link UP Feb 13 19:41:42.943952 dockerd[1681]: time="2025-02-13T19:41:42.943899040Z" level=info msg="Loading containers: done." Feb 13 19:41:42.966715 dockerd[1681]: time="2025-02-13T19:41:42.966658221Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Feb 13 19:41:42.966937 dockerd[1681]: time="2025-02-13T19:41:42.966771875Z" level=info msg="Docker daemon" commit=41ca978a0a5400cc24b274137efa9f25517fcc0b containerd-snapshotter=false storage-driver=overlay2 version=27.3.1 Feb 13 19:41:42.966937 dockerd[1681]: time="2025-02-13T19:41:42.966895236Z" level=info msg="Daemon has completed initialization" Feb 13 19:41:43.002951 dockerd[1681]: time="2025-02-13T19:41:43.002866304Z" level=info msg="API listen on /run/docker.sock" Feb 13 19:41:43.003138 systemd[1]: Started docker.service - Docker Application Container Engine. Feb 13 19:41:43.599428 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2568671985-merged.mount: Deactivated successfully. 
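Once dockerd above reports "API listen on /run/docker.sock", the details it logged (version 27.3.1, overlay2 storage driver) can be read back over that socket; a small check, run as root or a docker-group member:

    # server version as reported by the daemon itself
    docker version --format '{{.Server.Version}}'

    # storage driver; the log above warns overlay2 is used without native diff
    docker info --format '{{.Driver}}'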
Feb 13 19:41:43.932269 containerd[1467]: time="2025-02-13T19:41:43.932136307Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.10\"" Feb 13 19:41:44.551086 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount484183891.mount: Deactivated successfully. Feb 13 19:41:45.636212 containerd[1467]: time="2025-02-13T19:41:45.636116669Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:41:45.639814 containerd[1467]: time="2025-02-13T19:41:45.639695030Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.10: active requests=0, bytes read=32678214" Feb 13 19:41:45.644537 containerd[1467]: time="2025-02-13T19:41:45.644408790Z" level=info msg="ImageCreate event name:\"sha256:172a4e0b731db1008c5339e0b8ef232f5c55632099e37cccfb9ba786c19580c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:41:45.649932 containerd[1467]: time="2025-02-13T19:41:45.649831359Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:63b2b4b4e9b5dcb5b1b6cec9d5f5f538791a40cd8cb273ef530e6d6535aa0b43\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:41:45.651169 containerd[1467]: time="2025-02-13T19:41:45.651118533Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.10\" with image id \"sha256:172a4e0b731db1008c5339e0b8ef232f5c55632099e37cccfb9ba786c19580c4\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.10\", repo digest \"registry.k8s.io/kube-apiserver@sha256:63b2b4b4e9b5dcb5b1b6cec9d5f5f538791a40cd8cb273ef530e6d6535aa0b43\", size \"32675014\" in 1.718924917s" Feb 13 19:41:45.651169 containerd[1467]: time="2025-02-13T19:41:45.651166763Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.10\" returns image reference \"sha256:172a4e0b731db1008c5339e0b8ef232f5c55632099e37cccfb9ba786c19580c4\"" Feb 13 19:41:45.704072 containerd[1467]: time="2025-02-13T19:41:45.704006026Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.10\"" Feb 13 19:41:47.572488 containerd[1467]: time="2025-02-13T19:41:47.572413827Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:41:47.573330 containerd[1467]: time="2025-02-13T19:41:47.573257299Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.10: active requests=0, bytes read=29611545" Feb 13 19:41:47.574553 containerd[1467]: time="2025-02-13T19:41:47.574479311Z" level=info msg="ImageCreate event name:\"sha256:f81ad4d47d77570472cf20a1f6b008ece135be405b2f52f50ed6820f2b6f9a5f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:41:47.577733 containerd[1467]: time="2025-02-13T19:41:47.577669684Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:99b3336343ea48be24f1e64774825e9f8d5170bd2ed482ff336548eb824f5f58\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:41:47.579123 containerd[1467]: time="2025-02-13T19:41:47.579071843Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.10\" with image id \"sha256:f81ad4d47d77570472cf20a1f6b008ece135be405b2f52f50ed6820f2b6f9a5f\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.10\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:99b3336343ea48be24f1e64774825e9f8d5170bd2ed482ff336548eb824f5f58\", size \"31058091\" in 1.875017537s" 
Feb 13 19:41:47.579123 containerd[1467]: time="2025-02-13T19:41:47.579120224Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.10\" returns image reference \"sha256:f81ad4d47d77570472cf20a1f6b008ece135be405b2f52f50ed6820f2b6f9a5f\"" Feb 13 19:41:47.604114 containerd[1467]: time="2025-02-13T19:41:47.604047732Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.10\"" Feb 13 19:41:49.288240 containerd[1467]: time="2025-02-13T19:41:49.288165474Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:41:49.299315 containerd[1467]: time="2025-02-13T19:41:49.299203123Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.10: active requests=0, bytes read=17782130" Feb 13 19:41:49.309268 containerd[1467]: time="2025-02-13T19:41:49.309226691Z" level=info msg="ImageCreate event name:\"sha256:64edffde4bf75617ad8fc73556d5e80d34b9425c79106b7f74b2059243b2ffe8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:41:49.324710 containerd[1467]: time="2025-02-13T19:41:49.324645737Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:cf7eb256192f1f51093fe278c209a9368f0675eb61ed01b148af47d2f21c002d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:41:49.329272 containerd[1467]: time="2025-02-13T19:41:49.329232409Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.10\" with image id \"sha256:64edffde4bf75617ad8fc73556d5e80d34b9425c79106b7f74b2059243b2ffe8\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.10\", repo digest \"registry.k8s.io/kube-scheduler@sha256:cf7eb256192f1f51093fe278c209a9368f0675eb61ed01b148af47d2f21c002d\", size \"19228694\" in 1.725127269s" Feb 13 19:41:49.329340 containerd[1467]: time="2025-02-13T19:41:49.329290889Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.10\" returns image reference \"sha256:64edffde4bf75617ad8fc73556d5e80d34b9425c79106b7f74b2059243b2ffe8\"" Feb 13 19:41:49.354203 containerd[1467]: time="2025-02-13T19:41:49.354153675Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.10\"" Feb 13 19:41:51.374094 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount12711189.mount: Deactivated successfully. 
Feb 13 19:41:52.763770 containerd[1467]: time="2025-02-13T19:41:52.763694001Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:41:52.770171 containerd[1467]: time="2025-02-13T19:41:52.770129429Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.10: active requests=0, bytes read=29057858" Feb 13 19:41:52.778246 containerd[1467]: time="2025-02-13T19:41:52.778177693Z" level=info msg="ImageCreate event name:\"sha256:a21d1b47e857207628486a387f670f224051a16b74b06a1b76d07a96e738ab54\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:41:52.790643 containerd[1467]: time="2025-02-13T19:41:52.790592555Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:d112e804e548fce28d9f1e3282c9ce54e374451e6a2c41b1ca9d7fca5d1fcc48\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:41:52.791162 containerd[1467]: time="2025-02-13T19:41:52.791111578Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.10\" with image id \"sha256:a21d1b47e857207628486a387f670f224051a16b74b06a1b76d07a96e738ab54\", repo tag \"registry.k8s.io/kube-proxy:v1.30.10\", repo digest \"registry.k8s.io/kube-proxy@sha256:d112e804e548fce28d9f1e3282c9ce54e374451e6a2c41b1ca9d7fca5d1fcc48\", size \"29056877\" in 3.436911877s" Feb 13 19:41:52.791234 containerd[1467]: time="2025-02-13T19:41:52.791205695Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.10\" returns image reference \"sha256:a21d1b47e857207628486a387f670f224051a16b74b06a1b76d07a96e738ab54\"" Feb 13 19:41:52.794833 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Feb 13 19:41:52.802822 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:41:52.816703 containerd[1467]: time="2025-02-13T19:41:52.816649000Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Feb 13 19:41:52.940139 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:41:52.944581 (kubelet)[2007]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 19:41:53.046940 kubelet[2007]: E0213 19:41:53.046882 2007 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 19:41:53.052013 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 19:41:53.052219 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 19:41:55.232562 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3002087272.mount: Deactivated successfully. 
Feb 13 19:41:56.091534 containerd[1467]: time="2025-02-13T19:41:56.091447436Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:41:56.092321 containerd[1467]: time="2025-02-13T19:41:56.092250552Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185761" Feb 13 19:41:56.093650 containerd[1467]: time="2025-02-13T19:41:56.093612086Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:41:56.096298 containerd[1467]: time="2025-02-13T19:41:56.096241978Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:41:56.097529 containerd[1467]: time="2025-02-13T19:41:56.097493314Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 3.280796044s" Feb 13 19:41:56.097569 containerd[1467]: time="2025-02-13T19:41:56.097532017Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Feb 13 19:41:56.120489 containerd[1467]: time="2025-02-13T19:41:56.120437894Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Feb 13 19:41:56.845972 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3563836734.mount: Deactivated successfully. 
Feb 13 19:41:56.851836 containerd[1467]: time="2025-02-13T19:41:56.851797654Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:41:56.852569 containerd[1467]: time="2025-02-13T19:41:56.852514509Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322290" Feb 13 19:41:56.853955 containerd[1467]: time="2025-02-13T19:41:56.853901730Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:41:56.856036 containerd[1467]: time="2025-02-13T19:41:56.855985267Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:41:56.856733 containerd[1467]: time="2025-02-13T19:41:56.856669761Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 736.189889ms" Feb 13 19:41:56.856733 containerd[1467]: time="2025-02-13T19:41:56.856721097Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Feb 13 19:41:56.882378 containerd[1467]: time="2025-02-13T19:41:56.882325154Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\"" Feb 13 19:41:57.408874 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount13057560.mount: Deactivated successfully. Feb 13 19:41:59.502189 containerd[1467]: time="2025-02-13T19:41:59.502113623Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:41:59.502997 containerd[1467]: time="2025-02-13T19:41:59.502911619Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=57238571" Feb 13 19:41:59.504365 containerd[1467]: time="2025-02-13T19:41:59.504334668Z" level=info msg="ImageCreate event name:\"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:41:59.507825 containerd[1467]: time="2025-02-13T19:41:59.507766885Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:41:59.509075 containerd[1467]: time="2025-02-13T19:41:59.509036205Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"57236178\" in 2.62666821s" Feb 13 19:41:59.509075 containerd[1467]: time="2025-02-13T19:41:59.509068826Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\"" Feb 13 19:42:02.473932 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
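By this point containerd has pulled the full set of control-plane images referenced above: kube-apiserver, kube-controller-manager, kube-scheduler and kube-proxy at v1.30.10, coredns v1.11.1, pause 3.9 and etcd 3.5.12-0. Listing them back out of containerd's k8s.io namespace is a quick sanity check; either client works, assuming it is present on the node:

    # via the CRI client
    crictl images

    # or directly against containerd's Kubernetes namespace
    ctr -n k8s.io images ls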
Feb 13 19:42:02.486712 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:42:02.505199 systemd[1]: Reloading requested from client PID 2205 ('systemctl') (unit session-9.scope)... Feb 13 19:42:02.505216 systemd[1]: Reloading... Feb 13 19:42:02.599753 zram_generator::config[2247]: No configuration found. Feb 13 19:42:03.303412 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 19:42:03.381146 systemd[1]: Reloading finished in 875 ms. Feb 13 19:42:03.433096 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:42:03.437402 systemd[1]: kubelet.service: Deactivated successfully. Feb 13 19:42:03.437670 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:42:03.439164 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:42:03.584235 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:42:03.589642 (kubelet)[2294]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Feb 13 19:42:03.625683 kubelet[2294]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 19:42:03.626063 kubelet[2294]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 13 19:42:03.626063 kubelet[2294]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 19:42:03.626223 kubelet[2294]: I0213 19:42:03.626144 2294 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 13 19:42:03.878770 kubelet[2294]: I0213 19:42:03.878656 2294 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Feb 13 19:42:03.878770 kubelet[2294]: I0213 19:42:03.878686 2294 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 13 19:42:03.878932 kubelet[2294]: I0213 19:42:03.878913 2294 server.go:927] "Client rotation is on, will bootstrap in background" Feb 13 19:42:03.893244 kubelet[2294]: I0213 19:42:03.892883 2294 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 13 19:42:03.893387 kubelet[2294]: E0213 19:42:03.893346 2294 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.106:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.106:6443: connect: connection refused Feb 13 19:42:03.905176 kubelet[2294]: I0213 19:42:03.905140 2294 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Feb 13 19:42:03.906853 kubelet[2294]: I0213 19:42:03.906804 2294 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 13 19:42:03.907061 kubelet[2294]: I0213 19:42:03.906842 2294 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Feb 13 19:42:03.907155 kubelet[2294]: I0213 19:42:03.907069 2294 topology_manager.go:138] "Creating topology manager with none policy" Feb 13 19:42:03.907155 kubelet[2294]: I0213 19:42:03.907079 2294 container_manager_linux.go:301] "Creating device plugin manager" Feb 13 19:42:03.907255 kubelet[2294]: I0213 19:42:03.907226 2294 state_mem.go:36] "Initialized new in-memory state store" Feb 13 19:42:03.907946 kubelet[2294]: I0213 19:42:03.907917 2294 kubelet.go:400] "Attempting to sync node with API server" Feb 13 19:42:03.907946 kubelet[2294]: I0213 19:42:03.907936 2294 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 13 19:42:03.908018 kubelet[2294]: I0213 19:42:03.907959 2294 kubelet.go:312] "Adding apiserver pod source" Feb 13 19:42:03.908018 kubelet[2294]: I0213 19:42:03.907986 2294 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 13 19:42:03.910520 kubelet[2294]: W0213 19:42:03.910382 2294 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.106:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.106:6443: connect: connection refused Feb 13 19:42:03.910520 kubelet[2294]: E0213 19:42:03.910453 2294 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.106:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.106:6443: connect: connection refused Feb 13 19:42:03.911795 kubelet[2294]: W0213 19:42:03.911748 2294 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list 
*v1.Service: Get "https://10.0.0.106:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.106:6443: connect: connection refused Feb 13 19:42:03.911795 kubelet[2294]: E0213 19:42:03.911790 2294 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.106:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.106:6443: connect: connection refused Feb 13 19:42:03.913019 kubelet[2294]: I0213 19:42:03.912997 2294 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Feb 13 19:42:03.914123 kubelet[2294]: I0213 19:42:03.914092 2294 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 13 19:42:03.914167 kubelet[2294]: W0213 19:42:03.914160 2294 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Feb 13 19:42:03.914922 kubelet[2294]: I0213 19:42:03.914902 2294 server.go:1264] "Started kubelet" Feb 13 19:42:03.917517 kubelet[2294]: I0213 19:42:03.915010 2294 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 13 19:42:03.917517 kubelet[2294]: I0213 19:42:03.915662 2294 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 13 19:42:03.917517 kubelet[2294]: I0213 19:42:03.915699 2294 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Feb 13 19:42:03.917517 kubelet[2294]: I0213 19:42:03.916457 2294 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 13 19:42:03.917517 kubelet[2294]: I0213 19:42:03.916709 2294 server.go:455] "Adding debug handlers to kubelet server" Feb 13 19:42:03.917843 kubelet[2294]: I0213 19:42:03.917830 2294 volume_manager.go:291] "Starting Kubelet Volume Manager" Feb 13 19:42:03.918002 kubelet[2294]: I0213 19:42:03.917990 2294 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Feb 13 19:42:03.918524 kubelet[2294]: I0213 19:42:03.918512 2294 reconciler.go:26] "Reconciler: start to sync state" Feb 13 19:42:03.918949 kubelet[2294]: W0213 19:42:03.918882 2294 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.106:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.106:6443: connect: connection refused Feb 13 19:42:03.919037 kubelet[2294]: E0213 19:42:03.918955 2294 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.106:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.106:6443: connect: connection refused Feb 13 19:42:03.919953 kubelet[2294]: E0213 19:42:03.919899 2294 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.106:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.106:6443: connect: connection refused" interval="200ms" Feb 13 19:42:03.920675 kubelet[2294]: I0213 19:42:03.920605 2294 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Feb 13 19:42:03.921969 kubelet[2294]: E0213 19:42:03.921931 2294 kubelet.go:1467] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 13 19:42:03.922558 kubelet[2294]: E0213 19:42:03.922407 2294 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.106:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.106:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1823dbf46b5c972c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-02-13 19:42:03.914876716 +0000 UTC m=+0.321079720,LastTimestamp:2025-02-13 19:42:03.914876716 +0000 UTC m=+0.321079720,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Feb 13 19:42:03.922747 kubelet[2294]: I0213 19:42:03.922723 2294 factory.go:221] Registration of the containerd container factory successfully Feb 13 19:42:03.922747 kubelet[2294]: I0213 19:42:03.922738 2294 factory.go:221] Registration of the systemd container factory successfully Feb 13 19:42:03.935727 kubelet[2294]: I0213 19:42:03.935689 2294 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 13 19:42:03.937609 kubelet[2294]: I0213 19:42:03.937593 2294 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 13 19:42:03.937689 kubelet[2294]: I0213 19:42:03.937678 2294 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 13 19:42:03.937780 kubelet[2294]: I0213 19:42:03.937770 2294 state_mem.go:36] "Initialized new in-memory state store" Feb 13 19:42:03.937940 kubelet[2294]: I0213 19:42:03.937897 2294 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Feb 13 19:42:03.937983 kubelet[2294]: I0213 19:42:03.937974 2294 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 13 19:42:03.938009 kubelet[2294]: I0213 19:42:03.938003 2294 kubelet.go:2337] "Starting kubelet main sync loop" Feb 13 19:42:03.938118 kubelet[2294]: E0213 19:42:03.938060 2294 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 13 19:42:03.938844 kubelet[2294]: W0213 19:42:03.938805 2294 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.106:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.106:6443: connect: connection refused Feb 13 19:42:03.938881 kubelet[2294]: E0213 19:42:03.938846 2294 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.106:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.106:6443: connect: connection refused Feb 13 19:42:04.018744 kubelet[2294]: I0213 19:42:04.018720 2294 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Feb 13 19:42:04.019009 kubelet[2294]: E0213 19:42:04.018981 2294 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.106:6443/api/v1/nodes\": dial tcp 10.0.0.106:6443: connect: connection refused" node="localhost" Feb 13 19:42:04.038272 kubelet[2294]: E0213 19:42:04.038224 2294 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Feb 13 19:42:04.121124 kubelet[2294]: E0213 19:42:04.121076 2294 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.106:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.106:6443: connect: connection refused" interval="400ms" Feb 13 19:42:04.220210 kubelet[2294]: I0213 19:42:04.220117 2294 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Feb 13 19:42:04.220479 kubelet[2294]: E0213 19:42:04.220428 2294 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.106:6443/api/v1/nodes\": dial tcp 10.0.0.106:6443: connect: connection refused" node="localhost" Feb 13 19:42:04.238683 kubelet[2294]: E0213 19:42:04.238662 2294 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Feb 13 19:42:04.521771 kubelet[2294]: E0213 19:42:04.521642 2294 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.106:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.106:6443: connect: connection refused" interval="800ms" Feb 13 19:42:04.622121 kubelet[2294]: I0213 19:42:04.622093 2294 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Feb 13 19:42:04.622517 kubelet[2294]: E0213 19:42:04.622454 2294 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.106:6443/api/v1/nodes\": dial tcp 10.0.0.106:6443: connect: connection refused" node="localhost" Feb 13 19:42:04.639575 kubelet[2294]: E0213 19:42:04.639531 2294 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Feb 13 19:42:04.854746 kubelet[2294]: I0213 
19:42:04.854706 2294 policy_none.go:49] "None policy: Start" Feb 13 19:42:04.855557 kubelet[2294]: I0213 19:42:04.855518 2294 memory_manager.go:170] "Starting memorymanager" policy="None" Feb 13 19:42:04.855557 kubelet[2294]: I0213 19:42:04.855543 2294 state_mem.go:35] "Initializing new in-memory state store" Feb 13 19:42:04.924446 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Feb 13 19:42:04.937344 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Feb 13 19:42:04.949336 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Feb 13 19:42:04.950431 kubelet[2294]: I0213 19:42:04.950393 2294 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 13 19:42:04.950708 kubelet[2294]: I0213 19:42:04.950657 2294 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 13 19:42:04.950968 kubelet[2294]: I0213 19:42:04.950804 2294 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 13 19:42:04.951791 kubelet[2294]: E0213 19:42:04.951768 2294 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Feb 13 19:42:05.044834 kubelet[2294]: W0213 19:42:05.044763 2294 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.106:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.106:6443: connect: connection refused Feb 13 19:42:05.044834 kubelet[2294]: E0213 19:42:05.044827 2294 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.106:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.106:6443: connect: connection refused Feb 13 19:42:05.099223 kubelet[2294]: W0213 19:42:05.099161 2294 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.106:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.106:6443: connect: connection refused Feb 13 19:42:05.099223 kubelet[2294]: E0213 19:42:05.099217 2294 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.106:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.106:6443: connect: connection refused Feb 13 19:42:05.172322 kubelet[2294]: W0213 19:42:05.172196 2294 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.106:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.106:6443: connect: connection refused Feb 13 19:42:05.172322 kubelet[2294]: E0213 19:42:05.172242 2294 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.106:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.106:6443: connect: connection refused Feb 13 19:42:05.304977 kubelet[2294]: W0213 19:42:05.304933 2294 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.106:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.106:6443: connect: connection refused Feb 13 
19:42:05.304977 kubelet[2294]: E0213 19:42:05.304959 2294 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.106:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.106:6443: connect: connection refused Feb 13 19:42:05.322618 kubelet[2294]: E0213 19:42:05.322581 2294 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.106:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.106:6443: connect: connection refused" interval="1.6s" Feb 13 19:42:05.424640 kubelet[2294]: I0213 19:42:05.424451 2294 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Feb 13 19:42:05.424828 kubelet[2294]: E0213 19:42:05.424782 2294 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.106:6443/api/v1/nodes\": dial tcp 10.0.0.106:6443: connect: connection refused" node="localhost" Feb 13 19:42:05.440073 kubelet[2294]: I0213 19:42:05.440033 2294 topology_manager.go:215] "Topology Admit Handler" podUID="f88bc0da31c76f8416f47e2eb97daed0" podNamespace="kube-system" podName="kube-apiserver-localhost" Feb 13 19:42:05.440922 kubelet[2294]: I0213 19:42:05.440887 2294 topology_manager.go:215] "Topology Admit Handler" podUID="dd3721fb1a67092819e35b40473f4063" podNamespace="kube-system" podName="kube-controller-manager-localhost" Feb 13 19:42:05.441780 kubelet[2294]: I0213 19:42:05.441746 2294 topology_manager.go:215] "Topology Admit Handler" podUID="8d610d6c43052dbc8df47eb68906a982" podNamespace="kube-system" podName="kube-scheduler-localhost" Feb 13 19:42:05.447345 systemd[1]: Created slice kubepods-burstable-podf88bc0da31c76f8416f47e2eb97daed0.slice - libcontainer container kubepods-burstable-podf88bc0da31c76f8416f47e2eb97daed0.slice. Feb 13 19:42:05.456349 systemd[1]: Created slice kubepods-burstable-poddd3721fb1a67092819e35b40473f4063.slice - libcontainer container kubepods-burstable-poddd3721fb1a67092819e35b40473f4063.slice. Feb 13 19:42:05.466167 systemd[1]: Created slice kubepods-burstable-pod8d610d6c43052dbc8df47eb68906a982.slice - libcontainer container kubepods-burstable-pod8d610d6c43052dbc8df47eb68906a982.slice. 
Feb 13 19:42:05.524999 kubelet[2294]: I0213 19:42:05.524968 2294 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8d610d6c43052dbc8df47eb68906a982-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"8d610d6c43052dbc8df47eb68906a982\") " pod="kube-system/kube-scheduler-localhost" Feb 13 19:42:05.524999 kubelet[2294]: I0213 19:42:05.524998 2294 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f88bc0da31c76f8416f47e2eb97daed0-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"f88bc0da31c76f8416f47e2eb97daed0\") " pod="kube-system/kube-apiserver-localhost" Feb 13 19:42:05.525109 kubelet[2294]: I0213 19:42:05.525014 2294 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/dd3721fb1a67092819e35b40473f4063-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"dd3721fb1a67092819e35b40473f4063\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 19:42:05.525109 kubelet[2294]: I0213 19:42:05.525027 2294 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/dd3721fb1a67092819e35b40473f4063-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"dd3721fb1a67092819e35b40473f4063\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 19:42:05.525109 kubelet[2294]: I0213 19:42:05.525044 2294 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/dd3721fb1a67092819e35b40473f4063-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"dd3721fb1a67092819e35b40473f4063\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 19:42:05.525109 kubelet[2294]: I0213 19:42:05.525058 2294 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f88bc0da31c76f8416f47e2eb97daed0-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"f88bc0da31c76f8416f47e2eb97daed0\") " pod="kube-system/kube-apiserver-localhost" Feb 13 19:42:05.525109 kubelet[2294]: I0213 19:42:05.525071 2294 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f88bc0da31c76f8416f47e2eb97daed0-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"f88bc0da31c76f8416f47e2eb97daed0\") " pod="kube-system/kube-apiserver-localhost" Feb 13 19:42:05.525218 kubelet[2294]: I0213 19:42:05.525084 2294 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/dd3721fb1a67092819e35b40473f4063-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"dd3721fb1a67092819e35b40473f4063\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 19:42:05.525218 kubelet[2294]: I0213 19:42:05.525097 2294 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/dd3721fb1a67092819e35b40473f4063-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"dd3721fb1a67092819e35b40473f4063\") " 
pod="kube-system/kube-controller-manager-localhost" Feb 13 19:42:05.755294 kubelet[2294]: E0213 19:42:05.755171 2294 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:42:05.756056 containerd[1467]: time="2025-02-13T19:42:05.756011156Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:f88bc0da31c76f8416f47e2eb97daed0,Namespace:kube-system,Attempt:0,}" Feb 13 19:42:05.764640 kubelet[2294]: E0213 19:42:05.764616 2294 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:42:05.765001 containerd[1467]: time="2025-02-13T19:42:05.764964318Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:dd3721fb1a67092819e35b40473f4063,Namespace:kube-system,Attempt:0,}" Feb 13 19:42:05.768411 kubelet[2294]: E0213 19:42:05.768385 2294 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:42:05.768902 containerd[1467]: time="2025-02-13T19:42:05.768829758Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:8d610d6c43052dbc8df47eb68906a982,Namespace:kube-system,Attempt:0,}" Feb 13 19:42:05.902550 kubelet[2294]: E0213 19:42:05.902514 2294 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.106:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.106:6443: connect: connection refused Feb 13 19:42:06.705700 kubelet[2294]: W0213 19:42:06.705640 2294 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.106:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.106:6443: connect: connection refused Feb 13 19:42:06.705700 kubelet[2294]: E0213 19:42:06.705687 2294 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.106:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.106:6443: connect: connection refused Feb 13 19:42:06.923931 kubelet[2294]: E0213 19:42:06.923869 2294 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.106:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.106:6443: connect: connection refused" interval="3.2s" Feb 13 19:42:07.026385 kubelet[2294]: I0213 19:42:07.026285 2294 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Feb 13 19:42:07.026635 kubelet[2294]: E0213 19:42:07.026611 2294 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.106:6443/api/v1/nodes\": dial tcp 10.0.0.106:6443: connect: connection refused" node="localhost" Feb 13 19:42:07.330979 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3855092534.mount: Deactivated successfully. 
Feb 13 19:42:07.411365 containerd[1467]: time="2025-02-13T19:42:07.411283576Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 19:42:07.423624 kubelet[2294]: W0213 19:42:07.423552 2294 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.106:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.106:6443: connect: connection refused Feb 13 19:42:07.423698 kubelet[2294]: E0213 19:42:07.423634 2294 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.106:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.106:6443: connect: connection refused Feb 13 19:42:07.424377 containerd[1467]: time="2025-02-13T19:42:07.424319573Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Feb 13 19:42:07.464333 containerd[1467]: time="2025-02-13T19:42:07.464264226Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 19:42:07.478288 kubelet[2294]: W0213 19:42:07.478219 2294 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.106:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.106:6443: connect: connection refused Feb 13 19:42:07.478288 kubelet[2294]: E0213 19:42:07.478263 2294 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.106:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.106:6443: connect: connection refused Feb 13 19:42:07.514177 containerd[1467]: time="2025-02-13T19:42:07.514140668Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 19:42:07.535307 containerd[1467]: time="2025-02-13T19:42:07.535268423Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Feb 13 19:42:07.580851 containerd[1467]: time="2025-02-13T19:42:07.580781953Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 19:42:07.581760 containerd[1467]: time="2025-02-13T19:42:07.581653252Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.825486329s" Feb 13 19:42:07.600525 containerd[1467]: time="2025-02-13T19:42:07.600459660Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 19:42:07.601523 containerd[1467]: time="2025-02-13T19:42:07.601458179Z" level=info 
msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Feb 13 19:42:07.652082 containerd[1467]: time="2025-02-13T19:42:07.652028923Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.8831284s" Feb 13 19:42:07.652917 containerd[1467]: time="2025-02-13T19:42:07.652876075Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.887839739s" Feb 13 19:42:07.918418 containerd[1467]: time="2025-02-13T19:42:07.917976859Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:42:07.918418 containerd[1467]: time="2025-02-13T19:42:07.918039769Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:42:07.918418 containerd[1467]: time="2025-02-13T19:42:07.918055078Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:42:07.918418 containerd[1467]: time="2025-02-13T19:42:07.918174676Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:42:07.920079 containerd[1467]: time="2025-02-13T19:42:07.917440108Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:42:07.920179 containerd[1467]: time="2025-02-13T19:42:07.920097806Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:42:07.920179 containerd[1467]: time="2025-02-13T19:42:07.920137261Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:42:07.920365 containerd[1467]: time="2025-02-13T19:42:07.920316071Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:42:07.920365 containerd[1467]: time="2025-02-13T19:42:07.920272068Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:42:07.920487 containerd[1467]: time="2025-02-13T19:42:07.920381406Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:42:07.920692 containerd[1467]: time="2025-02-13T19:42:07.920538204Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:42:07.921087 containerd[1467]: time="2025-02-13T19:42:07.920731701Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:42:07.942675 systemd[1]: Started cri-containerd-9129f65bf6e6bc05dce1d68210ef635ea165451793088c419ea8db29c3e898c4.scope - libcontainer container 9129f65bf6e6bc05dce1d68210ef635ea165451793088c419ea8db29c3e898c4. Feb 13 19:42:07.947045 systemd[1]: Started cri-containerd-12c506f0b1c2960ab1e8ab5e7bced523f862ff2a05327231e03627668dc79337.scope - libcontainer container 12c506f0b1c2960ab1e8ab5e7bced523f862ff2a05327231e03627668dc79337. Feb 13 19:42:07.948953 systemd[1]: Started cri-containerd-e9be7f06ef95e0ad3e75240eca790acc61db4cd0bd0237eaf72ea5253bf5852d.scope - libcontainer container e9be7f06ef95e0ad3e75240eca790acc61db4cd0bd0237eaf72ea5253bf5852d. Feb 13 19:42:07.984819 containerd[1467]: time="2025-02-13T19:42:07.984773681Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:f88bc0da31c76f8416f47e2eb97daed0,Namespace:kube-system,Attempt:0,} returns sandbox id \"9129f65bf6e6bc05dce1d68210ef635ea165451793088c419ea8db29c3e898c4\"" Feb 13 19:42:07.986137 kubelet[2294]: E0213 19:42:07.986013 2294 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:42:07.989523 containerd[1467]: time="2025-02-13T19:42:07.989356292Z" level=info msg="CreateContainer within sandbox \"9129f65bf6e6bc05dce1d68210ef635ea165451793088c419ea8db29c3e898c4\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Feb 13 19:42:07.991326 containerd[1467]: time="2025-02-13T19:42:07.991306693Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:dd3721fb1a67092819e35b40473f4063,Namespace:kube-system,Attempt:0,} returns sandbox id \"12c506f0b1c2960ab1e8ab5e7bced523f862ff2a05327231e03627668dc79337\"" Feb 13 19:42:07.992103 kubelet[2294]: E0213 19:42:07.992051 2294 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:42:07.994584 containerd[1467]: time="2025-02-13T19:42:07.994546900Z" level=info msg="CreateContainer within sandbox \"12c506f0b1c2960ab1e8ab5e7bced523f862ff2a05327231e03627668dc79337\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Feb 13 19:42:07.997376 containerd[1467]: time="2025-02-13T19:42:07.996229601Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:8d610d6c43052dbc8df47eb68906a982,Namespace:kube-system,Attempt:0,} returns sandbox id \"e9be7f06ef95e0ad3e75240eca790acc61db4cd0bd0237eaf72ea5253bf5852d\"" Feb 13 19:42:07.997794 kubelet[2294]: E0213 19:42:07.997764 2294 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:42:07.999247 containerd[1467]: time="2025-02-13T19:42:07.999182861Z" level=info msg="CreateContainer within sandbox \"e9be7f06ef95e0ad3e75240eca790acc61db4cd0bd0237eaf72ea5253bf5852d\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Feb 13 19:42:08.033445 kubelet[2294]: W0213 19:42:08.033407 2294 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.106:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.106:6443: connect: connection refused Feb 13 19:42:08.033540 kubelet[2294]: 
E0213 19:42:08.033453 2294 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.106:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.106:6443: connect: connection refused Feb 13 19:42:08.339295 containerd[1467]: time="2025-02-13T19:42:08.339231646Z" level=info msg="CreateContainer within sandbox \"9129f65bf6e6bc05dce1d68210ef635ea165451793088c419ea8db29c3e898c4\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"a6549d5d2b25de5b7da1a7d24add9f2fcba8440207330d1d49dfb0a9a4e3b734\"" Feb 13 19:42:08.340160 containerd[1467]: time="2025-02-13T19:42:08.340108443Z" level=info msg="StartContainer for \"a6549d5d2b25de5b7da1a7d24add9f2fcba8440207330d1d49dfb0a9a4e3b734\"" Feb 13 19:42:08.371675 systemd[1]: Started cri-containerd-a6549d5d2b25de5b7da1a7d24add9f2fcba8440207330d1d49dfb0a9a4e3b734.scope - libcontainer container a6549d5d2b25de5b7da1a7d24add9f2fcba8440207330d1d49dfb0a9a4e3b734. Feb 13 19:42:08.477830 containerd[1467]: time="2025-02-13T19:42:08.477763126Z" level=info msg="CreateContainer within sandbox \"e9be7f06ef95e0ad3e75240eca790acc61db4cd0bd0237eaf72ea5253bf5852d\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"58a813acce4fa52aab0c09f434d0a538c7768224e00a3811386f9afa59aa0135\"" Feb 13 19:42:08.478285 containerd[1467]: time="2025-02-13T19:42:08.477843529Z" level=info msg="CreateContainer within sandbox \"12c506f0b1c2960ab1e8ab5e7bced523f862ff2a05327231e03627668dc79337\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"e6caf042841d8324e1813671471554283b48c94c09c5eaff27221bd1d451780d\"" Feb 13 19:42:08.478285 containerd[1467]: time="2025-02-13T19:42:08.477773997Z" level=info msg="StartContainer for \"a6549d5d2b25de5b7da1a7d24add9f2fcba8440207330d1d49dfb0a9a4e3b734\" returns successfully" Feb 13 19:42:08.479518 containerd[1467]: time="2025-02-13T19:42:08.478551355Z" level=info msg="StartContainer for \"e6caf042841d8324e1813671471554283b48c94c09c5eaff27221bd1d451780d\"" Feb 13 19:42:08.479518 containerd[1467]: time="2025-02-13T19:42:08.478582535Z" level=info msg="StartContainer for \"58a813acce4fa52aab0c09f434d0a538c7768224e00a3811386f9afa59aa0135\"" Feb 13 19:42:08.506654 systemd[1]: Started cri-containerd-e6caf042841d8324e1813671471554283b48c94c09c5eaff27221bd1d451780d.scope - libcontainer container e6caf042841d8324e1813671471554283b48c94c09c5eaff27221bd1d451780d. Feb 13 19:42:08.510532 systemd[1]: Started cri-containerd-58a813acce4fa52aab0c09f434d0a538c7768224e00a3811386f9afa59aa0135.scope - libcontainer container 58a813acce4fa52aab0c09f434d0a538c7768224e00a3811386f9afa59aa0135. 
Feb 13 19:42:08.632778 containerd[1467]: time="2025-02-13T19:42:08.632374157Z" level=info msg="StartContainer for \"e6caf042841d8324e1813671471554283b48c94c09c5eaff27221bd1d451780d\" returns successfully" Feb 13 19:42:08.632778 containerd[1467]: time="2025-02-13T19:42:08.632519313Z" level=info msg="StartContainer for \"58a813acce4fa52aab0c09f434d0a538c7768224e00a3811386f9afa59aa0135\" returns successfully" Feb 13 19:42:08.949388 kubelet[2294]: E0213 19:42:08.949273 2294 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:42:08.954589 kubelet[2294]: E0213 19:42:08.954488 2294 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:42:08.962534 kubelet[2294]: E0213 19:42:08.962339 2294 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:42:09.880202 kubelet[2294]: E0213 19:42:09.880125 2294 csi_plugin.go:308] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "localhost" not found Feb 13 19:42:09.961272 kubelet[2294]: E0213 19:42:09.961224 2294 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:42:10.169541 kubelet[2294]: E0213 19:42:10.169400 2294 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Feb 13 19:42:10.228233 kubelet[2294]: I0213 19:42:10.228183 2294 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Feb 13 19:42:10.277152 kubelet[2294]: I0213 19:42:10.277092 2294 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Feb 13 19:42:10.283054 kubelet[2294]: E0213 19:42:10.283005 2294 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 19:42:10.383944 kubelet[2294]: E0213 19:42:10.383902 2294 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 19:42:10.484996 kubelet[2294]: E0213 19:42:10.484854 2294 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 19:42:10.585522 kubelet[2294]: E0213 19:42:10.585432 2294 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 19:42:10.686108 kubelet[2294]: E0213 19:42:10.686010 2294 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 19:42:10.786886 kubelet[2294]: E0213 19:42:10.786742 2294 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 19:42:10.887426 kubelet[2294]: E0213 19:42:10.887361 2294 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 19:42:10.987583 kubelet[2294]: E0213 19:42:10.987535 2294 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 19:42:11.088111 kubelet[2294]: E0213 19:42:11.088060 2294 kubelet_node_status.go:462] "Error 
getting the current node from lister" err="node \"localhost\" not found" Feb 13 19:42:11.188659 kubelet[2294]: E0213 19:42:11.188610 2294 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 19:42:11.289246 kubelet[2294]: E0213 19:42:11.289195 2294 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 19:42:11.389916 kubelet[2294]: E0213 19:42:11.389766 2294 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 19:42:11.490813 kubelet[2294]: E0213 19:42:11.490750 2294 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 19:42:11.591423 kubelet[2294]: E0213 19:42:11.591374 2294 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 19:42:11.692079 kubelet[2294]: E0213 19:42:11.691946 2294 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 19:42:11.792757 kubelet[2294]: E0213 19:42:11.792673 2294 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 19:42:11.893305 kubelet[2294]: E0213 19:42:11.893252 2294 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 19:42:11.986988 kubelet[2294]: E0213 19:42:11.986870 2294 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:42:11.993683 kubelet[2294]: E0213 19:42:11.993634 2294 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 19:42:12.094336 kubelet[2294]: E0213 19:42:12.094285 2294 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 19:42:12.194937 kubelet[2294]: E0213 19:42:12.194865 2294 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 19:42:12.295115 kubelet[2294]: E0213 19:42:12.295081 2294 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 19:42:12.395521 kubelet[2294]: E0213 19:42:12.395460 2294 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 19:42:12.496254 kubelet[2294]: E0213 19:42:12.496205 2294 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 19:42:12.597382 kubelet[2294]: E0213 19:42:12.597303 2294 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 19:42:12.697972 kubelet[2294]: E0213 19:42:12.697900 2294 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 19:42:12.798751 kubelet[2294]: E0213 19:42:12.798683 2294 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 19:42:12.899641 kubelet[2294]: E0213 19:42:12.899474 2294 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 19:42:13.000029 kubelet[2294]: E0213 19:42:12.999978 2294 kubelet_node_status.go:462] "Error getting the current node from 
lister" err="node \"localhost\" not found" Feb 13 19:42:13.081142 systemd[1]: Reloading requested from client PID 2572 ('systemctl') (unit session-9.scope)... Feb 13 19:42:13.081162 systemd[1]: Reloading... Feb 13 19:42:13.100833 kubelet[2294]: E0213 19:42:13.100719 2294 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 19:42:13.179548 zram_generator::config[2611]: No configuration found. Feb 13 19:42:13.200905 kubelet[2294]: E0213 19:42:13.200849 2294 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 19:42:13.297561 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 19:42:13.301407 kubelet[2294]: E0213 19:42:13.301349 2294 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 19:42:13.389951 systemd[1]: Reloading finished in 308 ms. Feb 13 19:42:13.402314 kubelet[2294]: E0213 19:42:13.402261 2294 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Feb 13 19:42:13.449358 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:42:13.461420 systemd[1]: kubelet.service: Deactivated successfully. Feb 13 19:42:13.461828 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:42:13.471010 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 19:42:13.618880 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 19:42:13.623775 (kubelet)[2656]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Feb 13 19:42:13.664560 kubelet[2656]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 19:42:13.664560 kubelet[2656]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 13 19:42:13.664560 kubelet[2656]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 19:42:13.664940 kubelet[2656]: I0213 19:42:13.664617 2656 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 13 19:42:13.669111 kubelet[2656]: I0213 19:42:13.669087 2656 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Feb 13 19:42:13.669111 kubelet[2656]: I0213 19:42:13.669105 2656 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 13 19:42:13.669267 kubelet[2656]: I0213 19:42:13.669251 2656 server.go:927] "Client rotation is on, will bootstrap in background" Feb 13 19:42:13.670450 kubelet[2656]: I0213 19:42:13.670431 2656 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
Feb 13 19:42:13.671481 kubelet[2656]: I0213 19:42:13.671445 2656 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 13 19:42:13.682209 kubelet[2656]: I0213 19:42:13.679988 2656 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Feb 13 19:42:13.682209 kubelet[2656]: I0213 19:42:13.680253 2656 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 13 19:42:13.682209 kubelet[2656]: I0213 19:42:13.680276 2656 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Feb 13 19:42:13.682209 kubelet[2656]: I0213 19:42:13.680556 2656 topology_manager.go:138] "Creating topology manager with none policy" Feb 13 19:42:13.682399 kubelet[2656]: I0213 19:42:13.680575 2656 container_manager_linux.go:301] "Creating device plugin manager" Feb 13 19:42:13.682399 kubelet[2656]: I0213 19:42:13.680619 2656 state_mem.go:36] "Initialized new in-memory state store" Feb 13 19:42:13.682399 kubelet[2656]: I0213 19:42:13.680719 2656 kubelet.go:400] "Attempting to sync node with API server" Feb 13 19:42:13.682399 kubelet[2656]: I0213 19:42:13.680730 2656 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 13 19:42:13.682399 kubelet[2656]: I0213 19:42:13.680752 2656 kubelet.go:312] "Adding apiserver pod source" Feb 13 19:42:13.682399 kubelet[2656]: I0213 19:42:13.680771 2656 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 13 19:42:13.683081 kubelet[2656]: I0213 19:42:13.683024 2656 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Feb 13 19:42:13.683289 kubelet[2656]: I0213 19:42:13.683257 2656 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 13 19:42:13.686199 kubelet[2656]: I0213 19:42:13.686176 2656 server.go:1264] "Started kubelet" Feb 13 19:42:13.687967 kubelet[2656]: I0213 
19:42:13.686908 2656 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 13 19:42:13.687967 kubelet[2656]: I0213 19:42:13.687476 2656 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 13 19:42:13.687967 kubelet[2656]: I0213 19:42:13.687539 2656 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Feb 13 19:42:13.687967 kubelet[2656]: I0213 19:42:13.687684 2656 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 13 19:42:13.688628 kubelet[2656]: I0213 19:42:13.688603 2656 server.go:455] "Adding debug handlers to kubelet server" Feb 13 19:42:13.692410 kubelet[2656]: E0213 19:42:13.692131 2656 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 13 19:42:13.692511 kubelet[2656]: I0213 19:42:13.692455 2656 volume_manager.go:291] "Starting Kubelet Volume Manager" Feb 13 19:42:13.692671 kubelet[2656]: I0213 19:42:13.692654 2656 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Feb 13 19:42:13.692779 kubelet[2656]: I0213 19:42:13.692762 2656 reconciler.go:26] "Reconciler: start to sync state" Feb 13 19:42:13.693366 kubelet[2656]: I0213 19:42:13.693348 2656 factory.go:221] Registration of the systemd container factory successfully Feb 13 19:42:13.694009 kubelet[2656]: I0213 19:42:13.693971 2656 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Feb 13 19:42:13.695327 kubelet[2656]: I0213 19:42:13.695294 2656 factory.go:221] Registration of the containerd container factory successfully Feb 13 19:42:13.701136 kubelet[2656]: I0213 19:42:13.701021 2656 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 13 19:42:13.702681 kubelet[2656]: I0213 19:42:13.702656 2656 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Feb 13 19:42:13.702733 kubelet[2656]: I0213 19:42:13.702685 2656 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 13 19:42:13.702733 kubelet[2656]: I0213 19:42:13.702703 2656 kubelet.go:2337] "Starting kubelet main sync loop" Feb 13 19:42:13.702819 kubelet[2656]: E0213 19:42:13.702794 2656 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 13 19:42:13.733012 kubelet[2656]: I0213 19:42:13.732893 2656 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 13 19:42:13.733012 kubelet[2656]: I0213 19:42:13.732914 2656 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 13 19:42:13.733012 kubelet[2656]: I0213 19:42:13.732933 2656 state_mem.go:36] "Initialized new in-memory state store" Feb 13 19:42:13.733192 kubelet[2656]: I0213 19:42:13.733083 2656 state_mem.go:88] "Updated default CPUSet" cpuSet="" Feb 13 19:42:13.733192 kubelet[2656]: I0213 19:42:13.733093 2656 state_mem.go:96] "Updated CPUSet assignments" assignments={} Feb 13 19:42:13.733192 kubelet[2656]: I0213 19:42:13.733112 2656 policy_none.go:49] "None policy: Start" Feb 13 19:42:13.733732 kubelet[2656]: I0213 19:42:13.733705 2656 memory_manager.go:170] "Starting memorymanager" policy="None" Feb 13 19:42:13.733814 kubelet[2656]: I0213 19:42:13.733781 2656 state_mem.go:35] "Initializing new in-memory state store" Feb 13 19:42:13.733992 kubelet[2656]: I0213 19:42:13.733975 2656 state_mem.go:75] "Updated machine memory state" Feb 13 19:42:13.738651 kubelet[2656]: I0213 19:42:13.738621 2656 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 13 19:42:13.738878 kubelet[2656]: I0213 19:42:13.738817 2656 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 13 19:42:13.738934 kubelet[2656]: I0213 19:42:13.738920 2656 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 13 19:42:13.797241 kubelet[2656]: I0213 19:42:13.797203 2656 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Feb 13 19:42:13.803301 kubelet[2656]: I0213 19:42:13.803269 2656 topology_manager.go:215] "Topology Admit Handler" podUID="f88bc0da31c76f8416f47e2eb97daed0" podNamespace="kube-system" podName="kube-apiserver-localhost" Feb 13 19:42:13.803372 kubelet[2656]: I0213 19:42:13.803355 2656 topology_manager.go:215] "Topology Admit Handler" podUID="dd3721fb1a67092819e35b40473f4063" podNamespace="kube-system" podName="kube-controller-manager-localhost" Feb 13 19:42:13.803422 kubelet[2656]: I0213 19:42:13.803409 2656 topology_manager.go:215] "Topology Admit Handler" podUID="8d610d6c43052dbc8df47eb68906a982" podNamespace="kube-system" podName="kube-scheduler-localhost" Feb 13 19:42:13.820040 kubelet[2656]: I0213 19:42:13.819973 2656 kubelet_node_status.go:112] "Node was previously registered" node="localhost" Feb 13 19:42:13.820167 kubelet[2656]: I0213 19:42:13.820065 2656 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Feb 13 19:42:13.893130 kubelet[2656]: I0213 19:42:13.893083 2656 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8d610d6c43052dbc8df47eb68906a982-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"8d610d6c43052dbc8df47eb68906a982\") " pod="kube-system/kube-scheduler-localhost" Feb 13 19:42:13.893130 
kubelet[2656]: I0213 19:42:13.893126 2656 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f88bc0da31c76f8416f47e2eb97daed0-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"f88bc0da31c76f8416f47e2eb97daed0\") " pod="kube-system/kube-apiserver-localhost" Feb 13 19:42:13.893310 kubelet[2656]: I0213 19:42:13.893151 2656 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/dd3721fb1a67092819e35b40473f4063-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"dd3721fb1a67092819e35b40473f4063\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 19:42:13.893310 kubelet[2656]: I0213 19:42:13.893175 2656 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/dd3721fb1a67092819e35b40473f4063-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"dd3721fb1a67092819e35b40473f4063\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 19:42:13.893310 kubelet[2656]: I0213 19:42:13.893195 2656 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/dd3721fb1a67092819e35b40473f4063-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"dd3721fb1a67092819e35b40473f4063\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 19:42:13.893310 kubelet[2656]: I0213 19:42:13.893216 2656 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/dd3721fb1a67092819e35b40473f4063-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"dd3721fb1a67092819e35b40473f4063\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 19:42:13.893310 kubelet[2656]: I0213 19:42:13.893234 2656 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f88bc0da31c76f8416f47e2eb97daed0-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"f88bc0da31c76f8416f47e2eb97daed0\") " pod="kube-system/kube-apiserver-localhost" Feb 13 19:42:13.893417 kubelet[2656]: I0213 19:42:13.893257 2656 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f88bc0da31c76f8416f47e2eb97daed0-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"f88bc0da31c76f8416f47e2eb97daed0\") " pod="kube-system/kube-apiserver-localhost" Feb 13 19:42:13.893417 kubelet[2656]: I0213 19:42:13.893286 2656 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/dd3721fb1a67092819e35b40473f4063-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"dd3721fb1a67092819e35b40473f4063\") " pod="kube-system/kube-controller-manager-localhost" Feb 13 19:42:14.128479 kubelet[2656]: E0213 19:42:14.125886 2656 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:42:14.128479 kubelet[2656]: E0213 19:42:14.126367 2656 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits 
were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:42:14.128479 kubelet[2656]: E0213 19:42:14.126918 2656 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:42:14.660525 update_engine[1453]: I20250213 19:42:14.658548 1453 update_attempter.cc:509] Updating boot flags... Feb 13 19:42:14.686525 kubelet[2656]: I0213 19:42:14.682209 2656 apiserver.go:52] "Watching apiserver" Feb 13 19:42:14.694693 kubelet[2656]: I0213 19:42:14.694604 2656 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Feb 13 19:42:14.719713 kubelet[2656]: E0213 19:42:14.719072 2656 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:42:14.733527 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (2705) Feb 13 19:42:14.744883 kubelet[2656]: E0213 19:42:14.744845 2656 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Feb 13 19:42:14.745251 kubelet[2656]: E0213 19:42:14.745233 2656 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:42:14.745837 kubelet[2656]: E0213 19:42:14.745780 2656 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Feb 13 19:42:14.746253 kubelet[2656]: E0213 19:42:14.746229 2656 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:42:14.784555 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (2704) Feb 13 19:42:14.803539 kubelet[2656]: I0213 19:42:14.802867 2656 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.802847462 podStartE2EDuration="1.802847462s" podCreationTimestamp="2025-02-13 19:42:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 19:42:14.801542922 +0000 UTC m=+1.173812475" watchObservedRunningTime="2025-02-13 19:42:14.802847462 +0000 UTC m=+1.175117015" Feb 13 19:42:14.803539 kubelet[2656]: I0213 19:42:14.803002 2656 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.802990913 podStartE2EDuration="1.802990913s" podCreationTimestamp="2025-02-13 19:42:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 19:42:14.771162735 +0000 UTC m=+1.143432288" watchObservedRunningTime="2025-02-13 19:42:14.802990913 +0000 UTC m=+1.175260466" Feb 13 19:42:15.720918 kubelet[2656]: E0213 19:42:15.720887 2656 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:42:15.721396 kubelet[2656]: E0213 
19:42:15.721205 2656 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:42:15.721396 kubelet[2656]: E0213 19:42:15.721389 2656 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:42:16.722376 kubelet[2656]: E0213 19:42:16.722347 2656 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:42:18.084578 sudo[1661]: pam_unix(sudo:session): session closed for user root Feb 13 19:42:18.085898 sshd[1660]: Connection closed by 10.0.0.1 port 44630 Feb 13 19:42:18.086308 sshd-session[1658]: pam_unix(sshd:session): session closed for user core Feb 13 19:42:18.091114 systemd[1]: sshd@8-10.0.0.106:22-10.0.0.1:44630.service: Deactivated successfully. Feb 13 19:42:18.093055 systemd[1]: session-9.scope: Deactivated successfully. Feb 13 19:42:18.093260 systemd[1]: session-9.scope: Consumed 5.494s CPU time, 189.6M memory peak, 0B memory swap peak. Feb 13 19:42:18.093851 systemd-logind[1449]: Session 9 logged out. Waiting for processes to exit. Feb 13 19:42:18.094961 systemd-logind[1449]: Removed session 9. Feb 13 19:42:23.958033 kubelet[2656]: E0213 19:42:23.957984 2656 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:42:24.055016 kubelet[2656]: I0213 19:42:24.054950 2656 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=11.054933441 podStartE2EDuration="11.054933441s" podCreationTimestamp="2025-02-13 19:42:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 19:42:14.81991548 +0000 UTC m=+1.192185033" watchObservedRunningTime="2025-02-13 19:42:24.054933441 +0000 UTC m=+10.427202994" Feb 13 19:42:24.732872 kubelet[2656]: E0213 19:42:24.732839 2656 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:42:25.086005 kubelet[2656]: E0213 19:42:25.085980 2656 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:42:25.288809 kubelet[2656]: E0213 19:42:25.288785 2656 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:42:26.813306 kubelet[2656]: I0213 19:42:26.813131 2656 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Feb 13 19:42:26.813831 kubelet[2656]: I0213 19:42:26.813790 2656 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Feb 13 19:42:26.813870 containerd[1467]: time="2025-02-13T19:42:26.813603865Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
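Note on the recurring "Nameserver limits exceeded" entries above: kubelet copies at most three nameserver lines from the host's resolv.conf into a pod's DNS configuration, and when the host lists more it logs this warning and applies only the first three (here "1.1.1.1 1.0.0.1 8.8.8.8"). A minimal, illustrative Go sketch of that trimming behavior follows; it is not kubelet's actual code, and the resolv.conf path is just the conventional location.

// trimdns.go - illustrative only: shows why a resolv.conf with more than
// three "nameserver" lines produces the "Nameserver limits exceeded" warning.
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

const maxNameservers = 3 // kubelet keeps at most three nameservers per pod

func main() {
	f, err := os.Open("/etc/resolv.conf")
	if err != nil {
		panic(err)
	}
	defer f.Close()

	var servers []string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) >= 2 && fields[0] == "nameserver" {
			servers = append(servers, fields[1])
		}
	}
	if len(servers) > maxNameservers {
		// Mirrors the spirit of the kubelet warning: extra entries are dropped.
		fmt.Printf("Nameserver limits exceeded; applied nameserver line is: %s\n",
			strings.Join(servers[:maxNameservers], " "))
		return
	}
	fmt.Printf("applied nameserver line is: %s\n", strings.Join(servers, " "))
}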
Feb 13 19:42:26.838520 kubelet[2656]: I0213 19:42:26.838256 2656 topology_manager.go:215] "Topology Admit Handler" podUID="f7d63b5d-e41d-492b-a7b3-c17d02065a4e" podNamespace="kube-system" podName="kube-proxy-fxtx5" Feb 13 19:42:26.848232 systemd[1]: Created slice kubepods-besteffort-podf7d63b5d_e41d_492b_a7b3_c17d02065a4e.slice - libcontainer container kubepods-besteffort-podf7d63b5d_e41d_492b_a7b3_c17d02065a4e.slice. Feb 13 19:42:26.881050 kubelet[2656]: I0213 19:42:26.880991 2656 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f7d63b5d-e41d-492b-a7b3-c17d02065a4e-xtables-lock\") pod \"kube-proxy-fxtx5\" (UID: \"f7d63b5d-e41d-492b-a7b3-c17d02065a4e\") " pod="kube-system/kube-proxy-fxtx5" Feb 13 19:42:26.881050 kubelet[2656]: I0213 19:42:26.881037 2656 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8qmrx\" (UniqueName: \"kubernetes.io/projected/f7d63b5d-e41d-492b-a7b3-c17d02065a4e-kube-api-access-8qmrx\") pod \"kube-proxy-fxtx5\" (UID: \"f7d63b5d-e41d-492b-a7b3-c17d02065a4e\") " pod="kube-system/kube-proxy-fxtx5" Feb 13 19:42:26.881204 kubelet[2656]: I0213 19:42:26.881069 2656 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f7d63b5d-e41d-492b-a7b3-c17d02065a4e-lib-modules\") pod \"kube-proxy-fxtx5\" (UID: \"f7d63b5d-e41d-492b-a7b3-c17d02065a4e\") " pod="kube-system/kube-proxy-fxtx5" Feb 13 19:42:26.881204 kubelet[2656]: I0213 19:42:26.881094 2656 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/f7d63b5d-e41d-492b-a7b3-c17d02065a4e-kube-proxy\") pod \"kube-proxy-fxtx5\" (UID: \"f7d63b5d-e41d-492b-a7b3-c17d02065a4e\") " pod="kube-system/kube-proxy-fxtx5" Feb 13 19:42:27.157374 kubelet[2656]: E0213 19:42:27.157236 2656 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:42:27.158061 containerd[1467]: time="2025-02-13T19:42:27.158017703Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-fxtx5,Uid:f7d63b5d-e41d-492b-a7b3-c17d02065a4e,Namespace:kube-system,Attempt:0,}" Feb 13 19:42:27.183965 containerd[1467]: time="2025-02-13T19:42:27.183845294Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:42:27.183965 containerd[1467]: time="2025-02-13T19:42:27.183915286Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:42:27.183965 containerd[1467]: time="2025-02-13T19:42:27.183932138Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:42:27.184225 containerd[1467]: time="2025-02-13T19:42:27.184035332Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:42:27.201617 systemd[1]: run-containerd-runc-k8s.io-f3fd3e0579c83b6b3f557997b1707a5900bf8ce8a694f93999b4a12f5f1bf044-runc.X4ZrE3.mount: Deactivated successfully. 
Feb 13 19:42:27.215660 systemd[1]: Started cri-containerd-f3fd3e0579c83b6b3f557997b1707a5900bf8ce8a694f93999b4a12f5f1bf044.scope - libcontainer container f3fd3e0579c83b6b3f557997b1707a5900bf8ce8a694f93999b4a12f5f1bf044. Feb 13 19:42:27.237238 containerd[1467]: time="2025-02-13T19:42:27.237197023Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-fxtx5,Uid:f7d63b5d-e41d-492b-a7b3-c17d02065a4e,Namespace:kube-system,Attempt:0,} returns sandbox id \"f3fd3e0579c83b6b3f557997b1707a5900bf8ce8a694f93999b4a12f5f1bf044\"" Feb 13 19:42:27.238085 kubelet[2656]: E0213 19:42:27.238048 2656 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:42:27.240213 containerd[1467]: time="2025-02-13T19:42:27.240177696Z" level=info msg="CreateContainer within sandbox \"f3fd3e0579c83b6b3f557997b1707a5900bf8ce8a694f93999b4a12f5f1bf044\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Feb 13 19:42:27.259091 containerd[1467]: time="2025-02-13T19:42:27.259046715Z" level=info msg="CreateContainer within sandbox \"f3fd3e0579c83b6b3f557997b1707a5900bf8ce8a694f93999b4a12f5f1bf044\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"88c2438a29e7206e58d94dba3391492d281ef5305a332dce5f8109804fd2ccea\"" Feb 13 19:42:27.259765 containerd[1467]: time="2025-02-13T19:42:27.259618311Z" level=info msg="StartContainer for \"88c2438a29e7206e58d94dba3391492d281ef5305a332dce5f8109804fd2ccea\"" Feb 13 19:42:27.287644 systemd[1]: Started cri-containerd-88c2438a29e7206e58d94dba3391492d281ef5305a332dce5f8109804fd2ccea.scope - libcontainer container 88c2438a29e7206e58d94dba3391492d281ef5305a332dce5f8109804fd2ccea. Feb 13 19:42:27.321941 containerd[1467]: time="2025-02-13T19:42:27.321524257Z" level=info msg="StartContainer for \"88c2438a29e7206e58d94dba3391492d281ef5305a332dce5f8109804fd2ccea\" returns successfully" Feb 13 19:42:27.420399 kubelet[2656]: I0213 19:42:27.419809 2656 topology_manager.go:215] "Topology Admit Handler" podUID="cbba647e-6057-48cc-9bec-c22dd010bb73" podNamespace="tigera-operator" podName="tigera-operator-7bc55997bb-hxbq2" Feb 13 19:42:27.428487 systemd[1]: Created slice kubepods-besteffort-podcbba647e_6057_48cc_9bec_c22dd010bb73.slice - libcontainer container kubepods-besteffort-podcbba647e_6057_48cc_9bec_c22dd010bb73.slice. 
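The kube-proxy entries above follow the usual CRI sequence: RunPodSandbox returns a sandbox id, CreateContainer is issued against that sandbox, and StartContainer runs the result. A rough sketch of that sequence is below, assuming the k8s.io/cri-api v1 Go client and a containerd socket at /run/containerd/containerd.sock; the socket path and the kube-proxy image tag are assumptions for illustration, not values taken from this log.

// cri_sequence.go - sketch of RunPodSandbox -> CreateContainer -> StartContainer.
package main

import (
	"context"
	"fmt"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()
	rt := runtimeapi.NewRuntimeServiceClient(conn)
	ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
	defer cancel()

	// 1. RunPodSandbox: returns the sandbox id that later log lines refer to.
	sandboxCfg := &runtimeapi.PodSandboxConfig{
		Metadata: &runtimeapi.PodSandboxMetadata{
			Name:      "kube-proxy-fxtx5",
			Namespace: "kube-system",
			Uid:       "f7d63b5d-e41d-492b-a7b3-c17d02065a4e",
			Attempt:   0,
		},
	}
	sb, err := rt.RunPodSandbox(ctx, &runtimeapi.RunPodSandboxRequest{Config: sandboxCfg})
	if err != nil {
		panic(err)
	}

	// 2. CreateContainer inside that sandbox (image tag is a placeholder).
	c, err := rt.CreateContainer(ctx, &runtimeapi.CreateContainerRequest{
		PodSandboxId: sb.PodSandboxId,
		Config: &runtimeapi.ContainerConfig{
			Metadata: &runtimeapi.ContainerMetadata{Name: "kube-proxy"},
			Image:    &runtimeapi.ImageSpec{Image: "registry.k8s.io/kube-proxy:v1.30.0"},
		},
		SandboxConfig: sandboxCfg,
	})
	if err != nil {
		panic(err)
	}

	// 3. StartContainer, matching the "StartContainer ... returns successfully" entry.
	if _, err := rt.StartContainer(ctx, &runtimeapi.StartContainerRequest{ContainerId: c.ContainerId}); err != nil {
		panic(err)
	}
	fmt.Println("started", c.ContainerId, "in sandbox", sb.PodSandboxId)
}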
Feb 13 19:42:27.484296 kubelet[2656]: I0213 19:42:27.484245 2656 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/cbba647e-6057-48cc-9bec-c22dd010bb73-var-lib-calico\") pod \"tigera-operator-7bc55997bb-hxbq2\" (UID: \"cbba647e-6057-48cc-9bec-c22dd010bb73\") " pod="tigera-operator/tigera-operator-7bc55997bb-hxbq2" Feb 13 19:42:27.484296 kubelet[2656]: I0213 19:42:27.484290 2656 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n57vv\" (UniqueName: \"kubernetes.io/projected/cbba647e-6057-48cc-9bec-c22dd010bb73-kube-api-access-n57vv\") pod \"tigera-operator-7bc55997bb-hxbq2\" (UID: \"cbba647e-6057-48cc-9bec-c22dd010bb73\") " pod="tigera-operator/tigera-operator-7bc55997bb-hxbq2" Feb 13 19:42:27.732324 containerd[1467]: time="2025-02-13T19:42:27.732181492Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7bc55997bb-hxbq2,Uid:cbba647e-6057-48cc-9bec-c22dd010bb73,Namespace:tigera-operator,Attempt:0,}" Feb 13 19:42:27.737671 kubelet[2656]: E0213 19:42:27.737644 2656 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:42:27.758910 containerd[1467]: time="2025-02-13T19:42:27.758635374Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:42:27.758910 containerd[1467]: time="2025-02-13T19:42:27.758719883Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:42:27.758910 containerd[1467]: time="2025-02-13T19:42:27.758741003Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:42:27.758910 containerd[1467]: time="2025-02-13T19:42:27.758855528Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:42:27.778754 systemd[1]: Started cri-containerd-4e58741a6c5e41af39d5d30c7a40f557eef294b2229d93870a720a11b3df44ac.scope - libcontainer container 4e58741a6c5e41af39d5d30c7a40f557eef294b2229d93870a720a11b3df44ac. Feb 13 19:42:27.816542 containerd[1467]: time="2025-02-13T19:42:27.816434958Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7bc55997bb-hxbq2,Uid:cbba647e-6057-48cc-9bec-c22dd010bb73,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"4e58741a6c5e41af39d5d30c7a40f557eef294b2229d93870a720a11b3df44ac\"" Feb 13 19:42:27.818328 containerd[1467]: time="2025-02-13T19:42:27.818298897Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\"" Feb 13 19:42:29.677824 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1494113347.mount: Deactivated successfully. 
Feb 13 19:42:29.965921 containerd[1467]: time="2025-02-13T19:42:29.965792545Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:42:29.966758 containerd[1467]: time="2025-02-13T19:42:29.966672962Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.2: active requests=0, bytes read=21762497" Feb 13 19:42:29.967933 containerd[1467]: time="2025-02-13T19:42:29.967898168Z" level=info msg="ImageCreate event name:\"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:42:29.970666 containerd[1467]: time="2025-02-13T19:42:29.970634278Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:42:29.971513 containerd[1467]: time="2025-02-13T19:42:29.971457006Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.2\" with image id \"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\", repo tag \"quay.io/tigera/operator:v1.36.2\", repo digest \"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\", size \"21758492\" in 2.153124625s" Feb 13 19:42:29.971513 containerd[1467]: time="2025-02-13T19:42:29.971510828Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\" returns image reference \"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\"" Feb 13 19:42:29.973900 containerd[1467]: time="2025-02-13T19:42:29.973809864Z" level=info msg="CreateContainer within sandbox \"4e58741a6c5e41af39d5d30c7a40f557eef294b2229d93870a720a11b3df44ac\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Feb 13 19:42:29.988046 containerd[1467]: time="2025-02-13T19:42:29.988007134Z" level=info msg="CreateContainer within sandbox \"4e58741a6c5e41af39d5d30c7a40f557eef294b2229d93870a720a11b3df44ac\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"1337a207bb060aebe900c503b2380ed5e069ff6bde3dcadf68bcc9acb8dba2a5\"" Feb 13 19:42:29.988765 containerd[1467]: time="2025-02-13T19:42:29.988466228Z" level=info msg="StartContainer for \"1337a207bb060aebe900c503b2380ed5e069ff6bde3dcadf68bcc9acb8dba2a5\"" Feb 13 19:42:30.028801 systemd[1]: Started cri-containerd-1337a207bb060aebe900c503b2380ed5e069ff6bde3dcadf68bcc9acb8dba2a5.scope - libcontainer container 1337a207bb060aebe900c503b2380ed5e069ff6bde3dcadf68bcc9acb8dba2a5. 
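For scale, the "Pulled image" entry above reports 21,758,492 bytes for quay.io/tigera/operator:v1.36.2 fetched in about 2.15 s, i.e. roughly 10 MB/s (21,758,492 / 2.153124625 ≈ 1.01e7 bytes per second) from the registry on this host; the rate is derived from the logged values, not measured independently.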
Feb 13 19:42:30.064486 containerd[1467]: time="2025-02-13T19:42:30.064367459Z" level=info msg="StartContainer for \"1337a207bb060aebe900c503b2380ed5e069ff6bde3dcadf68bcc9acb8dba2a5\" returns successfully" Feb 13 19:42:30.752754 kubelet[2656]: I0213 19:42:30.752684 2656 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-fxtx5" podStartSLOduration=4.752656842 podStartE2EDuration="4.752656842s" podCreationTimestamp="2025-02-13 19:42:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 19:42:27.745025738 +0000 UTC m=+14.117295291" watchObservedRunningTime="2025-02-13 19:42:30.752656842 +0000 UTC m=+17.124926395" Feb 13 19:42:30.753417 kubelet[2656]: I0213 19:42:30.752832 2656 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7bc55997bb-hxbq2" podStartSLOduration=1.598277624 podStartE2EDuration="3.752824698s" podCreationTimestamp="2025-02-13 19:42:27 +0000 UTC" firstStartedPulling="2025-02-13 19:42:27.817899936 +0000 UTC m=+14.190169489" lastFinishedPulling="2025-02-13 19:42:29.97244701 +0000 UTC m=+16.344716563" observedRunningTime="2025-02-13 19:42:30.75252312 +0000 UTC m=+17.124792674" watchObservedRunningTime="2025-02-13 19:42:30.752824698 +0000 UTC m=+17.125094261" Feb 13 19:42:33.040825 kubelet[2656]: I0213 19:42:33.040769 2656 topology_manager.go:215] "Topology Admit Handler" podUID="d6047c8b-9844-4361-b64e-f988dcd7fb95" podNamespace="calico-system" podName="calico-typha-556fd4d7fc-p7gx5" Feb 13 19:42:33.050830 systemd[1]: Created slice kubepods-besteffort-podd6047c8b_9844_4361_b64e_f988dcd7fb95.slice - libcontainer container kubepods-besteffort-podd6047c8b_9844_4361_b64e_f988dcd7fb95.slice. Feb 13 19:42:33.109961 kubelet[2656]: I0213 19:42:33.109901 2656 topology_manager.go:215] "Topology Admit Handler" podUID="4ea7edef-9a4b-4418-b02a-3078a036d08e" podNamespace="calico-system" podName="calico-node-wj95g" Feb 13 19:42:33.118915 systemd[1]: Created slice kubepods-besteffort-pod4ea7edef_9a4b_4418_b02a_3078a036d08e.slice - libcontainer container kubepods-besteffort-pod4ea7edef_9a4b_4418_b02a_3078a036d08e.slice. 
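Two of the patterns in the surrounding entries are worth unpacking.

First, the pod_startup_latency_tracker fields are internally consistent: podStartE2EDuration is observedRunningTime minus podCreationTimestamp, and podStartSLOduration subtracts the image-pull window (lastFinishedPulling minus firstStartedPulling). For the tigera-operator pod above, checking against the logged values:
  pull window: 19:42:29.972447010 - 19:42:27.817899936 = 2.154547074 s
  SLO duration: 3.752824698 s - 2.154547074 s = 1.598277624 s, matching podStartSLOduration=1.598277624.

Second, the long run of FlexVolume "driver call failed ... nodeagent~uds" probes that follows the calico-node and csi-node-driver admissions occurs because kubelet executes the plugin binary at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds with the single argument "init" and expects a JSON status on stdout; the binary is not found, so the empty output yields "unexpected end of JSON input". A minimal sketch of the init handshake such a driver is expected to answer, assuming the standard FlexVolume call convention (this is illustrative, not Calico's actual driver):

// flexvol_init.go - sketch of a FlexVolume driver's "init" response.
package main

import (
	"encoding/json"
	"fmt"
	"os"
)

type driverStatus struct {
	Status       string          `json:"status"`
	Message      string          `json:"message,omitempty"`
	Capabilities map[string]bool `json:"capabilities,omitempty"`
}

func main() {
	if len(os.Args) > 1 && os.Args[1] == "init" {
		// kubelet parses this JSON; an empty reply is what produces the
		// "unexpected end of JSON input" errors in the log.
		out, _ := json.Marshal(driverStatus{
			Status:       "Success",
			Capabilities: map[string]bool{"attach": false},
		})
		fmt.Println(string(out))
		return
	}
	out, _ := json.Marshal(driverStatus{Status: "Not supported"})
	fmt.Println(string(out))
}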
Feb 13 19:42:33.121859 kubelet[2656]: I0213 19:42:33.120295 2656 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/d6047c8b-9844-4361-b64e-f988dcd7fb95-typha-certs\") pod \"calico-typha-556fd4d7fc-p7gx5\" (UID: \"d6047c8b-9844-4361-b64e-f988dcd7fb95\") " pod="calico-system/calico-typha-556fd4d7fc-p7gx5" Feb 13 19:42:33.121859 kubelet[2656]: I0213 19:42:33.120352 2656 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/4ea7edef-9a4b-4418-b02a-3078a036d08e-flexvol-driver-host\") pod \"calico-node-wj95g\" (UID: \"4ea7edef-9a4b-4418-b02a-3078a036d08e\") " pod="calico-system/calico-node-wj95g" Feb 13 19:42:33.121859 kubelet[2656]: I0213 19:42:33.120380 2656 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/4ea7edef-9a4b-4418-b02a-3078a036d08e-policysync\") pod \"calico-node-wj95g\" (UID: \"4ea7edef-9a4b-4418-b02a-3078a036d08e\") " pod="calico-system/calico-node-wj95g" Feb 13 19:42:33.121859 kubelet[2656]: I0213 19:42:33.120403 2656 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4ea7edef-9a4b-4418-b02a-3078a036d08e-tigera-ca-bundle\") pod \"calico-node-wj95g\" (UID: \"4ea7edef-9a4b-4418-b02a-3078a036d08e\") " pod="calico-system/calico-node-wj95g" Feb 13 19:42:33.121859 kubelet[2656]: I0213 19:42:33.120424 2656 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/4ea7edef-9a4b-4418-b02a-3078a036d08e-var-run-calico\") pod \"calico-node-wj95g\" (UID: \"4ea7edef-9a4b-4418-b02a-3078a036d08e\") " pod="calico-system/calico-node-wj95g" Feb 13 19:42:33.122061 kubelet[2656]: I0213 19:42:33.120446 2656 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-krkb8\" (UniqueName: \"kubernetes.io/projected/d6047c8b-9844-4361-b64e-f988dcd7fb95-kube-api-access-krkb8\") pod \"calico-typha-556fd4d7fc-p7gx5\" (UID: \"d6047c8b-9844-4361-b64e-f988dcd7fb95\") " pod="calico-system/calico-typha-556fd4d7fc-p7gx5" Feb 13 19:42:33.122061 kubelet[2656]: I0213 19:42:33.120467 2656 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/4ea7edef-9a4b-4418-b02a-3078a036d08e-cni-log-dir\") pod \"calico-node-wj95g\" (UID: \"4ea7edef-9a4b-4418-b02a-3078a036d08e\") " pod="calico-system/calico-node-wj95g" Feb 13 19:42:33.122061 kubelet[2656]: I0213 19:42:33.120490 2656 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/4ea7edef-9a4b-4418-b02a-3078a036d08e-node-certs\") pod \"calico-node-wj95g\" (UID: \"4ea7edef-9a4b-4418-b02a-3078a036d08e\") " pod="calico-system/calico-node-wj95g" Feb 13 19:42:33.122061 kubelet[2656]: I0213 19:42:33.120528 2656 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4ea7edef-9a4b-4418-b02a-3078a036d08e-xtables-lock\") pod \"calico-node-wj95g\" (UID: \"4ea7edef-9a4b-4418-b02a-3078a036d08e\") " pod="calico-system/calico-node-wj95g" Feb 13 19:42:33.122061 
kubelet[2656]: I0213 19:42:33.120549 2656 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/4ea7edef-9a4b-4418-b02a-3078a036d08e-cni-net-dir\") pod \"calico-node-wj95g\" (UID: \"4ea7edef-9a4b-4418-b02a-3078a036d08e\") " pod="calico-system/calico-node-wj95g" Feb 13 19:42:33.122176 kubelet[2656]: I0213 19:42:33.120572 2656 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qmbmr\" (UniqueName: \"kubernetes.io/projected/4ea7edef-9a4b-4418-b02a-3078a036d08e-kube-api-access-qmbmr\") pod \"calico-node-wj95g\" (UID: \"4ea7edef-9a4b-4418-b02a-3078a036d08e\") " pod="calico-system/calico-node-wj95g" Feb 13 19:42:33.122176 kubelet[2656]: I0213 19:42:33.120597 2656 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4ea7edef-9a4b-4418-b02a-3078a036d08e-lib-modules\") pod \"calico-node-wj95g\" (UID: \"4ea7edef-9a4b-4418-b02a-3078a036d08e\") " pod="calico-system/calico-node-wj95g" Feb 13 19:42:33.122176 kubelet[2656]: I0213 19:42:33.120616 2656 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/4ea7edef-9a4b-4418-b02a-3078a036d08e-var-lib-calico\") pod \"calico-node-wj95g\" (UID: \"4ea7edef-9a4b-4418-b02a-3078a036d08e\") " pod="calico-system/calico-node-wj95g" Feb 13 19:42:33.122176 kubelet[2656]: I0213 19:42:33.120653 2656 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/4ea7edef-9a4b-4418-b02a-3078a036d08e-cni-bin-dir\") pod \"calico-node-wj95g\" (UID: \"4ea7edef-9a4b-4418-b02a-3078a036d08e\") " pod="calico-system/calico-node-wj95g" Feb 13 19:42:33.122176 kubelet[2656]: I0213 19:42:33.120676 2656 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d6047c8b-9844-4361-b64e-f988dcd7fb95-tigera-ca-bundle\") pod \"calico-typha-556fd4d7fc-p7gx5\" (UID: \"d6047c8b-9844-4361-b64e-f988dcd7fb95\") " pod="calico-system/calico-typha-556fd4d7fc-p7gx5" Feb 13 19:42:33.224488 kubelet[2656]: E0213 19:42:33.224443 2656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:42:33.224488 kubelet[2656]: W0213 19:42:33.224478 2656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:42:33.224655 kubelet[2656]: E0213 19:42:33.224610 2656 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:42:33.224681 kubelet[2656]: I0213 19:42:33.224646 2656 topology_manager.go:215] "Topology Admit Handler" podUID="b54cda8b-5691-4984-90d2-94b24a2518d5" podNamespace="calico-system" podName="csi-node-driver-qxpzd" Feb 13 19:42:33.225338 kubelet[2656]: E0213 19:42:33.224940 2656 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-qxpzd" podUID="b54cda8b-5691-4984-90d2-94b24a2518d5" Feb 13 19:42:33.225338 kubelet[2656]: E0213 19:42:33.224962 2656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:42:33.225338 kubelet[2656]: W0213 19:42:33.225238 2656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:42:33.225338 kubelet[2656]: E0213 19:42:33.225279 2656 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:42:33.225781 kubelet[2656]: E0213 19:42:33.225646 2656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:42:33.225781 kubelet[2656]: W0213 19:42:33.225663 2656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:42:33.225860 kubelet[2656]: E0213 19:42:33.225788 2656 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:42:33.226772 kubelet[2656]: E0213 19:42:33.226731 2656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:42:33.226772 kubelet[2656]: W0213 19:42:33.226770 2656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:42:33.227543 kubelet[2656]: E0213 19:42:33.226918 2656 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:42:33.227543 kubelet[2656]: E0213 19:42:33.227198 2656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:42:33.227543 kubelet[2656]: W0213 19:42:33.227208 2656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:42:33.227543 kubelet[2656]: E0213 19:42:33.227287 2656 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:42:33.227656 kubelet[2656]: E0213 19:42:33.227558 2656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:42:33.227656 kubelet[2656]: W0213 19:42:33.227569 2656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:42:33.227701 kubelet[2656]: E0213 19:42:33.227672 2656 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:42:33.227887 kubelet[2656]: E0213 19:42:33.227869 2656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:42:33.227887 kubelet[2656]: W0213 19:42:33.227882 2656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:42:33.228007 kubelet[2656]: E0213 19:42:33.227965 2656 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:42:33.228305 kubelet[2656]: E0213 19:42:33.228207 2656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:42:33.228305 kubelet[2656]: W0213 19:42:33.228224 2656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:42:33.229621 kubelet[2656]: E0213 19:42:33.228439 2656 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:42:33.229621 kubelet[2656]: E0213 19:42:33.228743 2656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:42:33.229621 kubelet[2656]: W0213 19:42:33.228754 2656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:42:33.229621 kubelet[2656]: E0213 19:42:33.228880 2656 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:42:33.229621 kubelet[2656]: E0213 19:42:33.229436 2656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:42:33.229621 kubelet[2656]: W0213 19:42:33.229448 2656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:42:33.229621 kubelet[2656]: E0213 19:42:33.229565 2656 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:42:33.229777 kubelet[2656]: E0213 19:42:33.229737 2656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:42:33.229777 kubelet[2656]: W0213 19:42:33.229747 2656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:42:33.230055 kubelet[2656]: E0213 19:42:33.229811 2656 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:42:33.230250 kubelet[2656]: E0213 19:42:33.230229 2656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:42:33.230250 kubelet[2656]: W0213 19:42:33.230243 2656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:42:33.232700 kubelet[2656]: E0213 19:42:33.231024 2656 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:42:33.236893 kubelet[2656]: E0213 19:42:33.236683 2656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:42:33.236893 kubelet[2656]: W0213 19:42:33.236712 2656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:42:33.236893 kubelet[2656]: E0213 19:42:33.236890 2656 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:42:33.239519 kubelet[2656]: E0213 19:42:33.237124 2656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:42:33.239519 kubelet[2656]: W0213 19:42:33.237138 2656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:42:33.239519 kubelet[2656]: E0213 19:42:33.237231 2656 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:42:33.239519 kubelet[2656]: E0213 19:42:33.237367 2656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:42:33.239519 kubelet[2656]: W0213 19:42:33.237374 2656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:42:33.239519 kubelet[2656]: E0213 19:42:33.237440 2656 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:42:33.239519 kubelet[2656]: E0213 19:42:33.237840 2656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:42:33.239519 kubelet[2656]: W0213 19:42:33.237848 2656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:42:33.239519 kubelet[2656]: E0213 19:42:33.237918 2656 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:42:33.241089 kubelet[2656]: E0213 19:42:33.241054 2656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:42:33.241089 kubelet[2656]: W0213 19:42:33.241081 2656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:42:33.241238 kubelet[2656]: E0213 19:42:33.241214 2656 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:42:33.243932 kubelet[2656]: E0213 19:42:33.243219 2656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:42:33.243932 kubelet[2656]: W0213 19:42:33.243237 2656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:42:33.243932 kubelet[2656]: E0213 19:42:33.243351 2656 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:42:33.243932 kubelet[2656]: E0213 19:42:33.243516 2656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:42:33.243932 kubelet[2656]: W0213 19:42:33.243538 2656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:42:33.243932 kubelet[2656]: E0213 19:42:33.243646 2656 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:42:33.244296 kubelet[2656]: E0213 19:42:33.244159 2656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:42:33.244296 kubelet[2656]: W0213 19:42:33.244169 2656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:42:33.244296 kubelet[2656]: E0213 19:42:33.244256 2656 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:42:33.244715 kubelet[2656]: E0213 19:42:33.244694 2656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:42:33.244715 kubelet[2656]: W0213 19:42:33.244707 2656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:42:33.244878 kubelet[2656]: E0213 19:42:33.244730 2656 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:42:33.244971 kubelet[2656]: E0213 19:42:33.244954 2656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:42:33.244971 kubelet[2656]: W0213 19:42:33.244966 2656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:42:33.245179 kubelet[2656]: E0213 19:42:33.244978 2656 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:42:33.245398 kubelet[2656]: E0213 19:42:33.245379 2656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:42:33.245398 kubelet[2656]: W0213 19:42:33.245392 2656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:42:33.245458 kubelet[2656]: E0213 19:42:33.245401 2656 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:42:33.248668 kubelet[2656]: E0213 19:42:33.246443 2656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:42:33.248668 kubelet[2656]: W0213 19:42:33.246455 2656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:42:33.248668 kubelet[2656]: E0213 19:42:33.246466 2656 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:42:33.269474 kubelet[2656]: E0213 19:42:33.269440 2656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:42:33.269641 kubelet[2656]: W0213 19:42:33.269486 2656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:42:33.269641 kubelet[2656]: E0213 19:42:33.269546 2656 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:42:33.315910 kubelet[2656]: E0213 19:42:33.315877 2656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:42:33.315910 kubelet[2656]: W0213 19:42:33.315903 2656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:42:33.316103 kubelet[2656]: E0213 19:42:33.315931 2656 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:42:33.316304 kubelet[2656]: E0213 19:42:33.316269 2656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:42:33.316356 kubelet[2656]: W0213 19:42:33.316304 2656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:42:33.316356 kubelet[2656]: E0213 19:42:33.316317 2656 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:42:33.316786 kubelet[2656]: E0213 19:42:33.316759 2656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:42:33.316786 kubelet[2656]: W0213 19:42:33.316773 2656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:42:33.316837 kubelet[2656]: E0213 19:42:33.316786 2656 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:42:33.317047 kubelet[2656]: E0213 19:42:33.317030 2656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:42:33.317082 kubelet[2656]: W0213 19:42:33.317049 2656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:42:33.317082 kubelet[2656]: E0213 19:42:33.317061 2656 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:42:33.317316 kubelet[2656]: E0213 19:42:33.317301 2656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:42:33.317316 kubelet[2656]: W0213 19:42:33.317314 2656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:42:33.317380 kubelet[2656]: E0213 19:42:33.317332 2656 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:42:33.317559 kubelet[2656]: E0213 19:42:33.317544 2656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:42:33.317603 kubelet[2656]: W0213 19:42:33.317557 2656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:42:33.317603 kubelet[2656]: E0213 19:42:33.317569 2656 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:42:33.317807 kubelet[2656]: E0213 19:42:33.317773 2656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:42:33.317807 kubelet[2656]: W0213 19:42:33.317786 2656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:42:33.317807 kubelet[2656]: E0213 19:42:33.317796 2656 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:42:33.318017 kubelet[2656]: E0213 19:42:33.317986 2656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:42:33.318017 kubelet[2656]: W0213 19:42:33.317997 2656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:42:33.318017 kubelet[2656]: E0213 19:42:33.318007 2656 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:42:33.318310 kubelet[2656]: E0213 19:42:33.318278 2656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:42:33.318310 kubelet[2656]: W0213 19:42:33.318306 2656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:42:33.318412 kubelet[2656]: E0213 19:42:33.318345 2656 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:42:33.318693 kubelet[2656]: E0213 19:42:33.318646 2656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:42:33.318693 kubelet[2656]: W0213 19:42:33.318658 2656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:42:33.318693 kubelet[2656]: E0213 19:42:33.318668 2656 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:42:33.318899 kubelet[2656]: E0213 19:42:33.318882 2656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:42:33.318899 kubelet[2656]: W0213 19:42:33.318896 2656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:42:33.318982 kubelet[2656]: E0213 19:42:33.318906 2656 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:42:33.319160 kubelet[2656]: E0213 19:42:33.319143 2656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:42:33.319160 kubelet[2656]: W0213 19:42:33.319154 2656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:42:33.319256 kubelet[2656]: E0213 19:42:33.319164 2656 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:42:33.319430 kubelet[2656]: E0213 19:42:33.319414 2656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:42:33.319430 kubelet[2656]: W0213 19:42:33.319424 2656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:42:33.319522 kubelet[2656]: E0213 19:42:33.319434 2656 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:42:33.319684 kubelet[2656]: E0213 19:42:33.319669 2656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:42:33.319684 kubelet[2656]: W0213 19:42:33.319679 2656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:42:33.319684 kubelet[2656]: E0213 19:42:33.319688 2656 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:42:33.319919 kubelet[2656]: E0213 19:42:33.319897 2656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:42:33.319919 kubelet[2656]: W0213 19:42:33.319907 2656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:42:33.319919 kubelet[2656]: E0213 19:42:33.319915 2656 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:42:33.320153 kubelet[2656]: E0213 19:42:33.320137 2656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:42:33.320153 kubelet[2656]: W0213 19:42:33.320148 2656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:42:33.320228 kubelet[2656]: E0213 19:42:33.320157 2656 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:42:33.320432 kubelet[2656]: E0213 19:42:33.320414 2656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:42:33.320432 kubelet[2656]: W0213 19:42:33.320427 2656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:42:33.320513 kubelet[2656]: E0213 19:42:33.320437 2656 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:42:33.320798 kubelet[2656]: E0213 19:42:33.320758 2656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:42:33.320798 kubelet[2656]: W0213 19:42:33.320793 2656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:42:33.320910 kubelet[2656]: E0213 19:42:33.320804 2656 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:42:33.321089 kubelet[2656]: E0213 19:42:33.321069 2656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:42:33.321089 kubelet[2656]: W0213 19:42:33.321087 2656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:42:33.321182 kubelet[2656]: E0213 19:42:33.321104 2656 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:42:33.321425 kubelet[2656]: E0213 19:42:33.321397 2656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:42:33.321425 kubelet[2656]: W0213 19:42:33.321412 2656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:42:33.321425 kubelet[2656]: E0213 19:42:33.321425 2656 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:42:33.321827 kubelet[2656]: E0213 19:42:33.321810 2656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:42:33.321827 kubelet[2656]: W0213 19:42:33.321825 2656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:42:33.321900 kubelet[2656]: E0213 19:42:33.321838 2656 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:42:33.321900 kubelet[2656]: I0213 19:42:33.321871 2656 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b54cda8b-5691-4984-90d2-94b24a2518d5-kubelet-dir\") pod \"csi-node-driver-qxpzd\" (UID: \"b54cda8b-5691-4984-90d2-94b24a2518d5\") " pod="calico-system/csi-node-driver-qxpzd" Feb 13 19:42:33.322105 kubelet[2656]: E0213 19:42:33.322087 2656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:42:33.322153 kubelet[2656]: W0213 19:42:33.322105 2656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:42:33.322153 kubelet[2656]: E0213 19:42:33.322124 2656 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:42:33.322153 kubelet[2656]: I0213 19:42:33.322142 2656 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/b54cda8b-5691-4984-90d2-94b24a2518d5-registration-dir\") pod \"csi-node-driver-qxpzd\" (UID: \"b54cda8b-5691-4984-90d2-94b24a2518d5\") " pod="calico-system/csi-node-driver-qxpzd" Feb 13 19:42:33.322429 kubelet[2656]: E0213 19:42:33.322392 2656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:42:33.322429 kubelet[2656]: W0213 19:42:33.322409 2656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:42:33.322531 kubelet[2656]: E0213 19:42:33.322437 2656 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:42:33.322531 kubelet[2656]: I0213 19:42:33.322456 2656 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/b54cda8b-5691-4984-90d2-94b24a2518d5-socket-dir\") pod \"csi-node-driver-qxpzd\" (UID: \"b54cda8b-5691-4984-90d2-94b24a2518d5\") " pod="calico-system/csi-node-driver-qxpzd" Feb 13 19:42:33.322768 kubelet[2656]: E0213 19:42:33.322742 2656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:42:33.322768 kubelet[2656]: W0213 19:42:33.322761 2656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:42:33.322892 kubelet[2656]: E0213 19:42:33.322779 2656 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:42:33.322892 kubelet[2656]: I0213 19:42:33.322812 2656 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/b54cda8b-5691-4984-90d2-94b24a2518d5-varrun\") pod \"csi-node-driver-qxpzd\" (UID: \"b54cda8b-5691-4984-90d2-94b24a2518d5\") " pod="calico-system/csi-node-driver-qxpzd" Feb 13 19:42:33.323058 kubelet[2656]: E0213 19:42:33.323038 2656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:42:33.323058 kubelet[2656]: W0213 19:42:33.323051 2656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:42:33.323152 kubelet[2656]: E0213 19:42:33.323069 2656 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:42:33.323383 kubelet[2656]: E0213 19:42:33.323366 2656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:42:33.323383 kubelet[2656]: W0213 19:42:33.323381 2656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:42:33.323451 kubelet[2656]: E0213 19:42:33.323399 2656 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:42:33.323652 kubelet[2656]: E0213 19:42:33.323638 2656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:42:33.323652 kubelet[2656]: W0213 19:42:33.323650 2656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:42:33.323710 kubelet[2656]: E0213 19:42:33.323666 2656 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:42:33.323885 kubelet[2656]: E0213 19:42:33.323869 2656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:42:33.323885 kubelet[2656]: W0213 19:42:33.323882 2656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:42:33.324068 kubelet[2656]: E0213 19:42:33.323926 2656 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:42:33.324115 kubelet[2656]: E0213 19:42:33.324078 2656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:42:33.324115 kubelet[2656]: W0213 19:42:33.324087 2656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:42:33.324193 kubelet[2656]: E0213 19:42:33.324164 2656 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:42:33.324305 kubelet[2656]: E0213 19:42:33.324289 2656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:42:33.324305 kubelet[2656]: W0213 19:42:33.324299 2656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:42:33.324393 kubelet[2656]: E0213 19:42:33.324335 2656 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:42:33.324393 kubelet[2656]: I0213 19:42:33.324369 2656 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dmzxj\" (UniqueName: \"kubernetes.io/projected/b54cda8b-5691-4984-90d2-94b24a2518d5-kube-api-access-dmzxj\") pod \"csi-node-driver-qxpzd\" (UID: \"b54cda8b-5691-4984-90d2-94b24a2518d5\") " pod="calico-system/csi-node-driver-qxpzd" Feb 13 19:42:33.324591 kubelet[2656]: E0213 19:42:33.324578 2656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:42:33.324591 kubelet[2656]: W0213 19:42:33.324588 2656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:42:33.324655 kubelet[2656]: E0213 19:42:33.324614 2656 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
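[Editor's note] Interleaved with the FlexVolume noise, the reconciler_common entries above register the volumes of pod csi-node-driver-qxpzd (UID b54cda8b-5691-4984-90d2-94b24a2518d5): four host-path volumes (kubelet-dir, registration-dir, socket-dir, varrun) and one projected service-account token volume (kube-api-access-dmzxj). Each is keyed by a UniqueName of the form plugin/<podUID>-<volumeName>, which is the pattern visible in the log lines themselves; the sketch below merely reconstructs those strings and is not kubelet code.

    // Sketch: rebuilding the UniqueName keys seen in the reconciler entries above.
    package main

    import "fmt"

    func main() {
        podUID := "b54cda8b-5691-4984-90d2-94b24a2518d5"
        for _, v := range []string{"kubelet-dir", "registration-dir", "socket-dir", "varrun"} {
            fmt.Printf("kubernetes.io/host-path/%s-%s\n", podUID, v)
        }
        // The projected token volume follows the same naming pattern.
        fmt.Printf("kubernetes.io/projected/%s-%s\n", podUID, "kube-api-access-dmzxj")
    }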
Error: unexpected end of JSON input" Feb 13 19:42:33.324856 kubelet[2656]: E0213 19:42:33.324842 2656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:42:33.325049 kubelet[2656]: W0213 19:42:33.324906 2656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:42:33.325049 kubelet[2656]: E0213 19:42:33.324924 2656 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:42:33.325205 kubelet[2656]: E0213 19:42:33.325192 2656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:42:33.325285 kubelet[2656]: W0213 19:42:33.325263 2656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:42:33.325348 kubelet[2656]: E0213 19:42:33.325286 2656 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:42:33.325575 kubelet[2656]: E0213 19:42:33.325558 2656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:42:33.325575 kubelet[2656]: W0213 19:42:33.325572 2656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:42:33.325639 kubelet[2656]: E0213 19:42:33.325584 2656 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:42:33.325840 kubelet[2656]: E0213 19:42:33.325823 2656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:42:33.325840 kubelet[2656]: W0213 19:42:33.325837 2656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:42:33.325890 kubelet[2656]: E0213 19:42:33.325849 2656 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:42:33.357412 kubelet[2656]: E0213 19:42:33.357058 2656 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:42:33.357971 containerd[1467]: time="2025-02-13T19:42:33.357916949Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-556fd4d7fc-p7gx5,Uid:d6047c8b-9844-4361-b64e-f988dcd7fb95,Namespace:calico-system,Attempt:0,}" Feb 13 19:42:33.383192 containerd[1467]: time="2025-02-13T19:42:33.383021639Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:42:33.383192 containerd[1467]: time="2025-02-13T19:42:33.383141305Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:42:33.383192 containerd[1467]: time="2025-02-13T19:42:33.383157735Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:42:33.384015 containerd[1467]: time="2025-02-13T19:42:33.383288110Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:42:33.403648 systemd[1]: Started cri-containerd-cee7a484e03a0104a8bd79e3a5250310f33b7357c12a1583f5d2469361d603bc.scope - libcontainer container cee7a484e03a0104a8bd79e3a5250310f33b7357c12a1583f5d2469361d603bc. Feb 13 19:42:33.422490 kubelet[2656]: E0213 19:42:33.422459 2656 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:42:33.422938 containerd[1467]: time="2025-02-13T19:42:33.422903868Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-wj95g,Uid:4ea7edef-9a4b-4418-b02a-3078a036d08e,Namespace:calico-system,Attempt:0,}" Feb 13 19:42:33.426210 kubelet[2656]: E0213 19:42:33.426148 2656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:42:33.426210 kubelet[2656]: W0213 19:42:33.426207 2656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:42:33.426453 kubelet[2656]: E0213 19:42:33.426234 2656 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:42:33.426742 kubelet[2656]: E0213 19:42:33.426724 2656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:42:33.426820 kubelet[2656]: W0213 19:42:33.426748 2656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:42:33.426820 kubelet[2656]: E0213 19:42:33.426766 2656 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:42:33.427106 kubelet[2656]: E0213 19:42:33.427080 2656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:42:33.427106 kubelet[2656]: W0213 19:42:33.427101 2656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:42:33.427212 kubelet[2656]: E0213 19:42:33.427126 2656 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
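[Editor's note] The dns.go:153 "Nameserver limits exceeded" warnings indicate that the node's resolv.conf lists more nameservers than kubelet will pass through to pods; only the first three (1.1.1.1, 1.0.0.1, 8.8.8.8) are applied. The snippet below illustrates that truncation behaviour under the classic three-entry resolver limit; it is not kubelet's implementation, and the fourth nameserver is a hypothetical extra entry, since the log only shows the three that were kept.

    // Sketch: truncating a nameserver list to the applied limit of three.
    package main

    import "fmt"

    const maxNameservers = 3

    func applyNameserverLimit(servers []string) []string {
        if len(servers) > maxNameservers {
            return servers[:maxNameservers]
        }
        return servers
    }

    func main() {
        // Hypothetical host resolv.conf with one entry too many.
        configured := []string{"1.1.1.1", "1.0.0.1", "8.8.8.8", "192.168.0.1"}
        fmt.Println(applyNameserverLimit(configured)) // [1.1.1.1 1.0.0.1 8.8.8.8]
    }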
Error: unexpected end of JSON input" Feb 13 19:42:33.427400 kubelet[2656]: E0213 19:42:33.427371 2656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:42:33.427400 kubelet[2656]: W0213 19:42:33.427385 2656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:42:33.427400 kubelet[2656]: E0213 19:42:33.427400 2656 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:42:33.427738 kubelet[2656]: E0213 19:42:33.427673 2656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:42:33.427738 kubelet[2656]: W0213 19:42:33.427681 2656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:42:33.427738 kubelet[2656]: E0213 19:42:33.427696 2656 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:42:33.427992 kubelet[2656]: E0213 19:42:33.427956 2656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:42:33.427992 kubelet[2656]: W0213 19:42:33.427995 2656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:42:33.428205 kubelet[2656]: E0213 19:42:33.428121 2656 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:42:33.428205 kubelet[2656]: E0213 19:42:33.428193 2656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:42:33.428205 kubelet[2656]: W0213 19:42:33.428201 2656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:42:33.428407 kubelet[2656]: E0213 19:42:33.428358 2656 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:42:33.428442 kubelet[2656]: E0213 19:42:33.428429 2656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:42:33.428442 kubelet[2656]: W0213 19:42:33.428438 2656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:42:33.428536 kubelet[2656]: E0213 19:42:33.428448 2656 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:42:33.428710 kubelet[2656]: E0213 19:42:33.428694 2656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:42:33.428710 kubelet[2656]: W0213 19:42:33.428705 2656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:42:33.428797 kubelet[2656]: E0213 19:42:33.428720 2656 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:42:33.428922 kubelet[2656]: E0213 19:42:33.428888 2656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:42:33.428922 kubelet[2656]: W0213 19:42:33.428900 2656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:42:33.428922 kubelet[2656]: E0213 19:42:33.428912 2656 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:42:33.429145 kubelet[2656]: E0213 19:42:33.429127 2656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:42:33.429145 kubelet[2656]: W0213 19:42:33.429141 2656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:42:33.429203 kubelet[2656]: E0213 19:42:33.429158 2656 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:42:33.429441 kubelet[2656]: E0213 19:42:33.429421 2656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:42:33.429441 kubelet[2656]: W0213 19:42:33.429438 2656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:42:33.429517 kubelet[2656]: E0213 19:42:33.429488 2656 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:42:33.429879 kubelet[2656]: E0213 19:42:33.429855 2656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:42:33.429879 kubelet[2656]: W0213 19:42:33.429867 2656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:42:33.429934 kubelet[2656]: E0213 19:42:33.429902 2656 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:42:33.430175 kubelet[2656]: E0213 19:42:33.430158 2656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:42:33.430175 kubelet[2656]: W0213 19:42:33.430171 2656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:42:33.430429 kubelet[2656]: E0213 19:42:33.430394 2656 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:42:33.430747 kubelet[2656]: E0213 19:42:33.430731 2656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:42:33.430747 kubelet[2656]: W0213 19:42:33.430744 2656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:42:33.430833 kubelet[2656]: E0213 19:42:33.430818 2656 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:42:33.431636 kubelet[2656]: E0213 19:42:33.431589 2656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:42:33.431636 kubelet[2656]: W0213 19:42:33.431634 2656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:42:33.431784 kubelet[2656]: E0213 19:42:33.431767 2656 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:42:33.431991 kubelet[2656]: E0213 19:42:33.431963 2656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:42:33.431991 kubelet[2656]: W0213 19:42:33.431981 2656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:42:33.432085 kubelet[2656]: E0213 19:42:33.432071 2656 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:42:33.432344 kubelet[2656]: E0213 19:42:33.432314 2656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:42:33.432383 kubelet[2656]: W0213 19:42:33.432371 2656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:42:33.432446 kubelet[2656]: E0213 19:42:33.432428 2656 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:42:33.433467 kubelet[2656]: E0213 19:42:33.433424 2656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:42:33.433467 kubelet[2656]: W0213 19:42:33.433436 2656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:42:33.433467 kubelet[2656]: E0213 19:42:33.433451 2656 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:42:33.435601 kubelet[2656]: E0213 19:42:33.435558 2656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:42:33.435601 kubelet[2656]: W0213 19:42:33.435571 2656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:42:33.435831 kubelet[2656]: E0213 19:42:33.435814 2656 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:42:33.436837 kubelet[2656]: E0213 19:42:33.436806 2656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:42:33.436837 kubelet[2656]: W0213 19:42:33.436823 2656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:42:33.437106 kubelet[2656]: E0213 19:42:33.436984 2656 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:42:33.437106 kubelet[2656]: E0213 19:42:33.437091 2656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:42:33.437106 kubelet[2656]: W0213 19:42:33.437099 2656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:42:33.437221 kubelet[2656]: E0213 19:42:33.437193 2656 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:42:33.437453 kubelet[2656]: E0213 19:42:33.437431 2656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:42:33.437453 kubelet[2656]: W0213 19:42:33.437444 2656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:42:33.437528 kubelet[2656]: E0213 19:42:33.437516 2656 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:42:33.437719 kubelet[2656]: E0213 19:42:33.437697 2656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:42:33.437719 kubelet[2656]: W0213 19:42:33.437712 2656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:42:33.437792 kubelet[2656]: E0213 19:42:33.437725 2656 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:42:33.438157 kubelet[2656]: E0213 19:42:33.438140 2656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:42:33.438157 kubelet[2656]: W0213 19:42:33.438154 2656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:42:33.438223 kubelet[2656]: E0213 19:42:33.438164 2656 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:42:33.441189 kubelet[2656]: E0213 19:42:33.441126 2656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:42:33.441189 kubelet[2656]: W0213 19:42:33.441141 2656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:42:33.441385 kubelet[2656]: E0213 19:42:33.441264 2656 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:42:33.445196 containerd[1467]: time="2025-02-13T19:42:33.445160101Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-556fd4d7fc-p7gx5,Uid:d6047c8b-9844-4361-b64e-f988dcd7fb95,Namespace:calico-system,Attempt:0,} returns sandbox id \"cee7a484e03a0104a8bd79e3a5250310f33b7357c12a1583f5d2469361d603bc\"" Feb 13 19:42:33.446202 kubelet[2656]: E0213 19:42:33.446173 2656 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:42:33.447268 containerd[1467]: time="2025-02-13T19:42:33.447211368Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\"" Feb 13 19:42:33.449724 containerd[1467]: time="2025-02-13T19:42:33.449638794Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:42:33.449724 containerd[1467]: time="2025-02-13T19:42:33.449688056Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:42:33.449724 containerd[1467]: time="2025-02-13T19:42:33.449701551Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:42:33.449930 containerd[1467]: time="2025-02-13T19:42:33.449779769Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:42:33.472651 systemd[1]: Started cri-containerd-e7be764b69eae9216048706d224d970f51f0235ff5a81160a6d531eaf3007730.scope - libcontainer container e7be764b69eae9216048706d224d970f51f0235ff5a81160a6d531eaf3007730. Feb 13 19:42:33.496344 containerd[1467]: time="2025-02-13T19:42:33.496269133Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-wj95g,Uid:4ea7edef-9a4b-4418-b02a-3078a036d08e,Namespace:calico-system,Attempt:0,} returns sandbox id \"e7be764b69eae9216048706d224d970f51f0235ff5a81160a6d531eaf3007730\"" Feb 13 19:42:33.497279 kubelet[2656]: E0213 19:42:33.497246 2656 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:42:34.703116 kubelet[2656]: E0213 19:42:34.703075 2656 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-qxpzd" podUID="b54cda8b-5691-4984-90d2-94b24a2518d5" Feb 13 19:42:36.061416 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount694119598.mount: Deactivated successfully. Feb 13 19:42:36.457953 containerd[1467]: time="2025-02-13T19:42:36.457831611Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:42:36.458694 containerd[1467]: time="2025-02-13T19:42:36.458633418Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.1: active requests=0, bytes read=31343363" Feb 13 19:42:36.459786 containerd[1467]: time="2025-02-13T19:42:36.459734558Z" level=info msg="ImageCreate event name:\"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:42:36.461995 containerd[1467]: time="2025-02-13T19:42:36.461954981Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:42:36.462533 containerd[1467]: time="2025-02-13T19:42:36.462485939Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.1\" with image id \"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\", size \"31343217\" in 3.0152105s" Feb 13 19:42:36.462533 containerd[1467]: time="2025-02-13T19:42:36.462530513Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\" returns image reference \"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\"" Feb 13 19:42:36.465531 containerd[1467]: time="2025-02-13T19:42:36.465482722Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\"" Feb 13 19:42:36.488604 containerd[1467]: time="2025-02-13T19:42:36.488546282Z" level=info msg="CreateContainer within sandbox \"cee7a484e03a0104a8bd79e3a5250310f33b7357c12a1583f5d2469361d603bc\" for 
container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Feb 13 19:42:36.502516 containerd[1467]: time="2025-02-13T19:42:36.502453782Z" level=info msg="CreateContainer within sandbox \"cee7a484e03a0104a8bd79e3a5250310f33b7357c12a1583f5d2469361d603bc\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"3b05acf54e35f66bef428bad3fe4a09ce7eb65bed801efed8121deb7bb038546\"" Feb 13 19:42:36.503736 containerd[1467]: time="2025-02-13T19:42:36.503651515Z" level=info msg="StartContainer for \"3b05acf54e35f66bef428bad3fe4a09ce7eb65bed801efed8121deb7bb038546\"" Feb 13 19:42:36.536704 systemd[1]: Started cri-containerd-3b05acf54e35f66bef428bad3fe4a09ce7eb65bed801efed8121deb7bb038546.scope - libcontainer container 3b05acf54e35f66bef428bad3fe4a09ce7eb65bed801efed8121deb7bb038546. Feb 13 19:42:36.580247 containerd[1467]: time="2025-02-13T19:42:36.580192691Z" level=info msg="StartContainer for \"3b05acf54e35f66bef428bad3fe4a09ce7eb65bed801efed8121deb7bb038546\" returns successfully" Feb 13 19:42:36.704810 kubelet[2656]: E0213 19:42:36.704731 2656 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-qxpzd" podUID="b54cda8b-5691-4984-90d2-94b24a2518d5" Feb 13 19:42:36.761189 kubelet[2656]: E0213 19:42:36.761061 2656 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:42:36.770872 kubelet[2656]: I0213 19:42:36.770784 2656 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-556fd4d7fc-p7gx5" podStartSLOduration=0.752097855 podStartE2EDuration="3.770659363s" podCreationTimestamp="2025-02-13 19:42:33 +0000 UTC" firstStartedPulling="2025-02-13 19:42:33.44677062 +0000 UTC m=+19.819040173" lastFinishedPulling="2025-02-13 19:42:36.465332128 +0000 UTC m=+22.837601681" observedRunningTime="2025-02-13 19:42:36.770356103 +0000 UTC m=+23.142625656" watchObservedRunningTime="2025-02-13 19:42:36.770659363 +0000 UTC m=+23.142928917" Feb 13 19:42:36.849947 kubelet[2656]: E0213 19:42:36.849903 2656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:42:36.849947 kubelet[2656]: W0213 19:42:36.849932 2656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:42:36.849947 kubelet[2656]: E0213 19:42:36.849955 2656 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
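[Editor's note] The pod_startup_latency_tracker entry above reports two figures for calico-typha-556fd4d7fc-p7gx5: podStartE2EDuration (observed running time minus pod creation time) and podStartSLOduration, which additionally excludes the image-pull window. The numbers in the entry are self-consistent: 3.770659363s minus (19:42:36.465332128 - 19:42:33.44677062) equals 0.752097855s. The sketch below only redoes that arithmetic with the timestamps from the entry; the relationship is inferred from these values, not taken from kubelet source.

    // Sketch: checking podStartSLOduration from the other fields in the same entry.
    package main

    import (
        "fmt"
        "time"
    )

    func mustParse(s string) time.Time {
        t, err := time.Parse("2006-01-02 15:04:05.999999999 -0700 MST", s)
        if err != nil {
            panic(err)
        }
        return t
    }

    func main() {
        created := mustParse("2025-02-13 19:42:33 +0000 UTC")
        firstPull := mustParse("2025-02-13 19:42:33.44677062 +0000 UTC")
        lastPull := mustParse("2025-02-13 19:42:36.465332128 +0000 UTC")
        running := mustParse("2025-02-13 19:42:36.770659363 +0000 UTC")

        e2e := running.Sub(created)          // ~3.770659363s (podStartE2EDuration)
        slo := e2e - lastPull.Sub(firstPull) // ~0.752097855s (podStartSLOduration)
        fmt.Println(e2e, slo)
    }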
Error: unexpected end of JSON input" Feb 13 19:42:36.850181 kubelet[2656]: E0213 19:42:36.850147 2656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:42:36.850181 kubelet[2656]: W0213 19:42:36.850155 2656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:42:36.850181 kubelet[2656]: E0213 19:42:36.850163 2656 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:42:36.850381 kubelet[2656]: E0213 19:42:36.850354 2656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:42:36.850381 kubelet[2656]: W0213 19:42:36.850366 2656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:42:36.850381 kubelet[2656]: E0213 19:42:36.850373 2656 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:42:36.850591 kubelet[2656]: E0213 19:42:36.850574 2656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:42:36.850591 kubelet[2656]: W0213 19:42:36.850585 2656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:42:36.850591 kubelet[2656]: E0213 19:42:36.850592 2656 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:42:36.850912 kubelet[2656]: E0213 19:42:36.850895 2656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:42:36.850912 kubelet[2656]: W0213 19:42:36.850906 2656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:42:36.850977 kubelet[2656]: E0213 19:42:36.850914 2656 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:42:36.851116 kubelet[2656]: E0213 19:42:36.851096 2656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:42:36.851116 kubelet[2656]: W0213 19:42:36.851105 2656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:42:36.851116 kubelet[2656]: E0213 19:42:36.851113 2656 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:42:36.851321 kubelet[2656]: E0213 19:42:36.851305 2656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:42:36.851321 kubelet[2656]: W0213 19:42:36.851314 2656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:42:36.851321 kubelet[2656]: E0213 19:42:36.851321 2656 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:42:36.851558 kubelet[2656]: E0213 19:42:36.851540 2656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:42:36.851558 kubelet[2656]: W0213 19:42:36.851552 2656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:42:36.851558 kubelet[2656]: E0213 19:42:36.851561 2656 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:42:36.851791 kubelet[2656]: E0213 19:42:36.851762 2656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:42:36.851791 kubelet[2656]: W0213 19:42:36.851769 2656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:42:36.851791 kubelet[2656]: E0213 19:42:36.851777 2656 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:42:36.851978 kubelet[2656]: E0213 19:42:36.851952 2656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:42:36.851978 kubelet[2656]: W0213 19:42:36.851963 2656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:42:36.851978 kubelet[2656]: E0213 19:42:36.851970 2656 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:42:36.852198 kubelet[2656]: E0213 19:42:36.852159 2656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:42:36.852198 kubelet[2656]: W0213 19:42:36.852166 2656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:42:36.852198 kubelet[2656]: E0213 19:42:36.852173 2656 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:42:36.852492 kubelet[2656]: E0213 19:42:36.852469 2656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:42:36.852492 kubelet[2656]: W0213 19:42:36.852481 2656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:42:36.852492 kubelet[2656]: E0213 19:42:36.852488 2656 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:42:36.852694 kubelet[2656]: E0213 19:42:36.852680 2656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:42:36.852694 kubelet[2656]: W0213 19:42:36.852690 2656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:42:36.852747 kubelet[2656]: E0213 19:42:36.852698 2656 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:42:36.852919 kubelet[2656]: E0213 19:42:36.852906 2656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:42:36.852919 kubelet[2656]: W0213 19:42:36.852916 2656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:42:36.852965 kubelet[2656]: E0213 19:42:36.852923 2656 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:42:36.853115 kubelet[2656]: E0213 19:42:36.853102 2656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:42:36.853115 kubelet[2656]: W0213 19:42:36.853112 2656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:42:36.853156 kubelet[2656]: E0213 19:42:36.853119 2656 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:42:36.855632 kubelet[2656]: E0213 19:42:36.855601 2656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:42:36.855632 kubelet[2656]: W0213 19:42:36.855626 2656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:42:36.855696 kubelet[2656]: E0213 19:42:36.855649 2656 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:42:36.855904 kubelet[2656]: E0213 19:42:36.855877 2656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:42:36.855904 kubelet[2656]: W0213 19:42:36.855888 2656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:42:36.855904 kubelet[2656]: E0213 19:42:36.855902 2656 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:42:36.856131 kubelet[2656]: E0213 19:42:36.856113 2656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:42:36.856131 kubelet[2656]: W0213 19:42:36.856125 2656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:42:36.856197 kubelet[2656]: E0213 19:42:36.856139 2656 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:42:36.856387 kubelet[2656]: E0213 19:42:36.856360 2656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:42:36.856387 kubelet[2656]: W0213 19:42:36.856371 2656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:42:36.856387 kubelet[2656]: E0213 19:42:36.856386 2656 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:42:36.856603 kubelet[2656]: E0213 19:42:36.856587 2656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:42:36.856603 kubelet[2656]: W0213 19:42:36.856596 2656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:42:36.856686 kubelet[2656]: E0213 19:42:36.856608 2656 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:42:36.857660 kubelet[2656]: E0213 19:42:36.857631 2656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:42:36.857660 kubelet[2656]: W0213 19:42:36.857644 2656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:42:36.857660 kubelet[2656]: E0213 19:42:36.857659 2656 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:42:36.858020 kubelet[2656]: E0213 19:42:36.857988 2656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:42:36.858020 kubelet[2656]: W0213 19:42:36.858005 2656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:42:36.858020 kubelet[2656]: E0213 19:42:36.858021 2656 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:42:36.858269 kubelet[2656]: E0213 19:42:36.858229 2656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:42:36.858269 kubelet[2656]: W0213 19:42:36.858241 2656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:42:36.858269 kubelet[2656]: E0213 19:42:36.858254 2656 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:42:36.858519 kubelet[2656]: E0213 19:42:36.858489 2656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:42:36.858519 kubelet[2656]: W0213 19:42:36.858517 2656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:42:36.858582 kubelet[2656]: E0213 19:42:36.858535 2656 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:42:36.858760 kubelet[2656]: E0213 19:42:36.858746 2656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:42:36.858760 kubelet[2656]: W0213 19:42:36.858758 2656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:42:36.858809 kubelet[2656]: E0213 19:42:36.858776 2656 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:42:36.859000 kubelet[2656]: E0213 19:42:36.858986 2656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:42:36.859000 kubelet[2656]: W0213 19:42:36.858997 2656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:42:36.859050 kubelet[2656]: E0213 19:42:36.859010 2656 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:42:36.859328 kubelet[2656]: E0213 19:42:36.859279 2656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:42:36.859328 kubelet[2656]: W0213 19:42:36.859318 2656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:42:36.859495 kubelet[2656]: E0213 19:42:36.859345 2656 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:42:36.859613 kubelet[2656]: E0213 19:42:36.859581 2656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:42:36.859613 kubelet[2656]: W0213 19:42:36.859598 2656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:42:36.859707 kubelet[2656]: E0213 19:42:36.859685 2656 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:42:36.859802 kubelet[2656]: E0213 19:42:36.859786 2656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:42:36.859802 kubelet[2656]: W0213 19:42:36.859799 2656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:42:36.859861 kubelet[2656]: E0213 19:42:36.859810 2656 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:42:36.860099 kubelet[2656]: E0213 19:42:36.860079 2656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:42:36.860099 kubelet[2656]: W0213 19:42:36.860090 2656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:42:36.860156 kubelet[2656]: E0213 19:42:36.860105 2656 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:42:36.860357 kubelet[2656]: E0213 19:42:36.860337 2656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:42:36.860357 kubelet[2656]: W0213 19:42:36.860352 2656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:42:36.860432 kubelet[2656]: E0213 19:42:36.860370 2656 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:42:36.860631 kubelet[2656]: E0213 19:42:36.860610 2656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:42:36.860631 kubelet[2656]: W0213 19:42:36.860626 2656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:42:36.860744 kubelet[2656]: E0213 19:42:36.860644 2656 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:42:36.860877 kubelet[2656]: E0213 19:42:36.860852 2656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:42:36.860877 kubelet[2656]: W0213 19:42:36.860863 2656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:42:36.860877 kubelet[2656]: E0213 19:42:36.860871 2656 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:42:37.761703 kubelet[2656]: I0213 19:42:37.761658 2656 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 13 19:42:37.762305 kubelet[2656]: E0213 19:42:37.762168 2656 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:42:37.859861 kubelet[2656]: E0213 19:42:37.859830 2656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:42:37.859861 kubelet[2656]: W0213 19:42:37.859848 2656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:42:37.859861 kubelet[2656]: E0213 19:42:37.859868 2656 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:42:37.860168 kubelet[2656]: E0213 19:42:37.860152 2656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:42:37.860168 kubelet[2656]: W0213 19:42:37.860161 2656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:42:37.860230 kubelet[2656]: E0213 19:42:37.860169 2656 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:42:37.860380 kubelet[2656]: E0213 19:42:37.860369 2656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:42:37.860380 kubelet[2656]: W0213 19:42:37.860377 2656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:42:37.860432 kubelet[2656]: E0213 19:42:37.860385 2656 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:42:37.860562 kubelet[2656]: E0213 19:42:37.860552 2656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:42:37.860562 kubelet[2656]: W0213 19:42:37.860560 2656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:42:37.860618 kubelet[2656]: E0213 19:42:37.860568 2656 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:42:37.860741 kubelet[2656]: E0213 19:42:37.860731 2656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:42:37.860741 kubelet[2656]: W0213 19:42:37.860739 2656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:42:37.860840 kubelet[2656]: E0213 19:42:37.860746 2656 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:42:37.860902 kubelet[2656]: E0213 19:42:37.860893 2656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:42:37.860902 kubelet[2656]: W0213 19:42:37.860900 2656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:42:37.860952 kubelet[2656]: E0213 19:42:37.860907 2656 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:42:37.861093 kubelet[2656]: E0213 19:42:37.861083 2656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:42:37.861119 kubelet[2656]: W0213 19:42:37.861091 2656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:42:37.861119 kubelet[2656]: E0213 19:42:37.861099 2656 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:42:37.861268 kubelet[2656]: E0213 19:42:37.861257 2656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:42:37.861268 kubelet[2656]: W0213 19:42:37.861265 2656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:42:37.861320 kubelet[2656]: E0213 19:42:37.861272 2656 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:42:37.861465 kubelet[2656]: E0213 19:42:37.861455 2656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:42:37.861465 kubelet[2656]: W0213 19:42:37.861463 2656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:42:37.861534 kubelet[2656]: E0213 19:42:37.861470 2656 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:42:37.861659 kubelet[2656]: E0213 19:42:37.861648 2656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:42:37.861659 kubelet[2656]: W0213 19:42:37.861656 2656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:42:37.861710 kubelet[2656]: E0213 19:42:37.861664 2656 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:42:37.861840 kubelet[2656]: E0213 19:42:37.861830 2656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:42:37.861840 kubelet[2656]: W0213 19:42:37.861838 2656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:42:37.861885 kubelet[2656]: E0213 19:42:37.861845 2656 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:42:37.862019 kubelet[2656]: E0213 19:42:37.862009 2656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:42:37.862019 kubelet[2656]: W0213 19:42:37.862017 2656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:42:37.862077 kubelet[2656]: E0213 19:42:37.862024 2656 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:42:37.862219 kubelet[2656]: E0213 19:42:37.862206 2656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:42:37.862249 kubelet[2656]: W0213 19:42:37.862218 2656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:42:37.862249 kubelet[2656]: E0213 19:42:37.862229 2656 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:42:37.862429 kubelet[2656]: E0213 19:42:37.862418 2656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:42:37.862429 kubelet[2656]: W0213 19:42:37.862427 2656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:42:37.862482 kubelet[2656]: E0213 19:42:37.862435 2656 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:42:37.862629 kubelet[2656]: E0213 19:42:37.862619 2656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:42:37.862629 kubelet[2656]: W0213 19:42:37.862628 2656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:42:37.862681 kubelet[2656]: E0213 19:42:37.862636 2656 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:42:37.862862 kubelet[2656]: E0213 19:42:37.862852 2656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:42:37.862862 kubelet[2656]: W0213 19:42:37.862861 2656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:42:37.862913 kubelet[2656]: E0213 19:42:37.862868 2656 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:42:37.863062 kubelet[2656]: E0213 19:42:37.863037 2656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:42:37.863062 kubelet[2656]: W0213 19:42:37.863047 2656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:42:37.863062 kubelet[2656]: E0213 19:42:37.863060 2656 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:42:37.863286 kubelet[2656]: E0213 19:42:37.863229 2656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:42:37.863286 kubelet[2656]: W0213 19:42:37.863237 2656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:42:37.863286 kubelet[2656]: E0213 19:42:37.863260 2656 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:42:37.863446 kubelet[2656]: E0213 19:42:37.863431 2656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:42:37.863446 kubelet[2656]: W0213 19:42:37.863443 2656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:42:37.863493 kubelet[2656]: E0213 19:42:37.863455 2656 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:42:37.863657 kubelet[2656]: E0213 19:42:37.863641 2656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:42:37.863657 kubelet[2656]: W0213 19:42:37.863653 2656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:42:37.863705 kubelet[2656]: E0213 19:42:37.863669 2656 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:42:37.863854 kubelet[2656]: E0213 19:42:37.863840 2656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:42:37.863854 kubelet[2656]: W0213 19:42:37.863849 2656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:42:37.863899 kubelet[2656]: E0213 19:42:37.863862 2656 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:42:37.864054 kubelet[2656]: E0213 19:42:37.864040 2656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:42:37.864054 kubelet[2656]: W0213 19:42:37.864049 2656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:42:37.864105 kubelet[2656]: E0213 19:42:37.864061 2656 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:42:37.864252 kubelet[2656]: E0213 19:42:37.864228 2656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:42:37.864252 kubelet[2656]: W0213 19:42:37.864248 2656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:42:37.864313 kubelet[2656]: E0213 19:42:37.864260 2656 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:42:37.864423 kubelet[2656]: E0213 19:42:37.864408 2656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:42:37.864423 kubelet[2656]: W0213 19:42:37.864419 2656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:42:37.864479 kubelet[2656]: E0213 19:42:37.864430 2656 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:42:37.864626 kubelet[2656]: E0213 19:42:37.864613 2656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:42:37.864626 kubelet[2656]: W0213 19:42:37.864623 2656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:42:37.864678 kubelet[2656]: E0213 19:42:37.864634 2656 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:42:37.864809 kubelet[2656]: E0213 19:42:37.864793 2656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:42:37.864809 kubelet[2656]: W0213 19:42:37.864804 2656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:42:37.864859 kubelet[2656]: E0213 19:42:37.864826 2656 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:42:37.864973 kubelet[2656]: E0213 19:42:37.864960 2656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:42:37.864973 kubelet[2656]: W0213 19:42:37.864969 2656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:42:37.865026 kubelet[2656]: E0213 19:42:37.864980 2656 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:42:37.865152 kubelet[2656]: E0213 19:42:37.865139 2656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:42:37.865152 kubelet[2656]: W0213 19:42:37.865149 2656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:42:37.865199 kubelet[2656]: E0213 19:42:37.865161 2656 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:42:37.865742 kubelet[2656]: E0213 19:42:37.865483 2656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:42:37.865742 kubelet[2656]: W0213 19:42:37.865515 2656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:42:37.865742 kubelet[2656]: E0213 19:42:37.865524 2656 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:42:37.865742 kubelet[2656]: E0213 19:42:37.865684 2656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:42:37.865742 kubelet[2656]: W0213 19:42:37.865691 2656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:42:37.865742 kubelet[2656]: E0213 19:42:37.865698 2656 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:42:37.865926 kubelet[2656]: E0213 19:42:37.865905 2656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:42:37.865926 kubelet[2656]: W0213 19:42:37.865920 2656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:42:37.866006 kubelet[2656]: E0213 19:42:37.865930 2656 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:42:37.868560 kubelet[2656]: E0213 19:42:37.868544 2656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:42:37.868610 kubelet[2656]: W0213 19:42:37.868556 2656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:42:37.868762 kubelet[2656]: E0213 19:42:37.868749 2656 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:42:37.868953 kubelet[2656]: E0213 19:42:37.868941 2656 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:42:37.868953 kubelet[2656]: W0213 19:42:37.868950 2656 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:42:37.869008 kubelet[2656]: E0213 19:42:37.868958 2656 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:42:38.007881 containerd[1467]: time="2025-02-13T19:42:38.007829474Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:42:38.008975 containerd[1467]: time="2025-02-13T19:42:38.008938598Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1: active requests=0, bytes read=5362121" Feb 13 19:42:38.009977 containerd[1467]: time="2025-02-13T19:42:38.009948996Z" level=info msg="ImageCreate event name:\"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:42:38.011924 containerd[1467]: time="2025-02-13T19:42:38.011829089Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:42:38.012495 containerd[1467]: time="2025-02-13T19:42:38.012452811Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" with image id \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\", size \"6855165\" in 1.546917271s" Feb 13 19:42:38.012495 containerd[1467]: time="2025-02-13T19:42:38.012482828Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" returns image reference \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\"" Feb 13 19:42:38.018218 containerd[1467]: time="2025-02-13T19:42:38.018179022Z" level=info msg="CreateContainer within sandbox \"e7be764b69eae9216048706d224d970f51f0235ff5a81160a6d531eaf3007730\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Feb 13 19:42:38.033489 containerd[1467]: time="2025-02-13T19:42:38.033452081Z" level=info msg="CreateContainer within sandbox \"e7be764b69eae9216048706d224d970f51f0235ff5a81160a6d531eaf3007730\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"9cf435a47e08d40ceb9fb3d45cb12f653560342d4f88132fd24bab720e1dc89e\"" Feb 13 19:42:38.033987 containerd[1467]: time="2025-02-13T19:42:38.033869666Z" level=info msg="StartContainer for \"9cf435a47e08d40ceb9fb3d45cb12f653560342d4f88132fd24bab720e1dc89e\"" Feb 13 19:42:38.063635 systemd[1]: Started cri-containerd-9cf435a47e08d40ceb9fb3d45cb12f653560342d4f88132fd24bab720e1dc89e.scope - libcontainer container 9cf435a47e08d40ceb9fb3d45cb12f653560342d4f88132fd24bab720e1dc89e. 
Feb 13 19:42:38.092771 containerd[1467]: time="2025-02-13T19:42:38.092724700Z" level=info msg="StartContainer for \"9cf435a47e08d40ceb9fb3d45cb12f653560342d4f88132fd24bab720e1dc89e\" returns successfully" Feb 13 19:42:38.108382 systemd[1]: cri-containerd-9cf435a47e08d40ceb9fb3d45cb12f653560342d4f88132fd24bab720e1dc89e.scope: Deactivated successfully. Feb 13 19:42:38.133103 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9cf435a47e08d40ceb9fb3d45cb12f653560342d4f88132fd24bab720e1dc89e-rootfs.mount: Deactivated successfully. Feb 13 19:42:38.696763 containerd[1467]: time="2025-02-13T19:42:38.696683823Z" level=info msg="shim disconnected" id=9cf435a47e08d40ceb9fb3d45cb12f653560342d4f88132fd24bab720e1dc89e namespace=k8s.io Feb 13 19:42:38.696763 containerd[1467]: time="2025-02-13T19:42:38.696761038Z" level=warning msg="cleaning up after shim disconnected" id=9cf435a47e08d40ceb9fb3d45cb12f653560342d4f88132fd24bab720e1dc89e namespace=k8s.io Feb 13 19:42:38.696763 containerd[1467]: time="2025-02-13T19:42:38.696771077Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 19:42:38.703537 kubelet[2656]: E0213 19:42:38.703292 2656 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-qxpzd" podUID="b54cda8b-5691-4984-90d2-94b24a2518d5" Feb 13 19:42:38.764838 kubelet[2656]: E0213 19:42:38.764807 2656 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:42:38.765361 containerd[1467]: time="2025-02-13T19:42:38.765329063Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\"" Feb 13 19:42:40.703441 kubelet[2656]: E0213 19:42:40.703383 2656 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-qxpzd" podUID="b54cda8b-5691-4984-90d2-94b24a2518d5" Feb 13 19:42:42.703090 kubelet[2656]: E0213 19:42:42.703030 2656 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-qxpzd" podUID="b54cda8b-5691-4984-90d2-94b24a2518d5" Feb 13 19:42:42.918782 systemd[1]: Started sshd@9-10.0.0.106:22-10.0.0.1:32816.service - OpenSSH per-connection server daemon (10.0.0.1:32816). Feb 13 19:42:42.954849 sshd[3439]: Accepted publickey for core from 10.0.0.1 port 32816 ssh2: RSA SHA256:M1JiL2vDeW2xAdjMAPzmPoLJm0DJz/C0inZYUswlgoA Feb 13 19:42:42.956407 sshd-session[3439]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:42:42.960343 systemd-logind[1449]: New session 10 of user core. Feb 13 19:42:42.978684 systemd[1]: Started session-10.scope - Session 10 of User core. Feb 13 19:42:43.096347 sshd[3441]: Connection closed by 10.0.0.1 port 32816 Feb 13 19:42:43.096748 sshd-session[3439]: pam_unix(sshd:session): session closed for user core Feb 13 19:42:43.100232 systemd[1]: sshd@9-10.0.0.106:22-10.0.0.1:32816.service: Deactivated successfully. 
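The dns.go "Nameserver limits exceeded" warnings repeated above come from kubelet capping the number of nameservers it passes into a pod's resolv.conf; only three are applied (here 1.1.1.1 1.0.0.1 8.8.8.8) and further entries are dropped. A rough sketch of that truncation, stated as an assumption about the behaviour rather than kubelet's actual code:

    # Illustrative only: kubelet keeps at most three nameservers when building a pod's
    # resolv.conf; additional entries are omitted, which produces the warning above.
    MAX_NAMESERVERS = 3

    def applied_nameservers(configured):
        return configured[:MAX_NAMESERVERS]

    # Hypothetical fourth server (9.9.9.9) to show the truncation; the log does not say
    # which configured server was actually dropped.
    print(applied_nameservers(["1.1.1.1", "1.0.0.1", "8.8.8.8", "9.9.9.9"]))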
Feb 13 19:42:43.102478 systemd[1]: session-10.scope: Deactivated successfully. Feb 13 19:42:43.103894 systemd-logind[1449]: Session 10 logged out. Waiting for processes to exit. Feb 13 19:42:43.104924 systemd-logind[1449]: Removed session 10. Feb 13 19:42:44.702959 kubelet[2656]: E0213 19:42:44.702915 2656 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-qxpzd" podUID="b54cda8b-5691-4984-90d2-94b24a2518d5" Feb 13 19:42:46.022577 containerd[1467]: time="2025-02-13T19:42:46.022484199Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:42:46.023580 containerd[1467]: time="2025-02-13T19:42:46.023531214Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.1: active requests=0, bytes read=96154154" Feb 13 19:42:46.025109 containerd[1467]: time="2025-02-13T19:42:46.025034747Z" level=info msg="ImageCreate event name:\"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:42:46.027426 containerd[1467]: time="2025-02-13T19:42:46.027394729Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:42:46.028041 containerd[1467]: time="2025-02-13T19:42:46.028008932Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.1\" with image id \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\", size \"97647238\" in 7.262642387s" Feb 13 19:42:46.028041 containerd[1467]: time="2025-02-13T19:42:46.028036303Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\" returns image reference \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\"" Feb 13 19:42:46.030284 containerd[1467]: time="2025-02-13T19:42:46.030237225Z" level=info msg="CreateContainer within sandbox \"e7be764b69eae9216048706d224d970f51f0235ff5a81160a6d531eaf3007730\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Feb 13 19:42:46.051627 containerd[1467]: time="2025-02-13T19:42:46.051573136Z" level=info msg="CreateContainer within sandbox \"e7be764b69eae9216048706d224d970f51f0235ff5a81160a6d531eaf3007730\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"79c2d116c05c2ed82086133a8bcf42eee7d17aaa7200cc2a5ed800189c3d2263\"" Feb 13 19:42:46.052315 containerd[1467]: time="2025-02-13T19:42:46.052229578Z" level=info msg="StartContainer for \"79c2d116c05c2ed82086133a8bcf42eee7d17aaa7200cc2a5ed800189c3d2263\"" Feb 13 19:42:46.082789 systemd[1]: Started cri-containerd-79c2d116c05c2ed82086133a8bcf42eee7d17aaa7200cc2a5ed800189c3d2263.scope - libcontainer container 79c2d116c05c2ed82086133a8bcf42eee7d17aaa7200cc2a5ed800189c3d2263. 
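For a sense of scale, the calico/cni pull above reports 96154154 bytes read over about 7.26 s. A back-of-the-envelope throughput check, assuming "bytes read" approximates the bytes actually transferred:

    # Rough estimate from the figures logged above; not a measurement of its own.
    bytes_read = 96_154_154
    seconds = 7.262642387
    print(f"{bytes_read / seconds / (1024 * 1024):.1f} MiB/s")  # roughly 12.6 MiB/s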
Feb 13 19:42:46.115657 containerd[1467]: time="2025-02-13T19:42:46.115606875Z" level=info msg="StartContainer for \"79c2d116c05c2ed82086133a8bcf42eee7d17aaa7200cc2a5ed800189c3d2263\" returns successfully" Feb 13 19:42:46.703111 kubelet[2656]: E0213 19:42:46.703049 2656 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-qxpzd" podUID="b54cda8b-5691-4984-90d2-94b24a2518d5" Feb 13 19:42:46.778972 kubelet[2656]: E0213 19:42:46.778931 2656 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:42:47.779748 kubelet[2656]: E0213 19:42:47.779717 2656 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:42:47.886793 containerd[1467]: time="2025-02-13T19:42:47.886724002Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 13 19:42:47.889189 systemd[1]: cri-containerd-79c2d116c05c2ed82086133a8bcf42eee7d17aaa7200cc2a5ed800189c3d2263.scope: Deactivated successfully. Feb 13 19:42:47.909912 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-79c2d116c05c2ed82086133a8bcf42eee7d17aaa7200cc2a5ed800189c3d2263-rootfs.mount: Deactivated successfully. Feb 13 19:42:47.950114 kubelet[2656]: I0213 19:42:47.950079 2656 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Feb 13 19:42:48.112867 systemd[1]: Started sshd@10-10.0.0.106:22-10.0.0.1:44674.service - OpenSSH per-connection server daemon (10.0.0.1:44674). Feb 13 19:42:48.295674 sshd[3510]: Accepted publickey for core from 10.0.0.1 port 44674 ssh2: RSA SHA256:M1JiL2vDeW2xAdjMAPzmPoLJm0DJz/C0inZYUswlgoA Feb 13 19:42:48.297444 sshd-session[3510]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:42:48.301828 systemd-logind[1449]: New session 11 of user core. Feb 13 19:42:48.315710 systemd[1]: Started session-11.scope - Session 11 of User core. 
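The "failed to reload cni configuration" error above fires because Calico's install-cni step has started writing files under /etc/cni/net.d (the calico-kubeconfig write triggers the watch) while no network configuration file exists there yet, so the runtime still treats the CNI plugin as uninitialized. A small, hypothetical mirror of that readiness condition, not containerd's own code:

    # Containerd loads CNI network configs from /etc/cni/net.d; until a *.conf, *.conflist
    # or *.json config appears there, other files such as calico-kubeconfig do not count
    # and "cni plugin not initialized" persists.
    from pathlib import Path

    def cni_config_present(net_d: str = "/etc/cni/net.d") -> bool:
        p = Path(net_d)
        if not p.is_dir():
            return False
        return any(p.glob("*.conf")) or any(p.glob("*.conflist")) or any(p.glob("*.json"))

    if not cni_config_present():
        print("no network config found in /etc/cni/net.d: cni plugin not initialized")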
Feb 13 19:42:48.395355 kubelet[2656]: I0213 19:42:48.395224 2656 topology_manager.go:215] "Topology Admit Handler" podUID="abdf7ab8-742e-4339-a8d7-ab8bf1f9e1ec" podNamespace="kube-system" podName="coredns-7db6d8ff4d-44gch" Feb 13 19:42:48.400256 kubelet[2656]: I0213 19:42:48.399751 2656 topology_manager.go:215] "Topology Admit Handler" podUID="61a215c3-d6e4-40f4-9806-914857a2ab1f" podNamespace="calico-apiserver" podName="calico-apiserver-7f9ffdc98-9gqpq" Feb 13 19:42:48.401923 kubelet[2656]: I0213 19:42:48.401892 2656 topology_manager.go:215] "Topology Admit Handler" podUID="98d92e3a-28ae-4220-b729-b97a4af8635e" podNamespace="calico-system" podName="calico-kube-controllers-78bc8bfdb7-xfh2s" Feb 13 19:42:48.404106 kubelet[2656]: I0213 19:42:48.402805 2656 topology_manager.go:215] "Topology Admit Handler" podUID="9029bfbd-404a-4b40-be12-a20e64469d44" podNamespace="kube-system" podName="coredns-7db6d8ff4d-rkdrq" Feb 13 19:42:48.404106 kubelet[2656]: I0213 19:42:48.403014 2656 topology_manager.go:215] "Topology Admit Handler" podUID="72c9c3b8-d2cf-4ad0-a722-60cf8109d8ff" podNamespace="calico-apiserver" podName="calico-apiserver-7f9ffdc98-tn7zc" Feb 13 19:42:48.408651 systemd[1]: Created slice kubepods-burstable-podabdf7ab8_742e_4339_a8d7_ab8bf1f9e1ec.slice - libcontainer container kubepods-burstable-podabdf7ab8_742e_4339_a8d7_ab8bf1f9e1ec.slice. Feb 13 19:42:48.414935 systemd[1]: Created slice kubepods-besteffort-pod61a215c3_d6e4_40f4_9806_914857a2ab1f.slice - libcontainer container kubepods-besteffort-pod61a215c3_d6e4_40f4_9806_914857a2ab1f.slice. Feb 13 19:42:48.419904 systemd[1]: Created slice kubepods-besteffort-pod98d92e3a_28ae_4220_b729_b97a4af8635e.slice - libcontainer container kubepods-besteffort-pod98d92e3a_28ae_4220_b729_b97a4af8635e.slice. Feb 13 19:42:48.424880 systemd[1]: Created slice kubepods-besteffort-pod72c9c3b8_d2cf_4ad0_a722_60cf8109d8ff.slice - libcontainer container kubepods-besteffort-pod72c9c3b8_d2cf_4ad0_a722_60cf8109d8ff.slice. Feb 13 19:42:48.429205 systemd[1]: Created slice kubepods-burstable-pod9029bfbd_404a_4b40_be12_a20e64469d44.slice - libcontainer container kubepods-burstable-pod9029bfbd_404a_4b40_be12_a20e64469d44.slice. 
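The Created slice entries above show the systemd cgroup driver's naming scheme: each pod gets a slice under the kubepods hierarchy for its QoS class, with the dashes of the pod UID replaced by underscores. A short sketch reproducing that mapping from the values in this log (the helper name is illustrative):

    # Rebuilds the slice names seen above from QoS class plus pod UID.
    def pod_slice_name(qos_class: str, pod_uid: str) -> str:
        # qos_class is "burstable" or "besteffort" in the entries above
        return f"kubepods-{qos_class}-pod{pod_uid.replace('-', '_')}.slice"

    print(pod_slice_name("burstable", "abdf7ab8-742e-4339-a8d7-ab8bf1f9e1ec"))
    # -> kubepods-burstable-podabdf7ab8_742e_4339_a8d7_ab8bf1f9e1ec.slice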
Feb 13 19:42:48.534709 kubelet[2656]: I0213 19:42:48.534648 2656 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qf4mq\" (UniqueName: \"kubernetes.io/projected/abdf7ab8-742e-4339-a8d7-ab8bf1f9e1ec-kube-api-access-qf4mq\") pod \"coredns-7db6d8ff4d-44gch\" (UID: \"abdf7ab8-742e-4339-a8d7-ab8bf1f9e1ec\") " pod="kube-system/coredns-7db6d8ff4d-44gch" Feb 13 19:42:48.534709 kubelet[2656]: I0213 19:42:48.534696 2656 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/abdf7ab8-742e-4339-a8d7-ab8bf1f9e1ec-config-volume\") pod \"coredns-7db6d8ff4d-44gch\" (UID: \"abdf7ab8-742e-4339-a8d7-ab8bf1f9e1ec\") " pod="kube-system/coredns-7db6d8ff4d-44gch" Feb 13 19:42:48.535109 kubelet[2656]: I0213 19:42:48.534724 2656 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v2f92\" (UniqueName: \"kubernetes.io/projected/61a215c3-d6e4-40f4-9806-914857a2ab1f-kube-api-access-v2f92\") pod \"calico-apiserver-7f9ffdc98-9gqpq\" (UID: \"61a215c3-d6e4-40f4-9806-914857a2ab1f\") " pod="calico-apiserver/calico-apiserver-7f9ffdc98-9gqpq" Feb 13 19:42:48.535109 kubelet[2656]: I0213 19:42:48.534749 2656 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/98d92e3a-28ae-4220-b729-b97a4af8635e-tigera-ca-bundle\") pod \"calico-kube-controllers-78bc8bfdb7-xfh2s\" (UID: \"98d92e3a-28ae-4220-b729-b97a4af8635e\") " pod="calico-system/calico-kube-controllers-78bc8bfdb7-xfh2s" Feb 13 19:42:48.535109 kubelet[2656]: I0213 19:42:48.534852 2656 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/61a215c3-d6e4-40f4-9806-914857a2ab1f-calico-apiserver-certs\") pod \"calico-apiserver-7f9ffdc98-9gqpq\" (UID: \"61a215c3-d6e4-40f4-9806-914857a2ab1f\") " pod="calico-apiserver/calico-apiserver-7f9ffdc98-9gqpq" Feb 13 19:42:48.535109 kubelet[2656]: I0213 19:42:48.534887 2656 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9029bfbd-404a-4b40-be12-a20e64469d44-config-volume\") pod \"coredns-7db6d8ff4d-rkdrq\" (UID: \"9029bfbd-404a-4b40-be12-a20e64469d44\") " pod="kube-system/coredns-7db6d8ff4d-rkdrq" Feb 13 19:42:48.535109 kubelet[2656]: I0213 19:42:48.534908 2656 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p7xng\" (UniqueName: \"kubernetes.io/projected/98d92e3a-28ae-4220-b729-b97a4af8635e-kube-api-access-p7xng\") pod \"calico-kube-controllers-78bc8bfdb7-xfh2s\" (UID: \"98d92e3a-28ae-4220-b729-b97a4af8635e\") " pod="calico-system/calico-kube-controllers-78bc8bfdb7-xfh2s" Feb 13 19:42:48.535239 kubelet[2656]: I0213 19:42:48.534934 2656 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j7s5r\" (UniqueName: \"kubernetes.io/projected/9029bfbd-404a-4b40-be12-a20e64469d44-kube-api-access-j7s5r\") pod \"coredns-7db6d8ff4d-rkdrq\" (UID: \"9029bfbd-404a-4b40-be12-a20e64469d44\") " pod="kube-system/coredns-7db6d8ff4d-rkdrq" Feb 13 19:42:48.535239 kubelet[2656]: I0213 19:42:48.534958 2656 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-n266r\" (UniqueName: \"kubernetes.io/projected/72c9c3b8-d2cf-4ad0-a722-60cf8109d8ff-kube-api-access-n266r\") pod \"calico-apiserver-7f9ffdc98-tn7zc\" (UID: \"72c9c3b8-d2cf-4ad0-a722-60cf8109d8ff\") " pod="calico-apiserver/calico-apiserver-7f9ffdc98-tn7zc" Feb 13 19:42:48.535239 kubelet[2656]: I0213 19:42:48.534980 2656 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/72c9c3b8-d2cf-4ad0-a722-60cf8109d8ff-calico-apiserver-certs\") pod \"calico-apiserver-7f9ffdc98-tn7zc\" (UID: \"72c9c3b8-d2cf-4ad0-a722-60cf8109d8ff\") " pod="calico-apiserver/calico-apiserver-7f9ffdc98-tn7zc" Feb 13 19:42:48.576166 sshd[3512]: Connection closed by 10.0.0.1 port 44674 Feb 13 19:42:48.576542 sshd-session[3510]: pam_unix(sshd:session): session closed for user core Feb 13 19:42:48.580199 systemd[1]: sshd@10-10.0.0.106:22-10.0.0.1:44674.service: Deactivated successfully. Feb 13 19:42:48.582249 systemd[1]: session-11.scope: Deactivated successfully. Feb 13 19:42:48.582988 systemd-logind[1449]: Session 11 logged out. Waiting for processes to exit. Feb 13 19:42:48.584129 systemd-logind[1449]: Removed session 11. Feb 13 19:42:48.676448 containerd[1467]: time="2025-02-13T19:42:48.676266607Z" level=info msg="shim disconnected" id=79c2d116c05c2ed82086133a8bcf42eee7d17aaa7200cc2a5ed800189c3d2263 namespace=k8s.io Feb 13 19:42:48.676448 containerd[1467]: time="2025-02-13T19:42:48.676339443Z" level=warning msg="cleaning up after shim disconnected" id=79c2d116c05c2ed82086133a8bcf42eee7d17aaa7200cc2a5ed800189c3d2263 namespace=k8s.io Feb 13 19:42:48.676448 containerd[1467]: time="2025-02-13T19:42:48.676351396Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 19:42:48.709793 systemd[1]: Created slice kubepods-besteffort-podb54cda8b_5691_4984_90d2_94b24a2518d5.slice - libcontainer container kubepods-besteffort-podb54cda8b_5691_4984_90d2_94b24a2518d5.slice. 
Feb 13 19:42:48.711616 kubelet[2656]: E0213 19:42:48.711575 2656 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:42:48.713223 containerd[1467]: time="2025-02-13T19:42:48.712424937Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-44gch,Uid:abdf7ab8-742e-4339-a8d7-ab8bf1f9e1ec,Namespace:kube-system,Attempt:0,}" Feb 13 19:42:48.713223 containerd[1467]: time="2025-02-13T19:42:48.712431399Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-qxpzd,Uid:b54cda8b-5691-4984-90d2-94b24a2518d5,Namespace:calico-system,Attempt:0,}" Feb 13 19:42:48.718166 containerd[1467]: time="2025-02-13T19:42:48.718129887Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7f9ffdc98-9gqpq,Uid:61a215c3-d6e4-40f4-9806-914857a2ab1f,Namespace:calico-apiserver,Attempt:0,}" Feb 13 19:42:48.795267 kubelet[2656]: E0213 19:42:48.795230 2656 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:42:48.799725 containerd[1467]: time="2025-02-13T19:42:48.799426216Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\"" Feb 13 19:42:48.853567 containerd[1467]: time="2025-02-13T19:42:48.853486915Z" level=error msg="Failed to destroy network for sandbox \"5722d221ab8268ff34fa92ffb089c74929c9052cb6a9b2a3f71f33fca2bec3e9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:42:48.853962 containerd[1467]: time="2025-02-13T19:42:48.853926190Z" level=error msg="encountered an error cleaning up failed sandbox \"5722d221ab8268ff34fa92ffb089c74929c9052cb6a9b2a3f71f33fca2bec3e9\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:42:48.854013 containerd[1467]: time="2025-02-13T19:42:48.853996923Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-44gch,Uid:abdf7ab8-742e-4339-a8d7-ab8bf1f9e1ec,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"5722d221ab8268ff34fa92ffb089c74929c9052cb6a9b2a3f71f33fca2bec3e9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:42:48.854386 kubelet[2656]: E0213 19:42:48.854329 2656 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5722d221ab8268ff34fa92ffb089c74929c9052cb6a9b2a3f71f33fca2bec3e9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:42:48.854452 kubelet[2656]: E0213 19:42:48.854427 2656 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5722d221ab8268ff34fa92ffb089c74929c9052cb6a9b2a3f71f33fca2bec3e9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check 
that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-44gch" Feb 13 19:42:48.854481 kubelet[2656]: E0213 19:42:48.854463 2656 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5722d221ab8268ff34fa92ffb089c74929c9052cb6a9b2a3f71f33fca2bec3e9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-44gch" Feb 13 19:42:48.854848 kubelet[2656]: E0213 19:42:48.854568 2656 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-44gch_kube-system(abdf7ab8-742e-4339-a8d7-ab8bf1f9e1ec)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-44gch_kube-system(abdf7ab8-742e-4339-a8d7-ab8bf1f9e1ec)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5722d221ab8268ff34fa92ffb089c74929c9052cb6a9b2a3f71f33fca2bec3e9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-44gch" podUID="abdf7ab8-742e-4339-a8d7-ab8bf1f9e1ec" Feb 13 19:42:48.858051 containerd[1467]: time="2025-02-13T19:42:48.858002362Z" level=error msg="Failed to destroy network for sandbox \"3782781afd896cb7cf8a44c27fcb5a6dbd5316fbd92805b724a8e2c5adbac911\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:42:48.858586 containerd[1467]: time="2025-02-13T19:42:48.858554038Z" level=error msg="encountered an error cleaning up failed sandbox \"3782781afd896cb7cf8a44c27fcb5a6dbd5316fbd92805b724a8e2c5adbac911\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:42:48.858692 containerd[1467]: time="2025-02-13T19:42:48.858649597Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7f9ffdc98-9gqpq,Uid:61a215c3-d6e4-40f4-9806-914857a2ab1f,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"3782781afd896cb7cf8a44c27fcb5a6dbd5316fbd92805b724a8e2c5adbac911\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:42:48.859163 containerd[1467]: time="2025-02-13T19:42:48.858903384Z" level=error msg="Failed to destroy network for sandbox \"ea7ad1b23abc6667f4563f79978cf671a336c2e70631836fac2ee65439eeb9b4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:42:48.859210 kubelet[2656]: E0213 19:42:48.858900 2656 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3782781afd896cb7cf8a44c27fcb5a6dbd5316fbd92805b724a8e2c5adbac911\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:42:48.859210 kubelet[2656]: E0213 19:42:48.858934 2656 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3782781afd896cb7cf8a44c27fcb5a6dbd5316fbd92805b724a8e2c5adbac911\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7f9ffdc98-9gqpq" Feb 13 19:42:48.859210 kubelet[2656]: E0213 19:42:48.858952 2656 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3782781afd896cb7cf8a44c27fcb5a6dbd5316fbd92805b724a8e2c5adbac911\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7f9ffdc98-9gqpq" Feb 13 19:42:48.859313 containerd[1467]: time="2025-02-13T19:42:48.859192416Z" level=error msg="encountered an error cleaning up failed sandbox \"ea7ad1b23abc6667f4563f79978cf671a336c2e70631836fac2ee65439eeb9b4\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:42:48.859313 containerd[1467]: time="2025-02-13T19:42:48.859225658Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-qxpzd,Uid:b54cda8b-5691-4984-90d2-94b24a2518d5,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"ea7ad1b23abc6667f4563f79978cf671a336c2e70631836fac2ee65439eeb9b4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:42:48.859368 kubelet[2656]: E0213 19:42:48.858994 2656 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7f9ffdc98-9gqpq_calico-apiserver(61a215c3-d6e4-40f4-9806-914857a2ab1f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7f9ffdc98-9gqpq_calico-apiserver(61a215c3-d6e4-40f4-9806-914857a2ab1f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3782781afd896cb7cf8a44c27fcb5a6dbd5316fbd92805b724a8e2c5adbac911\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7f9ffdc98-9gqpq" podUID="61a215c3-d6e4-40f4-9806-914857a2ab1f" Feb 13 19:42:48.859417 kubelet[2656]: E0213 19:42:48.859368 2656 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ea7ad1b23abc6667f4563f79978cf671a336c2e70631836fac2ee65439eeb9b4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:42:48.859441 kubelet[2656]: E0213 19:42:48.859425 2656 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to 
setup network for sandbox \"ea7ad1b23abc6667f4563f79978cf671a336c2e70631836fac2ee65439eeb9b4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-qxpzd" Feb 13 19:42:48.859466 kubelet[2656]: E0213 19:42:48.859444 2656 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ea7ad1b23abc6667f4563f79978cf671a336c2e70631836fac2ee65439eeb9b4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-qxpzd" Feb 13 19:42:48.859533 kubelet[2656]: E0213 19:42:48.859490 2656 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-qxpzd_calico-system(b54cda8b-5691-4984-90d2-94b24a2518d5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-qxpzd_calico-system(b54cda8b-5691-4984-90d2-94b24a2518d5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ea7ad1b23abc6667f4563f79978cf671a336c2e70631836fac2ee65439eeb9b4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-qxpzd" podUID="b54cda8b-5691-4984-90d2-94b24a2518d5" Feb 13 19:42:49.024046 containerd[1467]: time="2025-02-13T19:42:49.023906494Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-78bc8bfdb7-xfh2s,Uid:98d92e3a-28ae-4220-b729-b97a4af8635e,Namespace:calico-system,Attempt:0,}" Feb 13 19:42:49.027544 containerd[1467]: time="2025-02-13T19:42:49.027515488Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7f9ffdc98-tn7zc,Uid:72c9c3b8-d2cf-4ad0-a722-60cf8109d8ff,Namespace:calico-apiserver,Attempt:0,}" Feb 13 19:42:49.031741 kubelet[2656]: E0213 19:42:49.031703 2656 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:42:49.032098 containerd[1467]: time="2025-02-13T19:42:49.032067494Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-rkdrq,Uid:9029bfbd-404a-4b40-be12-a20e64469d44,Namespace:kube-system,Attempt:0,}" Feb 13 19:42:49.128715 containerd[1467]: time="2025-02-13T19:42:49.128648344Z" level=error msg="Failed to destroy network for sandbox \"1595aba60392b63c1553a8901e090a43d78b0dd2f506eb2d65ada601c8d5441c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:42:49.129964 containerd[1467]: time="2025-02-13T19:42:49.129240506Z" level=error msg="encountered an error cleaning up failed sandbox \"1595aba60392b63c1553a8901e090a43d78b0dd2f506eb2d65ada601c8d5441c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:42:49.129964 containerd[1467]: time="2025-02-13T19:42:49.129321147Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-kube-controllers-78bc8bfdb7-xfh2s,Uid:98d92e3a-28ae-4220-b729-b97a4af8635e,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"1595aba60392b63c1553a8901e090a43d78b0dd2f506eb2d65ada601c8d5441c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:42:49.130117 kubelet[2656]: E0213 19:42:49.129696 2656 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1595aba60392b63c1553a8901e090a43d78b0dd2f506eb2d65ada601c8d5441c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:42:49.130117 kubelet[2656]: E0213 19:42:49.129761 2656 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1595aba60392b63c1553a8901e090a43d78b0dd2f506eb2d65ada601c8d5441c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-78bc8bfdb7-xfh2s" Feb 13 19:42:49.130117 kubelet[2656]: E0213 19:42:49.129791 2656 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1595aba60392b63c1553a8901e090a43d78b0dd2f506eb2d65ada601c8d5441c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-78bc8bfdb7-xfh2s" Feb 13 19:42:49.130242 kubelet[2656]: E0213 19:42:49.129868 2656 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-78bc8bfdb7-xfh2s_calico-system(98d92e3a-28ae-4220-b729-b97a4af8635e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-78bc8bfdb7-xfh2s_calico-system(98d92e3a-28ae-4220-b729-b97a4af8635e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1595aba60392b63c1553a8901e090a43d78b0dd2f506eb2d65ada601c8d5441c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-78bc8bfdb7-xfh2s" podUID="98d92e3a-28ae-4220-b729-b97a4af8635e" Feb 13 19:42:49.130414 containerd[1467]: time="2025-02-13T19:42:49.130382449Z" level=error msg="Failed to destroy network for sandbox \"15f5f0073d3a00815e2691e423b79fb9f7b74ca3dca7c8fc2f5ee74434ddc4d5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:42:49.130889 containerd[1467]: time="2025-02-13T19:42:49.130860537Z" level=error msg="encountered an error cleaning up failed sandbox \"15f5f0073d3a00815e2691e423b79fb9f7b74ca3dca7c8fc2f5ee74434ddc4d5\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/" Feb 13 19:42:49.131034 containerd[1467]: time="2025-02-13T19:42:49.130995330Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-rkdrq,Uid:9029bfbd-404a-4b40-be12-a20e64469d44,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"15f5f0073d3a00815e2691e423b79fb9f7b74ca3dca7c8fc2f5ee74434ddc4d5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:42:49.132062 kubelet[2656]: E0213 19:42:49.132011 2656 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"15f5f0073d3a00815e2691e423b79fb9f7b74ca3dca7c8fc2f5ee74434ddc4d5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:42:49.132129 kubelet[2656]: E0213 19:42:49.132079 2656 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"15f5f0073d3a00815e2691e423b79fb9f7b74ca3dca7c8fc2f5ee74434ddc4d5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-rkdrq" Feb 13 19:42:49.132129 kubelet[2656]: E0213 19:42:49.132105 2656 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"15f5f0073d3a00815e2691e423b79fb9f7b74ca3dca7c8fc2f5ee74434ddc4d5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-rkdrq" Feb 13 19:42:49.132193 kubelet[2656]: E0213 19:42:49.132157 2656 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-rkdrq_kube-system(9029bfbd-404a-4b40-be12-a20e64469d44)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-rkdrq_kube-system(9029bfbd-404a-4b40-be12-a20e64469d44)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"15f5f0073d3a00815e2691e423b79fb9f7b74ca3dca7c8fc2f5ee74434ddc4d5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-rkdrq" podUID="9029bfbd-404a-4b40-be12-a20e64469d44" Feb 13 19:42:49.132911 containerd[1467]: time="2025-02-13T19:42:49.132810137Z" level=error msg="Failed to destroy network for sandbox \"03abb776a14c6f8522f6625b44caf63f5bebc7398e5051702e3d9968abb0362e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:42:49.133306 containerd[1467]: time="2025-02-13T19:42:49.133252307Z" level=error msg="encountered an error cleaning up failed sandbox \"03abb776a14c6f8522f6625b44caf63f5bebc7398e5051702e3d9968abb0362e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:42:49.133367 containerd[1467]: time="2025-02-13T19:42:49.133308723Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7f9ffdc98-tn7zc,Uid:72c9c3b8-d2cf-4ad0-a722-60cf8109d8ff,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"03abb776a14c6f8522f6625b44caf63f5bebc7398e5051702e3d9968abb0362e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:42:49.133560 kubelet[2656]: E0213 19:42:49.133459 2656 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"03abb776a14c6f8522f6625b44caf63f5bebc7398e5051702e3d9968abb0362e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:42:49.133644 kubelet[2656]: E0213 19:42:49.133622 2656 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"03abb776a14c6f8522f6625b44caf63f5bebc7398e5051702e3d9968abb0362e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7f9ffdc98-tn7zc" Feb 13 19:42:49.133694 kubelet[2656]: E0213 19:42:49.133648 2656 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"03abb776a14c6f8522f6625b44caf63f5bebc7398e5051702e3d9968abb0362e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7f9ffdc98-tn7zc" Feb 13 19:42:49.133727 kubelet[2656]: E0213 19:42:49.133705 2656 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7f9ffdc98-tn7zc_calico-apiserver(72c9c3b8-d2cf-4ad0-a722-60cf8109d8ff)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7f9ffdc98-tn7zc_calico-apiserver(72c9c3b8-d2cf-4ad0-a722-60cf8109d8ff)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"03abb776a14c6f8522f6625b44caf63f5bebc7398e5051702e3d9968abb0362e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7f9ffdc98-tn7zc" podUID="72c9c3b8-d2cf-4ad0-a722-60cf8109d8ff" Feb 13 19:42:49.796945 kubelet[2656]: I0213 19:42:49.796906 2656 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ea7ad1b23abc6667f4563f79978cf671a336c2e70631836fac2ee65439eeb9b4" Feb 13 19:42:49.797741 containerd[1467]: time="2025-02-13T19:42:49.797705393Z" level=info msg="StopPodSandbox for \"ea7ad1b23abc6667f4563f79978cf671a336c2e70631836fac2ee65439eeb9b4\"" Feb 13 19:42:49.797975 containerd[1467]: time="2025-02-13T19:42:49.797937067Z" level=info msg="Ensure that sandbox 
ea7ad1b23abc6667f4563f79978cf671a336c2e70631836fac2ee65439eeb9b4 in task-service has been cleanup successfully" Feb 13 19:42:49.798173 containerd[1467]: time="2025-02-13T19:42:49.798149617Z" level=info msg="TearDown network for sandbox \"ea7ad1b23abc6667f4563f79978cf671a336c2e70631836fac2ee65439eeb9b4\" successfully" Feb 13 19:42:49.798173 containerd[1467]: time="2025-02-13T19:42:49.798162751Z" level=info msg="StopPodSandbox for \"ea7ad1b23abc6667f4563f79978cf671a336c2e70631836fac2ee65439eeb9b4\" returns successfully" Feb 13 19:42:49.798789 containerd[1467]: time="2025-02-13T19:42:49.798666918Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-qxpzd,Uid:b54cda8b-5691-4984-90d2-94b24a2518d5,Namespace:calico-system,Attempt:1,}" Feb 13 19:42:49.798885 kubelet[2656]: I0213 19:42:49.798859 2656 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5722d221ab8268ff34fa92ffb089c74929c9052cb6a9b2a3f71f33fca2bec3e9" Feb 13 19:42:49.799305 containerd[1467]: time="2025-02-13T19:42:49.799282754Z" level=info msg="StopPodSandbox for \"5722d221ab8268ff34fa92ffb089c74929c9052cb6a9b2a3f71f33fca2bec3e9\"" Feb 13 19:42:49.799598 containerd[1467]: time="2025-02-13T19:42:49.799560905Z" level=info msg="Ensure that sandbox 5722d221ab8268ff34fa92ffb089c74929c9052cb6a9b2a3f71f33fca2bec3e9 in task-service has been cleanup successfully" Feb 13 19:42:49.799855 containerd[1467]: time="2025-02-13T19:42:49.799746735Z" level=info msg="TearDown network for sandbox \"5722d221ab8268ff34fa92ffb089c74929c9052cb6a9b2a3f71f33fca2bec3e9\" successfully" Feb 13 19:42:49.799855 containerd[1467]: time="2025-02-13T19:42:49.799763056Z" level=info msg="StopPodSandbox for \"5722d221ab8268ff34fa92ffb089c74929c9052cb6a9b2a3f71f33fca2bec3e9\" returns successfully" Feb 13 19:42:49.800087 kubelet[2656]: E0213 19:42:49.800056 2656 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:42:49.800602 containerd[1467]: time="2025-02-13T19:42:49.800297359Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-44gch,Uid:abdf7ab8-742e-4339-a8d7-ab8bf1f9e1ec,Namespace:kube-system,Attempt:1,}" Feb 13 19:42:49.800678 kubelet[2656]: I0213 19:42:49.800346 2656 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="03abb776a14c6f8522f6625b44caf63f5bebc7398e5051702e3d9968abb0362e" Feb 13 19:42:49.800712 containerd[1467]: time="2025-02-13T19:42:49.800693692Z" level=info msg="StopPodSandbox for \"03abb776a14c6f8522f6625b44caf63f5bebc7398e5051702e3d9968abb0362e\"" Feb 13 19:42:49.800946 containerd[1467]: time="2025-02-13T19:42:49.800913405Z" level=info msg="Ensure that sandbox 03abb776a14c6f8522f6625b44caf63f5bebc7398e5051702e3d9968abb0362e in task-service has been cleanup successfully" Feb 13 19:42:49.801149 containerd[1467]: time="2025-02-13T19:42:49.801102540Z" level=info msg="TearDown network for sandbox \"03abb776a14c6f8522f6625b44caf63f5bebc7398e5051702e3d9968abb0362e\" successfully" Feb 13 19:42:49.801149 containerd[1467]: time="2025-02-13T19:42:49.801127887Z" level=info msg="StopPodSandbox for \"03abb776a14c6f8522f6625b44caf63f5bebc7398e5051702e3d9968abb0362e\" returns successfully" Feb 13 19:42:49.801602 containerd[1467]: time="2025-02-13T19:42:49.801568524Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-7f9ffdc98-tn7zc,Uid:72c9c3b8-d2cf-4ad0-a722-60cf8109d8ff,Namespace:calico-apiserver,Attempt:1,}" Feb 13 19:42:49.801641 kubelet[2656]: I0213 19:42:49.801599 2656 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1595aba60392b63c1553a8901e090a43d78b0dd2f506eb2d65ada601c8d5441c" Feb 13 19:42:49.802092 containerd[1467]: time="2025-02-13T19:42:49.802052814Z" level=info msg="StopPodSandbox for \"1595aba60392b63c1553a8901e090a43d78b0dd2f506eb2d65ada601c8d5441c\"" Feb 13 19:42:49.802286 containerd[1467]: time="2025-02-13T19:42:49.802260082Z" level=info msg="Ensure that sandbox 1595aba60392b63c1553a8901e090a43d78b0dd2f506eb2d65ada601c8d5441c in task-service has been cleanup successfully" Feb 13 19:42:49.802482 containerd[1467]: time="2025-02-13T19:42:49.802403653Z" level=info msg="TearDown network for sandbox \"1595aba60392b63c1553a8901e090a43d78b0dd2f506eb2d65ada601c8d5441c\" successfully" Feb 13 19:42:49.802482 containerd[1467]: time="2025-02-13T19:42:49.802424942Z" level=info msg="StopPodSandbox for \"1595aba60392b63c1553a8901e090a43d78b0dd2f506eb2d65ada601c8d5441c\" returns successfully" Feb 13 19:42:49.802915 containerd[1467]: time="2025-02-13T19:42:49.802891337Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-78bc8bfdb7-xfh2s,Uid:98d92e3a-28ae-4220-b729-b97a4af8635e,Namespace:calico-system,Attempt:1,}" Feb 13 19:42:49.803185 kubelet[2656]: I0213 19:42:49.803151 2656 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3782781afd896cb7cf8a44c27fcb5a6dbd5316fbd92805b724a8e2c5adbac911" Feb 13 19:42:49.803691 containerd[1467]: time="2025-02-13T19:42:49.803599698Z" level=info msg="StopPodSandbox for \"3782781afd896cb7cf8a44c27fcb5a6dbd5316fbd92805b724a8e2c5adbac911\"" Feb 13 19:42:49.803826 containerd[1467]: time="2025-02-13T19:42:49.803780326Z" level=info msg="Ensure that sandbox 3782781afd896cb7cf8a44c27fcb5a6dbd5316fbd92805b724a8e2c5adbac911 in task-service has been cleanup successfully" Feb 13 19:42:49.803980 containerd[1467]: time="2025-02-13T19:42:49.803955745Z" level=info msg="TearDown network for sandbox \"3782781afd896cb7cf8a44c27fcb5a6dbd5316fbd92805b724a8e2c5adbac911\" successfully" Feb 13 19:42:49.804064 containerd[1467]: time="2025-02-13T19:42:49.803976574Z" level=info msg="StopPodSandbox for \"3782781afd896cb7cf8a44c27fcb5a6dbd5316fbd92805b724a8e2c5adbac911\" returns successfully" Feb 13 19:42:49.804157 kubelet[2656]: I0213 19:42:49.804137 2656 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="15f5f0073d3a00815e2691e423b79fb9f7b74ca3dca7c8fc2f5ee74434ddc4d5" Feb 13 19:42:49.804426 containerd[1467]: time="2025-02-13T19:42:49.804404027Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7f9ffdc98-9gqpq,Uid:61a215c3-d6e4-40f4-9806-914857a2ab1f,Namespace:calico-apiserver,Attempt:1,}" Feb 13 19:42:49.804556 containerd[1467]: time="2025-02-13T19:42:49.804486622Z" level=info msg="StopPodSandbox for \"15f5f0073d3a00815e2691e423b79fb9f7b74ca3dca7c8fc2f5ee74434ddc4d5\"" Feb 13 19:42:49.804712 containerd[1467]: time="2025-02-13T19:42:49.804693441Z" level=info msg="Ensure that sandbox 15f5f0073d3a00815e2691e423b79fb9f7b74ca3dca7c8fc2f5ee74434ddc4d5 in task-service has been cleanup successfully" Feb 13 19:42:49.804846 containerd[1467]: time="2025-02-13T19:42:49.804829195Z" level=info msg="TearDown network for sandbox \"15f5f0073d3a00815e2691e423b79fb9f7b74ca3dca7c8fc2f5ee74434ddc4d5\" 
successfully" Feb 13 19:42:49.804846 containerd[1467]: time="2025-02-13T19:42:49.804843713Z" level=info msg="StopPodSandbox for \"15f5f0073d3a00815e2691e423b79fb9f7b74ca3dca7c8fc2f5ee74434ddc4d5\" returns successfully" Feb 13 19:42:49.805105 kubelet[2656]: E0213 19:42:49.805070 2656 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:42:49.805298 containerd[1467]: time="2025-02-13T19:42:49.805276866Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-rkdrq,Uid:9029bfbd-404a-4b40-be12-a20e64469d44,Namespace:kube-system,Attempt:1,}" Feb 13 19:42:49.909745 systemd[1]: run-netns-cni\x2db5e703fd\x2df127\x2d526e\x2ddc52\x2d2448c6e093d5.mount: Deactivated successfully. Feb 13 19:42:49.909858 systemd[1]: run-netns-cni\x2d73ee5a7c\x2dee0d\x2d7183\x2dfc8d\x2db3b74f74d3fc.mount: Deactivated successfully. Feb 13 19:42:49.909935 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-15f5f0073d3a00815e2691e423b79fb9f7b74ca3dca7c8fc2f5ee74434ddc4d5-shm.mount: Deactivated successfully. Feb 13 19:42:49.910027 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-03abb776a14c6f8522f6625b44caf63f5bebc7398e5051702e3d9968abb0362e-shm.mount: Deactivated successfully. Feb 13 19:42:49.910103 systemd[1]: run-netns-cni\x2d30f36218\x2db351\x2dee38\x2d98a3\x2ddb34314d7d3d.mount: Deactivated successfully. Feb 13 19:42:49.910173 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-1595aba60392b63c1553a8901e090a43d78b0dd2f506eb2d65ada601c8d5441c-shm.mount: Deactivated successfully. Feb 13 19:42:49.910247 systemd[1]: run-netns-cni\x2d44f8b804\x2def78\x2ddeb5\x2d3020\x2dc830c7919511.mount: Deactivated successfully. Feb 13 19:42:49.910329 systemd[1]: run-netns-cni\x2db6bd54d2\x2d5f35\x2de919\x2d3674\x2d6236b557c707.mount: Deactivated successfully. Feb 13 19:42:49.910404 systemd[1]: run-netns-cni\x2d3690d753\x2dfc45\x2dcb20\x2db2db\x2dfaba97e87924.mount: Deactivated successfully. 
Feb 13 19:42:50.315872 containerd[1467]: time="2025-02-13T19:42:50.315811581Z" level=error msg="Failed to destroy network for sandbox \"05ebdeb5f9509ab11f0283d09aded299f309ee75b4f923bc2617f708e6b6b23e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:42:50.316489 containerd[1467]: time="2025-02-13T19:42:50.316466841Z" level=error msg="Failed to destroy network for sandbox \"b2c10597fcbe95ccbb9f31aac916f6313c7fdfbebe339a656d49f8fd5908264e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:42:50.316938 containerd[1467]: time="2025-02-13T19:42:50.316913730Z" level=error msg="encountered an error cleaning up failed sandbox \"b2c10597fcbe95ccbb9f31aac916f6313c7fdfbebe339a656d49f8fd5908264e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:42:50.317056 containerd[1467]: time="2025-02-13T19:42:50.317002787Z" level=error msg="Failed to destroy network for sandbox \"f2df588c3d5c7137f1b9c17c9dae5d07407ef6a7c0ff67a6a0358da645106646\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:42:50.317402 containerd[1467]: time="2025-02-13T19:42:50.317075404Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-44gch,Uid:abdf7ab8-742e-4339-a8d7-ab8bf1f9e1ec,Namespace:kube-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"b2c10597fcbe95ccbb9f31aac916f6313c7fdfbebe339a656d49f8fd5908264e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:42:50.317579 containerd[1467]: time="2025-02-13T19:42:50.317529175Z" level=error msg="encountered an error cleaning up failed sandbox \"05ebdeb5f9509ab11f0283d09aded299f309ee75b4f923bc2617f708e6b6b23e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:42:50.317667 kubelet[2656]: E0213 19:42:50.317601 2656 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b2c10597fcbe95ccbb9f31aac916f6313c7fdfbebe339a656d49f8fd5908264e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:42:50.317752 kubelet[2656]: E0213 19:42:50.317685 2656 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b2c10597fcbe95ccbb9f31aac916f6313c7fdfbebe339a656d49f8fd5908264e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-44gch" Feb 13 
19:42:50.317752 kubelet[2656]: E0213 19:42:50.317713 2656 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b2c10597fcbe95ccbb9f31aac916f6313c7fdfbebe339a656d49f8fd5908264e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-44gch" Feb 13 19:42:50.317828 containerd[1467]: time="2025-02-13T19:42:50.317574791Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-78bc8bfdb7-xfh2s,Uid:98d92e3a-28ae-4220-b729-b97a4af8635e,Namespace:calico-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"05ebdeb5f9509ab11f0283d09aded299f309ee75b4f923bc2617f708e6b6b23e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:42:50.317916 kubelet[2656]: E0213 19:42:50.317762 2656 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-44gch_kube-system(abdf7ab8-742e-4339-a8d7-ab8bf1f9e1ec)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-44gch_kube-system(abdf7ab8-742e-4339-a8d7-ab8bf1f9e1ec)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b2c10597fcbe95ccbb9f31aac916f6313c7fdfbebe339a656d49f8fd5908264e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-44gch" podUID="abdf7ab8-742e-4339-a8d7-ab8bf1f9e1ec" Feb 13 19:42:50.319220 kubelet[2656]: E0213 19:42:50.318082 2656 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"05ebdeb5f9509ab11f0283d09aded299f309ee75b4f923bc2617f708e6b6b23e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:42:50.319220 kubelet[2656]: E0213 19:42:50.318116 2656 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"05ebdeb5f9509ab11f0283d09aded299f309ee75b4f923bc2617f708e6b6b23e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-78bc8bfdb7-xfh2s" Feb 13 19:42:50.319220 kubelet[2656]: E0213 19:42:50.318135 2656 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"05ebdeb5f9509ab11f0283d09aded299f309ee75b4f923bc2617f708e6b6b23e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-78bc8bfdb7-xfh2s" Feb 13 19:42:50.319351 kubelet[2656]: E0213 19:42:50.318168 2656 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for 
\"calico-kube-controllers-78bc8bfdb7-xfh2s_calico-system(98d92e3a-28ae-4220-b729-b97a4af8635e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-78bc8bfdb7-xfh2s_calico-system(98d92e3a-28ae-4220-b729-b97a4af8635e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"05ebdeb5f9509ab11f0283d09aded299f309ee75b4f923bc2617f708e6b6b23e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-78bc8bfdb7-xfh2s" podUID="98d92e3a-28ae-4220-b729-b97a4af8635e" Feb 13 19:42:50.319948 containerd[1467]: time="2025-02-13T19:42:50.319918831Z" level=error msg="encountered an error cleaning up failed sandbox \"f2df588c3d5c7137f1b9c17c9dae5d07407ef6a7c0ff67a6a0358da645106646\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:42:50.320062 containerd[1467]: time="2025-02-13T19:42:50.320043125Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7f9ffdc98-9gqpq,Uid:61a215c3-d6e4-40f4-9806-914857a2ab1f,Namespace:calico-apiserver,Attempt:1,} failed, error" error="failed to setup network for sandbox \"f2df588c3d5c7137f1b9c17c9dae5d07407ef6a7c0ff67a6a0358da645106646\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:42:50.320311 kubelet[2656]: E0213 19:42:50.320276 2656 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f2df588c3d5c7137f1b9c17c9dae5d07407ef6a7c0ff67a6a0358da645106646\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:42:50.320375 kubelet[2656]: E0213 19:42:50.320320 2656 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f2df588c3d5c7137f1b9c17c9dae5d07407ef6a7c0ff67a6a0358da645106646\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7f9ffdc98-9gqpq" Feb 13 19:42:50.320375 kubelet[2656]: E0213 19:42:50.320342 2656 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f2df588c3d5c7137f1b9c17c9dae5d07407ef6a7c0ff67a6a0358da645106646\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7f9ffdc98-9gqpq" Feb 13 19:42:50.320422 kubelet[2656]: E0213 19:42:50.320374 2656 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7f9ffdc98-9gqpq_calico-apiserver(61a215c3-d6e4-40f4-9806-914857a2ab1f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"calico-apiserver-7f9ffdc98-9gqpq_calico-apiserver(61a215c3-d6e4-40f4-9806-914857a2ab1f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f2df588c3d5c7137f1b9c17c9dae5d07407ef6a7c0ff67a6a0358da645106646\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7f9ffdc98-9gqpq" podUID="61a215c3-d6e4-40f4-9806-914857a2ab1f" Feb 13 19:42:50.326567 containerd[1467]: time="2025-02-13T19:42:50.326492380Z" level=error msg="Failed to destroy network for sandbox \"4a5ec2259b92339e47107aa67428fea37834d2811a71b36beb4f666a45ba9d2d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:42:50.327037 containerd[1467]: time="2025-02-13T19:42:50.327001306Z" level=error msg="encountered an error cleaning up failed sandbox \"4a5ec2259b92339e47107aa67428fea37834d2811a71b36beb4f666a45ba9d2d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:42:50.327115 containerd[1467]: time="2025-02-13T19:42:50.327083660Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-rkdrq,Uid:9029bfbd-404a-4b40-be12-a20e64469d44,Namespace:kube-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"4a5ec2259b92339e47107aa67428fea37834d2811a71b36beb4f666a45ba9d2d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:42:50.327440 kubelet[2656]: E0213 19:42:50.327367 2656 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4a5ec2259b92339e47107aa67428fea37834d2811a71b36beb4f666a45ba9d2d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:42:50.327566 kubelet[2656]: E0213 19:42:50.327446 2656 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4a5ec2259b92339e47107aa67428fea37834d2811a71b36beb4f666a45ba9d2d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-rkdrq" Feb 13 19:42:50.327566 kubelet[2656]: E0213 19:42:50.327470 2656 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4a5ec2259b92339e47107aa67428fea37834d2811a71b36beb4f666a45ba9d2d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-rkdrq" Feb 13 19:42:50.327566 kubelet[2656]: E0213 19:42:50.327532 2656 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for 
\"coredns-7db6d8ff4d-rkdrq_kube-system(9029bfbd-404a-4b40-be12-a20e64469d44)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-rkdrq_kube-system(9029bfbd-404a-4b40-be12-a20e64469d44)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4a5ec2259b92339e47107aa67428fea37834d2811a71b36beb4f666a45ba9d2d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-rkdrq" podUID="9029bfbd-404a-4b40-be12-a20e64469d44" Feb 13 19:42:50.329932 containerd[1467]: time="2025-02-13T19:42:50.329870642Z" level=error msg="Failed to destroy network for sandbox \"0788c7c54d6ffc8b054e92393050c76b7df05c1b22dd254befed893f0acbfdfb\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:42:50.330848 containerd[1467]: time="2025-02-13T19:42:50.330370850Z" level=error msg="encountered an error cleaning up failed sandbox \"0788c7c54d6ffc8b054e92393050c76b7df05c1b22dd254befed893f0acbfdfb\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:42:50.330848 containerd[1467]: time="2025-02-13T19:42:50.330439269Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-qxpzd,Uid:b54cda8b-5691-4984-90d2-94b24a2518d5,Namespace:calico-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"0788c7c54d6ffc8b054e92393050c76b7df05c1b22dd254befed893f0acbfdfb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:42:50.331202 kubelet[2656]: E0213 19:42:50.330681 2656 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0788c7c54d6ffc8b054e92393050c76b7df05c1b22dd254befed893f0acbfdfb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:42:50.331202 kubelet[2656]: E0213 19:42:50.330739 2656 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0788c7c54d6ffc8b054e92393050c76b7df05c1b22dd254befed893f0acbfdfb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-qxpzd" Feb 13 19:42:50.331202 kubelet[2656]: E0213 19:42:50.330759 2656 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0788c7c54d6ffc8b054e92393050c76b7df05c1b22dd254befed893f0acbfdfb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-qxpzd" Feb 13 19:42:50.331345 kubelet[2656]: E0213 19:42:50.330799 2656 pod_workers.go:1298] "Error syncing 
pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-qxpzd_calico-system(b54cda8b-5691-4984-90d2-94b24a2518d5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-qxpzd_calico-system(b54cda8b-5691-4984-90d2-94b24a2518d5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0788c7c54d6ffc8b054e92393050c76b7df05c1b22dd254befed893f0acbfdfb\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-qxpzd" podUID="b54cda8b-5691-4984-90d2-94b24a2518d5" Feb 13 19:42:50.335844 containerd[1467]: time="2025-02-13T19:42:50.335791465Z" level=error msg="Failed to destroy network for sandbox \"0692bb77aa8ee3afc089f45a0efb8192616eb703ec7b1a56f19564b86d119ca2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:42:50.336258 containerd[1467]: time="2025-02-13T19:42:50.336226302Z" level=error msg="encountered an error cleaning up failed sandbox \"0692bb77aa8ee3afc089f45a0efb8192616eb703ec7b1a56f19564b86d119ca2\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:42:50.336317 containerd[1467]: time="2025-02-13T19:42:50.336292105Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7f9ffdc98-tn7zc,Uid:72c9c3b8-d2cf-4ad0-a722-60cf8109d8ff,Namespace:calico-apiserver,Attempt:1,} failed, error" error="failed to setup network for sandbox \"0692bb77aa8ee3afc089f45a0efb8192616eb703ec7b1a56f19564b86d119ca2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:42:50.336540 kubelet[2656]: E0213 19:42:50.336459 2656 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0692bb77aa8ee3afc089f45a0efb8192616eb703ec7b1a56f19564b86d119ca2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:42:50.336540 kubelet[2656]: E0213 19:42:50.336513 2656 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0692bb77aa8ee3afc089f45a0efb8192616eb703ec7b1a56f19564b86d119ca2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7f9ffdc98-tn7zc" Feb 13 19:42:50.336540 kubelet[2656]: E0213 19:42:50.336530 2656 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0692bb77aa8ee3afc089f45a0efb8192616eb703ec7b1a56f19564b86d119ca2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7f9ffdc98-tn7zc" Feb 13 
19:42:50.336715 kubelet[2656]: E0213 19:42:50.336563 2656 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7f9ffdc98-tn7zc_calico-apiserver(72c9c3b8-d2cf-4ad0-a722-60cf8109d8ff)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7f9ffdc98-tn7zc_calico-apiserver(72c9c3b8-d2cf-4ad0-a722-60cf8109d8ff)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0692bb77aa8ee3afc089f45a0efb8192616eb703ec7b1a56f19564b86d119ca2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7f9ffdc98-tn7zc" podUID="72c9c3b8-d2cf-4ad0-a722-60cf8109d8ff" Feb 13 19:42:50.808533 kubelet[2656]: I0213 19:42:50.808376 2656 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f2df588c3d5c7137f1b9c17c9dae5d07407ef6a7c0ff67a6a0358da645106646" Feb 13 19:42:50.809072 containerd[1467]: time="2025-02-13T19:42:50.809003480Z" level=info msg="StopPodSandbox for \"f2df588c3d5c7137f1b9c17c9dae5d07407ef6a7c0ff67a6a0358da645106646\"" Feb 13 19:42:50.809270 containerd[1467]: time="2025-02-13T19:42:50.809250855Z" level=info msg="Ensure that sandbox f2df588c3d5c7137f1b9c17c9dae5d07407ef6a7c0ff67a6a0358da645106646 in task-service has been cleanup successfully" Feb 13 19:42:50.809754 containerd[1467]: time="2025-02-13T19:42:50.809670071Z" level=info msg="TearDown network for sandbox \"f2df588c3d5c7137f1b9c17c9dae5d07407ef6a7c0ff67a6a0358da645106646\" successfully" Feb 13 19:42:50.809754 containerd[1467]: time="2025-02-13T19:42:50.809691291Z" level=info msg="StopPodSandbox for \"f2df588c3d5c7137f1b9c17c9dae5d07407ef6a7c0ff67a6a0358da645106646\" returns successfully" Feb 13 19:42:50.810607 containerd[1467]: time="2025-02-13T19:42:50.810584208Z" level=info msg="StopPodSandbox for \"3782781afd896cb7cf8a44c27fcb5a6dbd5316fbd92805b724a8e2c5adbac911\"" Feb 13 19:42:50.810874 containerd[1467]: time="2025-02-13T19:42:50.810779584Z" level=info msg="TearDown network for sandbox \"3782781afd896cb7cf8a44c27fcb5a6dbd5316fbd92805b724a8e2c5adbac911\" successfully" Feb 13 19:42:50.810874 containerd[1467]: time="2025-02-13T19:42:50.810797818Z" level=info msg="StopPodSandbox for \"3782781afd896cb7cf8a44c27fcb5a6dbd5316fbd92805b724a8e2c5adbac911\" returns successfully" Feb 13 19:42:50.811678 kubelet[2656]: I0213 19:42:50.811259 2656 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4a5ec2259b92339e47107aa67428fea37834d2811a71b36beb4f666a45ba9d2d" Feb 13 19:42:50.811890 containerd[1467]: time="2025-02-13T19:42:50.811871714Z" level=info msg="StopPodSandbox for \"4a5ec2259b92339e47107aa67428fea37834d2811a71b36beb4f666a45ba9d2d\"" Feb 13 19:42:50.812287 containerd[1467]: time="2025-02-13T19:42:50.811922019Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7f9ffdc98-9gqpq,Uid:61a215c3-d6e4-40f4-9806-914857a2ab1f,Namespace:calico-apiserver,Attempt:2,}" Feb 13 19:42:50.812287 containerd[1467]: time="2025-02-13T19:42:50.812162360Z" level=info msg="Ensure that sandbox 4a5ec2259b92339e47107aa67428fea37834d2811a71b36beb4f666a45ba9d2d in task-service has been cleanup successfully" Feb 13 19:42:50.812516 containerd[1467]: time="2025-02-13T19:42:50.812484565Z" level=info msg="TearDown network for sandbox \"4a5ec2259b92339e47107aa67428fea37834d2811a71b36beb4f666a45ba9d2d\" successfully" Feb 13 
19:42:50.812575 containerd[1467]: time="2025-02-13T19:42:50.812562471Z" level=info msg="StopPodSandbox for \"4a5ec2259b92339e47107aa67428fea37834d2811a71b36beb4f666a45ba9d2d\" returns successfully" Feb 13 19:42:50.813195 containerd[1467]: time="2025-02-13T19:42:50.813152909Z" level=info msg="StopPodSandbox for \"15f5f0073d3a00815e2691e423b79fb9f7b74ca3dca7c8fc2f5ee74434ddc4d5\"" Feb 13 19:42:50.813334 containerd[1467]: time="2025-02-13T19:42:50.813247928Z" level=info msg="TearDown network for sandbox \"15f5f0073d3a00815e2691e423b79fb9f7b74ca3dca7c8fc2f5ee74434ddc4d5\" successfully" Feb 13 19:42:50.813334 containerd[1467]: time="2025-02-13T19:42:50.813261584Z" level=info msg="StopPodSandbox for \"15f5f0073d3a00815e2691e423b79fb9f7b74ca3dca7c8fc2f5ee74434ddc4d5\" returns successfully" Feb 13 19:42:50.813826 kubelet[2656]: E0213 19:42:50.813591 2656 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:42:50.814085 containerd[1467]: time="2025-02-13T19:42:50.814047959Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-rkdrq,Uid:9029bfbd-404a-4b40-be12-a20e64469d44,Namespace:kube-system,Attempt:2,}" Feb 13 19:42:50.814668 kubelet[2656]: I0213 19:42:50.814630 2656 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0788c7c54d6ffc8b054e92393050c76b7df05c1b22dd254befed893f0acbfdfb" Feb 13 19:42:50.815307 containerd[1467]: time="2025-02-13T19:42:50.815269282Z" level=info msg="StopPodSandbox for \"0788c7c54d6ffc8b054e92393050c76b7df05c1b22dd254befed893f0acbfdfb\"" Feb 13 19:42:50.815719 containerd[1467]: time="2025-02-13T19:42:50.815532577Z" level=info msg="Ensure that sandbox 0788c7c54d6ffc8b054e92393050c76b7df05c1b22dd254befed893f0acbfdfb in task-service has been cleanup successfully" Feb 13 19:42:50.815856 containerd[1467]: time="2025-02-13T19:42:50.815835014Z" level=info msg="TearDown network for sandbox \"0788c7c54d6ffc8b054e92393050c76b7df05c1b22dd254befed893f0acbfdfb\" successfully" Feb 13 19:42:50.815932 containerd[1467]: time="2025-02-13T19:42:50.815907560Z" level=info msg="StopPodSandbox for \"0788c7c54d6ffc8b054e92393050c76b7df05c1b22dd254befed893f0acbfdfb\" returns successfully" Feb 13 19:42:50.816345 containerd[1467]: time="2025-02-13T19:42:50.816214116Z" level=info msg="StopPodSandbox for \"ea7ad1b23abc6667f4563f79978cf671a336c2e70631836fac2ee65439eeb9b4\"" Feb 13 19:42:50.816526 containerd[1467]: time="2025-02-13T19:42:50.816437555Z" level=info msg="TearDown network for sandbox \"ea7ad1b23abc6667f4563f79978cf671a336c2e70631836fac2ee65439eeb9b4\" successfully" Feb 13 19:42:50.816690 containerd[1467]: time="2025-02-13T19:42:50.816582717Z" level=info msg="StopPodSandbox for \"ea7ad1b23abc6667f4563f79978cf671a336c2e70631836fac2ee65439eeb9b4\" returns successfully" Feb 13 19:42:50.816736 kubelet[2656]: I0213 19:42:50.816678 2656 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b2c10597fcbe95ccbb9f31aac916f6313c7fdfbebe339a656d49f8fd5908264e" Feb 13 19:42:50.817914 containerd[1467]: time="2025-02-13T19:42:50.817768302Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-qxpzd,Uid:b54cda8b-5691-4984-90d2-94b24a2518d5,Namespace:calico-system,Attempt:2,}" Feb 13 19:42:50.818279 containerd[1467]: time="2025-02-13T19:42:50.818245428Z" level=info msg="StopPodSandbox for \"b2c10597fcbe95ccbb9f31aac916f6313c7fdfbebe339a656d49f8fd5908264e\"" Feb 13 
19:42:50.818618 containerd[1467]: time="2025-02-13T19:42:50.818439373Z" level=info msg="Ensure that sandbox b2c10597fcbe95ccbb9f31aac916f6313c7fdfbebe339a656d49f8fd5908264e in task-service has been cleanup successfully" Feb 13 19:42:50.819130 containerd[1467]: time="2025-02-13T19:42:50.819106395Z" level=info msg="TearDown network for sandbox \"b2c10597fcbe95ccbb9f31aac916f6313c7fdfbebe339a656d49f8fd5908264e\" successfully" Feb 13 19:42:50.819453 containerd[1467]: time="2025-02-13T19:42:50.819192987Z" level=info msg="StopPodSandbox for \"b2c10597fcbe95ccbb9f31aac916f6313c7fdfbebe339a656d49f8fd5908264e\" returns successfully" Feb 13 19:42:50.820926 containerd[1467]: time="2025-02-13T19:42:50.820817907Z" level=info msg="StopPodSandbox for \"5722d221ab8268ff34fa92ffb089c74929c9052cb6a9b2a3f71f33fca2bec3e9\"" Feb 13 19:42:50.821145 containerd[1467]: time="2025-02-13T19:42:50.821075911Z" level=info msg="TearDown network for sandbox \"5722d221ab8268ff34fa92ffb089c74929c9052cb6a9b2a3f71f33fca2bec3e9\" successfully" Feb 13 19:42:50.821145 containerd[1467]: time="2025-02-13T19:42:50.821091661Z" level=info msg="StopPodSandbox for \"5722d221ab8268ff34fa92ffb089c74929c9052cb6a9b2a3f71f33fca2bec3e9\" returns successfully" Feb 13 19:42:50.821221 kubelet[2656]: I0213 19:42:50.821160 2656 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0692bb77aa8ee3afc089f45a0efb8192616eb703ec7b1a56f19564b86d119ca2" Feb 13 19:42:50.821712 containerd[1467]: time="2025-02-13T19:42:50.821646052Z" level=info msg="StopPodSandbox for \"0692bb77aa8ee3afc089f45a0efb8192616eb703ec7b1a56f19564b86d119ca2\"" Feb 13 19:42:50.821923 containerd[1467]: time="2025-02-13T19:42:50.821831730Z" level=info msg="Ensure that sandbox 0692bb77aa8ee3afc089f45a0efb8192616eb703ec7b1a56f19564b86d119ca2 in task-service has been cleanup successfully" Feb 13 19:42:50.821993 kubelet[2656]: E0213 19:42:50.821753 2656 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:42:50.823160 containerd[1467]: time="2025-02-13T19:42:50.822561811Z" level=info msg="TearDown network for sandbox \"0692bb77aa8ee3afc089f45a0efb8192616eb703ec7b1a56f19564b86d119ca2\" successfully" Feb 13 19:42:50.823160 containerd[1467]: time="2025-02-13T19:42:50.822582299Z" level=info msg="StopPodSandbox for \"0692bb77aa8ee3afc089f45a0efb8192616eb703ec7b1a56f19564b86d119ca2\" returns successfully" Feb 13 19:42:50.823160 containerd[1467]: time="2025-02-13T19:42:50.822727802Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-44gch,Uid:abdf7ab8-742e-4339-a8d7-ab8bf1f9e1ec,Namespace:kube-system,Attempt:2,}" Feb 13 19:42:50.823160 containerd[1467]: time="2025-02-13T19:42:50.823076266Z" level=info msg="StopPodSandbox for \"03abb776a14c6f8522f6625b44caf63f5bebc7398e5051702e3d9968abb0362e\"" Feb 13 19:42:50.824914 containerd[1467]: time="2025-02-13T19:42:50.823161456Z" level=info msg="TearDown network for sandbox \"03abb776a14c6f8522f6625b44caf63f5bebc7398e5051702e3d9968abb0362e\" successfully" Feb 13 19:42:50.824914 containerd[1467]: time="2025-02-13T19:42:50.823185191Z" level=info msg="StopPodSandbox for \"03abb776a14c6f8522f6625b44caf63f5bebc7398e5051702e3d9968abb0362e\" returns successfully" Feb 13 19:42:50.824914 containerd[1467]: time="2025-02-13T19:42:50.824578075Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-7f9ffdc98-tn7zc,Uid:72c9c3b8-d2cf-4ad0-a722-60cf8109d8ff,Namespace:calico-apiserver,Attempt:2,}" Feb 13 19:42:50.825353 kubelet[2656]: I0213 19:42:50.825195 2656 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="05ebdeb5f9509ab11f0283d09aded299f309ee75b4f923bc2617f708e6b6b23e" Feb 13 19:42:50.827558 containerd[1467]: time="2025-02-13T19:42:50.827531649Z" level=info msg="StopPodSandbox for \"05ebdeb5f9509ab11f0283d09aded299f309ee75b4f923bc2617f708e6b6b23e\"" Feb 13 19:42:50.829781 containerd[1467]: time="2025-02-13T19:42:50.829745916Z" level=info msg="Ensure that sandbox 05ebdeb5f9509ab11f0283d09aded299f309ee75b4f923bc2617f708e6b6b23e in task-service has been cleanup successfully" Feb 13 19:42:50.830244 containerd[1467]: time="2025-02-13T19:42:50.830212742Z" level=info msg="TearDown network for sandbox \"05ebdeb5f9509ab11f0283d09aded299f309ee75b4f923bc2617f708e6b6b23e\" successfully" Feb 13 19:42:50.830244 containerd[1467]: time="2025-02-13T19:42:50.830236467Z" level=info msg="StopPodSandbox for \"05ebdeb5f9509ab11f0283d09aded299f309ee75b4f923bc2617f708e6b6b23e\" returns successfully" Feb 13 19:42:50.832740 containerd[1467]: time="2025-02-13T19:42:50.830956829Z" level=info msg="StopPodSandbox for \"1595aba60392b63c1553a8901e090a43d78b0dd2f506eb2d65ada601c8d5441c\"" Feb 13 19:42:50.832740 containerd[1467]: time="2025-02-13T19:42:50.831080831Z" level=info msg="TearDown network for sandbox \"1595aba60392b63c1553a8901e090a43d78b0dd2f506eb2d65ada601c8d5441c\" successfully" Feb 13 19:42:50.832740 containerd[1467]: time="2025-02-13T19:42:50.831094897Z" level=info msg="StopPodSandbox for \"1595aba60392b63c1553a8901e090a43d78b0dd2f506eb2d65ada601c8d5441c\" returns successfully" Feb 13 19:42:50.835893 containerd[1467]: time="2025-02-13T19:42:50.835858550Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-78bc8bfdb7-xfh2s,Uid:98d92e3a-28ae-4220-b729-b97a4af8635e,Namespace:calico-system,Attempt:2,}" Feb 13 19:42:50.919761 systemd[1]: run-netns-cni\x2d84129c10\x2d2100\x2db41e\x2dd659\x2dda9a7bf6fd90.mount: Deactivated successfully. Feb 13 19:42:50.919963 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-4a5ec2259b92339e47107aa67428fea37834d2811a71b36beb4f666a45ba9d2d-shm.mount: Deactivated successfully. Feb 13 19:42:50.920052 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-f2df588c3d5c7137f1b9c17c9dae5d07407ef6a7c0ff67a6a0358da645106646-shm.mount: Deactivated successfully. Feb 13 19:42:50.920126 systemd[1]: run-netns-cni\x2dd60c09cd\x2d6e89\x2d7331\x2d24d0\x2dc25b62a261f3.mount: Deactivated successfully. Feb 13 19:42:50.920196 systemd[1]: run-netns-cni\x2d48ca0b19\x2dd5a9\x2d17ab\x2de117\x2ddead5bf37500.mount: Deactivated successfully. Feb 13 19:42:50.920272 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-05ebdeb5f9509ab11f0283d09aded299f309ee75b4f923bc2617f708e6b6b23e-shm.mount: Deactivated successfully. Feb 13 19:42:50.920346 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-b2c10597fcbe95ccbb9f31aac916f6313c7fdfbebe339a656d49f8fd5908264e-shm.mount: Deactivated successfully. Feb 13 19:42:50.920422 systemd[1]: run-netns-cni\x2d55dd5bd5\x2d6f72\x2dea5a\x2d481f\x2d5b17b6a342ee.mount: Deactivated successfully. Feb 13 19:42:50.920492 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-0788c7c54d6ffc8b054e92393050c76b7df05c1b22dd254befed893f0acbfdfb-shm.mount: Deactivated successfully. 
Feb 13 19:42:50.920583 systemd[1]: run-netns-cni\x2d3f6200b9\x2d491e\x2df9be\x2df0e1\x2d189f21bb3fd0.mount: Deactivated successfully. Feb 13 19:42:50.920653 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-0692bb77aa8ee3afc089f45a0efb8192616eb703ec7b1a56f19564b86d119ca2-shm.mount: Deactivated successfully. Feb 13 19:42:51.043927 containerd[1467]: time="2025-02-13T19:42:51.043849510Z" level=error msg="Failed to destroy network for sandbox \"3afeeeebc3a2e8bb197bd69bc00bf8abb4465ec0df9f830d66d71042bec6e15f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:42:51.048457 containerd[1467]: time="2025-02-13T19:42:51.048404951Z" level=error msg="encountered an error cleaning up failed sandbox \"3afeeeebc3a2e8bb197bd69bc00bf8abb4465ec0df9f830d66d71042bec6e15f\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:42:51.050386 containerd[1467]: time="2025-02-13T19:42:51.048752342Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-qxpzd,Uid:b54cda8b-5691-4984-90d2-94b24a2518d5,Namespace:calico-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"3afeeeebc3a2e8bb197bd69bc00bf8abb4465ec0df9f830d66d71042bec6e15f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:42:51.052488 kubelet[2656]: E0213 19:42:51.050827 2656 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3afeeeebc3a2e8bb197bd69bc00bf8abb4465ec0df9f830d66d71042bec6e15f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:42:51.052488 kubelet[2656]: E0213 19:42:51.050927 2656 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3afeeeebc3a2e8bb197bd69bc00bf8abb4465ec0df9f830d66d71042bec6e15f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-qxpzd" Feb 13 19:42:51.052488 kubelet[2656]: E0213 19:42:51.050998 2656 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3afeeeebc3a2e8bb197bd69bc00bf8abb4465ec0df9f830d66d71042bec6e15f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-qxpzd" Feb 13 19:42:51.052786 kubelet[2656]: E0213 19:42:51.051081 2656 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-qxpzd_calico-system(b54cda8b-5691-4984-90d2-94b24a2518d5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-qxpzd_calico-system(b54cda8b-5691-4984-90d2-94b24a2518d5)\\\": rpc error: code = Unknown desc = failed 
to setup network for sandbox \\\"3afeeeebc3a2e8bb197bd69bc00bf8abb4465ec0df9f830d66d71042bec6e15f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-qxpzd" podUID="b54cda8b-5691-4984-90d2-94b24a2518d5" Feb 13 19:42:51.119834 containerd[1467]: time="2025-02-13T19:42:51.119681765Z" level=error msg="Failed to destroy network for sandbox \"476eccaf99bc0f4eac505936b7325e87844524bc2c76bb506c6e391d40d89461\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:42:51.121336 containerd[1467]: time="2025-02-13T19:42:51.121301526Z" level=error msg="Failed to destroy network for sandbox \"e421dd80bdff41e3771b21b9aa765d004d27633c2cb4690b7b118748dd6d6fe3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:42:51.126296 containerd[1467]: time="2025-02-13T19:42:51.126239123Z" level=error msg="encountered an error cleaning up failed sandbox \"e421dd80bdff41e3771b21b9aa765d004d27633c2cb4690b7b118748dd6d6fe3\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:42:51.126406 containerd[1467]: time="2025-02-13T19:42:51.126340223Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-44gch,Uid:abdf7ab8-742e-4339-a8d7-ab8bf1f9e1ec,Namespace:kube-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"e421dd80bdff41e3771b21b9aa765d004d27633c2cb4690b7b118748dd6d6fe3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:42:51.126704 kubelet[2656]: E0213 19:42:51.126642 2656 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e421dd80bdff41e3771b21b9aa765d004d27633c2cb4690b7b118748dd6d6fe3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:42:51.126756 kubelet[2656]: E0213 19:42:51.126722 2656 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e421dd80bdff41e3771b21b9aa765d004d27633c2cb4690b7b118748dd6d6fe3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-44gch" Feb 13 19:42:51.126832 kubelet[2656]: E0213 19:42:51.126753 2656 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e421dd80bdff41e3771b21b9aa765d004d27633c2cb4690b7b118748dd6d6fe3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="kube-system/coredns-7db6d8ff4d-44gch" Feb 13 19:42:51.126832 kubelet[2656]: E0213 19:42:51.126807 2656 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-44gch_kube-system(abdf7ab8-742e-4339-a8d7-ab8bf1f9e1ec)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-44gch_kube-system(abdf7ab8-742e-4339-a8d7-ab8bf1f9e1ec)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e421dd80bdff41e3771b21b9aa765d004d27633c2cb4690b7b118748dd6d6fe3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-44gch" podUID="abdf7ab8-742e-4339-a8d7-ab8bf1f9e1ec" Feb 13 19:42:51.127281 containerd[1467]: time="2025-02-13T19:42:51.127165972Z" level=error msg="encountered an error cleaning up failed sandbox \"476eccaf99bc0f4eac505936b7325e87844524bc2c76bb506c6e391d40d89461\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:42:51.127281 containerd[1467]: time="2025-02-13T19:42:51.127216567Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7f9ffdc98-9gqpq,Uid:61a215c3-d6e4-40f4-9806-914857a2ab1f,Namespace:calico-apiserver,Attempt:2,} failed, error" error="failed to setup network for sandbox \"476eccaf99bc0f4eac505936b7325e87844524bc2c76bb506c6e391d40d89461\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:42:51.127731 kubelet[2656]: E0213 19:42:51.127534 2656 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"476eccaf99bc0f4eac505936b7325e87844524bc2c76bb506c6e391d40d89461\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:42:51.127731 kubelet[2656]: E0213 19:42:51.127605 2656 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"476eccaf99bc0f4eac505936b7325e87844524bc2c76bb506c6e391d40d89461\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7f9ffdc98-9gqpq" Feb 13 19:42:51.127731 kubelet[2656]: E0213 19:42:51.127630 2656 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"476eccaf99bc0f4eac505936b7325e87844524bc2c76bb506c6e391d40d89461\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7f9ffdc98-9gqpq" Feb 13 19:42:51.127833 kubelet[2656]: E0213 19:42:51.127686 2656 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7f9ffdc98-9gqpq_calico-apiserver(61a215c3-d6e4-40f4-9806-914857a2ab1f)\" with 
CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7f9ffdc98-9gqpq_calico-apiserver(61a215c3-d6e4-40f4-9806-914857a2ab1f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"476eccaf99bc0f4eac505936b7325e87844524bc2c76bb506c6e391d40d89461\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7f9ffdc98-9gqpq" podUID="61a215c3-d6e4-40f4-9806-914857a2ab1f" Feb 13 19:42:51.129140 containerd[1467]: time="2025-02-13T19:42:51.128663162Z" level=error msg="Failed to destroy network for sandbox \"93f4f84ab8cc75ce5e8c6bcd14e44730cf09bddee838a26c34e67bca1d10c69a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:42:51.132162 containerd[1467]: time="2025-02-13T19:42:51.132110123Z" level=error msg="encountered an error cleaning up failed sandbox \"93f4f84ab8cc75ce5e8c6bcd14e44730cf09bddee838a26c34e67bca1d10c69a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:42:51.132274 containerd[1467]: time="2025-02-13T19:42:51.132212925Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7f9ffdc98-tn7zc,Uid:72c9c3b8-d2cf-4ad0-a722-60cf8109d8ff,Namespace:calico-apiserver,Attempt:2,} failed, error" error="failed to setup network for sandbox \"93f4f84ab8cc75ce5e8c6bcd14e44730cf09bddee838a26c34e67bca1d10c69a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:42:51.132547 kubelet[2656]: E0213 19:42:51.132471 2656 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"93f4f84ab8cc75ce5e8c6bcd14e44730cf09bddee838a26c34e67bca1d10c69a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:42:51.132603 kubelet[2656]: E0213 19:42:51.132564 2656 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"93f4f84ab8cc75ce5e8c6bcd14e44730cf09bddee838a26c34e67bca1d10c69a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7f9ffdc98-tn7zc" Feb 13 19:42:51.132659 kubelet[2656]: E0213 19:42:51.132601 2656 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"93f4f84ab8cc75ce5e8c6bcd14e44730cf09bddee838a26c34e67bca1d10c69a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7f9ffdc98-tn7zc" Feb 13 19:42:51.132736 kubelet[2656]: E0213 19:42:51.132662 2656 pod_workers.go:1298] "Error syncing pod, skipping" 
err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7f9ffdc98-tn7zc_calico-apiserver(72c9c3b8-d2cf-4ad0-a722-60cf8109d8ff)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7f9ffdc98-tn7zc_calico-apiserver(72c9c3b8-d2cf-4ad0-a722-60cf8109d8ff)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"93f4f84ab8cc75ce5e8c6bcd14e44730cf09bddee838a26c34e67bca1d10c69a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7f9ffdc98-tn7zc" podUID="72c9c3b8-d2cf-4ad0-a722-60cf8109d8ff" Feb 13 19:42:51.134363 containerd[1467]: time="2025-02-13T19:42:51.134311635Z" level=error msg="Failed to destroy network for sandbox \"d44e7c809ab337b67b18df9c61d5caa6402c1cc76d619d60e5d1d6da3800eaec\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:42:51.135060 containerd[1467]: time="2025-02-13T19:42:51.134853551Z" level=error msg="encountered an error cleaning up failed sandbox \"d44e7c809ab337b67b18df9c61d5caa6402c1cc76d619d60e5d1d6da3800eaec\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:42:51.135060 containerd[1467]: time="2025-02-13T19:42:51.134925297Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-rkdrq,Uid:9029bfbd-404a-4b40-be12-a20e64469d44,Namespace:kube-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"d44e7c809ab337b67b18df9c61d5caa6402c1cc76d619d60e5d1d6da3800eaec\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:42:51.135155 kubelet[2656]: E0213 19:42:51.135127 2656 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d44e7c809ab337b67b18df9c61d5caa6402c1cc76d619d60e5d1d6da3800eaec\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:42:51.135243 kubelet[2656]: E0213 19:42:51.135211 2656 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d44e7c809ab337b67b18df9c61d5caa6402c1cc76d619d60e5d1d6da3800eaec\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-rkdrq" Feb 13 19:42:51.135382 kubelet[2656]: E0213 19:42:51.135349 2656 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d44e7c809ab337b67b18df9c61d5caa6402c1cc76d619d60e5d1d6da3800eaec\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-rkdrq" Feb 13 
19:42:51.135436 kubelet[2656]: E0213 19:42:51.135406 2656 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-rkdrq_kube-system(9029bfbd-404a-4b40-be12-a20e64469d44)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-rkdrq_kube-system(9029bfbd-404a-4b40-be12-a20e64469d44)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d44e7c809ab337b67b18df9c61d5caa6402c1cc76d619d60e5d1d6da3800eaec\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-rkdrq" podUID="9029bfbd-404a-4b40-be12-a20e64469d44" Feb 13 19:42:51.143078 containerd[1467]: time="2025-02-13T19:42:51.143009621Z" level=error msg="Failed to destroy network for sandbox \"acfb1b7eecc75b14a57b843858b94fa8648758738a7e7d91cd2473dc2127b7ba\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:42:51.144781 containerd[1467]: time="2025-02-13T19:42:51.144676609Z" level=error msg="encountered an error cleaning up failed sandbox \"acfb1b7eecc75b14a57b843858b94fa8648758738a7e7d91cd2473dc2127b7ba\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:42:51.144781 containerd[1467]: time="2025-02-13T19:42:51.144768401Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-78bc8bfdb7-xfh2s,Uid:98d92e3a-28ae-4220-b729-b97a4af8635e,Namespace:calico-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"acfb1b7eecc75b14a57b843858b94fa8648758738a7e7d91cd2473dc2127b7ba\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:42:51.146982 kubelet[2656]: E0213 19:42:51.145016 2656 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"acfb1b7eecc75b14a57b843858b94fa8648758738a7e7d91cd2473dc2127b7ba\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:42:51.146982 kubelet[2656]: E0213 19:42:51.145065 2656 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"acfb1b7eecc75b14a57b843858b94fa8648758738a7e7d91cd2473dc2127b7ba\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-78bc8bfdb7-xfh2s" Feb 13 19:42:51.146982 kubelet[2656]: E0213 19:42:51.145086 2656 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"acfb1b7eecc75b14a57b843858b94fa8648758738a7e7d91cd2473dc2127b7ba\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is 
running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-78bc8bfdb7-xfh2s" Feb 13 19:42:51.147450 kubelet[2656]: E0213 19:42:51.145129 2656 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-78bc8bfdb7-xfh2s_calico-system(98d92e3a-28ae-4220-b729-b97a4af8635e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-78bc8bfdb7-xfh2s_calico-system(98d92e3a-28ae-4220-b729-b97a4af8635e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"acfb1b7eecc75b14a57b843858b94fa8648758738a7e7d91cd2473dc2127b7ba\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-78bc8bfdb7-xfh2s" podUID="98d92e3a-28ae-4220-b729-b97a4af8635e" Feb 13 19:42:51.828548 kubelet[2656]: I0213 19:42:51.828495 2656 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3afeeeebc3a2e8bb197bd69bc00bf8abb4465ec0df9f830d66d71042bec6e15f" Feb 13 19:42:51.829112 containerd[1467]: time="2025-02-13T19:42:51.829047061Z" level=info msg="StopPodSandbox for \"3afeeeebc3a2e8bb197bd69bc00bf8abb4465ec0df9f830d66d71042bec6e15f\"" Feb 13 19:42:51.829388 containerd[1467]: time="2025-02-13T19:42:51.829262816Z" level=info msg="Ensure that sandbox 3afeeeebc3a2e8bb197bd69bc00bf8abb4465ec0df9f830d66d71042bec6e15f in task-service has been cleanup successfully" Feb 13 19:42:51.829685 containerd[1467]: time="2025-02-13T19:42:51.829439829Z" level=info msg="TearDown network for sandbox \"3afeeeebc3a2e8bb197bd69bc00bf8abb4465ec0df9f830d66d71042bec6e15f\" successfully" Feb 13 19:42:51.829685 containerd[1467]: time="2025-02-13T19:42:51.829455899Z" level=info msg="StopPodSandbox for \"3afeeeebc3a2e8bb197bd69bc00bf8abb4465ec0df9f830d66d71042bec6e15f\" returns successfully" Feb 13 19:42:51.829780 containerd[1467]: time="2025-02-13T19:42:51.829746484Z" level=info msg="StopPodSandbox for \"0788c7c54d6ffc8b054e92393050c76b7df05c1b22dd254befed893f0acbfdfb\"" Feb 13 19:42:51.829918 containerd[1467]: time="2025-02-13T19:42:51.829853975Z" level=info msg="TearDown network for sandbox \"0788c7c54d6ffc8b054e92393050c76b7df05c1b22dd254befed893f0acbfdfb\" successfully" Feb 13 19:42:51.829918 containerd[1467]: time="2025-02-13T19:42:51.829908788Z" level=info msg="StopPodSandbox for \"0788c7c54d6ffc8b054e92393050c76b7df05c1b22dd254befed893f0acbfdfb\" returns successfully" Feb 13 19:42:51.830549 containerd[1467]: time="2025-02-13T19:42:51.830389180Z" level=info msg="StopPodSandbox for \"ea7ad1b23abc6667f4563f79978cf671a336c2e70631836fac2ee65439eeb9b4\"" Feb 13 19:42:51.830549 containerd[1467]: time="2025-02-13T19:42:51.830486412Z" level=info msg="TearDown network for sandbox \"ea7ad1b23abc6667f4563f79978cf671a336c2e70631836fac2ee65439eeb9b4\" successfully" Feb 13 19:42:51.830549 containerd[1467]: time="2025-02-13T19:42:51.830509375Z" level=info msg="StopPodSandbox for \"ea7ad1b23abc6667f4563f79978cf671a336c2e70631836fac2ee65439eeb9b4\" returns successfully" Feb 13 19:42:51.830905 kubelet[2656]: I0213 19:42:51.830873 2656 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e421dd80bdff41e3771b21b9aa765d004d27633c2cb4690b7b118748dd6d6fe3" Feb 13 19:42:51.831144 containerd[1467]: time="2025-02-13T19:42:51.831107458Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:csi-node-driver-qxpzd,Uid:b54cda8b-5691-4984-90d2-94b24a2518d5,Namespace:calico-system,Attempt:3,}" Feb 13 19:42:51.831393 containerd[1467]: time="2025-02-13T19:42:51.831349683Z" level=info msg="StopPodSandbox for \"e421dd80bdff41e3771b21b9aa765d004d27633c2cb4690b7b118748dd6d6fe3\"" Feb 13 19:42:51.831555 containerd[1467]: time="2025-02-13T19:42:51.831525653Z" level=info msg="Ensure that sandbox e421dd80bdff41e3771b21b9aa765d004d27633c2cb4690b7b118748dd6d6fe3 in task-service has been cleanup successfully" Feb 13 19:42:51.831733 containerd[1467]: time="2025-02-13T19:42:51.831676005Z" level=info msg="TearDown network for sandbox \"e421dd80bdff41e3771b21b9aa765d004d27633c2cb4690b7b118748dd6d6fe3\" successfully" Feb 13 19:42:51.831811 containerd[1467]: time="2025-02-13T19:42:51.831731159Z" level=info msg="StopPodSandbox for \"e421dd80bdff41e3771b21b9aa765d004d27633c2cb4690b7b118748dd6d6fe3\" returns successfully" Feb 13 19:42:51.832043 containerd[1467]: time="2025-02-13T19:42:51.832015282Z" level=info msg="StopPodSandbox for \"b2c10597fcbe95ccbb9f31aac916f6313c7fdfbebe339a656d49f8fd5908264e\"" Feb 13 19:42:51.832150 containerd[1467]: time="2025-02-13T19:42:51.832110511Z" level=info msg="TearDown network for sandbox \"b2c10597fcbe95ccbb9f31aac916f6313c7fdfbebe339a656d49f8fd5908264e\" successfully" Feb 13 19:42:51.832150 containerd[1467]: time="2025-02-13T19:42:51.832125048Z" level=info msg="StopPodSandbox for \"b2c10597fcbe95ccbb9f31aac916f6313c7fdfbebe339a656d49f8fd5908264e\" returns successfully" Feb 13 19:42:51.832438 containerd[1467]: time="2025-02-13T19:42:51.832411325Z" level=info msg="StopPodSandbox for \"5722d221ab8268ff34fa92ffb089c74929c9052cb6a9b2a3f71f33fca2bec3e9\"" Feb 13 19:42:51.832571 containerd[1467]: time="2025-02-13T19:42:51.832492017Z" level=info msg="TearDown network for sandbox \"5722d221ab8268ff34fa92ffb089c74929c9052cb6a9b2a3f71f33fca2bec3e9\" successfully" Feb 13 19:42:51.832571 containerd[1467]: time="2025-02-13T19:42:51.832538445Z" level=info msg="StopPodSandbox for \"5722d221ab8268ff34fa92ffb089c74929c9052cb6a9b2a3f71f33fca2bec3e9\" returns successfully" Feb 13 19:42:51.832741 kubelet[2656]: E0213 19:42:51.832713 2656 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:42:51.833011 containerd[1467]: time="2025-02-13T19:42:51.832987757Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-44gch,Uid:abdf7ab8-742e-4339-a8d7-ab8bf1f9e1ec,Namespace:kube-system,Attempt:3,}" Feb 13 19:42:51.833220 kubelet[2656]: I0213 19:42:51.833195 2656 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="93f4f84ab8cc75ce5e8c6bcd14e44730cf09bddee838a26c34e67bca1d10c69a" Feb 13 19:42:51.833597 containerd[1467]: time="2025-02-13T19:42:51.833558629Z" level=info msg="StopPodSandbox for \"93f4f84ab8cc75ce5e8c6bcd14e44730cf09bddee838a26c34e67bca1d10c69a\"" Feb 13 19:42:51.833717 containerd[1467]: time="2025-02-13T19:42:51.833702088Z" level=info msg="Ensure that sandbox 93f4f84ab8cc75ce5e8c6bcd14e44730cf09bddee838a26c34e67bca1d10c69a in task-service has been cleanup successfully" Feb 13 19:42:51.833894 containerd[1467]: time="2025-02-13T19:42:51.833840047Z" level=info msg="TearDown network for sandbox \"93f4f84ab8cc75ce5e8c6bcd14e44730cf09bddee838a26c34e67bca1d10c69a\" successfully" Feb 13 19:42:51.833894 containerd[1467]: time="2025-02-13T19:42:51.833854505Z" level=info 
msg="StopPodSandbox for \"93f4f84ab8cc75ce5e8c6bcd14e44730cf09bddee838a26c34e67bca1d10c69a\" returns successfully" Feb 13 19:42:51.834233 containerd[1467]: time="2025-02-13T19:42:51.834182410Z" level=info msg="StopPodSandbox for \"0692bb77aa8ee3afc089f45a0efb8192616eb703ec7b1a56f19564b86d119ca2\"" Feb 13 19:42:51.834440 containerd[1467]: time="2025-02-13T19:42:51.834418223Z" level=info msg="TearDown network for sandbox \"0692bb77aa8ee3afc089f45a0efb8192616eb703ec7b1a56f19564b86d119ca2\" successfully" Feb 13 19:42:51.834440 containerd[1467]: time="2025-02-13T19:42:51.834434864Z" level=info msg="StopPodSandbox for \"0692bb77aa8ee3afc089f45a0efb8192616eb703ec7b1a56f19564b86d119ca2\" returns successfully" Feb 13 19:42:51.834668 kubelet[2656]: I0213 19:42:51.834652 2656 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="acfb1b7eecc75b14a57b843858b94fa8648758738a7e7d91cd2473dc2127b7ba" Feb 13 19:42:51.835217 containerd[1467]: time="2025-02-13T19:42:51.835070858Z" level=info msg="StopPodSandbox for \"acfb1b7eecc75b14a57b843858b94fa8648758738a7e7d91cd2473dc2127b7ba\"" Feb 13 19:42:51.835217 containerd[1467]: time="2025-02-13T19:42:51.835106755Z" level=info msg="StopPodSandbox for \"03abb776a14c6f8522f6625b44caf63f5bebc7398e5051702e3d9968abb0362e\"" Feb 13 19:42:51.835217 containerd[1467]: time="2025-02-13T19:42:51.835200070Z" level=info msg="TearDown network for sandbox \"03abb776a14c6f8522f6625b44caf63f5bebc7398e5051702e3d9968abb0362e\" successfully" Feb 13 19:42:51.835217 containerd[1467]: time="2025-02-13T19:42:51.835210460Z" level=info msg="StopPodSandbox for \"03abb776a14c6f8522f6625b44caf63f5bebc7398e5051702e3d9968abb0362e\" returns successfully" Feb 13 19:42:51.835328 containerd[1467]: time="2025-02-13T19:42:51.835219186Z" level=info msg="Ensure that sandbox acfb1b7eecc75b14a57b843858b94fa8648758738a7e7d91cd2473dc2127b7ba in task-service has been cleanup successfully" Feb 13 19:42:51.835559 containerd[1467]: time="2025-02-13T19:42:51.835540909Z" level=info msg="TearDown network for sandbox \"acfb1b7eecc75b14a57b843858b94fa8648758738a7e7d91cd2473dc2127b7ba\" successfully" Feb 13 19:42:51.835559 containerd[1467]: time="2025-02-13T19:42:51.835556689Z" level=info msg="StopPodSandbox for \"acfb1b7eecc75b14a57b843858b94fa8648758738a7e7d91cd2473dc2127b7ba\" returns successfully" Feb 13 19:42:51.835636 containerd[1467]: time="2025-02-13T19:42:51.835594339Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7f9ffdc98-tn7zc,Uid:72c9c3b8-d2cf-4ad0-a722-60cf8109d8ff,Namespace:calico-apiserver,Attempt:3,}" Feb 13 19:42:51.836180 containerd[1467]: time="2025-02-13T19:42:51.836040627Z" level=info msg="StopPodSandbox for \"05ebdeb5f9509ab11f0283d09aded299f309ee75b4f923bc2617f708e6b6b23e\"" Feb 13 19:42:51.836180 containerd[1467]: time="2025-02-13T19:42:51.836121068Z" level=info msg="TearDown network for sandbox \"05ebdeb5f9509ab11f0283d09aded299f309ee75b4f923bc2617f708e6b6b23e\" successfully" Feb 13 19:42:51.836180 containerd[1467]: time="2025-02-13T19:42:51.836130566Z" level=info msg="StopPodSandbox for \"05ebdeb5f9509ab11f0283d09aded299f309ee75b4f923bc2617f708e6b6b23e\" returns successfully" Feb 13 19:42:51.836518 containerd[1467]: time="2025-02-13T19:42:51.836401746Z" level=info msg="StopPodSandbox for \"1595aba60392b63c1553a8901e090a43d78b0dd2f506eb2d65ada601c8d5441c\"" Feb 13 19:42:51.836518 containerd[1467]: time="2025-02-13T19:42:51.836485092Z" level=info msg="TearDown network for sandbox 
\"1595aba60392b63c1553a8901e090a43d78b0dd2f506eb2d65ada601c8d5441c\" successfully" Feb 13 19:42:51.836614 containerd[1467]: time="2025-02-13T19:42:51.836494560Z" level=info msg="StopPodSandbox for \"1595aba60392b63c1553a8901e090a43d78b0dd2f506eb2d65ada601c8d5441c\" returns successfully" Feb 13 19:42:51.836805 kubelet[2656]: I0213 19:42:51.836778 2656 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="476eccaf99bc0f4eac505936b7325e87844524bc2c76bb506c6e391d40d89461" Feb 13 19:42:51.836990 containerd[1467]: time="2025-02-13T19:42:51.836953101Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-78bc8bfdb7-xfh2s,Uid:98d92e3a-28ae-4220-b729-b97a4af8635e,Namespace:calico-system,Attempt:3,}" Feb 13 19:42:51.837223 containerd[1467]: time="2025-02-13T19:42:51.837195896Z" level=info msg="StopPodSandbox for \"476eccaf99bc0f4eac505936b7325e87844524bc2c76bb506c6e391d40d89461\"" Feb 13 19:42:51.837413 containerd[1467]: time="2025-02-13T19:42:51.837393396Z" level=info msg="Ensure that sandbox 476eccaf99bc0f4eac505936b7325e87844524bc2c76bb506c6e391d40d89461 in task-service has been cleanup successfully" Feb 13 19:42:51.837632 containerd[1467]: time="2025-02-13T19:42:51.837575799Z" level=info msg="TearDown network for sandbox \"476eccaf99bc0f4eac505936b7325e87844524bc2c76bb506c6e391d40d89461\" successfully" Feb 13 19:42:51.837632 containerd[1467]: time="2025-02-13T19:42:51.837593282Z" level=info msg="StopPodSandbox for \"476eccaf99bc0f4eac505936b7325e87844524bc2c76bb506c6e391d40d89461\" returns successfully" Feb 13 19:42:51.837972 containerd[1467]: time="2025-02-13T19:42:51.837943208Z" level=info msg="StopPodSandbox for \"f2df588c3d5c7137f1b9c17c9dae5d07407ef6a7c0ff67a6a0358da645106646\"" Feb 13 19:42:51.838040 containerd[1467]: time="2025-02-13T19:42:51.838024451Z" level=info msg="TearDown network for sandbox \"f2df588c3d5c7137f1b9c17c9dae5d07407ef6a7c0ff67a6a0358da645106646\" successfully" Feb 13 19:42:51.838040 containerd[1467]: time="2025-02-13T19:42:51.838034800Z" level=info msg="StopPodSandbox for \"f2df588c3d5c7137f1b9c17c9dae5d07407ef6a7c0ff67a6a0358da645106646\" returns successfully" Feb 13 19:42:51.838232 containerd[1467]: time="2025-02-13T19:42:51.838206593Z" level=info msg="StopPodSandbox for \"3782781afd896cb7cf8a44c27fcb5a6dbd5316fbd92805b724a8e2c5adbac911\"" Feb 13 19:42:51.838295 kubelet[2656]: I0213 19:42:51.838241 2656 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d44e7c809ab337b67b18df9c61d5caa6402c1cc76d619d60e5d1d6da3800eaec" Feb 13 19:42:51.838335 containerd[1467]: time="2025-02-13T19:42:51.838288216Z" level=info msg="TearDown network for sandbox \"3782781afd896cb7cf8a44c27fcb5a6dbd5316fbd92805b724a8e2c5adbac911\" successfully" Feb 13 19:42:51.838335 containerd[1467]: time="2025-02-13T19:42:51.838298576Z" level=info msg="StopPodSandbox for \"3782781afd896cb7cf8a44c27fcb5a6dbd5316fbd92805b724a8e2c5adbac911\" returns successfully" Feb 13 19:42:51.838758 containerd[1467]: time="2025-02-13T19:42:51.838632091Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7f9ffdc98-9gqpq,Uid:61a215c3-d6e4-40f4-9806-914857a2ab1f,Namespace:calico-apiserver,Attempt:3,}" Feb 13 19:42:51.838758 containerd[1467]: time="2025-02-13T19:42:51.838672257Z" level=info msg="StopPodSandbox for \"d44e7c809ab337b67b18df9c61d5caa6402c1cc76d619d60e5d1d6da3800eaec\"" Feb 13 19:42:51.838828 containerd[1467]: time="2025-02-13T19:42:51.838817710Z" level=info msg="Ensure that sandbox 
d44e7c809ab337b67b18df9c61d5caa6402c1cc76d619d60e5d1d6da3800eaec in task-service has been cleanup successfully" Feb 13 19:42:51.838953 containerd[1467]: time="2025-02-13T19:42:51.838935040Z" level=info msg="TearDown network for sandbox \"d44e7c809ab337b67b18df9c61d5caa6402c1cc76d619d60e5d1d6da3800eaec\" successfully" Feb 13 19:42:51.838953 containerd[1467]: time="2025-02-13T19:42:51.838950920Z" level=info msg="StopPodSandbox for \"d44e7c809ab337b67b18df9c61d5caa6402c1cc76d619d60e5d1d6da3800eaec\" returns successfully" Feb 13 19:42:51.839209 containerd[1467]: time="2025-02-13T19:42:51.839182585Z" level=info msg="StopPodSandbox for \"4a5ec2259b92339e47107aa67428fea37834d2811a71b36beb4f666a45ba9d2d\"" Feb 13 19:42:51.839297 containerd[1467]: time="2025-02-13T19:42:51.839257916Z" level=info msg="TearDown network for sandbox \"4a5ec2259b92339e47107aa67428fea37834d2811a71b36beb4f666a45ba9d2d\" successfully" Feb 13 19:42:51.839297 containerd[1467]: time="2025-02-13T19:42:51.839278075Z" level=info msg="StopPodSandbox for \"4a5ec2259b92339e47107aa67428fea37834d2811a71b36beb4f666a45ba9d2d\" returns successfully" Feb 13 19:42:51.839542 containerd[1467]: time="2025-02-13T19:42:51.839520069Z" level=info msg="StopPodSandbox for \"15f5f0073d3a00815e2691e423b79fb9f7b74ca3dca7c8fc2f5ee74434ddc4d5\"" Feb 13 19:42:51.839614 containerd[1467]: time="2025-02-13T19:42:51.839599267Z" level=info msg="TearDown network for sandbox \"15f5f0073d3a00815e2691e423b79fb9f7b74ca3dca7c8fc2f5ee74434ddc4d5\" successfully" Feb 13 19:42:51.839647 containerd[1467]: time="2025-02-13T19:42:51.839612522Z" level=info msg="StopPodSandbox for \"15f5f0073d3a00815e2691e423b79fb9f7b74ca3dca7c8fc2f5ee74434ddc4d5\" returns successfully" Feb 13 19:42:51.839802 kubelet[2656]: E0213 19:42:51.839777 2656 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:42:51.840003 containerd[1467]: time="2025-02-13T19:42:51.839977998Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-rkdrq,Uid:9029bfbd-404a-4b40-be12-a20e64469d44,Namespace:kube-system,Attempt:3,}" Feb 13 19:42:51.909950 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-93f4f84ab8cc75ce5e8c6bcd14e44730cf09bddee838a26c34e67bca1d10c69a-shm.mount: Deactivated successfully. Feb 13 19:42:51.910080 systemd[1]: run-netns-cni\x2d29a5e2e5\x2dcb83\x2d0efb\x2df2ee\x2d5ac436ccfbdc.mount: Deactivated successfully. Feb 13 19:42:51.910153 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-acfb1b7eecc75b14a57b843858b94fa8648758738a7e7d91cd2473dc2127b7ba-shm.mount: Deactivated successfully. Feb 13 19:42:51.910232 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-e421dd80bdff41e3771b21b9aa765d004d27633c2cb4690b7b118748dd6d6fe3-shm.mount: Deactivated successfully. Feb 13 19:42:51.910306 systemd[1]: run-netns-cni\x2d129c23a2\x2d7826\x2d5f86\x2dce1a\x2dfa3d18324ac1.mount: Deactivated successfully. Feb 13 19:42:51.910378 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-3afeeeebc3a2e8bb197bd69bc00bf8abb4465ec0df9f830d66d71042bec6e15f-shm.mount: Deactivated successfully. Feb 13 19:42:51.910462 systemd[1]: run-netns-cni\x2d2bd016c7\x2d33a7\x2d0f80\x2de0bc\x2d370ad53ff7ab.mount: Deactivated successfully. Feb 13 19:42:51.910546 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-476eccaf99bc0f4eac505936b7325e87844524bc2c76bb506c6e391d40d89461-shm.mount: Deactivated successfully. 
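The cleanup-and-retry rhythm above repeats for every affected pod: kubelet stops the failed sandbox, containerd tears down its network, systemd deactivates the per-sandbox netns and shm mount units, and a fresh RunPodSandbox call is issued with the Attempt counter bumped (2, then 3, then 4 in this log). A minimal sketch of that loop, assuming a hypothetical runSandbox stand-in for the CRI call and ignoring kubelet's real backoff machinery, might look like:

```go
package main

import (
	"errors"
	"fmt"
	"time"
)

// errCNINotReady stands in for the failure this log keeps reporting.
var errCNINotReady = errors.New(
	`plugin type="calico" failed (add): stat /var/lib/calico/nodename: no such file or directory`)

// runSandbox is a hypothetical stand-in for the RunPodSandbox CRI call; here
// it always fails, as it does in this log until calico/node comes up.
func runSandbox(pod string, attempt int) error {
	return fmt.Errorf("RunPodSandbox %s (attempt %d): %w", pod, attempt, errCNINotReady)
}

// teardown stands in for the StopPodSandbox / TearDown network / netns and
// shm mount cleanup that the journal shows between attempts.
func teardown(pod string, attempt int) {
	fmt.Printf("tearing down failed sandbox for %s (attempt %d)\n", pod, attempt)
}

func main() {
	pod := "calico-system/csi-node-driver-qxpzd"
	for attempt := 2; attempt <= 4; attempt++ {
		if err := runSandbox(pod, attempt); err != nil {
			fmt.Println("error syncing pod, skipping:", err)
			teardown(pod, attempt)
			time.Sleep(time.Second) // the real kubelet applies its own backoff
			continue
		}
		fmt.Println("sandbox created for", pod)
		return
	}
}
```

Nothing in the retries themselves fixes the missing file; the loop only converges once calico/node is running and /var/lib/calico/nodename exists, as the attempt-3 failures recorded next confirm.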
Feb 13 19:42:51.910621 systemd[1]: run-netns-cni\x2d084f6e7f\x2df597\x2d3fe9\x2de240\x2db6d84150eb14.mount: Deactivated successfully. Feb 13 19:42:51.910692 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-d44e7c809ab337b67b18df9c61d5caa6402c1cc76d619d60e5d1d6da3800eaec-shm.mount: Deactivated successfully. Feb 13 19:42:52.565084 containerd[1467]: time="2025-02-13T19:42:52.565019653Z" level=error msg="Failed to destroy network for sandbox \"d33b63469e8e0eb1922ed670a85cd121776c55f33c9094574816b7c1ea0694de\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:42:52.566286 containerd[1467]: time="2025-02-13T19:42:52.566160134Z" level=error msg="Failed to destroy network for sandbox \"b753c98113d0daab22b5b0d48c1bf46fc2bd90d7b432f61d98419e4e686a742e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:42:52.568971 containerd[1467]: time="2025-02-13T19:42:52.568722113Z" level=error msg="encountered an error cleaning up failed sandbox \"b753c98113d0daab22b5b0d48c1bf46fc2bd90d7b432f61d98419e4e686a742e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:42:52.568971 containerd[1467]: time="2025-02-13T19:42:52.568834032Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-44gch,Uid:abdf7ab8-742e-4339-a8d7-ab8bf1f9e1ec,Namespace:kube-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"b753c98113d0daab22b5b0d48c1bf46fc2bd90d7b432f61d98419e4e686a742e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:42:52.569218 kubelet[2656]: E0213 19:42:52.569172 2656 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b753c98113d0daab22b5b0d48c1bf46fc2bd90d7b432f61d98419e4e686a742e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:42:52.569341 kubelet[2656]: E0213 19:42:52.569247 2656 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b753c98113d0daab22b5b0d48c1bf46fc2bd90d7b432f61d98419e4e686a742e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-44gch" Feb 13 19:42:52.569341 kubelet[2656]: E0213 19:42:52.569269 2656 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b753c98113d0daab22b5b0d48c1bf46fc2bd90d7b432f61d98419e4e686a742e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-44gch" Feb 13 19:42:52.569341 
kubelet[2656]: E0213 19:42:52.569316 2656 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-44gch_kube-system(abdf7ab8-742e-4339-a8d7-ab8bf1f9e1ec)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-44gch_kube-system(abdf7ab8-742e-4339-a8d7-ab8bf1f9e1ec)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b753c98113d0daab22b5b0d48c1bf46fc2bd90d7b432f61d98419e4e686a742e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-44gch" podUID="abdf7ab8-742e-4339-a8d7-ab8bf1f9e1ec" Feb 13 19:42:52.569883 containerd[1467]: time="2025-02-13T19:42:52.569852163Z" level=error msg="encountered an error cleaning up failed sandbox \"d33b63469e8e0eb1922ed670a85cd121776c55f33c9094574816b7c1ea0694de\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:42:52.570017 containerd[1467]: time="2025-02-13T19:42:52.569989010Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7f9ffdc98-9gqpq,Uid:61a215c3-d6e4-40f4-9806-914857a2ab1f,Namespace:calico-apiserver,Attempt:3,} failed, error" error="failed to setup network for sandbox \"d33b63469e8e0eb1922ed670a85cd121776c55f33c9094574816b7c1ea0694de\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:42:52.570262 kubelet[2656]: E0213 19:42:52.570235 2656 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d33b63469e8e0eb1922ed670a85cd121776c55f33c9094574816b7c1ea0694de\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:42:52.570396 kubelet[2656]: E0213 19:42:52.570373 2656 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d33b63469e8e0eb1922ed670a85cd121776c55f33c9094574816b7c1ea0694de\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7f9ffdc98-9gqpq" Feb 13 19:42:52.570585 kubelet[2656]: E0213 19:42:52.570469 2656 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d33b63469e8e0eb1922ed670a85cd121776c55f33c9094574816b7c1ea0694de\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7f9ffdc98-9gqpq" Feb 13 19:42:52.570585 kubelet[2656]: E0213 19:42:52.570544 2656 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7f9ffdc98-9gqpq_calico-apiserver(61a215c3-d6e4-40f4-9806-914857a2ab1f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"calico-apiserver-7f9ffdc98-9gqpq_calico-apiserver(61a215c3-d6e4-40f4-9806-914857a2ab1f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d33b63469e8e0eb1922ed670a85cd121776c55f33c9094574816b7c1ea0694de\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7f9ffdc98-9gqpq" podUID="61a215c3-d6e4-40f4-9806-914857a2ab1f" Feb 13 19:42:52.577072 containerd[1467]: time="2025-02-13T19:42:52.577016399Z" level=error msg="Failed to destroy network for sandbox \"5717d4a7bf11003fbd11120e4e3de186575be86f95283ba329ec904ae917ec4c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:42:52.579087 containerd[1467]: time="2025-02-13T19:42:52.578962722Z" level=error msg="encountered an error cleaning up failed sandbox \"5717d4a7bf11003fbd11120e4e3de186575be86f95283ba329ec904ae917ec4c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:42:52.579270 containerd[1467]: time="2025-02-13T19:42:52.579249932Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7f9ffdc98-tn7zc,Uid:72c9c3b8-d2cf-4ad0-a722-60cf8109d8ff,Namespace:calico-apiserver,Attempt:3,} failed, error" error="failed to setup network for sandbox \"5717d4a7bf11003fbd11120e4e3de186575be86f95283ba329ec904ae917ec4c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:42:52.579855 kubelet[2656]: E0213 19:42:52.579664 2656 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5717d4a7bf11003fbd11120e4e3de186575be86f95283ba329ec904ae917ec4c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:42:52.579855 kubelet[2656]: E0213 19:42:52.579742 2656 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5717d4a7bf11003fbd11120e4e3de186575be86f95283ba329ec904ae917ec4c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7f9ffdc98-tn7zc" Feb 13 19:42:52.579855 kubelet[2656]: E0213 19:42:52.579763 2656 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5717d4a7bf11003fbd11120e4e3de186575be86f95283ba329ec904ae917ec4c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7f9ffdc98-tn7zc" Feb 13 19:42:52.579995 kubelet[2656]: E0213 19:42:52.579808 2656 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for 
\"calico-apiserver-7f9ffdc98-tn7zc_calico-apiserver(72c9c3b8-d2cf-4ad0-a722-60cf8109d8ff)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7f9ffdc98-tn7zc_calico-apiserver(72c9c3b8-d2cf-4ad0-a722-60cf8109d8ff)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5717d4a7bf11003fbd11120e4e3de186575be86f95283ba329ec904ae917ec4c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7f9ffdc98-tn7zc" podUID="72c9c3b8-d2cf-4ad0-a722-60cf8109d8ff" Feb 13 19:42:52.599026 containerd[1467]: time="2025-02-13T19:42:52.598963151Z" level=error msg="Failed to destroy network for sandbox \"113db1c2c05056763e1dcab40407dfe511d3b110c8de33b9c0bb45d929623fde\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:42:52.600050 containerd[1467]: time="2025-02-13T19:42:52.599724871Z" level=error msg="encountered an error cleaning up failed sandbox \"113db1c2c05056763e1dcab40407dfe511d3b110c8de33b9c0bb45d929623fde\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:42:52.600050 containerd[1467]: time="2025-02-13T19:42:52.599806344Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-78bc8bfdb7-xfh2s,Uid:98d92e3a-28ae-4220-b729-b97a4af8635e,Namespace:calico-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"113db1c2c05056763e1dcab40407dfe511d3b110c8de33b9c0bb45d929623fde\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:42:52.600212 kubelet[2656]: E0213 19:42:52.600102 2656 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"113db1c2c05056763e1dcab40407dfe511d3b110c8de33b9c0bb45d929623fde\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:42:52.600212 kubelet[2656]: E0213 19:42:52.600177 2656 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"113db1c2c05056763e1dcab40407dfe511d3b110c8de33b9c0bb45d929623fde\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-78bc8bfdb7-xfh2s" Feb 13 19:42:52.600212 kubelet[2656]: E0213 19:42:52.600200 2656 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"113db1c2c05056763e1dcab40407dfe511d3b110c8de33b9c0bb45d929623fde\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-78bc8bfdb7-xfh2s" Feb 13 
19:42:52.600347 kubelet[2656]: E0213 19:42:52.600261 2656 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-78bc8bfdb7-xfh2s_calico-system(98d92e3a-28ae-4220-b729-b97a4af8635e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-78bc8bfdb7-xfh2s_calico-system(98d92e3a-28ae-4220-b729-b97a4af8635e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"113db1c2c05056763e1dcab40407dfe511d3b110c8de33b9c0bb45d929623fde\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-78bc8bfdb7-xfh2s" podUID="98d92e3a-28ae-4220-b729-b97a4af8635e" Feb 13 19:42:52.605958 containerd[1467]: time="2025-02-13T19:42:52.605751602Z" level=error msg="Failed to destroy network for sandbox \"e3e45c614369b5fa892d59c6ed2c11a47ce68f02eafdafdcecd31a42bec5e8ee\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:42:52.606816 containerd[1467]: time="2025-02-13T19:42:52.606774010Z" level=error msg="encountered an error cleaning up failed sandbox \"e3e45c614369b5fa892d59c6ed2c11a47ce68f02eafdafdcecd31a42bec5e8ee\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:42:52.606889 containerd[1467]: time="2025-02-13T19:42:52.606858469Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-rkdrq,Uid:9029bfbd-404a-4b40-be12-a20e64469d44,Namespace:kube-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"e3e45c614369b5fa892d59c6ed2c11a47ce68f02eafdafdcecd31a42bec5e8ee\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:42:52.607533 kubelet[2656]: E0213 19:42:52.607136 2656 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e3e45c614369b5fa892d59c6ed2c11a47ce68f02eafdafdcecd31a42bec5e8ee\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:42:52.607533 kubelet[2656]: E0213 19:42:52.607217 2656 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e3e45c614369b5fa892d59c6ed2c11a47ce68f02eafdafdcecd31a42bec5e8ee\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-rkdrq" Feb 13 19:42:52.607533 kubelet[2656]: E0213 19:42:52.607253 2656 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e3e45c614369b5fa892d59c6ed2c11a47ce68f02eafdafdcecd31a42bec5e8ee\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-rkdrq" Feb 13 19:42:52.607788 kubelet[2656]: E0213 19:42:52.607313 2656 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-rkdrq_kube-system(9029bfbd-404a-4b40-be12-a20e64469d44)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-rkdrq_kube-system(9029bfbd-404a-4b40-be12-a20e64469d44)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e3e45c614369b5fa892d59c6ed2c11a47ce68f02eafdafdcecd31a42bec5e8ee\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-rkdrq" podUID="9029bfbd-404a-4b40-be12-a20e64469d44" Feb 13 19:42:52.624717 containerd[1467]: time="2025-02-13T19:42:52.624645372Z" level=error msg="Failed to destroy network for sandbox \"db3c78417459407092f7d7162890f98e4c33c08a46986bc8710e3bf57515370d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:42:52.625187 containerd[1467]: time="2025-02-13T19:42:52.625154468Z" level=error msg="encountered an error cleaning up failed sandbox \"db3c78417459407092f7d7162890f98e4c33c08a46986bc8710e3bf57515370d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:42:52.625269 containerd[1467]: time="2025-02-13T19:42:52.625239057Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-qxpzd,Uid:b54cda8b-5691-4984-90d2-94b24a2518d5,Namespace:calico-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"db3c78417459407092f7d7162890f98e4c33c08a46986bc8710e3bf57515370d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:42:52.625592 kubelet[2656]: E0213 19:42:52.625528 2656 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"db3c78417459407092f7d7162890f98e4c33c08a46986bc8710e3bf57515370d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:42:52.625592 kubelet[2656]: E0213 19:42:52.625609 2656 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"db3c78417459407092f7d7162890f98e4c33c08a46986bc8710e3bf57515370d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-qxpzd" Feb 13 19:42:52.625800 kubelet[2656]: E0213 19:42:52.625633 2656 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"db3c78417459407092f7d7162890f98e4c33c08a46986bc8710e3bf57515370d\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-qxpzd" Feb 13 19:42:52.625800 kubelet[2656]: E0213 19:42:52.625682 2656 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-qxpzd_calico-system(b54cda8b-5691-4984-90d2-94b24a2518d5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-qxpzd_calico-system(b54cda8b-5691-4984-90d2-94b24a2518d5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"db3c78417459407092f7d7162890f98e4c33c08a46986bc8710e3bf57515370d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-qxpzd" podUID="b54cda8b-5691-4984-90d2-94b24a2518d5" Feb 13 19:42:52.844239 kubelet[2656]: I0213 19:42:52.844115 2656 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b753c98113d0daab22b5b0d48c1bf46fc2bd90d7b432f61d98419e4e686a742e" Feb 13 19:42:52.845179 containerd[1467]: time="2025-02-13T19:42:52.845122810Z" level=info msg="StopPodSandbox for \"b753c98113d0daab22b5b0d48c1bf46fc2bd90d7b432f61d98419e4e686a742e\"" Feb 13 19:42:52.845446 containerd[1467]: time="2025-02-13T19:42:52.845393158Z" level=info msg="Ensure that sandbox b753c98113d0daab22b5b0d48c1bf46fc2bd90d7b432f61d98419e4e686a742e in task-service has been cleanup successfully" Feb 13 19:42:52.845969 containerd[1467]: time="2025-02-13T19:42:52.845704552Z" level=info msg="TearDown network for sandbox \"b753c98113d0daab22b5b0d48c1bf46fc2bd90d7b432f61d98419e4e686a742e\" successfully" Feb 13 19:42:52.845969 containerd[1467]: time="2025-02-13T19:42:52.845723668Z" level=info msg="StopPodSandbox for \"b753c98113d0daab22b5b0d48c1bf46fc2bd90d7b432f61d98419e4e686a742e\" returns successfully" Feb 13 19:42:52.847680 containerd[1467]: time="2025-02-13T19:42:52.847642990Z" level=info msg="StopPodSandbox for \"e421dd80bdff41e3771b21b9aa765d004d27633c2cb4690b7b118748dd6d6fe3\"" Feb 13 19:42:52.847816 containerd[1467]: time="2025-02-13T19:42:52.847768726Z" level=info msg="TearDown network for sandbox \"e421dd80bdff41e3771b21b9aa765d004d27633c2cb4690b7b118748dd6d6fe3\" successfully" Feb 13 19:42:52.847816 containerd[1467]: time="2025-02-13T19:42:52.847780929Z" level=info msg="StopPodSandbox for \"e421dd80bdff41e3771b21b9aa765d004d27633c2cb4690b7b118748dd6d6fe3\" returns successfully" Feb 13 19:42:52.848345 containerd[1467]: time="2025-02-13T19:42:52.848307718Z" level=info msg="StopPodSandbox for \"b2c10597fcbe95ccbb9f31aac916f6313c7fdfbebe339a656d49f8fd5908264e\"" Feb 13 19:42:52.848486 containerd[1467]: time="2025-02-13T19:42:52.848437391Z" level=info msg="TearDown network for sandbox \"b2c10597fcbe95ccbb9f31aac916f6313c7fdfbebe339a656d49f8fd5908264e\" successfully" Feb 13 19:42:52.848486 containerd[1467]: time="2025-02-13T19:42:52.848448642Z" level=info msg="StopPodSandbox for \"b2c10597fcbe95ccbb9f31aac916f6313c7fdfbebe339a656d49f8fd5908264e\" returns successfully" Feb 13 19:42:52.848565 kubelet[2656]: I0213 19:42:52.848535 2656 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5717d4a7bf11003fbd11120e4e3de186575be86f95283ba329ec904ae917ec4c" Feb 13 19:42:52.848834 containerd[1467]: time="2025-02-13T19:42:52.848792558Z" level=info msg="StopPodSandbox for 
\"5722d221ab8268ff34fa92ffb089c74929c9052cb6a9b2a3f71f33fca2bec3e9\"" Feb 13 19:42:52.848891 containerd[1467]: time="2025-02-13T19:42:52.848874431Z" level=info msg="TearDown network for sandbox \"5722d221ab8268ff34fa92ffb089c74929c9052cb6a9b2a3f71f33fca2bec3e9\" successfully" Feb 13 19:42:52.848918 containerd[1467]: time="2025-02-13T19:42:52.848888327Z" level=info msg="StopPodSandbox for \"5722d221ab8268ff34fa92ffb089c74929c9052cb6a9b2a3f71f33fca2bec3e9\" returns successfully" Feb 13 19:42:52.849095 kubelet[2656]: E0213 19:42:52.849064 2656 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:42:52.849262 containerd[1467]: time="2025-02-13T19:42:52.849220010Z" level=info msg="StopPodSandbox for \"5717d4a7bf11003fbd11120e4e3de186575be86f95283ba329ec904ae917ec4c\"" Feb 13 19:42:52.849413 containerd[1467]: time="2025-02-13T19:42:52.849319337Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-44gch,Uid:abdf7ab8-742e-4339-a8d7-ab8bf1f9e1ec,Namespace:kube-system,Attempt:4,}" Feb 13 19:42:52.849450 containerd[1467]: time="2025-02-13T19:42:52.849419294Z" level=info msg="Ensure that sandbox 5717d4a7bf11003fbd11120e4e3de186575be86f95283ba329ec904ae917ec4c in task-service has been cleanup successfully" Feb 13 19:42:52.849994 containerd[1467]: time="2025-02-13T19:42:52.849851155Z" level=info msg="TearDown network for sandbox \"5717d4a7bf11003fbd11120e4e3de186575be86f95283ba329ec904ae917ec4c\" successfully" Feb 13 19:42:52.849994 containerd[1467]: time="2025-02-13T19:42:52.849870131Z" level=info msg="StopPodSandbox for \"5717d4a7bf11003fbd11120e4e3de186575be86f95283ba329ec904ae917ec4c\" returns successfully" Feb 13 19:42:52.850534 containerd[1467]: time="2025-02-13T19:42:52.850416697Z" level=info msg="StopPodSandbox for \"93f4f84ab8cc75ce5e8c6bcd14e44730cf09bddee838a26c34e67bca1d10c69a\"" Feb 13 19:42:52.850534 containerd[1467]: time="2025-02-13T19:42:52.850495374Z" level=info msg="TearDown network for sandbox \"93f4f84ab8cc75ce5e8c6bcd14e44730cf09bddee838a26c34e67bca1d10c69a\" successfully" Feb 13 19:42:52.850534 containerd[1467]: time="2025-02-13T19:42:52.850529528Z" level=info msg="StopPodSandbox for \"93f4f84ab8cc75ce5e8c6bcd14e44730cf09bddee838a26c34e67bca1d10c69a\" returns successfully" Feb 13 19:42:52.851121 containerd[1467]: time="2025-02-13T19:42:52.851100360Z" level=info msg="StopPodSandbox for \"0692bb77aa8ee3afc089f45a0efb8192616eb703ec7b1a56f19564b86d119ca2\"" Feb 13 19:42:52.851204 containerd[1467]: time="2025-02-13T19:42:52.851184447Z" level=info msg="TearDown network for sandbox \"0692bb77aa8ee3afc089f45a0efb8192616eb703ec7b1a56f19564b86d119ca2\" successfully" Feb 13 19:42:52.851204 containerd[1467]: time="2025-02-13T19:42:52.851198794Z" level=info msg="StopPodSandbox for \"0692bb77aa8ee3afc089f45a0efb8192616eb703ec7b1a56f19564b86d119ca2\" returns successfully" Feb 13 19:42:52.851592 containerd[1467]: time="2025-02-13T19:42:52.851542509Z" level=info msg="StopPodSandbox for \"03abb776a14c6f8522f6625b44caf63f5bebc7398e5051702e3d9968abb0362e\"" Feb 13 19:42:52.852119 containerd[1467]: time="2025-02-13T19:42:52.851651393Z" level=info msg="TearDown network for sandbox \"03abb776a14c6f8522f6625b44caf63f5bebc7398e5051702e3d9968abb0362e\" successfully" Feb 13 19:42:52.852119 containerd[1467]: time="2025-02-13T19:42:52.851669136Z" level=info msg="StopPodSandbox for \"03abb776a14c6f8522f6625b44caf63f5bebc7398e5051702e3d9968abb0362e\" returns 
successfully" Feb 13 19:42:52.852119 containerd[1467]: time="2025-02-13T19:42:52.851999657Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7f9ffdc98-tn7zc,Uid:72c9c3b8-d2cf-4ad0-a722-60cf8109d8ff,Namespace:calico-apiserver,Attempt:4,}" Feb 13 19:42:52.852210 kubelet[2656]: I0213 19:42:52.851815 2656 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="113db1c2c05056763e1dcab40407dfe511d3b110c8de33b9c0bb45d929623fde" Feb 13 19:42:52.852763 containerd[1467]: time="2025-02-13T19:42:52.852737161Z" level=info msg="StopPodSandbox for \"113db1c2c05056763e1dcab40407dfe511d3b110c8de33b9c0bb45d929623fde\"" Feb 13 19:42:52.852998 containerd[1467]: time="2025-02-13T19:42:52.852970038Z" level=info msg="Ensure that sandbox 113db1c2c05056763e1dcab40407dfe511d3b110c8de33b9c0bb45d929623fde in task-service has been cleanup successfully" Feb 13 19:42:52.853381 containerd[1467]: time="2025-02-13T19:42:52.853297012Z" level=info msg="TearDown network for sandbox \"113db1c2c05056763e1dcab40407dfe511d3b110c8de33b9c0bb45d929623fde\" successfully" Feb 13 19:42:52.853381 containerd[1467]: time="2025-02-13T19:42:52.853316439Z" level=info msg="StopPodSandbox for \"113db1c2c05056763e1dcab40407dfe511d3b110c8de33b9c0bb45d929623fde\" returns successfully" Feb 13 19:42:52.853729 containerd[1467]: time="2025-02-13T19:42:52.853696331Z" level=info msg="StopPodSandbox for \"acfb1b7eecc75b14a57b843858b94fa8648758738a7e7d91cd2473dc2127b7ba\"" Feb 13 19:42:52.853824 containerd[1467]: time="2025-02-13T19:42:52.853804776Z" level=info msg="TearDown network for sandbox \"acfb1b7eecc75b14a57b843858b94fa8648758738a7e7d91cd2473dc2127b7ba\" successfully" Feb 13 19:42:52.853824 containerd[1467]: time="2025-02-13T19:42:52.853820665Z" level=info msg="StopPodSandbox for \"acfb1b7eecc75b14a57b843858b94fa8648758738a7e7d91cd2473dc2127b7ba\" returns successfully" Feb 13 19:42:52.854008 containerd[1467]: time="2025-02-13T19:42:52.853985104Z" level=info msg="StopPodSandbox for \"05ebdeb5f9509ab11f0283d09aded299f309ee75b4f923bc2617f708e6b6b23e\"" Feb 13 19:42:52.854051 kubelet[2656]: I0213 19:42:52.854000 2656 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d33b63469e8e0eb1922ed670a85cd121776c55f33c9094574816b7c1ea0694de" Feb 13 19:42:52.854115 containerd[1467]: time="2025-02-13T19:42:52.854063641Z" level=info msg="TearDown network for sandbox \"05ebdeb5f9509ab11f0283d09aded299f309ee75b4f923bc2617f708e6b6b23e\" successfully" Feb 13 19:42:52.854115 containerd[1467]: time="2025-02-13T19:42:52.854073279Z" level=info msg="StopPodSandbox for \"05ebdeb5f9509ab11f0283d09aded299f309ee75b4f923bc2617f708e6b6b23e\" returns successfully" Feb 13 19:42:52.854479 containerd[1467]: time="2025-02-13T19:42:52.854443564Z" level=info msg="StopPodSandbox for \"1595aba60392b63c1553a8901e090a43d78b0dd2f506eb2d65ada601c8d5441c\"" Feb 13 19:42:52.854585 containerd[1467]: time="2025-02-13T19:42:52.854562237Z" level=info msg="TearDown network for sandbox \"1595aba60392b63c1553a8901e090a43d78b0dd2f506eb2d65ada601c8d5441c\" successfully" Feb 13 19:42:52.854585 containerd[1467]: time="2025-02-13T19:42:52.854580461Z" level=info msg="StopPodSandbox for \"1595aba60392b63c1553a8901e090a43d78b0dd2f506eb2d65ada601c8d5441c\" returns successfully" Feb 13 19:42:52.854658 containerd[1467]: time="2025-02-13T19:42:52.854638400Z" level=info msg="StopPodSandbox for \"d33b63469e8e0eb1922ed670a85cd121776c55f33c9094574816b7c1ea0694de\"" Feb 13 19:42:52.854894 containerd[1467]: 
time="2025-02-13T19:42:52.854842422Z" level=info msg="Ensure that sandbox d33b63469e8e0eb1922ed670a85cd121776c55f33c9094574816b7c1ea0694de in task-service has been cleanup successfully" Feb 13 19:42:52.855340 containerd[1467]: time="2025-02-13T19:42:52.855054891Z" level=info msg="TearDown network for sandbox \"d33b63469e8e0eb1922ed670a85cd121776c55f33c9094574816b7c1ea0694de\" successfully" Feb 13 19:42:52.855340 containerd[1467]: time="2025-02-13T19:42:52.855072044Z" level=info msg="StopPodSandbox for \"d33b63469e8e0eb1922ed670a85cd121776c55f33c9094574816b7c1ea0694de\" returns successfully" Feb 13 19:42:52.855340 containerd[1467]: time="2025-02-13T19:42:52.855158887Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-78bc8bfdb7-xfh2s,Uid:98d92e3a-28ae-4220-b729-b97a4af8635e,Namespace:calico-system,Attempt:4,}" Feb 13 19:42:52.855464 containerd[1467]: time="2025-02-13T19:42:52.855438972Z" level=info msg="StopPodSandbox for \"476eccaf99bc0f4eac505936b7325e87844524bc2c76bb506c6e391d40d89461\"" Feb 13 19:42:52.855591 containerd[1467]: time="2025-02-13T19:42:52.855570940Z" level=info msg="TearDown network for sandbox \"476eccaf99bc0f4eac505936b7325e87844524bc2c76bb506c6e391d40d89461\" successfully" Feb 13 19:42:52.855591 containerd[1467]: time="2025-02-13T19:42:52.855587711Z" level=info msg="StopPodSandbox for \"476eccaf99bc0f4eac505936b7325e87844524bc2c76bb506c6e391d40d89461\" returns successfully" Feb 13 19:42:52.856224 containerd[1467]: time="2025-02-13T19:42:52.856012449Z" level=info msg="StopPodSandbox for \"f2df588c3d5c7137f1b9c17c9dae5d07407ef6a7c0ff67a6a0358da645106646\"" Feb 13 19:42:52.856304 containerd[1467]: time="2025-02-13T19:42:52.856280802Z" level=info msg="TearDown network for sandbox \"f2df588c3d5c7137f1b9c17c9dae5d07407ef6a7c0ff67a6a0358da645106646\" successfully" Feb 13 19:42:52.856304 containerd[1467]: time="2025-02-13T19:42:52.856299487Z" level=info msg="StopPodSandbox for \"f2df588c3d5c7137f1b9c17c9dae5d07407ef6a7c0ff67a6a0358da645106646\" returns successfully" Feb 13 19:42:52.856955 containerd[1467]: time="2025-02-13T19:42:52.856922256Z" level=info msg="StopPodSandbox for \"3782781afd896cb7cf8a44c27fcb5a6dbd5316fbd92805b724a8e2c5adbac911\"" Feb 13 19:42:52.857046 containerd[1467]: time="2025-02-13T19:42:52.857023366Z" level=info msg="TearDown network for sandbox \"3782781afd896cb7cf8a44c27fcb5a6dbd5316fbd92805b724a8e2c5adbac911\" successfully" Feb 13 19:42:52.857046 containerd[1467]: time="2025-02-13T19:42:52.857040828Z" level=info msg="StopPodSandbox for \"3782781afd896cb7cf8a44c27fcb5a6dbd5316fbd92805b724a8e2c5adbac911\" returns successfully" Feb 13 19:42:52.857340 kubelet[2656]: I0213 19:42:52.857316 2656 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e3e45c614369b5fa892d59c6ed2c11a47ce68f02eafdafdcecd31a42bec5e8ee" Feb 13 19:42:52.857862 containerd[1467]: time="2025-02-13T19:42:52.857836603Z" level=info msg="StopPodSandbox for \"e3e45c614369b5fa892d59c6ed2c11a47ce68f02eafdafdcecd31a42bec5e8ee\"" Feb 13 19:42:52.857900 containerd[1467]: time="2025-02-13T19:42:52.857865948Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7f9ffdc98-9gqpq,Uid:61a215c3-d6e4-40f4-9806-914857a2ab1f,Namespace:calico-apiserver,Attempt:4,}" Feb 13 19:42:52.860225 kubelet[2656]: I0213 19:42:52.860192 2656 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="db3c78417459407092f7d7162890f98e4c33c08a46986bc8710e3bf57515370d" Feb 13 19:42:52.861234 containerd[1467]: 
time="2025-02-13T19:42:52.860953672Z" level=info msg="StopPodSandbox for \"db3c78417459407092f7d7162890f98e4c33c08a46986bc8710e3bf57515370d\"" Feb 13 19:42:52.861234 containerd[1467]: time="2025-02-13T19:42:52.861172574Z" level=info msg="Ensure that sandbox db3c78417459407092f7d7162890f98e4c33c08a46986bc8710e3bf57515370d in task-service has been cleanup successfully" Feb 13 19:42:52.861441 containerd[1467]: time="2025-02-13T19:42:52.861405911Z" level=info msg="TearDown network for sandbox \"db3c78417459407092f7d7162890f98e4c33c08a46986bc8710e3bf57515370d\" successfully" Feb 13 19:42:52.861441 containerd[1467]: time="2025-02-13T19:42:52.861418445Z" level=info msg="StopPodSandbox for \"db3c78417459407092f7d7162890f98e4c33c08a46986bc8710e3bf57515370d\" returns successfully" Feb 13 19:42:52.862266 containerd[1467]: time="2025-02-13T19:42:52.861708510Z" level=info msg="StopPodSandbox for \"3afeeeebc3a2e8bb197bd69bc00bf8abb4465ec0df9f830d66d71042bec6e15f\"" Feb 13 19:42:52.862266 containerd[1467]: time="2025-02-13T19:42:52.861996009Z" level=info msg="TearDown network for sandbox \"3afeeeebc3a2e8bb197bd69bc00bf8abb4465ec0df9f830d66d71042bec6e15f\" successfully" Feb 13 19:42:52.862266 containerd[1467]: time="2025-02-13T19:42:52.862008242Z" level=info msg="StopPodSandbox for \"3afeeeebc3a2e8bb197bd69bc00bf8abb4465ec0df9f830d66d71042bec6e15f\" returns successfully" Feb 13 19:42:52.862679 containerd[1467]: time="2025-02-13T19:42:52.862290611Z" level=info msg="StopPodSandbox for \"0788c7c54d6ffc8b054e92393050c76b7df05c1b22dd254befed893f0acbfdfb\"" Feb 13 19:42:52.862679 containerd[1467]: time="2025-02-13T19:42:52.862382575Z" level=info msg="TearDown network for sandbox \"0788c7c54d6ffc8b054e92393050c76b7df05c1b22dd254befed893f0acbfdfb\" successfully" Feb 13 19:42:52.862679 containerd[1467]: time="2025-02-13T19:42:52.862393986Z" level=info msg="StopPodSandbox for \"0788c7c54d6ffc8b054e92393050c76b7df05c1b22dd254befed893f0acbfdfb\" returns successfully" Feb 13 19:42:52.862679 containerd[1467]: time="2025-02-13T19:42:52.862633736Z" level=info msg="StopPodSandbox for \"ea7ad1b23abc6667f4563f79978cf671a336c2e70631836fac2ee65439eeb9b4\"" Feb 13 19:42:52.862788 containerd[1467]: time="2025-02-13T19:42:52.862720499Z" level=info msg="TearDown network for sandbox \"ea7ad1b23abc6667f4563f79978cf671a336c2e70631836fac2ee65439eeb9b4\" successfully" Feb 13 19:42:52.862788 containerd[1467]: time="2025-02-13T19:42:52.862731068Z" level=info msg="StopPodSandbox for \"ea7ad1b23abc6667f4563f79978cf671a336c2e70631836fac2ee65439eeb9b4\" returns successfully" Feb 13 19:42:52.863348 containerd[1467]: time="2025-02-13T19:42:52.863254832Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-qxpzd,Uid:b54cda8b-5691-4984-90d2-94b24a2518d5,Namespace:calico-system,Attempt:4,}" Feb 13 19:42:52.864528 containerd[1467]: time="2025-02-13T19:42:52.864319209Z" level=info msg="Ensure that sandbox e3e45c614369b5fa892d59c6ed2c11a47ce68f02eafdafdcecd31a42bec5e8ee in task-service has been cleanup successfully" Feb 13 19:42:52.864610 containerd[1467]: time="2025-02-13T19:42:52.864582554Z" level=info msg="TearDown network for sandbox \"e3e45c614369b5fa892d59c6ed2c11a47ce68f02eafdafdcecd31a42bec5e8ee\" successfully" Feb 13 19:42:52.864610 containerd[1467]: time="2025-02-13T19:42:52.864606649Z" level=info msg="StopPodSandbox for \"e3e45c614369b5fa892d59c6ed2c11a47ce68f02eafdafdcecd31a42bec5e8ee\" returns successfully" Feb 13 19:42:52.864978 containerd[1467]: time="2025-02-13T19:42:52.864952287Z" level=info 
msg="StopPodSandbox for \"d44e7c809ab337b67b18df9c61d5caa6402c1cc76d619d60e5d1d6da3800eaec\"" Feb 13 19:42:52.865068 containerd[1467]: time="2025-02-13T19:42:52.865042217Z" level=info msg="TearDown network for sandbox \"d44e7c809ab337b67b18df9c61d5caa6402c1cc76d619d60e5d1d6da3800eaec\" successfully" Feb 13 19:42:52.865068 containerd[1467]: time="2025-02-13T19:42:52.865065821Z" level=info msg="StopPodSandbox for \"d44e7c809ab337b67b18df9c61d5caa6402c1cc76d619d60e5d1d6da3800eaec\" returns successfully" Feb 13 19:42:52.865413 containerd[1467]: time="2025-02-13T19:42:52.865393295Z" level=info msg="StopPodSandbox for \"4a5ec2259b92339e47107aa67428fea37834d2811a71b36beb4f666a45ba9d2d\"" Feb 13 19:42:52.865687 containerd[1467]: time="2025-02-13T19:42:52.865668973Z" level=info msg="TearDown network for sandbox \"4a5ec2259b92339e47107aa67428fea37834d2811a71b36beb4f666a45ba9d2d\" successfully" Feb 13 19:42:52.865842 containerd[1467]: time="2025-02-13T19:42:52.865825837Z" level=info msg="StopPodSandbox for \"4a5ec2259b92339e47107aa67428fea37834d2811a71b36beb4f666a45ba9d2d\" returns successfully" Feb 13 19:42:52.866513 containerd[1467]: time="2025-02-13T19:42:52.866465628Z" level=info msg="StopPodSandbox for \"15f5f0073d3a00815e2691e423b79fb9f7b74ca3dca7c8fc2f5ee74434ddc4d5\"" Feb 13 19:42:52.866594 containerd[1467]: time="2025-02-13T19:42:52.866573100Z" level=info msg="TearDown network for sandbox \"15f5f0073d3a00815e2691e423b79fb9f7b74ca3dca7c8fc2f5ee74434ddc4d5\" successfully" Feb 13 19:42:52.866620 containerd[1467]: time="2025-02-13T19:42:52.866590442Z" level=info msg="StopPodSandbox for \"15f5f0073d3a00815e2691e423b79fb9f7b74ca3dca7c8fc2f5ee74434ddc4d5\" returns successfully" Feb 13 19:42:52.866778 kubelet[2656]: E0213 19:42:52.866757 2656 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:42:52.867022 containerd[1467]: time="2025-02-13T19:42:52.866997897Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-rkdrq,Uid:9029bfbd-404a-4b40-be12-a20e64469d44,Namespace:kube-system,Attempt:4,}" Feb 13 19:42:52.913718 systemd[1]: run-netns-cni\x2d7ece3054\x2d4b5f\x2d1d08\x2d48d6\x2d046aeae08bd4.mount: Deactivated successfully. Feb 13 19:42:52.914271 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-b753c98113d0daab22b5b0d48c1bf46fc2bd90d7b432f61d98419e4e686a742e-shm.mount: Deactivated successfully. Feb 13 19:42:52.914480 systemd[1]: run-netns-cni\x2de9105dbb\x2d08e0\x2ddefc\x2d4ecd\x2d9e1035f5ae40.mount: Deactivated successfully. Feb 13 19:42:52.914675 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-d33b63469e8e0eb1922ed670a85cd121776c55f33c9094574816b7c1ea0694de-shm.mount: Deactivated successfully. Feb 13 19:42:52.914760 systemd[1]: run-netns-cni\x2d77ddc21b\x2d59ab\x2de760\x2d1c26\x2de82285b8cc45.mount: Deactivated successfully. Feb 13 19:42:52.914832 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-5717d4a7bf11003fbd11120e4e3de186575be86f95283ba329ec904ae917ec4c-shm.mount: Deactivated successfully. 
Feb 13 19:42:53.221987 containerd[1467]: time="2025-02-13T19:42:53.221719592Z" level=error msg="Failed to destroy network for sandbox \"dba9abf48b93dd0a6ebe5705ff62ca1e13f9f106531eac199271ce793b20325e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:42:53.244391 containerd[1467]: time="2025-02-13T19:42:53.244309909Z" level=error msg="encountered an error cleaning up failed sandbox \"dba9abf48b93dd0a6ebe5705ff62ca1e13f9f106531eac199271ce793b20325e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:42:53.244547 containerd[1467]: time="2025-02-13T19:42:53.244431838Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7f9ffdc98-9gqpq,Uid:61a215c3-d6e4-40f4-9806-914857a2ab1f,Namespace:calico-apiserver,Attempt:4,} failed, error" error="failed to setup network for sandbox \"dba9abf48b93dd0a6ebe5705ff62ca1e13f9f106531eac199271ce793b20325e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:42:53.244820 kubelet[2656]: E0213 19:42:53.244764 2656 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dba9abf48b93dd0a6ebe5705ff62ca1e13f9f106531eac199271ce793b20325e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:42:53.244941 kubelet[2656]: E0213 19:42:53.244846 2656 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dba9abf48b93dd0a6ebe5705ff62ca1e13f9f106531eac199271ce793b20325e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7f9ffdc98-9gqpq" Feb 13 19:42:53.244941 kubelet[2656]: E0213 19:42:53.244877 2656 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dba9abf48b93dd0a6ebe5705ff62ca1e13f9f106531eac199271ce793b20325e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7f9ffdc98-9gqpq" Feb 13 19:42:53.245012 kubelet[2656]: E0213 19:42:53.244935 2656 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7f9ffdc98-9gqpq_calico-apiserver(61a215c3-d6e4-40f4-9806-914857a2ab1f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7f9ffdc98-9gqpq_calico-apiserver(61a215c3-d6e4-40f4-9806-914857a2ab1f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"dba9abf48b93dd0a6ebe5705ff62ca1e13f9f106531eac199271ce793b20325e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and 
has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7f9ffdc98-9gqpq" podUID="61a215c3-d6e4-40f4-9806-914857a2ab1f" Feb 13 19:42:53.272048 containerd[1467]: time="2025-02-13T19:42:53.271985930Z" level=error msg="Failed to destroy network for sandbox \"d35f28e953f02e761094fccf5bcecb7b39f4a3e96d088c35bcddd161efc3cea2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:42:53.272449 containerd[1467]: time="2025-02-13T19:42:53.272336980Z" level=error msg="Failed to destroy network for sandbox \"7aca42da20fad12e567112a09368a612f4d46d569345c5b22a86eb8568dfba50\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:42:53.274852 containerd[1467]: time="2025-02-13T19:42:53.272961842Z" level=error msg="encountered an error cleaning up failed sandbox \"d35f28e953f02e761094fccf5bcecb7b39f4a3e96d088c35bcddd161efc3cea2\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:42:53.274852 containerd[1467]: time="2025-02-13T19:42:53.273029309Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7f9ffdc98-tn7zc,Uid:72c9c3b8-d2cf-4ad0-a722-60cf8109d8ff,Namespace:calico-apiserver,Attempt:4,} failed, error" error="failed to setup network for sandbox \"d35f28e953f02e761094fccf5bcecb7b39f4a3e96d088c35bcddd161efc3cea2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:42:53.275009 kubelet[2656]: E0213 19:42:53.273350 2656 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d35f28e953f02e761094fccf5bcecb7b39f4a3e96d088c35bcddd161efc3cea2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:42:53.275009 kubelet[2656]: E0213 19:42:53.273549 2656 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d35f28e953f02e761094fccf5bcecb7b39f4a3e96d088c35bcddd161efc3cea2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7f9ffdc98-tn7zc" Feb 13 19:42:53.275009 kubelet[2656]: E0213 19:42:53.273580 2656 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d35f28e953f02e761094fccf5bcecb7b39f4a3e96d088c35bcddd161efc3cea2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7f9ffdc98-tn7zc" Feb 13 19:42:53.275144 kubelet[2656]: E0213 19:42:53.273622 2656 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for 
\"calico-apiserver-7f9ffdc98-tn7zc_calico-apiserver(72c9c3b8-d2cf-4ad0-a722-60cf8109d8ff)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7f9ffdc98-tn7zc_calico-apiserver(72c9c3b8-d2cf-4ad0-a722-60cf8109d8ff)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d35f28e953f02e761094fccf5bcecb7b39f4a3e96d088c35bcddd161efc3cea2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7f9ffdc98-tn7zc" podUID="72c9c3b8-d2cf-4ad0-a722-60cf8109d8ff" Feb 13 19:42:53.276381 containerd[1467]: time="2025-02-13T19:42:53.275570829Z" level=error msg="encountered an error cleaning up failed sandbox \"7aca42da20fad12e567112a09368a612f4d46d569345c5b22a86eb8568dfba50\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:42:53.276381 containerd[1467]: time="2025-02-13T19:42:53.275691405Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-44gch,Uid:abdf7ab8-742e-4339-a8d7-ab8bf1f9e1ec,Namespace:kube-system,Attempt:4,} failed, error" error="failed to setup network for sandbox \"7aca42da20fad12e567112a09368a612f4d46d569345c5b22a86eb8568dfba50\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:42:53.276556 kubelet[2656]: E0213 19:42:53.276206 2656 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7aca42da20fad12e567112a09368a612f4d46d569345c5b22a86eb8568dfba50\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:42:53.276556 kubelet[2656]: E0213 19:42:53.276270 2656 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7aca42da20fad12e567112a09368a612f4d46d569345c5b22a86eb8568dfba50\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-44gch" Feb 13 19:42:53.276556 kubelet[2656]: E0213 19:42:53.276290 2656 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7aca42da20fad12e567112a09368a612f4d46d569345c5b22a86eb8568dfba50\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-44gch" Feb 13 19:42:53.276643 kubelet[2656]: E0213 19:42:53.276341 2656 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-44gch_kube-system(abdf7ab8-742e-4339-a8d7-ab8bf1f9e1ec)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-44gch_kube-system(abdf7ab8-742e-4339-a8d7-ab8bf1f9e1ec)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"7aca42da20fad12e567112a09368a612f4d46d569345c5b22a86eb8568dfba50\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-44gch" podUID="abdf7ab8-742e-4339-a8d7-ab8bf1f9e1ec" Feb 13 19:42:53.287840 containerd[1467]: time="2025-02-13T19:42:53.287766157Z" level=error msg="Failed to destroy network for sandbox \"38047ce7044f40ec719558d90bc15eec9e71b1b0be98fa0bcbf26abe164fe308\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:42:53.288858 containerd[1467]: time="2025-02-13T19:42:53.288826576Z" level=error msg="encountered an error cleaning up failed sandbox \"38047ce7044f40ec719558d90bc15eec9e71b1b0be98fa0bcbf26abe164fe308\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:42:53.289550 containerd[1467]: time="2025-02-13T19:42:53.288888883Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-qxpzd,Uid:b54cda8b-5691-4984-90d2-94b24a2518d5,Namespace:calico-system,Attempt:4,} failed, error" error="failed to setup network for sandbox \"38047ce7044f40ec719558d90bc15eec9e71b1b0be98fa0bcbf26abe164fe308\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:42:53.289593 kubelet[2656]: E0213 19:42:53.289137 2656 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"38047ce7044f40ec719558d90bc15eec9e71b1b0be98fa0bcbf26abe164fe308\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:42:53.289593 kubelet[2656]: E0213 19:42:53.289203 2656 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"38047ce7044f40ec719558d90bc15eec9e71b1b0be98fa0bcbf26abe164fe308\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-qxpzd" Feb 13 19:42:53.289593 kubelet[2656]: E0213 19:42:53.289229 2656 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"38047ce7044f40ec719558d90bc15eec9e71b1b0be98fa0bcbf26abe164fe308\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-qxpzd" Feb 13 19:42:53.289693 kubelet[2656]: E0213 19:42:53.289276 2656 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-qxpzd_calico-system(b54cda8b-5691-4984-90d2-94b24a2518d5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-qxpzd_calico-system(b54cda8b-5691-4984-90d2-94b24a2518d5)\\\": rpc error: code = 
Unknown desc = failed to setup network for sandbox \\\"38047ce7044f40ec719558d90bc15eec9e71b1b0be98fa0bcbf26abe164fe308\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-qxpzd" podUID="b54cda8b-5691-4984-90d2-94b24a2518d5" Feb 13 19:42:53.293476 containerd[1467]: time="2025-02-13T19:42:53.293432912Z" level=error msg="Failed to destroy network for sandbox \"65c35f2d8b43c13374f37f5c29a3fd27e0a31b404307e17dc87790accc6c0a7f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:42:53.294022 containerd[1467]: time="2025-02-13T19:42:53.293945695Z" level=error msg="encountered an error cleaning up failed sandbox \"65c35f2d8b43c13374f37f5c29a3fd27e0a31b404307e17dc87790accc6c0a7f\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:42:53.294114 containerd[1467]: time="2025-02-13T19:42:53.294060119Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-78bc8bfdb7-xfh2s,Uid:98d92e3a-28ae-4220-b729-b97a4af8635e,Namespace:calico-system,Attempt:4,} failed, error" error="failed to setup network for sandbox \"65c35f2d8b43c13374f37f5c29a3fd27e0a31b404307e17dc87790accc6c0a7f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:42:53.294521 kubelet[2656]: E0213 19:42:53.294259 2656 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"65c35f2d8b43c13374f37f5c29a3fd27e0a31b404307e17dc87790accc6c0a7f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:42:53.294521 kubelet[2656]: E0213 19:42:53.294319 2656 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"65c35f2d8b43c13374f37f5c29a3fd27e0a31b404307e17dc87790accc6c0a7f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-78bc8bfdb7-xfh2s" Feb 13 19:42:53.294521 kubelet[2656]: E0213 19:42:53.294340 2656 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"65c35f2d8b43c13374f37f5c29a3fd27e0a31b404307e17dc87790accc6c0a7f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-78bc8bfdb7-xfh2s" Feb 13 19:42:53.294637 kubelet[2656]: E0213 19:42:53.294385 2656 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-78bc8bfdb7-xfh2s_calico-system(98d92e3a-28ae-4220-b729-b97a4af8635e)\" with CreatePodSandboxError: \"Failed 
to create sandbox for pod \\\"calico-kube-controllers-78bc8bfdb7-xfh2s_calico-system(98d92e3a-28ae-4220-b729-b97a4af8635e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"65c35f2d8b43c13374f37f5c29a3fd27e0a31b404307e17dc87790accc6c0a7f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-78bc8bfdb7-xfh2s" podUID="98d92e3a-28ae-4220-b729-b97a4af8635e" Feb 13 19:42:53.601331 systemd[1]: Started sshd@11-10.0.0.106:22-10.0.0.1:44688.service - OpenSSH per-connection server daemon (10.0.0.1:44688). Feb 13 19:42:53.657870 sshd[4615]: Accepted publickey for core from 10.0.0.1 port 44688 ssh2: RSA SHA256:M1JiL2vDeW2xAdjMAPzmPoLJm0DJz/C0inZYUswlgoA Feb 13 19:42:53.659374 sshd-session[4615]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:42:53.665444 systemd-logind[1449]: New session 12 of user core. Feb 13 19:42:53.675869 systemd[1]: Started session-12.scope - Session 12 of User core. Feb 13 19:42:53.799746 sshd[4617]: Connection closed by 10.0.0.1 port 44688 Feb 13 19:42:53.800123 sshd-session[4615]: pam_unix(sshd:session): session closed for user core Feb 13 19:42:53.804282 systemd[1]: sshd@11-10.0.0.106:22-10.0.0.1:44688.service: Deactivated successfully. Feb 13 19:42:53.804480 systemd-logind[1449]: Session 12 logged out. Waiting for processes to exit. Feb 13 19:42:53.808322 systemd[1]: session-12.scope: Deactivated successfully. Feb 13 19:42:53.813326 systemd-logind[1449]: Removed session 12. Feb 13 19:42:53.865612 kubelet[2656]: I0213 19:42:53.865493 2656 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d35f28e953f02e761094fccf5bcecb7b39f4a3e96d088c35bcddd161efc3cea2" Feb 13 19:42:53.866529 containerd[1467]: time="2025-02-13T19:42:53.866463962Z" level=info msg="StopPodSandbox for \"d35f28e953f02e761094fccf5bcecb7b39f4a3e96d088c35bcddd161efc3cea2\"" Feb 13 19:42:53.969123 containerd[1467]: time="2025-02-13T19:42:53.866812346Z" level=info msg="Ensure that sandbox d35f28e953f02e761094fccf5bcecb7b39f4a3e96d088c35bcddd161efc3cea2 in task-service has been cleanup successfully" Feb 13 19:42:53.969123 containerd[1467]: time="2025-02-13T19:42:53.867002332Z" level=info msg="TearDown network for sandbox \"d35f28e953f02e761094fccf5bcecb7b39f4a3e96d088c35bcddd161efc3cea2\" successfully" Feb 13 19:42:53.969123 containerd[1467]: time="2025-02-13T19:42:53.867015277Z" level=info msg="StopPodSandbox for \"d35f28e953f02e761094fccf5bcecb7b39f4a3e96d088c35bcddd161efc3cea2\" returns successfully" Feb 13 19:42:53.969123 containerd[1467]: time="2025-02-13T19:42:53.867646912Z" level=info msg="StopPodSandbox for \"5717d4a7bf11003fbd11120e4e3de186575be86f95283ba329ec904ae917ec4c\"" Feb 13 19:42:53.969123 containerd[1467]: time="2025-02-13T19:42:53.867761417Z" level=info msg="TearDown network for sandbox \"5717d4a7bf11003fbd11120e4e3de186575be86f95283ba329ec904ae917ec4c\" successfully" Feb 13 19:42:53.969123 containerd[1467]: time="2025-02-13T19:42:53.867771857Z" level=info msg="StopPodSandbox for \"5717d4a7bf11003fbd11120e4e3de186575be86f95283ba329ec904ae917ec4c\" returns successfully" Feb 13 19:42:53.969123 containerd[1467]: time="2025-02-13T19:42:53.868217503Z" level=info msg="StopPodSandbox for \"93f4f84ab8cc75ce5e8c6bcd14e44730cf09bddee838a26c34e67bca1d10c69a\"" Feb 13 19:42:53.969123 containerd[1467]: time="2025-02-13T19:42:53.868330364Z" 
level=info msg="TearDown network for sandbox \"93f4f84ab8cc75ce5e8c6bcd14e44730cf09bddee838a26c34e67bca1d10c69a\" successfully" Feb 13 19:42:53.969123 containerd[1467]: time="2025-02-13T19:42:53.868341946Z" level=info msg="StopPodSandbox for \"93f4f84ab8cc75ce5e8c6bcd14e44730cf09bddee838a26c34e67bca1d10c69a\" returns successfully" Feb 13 19:42:53.969123 containerd[1467]: time="2025-02-13T19:42:53.868868214Z" level=info msg="StopPodSandbox for \"0692bb77aa8ee3afc089f45a0efb8192616eb703ec7b1a56f19564b86d119ca2\"" Feb 13 19:42:53.969123 containerd[1467]: time="2025-02-13T19:42:53.869678184Z" level=info msg="TearDown network for sandbox \"0692bb77aa8ee3afc089f45a0efb8192616eb703ec7b1a56f19564b86d119ca2\" successfully" Feb 13 19:42:53.969123 containerd[1467]: time="2025-02-13T19:42:53.869691890Z" level=info msg="StopPodSandbox for \"0692bb77aa8ee3afc089f45a0efb8192616eb703ec7b1a56f19564b86d119ca2\" returns successfully" Feb 13 19:42:53.969123 containerd[1467]: time="2025-02-13T19:42:53.869907395Z" level=info msg="StopPodSandbox for \"03abb776a14c6f8522f6625b44caf63f5bebc7398e5051702e3d9968abb0362e\"" Feb 13 19:42:53.969123 containerd[1467]: time="2025-02-13T19:42:53.870062917Z" level=info msg="TearDown network for sandbox \"03abb776a14c6f8522f6625b44caf63f5bebc7398e5051702e3d9968abb0362e\" successfully" Feb 13 19:42:53.969123 containerd[1467]: time="2025-02-13T19:42:53.870100397Z" level=info msg="StopPodSandbox for \"03abb776a14c6f8522f6625b44caf63f5bebc7398e5051702e3d9968abb0362e\" returns successfully" Feb 13 19:42:53.969123 containerd[1467]: time="2025-02-13T19:42:53.870458929Z" level=info msg="StopPodSandbox for \"65c35f2d8b43c13374f37f5c29a3fd27e0a31b404307e17dc87790accc6c0a7f\"" Feb 13 19:42:53.969123 containerd[1467]: time="2025-02-13T19:42:53.870952005Z" level=info msg="Ensure that sandbox 65c35f2d8b43c13374f37f5c29a3fd27e0a31b404307e17dc87790accc6c0a7f in task-service has been cleanup successfully" Feb 13 19:42:53.969123 containerd[1467]: time="2025-02-13T19:42:53.870993803Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7f9ffdc98-tn7zc,Uid:72c9c3b8-d2cf-4ad0-a722-60cf8109d8ff,Namespace:calico-apiserver,Attempt:5,}" Feb 13 19:42:53.969123 containerd[1467]: time="2025-02-13T19:42:53.871490115Z" level=info msg="TearDown network for sandbox \"65c35f2d8b43c13374f37f5c29a3fd27e0a31b404307e17dc87790accc6c0a7f\" successfully" Feb 13 19:42:53.969123 containerd[1467]: time="2025-02-13T19:42:53.871548795Z" level=info msg="StopPodSandbox for \"65c35f2d8b43c13374f37f5c29a3fd27e0a31b404307e17dc87790accc6c0a7f\" returns successfully" Feb 13 19:42:53.969123 containerd[1467]: time="2025-02-13T19:42:53.872042422Z" level=info msg="StopPodSandbox for \"113db1c2c05056763e1dcab40407dfe511d3b110c8de33b9c0bb45d929623fde\"" Feb 13 19:42:53.969123 containerd[1467]: time="2025-02-13T19:42:53.872245783Z" level=info msg="TearDown network for sandbox \"113db1c2c05056763e1dcab40407dfe511d3b110c8de33b9c0bb45d929623fde\" successfully" Feb 13 19:42:53.969123 containerd[1467]: time="2025-02-13T19:42:53.872260310Z" level=info msg="StopPodSandbox for \"113db1c2c05056763e1dcab40407dfe511d3b110c8de33b9c0bb45d929623fde\" returns successfully" Feb 13 19:42:53.969123 containerd[1467]: time="2025-02-13T19:42:53.872657837Z" level=info msg="StopPodSandbox for \"acfb1b7eecc75b14a57b843858b94fa8648758738a7e7d91cd2473dc2127b7ba\"" Feb 13 19:42:53.969123 containerd[1467]: time="2025-02-13T19:42:53.872781639Z" level=info msg="TearDown network for sandbox 
\"acfb1b7eecc75b14a57b843858b94fa8648758738a7e7d91cd2473dc2127b7ba\" successfully" Feb 13 19:42:53.969123 containerd[1467]: time="2025-02-13T19:42:53.872793291Z" level=info msg="StopPodSandbox for \"acfb1b7eecc75b14a57b843858b94fa8648758738a7e7d91cd2473dc2127b7ba\" returns successfully" Feb 13 19:42:53.969123 containerd[1467]: time="2025-02-13T19:42:53.873162113Z" level=info msg="StopPodSandbox for \"05ebdeb5f9509ab11f0283d09aded299f309ee75b4f923bc2617f708e6b6b23e\"" Feb 13 19:42:53.969123 containerd[1467]: time="2025-02-13T19:42:53.873864862Z" level=info msg="TearDown network for sandbox \"05ebdeb5f9509ab11f0283d09aded299f309ee75b4f923bc2617f708e6b6b23e\" successfully" Feb 13 19:42:53.969123 containerd[1467]: time="2025-02-13T19:42:53.873901911Z" level=info msg="StopPodSandbox for \"05ebdeb5f9509ab11f0283d09aded299f309ee75b4f923bc2617f708e6b6b23e\" returns successfully" Feb 13 19:42:53.969123 containerd[1467]: time="2025-02-13T19:42:53.874163432Z" level=info msg="StopPodSandbox for \"38047ce7044f40ec719558d90bc15eec9e71b1b0be98fa0bcbf26abe164fe308\"" Feb 13 19:42:53.969123 containerd[1467]: time="2025-02-13T19:42:53.874511315Z" level=info msg="Ensure that sandbox 38047ce7044f40ec719558d90bc15eec9e71b1b0be98fa0bcbf26abe164fe308 in task-service has been cleanup successfully" Feb 13 19:42:53.969123 containerd[1467]: time="2025-02-13T19:42:53.874654143Z" level=info msg="StopPodSandbox for \"1595aba60392b63c1553a8901e090a43d78b0dd2f506eb2d65ada601c8d5441c\"" Feb 13 19:42:53.969123 containerd[1467]: time="2025-02-13T19:42:53.874703265Z" level=info msg="TearDown network for sandbox \"38047ce7044f40ec719558d90bc15eec9e71b1b0be98fa0bcbf26abe164fe308\" successfully" Feb 13 19:42:53.969123 containerd[1467]: time="2025-02-13T19:42:53.874718994Z" level=info msg="StopPodSandbox for \"38047ce7044f40ec719558d90bc15eec9e71b1b0be98fa0bcbf26abe164fe308\" returns successfully" Feb 13 19:42:53.969123 containerd[1467]: time="2025-02-13T19:42:53.874749732Z" level=info msg="TearDown network for sandbox \"1595aba60392b63c1553a8901e090a43d78b0dd2f506eb2d65ada601c8d5441c\" successfully" Feb 13 19:42:53.969123 containerd[1467]: time="2025-02-13T19:42:53.874762967Z" level=info msg="StopPodSandbox for \"1595aba60392b63c1553a8901e090a43d78b0dd2f506eb2d65ada601c8d5441c\" returns successfully" Feb 13 19:42:53.969123 containerd[1467]: time="2025-02-13T19:42:53.875277362Z" level=info msg="StopPodSandbox for \"db3c78417459407092f7d7162890f98e4c33c08a46986bc8710e3bf57515370d\"" Feb 13 19:42:53.969123 containerd[1467]: time="2025-02-13T19:42:53.875394753Z" level=info msg="TearDown network for sandbox \"db3c78417459407092f7d7162890f98e4c33c08a46986bc8710e3bf57515370d\" successfully" Feb 13 19:42:53.913470 systemd[1]: run-netns-cni\x2d2bd926d7\x2db0d8\x2d7054\x2d321d\x2d9ecdf49dbd20.mount: Deactivated successfully. 
Feb 13 19:42:53.970531 kubelet[2656]: I0213 19:42:53.870034 2656 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="65c35f2d8b43c13374f37f5c29a3fd27e0a31b404307e17dc87790accc6c0a7f" Feb 13 19:42:53.970531 kubelet[2656]: I0213 19:42:53.873330 2656 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="38047ce7044f40ec719558d90bc15eec9e71b1b0be98fa0bcbf26abe164fe308" Feb 13 19:42:53.970531 kubelet[2656]: I0213 19:42:53.878833 2656 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7aca42da20fad12e567112a09368a612f4d46d569345c5b22a86eb8568dfba50" Feb 13 19:42:53.970531 kubelet[2656]: E0213 19:42:53.881768 2656 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:42:53.970531 kubelet[2656]: I0213 19:42:53.909463 2656 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dba9abf48b93dd0a6ebe5705ff62ca1e13f9f106531eac199271ce793b20325e" Feb 13 19:42:53.970670 containerd[1467]: time="2025-02-13T19:42:53.875406525Z" level=info msg="StopPodSandbox for \"db3c78417459407092f7d7162890f98e4c33c08a46986bc8710e3bf57515370d\" returns successfully" Feb 13 19:42:53.970670 containerd[1467]: time="2025-02-13T19:42:53.875397037Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-78bc8bfdb7-xfh2s,Uid:98d92e3a-28ae-4220-b729-b97a4af8635e,Namespace:calico-system,Attempt:5,}" Feb 13 19:42:53.970670 containerd[1467]: time="2025-02-13T19:42:53.875922143Z" level=info msg="StopPodSandbox for \"3afeeeebc3a2e8bb197bd69bc00bf8abb4465ec0df9f830d66d71042bec6e15f\"" Feb 13 19:42:53.970670 containerd[1467]: time="2025-02-13T19:42:53.875998826Z" level=info msg="TearDown network for sandbox \"3afeeeebc3a2e8bb197bd69bc00bf8abb4465ec0df9f830d66d71042bec6e15f\" successfully" Feb 13 19:42:53.970670 containerd[1467]: time="2025-02-13T19:42:53.876009216Z" level=info msg="StopPodSandbox for \"3afeeeebc3a2e8bb197bd69bc00bf8abb4465ec0df9f830d66d71042bec6e15f\" returns successfully" Feb 13 19:42:53.970670 containerd[1467]: time="2025-02-13T19:42:53.877138045Z" level=info msg="StopPodSandbox for \"0788c7c54d6ffc8b054e92393050c76b7df05c1b22dd254befed893f0acbfdfb\"" Feb 13 19:42:53.970670 containerd[1467]: time="2025-02-13T19:42:53.877240046Z" level=info msg="TearDown network for sandbox \"0788c7c54d6ffc8b054e92393050c76b7df05c1b22dd254befed893f0acbfdfb\" successfully" Feb 13 19:42:53.970670 containerd[1467]: time="2025-02-13T19:42:53.877249794Z" level=info msg="StopPodSandbox for \"0788c7c54d6ffc8b054e92393050c76b7df05c1b22dd254befed893f0acbfdfb\" returns successfully" Feb 13 19:42:53.970670 containerd[1467]: time="2025-02-13T19:42:53.877470689Z" level=info msg="StopPodSandbox for \"ea7ad1b23abc6667f4563f79978cf671a336c2e70631836fac2ee65439eeb9b4\"" Feb 13 19:42:53.970670 containerd[1467]: time="2025-02-13T19:42:53.877574493Z" level=info msg="TearDown network for sandbox \"ea7ad1b23abc6667f4563f79978cf671a336c2e70631836fac2ee65439eeb9b4\" successfully" Feb 13 19:42:53.970670 containerd[1467]: time="2025-02-13T19:42:53.877585916Z" level=info msg="StopPodSandbox for \"ea7ad1b23abc6667f4563f79978cf671a336c2e70631836fac2ee65439eeb9b4\" returns successfully" Feb 13 19:42:53.970670 containerd[1467]: time="2025-02-13T19:42:53.877976328Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:csi-node-driver-qxpzd,Uid:b54cda8b-5691-4984-90d2-94b24a2518d5,Namespace:calico-system,Attempt:5,}" Feb 13 19:42:53.970670 containerd[1467]: time="2025-02-13T19:42:53.879143629Z" level=info msg="StopPodSandbox for \"7aca42da20fad12e567112a09368a612f4d46d569345c5b22a86eb8568dfba50\"" Feb 13 19:42:53.970670 containerd[1467]: time="2025-02-13T19:42:53.879323016Z" level=info msg="Ensure that sandbox 7aca42da20fad12e567112a09368a612f4d46d569345c5b22a86eb8568dfba50 in task-service has been cleanup successfully" Feb 13 19:42:53.970670 containerd[1467]: time="2025-02-13T19:42:53.879671469Z" level=info msg="TearDown network for sandbox \"7aca42da20fad12e567112a09368a612f4d46d569345c5b22a86eb8568dfba50\" successfully" Feb 13 19:42:53.970670 containerd[1467]: time="2025-02-13T19:42:53.879682230Z" level=info msg="StopPodSandbox for \"7aca42da20fad12e567112a09368a612f4d46d569345c5b22a86eb8568dfba50\" returns successfully" Feb 13 19:42:53.970670 containerd[1467]: time="2025-02-13T19:42:53.880453447Z" level=info msg="StopPodSandbox for \"b753c98113d0daab22b5b0d48c1bf46fc2bd90d7b432f61d98419e4e686a742e\"" Feb 13 19:42:53.970670 containerd[1467]: time="2025-02-13T19:42:53.880547033Z" level=info msg="TearDown network for sandbox \"b753c98113d0daab22b5b0d48c1bf46fc2bd90d7b432f61d98419e4e686a742e\" successfully" Feb 13 19:42:53.970670 containerd[1467]: time="2025-02-13T19:42:53.880561821Z" level=info msg="StopPodSandbox for \"b753c98113d0daab22b5b0d48c1bf46fc2bd90d7b432f61d98419e4e686a742e\" returns successfully" Feb 13 19:42:53.970670 containerd[1467]: time="2025-02-13T19:42:53.880851975Z" level=info msg="StopPodSandbox for \"e421dd80bdff41e3771b21b9aa765d004d27633c2cb4690b7b118748dd6d6fe3\"" Feb 13 19:42:53.970670 containerd[1467]: time="2025-02-13T19:42:53.880922527Z" level=info msg="TearDown network for sandbox \"e421dd80bdff41e3771b21b9aa765d004d27633c2cb4690b7b118748dd6d6fe3\" successfully" Feb 13 19:42:53.970670 containerd[1467]: time="2025-02-13T19:42:53.880931324Z" level=info msg="StopPodSandbox for \"e421dd80bdff41e3771b21b9aa765d004d27633c2cb4690b7b118748dd6d6fe3\" returns successfully" Feb 13 19:42:53.970670 containerd[1467]: time="2025-02-13T19:42:53.881195360Z" level=info msg="StopPodSandbox for \"b2c10597fcbe95ccbb9f31aac916f6313c7fdfbebe339a656d49f8fd5908264e\"" Feb 13 19:42:53.970670 containerd[1467]: time="2025-02-13T19:42:53.881263147Z" level=info msg="TearDown network for sandbox \"b2c10597fcbe95ccbb9f31aac916f6313c7fdfbebe339a656d49f8fd5908264e\" successfully" Feb 13 19:42:53.970670 containerd[1467]: time="2025-02-13T19:42:53.881272254Z" level=info msg="StopPodSandbox for \"b2c10597fcbe95ccbb9f31aac916f6313c7fdfbebe339a656d49f8fd5908264e\" returns successfully" Feb 13 19:42:53.970670 containerd[1467]: time="2025-02-13T19:42:53.881456089Z" level=info msg="StopPodSandbox for \"5722d221ab8268ff34fa92ffb089c74929c9052cb6a9b2a3f71f33fca2bec3e9\"" Feb 13 19:42:53.970670 containerd[1467]: time="2025-02-13T19:42:53.881537311Z" level=info msg="TearDown network for sandbox \"5722d221ab8268ff34fa92ffb089c74929c9052cb6a9b2a3f71f33fca2bec3e9\" successfully" Feb 13 19:42:53.970670 containerd[1467]: time="2025-02-13T19:42:53.881546318Z" level=info msg="StopPodSandbox for \"5722d221ab8268ff34fa92ffb089c74929c9052cb6a9b2a3f71f33fca2bec3e9\" returns successfully" Feb 13 19:42:53.970670 containerd[1467]: time="2025-02-13T19:42:53.882629652Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-7db6d8ff4d-44gch,Uid:abdf7ab8-742e-4339-a8d7-ab8bf1f9e1ec,Namespace:kube-system,Attempt:5,}" Feb 13 19:42:53.970670 containerd[1467]: time="2025-02-13T19:42:53.909950446Z" level=info msg="StopPodSandbox for \"dba9abf48b93dd0a6ebe5705ff62ca1e13f9f106531eac199271ce793b20325e\"" Feb 13 19:42:53.970670 containerd[1467]: time="2025-02-13T19:42:53.910173004Z" level=info msg="Ensure that sandbox dba9abf48b93dd0a6ebe5705ff62ca1e13f9f106531eac199271ce793b20325e in task-service has been cleanup successfully" Feb 13 19:42:53.970670 containerd[1467]: time="2025-02-13T19:42:53.910408436Z" level=info msg="TearDown network for sandbox \"dba9abf48b93dd0a6ebe5705ff62ca1e13f9f106531eac199271ce793b20325e\" successfully" Feb 13 19:42:53.970670 containerd[1467]: time="2025-02-13T19:42:53.910421140Z" level=info msg="StopPodSandbox for \"dba9abf48b93dd0a6ebe5705ff62ca1e13f9f106531eac199271ce793b20325e\" returns successfully" Feb 13 19:42:53.970670 containerd[1467]: time="2025-02-13T19:42:53.910854002Z" level=info msg="StopPodSandbox for \"d33b63469e8e0eb1922ed670a85cd121776c55f33c9094574816b7c1ea0694de\"" Feb 13 19:42:53.970670 containerd[1467]: time="2025-02-13T19:42:53.910931317Z" level=info msg="TearDown network for sandbox \"d33b63469e8e0eb1922ed670a85cd121776c55f33c9094574816b7c1ea0694de\" successfully" Feb 13 19:42:53.970670 containerd[1467]: time="2025-02-13T19:42:53.910940104Z" level=info msg="StopPodSandbox for \"d33b63469e8e0eb1922ed670a85cd121776c55f33c9094574816b7c1ea0694de\" returns successfully" Feb 13 19:42:53.970670 containerd[1467]: time="2025-02-13T19:42:53.911112738Z" level=info msg="StopPodSandbox for \"476eccaf99bc0f4eac505936b7325e87844524bc2c76bb506c6e391d40d89461\"" Feb 13 19:42:53.913608 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-38047ce7044f40ec719558d90bc15eec9e71b1b0be98fa0bcbf26abe164fe308-shm.mount: Deactivated successfully. 
Feb 13 19:42:53.973336 containerd[1467]: time="2025-02-13T19:42:53.911176046Z" level=info msg="TearDown network for sandbox \"476eccaf99bc0f4eac505936b7325e87844524bc2c76bb506c6e391d40d89461\" successfully" Feb 13 19:42:53.973336 containerd[1467]: time="2025-02-13T19:42:53.911185494Z" level=info msg="StopPodSandbox for \"476eccaf99bc0f4eac505936b7325e87844524bc2c76bb506c6e391d40d89461\" returns successfully" Feb 13 19:42:53.973336 containerd[1467]: time="2025-02-13T19:42:53.911373977Z" level=info msg="StopPodSandbox for \"f2df588c3d5c7137f1b9c17c9dae5d07407ef6a7c0ff67a6a0358da645106646\"" Feb 13 19:42:53.973336 containerd[1467]: time="2025-02-13T19:42:53.911447005Z" level=info msg="TearDown network for sandbox \"f2df588c3d5c7137f1b9c17c9dae5d07407ef6a7c0ff67a6a0358da645106646\" successfully" Feb 13 19:42:53.973336 containerd[1467]: time="2025-02-13T19:42:53.911456253Z" level=info msg="StopPodSandbox for \"f2df588c3d5c7137f1b9c17c9dae5d07407ef6a7c0ff67a6a0358da645106646\" returns successfully" Feb 13 19:42:53.973336 containerd[1467]: time="2025-02-13T19:42:53.911680082Z" level=info msg="StopPodSandbox for \"3782781afd896cb7cf8a44c27fcb5a6dbd5316fbd92805b724a8e2c5adbac911\"" Feb 13 19:42:53.973336 containerd[1467]: time="2025-02-13T19:42:53.911750024Z" level=info msg="TearDown network for sandbox \"3782781afd896cb7cf8a44c27fcb5a6dbd5316fbd92805b724a8e2c5adbac911\" successfully" Feb 13 19:42:53.973336 containerd[1467]: time="2025-02-13T19:42:53.911762928Z" level=info msg="StopPodSandbox for \"3782781afd896cb7cf8a44c27fcb5a6dbd5316fbd92805b724a8e2c5adbac911\" returns successfully" Feb 13 19:42:53.973336 containerd[1467]: time="2025-02-13T19:42:53.912099169Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7f9ffdc98-9gqpq,Uid:61a215c3-d6e4-40f4-9806-914857a2ab1f,Namespace:calico-apiserver,Attempt:5,}" Feb 13 19:42:53.913701 systemd[1]: run-netns-cni\x2d903816f8\x2d2992\x2dffb2\x2df3ce\x2d8d5cc01fecd6.mount: Deactivated successfully. Feb 13 19:42:53.913776 systemd[1]: run-netns-cni\x2d7e559152\x2d1e2e\x2df333\x2d8c4f\x2d802aaa5a5497.mount: Deactivated successfully. Feb 13 19:42:53.913990 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-d35f28e953f02e761094fccf5bcecb7b39f4a3e96d088c35bcddd161efc3cea2-shm.mount: Deactivated successfully. Feb 13 19:42:53.914077 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-65c35f2d8b43c13374f37f5c29a3fd27e0a31b404307e17dc87790accc6c0a7f-shm.mount: Deactivated successfully. Feb 13 19:42:53.914159 systemd[1]: run-netns-cni\x2d3a1466a5\x2da079\x2d5af5\x2da77b\x2d0d08aeffa202.mount: Deactivated successfully. Feb 13 19:42:53.914257 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-7aca42da20fad12e567112a09368a612f4d46d569345c5b22a86eb8568dfba50-shm.mount: Deactivated successfully. Feb 13 19:42:53.914531 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-dba9abf48b93dd0a6ebe5705ff62ca1e13f9f106531eac199271ce793b20325e-shm.mount: Deactivated successfully. Feb 13 19:42:53.919952 systemd[1]: run-netns-cni\x2d427e988d\x2d0692\x2d6b9f\x2d63b1\x2d1be4d81a6990.mount: Deactivated successfully. 
Feb 13 19:42:54.028610 containerd[1467]: time="2025-02-13T19:42:54.028547304Z" level=error msg="Failed to destroy network for sandbox \"9838f93d930aefed276fcdd92fb7e1c1e9305a68265611e2d47cd7a2370c6bb8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:42:54.029056 containerd[1467]: time="2025-02-13T19:42:54.029027264Z" level=error msg="encountered an error cleaning up failed sandbox \"9838f93d930aefed276fcdd92fb7e1c1e9305a68265611e2d47cd7a2370c6bb8\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:42:54.029130 containerd[1467]: time="2025-02-13T19:42:54.029096654Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-rkdrq,Uid:9029bfbd-404a-4b40-be12-a20e64469d44,Namespace:kube-system,Attempt:4,} failed, error" error="failed to setup network for sandbox \"9838f93d930aefed276fcdd92fb7e1c1e9305a68265611e2d47cd7a2370c6bb8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:42:54.031373 kubelet[2656]: E0213 19:42:54.031274 2656 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9838f93d930aefed276fcdd92fb7e1c1e9305a68265611e2d47cd7a2370c6bb8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:42:54.031373 kubelet[2656]: E0213 19:42:54.031341 2656 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9838f93d930aefed276fcdd92fb7e1c1e9305a68265611e2d47cd7a2370c6bb8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-rkdrq" Feb 13 19:42:54.031373 kubelet[2656]: E0213 19:42:54.031367 2656 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9838f93d930aefed276fcdd92fb7e1c1e9305a68265611e2d47cd7a2370c6bb8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-rkdrq" Feb 13 19:42:54.031551 kubelet[2656]: E0213 19:42:54.031417 2656 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-rkdrq_kube-system(9029bfbd-404a-4b40-be12-a20e64469d44)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-rkdrq_kube-system(9029bfbd-404a-4b40-be12-a20e64469d44)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9838f93d930aefed276fcdd92fb7e1c1e9305a68265611e2d47cd7a2370c6bb8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="kube-system/coredns-7db6d8ff4d-rkdrq" podUID="9029bfbd-404a-4b40-be12-a20e64469d44" Feb 13 19:42:54.031597 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-9838f93d930aefed276fcdd92fb7e1c1e9305a68265611e2d47cd7a2370c6bb8-shm.mount: Deactivated successfully. Feb 13 19:42:54.763913 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4277529814.mount: Deactivated successfully. Feb 13 19:42:54.919965 kubelet[2656]: I0213 19:42:54.919927 2656 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9838f93d930aefed276fcdd92fb7e1c1e9305a68265611e2d47cd7a2370c6bb8" Feb 13 19:42:54.920915 containerd[1467]: time="2025-02-13T19:42:54.920530706Z" level=info msg="StopPodSandbox for \"9838f93d930aefed276fcdd92fb7e1c1e9305a68265611e2d47cd7a2370c6bb8\"" Feb 13 19:42:54.920915 containerd[1467]: time="2025-02-13T19:42:54.920764365Z" level=info msg="Ensure that sandbox 9838f93d930aefed276fcdd92fb7e1c1e9305a68265611e2d47cd7a2370c6bb8 in task-service has been cleanup successfully" Feb 13 19:42:54.921343 containerd[1467]: time="2025-02-13T19:42:54.921274081Z" level=info msg="TearDown network for sandbox \"9838f93d930aefed276fcdd92fb7e1c1e9305a68265611e2d47cd7a2370c6bb8\" successfully" Feb 13 19:42:54.921343 containerd[1467]: time="2025-02-13T19:42:54.921291013Z" level=info msg="StopPodSandbox for \"9838f93d930aefed276fcdd92fb7e1c1e9305a68265611e2d47cd7a2370c6bb8\" returns successfully" Feb 13 19:42:54.923751 systemd[1]: run-netns-cni\x2d2b4aa4e7\x2d8dda\x2d12d0\x2d0657\x2d74a1d28c06b2.mount: Deactivated successfully. Feb 13 19:42:54.923879 containerd[1467]: time="2025-02-13T19:42:54.923779824Z" level=info msg="StopPodSandbox for \"e3e45c614369b5fa892d59c6ed2c11a47ce68f02eafdafdcecd31a42bec5e8ee\"" Feb 13 19:42:54.923879 containerd[1467]: time="2025-02-13T19:42:54.923855376Z" level=info msg="TearDown network for sandbox \"e3e45c614369b5fa892d59c6ed2c11a47ce68f02eafdafdcecd31a42bec5e8ee\" successfully" Feb 13 19:42:54.923879 containerd[1467]: time="2025-02-13T19:42:54.923864383Z" level=info msg="StopPodSandbox for \"e3e45c614369b5fa892d59c6ed2c11a47ce68f02eafdafdcecd31a42bec5e8ee\" returns successfully" Feb 13 19:42:54.924298 containerd[1467]: time="2025-02-13T19:42:54.924158685Z" level=info msg="StopPodSandbox for \"d44e7c809ab337b67b18df9c61d5caa6402c1cc76d619d60e5d1d6da3800eaec\"" Feb 13 19:42:54.924298 containerd[1467]: time="2025-02-13T19:42:54.924242231Z" level=info msg="TearDown network for sandbox \"d44e7c809ab337b67b18df9c61d5caa6402c1cc76d619d60e5d1d6da3800eaec\" successfully" Feb 13 19:42:54.924298 containerd[1467]: time="2025-02-13T19:42:54.924251308Z" level=info msg="StopPodSandbox for \"d44e7c809ab337b67b18df9c61d5caa6402c1cc76d619d60e5d1d6da3800eaec\" returns successfully" Feb 13 19:42:54.924529 containerd[1467]: time="2025-02-13T19:42:54.924486150Z" level=info msg="StopPodSandbox for \"4a5ec2259b92339e47107aa67428fea37834d2811a71b36beb4f666a45ba9d2d\"" Feb 13 19:42:54.924625 containerd[1467]: time="2025-02-13T19:42:54.924576950Z" level=info msg="TearDown network for sandbox \"4a5ec2259b92339e47107aa67428fea37834d2811a71b36beb4f666a45ba9d2d\" successfully" Feb 13 19:42:54.924625 containerd[1467]: time="2025-02-13T19:42:54.924620832Z" level=info msg="StopPodSandbox for \"4a5ec2259b92339e47107aa67428fea37834d2811a71b36beb4f666a45ba9d2d\" returns successfully" Feb 13 19:42:54.925028 containerd[1467]: time="2025-02-13T19:42:54.924986518Z" level=info msg="StopPodSandbox for \"15f5f0073d3a00815e2691e423b79fb9f7b74ca3dca7c8fc2f5ee74434ddc4d5\"" Feb 13 
19:42:54.925492 containerd[1467]: time="2025-02-13T19:42:54.925063422Z" level=info msg="TearDown network for sandbox \"15f5f0073d3a00815e2691e423b79fb9f7b74ca3dca7c8fc2f5ee74434ddc4d5\" successfully" Feb 13 19:42:54.925492 containerd[1467]: time="2025-02-13T19:42:54.925088780Z" level=info msg="StopPodSandbox for \"15f5f0073d3a00815e2691e423b79fb9f7b74ca3dca7c8fc2f5ee74434ddc4d5\" returns successfully" Feb 13 19:42:54.925634 kubelet[2656]: E0213 19:42:54.925283 2656 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:42:54.925692 containerd[1467]: time="2025-02-13T19:42:54.925595271Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-rkdrq,Uid:9029bfbd-404a-4b40-be12-a20e64469d44,Namespace:kube-system,Attempt:5,}" Feb 13 19:42:55.630311 containerd[1467]: time="2025-02-13T19:42:55.630254383Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:42:55.645446 containerd[1467]: time="2025-02-13T19:42:55.645388483Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.1: active requests=0, bytes read=142742010" Feb 13 19:42:55.722382 kubelet[2656]: I0213 19:42:55.722210 2656 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 13 19:42:55.723232 kubelet[2656]: E0213 19:42:55.722868 2656 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:42:55.826352 containerd[1467]: time="2025-02-13T19:42:55.826277360Z" level=error msg="Failed to destroy network for sandbox \"babd1eaaf9746dd4c1ef3ded5a20d28289a6af4a88fce8e253910ffc638e1b8d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:42:55.826801 containerd[1467]: time="2025-02-13T19:42:55.826768752Z" level=error msg="encountered an error cleaning up failed sandbox \"babd1eaaf9746dd4c1ef3ded5a20d28289a6af4a88fce8e253910ffc638e1b8d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:42:55.826863 containerd[1467]: time="2025-02-13T19:42:55.826839375Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-78bc8bfdb7-xfh2s,Uid:98d92e3a-28ae-4220-b729-b97a4af8635e,Namespace:calico-system,Attempt:5,} failed, error" error="failed to setup network for sandbox \"babd1eaaf9746dd4c1ef3ded5a20d28289a6af4a88fce8e253910ffc638e1b8d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:42:55.827166 kubelet[2656]: E0213 19:42:55.827124 2656 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"babd1eaaf9746dd4c1ef3ded5a20d28289a6af4a88fce8e253910ffc638e1b8d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:42:55.827235 kubelet[2656]: 
E0213 19:42:55.827197 2656 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"babd1eaaf9746dd4c1ef3ded5a20d28289a6af4a88fce8e253910ffc638e1b8d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-78bc8bfdb7-xfh2s" Feb 13 19:42:55.827235 kubelet[2656]: E0213 19:42:55.827220 2656 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"babd1eaaf9746dd4c1ef3ded5a20d28289a6af4a88fce8e253910ffc638e1b8d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-78bc8bfdb7-xfh2s" Feb 13 19:42:55.827349 kubelet[2656]: E0213 19:42:55.827269 2656 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-78bc8bfdb7-xfh2s_calico-system(98d92e3a-28ae-4220-b729-b97a4af8635e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-78bc8bfdb7-xfh2s_calico-system(98d92e3a-28ae-4220-b729-b97a4af8635e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"babd1eaaf9746dd4c1ef3ded5a20d28289a6af4a88fce8e253910ffc638e1b8d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-78bc8bfdb7-xfh2s" podUID="98d92e3a-28ae-4220-b729-b97a4af8635e" Feb 13 19:42:55.923269 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-babd1eaaf9746dd4c1ef3ded5a20d28289a6af4a88fce8e253910ffc638e1b8d-shm.mount: Deactivated successfully. 
Feb 13 19:42:55.926357 kubelet[2656]: I0213 19:42:55.926323 2656 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="babd1eaaf9746dd4c1ef3ded5a20d28289a6af4a88fce8e253910ffc638e1b8d" Feb 13 19:42:55.928178 kubelet[2656]: E0213 19:42:55.928112 2656 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:42:55.928378 containerd[1467]: time="2025-02-13T19:42:55.928326175Z" level=info msg="StopPodSandbox for \"babd1eaaf9746dd4c1ef3ded5a20d28289a6af4a88fce8e253910ffc638e1b8d\"" Feb 13 19:42:55.929813 containerd[1467]: time="2025-02-13T19:42:55.929346129Z" level=info msg="Ensure that sandbox babd1eaaf9746dd4c1ef3ded5a20d28289a6af4a88fce8e253910ffc638e1b8d in task-service has been cleanup successfully" Feb 13 19:42:55.929813 containerd[1467]: time="2025-02-13T19:42:55.929651152Z" level=info msg="TearDown network for sandbox \"babd1eaaf9746dd4c1ef3ded5a20d28289a6af4a88fce8e253910ffc638e1b8d\" successfully" Feb 13 19:42:55.929813 containerd[1467]: time="2025-02-13T19:42:55.929669296Z" level=info msg="StopPodSandbox for \"babd1eaaf9746dd4c1ef3ded5a20d28289a6af4a88fce8e253910ffc638e1b8d\" returns successfully" Feb 13 19:42:55.930810 containerd[1467]: time="2025-02-13T19:42:55.930767346Z" level=info msg="StopPodSandbox for \"65c35f2d8b43c13374f37f5c29a3fd27e0a31b404307e17dc87790accc6c0a7f\"" Feb 13 19:42:55.931206 containerd[1467]: time="2025-02-13T19:42:55.931030891Z" level=info msg="TearDown network for sandbox \"65c35f2d8b43c13374f37f5c29a3fd27e0a31b404307e17dc87790accc6c0a7f\" successfully" Feb 13 19:42:55.931206 containerd[1467]: time="2025-02-13T19:42:55.931044777Z" level=info msg="StopPodSandbox for \"65c35f2d8b43c13374f37f5c29a3fd27e0a31b404307e17dc87790accc6c0a7f\" returns successfully" Feb 13 19:42:55.931830 containerd[1467]: time="2025-02-13T19:42:55.931608575Z" level=info msg="StopPodSandbox for \"113db1c2c05056763e1dcab40407dfe511d3b110c8de33b9c0bb45d929623fde\"" Feb 13 19:42:55.931830 containerd[1467]: time="2025-02-13T19:42:55.931693594Z" level=info msg="TearDown network for sandbox \"113db1c2c05056763e1dcab40407dfe511d3b110c8de33b9c0bb45d929623fde\" successfully" Feb 13 19:42:55.931830 containerd[1467]: time="2025-02-13T19:42:55.931702300Z" level=info msg="StopPodSandbox for \"113db1c2c05056763e1dcab40407dfe511d3b110c8de33b9c0bb45d929623fde\" returns successfully" Feb 13 19:42:55.932133 containerd[1467]: time="2025-02-13T19:42:55.932082033Z" level=info msg="StopPodSandbox for \"acfb1b7eecc75b14a57b843858b94fa8648758738a7e7d91cd2473dc2127b7ba\"" Feb 13 19:42:55.932358 containerd[1467]: time="2025-02-13T19:42:55.932343604Z" level=info msg="TearDown network for sandbox \"acfb1b7eecc75b14a57b843858b94fa8648758738a7e7d91cd2473dc2127b7ba\" successfully" Feb 13 19:42:55.932470 containerd[1467]: time="2025-02-13T19:42:55.932433753Z" level=info msg="StopPodSandbox for \"acfb1b7eecc75b14a57b843858b94fa8648758738a7e7d91cd2473dc2127b7ba\" returns successfully" Feb 13 19:42:55.932809 containerd[1467]: time="2025-02-13T19:42:55.932777068Z" level=info msg="StopPodSandbox for \"05ebdeb5f9509ab11f0283d09aded299f309ee75b4f923bc2617f708e6b6b23e\"" Feb 13 19:42:55.932930 containerd[1467]: time="2025-02-13T19:42:55.932907082Z" level=info msg="TearDown network for sandbox \"05ebdeb5f9509ab11f0283d09aded299f309ee75b4f923bc2617f708e6b6b23e\" successfully" Feb 13 19:42:55.932997 containerd[1467]: time="2025-02-13T19:42:55.932930045Z" level=info 
msg="StopPodSandbox for \"05ebdeb5f9509ab11f0283d09aded299f309ee75b4f923bc2617f708e6b6b23e\" returns successfully" Feb 13 19:42:55.933669 systemd[1]: run-netns-cni\x2db9d0a866\x2d4861\x2ddc7c\x2d55c0\x2dfe663e086156.mount: Deactivated successfully. Feb 13 19:42:55.934463 containerd[1467]: time="2025-02-13T19:42:55.933936072Z" level=info msg="StopPodSandbox for \"1595aba60392b63c1553a8901e090a43d78b0dd2f506eb2d65ada601c8d5441c\"" Feb 13 19:42:55.934463 containerd[1467]: time="2025-02-13T19:42:55.934071897Z" level=info msg="TearDown network for sandbox \"1595aba60392b63c1553a8901e090a43d78b0dd2f506eb2d65ada601c8d5441c\" successfully" Feb 13 19:42:55.934463 containerd[1467]: time="2025-02-13T19:42:55.934084801Z" level=info msg="StopPodSandbox for \"1595aba60392b63c1553a8901e090a43d78b0dd2f506eb2d65ada601c8d5441c\" returns successfully" Feb 13 19:42:55.935125 containerd[1467]: time="2025-02-13T19:42:55.935087384Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-78bc8bfdb7-xfh2s,Uid:98d92e3a-28ae-4220-b729-b97a4af8635e,Namespace:calico-system,Attempt:6,}" Feb 13 19:42:55.973106 containerd[1467]: time="2025-02-13T19:42:55.973046778Z" level=info msg="ImageCreate event name:\"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:42:55.975846 containerd[1467]: time="2025-02-13T19:42:55.975783904Z" level=error msg="Failed to destroy network for sandbox \"eba9a6b214974d09c896d8d0afab09d1ea23e394175225f724f1cf6f9a1981f3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:42:55.976228 containerd[1467]: time="2025-02-13T19:42:55.976196589Z" level=error msg="encountered an error cleaning up failed sandbox \"eba9a6b214974d09c896d8d0afab09d1ea23e394175225f724f1cf6f9a1981f3\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:42:55.976319 containerd[1467]: time="2025-02-13T19:42:55.976296596Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7f9ffdc98-9gqpq,Uid:61a215c3-d6e4-40f4-9806-914857a2ab1f,Namespace:calico-apiserver,Attempt:5,} failed, error" error="failed to setup network for sandbox \"eba9a6b214974d09c896d8d0afab09d1ea23e394175225f724f1cf6f9a1981f3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:42:55.976588 kubelet[2656]: E0213 19:42:55.976554 2656 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"eba9a6b214974d09c896d8d0afab09d1ea23e394175225f724f1cf6f9a1981f3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:42:55.976806 kubelet[2656]: E0213 19:42:55.976691 2656 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"eba9a6b214974d09c896d8d0afab09d1ea23e394175225f724f1cf6f9a1981f3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7f9ffdc98-9gqpq" Feb 13 19:42:55.976806 kubelet[2656]: E0213 19:42:55.976721 2656 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"eba9a6b214974d09c896d8d0afab09d1ea23e394175225f724f1cf6f9a1981f3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7f9ffdc98-9gqpq" Feb 13 19:42:55.976806 kubelet[2656]: E0213 19:42:55.976765 2656 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7f9ffdc98-9gqpq_calico-apiserver(61a215c3-d6e4-40f4-9806-914857a2ab1f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7f9ffdc98-9gqpq_calico-apiserver(61a215c3-d6e4-40f4-9806-914857a2ab1f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"eba9a6b214974d09c896d8d0afab09d1ea23e394175225f724f1cf6f9a1981f3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7f9ffdc98-9gqpq" podUID="61a215c3-d6e4-40f4-9806-914857a2ab1f" Feb 13 19:42:55.978201 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-eba9a6b214974d09c896d8d0afab09d1ea23e394175225f724f1cf6f9a1981f3-shm.mount: Deactivated successfully. Feb 13 19:42:56.107616 containerd[1467]: time="2025-02-13T19:42:56.107541984Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:42:56.113024 containerd[1467]: time="2025-02-13T19:42:56.112972795Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.1\" with image id \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\", size \"142741872\" in 7.31349942s" Feb 13 19:42:56.113271 containerd[1467]: time="2025-02-13T19:42:56.113249594Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\" returns image reference \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\"" Feb 13 19:42:56.123629 containerd[1467]: time="2025-02-13T19:42:56.123466406Z" level=error msg="Failed to destroy network for sandbox \"a1eb87b46d5d1d0d1ce2163262bbe5eede24f5069548e462f409c473fafe386d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:42:56.124339 containerd[1467]: time="2025-02-13T19:42:56.124315219Z" level=error msg="encountered an error cleaning up failed sandbox \"a1eb87b46d5d1d0d1ce2163262bbe5eede24f5069548e462f409c473fafe386d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:42:56.124712 containerd[1467]: 
time="2025-02-13T19:42:56.124656340Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-qxpzd,Uid:b54cda8b-5691-4984-90d2-94b24a2518d5,Namespace:calico-system,Attempt:5,} failed, error" error="failed to setup network for sandbox \"a1eb87b46d5d1d0d1ce2163262bbe5eede24f5069548e462f409c473fafe386d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:42:56.124922 kubelet[2656]: E0213 19:42:56.124885 2656 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a1eb87b46d5d1d0d1ce2163262bbe5eede24f5069548e462f409c473fafe386d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:42:56.124997 kubelet[2656]: E0213 19:42:56.124944 2656 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a1eb87b46d5d1d0d1ce2163262bbe5eede24f5069548e462f409c473fafe386d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-qxpzd" Feb 13 19:42:56.124997 kubelet[2656]: E0213 19:42:56.124966 2656 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a1eb87b46d5d1d0d1ce2163262bbe5eede24f5069548e462f409c473fafe386d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-qxpzd" Feb 13 19:42:56.125511 kubelet[2656]: E0213 19:42:56.125007 2656 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-qxpzd_calico-system(b54cda8b-5691-4984-90d2-94b24a2518d5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-qxpzd_calico-system(b54cda8b-5691-4984-90d2-94b24a2518d5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a1eb87b46d5d1d0d1ce2163262bbe5eede24f5069548e462f409c473fafe386d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-qxpzd" podUID="b54cda8b-5691-4984-90d2-94b24a2518d5" Feb 13 19:42:56.130670 containerd[1467]: time="2025-02-13T19:42:56.130605153Z" level=info msg="CreateContainer within sandbox \"e7be764b69eae9216048706d224d970f51f0235ff5a81160a6d531eaf3007730\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Feb 13 19:42:56.156049 containerd[1467]: time="2025-02-13T19:42:56.155911119Z" level=info msg="CreateContainer within sandbox \"e7be764b69eae9216048706d224d970f51f0235ff5a81160a6d531eaf3007730\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"4e1bdad70d4c056acce6da05ec235eef6f63937e93bd57401ae2ce0931de0b3b\"" Feb 13 19:42:56.158525 containerd[1467]: time="2025-02-13T19:42:56.158422492Z" level=info msg="StartContainer for \"4e1bdad70d4c056acce6da05ec235eef6f63937e93bd57401ae2ce0931de0b3b\"" Feb 13 19:42:56.180321 containerd[1467]: 
time="2025-02-13T19:42:56.180139245Z" level=error msg="Failed to destroy network for sandbox \"ddcbed9e32dc986efee624974339fd691561b1fe492259306265828c5b74843e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:42:56.180899 containerd[1467]: time="2025-02-13T19:42:56.180646887Z" level=error msg="encountered an error cleaning up failed sandbox \"ddcbed9e32dc986efee624974339fd691561b1fe492259306265828c5b74843e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:42:56.180899 containerd[1467]: time="2025-02-13T19:42:56.180813830Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-44gch,Uid:abdf7ab8-742e-4339-a8d7-ab8bf1f9e1ec,Namespace:kube-system,Attempt:5,} failed, error" error="failed to setup network for sandbox \"ddcbed9e32dc986efee624974339fd691561b1fe492259306265828c5b74843e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:42:56.184658 kubelet[2656]: E0213 19:42:56.181459 2656 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ddcbed9e32dc986efee624974339fd691561b1fe492259306265828c5b74843e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:42:56.184658 kubelet[2656]: E0213 19:42:56.181591 2656 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ddcbed9e32dc986efee624974339fd691561b1fe492259306265828c5b74843e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-44gch" Feb 13 19:42:56.184658 kubelet[2656]: E0213 19:42:56.181619 2656 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ddcbed9e32dc986efee624974339fd691561b1fe492259306265828c5b74843e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-44gch" Feb 13 19:42:56.184983 kubelet[2656]: E0213 19:42:56.181688 2656 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-44gch_kube-system(abdf7ab8-742e-4339-a8d7-ab8bf1f9e1ec)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-44gch_kube-system(abdf7ab8-742e-4339-a8d7-ab8bf1f9e1ec)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ddcbed9e32dc986efee624974339fd691561b1fe492259306265828c5b74843e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-44gch" 
podUID="abdf7ab8-742e-4339-a8d7-ab8bf1f9e1ec" Feb 13 19:42:56.187000 containerd[1467]: time="2025-02-13T19:42:56.186948943Z" level=error msg="Failed to destroy network for sandbox \"b28f8d5d11c586fec9cf1d8aef8bfc92c012e00614d9cc75be530d04d5540753\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:42:56.187410 containerd[1467]: time="2025-02-13T19:42:56.187381665Z" level=error msg="encountered an error cleaning up failed sandbox \"b28f8d5d11c586fec9cf1d8aef8bfc92c012e00614d9cc75be530d04d5540753\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:42:56.187480 containerd[1467]: time="2025-02-13T19:42:56.187454181Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7f9ffdc98-tn7zc,Uid:72c9c3b8-d2cf-4ad0-a722-60cf8109d8ff,Namespace:calico-apiserver,Attempt:5,} failed, error" error="failed to setup network for sandbox \"b28f8d5d11c586fec9cf1d8aef8bfc92c012e00614d9cc75be530d04d5540753\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:42:56.188542 kubelet[2656]: E0213 19:42:56.188284 2656 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b28f8d5d11c586fec9cf1d8aef8bfc92c012e00614d9cc75be530d04d5540753\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:42:56.188542 kubelet[2656]: E0213 19:42:56.188358 2656 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b28f8d5d11c586fec9cf1d8aef8bfc92c012e00614d9cc75be530d04d5540753\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7f9ffdc98-tn7zc" Feb 13 19:42:56.188542 kubelet[2656]: E0213 19:42:56.188381 2656 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b28f8d5d11c586fec9cf1d8aef8bfc92c012e00614d9cc75be530d04d5540753\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7f9ffdc98-tn7zc" Feb 13 19:42:56.188717 kubelet[2656]: E0213 19:42:56.188438 2656 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7f9ffdc98-tn7zc_calico-apiserver(72c9c3b8-d2cf-4ad0-a722-60cf8109d8ff)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7f9ffdc98-tn7zc_calico-apiserver(72c9c3b8-d2cf-4ad0-a722-60cf8109d8ff)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b28f8d5d11c586fec9cf1d8aef8bfc92c012e00614d9cc75be530d04d5540753\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check 
that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7f9ffdc98-tn7zc" podUID="72c9c3b8-d2cf-4ad0-a722-60cf8109d8ff" Feb 13 19:42:56.193339 containerd[1467]: time="2025-02-13T19:42:56.193260587Z" level=error msg="Failed to destroy network for sandbox \"2253b7e3c451b5ca60a2cae78b04149aadea7f9c28994679d78fafb9255616ae\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:42:56.193821 containerd[1467]: time="2025-02-13T19:42:56.193779551Z" level=error msg="encountered an error cleaning up failed sandbox \"2253b7e3c451b5ca60a2cae78b04149aadea7f9c28994679d78fafb9255616ae\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:42:56.194132 containerd[1467]: time="2025-02-13T19:42:56.194103218Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-rkdrq,Uid:9029bfbd-404a-4b40-be12-a20e64469d44,Namespace:kube-system,Attempt:5,} failed, error" error="failed to setup network for sandbox \"2253b7e3c451b5ca60a2cae78b04149aadea7f9c28994679d78fafb9255616ae\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:42:56.194519 kubelet[2656]: E0213 19:42:56.194461 2656 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2253b7e3c451b5ca60a2cae78b04149aadea7f9c28994679d78fafb9255616ae\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:42:56.194668 kubelet[2656]: E0213 19:42:56.194644 2656 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2253b7e3c451b5ca60a2cae78b04149aadea7f9c28994679d78fafb9255616ae\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-rkdrq" Feb 13 19:42:56.194763 kubelet[2656]: E0213 19:42:56.194743 2656 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2253b7e3c451b5ca60a2cae78b04149aadea7f9c28994679d78fafb9255616ae\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-rkdrq" Feb 13 19:42:56.195543 kubelet[2656]: E0213 19:42:56.195432 2656 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-rkdrq_kube-system(9029bfbd-404a-4b40-be12-a20e64469d44)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-rkdrq_kube-system(9029bfbd-404a-4b40-be12-a20e64469d44)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2253b7e3c451b5ca60a2cae78b04149aadea7f9c28994679d78fafb9255616ae\\\": plugin type=\\\"calico\\\" failed 
(add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-rkdrq" podUID="9029bfbd-404a-4b40-be12-a20e64469d44" Feb 13 19:42:56.218137 containerd[1467]: time="2025-02-13T19:42:56.218028484Z" level=error msg="Failed to destroy network for sandbox \"c0bcbde6606c83357aa8f950e49ccff144c85b107d7599959f012dd4e3333606\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:42:56.219095 containerd[1467]: time="2025-02-13T19:42:56.218605327Z" level=error msg="encountered an error cleaning up failed sandbox \"c0bcbde6606c83357aa8f950e49ccff144c85b107d7599959f012dd4e3333606\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:42:56.219095 containerd[1467]: time="2025-02-13T19:42:56.218688684Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-78bc8bfdb7-xfh2s,Uid:98d92e3a-28ae-4220-b729-b97a4af8635e,Namespace:calico-system,Attempt:6,} failed, error" error="failed to setup network for sandbox \"c0bcbde6606c83357aa8f950e49ccff144c85b107d7599959f012dd4e3333606\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:42:56.219325 kubelet[2656]: E0213 19:42:56.218992 2656 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c0bcbde6606c83357aa8f950e49ccff144c85b107d7599959f012dd4e3333606\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:42:56.219325 kubelet[2656]: E0213 19:42:56.219061 2656 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c0bcbde6606c83357aa8f950e49ccff144c85b107d7599959f012dd4e3333606\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-78bc8bfdb7-xfh2s" Feb 13 19:42:56.219325 kubelet[2656]: E0213 19:42:56.219090 2656 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c0bcbde6606c83357aa8f950e49ccff144c85b107d7599959f012dd4e3333606\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-78bc8bfdb7-xfh2s" Feb 13 19:42:56.219475 kubelet[2656]: E0213 19:42:56.219144 2656 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-78bc8bfdb7-xfh2s_calico-system(98d92e3a-28ae-4220-b729-b97a4af8635e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-78bc8bfdb7-xfh2s_calico-system(98d92e3a-28ae-4220-b729-b97a4af8635e)\\\": rpc error: code = Unknown desc 
= failed to setup network for sandbox \\\"c0bcbde6606c83357aa8f950e49ccff144c85b107d7599959f012dd4e3333606\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-78bc8bfdb7-xfh2s" podUID="98d92e3a-28ae-4220-b729-b97a4af8635e" Feb 13 19:42:56.248756 systemd[1]: Started cri-containerd-4e1bdad70d4c056acce6da05ec235eef6f63937e93bd57401ae2ce0931de0b3b.scope - libcontainer container 4e1bdad70d4c056acce6da05ec235eef6f63937e93bd57401ae2ce0931de0b3b. Feb 13 19:42:56.355668 containerd[1467]: time="2025-02-13T19:42:56.355612705Z" level=info msg="StartContainer for \"4e1bdad70d4c056acce6da05ec235eef6f63937e93bd57401ae2ce0931de0b3b\" returns successfully" Feb 13 19:42:56.362970 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Feb 13 19:42:56.363621 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Feb 13 19:42:56.928728 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-2253b7e3c451b5ca60a2cae78b04149aadea7f9c28994679d78fafb9255616ae-shm.mount: Deactivated successfully. Feb 13 19:42:56.929200 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-b28f8d5d11c586fec9cf1d8aef8bfc92c012e00614d9cc75be530d04d5540753-shm.mount: Deactivated successfully. Feb 13 19:42:56.929285 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-ddcbed9e32dc986efee624974339fd691561b1fe492259306265828c5b74843e-shm.mount: Deactivated successfully. Feb 13 19:42:56.929364 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a1eb87b46d5d1d0d1ce2163262bbe5eede24f5069548e462f409c473fafe386d-shm.mount: Deactivated successfully. 
Feb 13 19:42:56.932254 kubelet[2656]: I0213 19:42:56.932222 2656 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2253b7e3c451b5ca60a2cae78b04149aadea7f9c28994679d78fafb9255616ae" Feb 13 19:42:56.932982 containerd[1467]: time="2025-02-13T19:42:56.932838853Z" level=info msg="StopPodSandbox for \"2253b7e3c451b5ca60a2cae78b04149aadea7f9c28994679d78fafb9255616ae\"" Feb 13 19:42:56.933340 containerd[1467]: time="2025-02-13T19:42:56.933078764Z" level=info msg="Ensure that sandbox 2253b7e3c451b5ca60a2cae78b04149aadea7f9c28994679d78fafb9255616ae in task-service has been cleanup successfully" Feb 13 19:42:56.934146 containerd[1467]: time="2025-02-13T19:42:56.934076306Z" level=info msg="TearDown network for sandbox \"2253b7e3c451b5ca60a2cae78b04149aadea7f9c28994679d78fafb9255616ae\" successfully" Feb 13 19:42:56.934146 containerd[1467]: time="2025-02-13T19:42:56.934099049Z" level=info msg="StopPodSandbox for \"2253b7e3c451b5ca60a2cae78b04149aadea7f9c28994679d78fafb9255616ae\" returns successfully" Feb 13 19:42:56.934609 containerd[1467]: time="2025-02-13T19:42:56.934568239Z" level=info msg="StopPodSandbox for \"9838f93d930aefed276fcdd92fb7e1c1e9305a68265611e2d47cd7a2370c6bb8\"" Feb 13 19:42:56.934694 containerd[1467]: time="2025-02-13T19:42:56.934655624Z" level=info msg="TearDown network for sandbox \"9838f93d930aefed276fcdd92fb7e1c1e9305a68265611e2d47cd7a2370c6bb8\" successfully" Feb 13 19:42:56.934694 containerd[1467]: time="2025-02-13T19:42:56.934672826Z" level=info msg="StopPodSandbox for \"9838f93d930aefed276fcdd92fb7e1c1e9305a68265611e2d47cd7a2370c6bb8\" returns successfully" Feb 13 19:42:56.936044 containerd[1467]: time="2025-02-13T19:42:56.935999675Z" level=info msg="StopPodSandbox for \"e3e45c614369b5fa892d59c6ed2c11a47ce68f02eafdafdcecd31a42bec5e8ee\"" Feb 13 19:42:56.936180 containerd[1467]: time="2025-02-13T19:42:56.936142443Z" level=info msg="TearDown network for sandbox \"e3e45c614369b5fa892d59c6ed2c11a47ce68f02eafdafdcecd31a42bec5e8ee\" successfully" Feb 13 19:42:56.936180 containerd[1467]: time="2025-02-13T19:42:56.936156369Z" level=info msg="StopPodSandbox for \"e3e45c614369b5fa892d59c6ed2c11a47ce68f02eafdafdcecd31a42bec5e8ee\" returns successfully" Feb 13 19:42:56.936123 systemd[1]: run-netns-cni\x2d3f97a52c\x2d4bc9\x2d0691\x2d302f\x2d95ce1f1602cb.mount: Deactivated successfully. 
Feb 13 19:42:56.936833 containerd[1467]: time="2025-02-13T19:42:56.936814033Z" level=info msg="StopPodSandbox for \"d44e7c809ab337b67b18df9c61d5caa6402c1cc76d619d60e5d1d6da3800eaec\"" Feb 13 19:42:56.937033 containerd[1467]: time="2025-02-13T19:42:56.936963524Z" level=info msg="TearDown network for sandbox \"d44e7c809ab337b67b18df9c61d5caa6402c1cc76d619d60e5d1d6da3800eaec\" successfully" Feb 13 19:42:56.937033 containerd[1467]: time="2025-02-13T19:42:56.936978171Z" level=info msg="StopPodSandbox for \"d44e7c809ab337b67b18df9c61d5caa6402c1cc76d619d60e5d1d6da3800eaec\" returns successfully" Feb 13 19:42:56.937260 containerd[1467]: time="2025-02-13T19:42:56.937230986Z" level=info msg="StopPodSandbox for \"4a5ec2259b92339e47107aa67428fea37834d2811a71b36beb4f666a45ba9d2d\"" Feb 13 19:42:56.937371 containerd[1467]: time="2025-02-13T19:42:56.937339770Z" level=info msg="TearDown network for sandbox \"4a5ec2259b92339e47107aa67428fea37834d2811a71b36beb4f666a45ba9d2d\" successfully" Feb 13 19:42:56.937371 containerd[1467]: time="2025-02-13T19:42:56.937358586Z" level=info msg="StopPodSandbox for \"4a5ec2259b92339e47107aa67428fea37834d2811a71b36beb4f666a45ba9d2d\" returns successfully" Feb 13 19:42:56.937751 containerd[1467]: time="2025-02-13T19:42:56.937694906Z" level=info msg="StopPodSandbox for \"15f5f0073d3a00815e2691e423b79fb9f7b74ca3dca7c8fc2f5ee74434ddc4d5\"" Feb 13 19:42:56.937839 containerd[1467]: time="2025-02-13T19:42:56.937820291Z" level=info msg="TearDown network for sandbox \"15f5f0073d3a00815e2691e423b79fb9f7b74ca3dca7c8fc2f5ee74434ddc4d5\" successfully" Feb 13 19:42:56.937839 containerd[1467]: time="2025-02-13T19:42:56.937836181Z" level=info msg="StopPodSandbox for \"15f5f0073d3a00815e2691e423b79fb9f7b74ca3dca7c8fc2f5ee74434ddc4d5\" returns successfully" Feb 13 19:42:56.938110 kubelet[2656]: E0213 19:42:56.938080 2656 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:42:56.938378 containerd[1467]: time="2025-02-13T19:42:56.938355226Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-rkdrq,Uid:9029bfbd-404a-4b40-be12-a20e64469d44,Namespace:kube-system,Attempt:6,}" Feb 13 19:42:56.938840 kubelet[2656]: I0213 19:42:56.938816 2656 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a1eb87b46d5d1d0d1ce2163262bbe5eede24f5069548e462f409c473fafe386d" Feb 13 19:42:56.939838 containerd[1467]: time="2025-02-13T19:42:56.939792844Z" level=info msg="StopPodSandbox for \"a1eb87b46d5d1d0d1ce2163262bbe5eede24f5069548e462f409c473fafe386d\"" Feb 13 19:42:56.940116 containerd[1467]: time="2025-02-13T19:42:56.940080643Z" level=info msg="Ensure that sandbox a1eb87b46d5d1d0d1ce2163262bbe5eede24f5069548e462f409c473fafe386d in task-service has been cleanup successfully" Feb 13 19:42:56.942701 kubelet[2656]: E0213 19:42:56.942667 2656 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:42:56.943458 containerd[1467]: time="2025-02-13T19:42:56.943280488Z" level=info msg="TearDown network for sandbox \"a1eb87b46d5d1d0d1ce2163262bbe5eede24f5069548e462f409c473fafe386d\" successfully" Feb 13 19:42:56.943458 containerd[1467]: time="2025-02-13T19:42:56.943305966Z" level=info msg="StopPodSandbox for \"a1eb87b46d5d1d0d1ce2163262bbe5eede24f5069548e462f409c473fafe386d\" returns successfully" Feb 13 
19:42:56.943364 systemd[1]: run-netns-cni\x2dd2b44571\x2d7d21\x2d23ab\x2d3c87\x2d49333d58bc48.mount: Deactivated successfully. Feb 13 19:42:56.943857 containerd[1467]: time="2025-02-13T19:42:56.943803339Z" level=info msg="StopPodSandbox for \"38047ce7044f40ec719558d90bc15eec9e71b1b0be98fa0bcbf26abe164fe308\"" Feb 13 19:42:56.943946 containerd[1467]: time="2025-02-13T19:42:56.943927321Z" level=info msg="TearDown network for sandbox \"38047ce7044f40ec719558d90bc15eec9e71b1b0be98fa0bcbf26abe164fe308\" successfully" Feb 13 19:42:56.943946 containerd[1467]: time="2025-02-13T19:42:56.943943962Z" level=info msg="StopPodSandbox for \"38047ce7044f40ec719558d90bc15eec9e71b1b0be98fa0bcbf26abe164fe308\" returns successfully" Feb 13 19:42:56.944213 containerd[1467]: time="2025-02-13T19:42:56.944187510Z" level=info msg="StopPodSandbox for \"db3c78417459407092f7d7162890f98e4c33c08a46986bc8710e3bf57515370d\"" Feb 13 19:42:56.944300 containerd[1467]: time="2025-02-13T19:42:56.944280234Z" level=info msg="TearDown network for sandbox \"db3c78417459407092f7d7162890f98e4c33c08a46986bc8710e3bf57515370d\" successfully" Feb 13 19:42:56.944300 containerd[1467]: time="2025-02-13T19:42:56.944298799Z" level=info msg="StopPodSandbox for \"db3c78417459407092f7d7162890f98e4c33c08a46986bc8710e3bf57515370d\" returns successfully" Feb 13 19:42:56.944585 containerd[1467]: time="2025-02-13T19:42:56.944558616Z" level=info msg="StopPodSandbox for \"3afeeeebc3a2e8bb197bd69bc00bf8abb4465ec0df9f830d66d71042bec6e15f\"" Feb 13 19:42:56.944727 containerd[1467]: time="2025-02-13T19:42:56.944672580Z" level=info msg="TearDown network for sandbox \"3afeeeebc3a2e8bb197bd69bc00bf8abb4465ec0df9f830d66d71042bec6e15f\" successfully" Feb 13 19:42:56.944727 containerd[1467]: time="2025-02-13T19:42:56.944686837Z" level=info msg="StopPodSandbox for \"3afeeeebc3a2e8bb197bd69bc00bf8abb4465ec0df9f830d66d71042bec6e15f\" returns successfully" Feb 13 19:42:56.945151 containerd[1467]: time="2025-02-13T19:42:56.944982232Z" level=info msg="StopPodSandbox for \"0788c7c54d6ffc8b054e92393050c76b7df05c1b22dd254befed893f0acbfdfb\"" Feb 13 19:42:56.945151 containerd[1467]: time="2025-02-13T19:42:56.945075877Z" level=info msg="TearDown network for sandbox \"0788c7c54d6ffc8b054e92393050c76b7df05c1b22dd254befed893f0acbfdfb\" successfully" Feb 13 19:42:56.945151 containerd[1467]: time="2025-02-13T19:42:56.945092298Z" level=info msg="StopPodSandbox for \"0788c7c54d6ffc8b054e92393050c76b7df05c1b22dd254befed893f0acbfdfb\" returns successfully" Feb 13 19:42:56.945620 containerd[1467]: time="2025-02-13T19:42:56.945435151Z" level=info msg="StopPodSandbox for \"ea7ad1b23abc6667f4563f79978cf671a336c2e70631836fac2ee65439eeb9b4\"" Feb 13 19:42:56.945620 containerd[1467]: time="2025-02-13T19:42:56.945552040Z" level=info msg="TearDown network for sandbox \"ea7ad1b23abc6667f4563f79978cf671a336c2e70631836fac2ee65439eeb9b4\" successfully" Feb 13 19:42:56.945620 containerd[1467]: time="2025-02-13T19:42:56.945565666Z" level=info msg="StopPodSandbox for \"ea7ad1b23abc6667f4563f79978cf671a336c2e70631836fac2ee65439eeb9b4\" returns successfully" Feb 13 19:42:56.946007 containerd[1467]: time="2025-02-13T19:42:56.945979513Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-qxpzd,Uid:b54cda8b-5691-4984-90d2-94b24a2518d5,Namespace:calico-system,Attempt:6,}" Feb 13 19:42:56.948740 kubelet[2656]: I0213 19:42:56.948699 2656 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="b28f8d5d11c586fec9cf1d8aef8bfc92c012e00614d9cc75be530d04d5540753" Feb 13 19:42:56.949337 containerd[1467]: time="2025-02-13T19:42:56.949306396Z" level=info msg="StopPodSandbox for \"b28f8d5d11c586fec9cf1d8aef8bfc92c012e00614d9cc75be530d04d5540753\"" Feb 13 19:42:56.949616 containerd[1467]: time="2025-02-13T19:42:56.949580931Z" level=info msg="Ensure that sandbox b28f8d5d11c586fec9cf1d8aef8bfc92c012e00614d9cc75be530d04d5540753 in task-service has been cleanup successfully" Feb 13 19:42:56.950074 containerd[1467]: time="2025-02-13T19:42:56.950048278Z" level=info msg="TearDown network for sandbox \"b28f8d5d11c586fec9cf1d8aef8bfc92c012e00614d9cc75be530d04d5540753\" successfully" Feb 13 19:42:56.950074 containerd[1467]: time="2025-02-13T19:42:56.950067625Z" level=info msg="StopPodSandbox for \"b28f8d5d11c586fec9cf1d8aef8bfc92c012e00614d9cc75be530d04d5540753\" returns successfully" Feb 13 19:42:56.950525 containerd[1467]: time="2025-02-13T19:42:56.950352809Z" level=info msg="StopPodSandbox for \"d35f28e953f02e761094fccf5bcecb7b39f4a3e96d088c35bcddd161efc3cea2\"" Feb 13 19:42:56.950525 containerd[1467]: time="2025-02-13T19:42:56.950433230Z" level=info msg="TearDown network for sandbox \"d35f28e953f02e761094fccf5bcecb7b39f4a3e96d088c35bcddd161efc3cea2\" successfully" Feb 13 19:42:56.950525 containerd[1467]: time="2025-02-13T19:42:56.950442858Z" level=info msg="StopPodSandbox for \"d35f28e953f02e761094fccf5bcecb7b39f4a3e96d088c35bcddd161efc3cea2\" returns successfully" Feb 13 19:42:56.951030 containerd[1467]: time="2025-02-13T19:42:56.950812051Z" level=info msg="StopPodSandbox for \"5717d4a7bf11003fbd11120e4e3de186575be86f95283ba329ec904ae917ec4c\"" Feb 13 19:42:56.951030 containerd[1467]: time="2025-02-13T19:42:56.950902591Z" level=info msg="TearDown network for sandbox \"5717d4a7bf11003fbd11120e4e3de186575be86f95283ba329ec904ae917ec4c\" successfully" Feb 13 19:42:56.951030 containerd[1467]: time="2025-02-13T19:42:56.950911779Z" level=info msg="StopPodSandbox for \"5717d4a7bf11003fbd11120e4e3de186575be86f95283ba329ec904ae917ec4c\" returns successfully" Feb 13 19:42:56.951407 containerd[1467]: time="2025-02-13T19:42:56.951358647Z" level=info msg="StopPodSandbox for \"93f4f84ab8cc75ce5e8c6bcd14e44730cf09bddee838a26c34e67bca1d10c69a\"" Feb 13 19:42:56.951520 containerd[1467]: time="2025-02-13T19:42:56.951477900Z" level=info msg="TearDown network for sandbox \"93f4f84ab8cc75ce5e8c6bcd14e44730cf09bddee838a26c34e67bca1d10c69a\" successfully" Feb 13 19:42:56.951520 containerd[1467]: time="2025-02-13T19:42:56.951494852Z" level=info msg="StopPodSandbox for \"93f4f84ab8cc75ce5e8c6bcd14e44730cf09bddee838a26c34e67bca1d10c69a\" returns successfully" Feb 13 19:42:56.951825 containerd[1467]: time="2025-02-13T19:42:56.951797150Z" level=info msg="StopPodSandbox for \"0692bb77aa8ee3afc089f45a0efb8192616eb703ec7b1a56f19564b86d119ca2\"" Feb 13 19:42:56.952185 containerd[1467]: time="2025-02-13T19:42:56.952114355Z" level=info msg="TearDown network for sandbox \"0692bb77aa8ee3afc089f45a0efb8192616eb703ec7b1a56f19564b86d119ca2\" successfully" Feb 13 19:42:56.952185 containerd[1467]: time="2025-02-13T19:42:56.952131146Z" level=info msg="StopPodSandbox for \"0692bb77aa8ee3afc089f45a0efb8192616eb703ec7b1a56f19564b86d119ca2\" returns successfully" Feb 13 19:42:56.952916 containerd[1467]: time="2025-02-13T19:42:56.952743205Z" level=info msg="StopPodSandbox for \"03abb776a14c6f8522f6625b44caf63f5bebc7398e5051702e3d9968abb0362e\"" Feb 13 19:42:56.952916 containerd[1467]: time="2025-02-13T19:42:56.952836590Z" 
level=info msg="TearDown network for sandbox \"03abb776a14c6f8522f6625b44caf63f5bebc7398e5051702e3d9968abb0362e\" successfully" Feb 13 19:42:56.952916 containerd[1467]: time="2025-02-13T19:42:56.952859042Z" level=info msg="StopPodSandbox for \"03abb776a14c6f8522f6625b44caf63f5bebc7398e5051702e3d9968abb0362e\" returns successfully" Feb 13 19:42:56.953387 systemd[1]: run-netns-cni\x2d2853938c\x2d1676\x2d0a15\x2d8e9a\x2d72a4d8257f86.mount: Deactivated successfully. Feb 13 19:42:56.954098 containerd[1467]: time="2025-02-13T19:42:56.953761346Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7f9ffdc98-tn7zc,Uid:72c9c3b8-d2cf-4ad0-a722-60cf8109d8ff,Namespace:calico-apiserver,Attempt:6,}" Feb 13 19:42:56.955006 kubelet[2656]: I0213 19:42:56.954962 2656 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c0bcbde6606c83357aa8f950e49ccff144c85b107d7599959f012dd4e3333606" Feb 13 19:42:56.956152 containerd[1467]: time="2025-02-13T19:42:56.956114201Z" level=info msg="StopPodSandbox for \"c0bcbde6606c83357aa8f950e49ccff144c85b107d7599959f012dd4e3333606\"" Feb 13 19:42:56.956594 containerd[1467]: time="2025-02-13T19:42:56.956443138Z" level=info msg="Ensure that sandbox c0bcbde6606c83357aa8f950e49ccff144c85b107d7599959f012dd4e3333606 in task-service has been cleanup successfully" Feb 13 19:42:56.959337 containerd[1467]: time="2025-02-13T19:42:56.959237191Z" level=info msg="TearDown network for sandbox \"c0bcbde6606c83357aa8f950e49ccff144c85b107d7599959f012dd4e3333606\" successfully" Feb 13 19:42:56.959337 containerd[1467]: time="2025-02-13T19:42:56.959258511Z" level=info msg="StopPodSandbox for \"c0bcbde6606c83357aa8f950e49ccff144c85b107d7599959f012dd4e3333606\" returns successfully" Feb 13 19:42:56.960237 containerd[1467]: time="2025-02-13T19:42:56.960090372Z" level=info msg="StopPodSandbox for \"babd1eaaf9746dd4c1ef3ded5a20d28289a6af4a88fce8e253910ffc638e1b8d\"" Feb 13 19:42:56.960237 containerd[1467]: time="2025-02-13T19:42:56.960181463Z" level=info msg="TearDown network for sandbox \"babd1eaaf9746dd4c1ef3ded5a20d28289a6af4a88fce8e253910ffc638e1b8d\" successfully" Feb 13 19:42:56.960237 containerd[1467]: time="2025-02-13T19:42:56.960194898Z" level=info msg="StopPodSandbox for \"babd1eaaf9746dd4c1ef3ded5a20d28289a6af4a88fce8e253910ffc638e1b8d\" returns successfully" Feb 13 19:42:56.960733 systemd[1]: run-netns-cni\x2d5848482a\x2d8c8e\x2d34bb\x2d9d89\x2db2fe61cd657f.mount: Deactivated successfully. 
Feb 13 19:42:56.961866 containerd[1467]: time="2025-02-13T19:42:56.961289593Z" level=info msg="StopPodSandbox for \"65c35f2d8b43c13374f37f5c29a3fd27e0a31b404307e17dc87790accc6c0a7f\"" Feb 13 19:42:56.961866 containerd[1467]: time="2025-02-13T19:42:56.961453680Z" level=info msg="TearDown network for sandbox \"65c35f2d8b43c13374f37f5c29a3fd27e0a31b404307e17dc87790accc6c0a7f\" successfully" Feb 13 19:42:56.961866 containerd[1467]: time="2025-02-13T19:42:56.961466174Z" level=info msg="StopPodSandbox for \"65c35f2d8b43c13374f37f5c29a3fd27e0a31b404307e17dc87790accc6c0a7f\" returns successfully" Feb 13 19:42:56.962119 containerd[1467]: time="2025-02-13T19:42:56.962074716Z" level=info msg="StopPodSandbox for \"113db1c2c05056763e1dcab40407dfe511d3b110c8de33b9c0bb45d929623fde\"" Feb 13 19:42:56.962189 containerd[1467]: time="2025-02-13T19:42:56.962169814Z" level=info msg="TearDown network for sandbox \"113db1c2c05056763e1dcab40407dfe511d3b110c8de33b9c0bb45d929623fde\" successfully" Feb 13 19:42:56.962189 containerd[1467]: time="2025-02-13T19:42:56.962181446Z" level=info msg="StopPodSandbox for \"113db1c2c05056763e1dcab40407dfe511d3b110c8de33b9c0bb45d929623fde\" returns successfully" Feb 13 19:42:56.964690 containerd[1467]: time="2025-02-13T19:42:56.964359252Z" level=info msg="StopPodSandbox for \"acfb1b7eecc75b14a57b843858b94fa8648758738a7e7d91cd2473dc2127b7ba\"" Feb 13 19:42:56.964690 containerd[1467]: time="2025-02-13T19:42:56.964448680Z" level=info msg="TearDown network for sandbox \"acfb1b7eecc75b14a57b843858b94fa8648758738a7e7d91cd2473dc2127b7ba\" successfully" Feb 13 19:42:56.964690 containerd[1467]: time="2025-02-13T19:42:56.964459180Z" level=info msg="StopPodSandbox for \"acfb1b7eecc75b14a57b843858b94fa8648758738a7e7d91cd2473dc2127b7ba\" returns successfully" Feb 13 19:42:56.965145 containerd[1467]: time="2025-02-13T19:42:56.965127053Z" level=info msg="StopPodSandbox for \"05ebdeb5f9509ab11f0283d09aded299f309ee75b4f923bc2617f708e6b6b23e\"" Feb 13 19:42:56.965543 containerd[1467]: time="2025-02-13T19:42:56.965278818Z" level=info msg="TearDown network for sandbox \"05ebdeb5f9509ab11f0283d09aded299f309ee75b4f923bc2617f708e6b6b23e\" successfully" Feb 13 19:42:56.965543 containerd[1467]: time="2025-02-13T19:42:56.965292243Z" level=info msg="StopPodSandbox for \"05ebdeb5f9509ab11f0283d09aded299f309ee75b4f923bc2617f708e6b6b23e\" returns successfully" Feb 13 19:42:56.965617 kubelet[2656]: I0213 19:42:56.965554 2656 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="eba9a6b214974d09c896d8d0afab09d1ea23e394175225f724f1cf6f9a1981f3" Feb 13 19:42:56.965757 containerd[1467]: time="2025-02-13T19:42:56.965740445Z" level=info msg="StopPodSandbox for \"1595aba60392b63c1553a8901e090a43d78b0dd2f506eb2d65ada601c8d5441c\"" Feb 13 19:42:56.965921 containerd[1467]: time="2025-02-13T19:42:56.965906266Z" level=info msg="TearDown network for sandbox \"1595aba60392b63c1553a8901e090a43d78b0dd2f506eb2d65ada601c8d5441c\" successfully" Feb 13 19:42:56.965985 containerd[1467]: time="2025-02-13T19:42:56.965965386Z" level=info msg="StopPodSandbox for \"1595aba60392b63c1553a8901e090a43d78b0dd2f506eb2d65ada601c8d5441c\" returns successfully" Feb 13 19:42:56.966668 containerd[1467]: time="2025-02-13T19:42:56.966182814Z" level=info msg="StopPodSandbox for \"eba9a6b214974d09c896d8d0afab09d1ea23e394175225f724f1cf6f9a1981f3\"" Feb 13 19:42:56.966668 containerd[1467]: time="2025-02-13T19:42:56.966475304Z" level=info msg="Ensure that sandbox 
eba9a6b214974d09c896d8d0afab09d1ea23e394175225f724f1cf6f9a1981f3 in task-service has been cleanup successfully" Feb 13 19:42:56.966839 containerd[1467]: time="2025-02-13T19:42:56.966803159Z" level=info msg="TearDown network for sandbox \"eba9a6b214974d09c896d8d0afab09d1ea23e394175225f724f1cf6f9a1981f3\" successfully" Feb 13 19:42:56.966839 containerd[1467]: time="2025-02-13T19:42:56.966820291Z" level=info msg="StopPodSandbox for \"eba9a6b214974d09c896d8d0afab09d1ea23e394175225f724f1cf6f9a1981f3\" returns successfully" Feb 13 19:42:56.967661 containerd[1467]: time="2025-02-13T19:42:56.967612267Z" level=info msg="StopPodSandbox for \"dba9abf48b93dd0a6ebe5705ff62ca1e13f9f106531eac199271ce793b20325e\"" Feb 13 19:42:56.967737 containerd[1467]: time="2025-02-13T19:42:56.967714939Z" level=info msg="TearDown network for sandbox \"dba9abf48b93dd0a6ebe5705ff62ca1e13f9f106531eac199271ce793b20325e\" successfully" Feb 13 19:42:56.967807 containerd[1467]: time="2025-02-13T19:42:56.967735598Z" level=info msg="StopPodSandbox for \"dba9abf48b93dd0a6ebe5705ff62ca1e13f9f106531eac199271ce793b20325e\" returns successfully" Feb 13 19:42:56.967876 containerd[1467]: time="2025-02-13T19:42:56.967841126Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-78bc8bfdb7-xfh2s,Uid:98d92e3a-28ae-4220-b729-b97a4af8635e,Namespace:calico-system,Attempt:7,}" Feb 13 19:42:56.968793 containerd[1467]: time="2025-02-13T19:42:56.968642610Z" level=info msg="StopPodSandbox for \"d33b63469e8e0eb1922ed670a85cd121776c55f33c9094574816b7c1ea0694de\"" Feb 13 19:42:56.968793 containerd[1467]: time="2025-02-13T19:42:56.968740173Z" level=info msg="TearDown network for sandbox \"d33b63469e8e0eb1922ed670a85cd121776c55f33c9094574816b7c1ea0694de\" successfully" Feb 13 19:42:56.968793 containerd[1467]: time="2025-02-13T19:42:56.968752546Z" level=info msg="StopPodSandbox for \"d33b63469e8e0eb1922ed670a85cd121776c55f33c9094574816b7c1ea0694de\" returns successfully" Feb 13 19:42:56.969223 containerd[1467]: time="2025-02-13T19:42:56.969197181Z" level=info msg="StopPodSandbox for \"476eccaf99bc0f4eac505936b7325e87844524bc2c76bb506c6e391d40d89461\"" Feb 13 19:42:56.970107 containerd[1467]: time="2025-02-13T19:42:56.970010277Z" level=info msg="TearDown network for sandbox \"476eccaf99bc0f4eac505936b7325e87844524bc2c76bb506c6e391d40d89461\" successfully" Feb 13 19:42:56.970107 containerd[1467]: time="2025-02-13T19:42:56.970035334Z" level=info msg="StopPodSandbox for \"476eccaf99bc0f4eac505936b7325e87844524bc2c76bb506c6e391d40d89461\" returns successfully" Feb 13 19:42:56.970972 containerd[1467]: time="2025-02-13T19:42:56.970805349Z" level=info msg="StopPodSandbox for \"f2df588c3d5c7137f1b9c17c9dae5d07407ef6a7c0ff67a6a0358da645106646\"" Feb 13 19:42:56.970972 containerd[1467]: time="2025-02-13T19:42:56.970926847Z" level=info msg="TearDown network for sandbox \"f2df588c3d5c7137f1b9c17c9dae5d07407ef6a7c0ff67a6a0358da645106646\" successfully" Feb 13 19:42:56.970972 containerd[1467]: time="2025-02-13T19:42:56.970941164Z" level=info msg="StopPodSandbox for \"f2df588c3d5c7137f1b9c17c9dae5d07407ef6a7c0ff67a6a0358da645106646\" returns successfully" Feb 13 19:42:56.972000 containerd[1467]: time="2025-02-13T19:42:56.971872682Z" level=info msg="StopPodSandbox for \"3782781afd896cb7cf8a44c27fcb5a6dbd5316fbd92805b724a8e2c5adbac911\"" Feb 13 19:42:56.972711 kubelet[2656]: I0213 19:42:56.972315 2656 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-wj95g" podStartSLOduration=1.351513446 
podStartE2EDuration="23.972288623s" podCreationTimestamp="2025-02-13 19:42:33 +0000 UTC" firstStartedPulling="2025-02-13 19:42:33.497949712 +0000 UTC m=+19.870219265" lastFinishedPulling="2025-02-13 19:42:56.118724889 +0000 UTC m=+42.490994442" observedRunningTime="2025-02-13 19:42:56.969796416 +0000 UTC m=+43.342065979" watchObservedRunningTime="2025-02-13 19:42:56.972288623 +0000 UTC m=+43.344558176" Feb 13 19:42:56.972865 containerd[1467]: time="2025-02-13T19:42:56.972468490Z" level=info msg="TearDown network for sandbox \"3782781afd896cb7cf8a44c27fcb5a6dbd5316fbd92805b724a8e2c5adbac911\" successfully" Feb 13 19:42:56.973345 kubelet[2656]: I0213 19:42:56.973303 2656 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ddcbed9e32dc986efee624974339fd691561b1fe492259306265828c5b74843e" Feb 13 19:42:56.973742 containerd[1467]: time="2025-02-13T19:42:56.972486734Z" level=info msg="StopPodSandbox for \"3782781afd896cb7cf8a44c27fcb5a6dbd5316fbd92805b724a8e2c5adbac911\" returns successfully" Feb 13 19:42:56.973908 containerd[1467]: time="2025-02-13T19:42:56.973875900Z" level=info msg="StopPodSandbox for \"ddcbed9e32dc986efee624974339fd691561b1fe492259306265828c5b74843e\"" Feb 13 19:42:56.974201 containerd[1467]: time="2025-02-13T19:42:56.974180372Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7f9ffdc98-9gqpq,Uid:61a215c3-d6e4-40f4-9806-914857a2ab1f,Namespace:calico-apiserver,Attempt:6,}" Feb 13 19:42:56.974378 containerd[1467]: time="2025-02-13T19:42:56.974336856Z" level=info msg="Ensure that sandbox ddcbed9e32dc986efee624974339fd691561b1fe492259306265828c5b74843e in task-service has been cleanup successfully" Feb 13 19:42:56.974941 containerd[1467]: time="2025-02-13T19:42:56.974880236Z" level=info msg="TearDown network for sandbox \"ddcbed9e32dc986efee624974339fd691561b1fe492259306265828c5b74843e\" successfully" Feb 13 19:42:56.974941 containerd[1467]: time="2025-02-13T19:42:56.974907607Z" level=info msg="StopPodSandbox for \"ddcbed9e32dc986efee624974339fd691561b1fe492259306265828c5b74843e\" returns successfully" Feb 13 19:42:56.975242 containerd[1467]: time="2025-02-13T19:42:56.975219592Z" level=info msg="StopPodSandbox for \"7aca42da20fad12e567112a09368a612f4d46d569345c5b22a86eb8568dfba50\"" Feb 13 19:42:56.975336 containerd[1467]: time="2025-02-13T19:42:56.975318909Z" level=info msg="TearDown network for sandbox \"7aca42da20fad12e567112a09368a612f4d46d569345c5b22a86eb8568dfba50\" successfully" Feb 13 19:42:56.975368 containerd[1467]: time="2025-02-13T19:42:56.975335430Z" level=info msg="StopPodSandbox for \"7aca42da20fad12e567112a09368a612f4d46d569345c5b22a86eb8568dfba50\" returns successfully" Feb 13 19:42:56.975660 containerd[1467]: time="2025-02-13T19:42:56.975639351Z" level=info msg="StopPodSandbox for \"b753c98113d0daab22b5b0d48c1bf46fc2bd90d7b432f61d98419e4e686a742e\"" Feb 13 19:42:56.975887 containerd[1467]: time="2025-02-13T19:42:56.975731734Z" level=info msg="TearDown network for sandbox \"b753c98113d0daab22b5b0d48c1bf46fc2bd90d7b432f61d98419e4e686a742e\" successfully" Feb 13 19:42:56.975887 containerd[1467]: time="2025-02-13T19:42:56.975742083Z" level=info msg="StopPodSandbox for \"b753c98113d0daab22b5b0d48c1bf46fc2bd90d7b432f61d98419e4e686a742e\" returns successfully" Feb 13 19:42:56.976055 containerd[1467]: time="2025-02-13T19:42:56.976029482Z" level=info msg="StopPodSandbox for \"e421dd80bdff41e3771b21b9aa765d004d27633c2cb4690b7b118748dd6d6fe3\"" Feb 13 19:42:56.976168 containerd[1467]: 
time="2025-02-13T19:42:56.976149668Z" level=info msg="TearDown network for sandbox \"e421dd80bdff41e3771b21b9aa765d004d27633c2cb4690b7b118748dd6d6fe3\" successfully" Feb 13 19:42:56.976210 containerd[1467]: time="2025-02-13T19:42:56.976166790Z" level=info msg="StopPodSandbox for \"e421dd80bdff41e3771b21b9aa765d004d27633c2cb4690b7b118748dd6d6fe3\" returns successfully" Feb 13 19:42:56.976977 containerd[1467]: time="2025-02-13T19:42:56.976909343Z" level=info msg="StopPodSandbox for \"b2c10597fcbe95ccbb9f31aac916f6313c7fdfbebe339a656d49f8fd5908264e\"" Feb 13 19:42:56.977021 containerd[1467]: time="2025-02-13T19:42:56.977001596Z" level=info msg="TearDown network for sandbox \"b2c10597fcbe95ccbb9f31aac916f6313c7fdfbebe339a656d49f8fd5908264e\" successfully" Feb 13 19:42:56.977021 containerd[1467]: time="2025-02-13T19:42:56.977011475Z" level=info msg="StopPodSandbox for \"b2c10597fcbe95ccbb9f31aac916f6313c7fdfbebe339a656d49f8fd5908264e\" returns successfully" Feb 13 19:42:56.977358 containerd[1467]: time="2025-02-13T19:42:56.977334130Z" level=info msg="StopPodSandbox for \"5722d221ab8268ff34fa92ffb089c74929c9052cb6a9b2a3f71f33fca2bec3e9\"" Feb 13 19:42:56.977455 containerd[1467]: time="2025-02-13T19:42:56.977433918Z" level=info msg="TearDown network for sandbox \"5722d221ab8268ff34fa92ffb089c74929c9052cb6a9b2a3f71f33fca2bec3e9\" successfully" Feb 13 19:42:56.977534 containerd[1467]: time="2025-02-13T19:42:56.977453344Z" level=info msg="StopPodSandbox for \"5722d221ab8268ff34fa92ffb089c74929c9052cb6a9b2a3f71f33fca2bec3e9\" returns successfully" Feb 13 19:42:56.978186 kubelet[2656]: E0213 19:42:56.977691 2656 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:42:56.978467 containerd[1467]: time="2025-02-13T19:42:56.978447921Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-44gch,Uid:abdf7ab8-742e-4339-a8d7-ab8bf1f9e1ec,Namespace:kube-system,Attempt:6,}" Feb 13 19:42:57.356857 systemd-networkd[1396]: cali556a18d1e22: Link UP Feb 13 19:42:57.357074 systemd-networkd[1396]: cali556a18d1e22: Gained carrier Feb 13 19:42:57.431926 containerd[1467]: 2025-02-13 19:42:57.080 [INFO][5039] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Feb 13 19:42:57.431926 containerd[1467]: 2025-02-13 19:42:57.099 [INFO][5039] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--7f9ffdc98--tn7zc-eth0 calico-apiserver-7f9ffdc98- calico-apiserver 72c9c3b8-d2cf-4ad0-a722-60cf8109d8ff 811 0 2025-02-13 19:42:33 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7f9ffdc98 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-7f9ffdc98-tn7zc eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali556a18d1e22 [] []}} ContainerID="3ac1526f109e459d7e2629e28b39e5841aa9eb4e44128a66bb6a03dc579a8641" Namespace="calico-apiserver" Pod="calico-apiserver-7f9ffdc98-tn7zc" WorkloadEndpoint="localhost-k8s-calico--apiserver--7f9ffdc98--tn7zc-" Feb 13 19:42:57.431926 containerd[1467]: 2025-02-13 19:42:57.100 [INFO][5039] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="3ac1526f109e459d7e2629e28b39e5841aa9eb4e44128a66bb6a03dc579a8641" 
Namespace="calico-apiserver" Pod="calico-apiserver-7f9ffdc98-tn7zc" WorkloadEndpoint="localhost-k8s-calico--apiserver--7f9ffdc98--tn7zc-eth0" Feb 13 19:42:57.431926 containerd[1467]: 2025-02-13 19:42:57.170 [INFO][5078] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3ac1526f109e459d7e2629e28b39e5841aa9eb4e44128a66bb6a03dc579a8641" HandleID="k8s-pod-network.3ac1526f109e459d7e2629e28b39e5841aa9eb4e44128a66bb6a03dc579a8641" Workload="localhost-k8s-calico--apiserver--7f9ffdc98--tn7zc-eth0" Feb 13 19:42:57.431926 containerd[1467]: 2025-02-13 19:42:57.183 [INFO][5078] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="3ac1526f109e459d7e2629e28b39e5841aa9eb4e44128a66bb6a03dc579a8641" HandleID="k8s-pod-network.3ac1526f109e459d7e2629e28b39e5841aa9eb4e44128a66bb6a03dc579a8641" Workload="localhost-k8s-calico--apiserver--7f9ffdc98--tn7zc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000375710), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-7f9ffdc98-tn7zc", "timestamp":"2025-02-13 19:42:57.170659808 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 19:42:57.431926 containerd[1467]: 2025-02-13 19:42:57.183 [INFO][5078] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 19:42:57.431926 containerd[1467]: 2025-02-13 19:42:57.183 [INFO][5078] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 19:42:57.431926 containerd[1467]: 2025-02-13 19:42:57.183 [INFO][5078] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Feb 13 19:42:57.431926 containerd[1467]: 2025-02-13 19:42:57.186 [INFO][5078] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.3ac1526f109e459d7e2629e28b39e5841aa9eb4e44128a66bb6a03dc579a8641" host="localhost" Feb 13 19:42:57.431926 containerd[1467]: 2025-02-13 19:42:57.195 [INFO][5078] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Feb 13 19:42:57.431926 containerd[1467]: 2025-02-13 19:42:57.199 [INFO][5078] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Feb 13 19:42:57.431926 containerd[1467]: 2025-02-13 19:42:57.201 [INFO][5078] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Feb 13 19:42:57.431926 containerd[1467]: 2025-02-13 19:42:57.202 [INFO][5078] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Feb 13 19:42:57.431926 containerd[1467]: 2025-02-13 19:42:57.203 [INFO][5078] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.3ac1526f109e459d7e2629e28b39e5841aa9eb4e44128a66bb6a03dc579a8641" host="localhost" Feb 13 19:42:57.431926 containerd[1467]: 2025-02-13 19:42:57.204 [INFO][5078] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.3ac1526f109e459d7e2629e28b39e5841aa9eb4e44128a66bb6a03dc579a8641 Feb 13 19:42:57.431926 containerd[1467]: 2025-02-13 19:42:57.318 [INFO][5078] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.3ac1526f109e459d7e2629e28b39e5841aa9eb4e44128a66bb6a03dc579a8641" host="localhost" Feb 13 19:42:57.431926 containerd[1467]: 2025-02-13 19:42:57.333 [INFO][5078] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.129/26] 
block=192.168.88.128/26 handle="k8s-pod-network.3ac1526f109e459d7e2629e28b39e5841aa9eb4e44128a66bb6a03dc579a8641" host="localhost" Feb 13 19:42:57.431926 containerd[1467]: 2025-02-13 19:42:57.333 [INFO][5078] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.3ac1526f109e459d7e2629e28b39e5841aa9eb4e44128a66bb6a03dc579a8641" host="localhost" Feb 13 19:42:57.431926 containerd[1467]: 2025-02-13 19:42:57.333 [INFO][5078] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 19:42:57.431926 containerd[1467]: 2025-02-13 19:42:57.333 [INFO][5078] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="3ac1526f109e459d7e2629e28b39e5841aa9eb4e44128a66bb6a03dc579a8641" HandleID="k8s-pod-network.3ac1526f109e459d7e2629e28b39e5841aa9eb4e44128a66bb6a03dc579a8641" Workload="localhost-k8s-calico--apiserver--7f9ffdc98--tn7zc-eth0" Feb 13 19:42:57.432939 containerd[1467]: 2025-02-13 19:42:57.340 [INFO][5039] cni-plugin/k8s.go 386: Populated endpoint ContainerID="3ac1526f109e459d7e2629e28b39e5841aa9eb4e44128a66bb6a03dc579a8641" Namespace="calico-apiserver" Pod="calico-apiserver-7f9ffdc98-tn7zc" WorkloadEndpoint="localhost-k8s-calico--apiserver--7f9ffdc98--tn7zc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7f9ffdc98--tn7zc-eth0", GenerateName:"calico-apiserver-7f9ffdc98-", Namespace:"calico-apiserver", SelfLink:"", UID:"72c9c3b8-d2cf-4ad0-a722-60cf8109d8ff", ResourceVersion:"811", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 19, 42, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7f9ffdc98", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-7f9ffdc98-tn7zc", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali556a18d1e22", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 19:42:57.432939 containerd[1467]: 2025-02-13 19:42:57.341 [INFO][5039] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.129/32] ContainerID="3ac1526f109e459d7e2629e28b39e5841aa9eb4e44128a66bb6a03dc579a8641" Namespace="calico-apiserver" Pod="calico-apiserver-7f9ffdc98-tn7zc" WorkloadEndpoint="localhost-k8s-calico--apiserver--7f9ffdc98--tn7zc-eth0" Feb 13 19:42:57.432939 containerd[1467]: 2025-02-13 19:42:57.341 [INFO][5039] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali556a18d1e22 ContainerID="3ac1526f109e459d7e2629e28b39e5841aa9eb4e44128a66bb6a03dc579a8641" Namespace="calico-apiserver" Pod="calico-apiserver-7f9ffdc98-tn7zc" WorkloadEndpoint="localhost-k8s-calico--apiserver--7f9ffdc98--tn7zc-eth0" Feb 13 19:42:57.432939 containerd[1467]: 2025-02-13 19:42:57.357 [INFO][5039] 
cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3ac1526f109e459d7e2629e28b39e5841aa9eb4e44128a66bb6a03dc579a8641" Namespace="calico-apiserver" Pod="calico-apiserver-7f9ffdc98-tn7zc" WorkloadEndpoint="localhost-k8s-calico--apiserver--7f9ffdc98--tn7zc-eth0" Feb 13 19:42:57.432939 containerd[1467]: 2025-02-13 19:42:57.359 [INFO][5039] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="3ac1526f109e459d7e2629e28b39e5841aa9eb4e44128a66bb6a03dc579a8641" Namespace="calico-apiserver" Pod="calico-apiserver-7f9ffdc98-tn7zc" WorkloadEndpoint="localhost-k8s-calico--apiserver--7f9ffdc98--tn7zc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7f9ffdc98--tn7zc-eth0", GenerateName:"calico-apiserver-7f9ffdc98-", Namespace:"calico-apiserver", SelfLink:"", UID:"72c9c3b8-d2cf-4ad0-a722-60cf8109d8ff", ResourceVersion:"811", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 19, 42, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7f9ffdc98", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"3ac1526f109e459d7e2629e28b39e5841aa9eb4e44128a66bb6a03dc579a8641", Pod:"calico-apiserver-7f9ffdc98-tn7zc", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali556a18d1e22", MAC:"06:40:8e:a5:8d:ca", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 19:42:57.432939 containerd[1467]: 2025-02-13 19:42:57.428 [INFO][5039] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="3ac1526f109e459d7e2629e28b39e5841aa9eb4e44128a66bb6a03dc579a8641" Namespace="calico-apiserver" Pod="calico-apiserver-7f9ffdc98-tn7zc" WorkloadEndpoint="localhost-k8s-calico--apiserver--7f9ffdc98--tn7zc-eth0" Feb 13 19:42:57.444077 systemd-networkd[1396]: cali1e649c0561a: Link UP Feb 13 19:42:57.445047 systemd-networkd[1396]: cali1e649c0561a: Gained carrier Feb 13 19:42:57.456971 containerd[1467]: 2025-02-13 19:42:57.059 [INFO][4996] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Feb 13 19:42:57.456971 containerd[1467]: 2025-02-13 19:42:57.095 [INFO][4996] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7db6d8ff4d--rkdrq-eth0 coredns-7db6d8ff4d- kube-system 9029bfbd-404a-4b40-be12-a20e64469d44 812 0 2025-02-13 19:42:27 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7db6d8ff4d-rkdrq eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali1e649c0561a [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} 
ContainerID="fcdbfea6cc4f847729d841aaee67ef94c10b32baf3fad4ba71ff28789611d8d2" Namespace="kube-system" Pod="coredns-7db6d8ff4d-rkdrq" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--rkdrq-" Feb 13 19:42:57.456971 containerd[1467]: 2025-02-13 19:42:57.095 [INFO][4996] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="fcdbfea6cc4f847729d841aaee67ef94c10b32baf3fad4ba71ff28789611d8d2" Namespace="kube-system" Pod="coredns-7db6d8ff4d-rkdrq" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--rkdrq-eth0" Feb 13 19:42:57.456971 containerd[1467]: 2025-02-13 19:42:57.170 [INFO][5076] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="fcdbfea6cc4f847729d841aaee67ef94c10b32baf3fad4ba71ff28789611d8d2" HandleID="k8s-pod-network.fcdbfea6cc4f847729d841aaee67ef94c10b32baf3fad4ba71ff28789611d8d2" Workload="localhost-k8s-coredns--7db6d8ff4d--rkdrq-eth0" Feb 13 19:42:57.456971 containerd[1467]: 2025-02-13 19:42:57.185 [INFO][5076] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="fcdbfea6cc4f847729d841aaee67ef94c10b32baf3fad4ba71ff28789611d8d2" HandleID="k8s-pod-network.fcdbfea6cc4f847729d841aaee67ef94c10b32baf3fad4ba71ff28789611d8d2" Workload="localhost-k8s-coredns--7db6d8ff4d--rkdrq-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003e17a0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7db6d8ff4d-rkdrq", "timestamp":"2025-02-13 19:42:57.170824017 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 19:42:57.456971 containerd[1467]: 2025-02-13 19:42:57.185 [INFO][5076] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 19:42:57.456971 containerd[1467]: 2025-02-13 19:42:57.334 [INFO][5076] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Feb 13 19:42:57.456971 containerd[1467]: 2025-02-13 19:42:57.334 [INFO][5076] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Feb 13 19:42:57.456971 containerd[1467]: 2025-02-13 19:42:57.336 [INFO][5076] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.fcdbfea6cc4f847729d841aaee67ef94c10b32baf3fad4ba71ff28789611d8d2" host="localhost" Feb 13 19:42:57.456971 containerd[1467]: 2025-02-13 19:42:57.341 [INFO][5076] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Feb 13 19:42:57.456971 containerd[1467]: 2025-02-13 19:42:57.346 [INFO][5076] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Feb 13 19:42:57.456971 containerd[1467]: 2025-02-13 19:42:57.349 [INFO][5076] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Feb 13 19:42:57.456971 containerd[1467]: 2025-02-13 19:42:57.352 [INFO][5076] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Feb 13 19:42:57.456971 containerd[1467]: 2025-02-13 19:42:57.352 [INFO][5076] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.fcdbfea6cc4f847729d841aaee67ef94c10b32baf3fad4ba71ff28789611d8d2" host="localhost" Feb 13 19:42:57.456971 containerd[1467]: 2025-02-13 19:42:57.355 [INFO][5076] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.fcdbfea6cc4f847729d841aaee67ef94c10b32baf3fad4ba71ff28789611d8d2 Feb 13 19:42:57.456971 containerd[1467]: 2025-02-13 19:42:57.428 [INFO][5076] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.fcdbfea6cc4f847729d841aaee67ef94c10b32baf3fad4ba71ff28789611d8d2" host="localhost" Feb 13 19:42:57.456971 containerd[1467]: 2025-02-13 19:42:57.434 [INFO][5076] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.fcdbfea6cc4f847729d841aaee67ef94c10b32baf3fad4ba71ff28789611d8d2" host="localhost" Feb 13 19:42:57.456971 containerd[1467]: 2025-02-13 19:42:57.434 [INFO][5076] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.fcdbfea6cc4f847729d841aaee67ef94c10b32baf3fad4ba71ff28789611d8d2" host="localhost" Feb 13 19:42:57.456971 containerd[1467]: 2025-02-13 19:42:57.434 [INFO][5076] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Feb 13 19:42:57.456971 containerd[1467]: 2025-02-13 19:42:57.434 [INFO][5076] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="fcdbfea6cc4f847729d841aaee67ef94c10b32baf3fad4ba71ff28789611d8d2" HandleID="k8s-pod-network.fcdbfea6cc4f847729d841aaee67ef94c10b32baf3fad4ba71ff28789611d8d2" Workload="localhost-k8s-coredns--7db6d8ff4d--rkdrq-eth0" Feb 13 19:42:57.457566 containerd[1467]: 2025-02-13 19:42:57.439 [INFO][4996] cni-plugin/k8s.go 386: Populated endpoint ContainerID="fcdbfea6cc4f847729d841aaee67ef94c10b32baf3fad4ba71ff28789611d8d2" Namespace="kube-system" Pod="coredns-7db6d8ff4d-rkdrq" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--rkdrq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--rkdrq-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"9029bfbd-404a-4b40-be12-a20e64469d44", ResourceVersion:"812", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 19, 42, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7db6d8ff4d-rkdrq", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali1e649c0561a", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 19:42:57.457566 containerd[1467]: 2025-02-13 19:42:57.439 [INFO][4996] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.130/32] ContainerID="fcdbfea6cc4f847729d841aaee67ef94c10b32baf3fad4ba71ff28789611d8d2" Namespace="kube-system" Pod="coredns-7db6d8ff4d-rkdrq" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--rkdrq-eth0" Feb 13 19:42:57.457566 containerd[1467]: 2025-02-13 19:42:57.439 [INFO][4996] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1e649c0561a ContainerID="fcdbfea6cc4f847729d841aaee67ef94c10b32baf3fad4ba71ff28789611d8d2" Namespace="kube-system" Pod="coredns-7db6d8ff4d-rkdrq" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--rkdrq-eth0" Feb 13 19:42:57.457566 containerd[1467]: 2025-02-13 19:42:57.443 [INFO][4996] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="fcdbfea6cc4f847729d841aaee67ef94c10b32baf3fad4ba71ff28789611d8d2" Namespace="kube-system" Pod="coredns-7db6d8ff4d-rkdrq" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--rkdrq-eth0" Feb 13 19:42:57.457566 containerd[1467]: 2025-02-13 19:42:57.443 
[INFO][4996] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="fcdbfea6cc4f847729d841aaee67ef94c10b32baf3fad4ba71ff28789611d8d2" Namespace="kube-system" Pod="coredns-7db6d8ff4d-rkdrq" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--rkdrq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--rkdrq-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"9029bfbd-404a-4b40-be12-a20e64469d44", ResourceVersion:"812", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 19, 42, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"fcdbfea6cc4f847729d841aaee67ef94c10b32baf3fad4ba71ff28789611d8d2", Pod:"coredns-7db6d8ff4d-rkdrq", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali1e649c0561a", MAC:"e6:9e:86:1d:ff:0f", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 19:42:57.457566 containerd[1467]: 2025-02-13 19:42:57.452 [INFO][4996] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="fcdbfea6cc4f847729d841aaee67ef94c10b32baf3fad4ba71ff28789611d8d2" Namespace="kube-system" Pod="coredns-7db6d8ff4d-rkdrq" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--rkdrq-eth0" Feb 13 19:42:57.492549 systemd-networkd[1396]: califd2103b5eda: Link UP Feb 13 19:42:57.493386 systemd-networkd[1396]: califd2103b5eda: Gained carrier Feb 13 19:42:57.623252 systemd-networkd[1396]: caliaa4419c3920: Link UP Feb 13 19:42:57.624461 systemd-networkd[1396]: caliaa4419c3920: Gained carrier Feb 13 19:42:57.684398 containerd[1467]: 2025-02-13 19:42:57.116 [INFO][5066] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Feb 13 19:42:57.684398 containerd[1467]: 2025-02-13 19:42:57.156 [INFO][5066] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--78bc8bfdb7--xfh2s-eth0 calico-kube-controllers-78bc8bfdb7- calico-system 98d92e3a-28ae-4220-b729-b97a4af8635e 810 0 2025-02-13 19:42:33 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:78bc8bfdb7 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost 
calico-kube-controllers-78bc8bfdb7-xfh2s eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] caliaa4419c3920 [] []}} ContainerID="368e99e8dfb814055edcc6e1ba139801743bc5f7ed0e457a2cff10d6ce1d71e3" Namespace="calico-system" Pod="calico-kube-controllers-78bc8bfdb7-xfh2s" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--78bc8bfdb7--xfh2s-" Feb 13 19:42:57.684398 containerd[1467]: 2025-02-13 19:42:57.156 [INFO][5066] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="368e99e8dfb814055edcc6e1ba139801743bc5f7ed0e457a2cff10d6ce1d71e3" Namespace="calico-system" Pod="calico-kube-controllers-78bc8bfdb7-xfh2s" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--78bc8bfdb7--xfh2s-eth0" Feb 13 19:42:57.684398 containerd[1467]: 2025-02-13 19:42:57.203 [INFO][5098] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="368e99e8dfb814055edcc6e1ba139801743bc5f7ed0e457a2cff10d6ce1d71e3" HandleID="k8s-pod-network.368e99e8dfb814055edcc6e1ba139801743bc5f7ed0e457a2cff10d6ce1d71e3" Workload="localhost-k8s-calico--kube--controllers--78bc8bfdb7--xfh2s-eth0" Feb 13 19:42:57.684398 containerd[1467]: 2025-02-13 19:42:57.333 [INFO][5098] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="368e99e8dfb814055edcc6e1ba139801743bc5f7ed0e457a2cff10d6ce1d71e3" HandleID="k8s-pod-network.368e99e8dfb814055edcc6e1ba139801743bc5f7ed0e457a2cff10d6ce1d71e3" Workload="localhost-k8s-calico--kube--controllers--78bc8bfdb7--xfh2s-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00013b7d0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-78bc8bfdb7-xfh2s", "timestamp":"2025-02-13 19:42:57.202988582 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 19:42:57.684398 containerd[1467]: 2025-02-13 19:42:57.333 [INFO][5098] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 19:42:57.684398 containerd[1467]: 2025-02-13 19:42:57.488 [INFO][5098] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Feb 13 19:42:57.684398 containerd[1467]: 2025-02-13 19:42:57.488 [INFO][5098] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Feb 13 19:42:57.684398 containerd[1467]: 2025-02-13 19:42:57.489 [INFO][5098] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.368e99e8dfb814055edcc6e1ba139801743bc5f7ed0e457a2cff10d6ce1d71e3" host="localhost" Feb 13 19:42:57.684398 containerd[1467]: 2025-02-13 19:42:57.493 [INFO][5098] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Feb 13 19:42:57.684398 containerd[1467]: 2025-02-13 19:42:57.513 [INFO][5098] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Feb 13 19:42:57.684398 containerd[1467]: 2025-02-13 19:42:57.515 [INFO][5098] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Feb 13 19:42:57.684398 containerd[1467]: 2025-02-13 19:42:57.516 [INFO][5098] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Feb 13 19:42:57.684398 containerd[1467]: 2025-02-13 19:42:57.516 [INFO][5098] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.368e99e8dfb814055edcc6e1ba139801743bc5f7ed0e457a2cff10d6ce1d71e3" host="localhost" Feb 13 19:42:57.684398 containerd[1467]: 2025-02-13 19:42:57.518 [INFO][5098] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.368e99e8dfb814055edcc6e1ba139801743bc5f7ed0e457a2cff10d6ce1d71e3 Feb 13 19:42:57.684398 containerd[1467]: 2025-02-13 19:42:57.598 [INFO][5098] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.368e99e8dfb814055edcc6e1ba139801743bc5f7ed0e457a2cff10d6ce1d71e3" host="localhost" Feb 13 19:42:57.684398 containerd[1467]: 2025-02-13 19:42:57.618 [INFO][5098] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.368e99e8dfb814055edcc6e1ba139801743bc5f7ed0e457a2cff10d6ce1d71e3" host="localhost" Feb 13 19:42:57.684398 containerd[1467]: 2025-02-13 19:42:57.618 [INFO][5098] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.368e99e8dfb814055edcc6e1ba139801743bc5f7ed0e457a2cff10d6ce1d71e3" host="localhost" Feb 13 19:42:57.684398 containerd[1467]: 2025-02-13 19:42:57.618 [INFO][5098] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Feb 13 19:42:57.684398 containerd[1467]: 2025-02-13 19:42:57.618 [INFO][5098] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="368e99e8dfb814055edcc6e1ba139801743bc5f7ed0e457a2cff10d6ce1d71e3" HandleID="k8s-pod-network.368e99e8dfb814055edcc6e1ba139801743bc5f7ed0e457a2cff10d6ce1d71e3" Workload="localhost-k8s-calico--kube--controllers--78bc8bfdb7--xfh2s-eth0" Feb 13 19:42:57.685013 containerd[1467]: 2025-02-13 19:42:57.621 [INFO][5066] cni-plugin/k8s.go 386: Populated endpoint ContainerID="368e99e8dfb814055edcc6e1ba139801743bc5f7ed0e457a2cff10d6ce1d71e3" Namespace="calico-system" Pod="calico-kube-controllers-78bc8bfdb7-xfh2s" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--78bc8bfdb7--xfh2s-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--78bc8bfdb7--xfh2s-eth0", GenerateName:"calico-kube-controllers-78bc8bfdb7-", Namespace:"calico-system", SelfLink:"", UID:"98d92e3a-28ae-4220-b729-b97a4af8635e", ResourceVersion:"810", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 19, 42, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"78bc8bfdb7", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-78bc8bfdb7-xfh2s", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"caliaa4419c3920", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 19:42:57.685013 containerd[1467]: 2025-02-13 19:42:57.621 [INFO][5066] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.132/32] ContainerID="368e99e8dfb814055edcc6e1ba139801743bc5f7ed0e457a2cff10d6ce1d71e3" Namespace="calico-system" Pod="calico-kube-controllers-78bc8bfdb7-xfh2s" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--78bc8bfdb7--xfh2s-eth0" Feb 13 19:42:57.685013 containerd[1467]: 2025-02-13 19:42:57.621 [INFO][5066] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliaa4419c3920 ContainerID="368e99e8dfb814055edcc6e1ba139801743bc5f7ed0e457a2cff10d6ce1d71e3" Namespace="calico-system" Pod="calico-kube-controllers-78bc8bfdb7-xfh2s" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--78bc8bfdb7--xfh2s-eth0" Feb 13 19:42:57.685013 containerd[1467]: 2025-02-13 19:42:57.623 [INFO][5066] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="368e99e8dfb814055edcc6e1ba139801743bc5f7ed0e457a2cff10d6ce1d71e3" Namespace="calico-system" Pod="calico-kube-controllers-78bc8bfdb7-xfh2s" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--78bc8bfdb7--xfh2s-eth0" Feb 13 19:42:57.685013 containerd[1467]: 2025-02-13 19:42:57.624 [INFO][5066] cni-plugin/k8s.go 414: Added Mac, interface name, and active container 
ID to endpoint ContainerID="368e99e8dfb814055edcc6e1ba139801743bc5f7ed0e457a2cff10d6ce1d71e3" Namespace="calico-system" Pod="calico-kube-controllers-78bc8bfdb7-xfh2s" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--78bc8bfdb7--xfh2s-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--78bc8bfdb7--xfh2s-eth0", GenerateName:"calico-kube-controllers-78bc8bfdb7-", Namespace:"calico-system", SelfLink:"", UID:"98d92e3a-28ae-4220-b729-b97a4af8635e", ResourceVersion:"810", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 19, 42, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"78bc8bfdb7", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"368e99e8dfb814055edcc6e1ba139801743bc5f7ed0e457a2cff10d6ce1d71e3", Pod:"calico-kube-controllers-78bc8bfdb7-xfh2s", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"caliaa4419c3920", MAC:"8a:6d:11:53:13:5e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 19:42:57.685013 containerd[1467]: 2025-02-13 19:42:57.681 [INFO][5066] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="368e99e8dfb814055edcc6e1ba139801743bc5f7ed0e457a2cff10d6ce1d71e3" Namespace="calico-system" Pod="calico-kube-controllers-78bc8bfdb7-xfh2s" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--78bc8bfdb7--xfh2s-eth0" Feb 13 19:42:57.702570 containerd[1467]: 2025-02-13 19:42:57.062 [INFO][5022] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Feb 13 19:42:57.702570 containerd[1467]: 2025-02-13 19:42:57.089 [INFO][5022] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--qxpzd-eth0 csi-node-driver- calico-system b54cda8b-5691-4984-90d2-94b24a2518d5 657 0 2025-02-13 19:42:33 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:65bf684474 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-qxpzd eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] califd2103b5eda [] []}} ContainerID="16e870efbf5bcea8be2c39af9c6d06ea91b8ef3fe1b0c4956af44e1851f1de85" Namespace="calico-system" Pod="csi-node-driver-qxpzd" WorkloadEndpoint="localhost-k8s-csi--node--driver--qxpzd-" Feb 13 19:42:57.702570 containerd[1467]: 2025-02-13 19:42:57.089 [INFO][5022] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="16e870efbf5bcea8be2c39af9c6d06ea91b8ef3fe1b0c4956af44e1851f1de85" Namespace="calico-system" 
Pod="csi-node-driver-qxpzd" WorkloadEndpoint="localhost-k8s-csi--node--driver--qxpzd-eth0" Feb 13 19:42:57.702570 containerd[1467]: 2025-02-13 19:42:57.171 [INFO][5077] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="16e870efbf5bcea8be2c39af9c6d06ea91b8ef3fe1b0c4956af44e1851f1de85" HandleID="k8s-pod-network.16e870efbf5bcea8be2c39af9c6d06ea91b8ef3fe1b0c4956af44e1851f1de85" Workload="localhost-k8s-csi--node--driver--qxpzd-eth0" Feb 13 19:42:57.702570 containerd[1467]: 2025-02-13 19:42:57.186 [INFO][5077] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="16e870efbf5bcea8be2c39af9c6d06ea91b8ef3fe1b0c4956af44e1851f1de85" HandleID="k8s-pod-network.16e870efbf5bcea8be2c39af9c6d06ea91b8ef3fe1b0c4956af44e1851f1de85" Workload="localhost-k8s-csi--node--driver--qxpzd-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00011c080), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-qxpzd", "timestamp":"2025-02-13 19:42:57.171269442 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 19:42:57.702570 containerd[1467]: 2025-02-13 19:42:57.186 [INFO][5077] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 19:42:57.702570 containerd[1467]: 2025-02-13 19:42:57.434 [INFO][5077] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 19:42:57.702570 containerd[1467]: 2025-02-13 19:42:57.434 [INFO][5077] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Feb 13 19:42:57.702570 containerd[1467]: 2025-02-13 19:42:57.437 [INFO][5077] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.16e870efbf5bcea8be2c39af9c6d06ea91b8ef3fe1b0c4956af44e1851f1de85" host="localhost" Feb 13 19:42:57.702570 containerd[1467]: 2025-02-13 19:42:57.442 [INFO][5077] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Feb 13 19:42:57.702570 containerd[1467]: 2025-02-13 19:42:57.447 [INFO][5077] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Feb 13 19:42:57.702570 containerd[1467]: 2025-02-13 19:42:57.453 [INFO][5077] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Feb 13 19:42:57.702570 containerd[1467]: 2025-02-13 19:42:57.458 [INFO][5077] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Feb 13 19:42:57.702570 containerd[1467]: 2025-02-13 19:42:57.458 [INFO][5077] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.16e870efbf5bcea8be2c39af9c6d06ea91b8ef3fe1b0c4956af44e1851f1de85" host="localhost" Feb 13 19:42:57.702570 containerd[1467]: 2025-02-13 19:42:57.459 [INFO][5077] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.16e870efbf5bcea8be2c39af9c6d06ea91b8ef3fe1b0c4956af44e1851f1de85 Feb 13 19:42:57.702570 containerd[1467]: 2025-02-13 19:42:57.463 [INFO][5077] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.16e870efbf5bcea8be2c39af9c6d06ea91b8ef3fe1b0c4956af44e1851f1de85" host="localhost" Feb 13 19:42:57.702570 containerd[1467]: 2025-02-13 19:42:57.487 [INFO][5077] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 
handle="k8s-pod-network.16e870efbf5bcea8be2c39af9c6d06ea91b8ef3fe1b0c4956af44e1851f1de85" host="localhost" Feb 13 19:42:57.702570 containerd[1467]: 2025-02-13 19:42:57.488 [INFO][5077] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.16e870efbf5bcea8be2c39af9c6d06ea91b8ef3fe1b0c4956af44e1851f1de85" host="localhost" Feb 13 19:42:57.702570 containerd[1467]: 2025-02-13 19:42:57.488 [INFO][5077] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 19:42:57.702570 containerd[1467]: 2025-02-13 19:42:57.488 [INFO][5077] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="16e870efbf5bcea8be2c39af9c6d06ea91b8ef3fe1b0c4956af44e1851f1de85" HandleID="k8s-pod-network.16e870efbf5bcea8be2c39af9c6d06ea91b8ef3fe1b0c4956af44e1851f1de85" Workload="localhost-k8s-csi--node--driver--qxpzd-eth0" Feb 13 19:42:57.705541 containerd[1467]: 2025-02-13 19:42:57.490 [INFO][5022] cni-plugin/k8s.go 386: Populated endpoint ContainerID="16e870efbf5bcea8be2c39af9c6d06ea91b8ef3fe1b0c4956af44e1851f1de85" Namespace="calico-system" Pod="csi-node-driver-qxpzd" WorkloadEndpoint="localhost-k8s-csi--node--driver--qxpzd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--qxpzd-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"b54cda8b-5691-4984-90d2-94b24a2518d5", ResourceVersion:"657", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 19, 42, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-qxpzd", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"califd2103b5eda", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 19:42:57.705541 containerd[1467]: 2025-02-13 19:42:57.490 [INFO][5022] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.131/32] ContainerID="16e870efbf5bcea8be2c39af9c6d06ea91b8ef3fe1b0c4956af44e1851f1de85" Namespace="calico-system" Pod="csi-node-driver-qxpzd" WorkloadEndpoint="localhost-k8s-csi--node--driver--qxpzd-eth0" Feb 13 19:42:57.705541 containerd[1467]: 2025-02-13 19:42:57.490 [INFO][5022] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to califd2103b5eda ContainerID="16e870efbf5bcea8be2c39af9c6d06ea91b8ef3fe1b0c4956af44e1851f1de85" Namespace="calico-system" Pod="csi-node-driver-qxpzd" WorkloadEndpoint="localhost-k8s-csi--node--driver--qxpzd-eth0" Feb 13 19:42:57.705541 containerd[1467]: 2025-02-13 19:42:57.493 [INFO][5022] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="16e870efbf5bcea8be2c39af9c6d06ea91b8ef3fe1b0c4956af44e1851f1de85" Namespace="calico-system" Pod="csi-node-driver-qxpzd" WorkloadEndpoint="localhost-k8s-csi--node--driver--qxpzd-eth0" Feb 13 19:42:57.705541 containerd[1467]: 2025-02-13 19:42:57.493 [INFO][5022] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="16e870efbf5bcea8be2c39af9c6d06ea91b8ef3fe1b0c4956af44e1851f1de85" Namespace="calico-system" Pod="csi-node-driver-qxpzd" WorkloadEndpoint="localhost-k8s-csi--node--driver--qxpzd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--qxpzd-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"b54cda8b-5691-4984-90d2-94b24a2518d5", ResourceVersion:"657", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 19, 42, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"16e870efbf5bcea8be2c39af9c6d06ea91b8ef3fe1b0c4956af44e1851f1de85", Pod:"csi-node-driver-qxpzd", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"califd2103b5eda", MAC:"2e:71:04:a8:71:ce", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 19:42:57.705541 containerd[1467]: 2025-02-13 19:42:57.693 [INFO][5022] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="16e870efbf5bcea8be2c39af9c6d06ea91b8ef3fe1b0c4956af44e1851f1de85" Namespace="calico-system" Pod="csi-node-driver-qxpzd" WorkloadEndpoint="localhost-k8s-csi--node--driver--qxpzd-eth0" Feb 13 19:42:57.722710 containerd[1467]: time="2025-02-13T19:42:57.722600669Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:42:57.722853 containerd[1467]: time="2025-02-13T19:42:57.722681030Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:42:57.722853 containerd[1467]: time="2025-02-13T19:42:57.722693984Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:42:57.722929 containerd[1467]: time="2025-02-13T19:42:57.722902235Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:42:57.731544 containerd[1467]: time="2025-02-13T19:42:57.731319911Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:42:57.731544 containerd[1467]: time="2025-02-13T19:42:57.731372399Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:42:57.731544 containerd[1467]: time="2025-02-13T19:42:57.731385474Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:42:57.732454 containerd[1467]: time="2025-02-13T19:42:57.732358750Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:42:57.743454 systemd-networkd[1396]: calia4a1267b834: Link UP Feb 13 19:42:57.743759 systemd-networkd[1396]: calia4a1267b834: Gained carrier Feb 13 19:42:57.750161 containerd[1467]: time="2025-02-13T19:42:57.749878635Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:42:57.750161 containerd[1467]: time="2025-02-13T19:42:57.750022214Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:42:57.750161 containerd[1467]: time="2025-02-13T19:42:57.750037603Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:42:57.751187 containerd[1467]: time="2025-02-13T19:42:57.751071503Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:42:57.756755 systemd[1]: Started cri-containerd-3ac1526f109e459d7e2629e28b39e5841aa9eb4e44128a66bb6a03dc579a8641.scope - libcontainer container 3ac1526f109e459d7e2629e28b39e5841aa9eb4e44128a66bb6a03dc579a8641. Feb 13 19:42:57.760518 containerd[1467]: time="2025-02-13T19:42:57.760393465Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:42:57.760860 containerd[1467]: time="2025-02-13T19:42:57.760715079Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:42:57.760860 containerd[1467]: time="2025-02-13T19:42:57.760755625Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:42:57.765016 containerd[1467]: time="2025-02-13T19:42:57.764721897Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:42:57.765369 systemd[1]: Started cri-containerd-fcdbfea6cc4f847729d841aaee67ef94c10b32baf3fad4ba71ff28789611d8d2.scope - libcontainer container fcdbfea6cc4f847729d841aaee67ef94c10b32baf3fad4ba71ff28789611d8d2. 
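[editor's note] The records above show the CNI plugin choosing deterministic host-side veth names (califd2103b5eda, calia4a1267b834) just before systemd-networkd reports those links up. Below is a minimal Go sketch of one way to derive such a name: hash a workload endpoint identifier and truncate to fit the 15-character Linux interface-name limit. This is an illustration only, not necessarily Calico's exact derivation, and the endpoint ID used is hypothetical.

```go
package main

import (
	"crypto/sha1"
	"encoding/hex"
	"fmt"
)

// hostVethName derives a deterministic, IFNAMSIZ-safe interface name from a
// workload endpoint identifier: a fixed "cali" prefix plus the first 11 hex
// characters of a SHA-1 digest (4 + 11 = 15 characters, the usable maximum).
// Illustrative only; the real CNI plugin may hash different inputs.
func hostVethName(endpointID string) string {
	sum := sha1.Sum([]byte(endpointID))
	return "cali" + hex.EncodeToString(sum[:])[:11]
}

func main() {
	// Hypothetical endpoint identifier (namespace/pod/interface).
	fmt.Println(hostVethName("calico-system/csi-node-driver-qxpzd/eth0"))
}
```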
Feb 13 19:42:57.766024 containerd[1467]: 2025-02-13 19:42:57.250 [INFO][5124] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Feb 13 19:42:57.766024 containerd[1467]: 2025-02-13 19:42:57.321 [INFO][5124] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7db6d8ff4d--44gch-eth0 coredns-7db6d8ff4d- kube-system abdf7ab8-742e-4339-a8d7-ab8bf1f9e1ec 804 0 2025-02-13 19:42:27 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7db6d8ff4d-44gch eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calia4a1267b834 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="7562beb3098757689a5bcfea8f0e416fcf83a0e1b0e07a3ce5c459ddbd833c77" Namespace="kube-system" Pod="coredns-7db6d8ff4d-44gch" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--44gch-" Feb 13 19:42:57.766024 containerd[1467]: 2025-02-13 19:42:57.321 [INFO][5124] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="7562beb3098757689a5bcfea8f0e416fcf83a0e1b0e07a3ce5c459ddbd833c77" Namespace="kube-system" Pod="coredns-7db6d8ff4d-44gch" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--44gch-eth0" Feb 13 19:42:57.766024 containerd[1467]: 2025-02-13 19:42:57.372 [INFO][5143] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7562beb3098757689a5bcfea8f0e416fcf83a0e1b0e07a3ce5c459ddbd833c77" HandleID="k8s-pod-network.7562beb3098757689a5bcfea8f0e416fcf83a0e1b0e07a3ce5c459ddbd833c77" Workload="localhost-k8s-coredns--7db6d8ff4d--44gch-eth0" Feb 13 19:42:57.766024 containerd[1467]: 2025-02-13 19:42:57.434 [INFO][5143] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="7562beb3098757689a5bcfea8f0e416fcf83a0e1b0e07a3ce5c459ddbd833c77" HandleID="k8s-pod-network.7562beb3098757689a5bcfea8f0e416fcf83a0e1b0e07a3ce5c459ddbd833c77" Workload="localhost-k8s-coredns--7db6d8ff4d--44gch-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0001335e0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7db6d8ff4d-44gch", "timestamp":"2025-02-13 19:42:57.372459947 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 19:42:57.766024 containerd[1467]: 2025-02-13 19:42:57.435 [INFO][5143] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 19:42:57.766024 containerd[1467]: 2025-02-13 19:42:57.618 [INFO][5143] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Feb 13 19:42:57.766024 containerd[1467]: 2025-02-13 19:42:57.618 [INFO][5143] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Feb 13 19:42:57.766024 containerd[1467]: 2025-02-13 19:42:57.620 [INFO][5143] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.7562beb3098757689a5bcfea8f0e416fcf83a0e1b0e07a3ce5c459ddbd833c77" host="localhost" Feb 13 19:42:57.766024 containerd[1467]: 2025-02-13 19:42:57.625 [INFO][5143] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Feb 13 19:42:57.766024 containerd[1467]: 2025-02-13 19:42:57.629 [INFO][5143] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Feb 13 19:42:57.766024 containerd[1467]: 2025-02-13 19:42:57.682 [INFO][5143] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Feb 13 19:42:57.766024 containerd[1467]: 2025-02-13 19:42:57.692 [INFO][5143] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Feb 13 19:42:57.766024 containerd[1467]: 2025-02-13 19:42:57.692 [INFO][5143] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.7562beb3098757689a5bcfea8f0e416fcf83a0e1b0e07a3ce5c459ddbd833c77" host="localhost" Feb 13 19:42:57.766024 containerd[1467]: 2025-02-13 19:42:57.697 [INFO][5143] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.7562beb3098757689a5bcfea8f0e416fcf83a0e1b0e07a3ce5c459ddbd833c77 Feb 13 19:42:57.766024 containerd[1467]: 2025-02-13 19:42:57.704 [INFO][5143] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.7562beb3098757689a5bcfea8f0e416fcf83a0e1b0e07a3ce5c459ddbd833c77" host="localhost" Feb 13 19:42:57.766024 containerd[1467]: 2025-02-13 19:42:57.715 [INFO][5143] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.7562beb3098757689a5bcfea8f0e416fcf83a0e1b0e07a3ce5c459ddbd833c77" host="localhost" Feb 13 19:42:57.766024 containerd[1467]: 2025-02-13 19:42:57.715 [INFO][5143] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.7562beb3098757689a5bcfea8f0e416fcf83a0e1b0e07a3ce5c459ddbd833c77" host="localhost" Feb 13 19:42:57.766024 containerd[1467]: 2025-02-13 19:42:57.716 [INFO][5143] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Feb 13 19:42:57.766024 containerd[1467]: 2025-02-13 19:42:57.716 [INFO][5143] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="7562beb3098757689a5bcfea8f0e416fcf83a0e1b0e07a3ce5c459ddbd833c77" HandleID="k8s-pod-network.7562beb3098757689a5bcfea8f0e416fcf83a0e1b0e07a3ce5c459ddbd833c77" Workload="localhost-k8s-coredns--7db6d8ff4d--44gch-eth0" Feb 13 19:42:57.767041 containerd[1467]: 2025-02-13 19:42:57.735 [INFO][5124] cni-plugin/k8s.go 386: Populated endpoint ContainerID="7562beb3098757689a5bcfea8f0e416fcf83a0e1b0e07a3ce5c459ddbd833c77" Namespace="kube-system" Pod="coredns-7db6d8ff4d-44gch" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--44gch-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--44gch-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"abdf7ab8-742e-4339-a8d7-ab8bf1f9e1ec", ResourceVersion:"804", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 19, 42, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7db6d8ff4d-44gch", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia4a1267b834", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 19:42:57.767041 containerd[1467]: 2025-02-13 19:42:57.736 [INFO][5124] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.133/32] ContainerID="7562beb3098757689a5bcfea8f0e416fcf83a0e1b0e07a3ce5c459ddbd833c77" Namespace="kube-system" Pod="coredns-7db6d8ff4d-44gch" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--44gch-eth0" Feb 13 19:42:57.767041 containerd[1467]: 2025-02-13 19:42:57.736 [INFO][5124] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia4a1267b834 ContainerID="7562beb3098757689a5bcfea8f0e416fcf83a0e1b0e07a3ce5c459ddbd833c77" Namespace="kube-system" Pod="coredns-7db6d8ff4d-44gch" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--44gch-eth0" Feb 13 19:42:57.767041 containerd[1467]: 2025-02-13 19:42:57.744 [INFO][5124] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7562beb3098757689a5bcfea8f0e416fcf83a0e1b0e07a3ce5c459ddbd833c77" Namespace="kube-system" Pod="coredns-7db6d8ff4d-44gch" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--44gch-eth0" Feb 13 19:42:57.767041 containerd[1467]: 2025-02-13 19:42:57.746 
[INFO][5124] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="7562beb3098757689a5bcfea8f0e416fcf83a0e1b0e07a3ce5c459ddbd833c77" Namespace="kube-system" Pod="coredns-7db6d8ff4d-44gch" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--44gch-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--44gch-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"abdf7ab8-742e-4339-a8d7-ab8bf1f9e1ec", ResourceVersion:"804", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 19, 42, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"7562beb3098757689a5bcfea8f0e416fcf83a0e1b0e07a3ce5c459ddbd833c77", Pod:"coredns-7db6d8ff4d-44gch", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia4a1267b834", MAC:"3e:f0:86:46:18:65", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 19:42:57.767041 containerd[1467]: 2025-02-13 19:42:57.759 [INFO][5124] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="7562beb3098757689a5bcfea8f0e416fcf83a0e1b0e07a3ce5c459ddbd833c77" Namespace="kube-system" Pod="coredns-7db6d8ff4d-44gch" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--44gch-eth0" Feb 13 19:42:57.785685 systemd[1]: Started cri-containerd-16e870efbf5bcea8be2c39af9c6d06ea91b8ef3fe1b0c4956af44e1851f1de85.scope - libcontainer container 16e870efbf5bcea8be2c39af9c6d06ea91b8ef3fe1b0c4956af44e1851f1de85. Feb 13 19:42:57.790940 systemd[1]: Started cri-containerd-368e99e8dfb814055edcc6e1ba139801743bc5f7ed0e457a2cff10d6ce1d71e3.scope - libcontainer container 368e99e8dfb814055edcc6e1ba139801743bc5f7ed0e457a2cff10d6ce1d71e3. 
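[editor's note] In the IPAM sequence above, the plugin confirms affinity for the 192.168.88.128/26 block and then claims 192.168.88.133 from it for the coredns pod. The following short, self-contained Go sketch reproduces the containment and block-size arithmetic implied by those records using only the standard library; it illustrates the addressing math, it is not Calico code.

```go
package main

import (
	"fmt"
	"net/netip"
)

func main() {
	// Block with confirmed affinity for this host, and the address the IPAM
	// plugin reported as claimed (values taken from the log records above).
	block := netip.MustParsePrefix("192.168.88.128/26")
	claimed := netip.MustParseAddr("192.168.88.133")

	fmt.Printf("block %s contains %s: %v\n", block, claimed, block.Contains(claimed))

	// A /26 block spans 64 addresses, which bounds how many pod IPs this host
	// can hand out from the block before another one must be claimed.
	fmt.Println("addresses in block:", 1<<(32-block.Bits()))
}
```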
Feb 13 19:42:57.796172 systemd-resolved[1330]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Feb 13 19:42:57.799880 systemd-resolved[1330]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Feb 13 19:42:57.801591 systemd-networkd[1396]: cali771a7580fa6: Link UP Feb 13 19:42:57.802202 systemd-networkd[1396]: cali771a7580fa6: Gained carrier Feb 13 19:42:57.819489 containerd[1467]: 2025-02-13 19:42:57.204 [INFO][5103] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Feb 13 19:42:57.819489 containerd[1467]: 2025-02-13 19:42:57.321 [INFO][5103] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--7f9ffdc98--9gqpq-eth0 calico-apiserver-7f9ffdc98- calico-apiserver 61a215c3-d6e4-40f4-9806-914857a2ab1f 807 0 2025-02-13 19:42:33 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7f9ffdc98 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-7f9ffdc98-9gqpq eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali771a7580fa6 [] []}} ContainerID="920187caf73464c72f507ca2285e2d8c7ca3f6d5d3a7c41a346b094f7882c79c" Namespace="calico-apiserver" Pod="calico-apiserver-7f9ffdc98-9gqpq" WorkloadEndpoint="localhost-k8s-calico--apiserver--7f9ffdc98--9gqpq-" Feb 13 19:42:57.819489 containerd[1467]: 2025-02-13 19:42:57.321 [INFO][5103] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="920187caf73464c72f507ca2285e2d8c7ca3f6d5d3a7c41a346b094f7882c79c" Namespace="calico-apiserver" Pod="calico-apiserver-7f9ffdc98-9gqpq" WorkloadEndpoint="localhost-k8s-calico--apiserver--7f9ffdc98--9gqpq-eth0" Feb 13 19:42:57.819489 containerd[1467]: 2025-02-13 19:42:57.362 [INFO][5139] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="920187caf73464c72f507ca2285e2d8c7ca3f6d5d3a7c41a346b094f7882c79c" HandleID="k8s-pod-network.920187caf73464c72f507ca2285e2d8c7ca3f6d5d3a7c41a346b094f7882c79c" Workload="localhost-k8s-calico--apiserver--7f9ffdc98--9gqpq-eth0" Feb 13 19:42:57.819489 containerd[1467]: 2025-02-13 19:42:57.435 [INFO][5139] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="920187caf73464c72f507ca2285e2d8c7ca3f6d5d3a7c41a346b094f7882c79c" HandleID="k8s-pod-network.920187caf73464c72f507ca2285e2d8c7ca3f6d5d3a7c41a346b094f7882c79c" Workload="localhost-k8s-calico--apiserver--7f9ffdc98--9gqpq-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000334dd0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-7f9ffdc98-9gqpq", "timestamp":"2025-02-13 19:42:57.362112981 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 19:42:57.819489 containerd[1467]: 2025-02-13 19:42:57.435 [INFO][5139] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 19:42:57.819489 containerd[1467]: 2025-02-13 19:42:57.716 [INFO][5139] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Feb 13 19:42:57.819489 containerd[1467]: 2025-02-13 19:42:57.716 [INFO][5139] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Feb 13 19:42:57.819489 containerd[1467]: 2025-02-13 19:42:57.733 [INFO][5139] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.920187caf73464c72f507ca2285e2d8c7ca3f6d5d3a7c41a346b094f7882c79c" host="localhost" Feb 13 19:42:57.819489 containerd[1467]: 2025-02-13 19:42:57.742 [INFO][5139] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Feb 13 19:42:57.819489 containerd[1467]: 2025-02-13 19:42:57.757 [INFO][5139] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Feb 13 19:42:57.819489 containerd[1467]: 2025-02-13 19:42:57.759 [INFO][5139] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Feb 13 19:42:57.819489 containerd[1467]: 2025-02-13 19:42:57.768 [INFO][5139] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Feb 13 19:42:57.819489 containerd[1467]: 2025-02-13 19:42:57.768 [INFO][5139] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.920187caf73464c72f507ca2285e2d8c7ca3f6d5d3a7c41a346b094f7882c79c" host="localhost" Feb 13 19:42:57.819489 containerd[1467]: 2025-02-13 19:42:57.774 [INFO][5139] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.920187caf73464c72f507ca2285e2d8c7ca3f6d5d3a7c41a346b094f7882c79c Feb 13 19:42:57.819489 containerd[1467]: 2025-02-13 19:42:57.781 [INFO][5139] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.920187caf73464c72f507ca2285e2d8c7ca3f6d5d3a7c41a346b094f7882c79c" host="localhost" Feb 13 19:42:57.819489 containerd[1467]: 2025-02-13 19:42:57.788 [INFO][5139] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.920187caf73464c72f507ca2285e2d8c7ca3f6d5d3a7c41a346b094f7882c79c" host="localhost" Feb 13 19:42:57.819489 containerd[1467]: 2025-02-13 19:42:57.788 [INFO][5139] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.920187caf73464c72f507ca2285e2d8c7ca3f6d5d3a7c41a346b094f7882c79c" host="localhost" Feb 13 19:42:57.819489 containerd[1467]: 2025-02-13 19:42:57.788 [INFO][5139] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Feb 13 19:42:57.819489 containerd[1467]: 2025-02-13 19:42:57.788 [INFO][5139] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="920187caf73464c72f507ca2285e2d8c7ca3f6d5d3a7c41a346b094f7882c79c" HandleID="k8s-pod-network.920187caf73464c72f507ca2285e2d8c7ca3f6d5d3a7c41a346b094f7882c79c" Workload="localhost-k8s-calico--apiserver--7f9ffdc98--9gqpq-eth0" Feb 13 19:42:57.820029 containerd[1467]: 2025-02-13 19:42:57.795 [INFO][5103] cni-plugin/k8s.go 386: Populated endpoint ContainerID="920187caf73464c72f507ca2285e2d8c7ca3f6d5d3a7c41a346b094f7882c79c" Namespace="calico-apiserver" Pod="calico-apiserver-7f9ffdc98-9gqpq" WorkloadEndpoint="localhost-k8s-calico--apiserver--7f9ffdc98--9gqpq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7f9ffdc98--9gqpq-eth0", GenerateName:"calico-apiserver-7f9ffdc98-", Namespace:"calico-apiserver", SelfLink:"", UID:"61a215c3-d6e4-40f4-9806-914857a2ab1f", ResourceVersion:"807", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 19, 42, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7f9ffdc98", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-7f9ffdc98-9gqpq", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali771a7580fa6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 19:42:57.820029 containerd[1467]: 2025-02-13 19:42:57.796 [INFO][5103] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.134/32] ContainerID="920187caf73464c72f507ca2285e2d8c7ca3f6d5d3a7c41a346b094f7882c79c" Namespace="calico-apiserver" Pod="calico-apiserver-7f9ffdc98-9gqpq" WorkloadEndpoint="localhost-k8s-calico--apiserver--7f9ffdc98--9gqpq-eth0" Feb 13 19:42:57.820029 containerd[1467]: 2025-02-13 19:42:57.796 [INFO][5103] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali771a7580fa6 ContainerID="920187caf73464c72f507ca2285e2d8c7ca3f6d5d3a7c41a346b094f7882c79c" Namespace="calico-apiserver" Pod="calico-apiserver-7f9ffdc98-9gqpq" WorkloadEndpoint="localhost-k8s-calico--apiserver--7f9ffdc98--9gqpq-eth0" Feb 13 19:42:57.820029 containerd[1467]: 2025-02-13 19:42:57.802 [INFO][5103] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="920187caf73464c72f507ca2285e2d8c7ca3f6d5d3a7c41a346b094f7882c79c" Namespace="calico-apiserver" Pod="calico-apiserver-7f9ffdc98-9gqpq" WorkloadEndpoint="localhost-k8s-calico--apiserver--7f9ffdc98--9gqpq-eth0" Feb 13 19:42:57.820029 containerd[1467]: 2025-02-13 19:42:57.803 [INFO][5103] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="920187caf73464c72f507ca2285e2d8c7ca3f6d5d3a7c41a346b094f7882c79c" 
Namespace="calico-apiserver" Pod="calico-apiserver-7f9ffdc98-9gqpq" WorkloadEndpoint="localhost-k8s-calico--apiserver--7f9ffdc98--9gqpq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7f9ffdc98--9gqpq-eth0", GenerateName:"calico-apiserver-7f9ffdc98-", Namespace:"calico-apiserver", SelfLink:"", UID:"61a215c3-d6e4-40f4-9806-914857a2ab1f", ResourceVersion:"807", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 19, 42, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7f9ffdc98", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"920187caf73464c72f507ca2285e2d8c7ca3f6d5d3a7c41a346b094f7882c79c", Pod:"calico-apiserver-7f9ffdc98-9gqpq", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali771a7580fa6", MAC:"9a:4f:ec:0c:54:43", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 19:42:57.820029 containerd[1467]: 2025-02-13 19:42:57.813 [INFO][5103] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="920187caf73464c72f507ca2285e2d8c7ca3f6d5d3a7c41a346b094f7882c79c" Namespace="calico-apiserver" Pod="calico-apiserver-7f9ffdc98-9gqpq" WorkloadEndpoint="localhost-k8s-calico--apiserver--7f9ffdc98--9gqpq-eth0" Feb 13 19:42:57.823893 containerd[1467]: time="2025-02-13T19:42:57.823719474Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:42:57.824014 containerd[1467]: time="2025-02-13T19:42:57.823895464Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:42:57.824014 containerd[1467]: time="2025-02-13T19:42:57.823912917Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:42:57.825022 systemd-resolved[1330]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Feb 13 19:42:57.825365 containerd[1467]: time="2025-02-13T19:42:57.825282757Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:42:57.827342 systemd-resolved[1330]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Feb 13 19:42:57.843932 containerd[1467]: time="2025-02-13T19:42:57.843878802Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-rkdrq,Uid:9029bfbd-404a-4b40-be12-a20e64469d44,Namespace:kube-system,Attempt:6,} returns sandbox id \"fcdbfea6cc4f847729d841aaee67ef94c10b32baf3fad4ba71ff28789611d8d2\"" Feb 13 19:42:57.846967 kubelet[2656]: E0213 19:42:57.845865 2656 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:42:57.853528 containerd[1467]: time="2025-02-13T19:42:57.853482914Z" level=info msg="CreateContainer within sandbox \"fcdbfea6cc4f847729d841aaee67ef94c10b32baf3fad4ba71ff28789611d8d2\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 13 19:42:57.860623 containerd[1467]: time="2025-02-13T19:42:57.860590620Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-qxpzd,Uid:b54cda8b-5691-4984-90d2-94b24a2518d5,Namespace:calico-system,Attempt:6,} returns sandbox id \"16e870efbf5bcea8be2c39af9c6d06ea91b8ef3fe1b0c4956af44e1851f1de85\"" Feb 13 19:42:57.865935 systemd[1]: Started cri-containerd-7562beb3098757689a5bcfea8f0e416fcf83a0e1b0e07a3ce5c459ddbd833c77.scope - libcontainer container 7562beb3098757689a5bcfea8f0e416fcf83a0e1b0e07a3ce5c459ddbd833c77. Feb 13 19:42:57.869972 containerd[1467]: time="2025-02-13T19:42:57.869688031Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\"" Feb 13 19:42:57.870249 containerd[1467]: time="2025-02-13T19:42:57.870203719Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7f9ffdc98-tn7zc,Uid:72c9c3b8-d2cf-4ad0-a722-60cf8109d8ff,Namespace:calico-apiserver,Attempt:6,} returns sandbox id \"3ac1526f109e459d7e2629e28b39e5841aa9eb4e44128a66bb6a03dc579a8641\"" Feb 13 19:42:57.887693 containerd[1467]: time="2025-02-13T19:42:57.887562011Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-78bc8bfdb7-xfh2s,Uid:98d92e3a-28ae-4220-b729-b97a4af8635e,Namespace:calico-system,Attempt:7,} returns sandbox id \"368e99e8dfb814055edcc6e1ba139801743bc5f7ed0e457a2cff10d6ce1d71e3\"" Feb 13 19:42:57.891717 systemd-resolved[1330]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Feb 13 19:42:57.904275 containerd[1467]: time="2025-02-13T19:42:57.904133727Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:42:57.904394 containerd[1467]: time="2025-02-13T19:42:57.904288227Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:42:57.904394 containerd[1467]: time="2025-02-13T19:42:57.904324805Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:42:57.904470 containerd[1467]: time="2025-02-13T19:42:57.904445973Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:42:57.940970 containerd[1467]: time="2025-02-13T19:42:57.940918332Z" level=info msg="CreateContainer within sandbox \"fcdbfea6cc4f847729d841aaee67ef94c10b32baf3fad4ba71ff28789611d8d2\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"a74a2875224369f83dfa048ce23c5ee4873880195b9359134f5ae6005985272b\"" Feb 13 19:42:57.945535 containerd[1467]: time="2025-02-13T19:42:57.942460365Z" level=info msg="StartContainer for \"a74a2875224369f83dfa048ce23c5ee4873880195b9359134f5ae6005985272b\"" Feb 13 19:42:57.947057 systemd[1]: run-netns-cni\x2d973ad0ba\x2dd9b1\x2d0adb\x2d0fc9\x2dff14016d2d78.mount: Deactivated successfully. Feb 13 19:42:57.947204 systemd[1]: run-netns-cni\x2d9aabb41c\x2d3ab7\x2dd7d5\x2dc26e\x2d262f3ae4caeb.mount: Deactivated successfully. Feb 13 19:42:57.954464 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount293703342.mount: Deactivated successfully. Feb 13 19:42:57.971659 systemd[1]: Started cri-containerd-920187caf73464c72f507ca2285e2d8c7ca3f6d5d3a7c41a346b094f7882c79c.scope - libcontainer container 920187caf73464c72f507ca2285e2d8c7ca3f6d5d3a7c41a346b094f7882c79c. Feb 13 19:42:57.973866 containerd[1467]: time="2025-02-13T19:42:57.973819800Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-44gch,Uid:abdf7ab8-742e-4339-a8d7-ab8bf1f9e1ec,Namespace:kube-system,Attempt:6,} returns sandbox id \"7562beb3098757689a5bcfea8f0e416fcf83a0e1b0e07a3ce5c459ddbd833c77\"" Feb 13 19:42:57.976297 kubelet[2656]: E0213 19:42:57.976277 2656 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:42:57.983045 containerd[1467]: time="2025-02-13T19:42:57.980017529Z" level=info msg="CreateContainer within sandbox \"7562beb3098757689a5bcfea8f0e416fcf83a0e1b0e07a3ce5c459ddbd833c77\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 13 19:42:58.026078 systemd[1]: run-containerd-runc-k8s.io-a74a2875224369f83dfa048ce23c5ee4873880195b9359134f5ae6005985272b-runc.zm5bZY.mount: Deactivated successfully. Feb 13 19:42:58.035788 systemd[1]: Started cri-containerd-a74a2875224369f83dfa048ce23c5ee4873880195b9359134f5ae6005985272b.scope - libcontainer container a74a2875224369f83dfa048ce23c5ee4873880195b9359134f5ae6005985272b. Feb 13 19:42:58.038996 kubelet[2656]: E0213 19:42:58.038419 2656 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:42:58.047293 systemd-resolved[1330]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Feb 13 19:42:58.050694 containerd[1467]: time="2025-02-13T19:42:58.050484734Z" level=info msg="CreateContainer within sandbox \"7562beb3098757689a5bcfea8f0e416fcf83a0e1b0e07a3ce5c459ddbd833c77\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"e788b98af2d35762b660479a876a82c4a189799ff26e5ebb3f07018f8d253fc8\"" Feb 13 19:42:58.052612 containerd[1467]: time="2025-02-13T19:42:58.051629853Z" level=info msg="StartContainer for \"e788b98af2d35762b660479a876a82c4a189799ff26e5ebb3f07018f8d253fc8\"" Feb 13 19:42:58.119136 systemd[1]: Started cri-containerd-e788b98af2d35762b660479a876a82c4a189799ff26e5ebb3f07018f8d253fc8.scope - libcontainer container e788b98af2d35762b660479a876a82c4a189799ff26e5ebb3f07018f8d253fc8. 
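[editor's note] The repeated kubelet dns.go warnings above stem from the classic resolver limit: at most three nameserver entries are honoured, so kubelet trims the list and logs what it kept (here 1.1.1.1, 1.0.0.1 and 8.8.8.8). A minimal sketch of that truncation follows, with a hypothetical input list; the constant of three matches the warning, but the function is illustrative and not kubelet's implementation.

```go
package main

import (
	"fmt"
	"strings"
)

// maxNameservers mirrors the resolv.conf limit (three entries) that the
// kubelet warning above refers to.
const maxNameservers = 3

// applyNameserverLimit keeps only the first maxNameservers entries and reports
// whether anything was dropped.
func applyNameserverLimit(servers []string) (applied []string, truncated bool) {
	if len(servers) <= maxNameservers {
		return servers, false
	}
	return servers[:maxNameservers], true
}

func main() {
	// Hypothetical node resolv.conf with one nameserver too many.
	servers := []string{"1.1.1.1", "1.0.0.1", "8.8.8.8", "9.9.9.9"}
	if applied, truncated := applyNameserverLimit(servers); truncated {
		fmt.Printf("Nameserver limits exceeded, applied line: %s\n", strings.Join(applied, " "))
	}
}
```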
Feb 13 19:42:58.123082 containerd[1467]: time="2025-02-13T19:42:58.123007816Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7f9ffdc98-9gqpq,Uid:61a215c3-d6e4-40f4-9806-914857a2ab1f,Namespace:calico-apiserver,Attempt:6,} returns sandbox id \"920187caf73464c72f507ca2285e2d8c7ca3f6d5d3a7c41a346b094f7882c79c\"" Feb 13 19:42:58.171541 kernel: bpftool[5680]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Feb 13 19:42:58.189984 containerd[1467]: time="2025-02-13T19:42:58.189931080Z" level=info msg="StartContainer for \"a74a2875224369f83dfa048ce23c5ee4873880195b9359134f5ae6005985272b\" returns successfully" Feb 13 19:42:58.190244 containerd[1467]: time="2025-02-13T19:42:58.190162263Z" level=info msg="StartContainer for \"e788b98af2d35762b660479a876a82c4a189799ff26e5ebb3f07018f8d253fc8\" returns successfully" Feb 13 19:42:58.444917 systemd-networkd[1396]: cali556a18d1e22: Gained IPv6LL Feb 13 19:42:58.452470 systemd-networkd[1396]: vxlan.calico: Link UP Feb 13 19:42:58.452769 systemd-networkd[1396]: vxlan.calico: Gained carrier Feb 13 19:42:58.812847 systemd[1]: Started sshd@12-10.0.0.106:22-10.0.0.1:54546.service - OpenSSH per-connection server daemon (10.0.0.1:54546). Feb 13 19:42:58.870836 sshd[5781]: Accepted publickey for core from 10.0.0.1 port 54546 ssh2: RSA SHA256:M1JiL2vDeW2xAdjMAPzmPoLJm0DJz/C0inZYUswlgoA Feb 13 19:42:58.873004 sshd-session[5781]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:42:58.878001 systemd-logind[1449]: New session 13 of user core. Feb 13 19:42:58.883698 systemd[1]: Started session-13.scope - Session 13 of User core. Feb 13 19:42:59.012565 sshd[5785]: Connection closed by 10.0.0.1 port 54546 Feb 13 19:42:59.013148 sshd-session[5781]: pam_unix(sshd:session): session closed for user core Feb 13 19:42:59.026370 systemd[1]: sshd@12-10.0.0.106:22-10.0.0.1:54546.service: Deactivated successfully. Feb 13 19:42:59.028102 systemd[1]: session-13.scope: Deactivated successfully. Feb 13 19:42:59.029795 systemd-logind[1449]: Session 13 logged out. Waiting for processes to exit. Feb 13 19:42:59.035926 systemd[1]: Started sshd@13-10.0.0.106:22-10.0.0.1:54552.service - OpenSSH per-connection server daemon (10.0.0.1:54552). Feb 13 19:42:59.037825 systemd-logind[1449]: Removed session 13. 
Feb 13 19:42:59.040491 kubelet[2656]: E0213 19:42:59.040456 2656 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:42:59.044009 kubelet[2656]: E0213 19:42:59.043975 2656 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:42:59.054361 kubelet[2656]: I0213 19:42:59.054292 2656 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-rkdrq" podStartSLOduration=32.05426642 podStartE2EDuration="32.05426642s" podCreationTimestamp="2025-02-13 19:42:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 19:42:59.051960404 +0000 UTC m=+45.424229957" watchObservedRunningTime="2025-02-13 19:42:59.05426642 +0000 UTC m=+45.426535973" Feb 13 19:42:59.062614 kubelet[2656]: I0213 19:42:59.062546 2656 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-44gch" podStartSLOduration=32.062521529 podStartE2EDuration="32.062521529s" podCreationTimestamp="2025-02-13 19:42:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 19:42:59.061543835 +0000 UTC m=+45.433813388" watchObservedRunningTime="2025-02-13 19:42:59.062521529 +0000 UTC m=+45.434791082" Feb 13 19:42:59.083874 sshd[5800]: Accepted publickey for core from 10.0.0.1 port 54552 ssh2: RSA SHA256:M1JiL2vDeW2xAdjMAPzmPoLJm0DJz/C0inZYUswlgoA Feb 13 19:42:59.086219 sshd-session[5800]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:42:59.091396 systemd-logind[1449]: New session 14 of user core. Feb 13 19:42:59.097632 systemd[1]: Started session-14.scope - Session 14 of User core. Feb 13 19:42:59.147642 systemd-networkd[1396]: cali1e649c0561a: Gained IPv6LL Feb 13 19:42:59.148140 systemd-networkd[1396]: califd2103b5eda: Gained IPv6LL Feb 13 19:42:59.263004 sshd[5806]: Connection closed by 10.0.0.1 port 54552 Feb 13 19:42:59.263349 sshd-session[5800]: pam_unix(sshd:session): session closed for user core Feb 13 19:42:59.273472 systemd[1]: sshd@13-10.0.0.106:22-10.0.0.1:54552.service: Deactivated successfully. Feb 13 19:42:59.277699 systemd[1]: session-14.scope: Deactivated successfully. Feb 13 19:42:59.281063 systemd-logind[1449]: Session 14 logged out. Waiting for processes to exit. Feb 13 19:42:59.291976 systemd[1]: Started sshd@14-10.0.0.106:22-10.0.0.1:54558.service - OpenSSH per-connection server daemon (10.0.0.1:54558). Feb 13 19:42:59.292919 systemd-logind[1449]: Removed session 14. Feb 13 19:42:59.331792 sshd[5819]: Accepted publickey for core from 10.0.0.1 port 54558 ssh2: RSA SHA256:M1JiL2vDeW2xAdjMAPzmPoLJm0DJz/C0inZYUswlgoA Feb 13 19:42:59.334089 sshd-session[5819]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:42:59.339748 systemd-logind[1449]: New session 15 of user core. Feb 13 19:42:59.344773 systemd[1]: Started session-15.scope - Session 15 of User core. 
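[editor's note] The pod_startup_latency_tracker entries above report roughly 32 s for both coredns pods, and that figure is simply the gap between podCreationTimestamp and the recorded running time (no image pull was involved, so the SLO and E2E durations coincide). A small sketch reproducing the arithmetic from the timestamps in the first record; the timestamp layout used for parsing is an assumption about the log format, not tracker code.

```go
package main

import (
	"fmt"
	"time"
)

func mustParse(layout, value string) time.Time {
	t, err := time.Parse(layout, value)
	if err != nil {
		panic(err)
	}
	return t
}

func main() {
	// Timestamps copied from the coredns-7db6d8ff4d-rkdrq record above; Go
	// accepts fractional seconds on parse even though the layout omits them.
	const layout = "2006-01-02 15:04:05 -0700 MST"
	created := mustParse(layout, "2025-02-13 19:42:27 +0000 UTC")
	observed := mustParse(layout, "2025-02-13 19:42:59.05426642 +0000 UTC")

	// Matches the logged podStartE2EDuration of 32.05426642s.
	fmt.Println("pod startup duration:", observed.Sub(created))
}
```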
Feb 13 19:42:59.468749 systemd-networkd[1396]: caliaa4419c3920: Gained IPv6LL Feb 13 19:42:59.471428 sshd[5821]: Connection closed by 10.0.0.1 port 54558 Feb 13 19:42:59.471839 sshd-session[5819]: pam_unix(sshd:session): session closed for user core Feb 13 19:42:59.475695 systemd[1]: sshd@14-10.0.0.106:22-10.0.0.1:54558.service: Deactivated successfully. Feb 13 19:42:59.477928 systemd[1]: session-15.scope: Deactivated successfully. Feb 13 19:42:59.478739 systemd-logind[1449]: Session 15 logged out. Waiting for processes to exit. Feb 13 19:42:59.479762 systemd-logind[1449]: Removed session 15. Feb 13 19:42:59.591080 containerd[1467]: time="2025-02-13T19:42:59.590946574Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:42:59.592083 containerd[1467]: time="2025-02-13T19:42:59.592051136Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.1: active requests=0, bytes read=7902632" Feb 13 19:42:59.593435 containerd[1467]: time="2025-02-13T19:42:59.593375992Z" level=info msg="ImageCreate event name:\"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:42:59.595453 containerd[1467]: time="2025-02-13T19:42:59.595415429Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:42:59.597439 containerd[1467]: time="2025-02-13T19:42:59.597393711Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.1\" with image id \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\", size \"9395716\" in 1.727663681s" Feb 13 19:42:59.597439 containerd[1467]: time="2025-02-13T19:42:59.597432664Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\" returns image reference \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\"" Feb 13 19:42:59.598515 containerd[1467]: time="2025-02-13T19:42:59.598451976Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Feb 13 19:42:59.599654 containerd[1467]: time="2025-02-13T19:42:59.599533135Z" level=info msg="CreateContainer within sandbox \"16e870efbf5bcea8be2c39af9c6d06ea91b8ef3fe1b0c4956af44e1851f1de85\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Feb 13 19:42:59.624569 containerd[1467]: time="2025-02-13T19:42:59.624521621Z" level=info msg="CreateContainer within sandbox \"16e870efbf5bcea8be2c39af9c6d06ea91b8ef3fe1b0c4956af44e1851f1de85\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"2e3addb94f56fd76b20449312741bbdc373c608589257b2e785d3f3758267ca7\"" Feb 13 19:42:59.625132 containerd[1467]: time="2025-02-13T19:42:59.625095118Z" level=info msg="StartContainer for \"2e3addb94f56fd76b20449312741bbdc373c608589257b2e785d3f3758267ca7\"" Feb 13 19:42:59.661678 systemd[1]: Started cri-containerd-2e3addb94f56fd76b20449312741bbdc373c608589257b2e785d3f3758267ca7.scope - libcontainer container 2e3addb94f56fd76b20449312741bbdc373c608589257b2e785d3f3758267ca7. 
Feb 13 19:42:59.694903 containerd[1467]: time="2025-02-13T19:42:59.694862233Z" level=info msg="StartContainer for \"2e3addb94f56fd76b20449312741bbdc373c608589257b2e785d3f3758267ca7\" returns successfully" Feb 13 19:42:59.723711 systemd-networkd[1396]: calia4a1267b834: Gained IPv6LL Feb 13 19:42:59.724040 systemd-networkd[1396]: cali771a7580fa6: Gained IPv6LL Feb 13 19:42:59.915681 systemd-networkd[1396]: vxlan.calico: Gained IPv6LL Feb 13 19:43:00.061750 kubelet[2656]: E0213 19:43:00.061721 2656 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:43:00.062286 kubelet[2656]: E0213 19:43:00.061766 2656 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:43:01.065722 kubelet[2656]: E0213 19:43:01.064543 2656 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Feb 13 19:43:02.190676 containerd[1467]: time="2025-02-13T19:43:02.190618170Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:43:02.191681 containerd[1467]: time="2025-02-13T19:43:02.191593350Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=42001404" Feb 13 19:43:02.193126 containerd[1467]: time="2025-02-13T19:43:02.193091791Z" level=info msg="ImageCreate event name:\"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:43:02.195417 containerd[1467]: time="2025-02-13T19:43:02.195384533Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:43:02.195984 containerd[1467]: time="2025-02-13T19:43:02.195949632Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"43494504\" in 2.597468892s" Feb 13 19:43:02.195984 containerd[1467]: time="2025-02-13T19:43:02.195976743Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\"" Feb 13 19:43:02.196910 containerd[1467]: time="2025-02-13T19:43:02.196881180Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\"" Feb 13 19:43:02.198033 containerd[1467]: time="2025-02-13T19:43:02.197997314Z" level=info msg="CreateContainer within sandbox \"3ac1526f109e459d7e2629e28b39e5841aa9eb4e44128a66bb6a03dc579a8641\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Feb 13 19:43:02.213437 containerd[1467]: time="2025-02-13T19:43:02.213397277Z" level=info msg="CreateContainer within sandbox \"3ac1526f109e459d7e2629e28b39e5841aa9eb4e44128a66bb6a03dc579a8641\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id 
\"c6ac86fdf2698b3627eccf67fabdd140b00bd70a0847dbbba18c164a72b9b85e\"" Feb 13 19:43:02.213990 containerd[1467]: time="2025-02-13T19:43:02.213949524Z" level=info msg="StartContainer for \"c6ac86fdf2698b3627eccf67fabdd140b00bd70a0847dbbba18c164a72b9b85e\"" Feb 13 19:43:02.241774 systemd[1]: Started cri-containerd-c6ac86fdf2698b3627eccf67fabdd140b00bd70a0847dbbba18c164a72b9b85e.scope - libcontainer container c6ac86fdf2698b3627eccf67fabdd140b00bd70a0847dbbba18c164a72b9b85e. Feb 13 19:43:02.283803 containerd[1467]: time="2025-02-13T19:43:02.283747905Z" level=info msg="StartContainer for \"c6ac86fdf2698b3627eccf67fabdd140b00bd70a0847dbbba18c164a72b9b85e\" returns successfully" Feb 13 19:43:03.083919 kubelet[2656]: I0213 19:43:03.083855 2656 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-7f9ffdc98-tn7zc" podStartSLOduration=25.762661632 podStartE2EDuration="30.083835366s" podCreationTimestamp="2025-02-13 19:42:33 +0000 UTC" firstStartedPulling="2025-02-13 19:42:57.875529443 +0000 UTC m=+44.247798996" lastFinishedPulling="2025-02-13 19:43:02.196703187 +0000 UTC m=+48.568972730" observedRunningTime="2025-02-13 19:43:03.08340066 +0000 UTC m=+49.455670214" watchObservedRunningTime="2025-02-13 19:43:03.083835366 +0000 UTC m=+49.456104919" Feb 13 19:43:04.074940 kubelet[2656]: I0213 19:43:04.074905 2656 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 13 19:43:04.487302 systemd[1]: Started sshd@15-10.0.0.106:22-10.0.0.1:49834.service - OpenSSH per-connection server daemon (10.0.0.1:49834). Feb 13 19:43:04.549044 sshd[5936]: Accepted publickey for core from 10.0.0.1 port 49834 ssh2: RSA SHA256:M1JiL2vDeW2xAdjMAPzmPoLJm0DJz/C0inZYUswlgoA Feb 13 19:43:04.551097 sshd-session[5936]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:43:04.556029 systemd-logind[1449]: New session 16 of user core. Feb 13 19:43:04.562733 systemd[1]: Started session-16.scope - Session 16 of User core. Feb 13 19:43:04.815121 sshd[5939]: Connection closed by 10.0.0.1 port 49834 Feb 13 19:43:04.815486 sshd-session[5936]: pam_unix(sshd:session): session closed for user core Feb 13 19:43:04.820765 systemd[1]: sshd@15-10.0.0.106:22-10.0.0.1:49834.service: Deactivated successfully. Feb 13 19:43:04.823105 systemd[1]: session-16.scope: Deactivated successfully. Feb 13 19:43:04.823929 systemd-logind[1449]: Session 16 logged out. Waiting for processes to exit. Feb 13 19:43:04.824901 systemd-logind[1449]: Removed session 16. 
Feb 13 19:43:05.321323 containerd[1467]: time="2025-02-13T19:43:05.321266770Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:43:05.322196 containerd[1467]: time="2025-02-13T19:43:05.322110538Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.1: active requests=0, bytes read=34141192" Feb 13 19:43:05.323456 containerd[1467]: time="2025-02-13T19:43:05.323416748Z" level=info msg="ImageCreate event name:\"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:43:05.325659 containerd[1467]: time="2025-02-13T19:43:05.325625908Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:43:05.326352 containerd[1467]: time="2025-02-13T19:43:05.326311458Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" with image id \"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\", size \"35634244\" in 3.129403147s" Feb 13 19:43:05.326352 containerd[1467]: time="2025-02-13T19:43:05.326343789Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" returns image reference \"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\"" Feb 13 19:43:05.327430 containerd[1467]: time="2025-02-13T19:43:05.327364110Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Feb 13 19:43:05.340172 containerd[1467]: time="2025-02-13T19:43:05.340123895Z" level=info msg="CreateContainer within sandbox \"368e99e8dfb814055edcc6e1ba139801743bc5f7ed0e457a2cff10d6ce1d71e3\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Feb 13 19:43:05.364334 containerd[1467]: time="2025-02-13T19:43:05.364283404Z" level=info msg="CreateContainer within sandbox \"368e99e8dfb814055edcc6e1ba139801743bc5f7ed0e457a2cff10d6ce1d71e3\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"9381d76da2df47b7bcd42223398370f7c1dc16b73707b74de5a636df9d3b0280\"" Feb 13 19:43:05.364870 containerd[1467]: time="2025-02-13T19:43:05.364841545Z" level=info msg="StartContainer for \"9381d76da2df47b7bcd42223398370f7c1dc16b73707b74de5a636df9d3b0280\"" Feb 13 19:43:05.395672 systemd[1]: Started cri-containerd-9381d76da2df47b7bcd42223398370f7c1dc16b73707b74de5a636df9d3b0280.scope - libcontainer container 9381d76da2df47b7bcd42223398370f7c1dc16b73707b74de5a636df9d3b0280. 
Feb 13 19:43:05.437997 containerd[1467]: time="2025-02-13T19:43:05.437859578Z" level=info msg="StartContainer for \"9381d76da2df47b7bcd42223398370f7c1dc16b73707b74de5a636df9d3b0280\" returns successfully" Feb 13 19:43:05.787934 containerd[1467]: time="2025-02-13T19:43:05.787781232Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:43:05.788755 containerd[1467]: time="2025-02-13T19:43:05.788681316Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=77" Feb 13 19:43:05.791102 containerd[1467]: time="2025-02-13T19:43:05.791069443Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"43494504\" in 463.66694ms" Feb 13 19:43:05.791184 containerd[1467]: time="2025-02-13T19:43:05.791104920Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\"" Feb 13 19:43:05.791999 containerd[1467]: time="2025-02-13T19:43:05.791972644Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\"" Feb 13 19:43:05.793268 containerd[1467]: time="2025-02-13T19:43:05.793142136Z" level=info msg="CreateContainer within sandbox \"920187caf73464c72f507ca2285e2d8c7ca3f6d5d3a7c41a346b094f7882c79c\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Feb 13 19:43:05.808007 containerd[1467]: time="2025-02-13T19:43:05.807923367Z" level=info msg="CreateContainer within sandbox \"920187caf73464c72f507ca2285e2d8c7ca3f6d5d3a7c41a346b094f7882c79c\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"e3d31aac0d434443f226b355f09cb4537858693140e7a4b71ceae1c721518ff1\"" Feb 13 19:43:05.808551 containerd[1467]: time="2025-02-13T19:43:05.808521033Z" level=info msg="StartContainer for \"e3d31aac0d434443f226b355f09cb4537858693140e7a4b71ceae1c721518ff1\"" Feb 13 19:43:05.837630 systemd[1]: Started cri-containerd-e3d31aac0d434443f226b355f09cb4537858693140e7a4b71ceae1c721518ff1.scope - libcontainer container e3d31aac0d434443f226b355f09cb4537858693140e7a4b71ceae1c721518ff1. 
Feb 13 19:43:05.941901 containerd[1467]: time="2025-02-13T19:43:05.941841526Z" level=info msg="StartContainer for \"e3d31aac0d434443f226b355f09cb4537858693140e7a4b71ceae1c721518ff1\" returns successfully" Feb 13 19:43:06.096805 kubelet[2656]: I0213 19:43:06.096751 2656 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-78bc8bfdb7-xfh2s" podStartSLOduration=25.663380311 podStartE2EDuration="33.096733558s" podCreationTimestamp="2025-02-13 19:42:33 +0000 UTC" firstStartedPulling="2025-02-13 19:42:57.893813221 +0000 UTC m=+44.266082774" lastFinishedPulling="2025-02-13 19:43:05.327166448 +0000 UTC m=+51.699436021" observedRunningTime="2025-02-13 19:43:06.096652431 +0000 UTC m=+52.468921984" watchObservedRunningTime="2025-02-13 19:43:06.096733558 +0000 UTC m=+52.469003111" Feb 13 19:43:06.155702 kubelet[2656]: I0213 19:43:06.155576 2656 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-7f9ffdc98-9gqpq" podStartSLOduration=25.490125913 podStartE2EDuration="33.155555357s" podCreationTimestamp="2025-02-13 19:42:33 +0000 UTC" firstStartedPulling="2025-02-13 19:42:58.126358803 +0000 UTC m=+44.498628356" lastFinishedPulling="2025-02-13 19:43:05.791788237 +0000 UTC m=+52.164057800" observedRunningTime="2025-02-13 19:43:06.105342945 +0000 UTC m=+52.477612498" watchObservedRunningTime="2025-02-13 19:43:06.155555357 +0000 UTC m=+52.527824910" Feb 13 19:43:07.090751 kubelet[2656]: I0213 19:43:07.090705 2656 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 13 19:43:07.461738 containerd[1467]: time="2025-02-13T19:43:07.461557200Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:43:07.462731 containerd[1467]: time="2025-02-13T19:43:07.462390252Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1: active requests=0, bytes read=10501081" Feb 13 19:43:07.463663 containerd[1467]: time="2025-02-13T19:43:07.463610835Z" level=info msg="ImageCreate event name:\"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:43:07.466748 containerd[1467]: time="2025-02-13T19:43:07.465838686Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:43:07.466748 containerd[1467]: time="2025-02-13T19:43:07.466481280Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" with image id \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\", size \"11994117\" in 1.674478879s" Feb 13 19:43:07.466748 containerd[1467]: time="2025-02-13T19:43:07.466529894Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" returns image reference \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\"" Feb 13 19:43:07.468618 containerd[1467]: time="2025-02-13T19:43:07.468588428Z" level=info msg="CreateContainer within sandbox \"16e870efbf5bcea8be2c39af9c6d06ea91b8ef3fe1b0c4956af44e1851f1de85\" for 
container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Feb 13 19:43:07.486208 containerd[1467]: time="2025-02-13T19:43:07.486166506Z" level=info msg="CreateContainer within sandbox \"16e870efbf5bcea8be2c39af9c6d06ea91b8ef3fe1b0c4956af44e1851f1de85\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"c7bd4a94e24b67242a471b6017b348801d548ab793ede7b788bdcded08bb9921\"" Feb 13 19:43:07.486766 containerd[1467]: time="2025-02-13T19:43:07.486679569Z" level=info msg="StartContainer for \"c7bd4a94e24b67242a471b6017b348801d548ab793ede7b788bdcded08bb9921\"" Feb 13 19:43:07.528650 systemd[1]: Started cri-containerd-c7bd4a94e24b67242a471b6017b348801d548ab793ede7b788bdcded08bb9921.scope - libcontainer container c7bd4a94e24b67242a471b6017b348801d548ab793ede7b788bdcded08bb9921. Feb 13 19:43:07.564138 containerd[1467]: time="2025-02-13T19:43:07.564067759Z" level=info msg="StartContainer for \"c7bd4a94e24b67242a471b6017b348801d548ab793ede7b788bdcded08bb9921\" returns successfully" Feb 13 19:43:07.773692 kubelet[2656]: I0213 19:43:07.773575 2656 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Feb 13 19:43:07.773692 kubelet[2656]: I0213 19:43:07.773608 2656 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Feb 13 19:43:08.145812 kubelet[2656]: I0213 19:43:08.145747 2656 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-qxpzd" podStartSLOduration=25.547628282 podStartE2EDuration="35.145718666s" podCreationTimestamp="2025-02-13 19:42:33 +0000 UTC" firstStartedPulling="2025-02-13 19:42:57.869107032 +0000 UTC m=+44.241376585" lastFinishedPulling="2025-02-13 19:43:07.467197416 +0000 UTC m=+53.839466969" observedRunningTime="2025-02-13 19:43:08.145012932 +0000 UTC m=+54.517282485" watchObservedRunningTime="2025-02-13 19:43:08.145718666 +0000 UTC m=+54.517988219" Feb 13 19:43:09.827667 systemd[1]: Started sshd@16-10.0.0.106:22-10.0.0.1:49840.service - OpenSSH per-connection server daemon (10.0.0.1:49840). Feb 13 19:43:09.882474 sshd[6108]: Accepted publickey for core from 10.0.0.1 port 49840 ssh2: RSA SHA256:M1JiL2vDeW2xAdjMAPzmPoLJm0DJz/C0inZYUswlgoA Feb 13 19:43:09.883962 sshd-session[6108]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:43:09.887734 systemd-logind[1449]: New session 17 of user core. Feb 13 19:43:09.896611 systemd[1]: Started session-17.scope - Session 17 of User core. Feb 13 19:43:10.012400 sshd[6110]: Connection closed by 10.0.0.1 port 49840 Feb 13 19:43:10.012874 sshd-session[6108]: pam_unix(sshd:session): session closed for user core Feb 13 19:43:10.021216 systemd[1]: sshd@16-10.0.0.106:22-10.0.0.1:49840.service: Deactivated successfully. Feb 13 19:43:10.023000 systemd[1]: session-17.scope: Deactivated successfully. Feb 13 19:43:10.024382 systemd-logind[1449]: Session 17 logged out. Waiting for processes to exit. Feb 13 19:43:10.031768 systemd[1]: Started sshd@17-10.0.0.106:22-10.0.0.1:49842.service - OpenSSH per-connection server daemon (10.0.0.1:49842). Feb 13 19:43:10.032661 systemd-logind[1449]: Removed session 17. 
Feb 13 19:43:10.065592 sshd[6123]: Accepted publickey for core from 10.0.0.1 port 49842 ssh2: RSA SHA256:M1JiL2vDeW2xAdjMAPzmPoLJm0DJz/C0inZYUswlgoA Feb 13 19:43:10.066919 sshd-session[6123]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:43:10.070575 systemd-logind[1449]: New session 18 of user core. Feb 13 19:43:10.081618 systemd[1]: Started session-18.scope - Session 18 of User core. Feb 13 19:43:10.261603 sshd[6125]: Connection closed by 10.0.0.1 port 49842 Feb 13 19:43:10.262007 sshd-session[6123]: pam_unix(sshd:session): session closed for user core Feb 13 19:43:10.276426 systemd[1]: sshd@17-10.0.0.106:22-10.0.0.1:49842.service: Deactivated successfully. Feb 13 19:43:10.278194 systemd[1]: session-18.scope: Deactivated successfully. Feb 13 19:43:10.279659 systemd-logind[1449]: Session 18 logged out. Waiting for processes to exit. Feb 13 19:43:10.281082 systemd[1]: Started sshd@18-10.0.0.106:22-10.0.0.1:49852.service - OpenSSH per-connection server daemon (10.0.0.1:49852). Feb 13 19:43:10.281989 systemd-logind[1449]: Removed session 18. Feb 13 19:43:10.330448 sshd[6136]: Accepted publickey for core from 10.0.0.1 port 49852 ssh2: RSA SHA256:M1JiL2vDeW2xAdjMAPzmPoLJm0DJz/C0inZYUswlgoA Feb 13 19:43:10.331760 sshd-session[6136]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:43:10.335707 systemd-logind[1449]: New session 19 of user core. Feb 13 19:43:10.344653 systemd[1]: Started session-19.scope - Session 19 of User core. Feb 13 19:43:11.791493 sshd[6138]: Connection closed by 10.0.0.1 port 49852 Feb 13 19:43:11.792278 sshd-session[6136]: pam_unix(sshd:session): session closed for user core Feb 13 19:43:11.803652 systemd[1]: sshd@18-10.0.0.106:22-10.0.0.1:49852.service: Deactivated successfully. Feb 13 19:43:11.806101 systemd[1]: session-19.scope: Deactivated successfully. Feb 13 19:43:11.807800 systemd-logind[1449]: Session 19 logged out. Waiting for processes to exit. Feb 13 19:43:11.815824 systemd[1]: Started sshd@19-10.0.0.106:22-10.0.0.1:49868.service - OpenSSH per-connection server daemon (10.0.0.1:49868). Feb 13 19:43:11.816820 systemd-logind[1449]: Removed session 19. Feb 13 19:43:11.854219 sshd[6156]: Accepted publickey for core from 10.0.0.1 port 49868 ssh2: RSA SHA256:M1JiL2vDeW2xAdjMAPzmPoLJm0DJz/C0inZYUswlgoA Feb 13 19:43:11.855676 sshd-session[6156]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:43:11.859830 systemd-logind[1449]: New session 20 of user core. Feb 13 19:43:11.877669 systemd[1]: Started session-20.scope - Session 20 of User core. Feb 13 19:43:12.085614 sshd[6158]: Connection closed by 10.0.0.1 port 49868 Feb 13 19:43:12.085992 sshd-session[6156]: pam_unix(sshd:session): session closed for user core Feb 13 19:43:12.095658 systemd[1]: sshd@19-10.0.0.106:22-10.0.0.1:49868.service: Deactivated successfully. Feb 13 19:43:12.097634 systemd[1]: session-20.scope: Deactivated successfully. Feb 13 19:43:12.098994 systemd-logind[1449]: Session 20 logged out. Waiting for processes to exit. Feb 13 19:43:12.100265 systemd[1]: Started sshd@20-10.0.0.106:22-10.0.0.1:49874.service - OpenSSH per-connection server daemon (10.0.0.1:49874). Feb 13 19:43:12.101062 systemd-logind[1449]: Removed session 20. 
Feb 13 19:43:12.139744 sshd[6168]: Accepted publickey for core from 10.0.0.1 port 49874 ssh2: RSA SHA256:M1JiL2vDeW2xAdjMAPzmPoLJm0DJz/C0inZYUswlgoA Feb 13 19:43:12.141381 sshd-session[6168]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 19:43:12.145736 systemd-logind[1449]: New session 21 of user core. Feb 13 19:43:12.157637 systemd[1]: Started session-21.scope - Session 21 of User core. Feb 13 19:43:12.273701 sshd[6170]: Connection closed by 10.0.0.1 port 49874 Feb 13 19:43:12.274361 sshd-session[6168]: pam_unix(sshd:session): session closed for user core Feb 13 19:43:12.277650 systemd[1]: sshd@20-10.0.0.106:22-10.0.0.1:49874.service: Deactivated successfully. Feb 13 19:43:12.279759 systemd[1]: session-21.scope: Deactivated successfully. Feb 13 19:43:12.281370 systemd-logind[1449]: Session 21 logged out. Waiting for processes to exit. Feb 13 19:43:12.282525 systemd-logind[1449]: Removed session 21. Feb 13 19:43:13.697057 containerd[1467]: time="2025-02-13T19:43:13.696974598Z" level=info msg="StopPodSandbox for \"15f5f0073d3a00815e2691e423b79fb9f7b74ca3dca7c8fc2f5ee74434ddc4d5\"" Feb 13 19:43:13.697581 containerd[1467]: time="2025-02-13T19:43:13.697106713Z" level=info msg="TearDown network for sandbox \"15f5f0073d3a00815e2691e423b79fb9f7b74ca3dca7c8fc2f5ee74434ddc4d5\" successfully" Feb 13 19:43:13.697581 containerd[1467]: time="2025-02-13T19:43:13.697120829Z" level=info msg="StopPodSandbox for \"15f5f0073d3a00815e2691e423b79fb9f7b74ca3dca7c8fc2f5ee74434ddc4d5\" returns successfully" Feb 13 19:43:13.706116 containerd[1467]: time="2025-02-13T19:43:13.706085857Z" level=info msg="RemovePodSandbox for \"15f5f0073d3a00815e2691e423b79fb9f7b74ca3dca7c8fc2f5ee74434ddc4d5\"" Feb 13 19:43:13.719715 containerd[1467]: time="2025-02-13T19:43:13.719682695Z" level=info msg="Forcibly stopping sandbox \"15f5f0073d3a00815e2691e423b79fb9f7b74ca3dca7c8fc2f5ee74434ddc4d5\"" Feb 13 19:43:13.719840 containerd[1467]: time="2025-02-13T19:43:13.719789171Z" level=info msg="TearDown network for sandbox \"15f5f0073d3a00815e2691e423b79fb9f7b74ca3dca7c8fc2f5ee74434ddc4d5\" successfully" Feb 13 19:43:13.729625 containerd[1467]: time="2025-02-13T19:43:13.729564479Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"15f5f0073d3a00815e2691e423b79fb9f7b74ca3dca7c8fc2f5ee74434ddc4d5\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 19:43:13.729716 containerd[1467]: time="2025-02-13T19:43:13.729686865Z" level=info msg="RemovePodSandbox \"15f5f0073d3a00815e2691e423b79fb9f7b74ca3dca7c8fc2f5ee74434ddc4d5\" returns successfully" Feb 13 19:43:13.730238 containerd[1467]: time="2025-02-13T19:43:13.730205233Z" level=info msg="StopPodSandbox for \"4a5ec2259b92339e47107aa67428fea37834d2811a71b36beb4f666a45ba9d2d\"" Feb 13 19:43:13.730377 containerd[1467]: time="2025-02-13T19:43:13.730349291Z" level=info msg="TearDown network for sandbox \"4a5ec2259b92339e47107aa67428fea37834d2811a71b36beb4f666a45ba9d2d\" successfully" Feb 13 19:43:13.730377 containerd[1467]: time="2025-02-13T19:43:13.730364660Z" level=info msg="StopPodSandbox for \"4a5ec2259b92339e47107aa67428fea37834d2811a71b36beb4f666a45ba9d2d\" returns successfully" Feb 13 19:43:13.730644 containerd[1467]: time="2025-02-13T19:43:13.730596366Z" level=info msg="RemovePodSandbox for \"4a5ec2259b92339e47107aa67428fea37834d2811a71b36beb4f666a45ba9d2d\"" Feb 13 19:43:13.730644 containerd[1467]: time="2025-02-13T19:43:13.730622827Z" level=info msg="Forcibly stopping sandbox \"4a5ec2259b92339e47107aa67428fea37834d2811a71b36beb4f666a45ba9d2d\"" Feb 13 19:43:13.730755 containerd[1467]: time="2025-02-13T19:43:13.730683445Z" level=info msg="TearDown network for sandbox \"4a5ec2259b92339e47107aa67428fea37834d2811a71b36beb4f666a45ba9d2d\" successfully" Feb 13 19:43:13.793523 containerd[1467]: time="2025-02-13T19:43:13.793360196Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"4a5ec2259b92339e47107aa67428fea37834d2811a71b36beb4f666a45ba9d2d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 13 19:43:13.793523 containerd[1467]: time="2025-02-13T19:43:13.793450038Z" level=info msg="RemovePodSandbox \"4a5ec2259b92339e47107aa67428fea37834d2811a71b36beb4f666a45ba9d2d\" returns successfully" Feb 13 19:43:13.794187 containerd[1467]: time="2025-02-13T19:43:13.793983977Z" level=info msg="StopPodSandbox for \"d44e7c809ab337b67b18df9c61d5caa6402c1cc76d619d60e5d1d6da3800eaec\"" Feb 13 19:43:13.794187 containerd[1467]: time="2025-02-13T19:43:13.794093007Z" level=info msg="TearDown network for sandbox \"d44e7c809ab337b67b18df9c61d5caa6402c1cc76d619d60e5d1d6da3800eaec\" successfully" Feb 13 19:43:13.794187 containerd[1467]: time="2025-02-13T19:43:13.794150298Z" level=info msg="StopPodSandbox for \"d44e7c809ab337b67b18df9c61d5caa6402c1cc76d619d60e5d1d6da3800eaec\" returns successfully" Feb 13 19:43:13.794668 containerd[1467]: time="2025-02-13T19:43:13.794423634Z" level=info msg="RemovePodSandbox for \"d44e7c809ab337b67b18df9c61d5caa6402c1cc76d619d60e5d1d6da3800eaec\"" Feb 13 19:43:13.794668 containerd[1467]: time="2025-02-13T19:43:13.794445426Z" level=info msg="Forcibly stopping sandbox \"d44e7c809ab337b67b18df9c61d5caa6402c1cc76d619d60e5d1d6da3800eaec\"" Feb 13 19:43:13.794668 containerd[1467]: time="2025-02-13T19:43:13.794546340Z" level=info msg="TearDown network for sandbox \"d44e7c809ab337b67b18df9c61d5caa6402c1cc76d619d60e5d1d6da3800eaec\" successfully" Feb 13 19:43:13.801909 containerd[1467]: time="2025-02-13T19:43:13.801826422Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d44e7c809ab337b67b18df9c61d5caa6402c1cc76d619d60e5d1d6da3800eaec\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 19:43:13.801909 containerd[1467]: time="2025-02-13T19:43:13.801876539Z" level=info msg="RemovePodSandbox \"d44e7c809ab337b67b18df9c61d5caa6402c1cc76d619d60e5d1d6da3800eaec\" returns successfully" Feb 13 19:43:13.802372 containerd[1467]: time="2025-02-13T19:43:13.802335453Z" level=info msg="StopPodSandbox for \"e3e45c614369b5fa892d59c6ed2c11a47ce68f02eafdafdcecd31a42bec5e8ee\"" Feb 13 19:43:13.802470 containerd[1467]: time="2025-02-13T19:43:13.802431889Z" level=info msg="TearDown network for sandbox \"e3e45c614369b5fa892d59c6ed2c11a47ce68f02eafdafdcecd31a42bec5e8ee\" successfully" Feb 13 19:43:13.802470 containerd[1467]: time="2025-02-13T19:43:13.802467818Z" level=info msg="StopPodSandbox for \"e3e45c614369b5fa892d59c6ed2c11a47ce68f02eafdafdcecd31a42bec5e8ee\" returns successfully" Feb 13 19:43:13.802731 containerd[1467]: time="2025-02-13T19:43:13.802714312Z" level=info msg="RemovePodSandbox for \"e3e45c614369b5fa892d59c6ed2c11a47ce68f02eafdafdcecd31a42bec5e8ee\"" Feb 13 19:43:13.802773 containerd[1467]: time="2025-02-13T19:43:13.802733369Z" level=info msg="Forcibly stopping sandbox \"e3e45c614369b5fa892d59c6ed2c11a47ce68f02eafdafdcecd31a42bec5e8ee\"" Feb 13 19:43:13.802881 containerd[1467]: time="2025-02-13T19:43:13.802796641Z" level=info msg="TearDown network for sandbox \"e3e45c614369b5fa892d59c6ed2c11a47ce68f02eafdafdcecd31a42bec5e8ee\" successfully" Feb 13 19:43:13.840485 containerd[1467]: time="2025-02-13T19:43:13.840412694Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e3e45c614369b5fa892d59c6ed2c11a47ce68f02eafdafdcecd31a42bec5e8ee\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 13 19:43:13.840700 containerd[1467]: time="2025-02-13T19:43:13.840583342Z" level=info msg="RemovePodSandbox \"e3e45c614369b5fa892d59c6ed2c11a47ce68f02eafdafdcecd31a42bec5e8ee\" returns successfully" Feb 13 19:43:13.841211 containerd[1467]: time="2025-02-13T19:43:13.841186915Z" level=info msg="StopPodSandbox for \"9838f93d930aefed276fcdd92fb7e1c1e9305a68265611e2d47cd7a2370c6bb8\"" Feb 13 19:43:13.841337 containerd[1467]: time="2025-02-13T19:43:13.841292789Z" level=info msg="TearDown network for sandbox \"9838f93d930aefed276fcdd92fb7e1c1e9305a68265611e2d47cd7a2370c6bb8\" successfully" Feb 13 19:43:13.841337 containerd[1467]: time="2025-02-13T19:43:13.841304742Z" level=info msg="StopPodSandbox for \"9838f93d930aefed276fcdd92fb7e1c1e9305a68265611e2d47cd7a2370c6bb8\" returns successfully" Feb 13 19:43:13.841796 containerd[1467]: time="2025-02-13T19:43:13.841661068Z" level=info msg="RemovePodSandbox for \"9838f93d930aefed276fcdd92fb7e1c1e9305a68265611e2d47cd7a2370c6bb8\"" Feb 13 19:43:13.841796 containerd[1467]: time="2025-02-13T19:43:13.841688301Z" level=info msg="Forcibly stopping sandbox \"9838f93d930aefed276fcdd92fb7e1c1e9305a68265611e2d47cd7a2370c6bb8\"" Feb 13 19:43:13.841883 containerd[1467]: time="2025-02-13T19:43:13.841775017Z" level=info msg="TearDown network for sandbox \"9838f93d930aefed276fcdd92fb7e1c1e9305a68265611e2d47cd7a2370c6bb8\" successfully" Feb 13 19:43:13.846376 containerd[1467]: time="2025-02-13T19:43:13.846342805Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"9838f93d930aefed276fcdd92fb7e1c1e9305a68265611e2d47cd7a2370c6bb8\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 19:43:13.846457 containerd[1467]: time="2025-02-13T19:43:13.846396689Z" level=info msg="RemovePodSandbox \"9838f93d930aefed276fcdd92fb7e1c1e9305a68265611e2d47cd7a2370c6bb8\" returns successfully" Feb 13 19:43:13.846701 containerd[1467]: time="2025-02-13T19:43:13.846671669Z" level=info msg="StopPodSandbox for \"2253b7e3c451b5ca60a2cae78b04149aadea7f9c28994679d78fafb9255616ae\"" Feb 13 19:43:13.846786 containerd[1467]: time="2025-02-13T19:43:13.846754528Z" level=info msg="TearDown network for sandbox \"2253b7e3c451b5ca60a2cae78b04149aadea7f9c28994679d78fafb9255616ae\" successfully" Feb 13 19:43:13.846786 containerd[1467]: time="2025-02-13T19:43:13.846766522Z" level=info msg="StopPodSandbox for \"2253b7e3c451b5ca60a2cae78b04149aadea7f9c28994679d78fafb9255616ae\" returns successfully" Feb 13 19:43:13.847040 containerd[1467]: time="2025-02-13T19:43:13.846976015Z" level=info msg="RemovePodSandbox for \"2253b7e3c451b5ca60a2cae78b04149aadea7f9c28994679d78fafb9255616ae\"" Feb 13 19:43:13.847040 containerd[1467]: time="2025-02-13T19:43:13.847007436Z" level=info msg="Forcibly stopping sandbox \"2253b7e3c451b5ca60a2cae78b04149aadea7f9c28994679d78fafb9255616ae\"" Feb 13 19:43:13.847128 containerd[1467]: time="2025-02-13T19:43:13.847087740Z" level=info msg="TearDown network for sandbox \"2253b7e3c451b5ca60a2cae78b04149aadea7f9c28994679d78fafb9255616ae\" successfully" Feb 13 19:43:13.850990 containerd[1467]: time="2025-02-13T19:43:13.850953657Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"2253b7e3c451b5ca60a2cae78b04149aadea7f9c28994679d78fafb9255616ae\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 13 19:43:13.851034 containerd[1467]: time="2025-02-13T19:43:13.851000777Z" level=info msg="RemovePodSandbox \"2253b7e3c451b5ca60a2cae78b04149aadea7f9c28994679d78fafb9255616ae\" returns successfully" Feb 13 19:43:13.851332 containerd[1467]: time="2025-02-13T19:43:13.851311606Z" level=info msg="StopPodSandbox for \"3782781afd896cb7cf8a44c27fcb5a6dbd5316fbd92805b724a8e2c5adbac911\"" Feb 13 19:43:13.851437 containerd[1467]: time="2025-02-13T19:43:13.851412500Z" level=info msg="TearDown network for sandbox \"3782781afd896cb7cf8a44c27fcb5a6dbd5316fbd92805b724a8e2c5adbac911\" successfully" Feb 13 19:43:13.851437 containerd[1467]: time="2025-02-13T19:43:13.851428712Z" level=info msg="StopPodSandbox for \"3782781afd896cb7cf8a44c27fcb5a6dbd5316fbd92805b724a8e2c5adbac911\" returns successfully" Feb 13 19:43:13.851703 containerd[1467]: time="2025-02-13T19:43:13.851671579Z" level=info msg="RemovePodSandbox for \"3782781afd896cb7cf8a44c27fcb5a6dbd5316fbd92805b724a8e2c5adbac911\"" Feb 13 19:43:13.851703 containerd[1467]: time="2025-02-13T19:43:13.851695745Z" level=info msg="Forcibly stopping sandbox \"3782781afd896cb7cf8a44c27fcb5a6dbd5316fbd92805b724a8e2c5adbac911\"" Feb 13 19:43:13.851815 containerd[1467]: time="2025-02-13T19:43:13.851773716Z" level=info msg="TearDown network for sandbox \"3782781afd896cb7cf8a44c27fcb5a6dbd5316fbd92805b724a8e2c5adbac911\" successfully" Feb 13 19:43:13.855142 containerd[1467]: time="2025-02-13T19:43:13.855106435Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"3782781afd896cb7cf8a44c27fcb5a6dbd5316fbd92805b724a8e2c5adbac911\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 19:43:13.855315 containerd[1467]: time="2025-02-13T19:43:13.855145410Z" level=info msg="RemovePodSandbox \"3782781afd896cb7cf8a44c27fcb5a6dbd5316fbd92805b724a8e2c5adbac911\" returns successfully" Feb 13 19:43:13.855588 containerd[1467]: time="2025-02-13T19:43:13.855540071Z" level=info msg="StopPodSandbox for \"f2df588c3d5c7137f1b9c17c9dae5d07407ef6a7c0ff67a6a0358da645106646\"" Feb 13 19:43:13.855736 containerd[1467]: time="2025-02-13T19:43:13.855706701Z" level=info msg="TearDown network for sandbox \"f2df588c3d5c7137f1b9c17c9dae5d07407ef6a7c0ff67a6a0358da645106646\" successfully" Feb 13 19:43:13.855736 containerd[1467]: time="2025-02-13T19:43:13.855729535Z" level=info msg="StopPodSandbox for \"f2df588c3d5c7137f1b9c17c9dae5d07407ef6a7c0ff67a6a0358da645106646\" returns successfully" Feb 13 19:43:13.856019 containerd[1467]: time="2025-02-13T19:43:13.855998343Z" level=info msg="RemovePodSandbox for \"f2df588c3d5c7137f1b9c17c9dae5d07407ef6a7c0ff67a6a0358da645106646\"" Feb 13 19:43:13.856081 containerd[1467]: time="2025-02-13T19:43:13.856021678Z" level=info msg="Forcibly stopping sandbox \"f2df588c3d5c7137f1b9c17c9dae5d07407ef6a7c0ff67a6a0358da645106646\"" Feb 13 19:43:13.856125 containerd[1467]: time="2025-02-13T19:43:13.856088146Z" level=info msg="TearDown network for sandbox \"f2df588c3d5c7137f1b9c17c9dae5d07407ef6a7c0ff67a6a0358da645106646\" successfully" Feb 13 19:43:13.859395 containerd[1467]: time="2025-02-13T19:43:13.859355700Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"f2df588c3d5c7137f1b9c17c9dae5d07407ef6a7c0ff67a6a0358da645106646\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 13 19:43:13.859449 containerd[1467]: time="2025-02-13T19:43:13.859398502Z" level=info msg="RemovePodSandbox \"f2df588c3d5c7137f1b9c17c9dae5d07407ef6a7c0ff67a6a0358da645106646\" returns successfully" Feb 13 19:43:13.859805 containerd[1467]: time="2025-02-13T19:43:13.859685405Z" level=info msg="StopPodSandbox for \"476eccaf99bc0f4eac505936b7325e87844524bc2c76bb506c6e391d40d89461\"" Feb 13 19:43:13.859805 containerd[1467]: time="2025-02-13T19:43:13.859800085Z" level=info msg="TearDown network for sandbox \"476eccaf99bc0f4eac505936b7325e87844524bc2c76bb506c6e391d40d89461\" successfully" Feb 13 19:43:13.859872 containerd[1467]: time="2025-02-13T19:43:13.859814042Z" level=info msg="StopPodSandbox for \"476eccaf99bc0f4eac505936b7325e87844524bc2c76bb506c6e391d40d89461\" returns successfully" Feb 13 19:43:13.860102 containerd[1467]: time="2025-02-13T19:43:13.860057962Z" level=info msg="RemovePodSandbox for \"476eccaf99bc0f4eac505936b7325e87844524bc2c76bb506c6e391d40d89461\"" Feb 13 19:43:13.860102 containerd[1467]: time="2025-02-13T19:43:13.860085055Z" level=info msg="Forcibly stopping sandbox \"476eccaf99bc0f4eac505936b7325e87844524bc2c76bb506c6e391d40d89461\"" Feb 13 19:43:13.860457 containerd[1467]: time="2025-02-13T19:43:13.860152714Z" level=info msg="TearDown network for sandbox \"476eccaf99bc0f4eac505936b7325e87844524bc2c76bb506c6e391d40d89461\" successfully" Feb 13 19:43:13.863509 containerd[1467]: time="2025-02-13T19:43:13.863458752Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"476eccaf99bc0f4eac505936b7325e87844524bc2c76bb506c6e391d40d89461\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 19:43:13.863555 containerd[1467]: time="2025-02-13T19:43:13.863512005Z" level=info msg="RemovePodSandbox \"476eccaf99bc0f4eac505936b7325e87844524bc2c76bb506c6e391d40d89461\" returns successfully" Feb 13 19:43:13.863757 containerd[1467]: time="2025-02-13T19:43:13.863735996Z" level=info msg="StopPodSandbox for \"d33b63469e8e0eb1922ed670a85cd121776c55f33c9094574816b7c1ea0694de\"" Feb 13 19:43:13.863832 containerd[1467]: time="2025-02-13T19:43:13.863815490Z" level=info msg="TearDown network for sandbox \"d33b63469e8e0eb1922ed670a85cd121776c55f33c9094574816b7c1ea0694de\" successfully" Feb 13 19:43:13.863832 containerd[1467]: time="2025-02-13T19:43:13.863828915Z" level=info msg="StopPodSandbox for \"d33b63469e8e0eb1922ed670a85cd121776c55f33c9094574816b7c1ea0694de\" returns successfully" Feb 13 19:43:13.864072 containerd[1467]: time="2025-02-13T19:43:13.864046073Z" level=info msg="RemovePodSandbox for \"d33b63469e8e0eb1922ed670a85cd121776c55f33c9094574816b7c1ea0694de\"" Feb 13 19:43:13.864072 containerd[1467]: time="2025-02-13T19:43:13.864066583Z" level=info msg="Forcibly stopping sandbox \"d33b63469e8e0eb1922ed670a85cd121776c55f33c9094574816b7c1ea0694de\"" Feb 13 19:43:13.864163 containerd[1467]: time="2025-02-13T19:43:13.864132971Z" level=info msg="TearDown network for sandbox \"d33b63469e8e0eb1922ed670a85cd121776c55f33c9094574816b7c1ea0694de\" successfully" Feb 13 19:43:13.867453 containerd[1467]: time="2025-02-13T19:43:13.867414321Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d33b63469e8e0eb1922ed670a85cd121776c55f33c9094574816b7c1ea0694de\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 13 19:43:13.867453 containerd[1467]: time="2025-02-13T19:43:13.867448156Z" level=info msg="RemovePodSandbox \"d33b63469e8e0eb1922ed670a85cd121776c55f33c9094574816b7c1ea0694de\" returns successfully" Feb 13 19:43:13.867697 containerd[1467]: time="2025-02-13T19:43:13.867663511Z" level=info msg="StopPodSandbox for \"dba9abf48b93dd0a6ebe5705ff62ca1e13f9f106531eac199271ce793b20325e\"" Feb 13 19:43:13.867782 containerd[1467]: time="2025-02-13T19:43:13.867762321Z" level=info msg="TearDown network for sandbox \"dba9abf48b93dd0a6ebe5705ff62ca1e13f9f106531eac199271ce793b20325e\" successfully" Feb 13 19:43:13.867821 containerd[1467]: time="2025-02-13T19:43:13.867779054Z" level=info msg="StopPodSandbox for \"dba9abf48b93dd0a6ebe5705ff62ca1e13f9f106531eac199271ce793b20325e\" returns successfully" Feb 13 19:43:13.868022 containerd[1467]: time="2025-02-13T19:43:13.867991643Z" level=info msg="RemovePodSandbox for \"dba9abf48b93dd0a6ebe5705ff62ca1e13f9f106531eac199271ce793b20325e\"" Feb 13 19:43:13.868022 containerd[1467]: time="2025-02-13T19:43:13.868015569Z" level=info msg="Forcibly stopping sandbox \"dba9abf48b93dd0a6ebe5705ff62ca1e13f9f106531eac199271ce793b20325e\"" Feb 13 19:43:13.868128 containerd[1467]: time="2025-02-13T19:43:13.868093389Z" level=info msg="TearDown network for sandbox \"dba9abf48b93dd0a6ebe5705ff62ca1e13f9f106531eac199271ce793b20325e\" successfully" Feb 13 19:43:13.871555 containerd[1467]: time="2025-02-13T19:43:13.871494450Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"dba9abf48b93dd0a6ebe5705ff62ca1e13f9f106531eac199271ce793b20325e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 19:43:13.871663 containerd[1467]: time="2025-02-13T19:43:13.871569094Z" level=info msg="RemovePodSandbox \"dba9abf48b93dd0a6ebe5705ff62ca1e13f9f106531eac199271ce793b20325e\" returns successfully" Feb 13 19:43:13.871886 containerd[1467]: time="2025-02-13T19:43:13.871860374Z" level=info msg="StopPodSandbox for \"eba9a6b214974d09c896d8d0afab09d1ea23e394175225f724f1cf6f9a1981f3\"" Feb 13 19:43:13.871986 containerd[1467]: time="2025-02-13T19:43:13.871971118Z" level=info msg="TearDown network for sandbox \"eba9a6b214974d09c896d8d0afab09d1ea23e394175225f724f1cf6f9a1981f3\" successfully" Feb 13 19:43:13.872015 containerd[1467]: time="2025-02-13T19:43:13.871985005Z" level=info msg="StopPodSandbox for \"eba9a6b214974d09c896d8d0afab09d1ea23e394175225f724f1cf6f9a1981f3\" returns successfully" Feb 13 19:43:13.872291 containerd[1467]: time="2025-02-13T19:43:13.872255286Z" level=info msg="RemovePodSandbox for \"eba9a6b214974d09c896d8d0afab09d1ea23e394175225f724f1cf6f9a1981f3\"" Feb 13 19:43:13.872291 containerd[1467]: time="2025-02-13T19:43:13.872283269Z" level=info msg="Forcibly stopping sandbox \"eba9a6b214974d09c896d8d0afab09d1ea23e394175225f724f1cf6f9a1981f3\"" Feb 13 19:43:13.872393 containerd[1467]: time="2025-02-13T19:43:13.872355499Z" level=info msg="TearDown network for sandbox \"eba9a6b214974d09c896d8d0afab09d1ea23e394175225f724f1cf6f9a1981f3\" successfully" Feb 13 19:43:13.876028 containerd[1467]: time="2025-02-13T19:43:13.875968397Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"eba9a6b214974d09c896d8d0afab09d1ea23e394175225f724f1cf6f9a1981f3\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 13 19:43:13.876028 containerd[1467]: time="2025-02-13T19:43:13.876010227Z" level=info msg="RemovePodSandbox \"eba9a6b214974d09c896d8d0afab09d1ea23e394175225f724f1cf6f9a1981f3\" returns successfully" Feb 13 19:43:13.876458 containerd[1467]: time="2025-02-13T19:43:13.876416671Z" level=info msg="StopPodSandbox for \"1595aba60392b63c1553a8901e090a43d78b0dd2f506eb2d65ada601c8d5441c\"" Feb 13 19:43:13.876669 containerd[1467]: time="2025-02-13T19:43:13.876598891Z" level=info msg="TearDown network for sandbox \"1595aba60392b63c1553a8901e090a43d78b0dd2f506eb2d65ada601c8d5441c\" successfully" Feb 13 19:43:13.876669 containerd[1467]: time="2025-02-13T19:43:13.876662004Z" level=info msg="StopPodSandbox for \"1595aba60392b63c1553a8901e090a43d78b0dd2f506eb2d65ada601c8d5441c\" returns successfully" Feb 13 19:43:13.876970 containerd[1467]: time="2025-02-13T19:43:13.876944668Z" level=info msg="RemovePodSandbox for \"1595aba60392b63c1553a8901e090a43d78b0dd2f506eb2d65ada601c8d5441c\"" Feb 13 19:43:13.877030 containerd[1467]: time="2025-02-13T19:43:13.876973713Z" level=info msg="Forcibly stopping sandbox \"1595aba60392b63c1553a8901e090a43d78b0dd2f506eb2d65ada601c8d5441c\"" Feb 13 19:43:13.877108 containerd[1467]: time="2025-02-13T19:43:13.877086531Z" level=info msg="TearDown network for sandbox \"1595aba60392b63c1553a8901e090a43d78b0dd2f506eb2d65ada601c8d5441c\" successfully" Feb 13 19:43:13.880628 containerd[1467]: time="2025-02-13T19:43:13.880606721Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"1595aba60392b63c1553a8901e090a43d78b0dd2f506eb2d65ada601c8d5441c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 19:43:13.880699 containerd[1467]: time="2025-02-13T19:43:13.880639795Z" level=info msg="RemovePodSandbox \"1595aba60392b63c1553a8901e090a43d78b0dd2f506eb2d65ada601c8d5441c\" returns successfully" Feb 13 19:43:13.880905 containerd[1467]: time="2025-02-13T19:43:13.880835662Z" level=info msg="StopPodSandbox for \"05ebdeb5f9509ab11f0283d09aded299f309ee75b4f923bc2617f708e6b6b23e\"" Feb 13 19:43:13.880960 containerd[1467]: time="2025-02-13T19:43:13.880928601Z" level=info msg="TearDown network for sandbox \"05ebdeb5f9509ab11f0283d09aded299f309ee75b4f923bc2617f708e6b6b23e\" successfully" Feb 13 19:43:13.880960 containerd[1467]: time="2025-02-13T19:43:13.880938711Z" level=info msg="StopPodSandbox for \"05ebdeb5f9509ab11f0283d09aded299f309ee75b4f923bc2617f708e6b6b23e\" returns successfully" Feb 13 19:43:13.881536 containerd[1467]: time="2025-02-13T19:43:13.881141090Z" level=info msg="RemovePodSandbox for \"05ebdeb5f9509ab11f0283d09aded299f309ee75b4f923bc2617f708e6b6b23e\"" Feb 13 19:43:13.881536 containerd[1467]: time="2025-02-13T19:43:13.881163343Z" level=info msg="Forcibly stopping sandbox \"05ebdeb5f9509ab11f0283d09aded299f309ee75b4f923bc2617f708e6b6b23e\"" Feb 13 19:43:13.881536 containerd[1467]: time="2025-02-13T19:43:13.881226646Z" level=info msg="TearDown network for sandbox \"05ebdeb5f9509ab11f0283d09aded299f309ee75b4f923bc2617f708e6b6b23e\" successfully" Feb 13 19:43:13.885359 containerd[1467]: time="2025-02-13T19:43:13.885318717Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"05ebdeb5f9509ab11f0283d09aded299f309ee75b4f923bc2617f708e6b6b23e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 13 19:43:13.885431 containerd[1467]: time="2025-02-13T19:43:13.885396848Z" level=info msg="RemovePodSandbox \"05ebdeb5f9509ab11f0283d09aded299f309ee75b4f923bc2617f708e6b6b23e\" returns successfully" Feb 13 19:43:13.885748 containerd[1467]: time="2025-02-13T19:43:13.885678059Z" level=info msg="StopPodSandbox for \"acfb1b7eecc75b14a57b843858b94fa8648758738a7e7d91cd2473dc2127b7ba\"" Feb 13 19:43:13.885818 containerd[1467]: time="2025-02-13T19:43:13.885793842Z" level=info msg="TearDown network for sandbox \"acfb1b7eecc75b14a57b843858b94fa8648758738a7e7d91cd2473dc2127b7ba\" successfully" Feb 13 19:43:13.885818 containerd[1467]: time="2025-02-13T19:43:13.885815514Z" level=info msg="StopPodSandbox for \"acfb1b7eecc75b14a57b843858b94fa8648758738a7e7d91cd2473dc2127b7ba\" returns successfully" Feb 13 19:43:13.886355 containerd[1467]: time="2025-02-13T19:43:13.886261233Z" level=info msg="RemovePodSandbox for \"acfb1b7eecc75b14a57b843858b94fa8648758738a7e7d91cd2473dc2127b7ba\"" Feb 13 19:43:13.886355 containerd[1467]: time="2025-02-13T19:43:13.886289316Z" level=info msg="Forcibly stopping sandbox \"acfb1b7eecc75b14a57b843858b94fa8648758738a7e7d91cd2473dc2127b7ba\"" Feb 13 19:43:13.886456 containerd[1467]: time="2025-02-13T19:43:13.886367537Z" level=info msg="TearDown network for sandbox \"acfb1b7eecc75b14a57b843858b94fa8648758738a7e7d91cd2473dc2127b7ba\" successfully" Feb 13 19:43:13.890769 containerd[1467]: time="2025-02-13T19:43:13.890740089Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"acfb1b7eecc75b14a57b843858b94fa8648758738a7e7d91cd2473dc2127b7ba\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 19:43:13.890842 containerd[1467]: time="2025-02-13T19:43:13.890796227Z" level=info msg="RemovePodSandbox \"acfb1b7eecc75b14a57b843858b94fa8648758738a7e7d91cd2473dc2127b7ba\" returns successfully" Feb 13 19:43:13.891162 containerd[1467]: time="2025-02-13T19:43:13.891091806Z" level=info msg="StopPodSandbox for \"113db1c2c05056763e1dcab40407dfe511d3b110c8de33b9c0bb45d929623fde\"" Feb 13 19:43:13.891210 containerd[1467]: time="2025-02-13T19:43:13.891190196Z" level=info msg="TearDown network for sandbox \"113db1c2c05056763e1dcab40407dfe511d3b110c8de33b9c0bb45d929623fde\" successfully" Feb 13 19:43:13.891210 containerd[1467]: time="2025-02-13T19:43:13.891198893Z" level=info msg="StopPodSandbox for \"113db1c2c05056763e1dcab40407dfe511d3b110c8de33b9c0bb45d929623fde\" returns successfully" Feb 13 19:43:13.891669 containerd[1467]: time="2025-02-13T19:43:13.891556271Z" level=info msg="RemovePodSandbox for \"113db1c2c05056763e1dcab40407dfe511d3b110c8de33b9c0bb45d929623fde\"" Feb 13 19:43:13.891669 containerd[1467]: time="2025-02-13T19:43:13.891594655Z" level=info msg="Forcibly stopping sandbox \"113db1c2c05056763e1dcab40407dfe511d3b110c8de33b9c0bb45d929623fde\"" Feb 13 19:43:13.891750 containerd[1467]: time="2025-02-13T19:43:13.891713143Z" level=info msg="TearDown network for sandbox \"113db1c2c05056763e1dcab40407dfe511d3b110c8de33b9c0bb45d929623fde\" successfully" Feb 13 19:43:13.896115 containerd[1467]: time="2025-02-13T19:43:13.896064254Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"113db1c2c05056763e1dcab40407dfe511d3b110c8de33b9c0bb45d929623fde\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 13 19:43:13.896115 containerd[1467]: time="2025-02-13T19:43:13.896110352Z" level=info msg="RemovePodSandbox \"113db1c2c05056763e1dcab40407dfe511d3b110c8de33b9c0bb45d929623fde\" returns successfully" Feb 13 19:43:13.896411 containerd[1467]: time="2025-02-13T19:43:13.896377457Z" level=info msg="StopPodSandbox for \"65c35f2d8b43c13374f37f5c29a3fd27e0a31b404307e17dc87790accc6c0a7f\"" Feb 13 19:43:13.896544 containerd[1467]: time="2025-02-13T19:43:13.896521504Z" level=info msg="TearDown network for sandbox \"65c35f2d8b43c13374f37f5c29a3fd27e0a31b404307e17dc87790accc6c0a7f\" successfully" Feb 13 19:43:13.896544 containerd[1467]: time="2025-02-13T19:43:13.896539720Z" level=info msg="StopPodSandbox for \"65c35f2d8b43c13374f37f5c29a3fd27e0a31b404307e17dc87790accc6c0a7f\" returns successfully" Feb 13 19:43:13.896805 containerd[1467]: time="2025-02-13T19:43:13.896784742Z" level=info msg="RemovePodSandbox for \"65c35f2d8b43c13374f37f5c29a3fd27e0a31b404307e17dc87790accc6c0a7f\"" Feb 13 19:43:13.896973 containerd[1467]: time="2025-02-13T19:43:13.896807526Z" level=info msg="Forcibly stopping sandbox \"65c35f2d8b43c13374f37f5c29a3fd27e0a31b404307e17dc87790accc6c0a7f\"" Feb 13 19:43:13.896973 containerd[1467]: time="2025-02-13T19:43:13.896877360Z" level=info msg="TearDown network for sandbox \"65c35f2d8b43c13374f37f5c29a3fd27e0a31b404307e17dc87790accc6c0a7f\" successfully" Feb 13 19:43:13.901135 containerd[1467]: time="2025-02-13T19:43:13.901097418Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"65c35f2d8b43c13374f37f5c29a3fd27e0a31b404307e17dc87790accc6c0a7f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 19:43:13.901205 containerd[1467]: time="2025-02-13T19:43:13.901145561Z" level=info msg="RemovePodSandbox \"65c35f2d8b43c13374f37f5c29a3fd27e0a31b404307e17dc87790accc6c0a7f\" returns successfully" Feb 13 19:43:13.901710 containerd[1467]: time="2025-02-13T19:43:13.901571522Z" level=info msg="StopPodSandbox for \"babd1eaaf9746dd4c1ef3ded5a20d28289a6af4a88fce8e253910ffc638e1b8d\"" Feb 13 19:43:13.901710 containerd[1467]: time="2025-02-13T19:43:13.901661164Z" level=info msg="TearDown network for sandbox \"babd1eaaf9746dd4c1ef3ded5a20d28289a6af4a88fce8e253910ffc638e1b8d\" successfully" Feb 13 19:43:13.901710 containerd[1467]: time="2025-02-13T19:43:13.901670362Z" level=info msg="StopPodSandbox for \"babd1eaaf9746dd4c1ef3ded5a20d28289a6af4a88fce8e253910ffc638e1b8d\" returns successfully" Feb 13 19:43:13.902038 containerd[1467]: time="2025-02-13T19:43:13.902014985Z" level=info msg="RemovePodSandbox for \"babd1eaaf9746dd4c1ef3ded5a20d28289a6af4a88fce8e253910ffc638e1b8d\"" Feb 13 19:43:13.902081 containerd[1467]: time="2025-02-13T19:43:13.902040645Z" level=info msg="Forcibly stopping sandbox \"babd1eaaf9746dd4c1ef3ded5a20d28289a6af4a88fce8e253910ffc638e1b8d\"" Feb 13 19:43:13.902167 containerd[1467]: time="2025-02-13T19:43:13.902119267Z" level=info msg="TearDown network for sandbox \"babd1eaaf9746dd4c1ef3ded5a20d28289a6af4a88fce8e253910ffc638e1b8d\" successfully" Feb 13 19:43:13.906558 containerd[1467]: time="2025-02-13T19:43:13.906530613Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"babd1eaaf9746dd4c1ef3ded5a20d28289a6af4a88fce8e253910ffc638e1b8d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 13 19:43:13.906632 containerd[1467]: time="2025-02-13T19:43:13.906580590Z" level=info msg="RemovePodSandbox \"babd1eaaf9746dd4c1ef3ded5a20d28289a6af4a88fce8e253910ffc638e1b8d\" returns successfully" Feb 13 19:43:13.906843 containerd[1467]: time="2025-02-13T19:43:13.906821193Z" level=info msg="StopPodSandbox for \"c0bcbde6606c83357aa8f950e49ccff144c85b107d7599959f012dd4e3333606\"" Feb 13 19:43:13.906931 containerd[1467]: time="2025-02-13T19:43:13.906905185Z" level=info msg="TearDown network for sandbox \"c0bcbde6606c83357aa8f950e49ccff144c85b107d7599959f012dd4e3333606\" successfully" Feb 13 19:43:13.906931 containerd[1467]: time="2025-02-13T19:43:13.906925614Z" level=info msg="StopPodSandbox for \"c0bcbde6606c83357aa8f950e49ccff144c85b107d7599959f012dd4e3333606\" returns successfully" Feb 13 19:43:13.907308 containerd[1467]: time="2025-02-13T19:43:13.907250560Z" level=info msg="RemovePodSandbox for \"c0bcbde6606c83357aa8f950e49ccff144c85b107d7599959f012dd4e3333606\"" Feb 13 19:43:13.907308 containerd[1467]: time="2025-02-13T19:43:13.907290086Z" level=info msg="Forcibly stopping sandbox \"c0bcbde6606c83357aa8f950e49ccff144c85b107d7599959f012dd4e3333606\"" Feb 13 19:43:13.907432 containerd[1467]: time="2025-02-13T19:43:13.907381542Z" level=info msg="TearDown network for sandbox \"c0bcbde6606c83357aa8f950e49ccff144c85b107d7599959f012dd4e3333606\" successfully" Feb 13 19:43:13.912081 containerd[1467]: time="2025-02-13T19:43:13.912041658Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c0bcbde6606c83357aa8f950e49ccff144c85b107d7599959f012dd4e3333606\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 19:43:13.912138 containerd[1467]: time="2025-02-13T19:43:13.912100581Z" level=info msg="RemovePodSandbox \"c0bcbde6606c83357aa8f950e49ccff144c85b107d7599959f012dd4e3333606\" returns successfully" Feb 13 19:43:13.912430 containerd[1467]: time="2025-02-13T19:43:13.912391752Z" level=info msg="StopPodSandbox for \"5722d221ab8268ff34fa92ffb089c74929c9052cb6a9b2a3f71f33fca2bec3e9\"" Feb 13 19:43:13.912531 containerd[1467]: time="2025-02-13T19:43:13.912491464Z" level=info msg="TearDown network for sandbox \"5722d221ab8268ff34fa92ffb089c74929c9052cb6a9b2a3f71f33fca2bec3e9\" successfully" Feb 13 19:43:13.912558 containerd[1467]: time="2025-02-13T19:43:13.912531682Z" level=info msg="StopPodSandbox for \"5722d221ab8268ff34fa92ffb089c74929c9052cb6a9b2a3f71f33fca2bec3e9\" returns successfully" Feb 13 19:43:13.912807 containerd[1467]: time="2025-02-13T19:43:13.912782725Z" level=info msg="RemovePodSandbox for \"5722d221ab8268ff34fa92ffb089c74929c9052cb6a9b2a3f71f33fca2bec3e9\"" Feb 13 19:43:13.912849 containerd[1467]: time="2025-02-13T19:43:13.912814286Z" level=info msg="Forcibly stopping sandbox \"5722d221ab8268ff34fa92ffb089c74929c9052cb6a9b2a3f71f33fca2bec3e9\"" Feb 13 19:43:13.912951 containerd[1467]: time="2025-02-13T19:43:13.912899520Z" level=info msg="TearDown network for sandbox \"5722d221ab8268ff34fa92ffb089c74929c9052cb6a9b2a3f71f33fca2bec3e9\" successfully" Feb 13 19:43:13.916891 containerd[1467]: time="2025-02-13T19:43:13.916844979Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"5722d221ab8268ff34fa92ffb089c74929c9052cb6a9b2a3f71f33fca2bec3e9\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 13 19:43:13.916891 containerd[1467]: time="2025-02-13T19:43:13.916884736Z" level=info msg="RemovePodSandbox \"5722d221ab8268ff34fa92ffb089c74929c9052cb6a9b2a3f71f33fca2bec3e9\" returns successfully" Feb 13 19:43:13.917214 containerd[1467]: time="2025-02-13T19:43:13.917182309Z" level=info msg="StopPodSandbox for \"b2c10597fcbe95ccbb9f31aac916f6313c7fdfbebe339a656d49f8fd5908264e\"" Feb 13 19:43:13.917272 containerd[1467]: time="2025-02-13T19:43:13.917263837Z" level=info msg="TearDown network for sandbox \"b2c10597fcbe95ccbb9f31aac916f6313c7fdfbebe339a656d49f8fd5908264e\" successfully" Feb 13 19:43:13.917304 containerd[1467]: time="2025-02-13T19:43:13.917274216Z" level=info msg="StopPodSandbox for \"b2c10597fcbe95ccbb9f31aac916f6313c7fdfbebe339a656d49f8fd5908264e\" returns successfully" Feb 13 19:43:13.917528 containerd[1467]: time="2025-02-13T19:43:13.917487968Z" level=info msg="RemovePodSandbox for \"b2c10597fcbe95ccbb9f31aac916f6313c7fdfbebe339a656d49f8fd5908264e\"" Feb 13 19:43:13.917568 containerd[1467]: time="2025-02-13T19:43:13.917534337Z" level=info msg="Forcibly stopping sandbox \"b2c10597fcbe95ccbb9f31aac916f6313c7fdfbebe339a656d49f8fd5908264e\"" Feb 13 19:43:13.917657 containerd[1467]: time="2025-02-13T19:43:13.917618299Z" level=info msg="TearDown network for sandbox \"b2c10597fcbe95ccbb9f31aac916f6313c7fdfbebe339a656d49f8fd5908264e\" successfully" Feb 13 19:43:13.921247 containerd[1467]: time="2025-02-13T19:43:13.921217031Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b2c10597fcbe95ccbb9f31aac916f6313c7fdfbebe339a656d49f8fd5908264e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 19:43:13.921316 containerd[1467]: time="2025-02-13T19:43:13.921261216Z" level=info msg="RemovePodSandbox \"b2c10597fcbe95ccbb9f31aac916f6313c7fdfbebe339a656d49f8fd5908264e\" returns successfully" Feb 13 19:43:13.921559 containerd[1467]: time="2025-02-13T19:43:13.921525865Z" level=info msg="StopPodSandbox for \"e421dd80bdff41e3771b21b9aa765d004d27633c2cb4690b7b118748dd6d6fe3\"" Feb 13 19:43:13.921666 containerd[1467]: time="2025-02-13T19:43:13.921629415Z" level=info msg="TearDown network for sandbox \"e421dd80bdff41e3771b21b9aa765d004d27633c2cb4690b7b118748dd6d6fe3\" successfully" Feb 13 19:43:13.921666 containerd[1467]: time="2025-02-13T19:43:13.921647690Z" level=info msg="StopPodSandbox for \"e421dd80bdff41e3771b21b9aa765d004d27633c2cb4690b7b118748dd6d6fe3\" returns successfully" Feb 13 19:43:13.921900 containerd[1467]: time="2025-02-13T19:43:13.921875459Z" level=info msg="RemovePodSandbox for \"e421dd80bdff41e3771b21b9aa765d004d27633c2cb4690b7b118748dd6d6fe3\"" Feb 13 19:43:13.921900 containerd[1467]: time="2025-02-13T19:43:13.921898814Z" level=info msg="Forcibly stopping sandbox \"e421dd80bdff41e3771b21b9aa765d004d27633c2cb4690b7b118748dd6d6fe3\"" Feb 13 19:43:13.922004 containerd[1467]: time="2025-02-13T19:43:13.921973407Z" level=info msg="TearDown network for sandbox \"e421dd80bdff41e3771b21b9aa765d004d27633c2cb4690b7b118748dd6d6fe3\" successfully" Feb 13 19:43:13.925424 containerd[1467]: time="2025-02-13T19:43:13.925393495Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e421dd80bdff41e3771b21b9aa765d004d27633c2cb4690b7b118748dd6d6fe3\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 13 19:43:13.925474 containerd[1467]: time="2025-02-13T19:43:13.925424916Z" level=info msg="RemovePodSandbox \"e421dd80bdff41e3771b21b9aa765d004d27633c2cb4690b7b118748dd6d6fe3\" returns successfully" Feb 13 19:43:13.925707 containerd[1467]: time="2025-02-13T19:43:13.925681310Z" level=info msg="StopPodSandbox for \"b753c98113d0daab22b5b0d48c1bf46fc2bd90d7b432f61d98419e4e686a742e\"" Feb 13 19:43:13.925784 containerd[1467]: time="2025-02-13T19:43:13.925767585Z" level=info msg="TearDown network for sandbox \"b753c98113d0daab22b5b0d48c1bf46fc2bd90d7b432f61d98419e4e686a742e\" successfully" Feb 13 19:43:13.925815 containerd[1467]: time="2025-02-13T19:43:13.925783727Z" level=info msg="StopPodSandbox for \"b753c98113d0daab22b5b0d48c1bf46fc2bd90d7b432f61d98419e4e686a742e\" returns successfully" Feb 13 19:43:13.926126 containerd[1467]: time="2025-02-13T19:43:13.926097892Z" level=info msg="RemovePodSandbox for \"b753c98113d0daab22b5b0d48c1bf46fc2bd90d7b432f61d98419e4e686a742e\"" Feb 13 19:43:13.926126 containerd[1467]: time="2025-02-13T19:43:13.926118862Z" level=info msg="Forcibly stopping sandbox \"b753c98113d0daab22b5b0d48c1bf46fc2bd90d7b432f61d98419e4e686a742e\"" Feb 13 19:43:13.926222 containerd[1467]: time="2025-02-13T19:43:13.926192002Z" level=info msg="TearDown network for sandbox \"b753c98113d0daab22b5b0d48c1bf46fc2bd90d7b432f61d98419e4e686a742e\" successfully" Feb 13 19:43:13.929711 containerd[1467]: time="2025-02-13T19:43:13.929678468Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b753c98113d0daab22b5b0d48c1bf46fc2bd90d7b432f61d98419e4e686a742e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 19:43:13.929763 containerd[1467]: time="2025-02-13T19:43:13.929725398Z" level=info msg="RemovePodSandbox \"b753c98113d0daab22b5b0d48c1bf46fc2bd90d7b432f61d98419e4e686a742e\" returns successfully" Feb 13 19:43:13.930023 containerd[1467]: time="2025-02-13T19:43:13.929995168Z" level=info msg="StopPodSandbox for \"7aca42da20fad12e567112a09368a612f4d46d569345c5b22a86eb8568dfba50\"" Feb 13 19:43:13.930110 containerd[1467]: time="2025-02-13T19:43:13.930092917Z" level=info msg="TearDown network for sandbox \"7aca42da20fad12e567112a09368a612f4d46d569345c5b22a86eb8568dfba50\" successfully" Feb 13 19:43:13.930157 containerd[1467]: time="2025-02-13T19:43:13.930108367Z" level=info msg="StopPodSandbox for \"7aca42da20fad12e567112a09368a612f4d46d569345c5b22a86eb8568dfba50\" returns successfully" Feb 13 19:43:13.930385 containerd[1467]: time="2025-02-13T19:43:13.930362235Z" level=info msg="RemovePodSandbox for \"7aca42da20fad12e567112a09368a612f4d46d569345c5b22a86eb8568dfba50\"" Feb 13 19:43:13.930422 containerd[1467]: time="2025-02-13T19:43:13.930389658Z" level=info msg="Forcibly stopping sandbox \"7aca42da20fad12e567112a09368a612f4d46d569345c5b22a86eb8568dfba50\"" Feb 13 19:43:13.930517 containerd[1467]: time="2025-02-13T19:43:13.930466606Z" level=info msg="TearDown network for sandbox \"7aca42da20fad12e567112a09368a612f4d46d569345c5b22a86eb8568dfba50\" successfully" Feb 13 19:43:13.933949 containerd[1467]: time="2025-02-13T19:43:13.933908165Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"7aca42da20fad12e567112a09368a612f4d46d569345c5b22a86eb8568dfba50\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 13 19:43:13.934003 containerd[1467]: time="2025-02-13T19:43:13.933950998Z" level=info msg="RemovePodSandbox \"7aca42da20fad12e567112a09368a612f4d46d569345c5b22a86eb8568dfba50\" returns successfully" Feb 13 19:43:13.934268 containerd[1467]: time="2025-02-13T19:43:13.934243972Z" level=info msg="StopPodSandbox for \"ddcbed9e32dc986efee624974339fd691561b1fe492259306265828c5b74843e\"" Feb 13 19:43:13.934354 containerd[1467]: time="2025-02-13T19:43:13.934335869Z" level=info msg="TearDown network for sandbox \"ddcbed9e32dc986efee624974339fd691561b1fe492259306265828c5b74843e\" successfully" Feb 13 19:43:13.934394 containerd[1467]: time="2025-02-13T19:43:13.934352340Z" level=info msg="StopPodSandbox for \"ddcbed9e32dc986efee624974339fd691561b1fe492259306265828c5b74843e\" returns successfully" Feb 13 19:43:13.934620 containerd[1467]: time="2025-02-13T19:43:13.934587162Z" level=info msg="RemovePodSandbox for \"ddcbed9e32dc986efee624974339fd691561b1fe492259306265828c5b74843e\"" Feb 13 19:43:13.934661 containerd[1467]: time="2025-02-13T19:43:13.934621809Z" level=info msg="Forcibly stopping sandbox \"ddcbed9e32dc986efee624974339fd691561b1fe492259306265828c5b74843e\"" Feb 13 19:43:13.934748 containerd[1467]: time="2025-02-13T19:43:13.934709820Z" level=info msg="TearDown network for sandbox \"ddcbed9e32dc986efee624974339fd691561b1fe492259306265828c5b74843e\" successfully" Feb 13 19:43:13.939121 containerd[1467]: time="2025-02-13T19:43:13.939084846Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ddcbed9e32dc986efee624974339fd691561b1fe492259306265828c5b74843e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 19:43:13.939176 containerd[1467]: time="2025-02-13T19:43:13.939143409Z" level=info msg="RemovePodSandbox \"ddcbed9e32dc986efee624974339fd691561b1fe492259306265828c5b74843e\" returns successfully" Feb 13 19:43:13.939434 containerd[1467]: time="2025-02-13T19:43:13.939393972Z" level=info msg="StopPodSandbox for \"03abb776a14c6f8522f6625b44caf63f5bebc7398e5051702e3d9968abb0362e\"" Feb 13 19:43:13.939513 containerd[1467]: time="2025-02-13T19:43:13.939488553Z" level=info msg="TearDown network for sandbox \"03abb776a14c6f8522f6625b44caf63f5bebc7398e5051702e3d9968abb0362e\" successfully" Feb 13 19:43:13.939550 containerd[1467]: time="2025-02-13T19:43:13.939530935Z" level=info msg="StopPodSandbox for \"03abb776a14c6f8522f6625b44caf63f5bebc7398e5051702e3d9968abb0362e\" returns successfully" Feb 13 19:43:13.939786 containerd[1467]: time="2025-02-13T19:43:13.939762481Z" level=info msg="RemovePodSandbox for \"03abb776a14c6f8522f6625b44caf63f5bebc7398e5051702e3d9968abb0362e\"" Feb 13 19:43:13.939838 containerd[1467]: time="2025-02-13T19:43:13.939786598Z" level=info msg="Forcibly stopping sandbox \"03abb776a14c6f8522f6625b44caf63f5bebc7398e5051702e3d9968abb0362e\"" Feb 13 19:43:13.939897 containerd[1467]: time="2025-02-13T19:43:13.939862834Z" level=info msg="TearDown network for sandbox \"03abb776a14c6f8522f6625b44caf63f5bebc7398e5051702e3d9968abb0362e\" successfully" Feb 13 19:43:13.943439 containerd[1467]: time="2025-02-13T19:43:13.943411770Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"03abb776a14c6f8522f6625b44caf63f5bebc7398e5051702e3d9968abb0362e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 13 19:43:13.943492 containerd[1467]: time="2025-02-13T19:43:13.943455524Z" level=info msg="RemovePodSandbox \"03abb776a14c6f8522f6625b44caf63f5bebc7398e5051702e3d9968abb0362e\" returns successfully" Feb 13 19:43:13.943808 containerd[1467]: time="2025-02-13T19:43:13.943771052Z" level=info msg="StopPodSandbox for \"0692bb77aa8ee3afc089f45a0efb8192616eb703ec7b1a56f19564b86d119ca2\"" Feb 13 19:43:13.943908 containerd[1467]: time="2025-02-13T19:43:13.943875563Z" level=info msg="TearDown network for sandbox \"0692bb77aa8ee3afc089f45a0efb8192616eb703ec7b1a56f19564b86d119ca2\" successfully" Feb 13 19:43:13.943908 containerd[1467]: time="2025-02-13T19:43:13.943895020Z" level=info msg="StopPodSandbox for \"0692bb77aa8ee3afc089f45a0efb8192616eb703ec7b1a56f19564b86d119ca2\" returns successfully" Feb 13 19:43:13.944158 containerd[1467]: time="2025-02-13T19:43:13.944130514Z" level=info msg="RemovePodSandbox for \"0692bb77aa8ee3afc089f45a0efb8192616eb703ec7b1a56f19564b86d119ca2\"" Feb 13 19:43:13.944202 containerd[1467]: time="2025-02-13T19:43:13.944161384Z" level=info msg="Forcibly stopping sandbox \"0692bb77aa8ee3afc089f45a0efb8192616eb703ec7b1a56f19564b86d119ca2\"" Feb 13 19:43:13.944271 containerd[1467]: time="2025-02-13T19:43:13.944238883Z" level=info msg="TearDown network for sandbox \"0692bb77aa8ee3afc089f45a0efb8192616eb703ec7b1a56f19564b86d119ca2\" successfully" Feb 13 19:43:13.947942 containerd[1467]: time="2025-02-13T19:43:13.947827935Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"0692bb77aa8ee3afc089f45a0efb8192616eb703ec7b1a56f19564b86d119ca2\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 19:43:13.947942 containerd[1467]: time="2025-02-13T19:43:13.947872362Z" level=info msg="RemovePodSandbox \"0692bb77aa8ee3afc089f45a0efb8192616eb703ec7b1a56f19564b86d119ca2\" returns successfully" Feb 13 19:43:13.948220 containerd[1467]: time="2025-02-13T19:43:13.948191606Z" level=info msg="StopPodSandbox for \"93f4f84ab8cc75ce5e8c6bcd14e44730cf09bddee838a26c34e67bca1d10c69a\"" Feb 13 19:43:13.948316 containerd[1467]: time="2025-02-13T19:43:13.948283394Z" level=info msg="TearDown network for sandbox \"93f4f84ab8cc75ce5e8c6bcd14e44730cf09bddee838a26c34e67bca1d10c69a\" successfully" Feb 13 19:43:13.948316 containerd[1467]: time="2025-02-13T19:43:13.948297590Z" level=info msg="StopPodSandbox for \"93f4f84ab8cc75ce5e8c6bcd14e44730cf09bddee838a26c34e67bca1d10c69a\" returns successfully" Feb 13 19:43:13.948893 containerd[1467]: time="2025-02-13T19:43:13.948865775Z" level=info msg="RemovePodSandbox for \"93f4f84ab8cc75ce5e8c6bcd14e44730cf09bddee838a26c34e67bca1d10c69a\"" Feb 13 19:43:13.948893 containerd[1467]: time="2025-02-13T19:43:13.948894000Z" level=info msg="Forcibly stopping sandbox \"93f4f84ab8cc75ce5e8c6bcd14e44730cf09bddee838a26c34e67bca1d10c69a\"" Feb 13 19:43:13.949072 containerd[1467]: time="2025-02-13T19:43:13.949029710Z" level=info msg="TearDown network for sandbox \"93f4f84ab8cc75ce5e8c6bcd14e44730cf09bddee838a26c34e67bca1d10c69a\" successfully" Feb 13 19:43:13.952943 containerd[1467]: time="2025-02-13T19:43:13.952909975Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"93f4f84ab8cc75ce5e8c6bcd14e44730cf09bddee838a26c34e67bca1d10c69a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 13 19:43:13.953007 containerd[1467]: time="2025-02-13T19:43:13.952965281Z" level=info msg="RemovePodSandbox \"93f4f84ab8cc75ce5e8c6bcd14e44730cf09bddee838a26c34e67bca1d10c69a\" returns successfully" Feb 13 19:43:13.953300 containerd[1467]: time="2025-02-13T19:43:13.953274005Z" level=info msg="StopPodSandbox for \"5717d4a7bf11003fbd11120e4e3de186575be86f95283ba329ec904ae917ec4c\"" Feb 13 19:43:13.953400 containerd[1467]: time="2025-02-13T19:43:13.953376312Z" level=info msg="TearDown network for sandbox \"5717d4a7bf11003fbd11120e4e3de186575be86f95283ba329ec904ae917ec4c\" successfully" Feb 13 19:43:13.953400 containerd[1467]: time="2025-02-13T19:43:13.953392914Z" level=info msg="StopPodSandbox for \"5717d4a7bf11003fbd11120e4e3de186575be86f95283ba329ec904ae917ec4c\" returns successfully" Feb 13 19:43:13.953727 containerd[1467]: time="2025-02-13T19:43:13.953697141Z" level=info msg="RemovePodSandbox for \"5717d4a7bf11003fbd11120e4e3de186575be86f95283ba329ec904ae917ec4c\"" Feb 13 19:43:13.953727 containerd[1467]: time="2025-02-13T19:43:13.953725305Z" level=info msg="Forcibly stopping sandbox \"5717d4a7bf11003fbd11120e4e3de186575be86f95283ba329ec904ae917ec4c\"" Feb 13 19:43:13.953824 containerd[1467]: time="2025-02-13T19:43:13.953800530Z" level=info msg="TearDown network for sandbox \"5717d4a7bf11003fbd11120e4e3de186575be86f95283ba329ec904ae917ec4c\" successfully" Feb 13 19:43:13.958595 containerd[1467]: time="2025-02-13T19:43:13.958564876Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"5717d4a7bf11003fbd11120e4e3de186575be86f95283ba329ec904ae917ec4c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 19:43:13.958663 containerd[1467]: time="2025-02-13T19:43:13.958604162Z" level=info msg="RemovePodSandbox \"5717d4a7bf11003fbd11120e4e3de186575be86f95283ba329ec904ae917ec4c\" returns successfully"
Feb 13 19:43:13.958882 containerd[1467]: time="2025-02-13T19:43:13.958857289Z" level=info msg="StopPodSandbox for \"d35f28e953f02e761094fccf5bcecb7b39f4a3e96d088c35bcddd161efc3cea2\""
Feb 13 19:43:13.958968 containerd[1467]: time="2025-02-13T19:43:13.958949227Z" level=info msg="TearDown network for sandbox \"d35f28e953f02e761094fccf5bcecb7b39f4a3e96d088c35bcddd161efc3cea2\" successfully"
Feb 13 19:43:13.958968 containerd[1467]: time="2025-02-13T19:43:13.958962281Z" level=info msg="StopPodSandbox for \"d35f28e953f02e761094fccf5bcecb7b39f4a3e96d088c35bcddd161efc3cea2\" returns successfully"
Feb 13 19:43:13.959149 containerd[1467]: time="2025-02-13T19:43:13.959124715Z" level=info msg="RemovePodSandbox for \"d35f28e953f02e761094fccf5bcecb7b39f4a3e96d088c35bcddd161efc3cea2\""
Feb 13 19:43:13.959149 containerd[1467]: time="2025-02-13T19:43:13.959144373Z" level=info msg="Forcibly stopping sandbox \"d35f28e953f02e761094fccf5bcecb7b39f4a3e96d088c35bcddd161efc3cea2\""
Feb 13 19:43:13.959239 containerd[1467]: time="2025-02-13T19:43:13.959201914Z" level=info msg="TearDown network for sandbox \"d35f28e953f02e761094fccf5bcecb7b39f4a3e96d088c35bcddd161efc3cea2\" successfully"
Feb 13 19:43:13.962664 containerd[1467]: time="2025-02-13T19:43:13.962631379Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d35f28e953f02e761094fccf5bcecb7b39f4a3e96d088c35bcddd161efc3cea2\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Feb 13 19:43:13.962724 containerd[1467]: time="2025-02-13T19:43:13.962666076Z" level=info msg="RemovePodSandbox \"d35f28e953f02e761094fccf5bcecb7b39f4a3e96d088c35bcddd161efc3cea2\" returns successfully"
Feb 13 19:43:13.962998 containerd[1467]: time="2025-02-13T19:43:13.962972465Z" level=info msg="StopPodSandbox for \"b28f8d5d11c586fec9cf1d8aef8bfc92c012e00614d9cc75be530d04d5540753\""
Feb 13 19:43:13.963102 containerd[1467]: time="2025-02-13T19:43:13.963077278Z" level=info msg="TearDown network for sandbox \"b28f8d5d11c586fec9cf1d8aef8bfc92c012e00614d9cc75be530d04d5540753\" successfully"
Feb 13 19:43:13.963102 containerd[1467]: time="2025-02-13T19:43:13.963093088Z" level=info msg="StopPodSandbox for \"b28f8d5d11c586fec9cf1d8aef8bfc92c012e00614d9cc75be530d04d5540753\" returns successfully"
Feb 13 19:43:13.963446 containerd[1467]: time="2025-02-13T19:43:13.963420869Z" level=info msg="RemovePodSandbox for \"b28f8d5d11c586fec9cf1d8aef8bfc92c012e00614d9cc75be530d04d5540753\""
Feb 13 19:43:13.963520 containerd[1467]: time="2025-02-13T19:43:13.963445416Z" level=info msg="Forcibly stopping sandbox \"b28f8d5d11c586fec9cf1d8aef8bfc92c012e00614d9cc75be530d04d5540753\""
Feb 13 19:43:13.963596 containerd[1467]: time="2025-02-13T19:43:13.963552363Z" level=info msg="TearDown network for sandbox \"b28f8d5d11c586fec9cf1d8aef8bfc92c012e00614d9cc75be530d04d5540753\" successfully"
Feb 13 19:43:13.967177 containerd[1467]: time="2025-02-13T19:43:13.967137879Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b28f8d5d11c586fec9cf1d8aef8bfc92c012e00614d9cc75be530d04d5540753\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Feb 13 19:43:13.967229 containerd[1467]: time="2025-02-13T19:43:13.967179879Z" level=info msg="RemovePodSandbox \"b28f8d5d11c586fec9cf1d8aef8bfc92c012e00614d9cc75be530d04d5540753\" returns successfully"
Feb 13 19:43:13.967434 containerd[1467]: time="2025-02-13T19:43:13.967407007Z" level=info msg="StopPodSandbox for \"ea7ad1b23abc6667f4563f79978cf671a336c2e70631836fac2ee65439eeb9b4\""
Feb 13 19:43:13.967576 containerd[1467]: time="2025-02-13T19:43:13.967525966Z" level=info msg="TearDown network for sandbox \"ea7ad1b23abc6667f4563f79978cf671a336c2e70631836fac2ee65439eeb9b4\" successfully"
Feb 13 19:43:13.967576 containerd[1467]: time="2025-02-13T19:43:13.967569320Z" level=info msg="StopPodSandbox for \"ea7ad1b23abc6667f4563f79978cf671a336c2e70631836fac2ee65439eeb9b4\" returns successfully"
Feb 13 19:43:13.967832 containerd[1467]: time="2025-02-13T19:43:13.967802499Z" level=info msg="RemovePodSandbox for \"ea7ad1b23abc6667f4563f79978cf671a336c2e70631836fac2ee65439eeb9b4\""
Feb 13 19:43:13.967870 containerd[1467]: time="2025-02-13T19:43:13.967833869Z" level=info msg="Forcibly stopping sandbox \"ea7ad1b23abc6667f4563f79978cf671a336c2e70631836fac2ee65439eeb9b4\""
Feb 13 19:43:13.967957 containerd[1467]: time="2025-02-13T19:43:13.967908382Z" level=info msg="TearDown network for sandbox \"ea7ad1b23abc6667f4563f79978cf671a336c2e70631836fac2ee65439eeb9b4\" successfully"
Feb 13 19:43:13.971355 containerd[1467]: time="2025-02-13T19:43:13.971328700Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ea7ad1b23abc6667f4563f79978cf671a336c2e70631836fac2ee65439eeb9b4\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Feb 13 19:43:13.971439 containerd[1467]: time="2025-02-13T19:43:13.971371032Z" level=info msg="RemovePodSandbox \"ea7ad1b23abc6667f4563f79978cf671a336c2e70631836fac2ee65439eeb9b4\" returns successfully"
Feb 13 19:43:13.971641 containerd[1467]: time="2025-02-13T19:43:13.971599282Z" level=info msg="StopPodSandbox for \"0788c7c54d6ffc8b054e92393050c76b7df05c1b22dd254befed893f0acbfdfb\""
Feb 13 19:43:13.971717 containerd[1467]: time="2025-02-13T19:43:13.971699365Z" level=info msg="TearDown network for sandbox \"0788c7c54d6ffc8b054e92393050c76b7df05c1b22dd254befed893f0acbfdfb\" successfully"
Feb 13 19:43:13.971746 containerd[1467]: time="2025-02-13T19:43:13.971717029Z" level=info msg="StopPodSandbox for \"0788c7c54d6ffc8b054e92393050c76b7df05c1b22dd254befed893f0acbfdfb\" returns successfully"
Feb 13 19:43:13.972009 containerd[1467]: time="2025-02-13T19:43:13.971984243Z" level=info msg="RemovePodSandbox for \"0788c7c54d6ffc8b054e92393050c76b7df05c1b22dd254befed893f0acbfdfb\""
Feb 13 19:43:13.972009 containerd[1467]: time="2025-02-13T19:43:13.972005915Z" level=info msg="Forcibly stopping sandbox \"0788c7c54d6ffc8b054e92393050c76b7df05c1b22dd254befed893f0acbfdfb\""
Feb 13 19:43:13.972097 containerd[1467]: time="2025-02-13T19:43:13.972077132Z" level=info msg="TearDown network for sandbox \"0788c7c54d6ffc8b054e92393050c76b7df05c1b22dd254befed893f0acbfdfb\" successfully"
Feb 13 19:43:13.975453 containerd[1467]: time="2025-02-13T19:43:13.975421514Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"0788c7c54d6ffc8b054e92393050c76b7df05c1b22dd254befed893f0acbfdfb\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Feb 13 19:43:13.975489 containerd[1467]: time="2025-02-13T19:43:13.975455509Z" level=info msg="RemovePodSandbox \"0788c7c54d6ffc8b054e92393050c76b7df05c1b22dd254befed893f0acbfdfb\" returns successfully"
Feb 13 19:43:13.975728 containerd[1467]: time="2025-02-13T19:43:13.975693848Z" level=info msg="StopPodSandbox for \"3afeeeebc3a2e8bb197bd69bc00bf8abb4465ec0df9f830d66d71042bec6e15f\""
Feb 13 19:43:13.975813 containerd[1467]: time="2025-02-13T19:43:13.975792829Z" level=info msg="TearDown network for sandbox \"3afeeeebc3a2e8bb197bd69bc00bf8abb4465ec0df9f830d66d71042bec6e15f\" successfully"
Feb 13 19:43:13.975813 containerd[1467]: time="2025-02-13T19:43:13.975807668Z" level=info msg="StopPodSandbox for \"3afeeeebc3a2e8bb197bd69bc00bf8abb4465ec0df9f830d66d71042bec6e15f\" returns successfully"
Feb 13 19:43:13.976113 containerd[1467]: time="2025-02-13T19:43:13.976090903Z" level=info msg="RemovePodSandbox for \"3afeeeebc3a2e8bb197bd69bc00bf8abb4465ec0df9f830d66d71042bec6e15f\""
Feb 13 19:43:13.976113 containerd[1467]: time="2025-02-13T19:43:13.976111683Z" level=info msg="Forcibly stopping sandbox \"3afeeeebc3a2e8bb197bd69bc00bf8abb4465ec0df9f830d66d71042bec6e15f\""
Feb 13 19:43:13.976201 containerd[1467]: time="2025-02-13T19:43:13.976179865Z" level=info msg="TearDown network for sandbox \"3afeeeebc3a2e8bb197bd69bc00bf8abb4465ec0df9f830d66d71042bec6e15f\" successfully"
Feb 13 19:43:13.979613 containerd[1467]: time="2025-02-13T19:43:13.979588259Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"3afeeeebc3a2e8bb197bd69bc00bf8abb4465ec0df9f830d66d71042bec6e15f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Feb 13 19:43:13.979675 containerd[1467]: time="2025-02-13T19:43:13.979623197Z" level=info msg="RemovePodSandbox \"3afeeeebc3a2e8bb197bd69bc00bf8abb4465ec0df9f830d66d71042bec6e15f\" returns successfully"
Feb 13 19:43:13.980088 containerd[1467]: time="2025-02-13T19:43:13.979881063Z" level=info msg="StopPodSandbox for \"db3c78417459407092f7d7162890f98e4c33c08a46986bc8710e3bf57515370d\""
Feb 13 19:43:13.980088 containerd[1467]: time="2025-02-13T19:43:13.980018999Z" level=info msg="TearDown network for sandbox \"db3c78417459407092f7d7162890f98e4c33c08a46986bc8710e3bf57515370d\" successfully"
Feb 13 19:43:13.980088 containerd[1467]: time="2025-02-13T19:43:13.980031543Z" level=info msg="StopPodSandbox for \"db3c78417459407092f7d7162890f98e4c33c08a46986bc8710e3bf57515370d\" returns successfully"
Feb 13 19:43:13.980331 containerd[1467]: time="2025-02-13T19:43:13.980306542Z" level=info msg="RemovePodSandbox for \"db3c78417459407092f7d7162890f98e4c33c08a46986bc8710e3bf57515370d\""
Feb 13 19:43:13.980331 containerd[1467]: time="2025-02-13T19:43:13.980326401Z" level=info msg="Forcibly stopping sandbox \"db3c78417459407092f7d7162890f98e4c33c08a46986bc8710e3bf57515370d\""
Feb 13 19:43:13.980418 containerd[1467]: time="2025-02-13T19:43:13.980389803Z" level=info msg="TearDown network for sandbox \"db3c78417459407092f7d7162890f98e4c33c08a46986bc8710e3bf57515370d\" successfully"
Feb 13 19:43:13.983676 containerd[1467]: time="2025-02-13T19:43:13.983652778Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"db3c78417459407092f7d7162890f98e4c33c08a46986bc8710e3bf57515370d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Feb 13 19:43:13.983733 containerd[1467]: time="2025-02-13T19:43:13.983695731Z" level=info msg="RemovePodSandbox \"db3c78417459407092f7d7162890f98e4c33c08a46986bc8710e3bf57515370d\" returns successfully"
Feb 13 19:43:13.984006 containerd[1467]: time="2025-02-13T19:43:13.983979076Z" level=info msg="StopPodSandbox for \"38047ce7044f40ec719558d90bc15eec9e71b1b0be98fa0bcbf26abe164fe308\""
Feb 13 19:43:13.984090 containerd[1467]: time="2025-02-13T19:43:13.984073208Z" level=info msg="TearDown network for sandbox \"38047ce7044f40ec719558d90bc15eec9e71b1b0be98fa0bcbf26abe164fe308\" successfully"
Feb 13 19:43:13.984090 containerd[1467]: time="2025-02-13T19:43:13.984087546Z" level=info msg="StopPodSandbox for \"38047ce7044f40ec719558d90bc15eec9e71b1b0be98fa0bcbf26abe164fe308\" returns successfully"
Feb 13 19:43:13.984393 containerd[1467]: time="2025-02-13T19:43:13.984347506Z" level=info msg="RemovePodSandbox for \"38047ce7044f40ec719558d90bc15eec9e71b1b0be98fa0bcbf26abe164fe308\""
Feb 13 19:43:13.984393 containerd[1467]: time="2025-02-13T19:43:13.984372234Z" level=info msg="Forcibly stopping sandbox \"38047ce7044f40ec719558d90bc15eec9e71b1b0be98fa0bcbf26abe164fe308\""
Feb 13 19:43:13.984571 containerd[1467]: time="2025-02-13T19:43:13.984437880Z" level=info msg="TearDown network for sandbox \"38047ce7044f40ec719558d90bc15eec9e71b1b0be98fa0bcbf26abe164fe308\" successfully"
Feb 13 19:43:13.987745 containerd[1467]: time="2025-02-13T19:43:13.987719741Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"38047ce7044f40ec719558d90bc15eec9e71b1b0be98fa0bcbf26abe164fe308\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Feb 13 19:43:13.987815 containerd[1467]: time="2025-02-13T19:43:13.987753576Z" level=info msg="RemovePodSandbox \"38047ce7044f40ec719558d90bc15eec9e71b1b0be98fa0bcbf26abe164fe308\" returns successfully"
Feb 13 19:43:13.988048 containerd[1467]: time="2025-02-13T19:43:13.988006454Z" level=info msg="StopPodSandbox for \"a1eb87b46d5d1d0d1ce2163262bbe5eede24f5069548e462f409c473fafe386d\""
Feb 13 19:43:13.988138 containerd[1467]: time="2025-02-13T19:43:13.988101888Z" level=info msg="TearDown network for sandbox \"a1eb87b46d5d1d0d1ce2163262bbe5eede24f5069548e462f409c473fafe386d\" successfully"
Feb 13 19:43:13.988138 containerd[1467]: time="2025-02-13T19:43:13.988119632Z" level=info msg="StopPodSandbox for \"a1eb87b46d5d1d0d1ce2163262bbe5eede24f5069548e462f409c473fafe386d\" returns successfully"
Feb 13 19:43:13.988382 containerd[1467]: time="2025-02-13T19:43:13.988346679Z" level=info msg="RemovePodSandbox for \"a1eb87b46d5d1d0d1ce2163262bbe5eede24f5069548e462f409c473fafe386d\""
Feb 13 19:43:13.988382 containerd[1467]: time="2025-02-13T19:43:13.988367709Z" level=info msg="Forcibly stopping sandbox \"a1eb87b46d5d1d0d1ce2163262bbe5eede24f5069548e462f409c473fafe386d\""
Feb 13 19:43:13.988467 containerd[1467]: time="2025-02-13T19:43:13.988432765Z" level=info msg="TearDown network for sandbox \"a1eb87b46d5d1d0d1ce2163262bbe5eede24f5069548e462f409c473fafe386d\" successfully"
Feb 13 19:43:13.991940 containerd[1467]: time="2025-02-13T19:43:13.991901145Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a1eb87b46d5d1d0d1ce2163262bbe5eede24f5069548e462f409c473fafe386d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Feb 13 19:43:13.992017 containerd[1467]: time="2025-02-13T19:43:13.991945691Z" level=info msg="RemovePodSandbox \"a1eb87b46d5d1d0d1ce2163262bbe5eede24f5069548e462f409c473fafe386d\" returns successfully"
Feb 13 19:43:14.559140 kubelet[2656]: I0213 19:43:14.559091 2656 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Feb 13 19:43:17.291877 systemd[1]: Started sshd@21-10.0.0.106:22-10.0.0.1:33400.service - OpenSSH per-connection server daemon (10.0.0.1:33400).
Feb 13 19:43:17.327646 sshd[6190]: Accepted publickey for core from 10.0.0.1 port 33400 ssh2: RSA SHA256:M1JiL2vDeW2xAdjMAPzmPoLJm0DJz/C0inZYUswlgoA
Feb 13 19:43:17.330063 sshd-session[6190]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:43:17.334557 systemd-logind[1449]: New session 22 of user core.
Feb 13 19:43:17.340625 systemd[1]: Started session-22.scope - Session 22 of User core.
Feb 13 19:43:17.449462 sshd[6192]: Connection closed by 10.0.0.1 port 33400
Feb 13 19:43:17.449868 sshd-session[6190]: pam_unix(sshd:session): session closed for user core
Feb 13 19:43:17.453800 systemd[1]: sshd@21-10.0.0.106:22-10.0.0.1:33400.service: Deactivated successfully.
Feb 13 19:43:17.456002 systemd[1]: session-22.scope: Deactivated successfully.
Feb 13 19:43:17.456730 systemd-logind[1449]: Session 22 logged out. Waiting for processes to exit.
Feb 13 19:43:17.457587 systemd-logind[1449]: Removed session 22.
Feb 13 19:43:20.134142 kubelet[2656]: E0213 19:43:20.134060 2656 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Feb 13 19:43:22.462191 systemd[1]: Started sshd@22-10.0.0.106:22-10.0.0.1:33414.service - OpenSSH per-connection server daemon (10.0.0.1:33414).
Feb 13 19:43:22.508432 sshd[6253]: Accepted publickey for core from 10.0.0.1 port 33414 ssh2: RSA SHA256:M1JiL2vDeW2xAdjMAPzmPoLJm0DJz/C0inZYUswlgoA
Feb 13 19:43:22.510136 sshd-session[6253]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:43:22.514082 systemd-logind[1449]: New session 23 of user core.
Feb 13 19:43:22.521643 systemd[1]: Started session-23.scope - Session 23 of User core.
Feb 13 19:43:22.629938 sshd[6255]: Connection closed by 10.0.0.1 port 33414
Feb 13 19:43:22.630284 sshd-session[6253]: pam_unix(sshd:session): session closed for user core
Feb 13 19:43:22.633665 systemd[1]: sshd@22-10.0.0.106:22-10.0.0.1:33414.service: Deactivated successfully.
Feb 13 19:43:22.635612 systemd[1]: session-23.scope: Deactivated successfully.
Feb 13 19:43:22.636263 systemd-logind[1449]: Session 23 logged out. Waiting for processes to exit.
Feb 13 19:43:22.637104 systemd-logind[1449]: Removed session 23.
Feb 13 19:43:27.646431 systemd[1]: Started sshd@23-10.0.0.106:22-10.0.0.1:54232.service - OpenSSH per-connection server daemon (10.0.0.1:54232).
Feb 13 19:43:27.688136 sshd[6287]: Accepted publickey for core from 10.0.0.1 port 54232 ssh2: RSA SHA256:M1JiL2vDeW2xAdjMAPzmPoLJm0DJz/C0inZYUswlgoA
Feb 13 19:43:27.689774 sshd-session[6287]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:43:27.693726 systemd-logind[1449]: New session 24 of user core.
Feb 13 19:43:27.703647 systemd[1]: Started session-24.scope - Session 24 of User core.
Feb 13 19:43:27.816914 sshd[6289]: Connection closed by 10.0.0.1 port 54232
Feb 13 19:43:27.817256 sshd-session[6287]: pam_unix(sshd:session): session closed for user core
Feb 13 19:43:27.820522 systemd[1]: sshd@23-10.0.0.106:22-10.0.0.1:54232.service: Deactivated successfully.
Feb 13 19:43:27.822727 systemd[1]: session-24.scope: Deactivated successfully.
Feb 13 19:43:27.824561 systemd-logind[1449]: Session 24 logged out. Waiting for processes to exit.
Feb 13 19:43:27.825401 systemd-logind[1449]: Removed session 24.
Feb 13 19:43:32.828541 systemd[1]: Started sshd@24-10.0.0.106:22-10.0.0.1:54238.service - OpenSSH per-connection server daemon (10.0.0.1:54238).
Feb 13 19:43:32.867608 sshd[6302]: Accepted publickey for core from 10.0.0.1 port 54238 ssh2: RSA SHA256:M1JiL2vDeW2xAdjMAPzmPoLJm0DJz/C0inZYUswlgoA
Feb 13 19:43:32.869321 sshd-session[6302]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:43:32.873252 systemd-logind[1449]: New session 25 of user core.
Feb 13 19:43:32.884643 systemd[1]: Started session-25.scope - Session 25 of User core.
Feb 13 19:43:33.027280 sshd[6304]: Connection closed by 10.0.0.1 port 54238
Feb 13 19:43:33.027698 sshd-session[6302]: pam_unix(sshd:session): session closed for user core
Feb 13 19:43:33.032083 systemd[1]: sshd@24-10.0.0.106:22-10.0.0.1:54238.service: Deactivated successfully.
Feb 13 19:43:33.034424 systemd[1]: session-25.scope: Deactivated successfully.
Feb 13 19:43:33.035139 systemd-logind[1449]: Session 25 logged out. Waiting for processes to exit.
Feb 13 19:43:33.036015 systemd-logind[1449]: Removed session 25.
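The repeated StopPodSandbox / "TearDown network" / RemovePodSandbox entries earlier in this log are the kubelet garbage-collecting exited pod sandboxes through containerd's CRI runtime service (runtime.v1). As a minimal sketch only, assuming the default containerd CRI socket at /run/containerd/containerd.sock and reusing one sandbox ID from the log as a placeholder (neither taken as authoritative from this system), the same pair of calls could be issued directly against the gRPC API:

```go
// Sketch: stop and then remove a pod sandbox via the CRI runtime.v1 API,
// mirroring the StopPodSandbox/RemovePodSandbox lines logged by containerd.
// The socket path and sandbox ID below are assumptions for illustration.
package main

import (
	"context"
	"log"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatalf("dial containerd: %v", err)
	}
	defer conn.Close()

	client := runtimeapi.NewRuntimeServiceClient(conn)
	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
	defer cancel()

	// Placeholder ID copied from the log above purely as an example value.
	sandboxID := "ddcbed9e32dc986efee624974339fd691561b1fe492259306265828c5b74843e"

	// StopPodSandbox tears down the sandbox's network namespace, which
	// containerd reports as "TearDown network for sandbox ... successfully".
	if _, err := client.StopPodSandbox(ctx,
		&runtimeapi.StopPodSandboxRequest{PodSandboxId: sandboxID}); err != nil {
		log.Fatalf("StopPodSandbox: %v", err)
	}

	// RemovePodSandbox deletes the remaining sandbox state; the log shows
	// containerd stopping the sandbox forcibly before removal.
	if _, err := client.RemovePodSandbox(ctx,
		&runtimeapi.RemovePodSandboxRequest{PodSandboxId: sandboxID}); err != nil {
		log.Fatalf("RemovePodSandbox: %v", err)
	}
}
```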